Maintenance and Migration

Upgrading Deployments

Once you have deployed the platform into your environment, upgrading to the latest versions of the software will become a common practice. Since this does not happen automatically, you will need to become familiar with the process.

It should be noted that many cloud platforms, such as AWS and Azure, provide their own methods for upgrading container deployments; those methods should be used, and the documentation for those processes is provided by the cloud platforms themselves. Because of this, this section describes how to upgrade a local deployment of the platform.

Before upgrading, it should be noted that the platform is a "stateless" system: the containers do not maintain any files, sessions, or other data that would "go away" between upgrades. Because of this, you can consider each deployment an ephemeral system that can be removed from the load balancer without any effect on the stability of the overall system (provided more than one instance resides behind the load balancer). It is for this reason that we recommend that all non-development environments contain more than a single container instance behind a load balancer, so that updates can be performed in a "rolling" fashion where each container is removed from the load balancer, updated, and then added back once the update is complete.
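
The rolling pattern described above can be sketched as follows. This is a hedged outline rather than a definitive implementation: "lb-cli" stands in for whatever load-balancer tooling your environment provides, the instance names are illustrative, and [ORIGINAL_ENV_AND_FLAGS] represents the environment variables and flags used in the original deployment.

```
docker pull formio/formio-enterprise
for name in formio-server-1 formio-server-2; do
  lb-cli deregister "$name"               # drain traffic away from this instance
  docker stop "$name" && docker rm "$name"
  docker run -d --name "$name" [ORIGINAL_ENV_AND_FLAGS] formio/formio-enterprise
  lb-cli register "$name"                 # return the instance to the pool
done
```

Because the platform is stateless, each instance can be cycled this way without interrupting traffic served by the remaining instances.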

Major Version Upgrades

When performing an upgrade, you should be aware of when you are performing a "major" version upgrade. This is any time you are upgrading to a version number where the left-most number in the semantic version has been incremented. For example, if you are updating your 6.10.0 server to version 7.1.0, this is considered a "major" upgrade, whereas upgrading from 6.10.0 to 6.11.0 is considered a "minor" version upgrade. These semantic versions are positioned to indicate the "risk" associated with the upgrade and should be taken into consideration. Here are the numbers and what they mean.

  • 6.10.2 - The 2 here is considered the "patch" version. Increments of this version are considered "patch" releases and are the least risky versions. They only include minor bug fixes against the current minor release.

  • 6.10.2 - The 10 here is considered the "minor" version. Increments of this version are considered "minor" releases, which include new minor versions of the renderer which could introduce new features, etc.

  • 6.10.2 - The 6 here is considered the "major" version. Increments of this number are considered "major" releases. They occur rarely (usually once per year) and include major refactoring and improvements to the platform. There may be some reverse compatibility breaks in these versions, so migrations to new major versions must be done with great care.
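
As a quick illustration of these rules, the following shell sketch classifies the risk of an upgrade by comparing two semantic versions. The function name is ours for illustration, not part of the platform tooling.

```shell
# Classify an upgrade as major, minor, or patch by comparing
# the version you are on with the version you are moving to.
classify_upgrade() {
  local from_major=${1%%.*} to_major=${2%%.*}
  if [ "$to_major" -gt "$from_major" ]; then
    echo major            # left-most number incremented: highest risk
  elif [ "${1%.*}" != "${2%.*}" ]; then
    echo minor            # middle number changed: may introduce new features
  else
    echo patch            # right-most number only: bug fixes, least risky
  fi
}

classify_upgrade 6.10.0 7.1.0   # major
classify_upgrade 6.10.0 6.11.0  # minor
classify_upgrade 6.10.0 6.10.2  # patch
```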

When upgrading major versions where the major number differs by more than one (such as upgrading from 5.0.0 to 7.0.0), you must perform two separate migrations: first from 5.0.0 to the latest 6.x version (such as 6.11.1), and then from that 6.x version to the latest 7.x version (such as 7.1.0). Never migrate across two major versions in a single upgrade process.

Adherence to Reverse Compatibility

With every release we issue, it is our utmost priority to maintain reverse compatibility, even with major version releases. It is VERY rare that we will issue any release where reverse compatibility is broken, especially within the Form JSON schema that is used to render forms. Because of this, it is generally safe to upgrade your software without concern that your forms will "break". If any refactoring does occur, it will usually be paired with an update hook that automatically updates the database to the correct schema version so that the behavior of the server is maintained. Please see Resolving Update Failures for more explanation of the update schema.

Database Backup

Before you begin upgrading your server, it is important to create a Database backup of your deployment so that you can restore from backup if anything should go wrong. It is rare that this would occur, but this is a good measure to ensure that you have minimal downtime in the event that a problem should arise during the upgrade process.

The platform only depends on MongoDB for any "state" that is stored within the server, so as long as you have backed up the database, the upgrade can continue. There are two ways to perform a backup.

  • mongodump - This should be used if you are planning on keeping the same MongoDB database between upgrades. This maintains all indexes and internal metadata about your database.

  • mongoexport - This should be used if you plan on switching databases as part of the upgrade process. This is more of a JSON export that stores all of your records within JSON files that can then be re-imported into a new database, where new indexes will be created.

Please refer to the MongoDB documentation for both of these options. For most upgrade processes, we recommend that mongodump always be used; only if situations arise in which it cannot be used should mongoexport be used as a backup plan. In either case, it does not hurt to perform both operations before an upgrade.
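
For reference, a typical invocation of each tool might look like the following. This is a sketch that assumes the same connection string used elsewhere in this guide (mongodb://mongo:27017/formio) and illustrative output paths; adjust both to match your deployment.

```shell
# Binary dump of the whole database, preserving indexes and metadata.
mongodump --uri="mongodb://mongo:27017/formio" --out=./formio-backup

# JSON export of a single collection (run once per collection), which
# can later be re-imported into a new database with mongoimport.
mongoexport --uri="mongodb://mongo:27017/formio" \
  --collection=submissions --out=./submissions.json
```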

Upgrading Enterprise Server

Once your database has been backed up, you can now update your api server.

To perform an update with a Docker container system, you simply stop the currently running container, remove it, and then re-register the new version using all of the same environment variables that were used when it was originally deployed. The following commands illustrate how this can be done on a per-instance basis to perform a manual upgrade within a single environment.

If you do not know the values that were used when originally deploying the container, you can use the following command to determine the values of these environment variables.

docker inspect formio-server
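
Since docker inspect prints a large JSON document, it can help to filter the output down to just the environment variables with a Go template (this is standard Docker behavior, not platform-specific):

```shell
# Print one environment variable per line for the running container.
docker inspect -f '{{range .Config.Env}}{{println .}}{{end}}' formio-server
```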

Once you have the values of these environment variables, upgrading is easily achieved with the following command.

docker pull formio/formio-enterprise && \
(docker rm formio-server-old || true) && \
docker stop formio-server && \
docker rename formio-server formio-server-old && \
docker run -d \
-e "MONGO=mongodb://mongo:27017/formio" \
-e "PORTAL_ENABLED=true" \
-e "PDF_SERVER=http://pdf-server:4005" \
--restart unless-stopped \
--network formio \
--link pdf-server:pdf-server \
--name formio-server \
-p 3000:80 \
formio/formio-enterprise

After you run this command, you will then want to inspect the logs by typing the following.

docker logs formio-server
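
Because the previous container was renamed rather than removed, a failed upgrade can be rolled back. The following sketch assumes formio-server-old still exists from the rename step above:

```shell
# Roll back to the previous container version.
docker stop formio-server && docker rm formio-server
docker rename formio-server-old formio-server
docker start formio-server
```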

For updates that require a database update, the logs will show each schema update as it runs during the upgrade.

You will need to ensure that the updates run smoothly and all complete. If any update fails, follow the instructions in the Resolving Update Failures section below to ensure you are able to get through the updates.

Resolving Update Failures

If any of the updates fail to execute, there is a series of steps that can be taken to resolve the problem. To understand how to resolve update failures, it is important to understand how our DB schemas work and how to get the server back up and running.

When an update is started, the first thing the server does is compare the code schema version with the version that is present in the database. The code schema version can be found by looking at the package.json file within your deployment codebase and reading the "schema" property of that file. This indicates the "db schema" of the codebase. This value is compared with the value found within the database by running the following command.

db.schema.find({key:'formio'}).pretty();

Running this command within your database connection will provide a result like the following.

> db.schema.find({key:'formio'}).pretty();
{
    "_id" : ObjectId("55cd5c1f2c4aaf01001fe799"),
    "key" : "formio",
    "isLocked" : false,
    "version" : "3.3.9"
}

This tells us that our database is currently on schema version 3.3.9 and is not locked. If the "code" schema version is larger than this value, an update needs to be run. If an update fails, you will see that the "isLocked" property is set to a timestamp and that "version" is set to the last successful update.

In the event of a problem, the first thing to do is reset the "isLocked" flag to "null" so that the update can be retried. This can be done with the following command.

db.schema.updateOne({key: 'formio'}, {$set: {isLocked: null}});

After we have done this, we can retry the update by restarting the Docker container.

docker restart formio-server

Once it has restarted, inspect the logs to see if the server has moved past the "stuck" update. If it has not, you can skip the failing update by bumping the schema version in the database by one patch version and retrying. For example, here is what we would run if we were stuck on update 3.3.7.

db.schema.updateOne({key: 'formio'}, {$set: {version: '3.3.7'}});

You will then need to restart the server as follows.

docker restart formio-server

In most cases, skipping an update will not have any ill effects, but if you run into this scenario, please reach out to Support so that we can provide you with the "manual" update script you had to skip, ensuring all updates are applied cleanly to your deployment.

Upgrading PDF Server

Upgrading a PDF Server can be done by using the following command.

docker pull formio/pdf-server && \
(docker rm pdf-server-old || true) && \
docker stop pdf-server && \
docker rename pdf-server pdf-server-old && \
docker run -itd \
-e "MONGO=mongodb://mongo:27017/formio" \
-e "FORMIO_S3_SERVER=minio" \
-e "FORMIO_S3_PORT=9000" \
-e "FORMIO_S3_BUCKET=formio" \
--network formio \
--link formio-mongo:mongo \
--link formio-minio:minio \
--restart unless-stopped \
--name pdf-server \
-p 4005:4005 \
formio/pdf-server

The same process will need to be followed as described in the API Server upgrade to ensure that the update was processed cleanly. If not, then you can follow the same steps provided in the Resolving Update Failures section to ensure that your deployment is running smoothly.

Migrating Projects

The following documentation describes how to perform different kinds of migrations within the platform. Throughout this documentation, we will refer to both Source and Destination projects, where the Source is the project you would like to migrate FROM, while the Destination is the project you would like to migrate INTO.

Migrating from one project to another can easily be achieved with a combination of the Staging system as well as the CLI tool. The Staging system is used to migrate the Forms, Resources, and all Project level configurations. It is used to migrate everything EXCEPT Submissions and Settings.

Migrating Forms, Resources, Actions, and Roles

To start, we will first export a template of the Source Project using the staging interface. This can be found by clicking on the Source Project in the Developer portal and then clicking on Settings | Stage Versions.

Export Template interface

For a complete migration, we will export the whole template, so just click on the Export Template button. This will download a JSON file onto your local machine, which we will use to migrate to our new project.

It is also possible to use the Export Template system to migrate only individual Forms and Resources into a destination project by unchecking the Include All checkbox and then selecting only the Forms and Resources you wish to migrate.

Now that we have our export JSON file on our local machine, we can either create a new project with this JSON template or update an existing project with it.

To create a new Destination project, simply click on the Create Project button in the home page, and then under the Template Settings, we will select our template JSON file.

This is used to Create a new Project with a template export.

To update an existing Destination project, click on the Destination project, go to Settings | Stage Versions (as above), and then click on the Import Template section. From here, click Choose File and select the template exported from the Source project. Then click the Import Project Template button to complete the import.

This is used to import a template into an existing project

This will clone the Forms, Resources, Roles, and Actions into this Destination Project. Now we are ready to migrate the submissions using the CLI tool.

Migrating Submissions

To migrate submissions, we will use the CLI tool, which can be installed on your local machine using the following command.

npm install -g formio-cli

With the CLI tool installed, we need to make sure that an API key is configured for both the Source and Destination projects. For each project, navigate to the Settings | API Keys section and create an API key.

Create an API Key for both Source and Destination Projects

Once you have an API key for both the Source and Destination projects, you can use the following command from your local terminal.

formio migrate \
[SOURCE_PROJECT_URL] \
project \
[DESTINATION_PROJECT_URL] \
--src-key [SOURCE_API_KEY] \
--dst-key [DESTINATION_API_KEY]

This will now copy all submissions from the Source project into the Destination project.

Migrating Project Settings (optional)

If you wish to migrate the project settings, you will need to use the Project APIs. The APIs that will be used and the steps of the process are as follows.

  1. GET source project using Project GET API

  2. GET destination project using Project GET API

  3. Copy the Destination project API Keys and save for later

  4. Copy the Source project settings, and set them as the Destination settings.

  5. Copy the Destination project keys back into the Destination settings

  6. Perform a PUT Request to save the settings into the Destination settings.
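
The heart of the sequence is steps 3 through 5, the settings merge, which can be sketched in isolation with node. The sample objects here are illustrative stand-ins for real Project GET API responses.

```shell
node -e "
const src = { settings: { cors: '*', keys: [{ key: 'src-key' }] } };
const dst = { settings: { keys: [{ key: 'dest-key' }] } };
const keys = dst.settings.keys;   // step 3: save the destination API keys
dst.settings = src.settings;      // step 4: copy the source settings over
dst.settings.keys = keys;         // step 5: restore the destination keys
console.log(dst.settings.keys[0].key);
"
```

Preserving the destination keys in step 5 is what allows the same API keys to keep working for future migrations after the settings are overwritten.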

The following shell command performs all of the steps above, and can be used to migrate project settings from one project into another, while maintaining the same API Keys for future migrations.

SOURCE_URL='' && \
DEST_URL='' && \
SOURCE_APIKEY='' && \
DEST_APIKEY='' && \
SOURCE_PROJECT=$(curl --location \
--request GET $SOURCE_URL \
--header "x-token: $SOURCE_APIKEY" \
) && \
DEST_PROJECT=$(curl --location \
--request GET $DEST_URL \
--header "x-token: $DEST_APIKEY" \
) && \
UPDATE=$(node -e "\
const dst=$DEST_PROJECT; \
const src=$SOURCE_PROJECT; \
const keys=dst.settings.keys; \
dst.settings=src.settings; \
dst.settings.keys=keys; \
console.log(JSON.stringify(dst));" \
) && \
curl --location \
--request PUT $DEST_URL \
--header "x-token: $DEST_APIKEY" \
--header 'Content-Type: application/json' \
--data-raw "$UPDATE"

Enabling the Developer Portal on an Existing Environment

The platform allows you to use the Hosted portal to connect to your remote environments through the On-Premise Environments section within the project. This is useful when you want your remote environment to serve as an API-only interface for your applications while managing that deployment through a hosted portal interface. In some cases, though, you may wish to enable the developer portal within the remote environment itself by introducing the following environment variable.

PORTAL_ENABLED=true

Once the portal is enabled locally, it is no longer necessary to use the PORTAL_SECRET environment variable, so you may remove this variable since you will no longer be connecting to this environment with a remote portal.

Once this is done, you will need to restart the server so that the initialization process will install the Portal Base project as well as create the initial admin account for this project.

The Portal Base project is a special project that is used to control the portal application. Any users that can log into the portal are added to the User resource within this project, and anyone with the Authenticated role within this project will have the ability to log in and create new projects.

After the portal has been enabled, you can log in to the portal by navigating to the root URL of the deployed API. Once you log in, you will probably notice that you do not see any of your existing projects. Do not worry, they are still there, but the "owner" of these projects needs to be established so that they show up when you log in as the root user account. To do this, you will need to first connect to your MongoDB database and then run the following command.

var account = db.submissions.find({'data.email': '[email protected]'}).next();
db.projects.updateMany({}, {$set:{owner:account._id}});

This command will now set all of the projects to have the "owner" of the account that was created within the Portal Base project. Now, when you log into your server, you will see all of your existing projects within that environment show up so that they can be managed accordingly.

Migrating from Community Edition to Enterprise

If you wish to deploy all of your forms and resources from the Community Edition into the Hosted platform, you can do this by using the CLI tool.

npm install -g formio-cli

Once you have this tool installed, you will need to follow these steps.

  • Create a new project within the Hosted platform

  • Create an API Key within this project by going to the Project Settings | Stage Settings | API Keys

  • Next, you can execute the following command to deploy your local project into the Hosted platform

formio deploy http://localhost:3001 https://{PROJECTNAME} --dst-key={APIKEY}

You will need to make sure you replace {PROJECTNAME} and {APIKEY} with your new Hosted project name (found in the API URL) and the API key that was created in the second step above.

This will then ask you to log into the local server (credentials can be found within the Admin resource), and then, after it authenticates, it will export the project and deploy it to the hosted project.

Next, all submissions can be migrated by following the Migrating Submissions documentation.

Migrate from 5.x Server to 6.x Server

The migration process from the 5.x server to the 6.x server should be very seamless. You will simply need to follow the instructions provided in the Upgrading Deployments section. In some cases you may run into db update failures; for these cases, make sure you follow the instructions provided in the Resolving Update Failures section to mitigate these issues.

Migrate from 6.x Server to 7.x Server

Migrating your deployment from a 6.x version into a 7.x version is not complicated, but there are a few things that you should be aware of to ensure that your transition goes smoothly. The 7.x version of the Enterprise Server introduces a number of new features that you will certainly want to take advantage of, which are as follows.

7.0 Features



New Licensing System

This allows you to manage all of your licenses for the platform in a single location. It also streamlines how licenses are applied to both the Enterprise Server and the PDF Server. In addition, both the PDF and Enterprise Servers now manage and configure their licenses in the same way. For more information, see the License Management section of our help docs.

Group Permission Levels

With the 7.x Server release, you can now configure the Group permissions to be categorized into different levels. For example, you can now configure users to be Admins of a group, or members of a group and then assign permissions separately based on their role within the group. For more information, please see the Group Roles section in our user guide.

User Session Management

The 7.x Server release adds additional levels of security around user session management and ensures that any outstanding JWT token can no longer be utilized once a user has logged out of their current "session". This ensures that each JWT token can only be associated with a "current" session and JWT tokens associated with invalid sessions (through logout) can no longer be used. For more information, see our User Session Management user guide.

Audit Logging

The audit logging system allows you to add additional logging capabilities that log every action taken by all users within the system. See Audit Logging for more information.

Simplified PDF Setup

In addition to the 7.x Enterprise Server, this release also includes the 3.x Enterprise PDF server, which introduces a number of improvements to the management and configuration of how the PDF server is connected to the API Server.

Isomorphic Validation

The new Isomorphic validation utilizes the core renderer as the mechanism for validating submissions within the server logic. This ensures that any validation that occurs on the front-end form is the exact same validation that occurs within the server validation system.

Before you begin your migration, it is recommended that you first create complete backups of your existing environment. Once this is completed, you can spin up a replica environment and point it at the SAME database as your current 6.x environment. This should not cause any problems because the 7.x upgrade does not introduce any changes to the database schema and does not perform any update hooks (as of version 7.0.0). Before this environment is launched, however, you will need to ensure that you change the following environment variables for the cloned deployment.

Enterprise Server Environment Variable Changes

6.x variable

7.x variable




Change name. Same value.


Delete this environment variable


Set value as new license key



Change name. Same value.

PDF Server Environment Variable Changes

2.x variable

3.x variable



Delete this environment variable


Delete this environment variable


Delete this environment variable


Delete this environment variable


Add same value as provided to enterprise server.


Set value as the new license key. Same as API Server

For the PDF Server (3.x version), you will also need to ensure that this container has access to the same database as the API Server. In some cases, this may require you to add a --link to the mongo container. Please read the PDF Server Deployment User Guide for detailed instructions on how to accomplish this.

After your two environments are up and running, you can perform the necessary tests against your replica environment. Once it is determined that all features are working to your expectation, you can switch DNS over to the replica environment, making it the new production, while keeping the old 6.x environment running as a fail-safe (you can then switch DNS back to the old environment if anything should happen).

Fixing Deployed (remote) Projects

In some cases, you may have been using the Hosted Portal to connect to your remote environment running 6.x. Once you upgrade your deployment to version 7.0.0 or greater, you cannot use any portal version less than 7.1.0. The portal version can be found in the footer of the hosted portal. If the hosted portal is not yet on that version, one option is to enable the deployed portal within your environment. This can be accomplished by following the Enable the Developer Portal user guide.

One other thing that should be noted is that, in some cases, the license system will erroneously register your "stage" projects within your deployment as "licensed" projects. This occurs because the projects within your deployed environment do not contain the correct "project" property associating them with the correct licensed project. To fix this, you must do the following.

  • Determine the Project ID of the "main" project that is licensed. This can be found within the License Manager, or by clicking on the main project in your hosted portal and taking note of the Project ID within the URL of the browser.

  • Once you have the Project ID, you will need to connect your terminal to your MongoDB instance and perform the following command to set the correct "project" property values on your remote projects.

db.projects.updateMany({}, {$set:{project:ObjectId('LICENSED_PROJECT_ID')}});

Of course, you will replace the LICENSED_PROJECT_ID with the ID of your "main" project. Once you do this, any projects within your new remote environment will register as "stages" within the main project and will not erroneously count against your license.