Microsoft Azure

Microsoft Azure is well suited to enterprise customers who need a robust platform with high availability. Some of the largest companies in the world use and trust it as their cloud infrastructure.

Resource Group

Once you create a new Azure account, the first thing you will want to do is create a Resource Group that will contain all of the deployments from this walkthrough. You can do this by clicking on the Resource Groups menu item on the left side of the screen, and then clicking the + Add button.

Now that you have a resource group, you can add all of your other resources within this group, starting with a Cosmos database.

The API Server requires a MongoDB compatible database to run its platform. While you could set up a standalone MongoDB deployment or even use MongoDB Atlas, on Azure it is also perfectly reasonable to use CosmosDB for the database. Follow these instructions to set up a new database.

  • In the left hand column, click on the menu item that says Azure Cosmos DB
  • Click on the + Add button
  • Create a new ID, select MongoDB under the API selection, ensure that you choose Existing Resource Group and select the resource group we just created, then click the Create button.

  • Wait a few minutes for your database to be created.
  • After the database is done creating, you will need to enable the MongoDB Aggregation Framework for this CosmosDB instance. Since this is a “Preview” feature, you can enable it by clicking on your created database, and then clicking Preview Features
  • Once you are there, click the Enable button next to the MongoDB Aggregation Framework

  • After this is complete, click on Connection String.
  • Make a copy of the connection string and place it in a text editor. The API Server requires a standard connection string, so you will need to make a small change to what Azure provides by adding your database name to the connection string. The database name is placed right after the trailing slash and before the text ?ssl=true. It should look like the following.


  • You will also need to add the parameter retrywrites=false to your connection string, as follows.


  • Make sure you save this modified connection string in a text editor for later usage.
  • Now that we have a database ready to go, we will create a new Redis deployment, which is needed for PDF Server deployments and analytics.
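
The two connection string edits above can be sketched as follows. The account name, key, and the formio database name here are all hypothetical placeholders; substitute the values Azure gives you.

```shell
# Hypothetical connection string as Azure provides it (account name and key are placeholders)
original='mongodb://myaccount:myAccountKey==@myaccount.documents.azure.com:10255/?ssl=true'

# Insert the database name ("formio" here is an assumed name) after the trailing slash,
# and append retrywrites=false as an extra query parameter
modified=$(echo "$original" | sed 's|/?ssl=true|/formio?ssl=true\&retrywrites=false|')

echo "$modified"
# mongodb://myaccount:myAccountKey==@myaccount.documents.azure.com:10255/formio?ssl=true&retrywrites=false
```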

Please Note: In Cosmos DB (using the MongoDB API) version 3.6, in order to sort on a property, an index must be set on that property. This is different from native MongoDB, which allows sorting on any property regardless of whether an index is set. To resolve this, a wildcard index can be added to the following collections:

db.submissions.createIndex( {"$**": 1 } );
db.forms.createIndex( {"$**": 1 } );
db.projects.createIndex( {"$**": 1 } );
db.actions.createIndex( {"$**": 1 } );
db.roles.createIndex( {"$**": 1 } );

Azure Blob

In order to get PDF file uploads working, you will need to create an Azure Blob storage account to store all of the PDF files that are uploaded.

To set this up, please perform the following.

  • Click on Storage Accounts in the left navigation bar, then click + Add
  • Give your storage a Name, and then select Blob Storage under Account Kind
  • Select your preferred replication method
  • Make sure Secure transfer required is set to Enabled
  • Select your existing Resource Group
  • Click Create
  • This will create a new Blob Storage account in your Azure subscription
  • Once it is created, click on the created instance, and then click on Access Keys
  • Make sure you save the first key in your text editor.

NOTE: You can also use this storage account to provide settings for the File Upload component. The documentation for this is provided in our Azure File Upload Documentation.

You are now ready to setup your hosted Virtual Machine.

Virtual Machine Setup

You will now set up the API Server on an Azure Virtual Machine running Docker.

  • An On-Premise Enterprise license is required to complete this installation
  • In the left menu, click on the Virtual Machines menu item, and then click the + Add button.
  • In the image search, type the word Docker, and then click on Docker on Ubuntu Server
  • You will then see an info page with a Create button at the bottom. Click on this button.
  • Next, you will provide the Host name and root user account credentials.
  • You will now need to select the instance size for your machine. We recommend the following.
    • Test Environments - A1 Standard
    • Production Environments - A6 Standard

  • Now press Create
  • Now wait for this machine to be created; once it is, we will need to open up HTTP port 80 on this machine.
  • To do this, click on the machine in the Virtual Machines section, then click on Endpoints, and then click + Add
  • We will now configure the following endpoint.
    • Name: HTTP
    • Protocol: TCP
    • Public Port: 80
    • Private Port: 80
    • Press OK
  • Now that we have an HTTP port open on this machine, we can SSH into it using your computer's terminal.
  • To login to this machine, you will first need to get the DNS Name by clicking on the Overview tab.
  • You will then see the DNS Name, which you will then need to copy.

  • Next, you will open up a Terminal on your local computer and type the following.

    ssh [user]@[dns name]

    Make sure you replace [user] with the username that you used to create the virtual machine, and [dns name] with the name you just copied.

  • You will then be prompted for your password; enter the same password you provided when you created the virtual machine.
  • You should then see the console of the Ubuntu virtual machine. You will now need to log in to your Docker Hub account by typing

    docker login
  • Once logged in, you will need to download the Docker images. While it is recommended to put the API Server and the PDF Server on separate virtual machines, for this example we will download both of them onto the same machine.

    docker pull formio/formio-enterprise
    docker pull formio/formio-files-core
  • You will now need to create an “internal” network that you will use to connect all the internal docker containers together.

    docker network create formio
  • Next you will create the Minio Server which will connect to your Azure Blob that we just created.

    docker run -itd \
     -e "MINIO_ACCESS_KEY=myblob" \
     -e "MINIO_SECRET_KEY=[AZURE BLOB SECRET KEY]" \
     --network formio \
     --name formio-minio \
     --restart unless-stopped \
     minio/minio gateway azure;

    Where you will replace "myblob" with your Azure storage account name, and [AZURE BLOB SECRET KEY] with the access key we saved earlier.

  • Once this is completed, we can ensure this is running by typing the following command.

    docker logs formio-minio

    You should see some status output from the running minio container.

  • Next we will spin up our API Server using the following command.

    docker run -itd \
      -e "FORMIO_FILES_SERVER=http://formio-files:4005" \
      -e "PORTAL_SECRET=[PORTAL SECRET]" \
      -e "JWT_SECRET=[JWT SECRET]" \
      -e "DB_SECRET=[DB SECRET]" \
      -e "MONGO=[MONGO CONNECTION STRING]" \
      -e "REDIS_ADDR=[REDIS ADDRESS]" \
      -e "REDIS_PASS=[REDIS PASSWORD]" \
      --restart unless-stopped \
      --network formio \
      --name formio-server \
      --link formio-files-core:formio-files \
      -p 3000:80 \
      formio/formio-enterprise;

    You will need to make sure that you change out the values for PORTAL_SECRET, JWT_SECRET, DB_SECRET, MONGO, REDIS_ADDR, and REDIS_PASS to the values that you saved in your editor during the setup process.

  • After this runs for a minute, you should then be able to check on the status by typing the following.

    docker logs formio-server

  • Next, we will deploy our PDF server to point to both the API server + Minio File Server.

    docker run -itd \
      -e "FORMIO_SERVER=http://formio" \
      -e "FORMIO_PROJECT=59b7b78367d7fa2312a57979" \
      -e "FORMIO_PROJECT_TOKEN=[PROJECT TOKEN]" \
      -e "FORMIO_PDF_PROJECT=http://formio/yourproject" \
      -e "FORMIO_PDF_APIKEY=is8w9ZRiW8I2TEioY39SJVWeIsO925" \
      -e "FORMIO_S3_SERVER=minio" \
      -e "FORMIO_S3_PORT=9000" \
      -e "FORMIO_S3_BUCKET=formio" \
      -e "FORMIO_S3_KEY=myblob" \
      -e "FORMIO_S3_SECRET=[AZURE BLOB SECRET KEY]" \
      --network formio \
      --link formio-server:formio \
      --link formio-minio:minio \
      --restart unless-stopped \
      --name formio-files-core \
      -p 4005:4005 \
      formio/formio-files-core;

    You will need to change the FORMIO_PROJECT, FORMIO_PROJECT_TOKEN, FORMIO_PDF_PROJECT, FORMIO_PDF_APIKEY, FORMIO_S3_KEY, and FORMIO_S3_SECRET values to the configurations provided previously, as well as settings that are provided to you within your Project Settings, under PDF Management.

    For the FORMIO_PDF_PROJECT, you can keep the “http://formio/” since this will connect to the locally running server, and then with the correct project name, it will find your project.

  • Once this is running, you should be able to type the following.

    docker logs formio-files-core

    and see status output from the running PDF server container.

  • You should now have two public ports accessible on this machine.
    • API Server: Port 3000
    • PDF Server: Port 4005
  • We can now make all of this fall under a single HTTP port by configuring an NGINX proxy on our server.


NGINX Proxy

In order to bring all of these locally hosted servers into a single HTTP interface, we can use NGINX to sit in front of them.

To set up this configuration, please go through the following steps.

  • Install NGINX using the following command.

    sudo apt-get update
    sudo apt-get install nginx
  • We can check to ensure that we have NGINX running with the following command.

    systemctl status nginx
  • We now need to edit the nginx.conf file to redirect HTTP traffic to the internal servers.

    sudo vi /etc/nginx/sites-available/formio
  • Put the following contents in that file.

    server {
      listen 80;
      server_name  ~^(www\.)?(.+)$;
      client_max_body_size 20M;
      ############# Use the following for SSL ################
      # listen               443 ssl;
      # ssl_certificate      /usr/local/etc/nginx/nginx.crt;
      # ssl_certificate_key  /usr/local/etc/nginx/nginx.key;
      location / {
        proxy_set_header    Host $host;
        proxy_set_header    X-Real-IP $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto $scheme;
        proxy_pass          http://localhost:3000;
        proxy_read_timeout  90;
        proxy_redirect      http://localhost:3000 https://$host;
      }
      location /files/ {
        rewrite ^/files/(.*)$ /$1 break;
        proxy_set_header    Host $host;
        proxy_set_header    X-Real-IP $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto $scheme;
        proxy_pass          http://localhost:4005;
        proxy_read_timeout  90;
        proxy_redirect      http://localhost:4005 https://$host;
      }
    }
    server {
      listen 80;
      server_name  ~^minio.(.+)$;
      client_max_body_size 20M;
      ############# Use the following for SSL ################
      # listen               443 ssl;
      # ssl_certificate      /usr/local/etc/nginx/nginx.crt;
      # ssl_certificate_key  /usr/local/etc/nginx/nginx.key;
      location / {
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:9000;
      }
    }

    Note: for this configuration to work with Minio, you will need to create a minio subdomain that points to this server. Minio does not support being hosted outside of the root domain.
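
The /files/ location in the configuration above strips the /files/ prefix before proxying, so the PDF server sees root-relative paths. A quick sketch of that mapping, using a made-up request path:

```shell
# A request to the proxy at /files/pdf/status ...
path='/files/pdf/status'

# ... is rewritten by "rewrite ^/files/(.*)$ /$1 break;" to the same path
# without the /files/ prefix before being passed to localhost:4005
rewritten=$(echo "$path" | sed 's|^/files/|/|')

echo "$rewritten"
# /pdf/status
```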

  • Now save that file, and then swap it in place of the default server configuration

    sudo rm /etc/nginx/sites-enabled/default
    sudo ln -s /etc/nginx/sites-available/formio /etc/nginx/sites-enabled/default
    sudo systemctl restart nginx
  • We can now test that we have both an API Server and a PDF Server running by going to the following URLs in our browser.

    • http://[dns name]/ (should show the API Server status)
    • http://[dns name]/files/ (should show the PDF Server status)
  • You can now follow the tutorials provided in Connecting to Portal to connect your projects to this running instance.