AWS Deployment

Amazon AWS is a strong fit for Enterprise customers who need a robust platform with high availability. It is used and trusted by some of the largest companies in the world as their cloud infrastructure. Below is a diagram showing the architecture of an AWS deployment.


This guide assumes that you already have an account with Amazon AWS.

First, download our multicontainer ZIP archive which includes everything you'll need to deploy to AWS:

What's in our file?

certs/rds-combined-ca-bundle.pem: A "certificate bundle" that provides the Certificate Authority keys so that your deployment can connect to an Amazon DocumentDB cluster using SSL/TLS. For a global bundle, see Certificate bundles for all AWS regions. For region-specific bundles, see Certificate bundles for specific AWS regions.

conf.d/default.conf: The nginx reverse proxy configuration file. By default, this configuration simply redirects traffic from the '/' endpoint to your Enterprise Server Container, and traffic from the '/pdf' endpoint to your PDF Server container.

docker-compose.yml: The Docker Compose file that defines the services (containers) that run in your multicontainer deployment and how they interrelate. For example, it defines the nginx proxy (and points that container to the path of the default.conf file above) and "links" it to the Enterprise Server and the PDF Server containers.
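As an illustration, a minimal default.conf along these lines would route the two endpoints. The service names and ports here are assumptions for the sketch, not the exact contents of the ZIP file:

```nginx
server {
    listen 80;

    # Route root traffic to the Enterprise Server container
    location / {
        proxy_pass http://formio-server:3000;
        proxy_set_header Host $host;
    }

    # Route /pdf traffic to the PDF Server container
    location /pdf {
        proxy_pass http://pdf-server:4005;
        proxy_set_header Host $host;
    }
}
```

The hostnames in the proxy_pass directives resolve because Docker Compose links the nginx container to the other two services by name.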


Before we start our deployment, we will first need to create a Database to store all of our Forms and Submissions within our deployment. DocumentDB is an AWS managed NoSQL database we'll use to store projects, forms, and submissions.

Within your AWS Console, type DocumentDB into the Search bar and click on the link that says Amazon DocumentDB.

  • On the next page, you will click on the button that says Launch Amazon DocumentDB.

  • In the next section, we will choose our Instance configuration. For this, you will provide the following.

    • Cluster Identifier: Keep default or type your own name.

    • Engine Version: 4.x or below.

    • Instance Class:

      • Production Environment: At least db.r5.large

      • Development Environment: At least db.t3.medium

    • Number of Instances:

      • Production Environment: At least 2

      • Development Environment: At least 1

  • The following diagram illustrates a typical configuration for our Development Environment.

  • In the next section, provide a Master username and a secure password

  • Finally, press the Create Cluster button.

  • Now that our DocumentDB cluster is created, click on the cluster link, and then copy the application connection string. It should look like the following.

  • Next, we will want to change this connection string to a standard connection string. We will do this by first removing everything after the “:27017/”, and then adding our database name to the end of :27017/. We can pick any name here, but for this example, let's use formio.

  • Now, replace <insertYourPassword> with the password you chose above when you created the cluster.

  • Add ?ssl=true&readPreference=secondaryPreferred&retryWrites=false to the end of the url to indicate that we have a database connection over SSL and to disable retryWrites since DocumentDB does not support it.

  • Make sure to copy this connection string for use later.

  • Lastly, navigate to your newly created cluster's dashboard. At the bottom of the page, take note of the Security Group Name that the cluster was added to. We'll need this to ensure that the Enterprise Server can communicate with your database.
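To make the connection string edits above concrete, here is a sketch of the final string, assembled in shell. The username, password, and cluster endpoint are hypothetical placeholders; substitute your own values:

```shell
# Hypothetical credentials and cluster endpoint -- substitute your own values
DB_USER="masteruser"
DB_PASS="MySecurePass"
DB_HOST="docdb-cluster.cluster-xxxx.us-west-2.docdb.amazonaws.com"
DB_NAME="formio"

# Standard connection string: database name after :27017/, SSL enabled,
# retryWrites disabled because DocumentDB does not support it
MONGO="mongodb://${DB_USER}:${DB_PASS}@${DB_HOST}:27017/${DB_NAME}?ssl=true&readPreference=secondaryPreferred&retryWrites=false"
echo "$MONGO"
```

This is the value you will later provide as the database connection environment variable.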

Cloud File Storage Using S3

Before we set up our API and PDF servers, we first need to set up an S3 bucket to contain the uploaded PDFs used by the PDF server.

  • To set up an S3 bucket, first navigate to the S3 section of Amazon AWS by going to the home page and typing S3 in the Search bar. Click on S3 in the Search results.

  • On the next page, click on the Create Bucket button.

  • On the next page, under General Settings, give it a Bucket Name of your choice.

  • Now skip over all the other sections (leaving the default configurations), and then at the bottom of the page, press Create Bucket to create your new S3 bucket.

  • Now that we have an S3 bucket created, we need to create an IAM user with full access rights to this S3 bucket. We will do this by navigating back to the AWS homepage and then typing IAM in the Search bar.

  • Within the IAM page, we will create a new user by clicking on Users and then clicking on Create User.

  • Specify the user's details. Since this user will interact with the PDF server, we could call it pdf-server. Click Next.

  • In the Set Permissions page, select Attach existing policies directly, and then select Create Policy.

  • On the "Specify Permissions" page, under Select a Service, select S3.

  • In the Actions section, select All S3 actions to give this IAM role access to all of the read, write, and update actions available via the S3 service.

  • In the Resources dropdown, click the Add Arn link next to the bucket section. Here we will provide the bucket name we specified in the previous section. Click Add ARNs.

  • In the object resource section, click Add Arn and then provide our bucket name, and then click on the Any checkbox for the object section. Then click the Add button.

  • For the other Resources, click on them and then click on the Any settings for all of them. When you are done, it should look like the following.

  • Click the Next button to view the Review and Create page.

  • We'll name our policy pdf-server-s3 and click on the Create policy button to create the policy.

  • Now that the policy has been created, we can attach it to the IAM user by going back to the Set Permissions page, clicking the Refresh icon, and then searching for pdf-server-s3 in the search bar.

  • Click on the Next button in the IAM wizard to view the Review and Create page, and then click Create User to return to your list of Users.

  • Select your newly created user and navigate to Security Credentials.

  • Create an access key for this user by clicking on Create Access Key, selecting Third-party Service, and providing a description for its use.

  • Write down your access key ID and your secret access key in a secure location; we'll need them both later.

  • We are now ready to move onto setting up the Elastic Beanstalk deployment!
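For reference, the policy built in the wizard above should be roughly equivalent to the following JSON document. The bucket name is a hypothetical placeholder; replace it with the bucket you created:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-pdf-bucket",
        "arn:aws:s3:::my-pdf-bucket/*"
      ]
    }
  ]
}
```

The first ARN covers bucket-level actions (such as listing objects) and the second covers object-level actions (such as reading and writing PDFs).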

Elastic Beanstalk

Now that we have our database and S3 configured, we will be using Elastic Beanstalk to manage our docker deployments. First, though, we'll need to configure an EC2 instance profile so our underlying instances can interact with Elastic Beanstalk.

  • Navigate to the IAM Dashboard.

  • On the left-hand navigation menu, select Roles.

  • In the upper right, click Create Role.

  • Select "AWS Service" for your Trusted Entity Type and "EC2" for your Use Case and click Next.

  • Filter the Permissions Policies by the string "ElasticBeanstalk" and select AWSElasticBeanstalkWebTier, AWSElasticBeanstalkWorkerTier, and AWSElasticBeanstalkMulticontainerDocker.

  • After clicking Next, provide this role with a meaningful name, something like aws-elasticbeanstalk-ec2-role. We'll be using this role as our EC2 Instance Profile role when we create our Elastic Beanstalk deployment.

  • Finish by clicking Create Role.

Next we'll begin setting up our Elastic Beanstalk Deployment.

  • Navigate to the Elastic Beanstalk Dashboard.

  • Click on the link that says Create Application. You may see a different page that has a button that reads Create New Environment.

  • Select Web Server Environment and provide a meaningful Application Name.

  • Select Docker as the managed platform.

  • Select Upload your code and upload the file linked above. Give the version a meaningful label.

IMPORTANT NOTE: This file assumes you are using DocumentDB. If you are using any other external database provider, or an internal Community Edition database, then you will need to extract this ZIP file, open up the docker-compose.yml file and then remove the lines that reference the MONGO_CA environment variable. Otherwise the database connection will try to use the AWS certificate to your external database provider and the connection will fail.
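For example, the lines to remove would look something like the following excerpt. The service name and certificate path here are illustrative, not necessarily the exact contents of the file:

```yaml
services:
  formio-server:
    environment:
      # Remove this variable if you are not connecting to DocumentDB over SSL
      - MONGO_CA=/src/certs/rds-combined-ca-bundle.pem
```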

  • Under Presets, select High Availability.

  • Click Next to proceed to service access configuration.

  • Select Use an existing service role and select aws-elasticbeanstalk-service-role under Existing service roles.

  • You can optionally associate a key pair with the EC2 instances in your environment.

  • Select the role we created in your IAM dashboard above under EC2 instance profile.

  • Click Next to proceed to networking, database, and tag configuration.

  • Select the same VPC and Instance Subnets that you provided to your DocumentDB cluster above. A typical deployment will run the instances in private subnets and the load balancer in public subnets.

  • Click Next to proceed to instance traffic and scaling.

  • Select General Purpose (SSD) under Root Volume Type.

  • Because the PDF server container stores a large number of fonts, we'll need to increase each instance's storage from the default of 8GB to 12GB. These settings can be fine-tuned later.

  • Under EC2 Security Groups, select the security group you created in your DocumentDB cluster above.

  • Select your desired capacity settings. We recommend the following instance types:

    • Development Environments: at least t3.medium

    • Production Environments: at least t3.large

  • Use the default Load Balancer, Listeners, Processes, and Rules settings. We'll look at configuring TLS listeners later in this documentation.

  • Click Next to navigate to updates, monitoring, and logging configuration.

  • Configure your desired Health Reporting metrics, Platform Updates settings, and Updates and Deployments settings. Generally, you'll want a Rolling deployment policy to make sure your deployment doesn't experience downtime when you're updating the platform.

  • Provide the following environment variables within the Environment Properties section. These are the environment variables which govern the platform in important ways.

    • MONGO: The MongoDB connection string used to connect to your remote database. This is the value we copied earlier.

    • LICENSE_KEY: The license key for your deployment. You will receive this when you upgrade a project to Enterprise.

    • PORTAL_ENABLED: Used to enable the Self-Hosted Portal.

    • ADMIN_EMAIL: An admin account you would like to use as the first Admin user.

    • ADMIN_PASS: A password for the first Admin user.

    • DB_SECRET: A secure secret that you pick, used to encrypt the project settings.

    • PORTAL_SECRET: If PORTAL_ENABLED is not set (as in an API environment), this secret is used to connect another portal to this environment.

    • JWT_SECRET: A secure secret that you pick, used to establish secure JWT tokens.

    • FORMIO_S3_BUCKET: The name of the bucket we created in the previous section.

    • FORMIO_S3_REGION: The region in which the S3 bucket was created.

    • FORMIO_S3_KEY: The access key ID we saved in the previous step.

    • FORMIO_S3_SECRET: The secret access key we saved in the previous step.
  • NOTE: If you wish to protect your Environment Variables from visibility, we recommend looking into the AWS Key Management Service (KMS).

  • Click Next to review your choices and press Submit to begin the environment creation process.

After a few minutes, our environment will be available and we'll be ready to create a new Project!

Project

  • Now that we have our deployment up and running, the first step is to log in to our new deployment. On the first page, use the ADMIN_EMAIL and ADMIN_PASS values (which we added to the Environment Variables in a previous step) to authenticate into the Developer Portal.

  • Once you are logged into the Developer Portal, we will now create a new Project.

  • In the popup modal, give your project a title and then click Create Project.

Domain Routing (Route 53)

Now that your Environment is up and running, the next task is to attach a Domain to the Elastic Beanstalk deployment. If you configured the Elastic Beanstalk deployment to use High Availability, then it will have created some Elastic Load Balancers in front of the deployment which you can link the DNS records against.

  • To get started, navigate to the homepage and then search for Route 53

  • Next, you will need to create a Hosted Zone.

  • You will now provide your domain name and then press Create hosted zone

  • Next, you will create a new Record Set and then provide the following record.

    • Name - *

    • Type - A Record

    • Alias - Yes

    • Then select the Elastic Load Balancer as the target.

    • Now press Create

  • When you are done, your routes should look something like the following.

  • Now, point your domain's nameservers to the ones provided by Route 53. Once the DNS changes propagate, you should be able to see the deployed API within that domain.

  • Next, go back into your Portal, and then update the PDF Server URL that we configured in a previous step with the new DNS name. Per the nginx configuration, the PDF server is served from the /pdf path of your domain.
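The wildcard alias record above can equivalently be expressed as a Route 53 change batch. The domain, load balancer DNS name, and hosted zone ID below are hypothetical placeholders:

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "*.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z1H1FL5HABSF5",
          "DNSName": "awseb-my-env-123456.us-west-2.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Note that for an alias record, the HostedZoneId refers to the load balancer's hosted zone, not your own domain's hosted zone.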

Configure SSL certificate for Application Load Balancer

Create basic records in Route 53. The following example shows a created record for the root path and for the "www" path.

Here's the route record after the update.

Next, create an SSL certificate by clicking the "Request" button in AWS Certificate Manager.

Add the SSL certificate to the Route 53 domain and validate the cert. Choose the recommended option for validating the certificate.

Once the "CNAME" record has been created in the Route 53 domain, the certificate requested in the previous step should be issued.

If you refresh your certificate screen it should look something like this when the certificate has been issued.

If the certificate is not issued, delete and reissue the certificate.

Next, add a listener to the load balancer for port 443 (HTTPS). Navigate to the EC2 Dashboard, open the Load Balancing tab, click the Listeners tab in the main screen, and then click the "Add Listener" button.

Set the listener to "HTTPS" and port to 443, then choose a Security Policy. Finally, click the "Select a Certificate" button.

Navigate back to the Load Balancer Listeners tab. If you see a small orange icon next to the Port 443 it means you need to add an Inbound rule for port 443 to the security group. Hover over the icon for details on what security group needs to be updated.

Navigate to the Security Groups by going to EC2 Dashboard > Security Groups. Expand the column to figure out which security group it is then click "Edit Inbound Rules".

Select HTTPS and choose a source then click "Save Rules". This should remove the orange caution mark on the listener's tab shown in the previous steps.

Navigate to your domain to see if the SSL Certificate has been configured.


Troubleshooting

There are many reasons why your Docker containers may fail to start. When this happens, you will need to troubleshoot by examining the logs from those containers. You can download the logs from Elastic Beanstalk, but on some rare occasions, such as when a License has been disabled, the containers will not create logs. When this occurs, the best course of action is to SSH into the EC2 instances to diagnose the problem.

First, you will need to create a key pair so that you can SSH in. You can do this by navigating to the EC2 section of AWS, and then clicking on Key Pairs. Then click Create key pair.

Next you will follow the instructions to create a new key pair. When you are done, you will download the private key onto your local machine. You will need to ensure that this downloaded key has the correct permissions by doing the following.

chmod 0400 my-key.pem

Next, you will navigate to the Elastic Beanstalk deployment, and then edit the Configurations for your deployment. Click Edit on the Security Section.

Next, you will select your new key pair and then click Save.

Once your deployment is finished making these updates, you will now go to the EC2 section of AWS, click on Instances, and find the instance that is associated with your deployment. You will then copy the Public DNS Name of that instance.

You can now ssh into your instance by performing the following command.

ssh -i ./my-key.pem ec2-user@[PUBLIC_DNS_NAME]

Once you have SSHed into your instance, you can run the following to see the Docker containers.

sudo su
docker ps -a

This should show you the failed container as you see here.

Once you see these containers, copy the Container ID of one of the failed containers. You can then get a shell inside that container's image by running the following commands.

docker commit [CONTAINER_ID] formio-debug
docker run -it \
    -e "LICENSE_KEY=---LICENSE KEY---" \
    -e "PORTAL_ENABLED=true" \
    -e "DEBUG=true" \
    -e "MONGO=--- MONGO CONNECTION STRING ---" \
    -e "FORMIO_S3_BUCKET=--- PDF BUCKET ---" \
    -e "FORMIO_S3_KEY=--- S3 KEY ---" \
    -e "FORMIO_S3_REGION=us-west-2" \
    -e "FORMIO_S3_SECRET=---- S3 SECRET ---" \
    --rm --entrypoint sh formio-debug

Of course, replace these values with the ones you used to deploy to Elastic Beanstalk. Once inside the container, you can then try to run the software manually by doing the following.

For pdf-server:

node pdf.js

For formio-enterprise:

node formio.js

This will then output the reason why it cannot start. For example, here is the output from a PDF server that could not start.

This tells us that the License for this server has been disabled.
