Configuring Terraform for AWS

This document describes how to install and configure Terraform in preparation for installing the Apcera Platform on AWS.

Set up Terraform for AWS

  1. Complete AWS prerequisites, if you have not already done so.

  2. Install the supported version of Terraform.

  3. Download the Terraform files for AWS.

  4. Set up your Terraform working directory.

    The file unzips to a directory named AWS.

  5. Copy this directory to a known location, such as $HOME/aws.
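Steps 3 through 5 can be sketched as the following shell session. The archive name apcera-terraform-aws.zip is an assumption; substitute the file you actually downloaded:

```shell
# Unpack the downloaded archive; it unzips to a directory named AWS.
# The archive name below is an assumption -- use your actual file name.
# unzip apcera-terraform-aws.zip

# Copy the unpacked directory to a known location, such as $HOME/aws.
mkdir -p "$HOME/aws"
# cp -R AWS/. "$HOME/aws/"

# Confirm the working directory exists.
ls -d "$HOME/aws"
```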

Review Terraform files for AWS

You should have the following example configuration files and directories:

  • cluster.conf.erb
  • terraform.tfvars
  • terraform-module

The root files contain predefined configuration information for deploying the Apcera Platform to AWS. In the coming sections you will update and configure these files to deploy the platform.

The terraform-module subdirectory includes the apcera/aws Terraform module. These files define the infrastructure for AWS. You should not need to modify these files.
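For orientation, the copied directory might be laid out as follows. This is an illustrative sketch; only the files and directories named in this document are shown:

```
$HOME/aws/
├── cluster.conf.erb        # template used to generate cluster.conf
├── terraform.tfvars        # deployment variables you will edit
└── terraform-module/
    └── apcera/
        ├── aws/            # AWS infrastructure module (do not modify)
        └── gce/
            └── IM-only/    # GCE module referenced in the next section
```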

Configure Terraform for Apcera deployment

Complete the following sections to configure the Terraform files you have copied locally for your deployment of the Apcera Platform to AWS.

Edit to point to the local Terraform modules

First, update the file to point to the local directory where you copied the Terraform modules for both AWS and GCE. By default, this file points to the Apcera Terraform modules on GitHub. However, because these modules have been provided to you and copied locally, you need to update the file to point to the local modules.

Edit to point to the local /apcera/aws module:

module "apcera-aws" {
  # When testing new terraform module settings, use a source path pointing to your local repo
  # then run `rm -rf .terraform/modules` and `terraform get --update`
  source = "/Users/user/apcera-terraform/terraform-module/apcera/aws"

And, edit to point to the local /apcera/gce module:

module "apcera-gce-im-only" {
  source = "/Users/user/apcera-terraform/terraform-module/apcera/gce/IM-only"

NOTE: In both of the above examples, update the path to match the location where you copied the modules locally.

Edit the terraform.tfvars file

Configure the following parameters:

key_name = "apcera"                                         // SSH key already created, see prerequisites
aws_region = "us-west-2"                                    // Region (for AWS)
az_primary = "a"                                            // Primary subnet AZ
az_secondary = "b"                                          // Secondary subnet AZ
az_tertiary = "c"                                           // Tertiary subnet AZ
access_key = "<enter-access-key>"                           // AWS IAM access key
secret_key = "<enter-secret-key>"                           // AWS IAM secret key
cluster_name = "mycluster"                                  // enter-a-cluster-name
monitoring_database_master_password = "EXAMPLE_PASSWORD"    // enter-a-new-password
rds_postgres_database_master_password = "EXAMPLE_PASSWORD"  // enter-a-new-password

NOTE: Each password must be 8 characters or more and cannot contain the special characters "@", "/", or double quote (").

Edit the cluster.conf.erb file

The cluster.conf.erb file is used to generate the cluster.conf file.

In general you can leave the provisioner, machines, and components sections set to the defaults. Refer to the Apcera configuration documentation if you want to change these values.

In the chef section of the cluster.conf.erb file, you must configure the cluster and base domain names, and you must specify the identity provider and admin users. For production clusters, you must also configure SSL certificates for HTTPS and set up monitoring.

Specify the cluster name and base domain settings (lines 170 and 171)

In the chef section of the cluster.conf.erb file, you must provide a unique cluster name and base domain for which you have set up a DNS record. In the following example, "example" is the cluster_name and "" is the base domain:

"cluster_name": "example",
"base_domain": "",

Specify the Package Manager endpoint setting (line 225)

Modify the "endpoint" value for your s3_store based on your AWS region.

For example, if your AWS_DEFAULT_REGION is us-west-2, the endpoint is as follows:

"endpoint": "",

Refer to the Amazon Simple Storage Service (Amazon S3) endpoints documentation to find the endpoint for your region.
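As an illustration only: at the time of writing, S3 regional endpoints followed the pattern s3-<region>.amazonaws.com, so for us-west-2 the setting would look like the following. Verify the exact hostname in the AWS documentation, as endpoint naming schemes change:

```
"endpoint": "s3-us-west-2.amazonaws.com",
```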

Specify the Auth Server settings (lines 260 to 290)

In the auth_server sub-section of the chef section of the cluster.conf.erb file, you specify the identity provider and enter the users and administrators. The following example uses Google auth and includes the client_id, client_secret, and web_client_id used to authenticate users, as well as the "users" and "admins" lists. See Configuring Identity Providers for details.

    "auth_server": {
      "identity": {
        "google": {
          "users": [

          "client_id": ""         
          "client_secret": "byS5RFQsKqXXXcsENhczoD"
          "web_client_id": ""  

      "admins": [

Configure HTTPS

For production clusters, you must enable HTTPS and populate lines 232 to 258. See Configuring Apcera for HTTPS for guidance.

By default, the Terraform module assumes that you are using HTTPS. If you do not want to use HTTPS, update the ELB in the AWS console after deployment to use HTTP on port 8080.


The installation instructions explain how to do this, so you do not need to do anything now to disable HTTPS; just be aware that HTTPS is the default.

Configure Monitoring

Production clusters must be monitored. You configure monitoring starting at line 291 in the cluster.conf.erb file.

See Monitoring Your Cluster for guidance on configuring this section.

Configure Terraform to store its state remotely

This step is recommended for production clusters. See configure remote state for Terraform.
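For the Terraform versions contemporary with this guide, S3 remote state could be configured along the following lines. The bucket and key names here are placeholders, and the exact syntax depends on your Terraform version (newer versions use a backend configuration block instead):

```
terraform remote config \
  -backend=s3 \
  -backend-config="bucket=my-terraform-state" \
  -backend-config="key=mycluster/terraform.tfstate" \
  -backend-config="region=us-west-2"
```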

Deploy Apcera Platform to AWS

Now that you have installed and configured Terraform, the next step is to create AWS resources and deploy the platform to AWS.
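From the working directory, the usual Terraform workflow is sketched below; the deployment instructions cover the exact steps:

```
cd $HOME/aws
terraform get       # fetch the modules referenced in the configuration
terraform plan      # preview the AWS resources Terraform will create
terraform apply     # create the resources
```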