Configuring Terraform for AWS

This document describes how to install and configure Terraform in preparation for installing the Apcera Platform on AWS.

Complete AWS installation prerequisites

Complete AWS prerequisites, if you have not already done so.

Set up Terraform for AWS

  1. Install the supported version of Terraform.

  2. Download the Apcera installation files for AWS.

  3. Set up your Terraform working directory.

    The file unzips to a directory named AWS.

  4. Copy this directory to a known location, such as $HOME/aws.
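The steps above can be sketched as follows. The archive name is hypothetical, so substitute the file you actually downloaded:

```shell
# Sketch of steps 3-4. The archive name "apcera-aws.zip" is hypothetical.
# unzip apcera-aws.zip              # the archive unpacks to a directory named AWS
mkdir -p AWS                        # stand-in for the unzipped directory, so the sketch is self-contained
cp -R AWS "$HOME/aws"               # copy the working directory to a known location
ls -d "$HOME/aws"
```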

Review Terraform files for AWS

You should have the following example configuration files and directories:

  • cluster.conf.erb
  • terraform.tfvars
  • terraform-module

The root files contain predefined configuration information for deploying the Apcera Platform to AWS. In the coming sections you will update and configure these files to deploy the platform.

The terraform-module subdirectory includes the apcera/aws Terraform module. These files define the infrastructure for AWS. You should not need to modify these files.

Configure Terraform for Apcera deployment

Complete the following sections to configure the Terraform files you have copied locally for your deployment of the Apcera Platform to AWS.

Verify the source paths to required Terraform modules

The file points to the local directory where you have copied the required Terraform modules for AWS.

module "apcera-aws" {
  source = "../../terraform-module/apcera/aws"
  # ...
}

module "ami-copy" {
  source = "../../terraform-module/apcera/aws/ami-copy"
  # ...
}

If necessary (not common), edit the source values to point to a different path where these modules are located.
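A quick way to confirm the paths resolve is to test them from your working directory. This sketch assumes the relative layout shown in the example module blocks:

```shell
# Check that the module path referenced by "source" exists relative to the
# working directory (the path below matches the example module blocks).
MODULE_DIR="../../terraform-module/apcera/aws"
if [ -d "$MODULE_DIR" ]; then
  echo "module path found"
else
  echo "module path missing: edit the source values to match your layout"
fi
```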

Edit terraform.tfvars file

Populate the following parameter values:

key_name = "key name, already uploaded to AWS"              // Enter the name of the [SSH key pair](/installation/aws/aws-prereqs/#create-and-download-ec2-key-pair) you created.
aws_region = "your-preferred-aws-region"                    // Enter the [AWS Region](/installation/aws/aws-prereqs/#select-aws-region), such as "us-west-2".
az_primary = "a"                                            // Primary subnet AZ, typically the default is ok.
az_secondary = "b"                                          // Secondary subnet AZ, typically the default is ok.
az_tertiary = "c"                                           // Tertiary subnet AZ, typically the default is ok.
access_key = "REDACTED"                                     // Enter your AWS IAM [access key](/installation/aws/aws-prereqs/#create-and-download-user-access-keys).
secret_key = "REDACTED"                                     // Enter your AWS IAM [secret key](/installation/aws/aws-prereqs/#create-and-download-user-access-keys).
cluster_name = "your-cluster-name"                          // Enter a unique cluster name.
monitoring_database_master_password = "EXAMPLE_PASSWORD"    // Enter a password for the monitoring DB.
rds_postgres_database_master_password = "EXAMPLE_PASSWORD"  // Enter a password for the component DB.
gluster_per_AZ = "0"                                        // Leave the default "0" unless you are using Gluster (not common for MPD).

NOTE: Each of the above passwords must be 8 characters or more and cannot contain the special characters "@", "/", or "".
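As a sanity check, the length rule and the disallowed characters can be tested in the shell before you paste a password into terraform.tfvars. This is a sketch that checks only the "@" and "/" characters:

```shell
# Sketch: check a candidate password against the stated rules
# (at least 8 characters; no "@" or "/" characters).
pw='EXAMPLE_PASSWORD'
if [ "${#pw}" -ge 8 ] && ! printf '%s' "$pw" | grep -q '[@/]'; then
  echo "password ok"
else
  echo "password invalid"
fi
```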

Edit cluster.conf.erb file

The cluster.conf.erb file is used to generate the cluster.conf file.

In general you can leave the provisioner, machines, and components sections set to the defaults. Refer to the Apcera configuration documentation if you want to change these values.

In the chef section of the cluster.conf.erb file, you must configure the cluster and base domain names, and you must specify the identity provider and admin users. For production clusters, you must also configure SSL certificates for HTTPS and set up monitoring.

Specify cluster name and base domain

In the chef section provide a unique cluster name and base domain for which you have set up a DNS record. In the following example, "example" is the cluster_name and "" is the base domain:

"cluster_name": "example",
"base_domain": "",
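For illustration, assuming a hypothetical base domain of example.com for which a DNS record exists, the two values would look like this:

```json
"cluster_name": "example",
"base_domain": "example.com",
```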

Specify Package Manager S3 endpoint

Modify the "endpoint" value for your s3_store based on your AWS region.

For example, if your AWS_DEFAULT_REGION is us-west-2, the endpoint is as follows:

"endpoint": "",

Refer to the Amazon Simple Storage Service (Amazon S3) documentation to find the S3 endpoint for your region.
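Regional S3 endpoints generally follow the pattern s3.<region>.amazonaws.com, and older AWS documentation also uses the s3-<region>.amazonaws.com form. Assuming the us-west-2 region, the value would look like this:

```json
"endpoint": "s3-us-west-2.amazonaws.com",
```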

Specify Google Auth settings

In the auth_server sub-section of the chef section of the cluster.conf.erb file, you specify the identity provider and enter the users and administrators. The following example uses Google auth and includes the client_id, client_secret, and web_client_id used to authenticate users, as well as the "users" and "admins" lists. See Configuring Identity Providers for details.

    "auth_server": {
      "identity": {
        "google": {
          "users": [ ... ],
          "client_id": "",
          "client_secret": "byS5RFQsKqXXXcsENhczoD",
          "web_client_id": ""
        }
      },
      "admins": [ ... ]
    }

Configure HTTPS

For production clusters, you will need to enable HTTPS by adding the SSL certificate key and chain to the cluster.conf.erb file. See Configuring Apcera for HTTPS for guidance.

By default the Terraform module assumes that you are using HTTPS. If you do not want to use HTTPS, after deployment update the ELB listener in the AWS console to use HTTP port 8080.


Note that the installation instructions explain how to do this, so you do not need to do anything now to disable HTTPS; just be aware that HTTPS is the default.

Configure Monitoring

Production clusters must be monitored. You configure monitoring in the chef.apzabbix section of the cluster.conf.erb file.

Specify the apzabbix.users guest and admin passwords.

Also specify the apzabbix.web_hostnames with the cluster name and domain.

See Monitoring Your Cluster for guidance on configuring this section.
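As a rough sketch only, the section has approximately the following shape. The exact key names and structure come from the Monitoring Your Cluster documentation, and the passwords and hostname below are hypothetical:

```json
"apzabbix": {
  "users": {
    "guest": { "password": "GUEST_PASSWORD" },
    "admin": { "password": "ADMIN_PASSWORD" }
  },
  "web_hostnames": [ "zabbix.example.example.com" ]
}
```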

Configure Terraform to store its state remotely

This step is recommended for production clusters. See configure remote state for Terraform.
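In Terraform 0.9 and later, remote state is configured with a backend block; earlier versions use the terraform remote config command instead. A minimal sketch using an S3 backend, with a hypothetical bucket name:

```hcl
terraform {
  backend "s3" {
    bucket = "my-apcera-tfstate"        # hypothetical bucket; create it beforehand
    key    = "apcera/terraform.tfstate" # object key for the state file
    region = "us-west-2"                # region where the bucket lives
  }
}
```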

Deploy Apcera Platform to AWS

Now that you have installed and configured Terraform, the next step is to create AWS resources and deploy the platform to AWS.
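The usual command sequence, run from the working directory, is sketched below. The loop simply prints each command so the sketch is self-contained; run the commands themselves once Terraform is installed and configured:

```shell
# Typical deploy workflow (sketch). "terraform get" fetches the referenced
# modules, "terraform plan" previews the AWS resources to be created, and
# "terraform apply" creates them and deploys the platform.
cd "$HOME/aws" 2>/dev/null || true
for step in get plan apply; do
  echo "terraform $step"
done
```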