Installing and Using Terraform

This document describes how to install and use Terraform to configure and deploy Apcera Platform in production on supported platforms.

Terraform required

Apcera requires the use of Terraform to deploy an Apcera cluster. Terraform creates the infrastructure for the Apcera Platform, including the machine hosts for Apcera components as well as network connectivity between hosts.

Supported Terraform version

See Terraform support in the Supported Versions documentation.

Installing Terraform

See Terraform support for supported Terraform versions.

To install Terraform, unzip the contents of the Terraform ZIP file to a known directory and add that directory to your PATH. Refer to the Terraform installation instructions for details.

To verify installation:

  • Change to the directory (cd) where you unzipped the Terraform ZIP file contents.
  • Run terraform version and verify that you are running the supported version for your platform.
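For example, a quick check from the shell might look like the following (the unzip directory is assumed to already be on your PATH; the guard simply reports if the binary cannot be found):

```shell
# Check that the terraform binary is on the PATH and print its version.
if command -v terraform >/dev/null 2>&1; then
  terraform version
else
  echo "terraform not found on PATH; add the unzip directory to PATH first"
fi
```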

Configuring Terraform

Apcera provides Terraform modules for each supported platform, as well as example configuration files to get started. You customize the example configuration files for your environment to deploy the platform. In some cases you may customize the module files for your needs.

To obtain the Terraform module and example files for your platform, please contact Apcera Support.

Once you have downloaded the Terraform files, create a local working directory to store them.

To do this, copy the Terraform files that you got from Apcera Support to your local working directory. You should structure your working Terraform directory as follows, with the example files at the root and the module files in a modules subdirectory. Note that you cannot put the example files and module files in the same directory.

          |_ cluster.conf.erb
          |_ modules/
                |_ ...
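For example, the layout above could be created with commands like these (the directory name apcera-terraform and the source paths are illustrative; use the actual files you received from Apcera Support):

```shell
# Create a working directory with the module files in a modules/ subdirectory.
# "apcera-terraform" is an assumed name for illustration.
mkdir -p apcera-terraform/modules

# Copy the example files to the root and the module files into modules/, e.g.:
# cp /path/to/downloaded/cluster.conf.erb apcera-terraform/
# cp -r /path/to/downloaded/modules/. apcera-terraform/modules/

ls apcera-terraform
```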

Once you have downloaded and installed the module and configuration files, refer to the installation instructions for your platform for details on using Terraform to install the platform.

Securing Your Terraform Files

The installation configuration files cluster.conf.erb and *.tfvars, as well as the generated cluster.conf file, store cluster information in plain text, including required credentials and SSL certs. Apcera strongly recommends that you secure and version control your installation files before you deploy your cluster. To do this, we suggest you create a Git repository for your cluster installation files and use git-crypt to encrypt the sensitive files.
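For example, with git-crypt the files to encrypt are listed in a .gitattributes file at the repository root. The sketch below marks the files named above; adjust the patterns to your layout:

```
# .gitattributes -- encrypt the sensitive installation files with git-crypt
cluster.conf.erb filter=git-crypt diff=git-crypt
cluster.conf     filter=git-crypt diff=git-crypt
*.tfvars         filter=git-crypt diff=git-crypt
```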

Terraform commands

Although not an exhaustive list, you will likely use several of the following Terraform commands to deploy a cluster. Refer to the installation instructions for your platform for specific details.

Terraform command — Description

terraform get
    Downloads and caches the modules used by the configuration into the working directory. Run it again with --update to pull in changes to the upstream modules; without that flag it is safe to automate this step.

terraform plan --module-depth -1
    Shows what changes Terraform will attempt to make.

terraform graph --module-depth -1
    Shows what resources exist and their dependencies, in a format that you can visualize using GraphViz or similar tools.

terraform apply
    Runs the changes and creates the resources. This command may take some time to complete.

terraform refresh
    Reconfirms the remote state and verifies that all resources are in their expected states. Run this command if it has been some time since you ran the apply command.

terraform output [NAME]
    Outputs the named variable from the configuration; run by itself, the output command returns all variables from the configuration. You use this in cluster.conf.erb to fill in fields with data from Terraform.

terraform taint --module NAME RESOURCE
    Marks a resource to be deleted and recreated on the next terraform apply. For example: terraform taint --module apcera-aws aws_instance.tcp-router or terraform taint --module apcera-vsphere vsphere_instance.tcp-router.

terraform destroy
    Tears down the cluster you created using Terraform. You must confirm the decision by entering yes. The destroy command deletes resources created by Terraform, except the Orchestrator host and any resources it cannot delete. You will have to manually remove the Orchestrator host, and you may have to manually delete some resources, such as S3 buckets and volumes. After doing this, rerun the destroy command to ensure that everything is removed.

NOTE: If you receive an error when running the terraform apply command the first time, run it again. This error may be the result of timing issues with accessing resources immediately after their creation. Re-running the terraform apply command should attach the policy.

Best practices for using Terraform

Consider the following best practices when using Terraform.

Use the supported Terraform version

Refer to the README file included with your Apcera Platform bundle for your selected infrastructure provider.

Version control Terraform files

The Terraform files are critical to the operation of your cluster. As a best practice, you should version control these files using GitHub or another revision control system. In addition, when performing Terraform updates, make sure your local configuration files are current with those under version control.

Configure remote state storage for Terraform

Terraform stores the configuration state of your cluster in a file named terraform.tfstate. The state file is referenced when you update your cluster. Apcera recommends using remote state storage for production clusters so that there is a single Terraform state file for your cluster that can be used by multiple users to update your cluster.

For details on configuring remote storage of your cluster state, refer to the Terraform documentation for the version of Terraform that you are using. For production clusters, where more than one person can update the cluster, this state file must be available to all of those users via shared storage.
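As one illustration of what this looks like, newer Terraform releases (0.9 and later) configure remote state with a backend block in the configuration, while earlier releases use the terraform remote config command instead. The sketch below assumes an S3 backend; the bucket name, key, and region are hypothetical:

```
# Sketch: S3 remote-state backend (Terraform 0.9+ syntax).
terraform {
  backend "s3" {
    bucket = "example-apcera-tfstate"    # hypothetical bucket name
    key    = "production/terraform.tfstate"
    region = "us-west-2"                 # hypothetical region
  }
}
```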