Installing Remote IMs for Hybrid Deployments

This document describes how to install a pool of remote Instance Managers (IMs) for hybrid Apcera deployments.

Prerequisites

Full cluster deployment

You must have already created and deployed a full Apcera cluster on a supported provider using Terraform.

The reference hybrid deployment described here shows how to deploy remote IMs on GCE that connect to a full cluster on AWS. If you are using different providers, you will need to extrapolate these instructions to your chosen environments.

Terraform

You must use Terraform to deploy a hybrid cluster.

  1. Install Terraform.
  2. Download the Terraform files for remote IMs.
  3. Set up your Terraform working directory.

    When you unzip the Terraform files for adding a remote IM pool, you will have a directory named "Remote-IMs". Copy this directory to a known location and use it as your working directory.
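
    For example, assuming the downloaded archive is named apcera-remote-ims.zip (your filename may differ):

    unzip apcera-remote-ims.zip
    cp -r Remote-IMs ~/remote-ims-working
    cd ~/remote-ims-working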

Hybrid deployment workflow

The workflow for deploying remote IMs to GCE is as follows:

1) Reserve a static IP on the GCE side.
2) Upload the GCE base image.
3) Configure Terraform for GCE.
4) Configure VPN on GCE.
5) Configure GCE firewall rules.
6) Deploy GCE IMs.

Reserve a static IP on GCE

You need a static IP on the GCE side, which AWS uses to connect to the IMs on GCE. You will add this static IP to the main.tf file.

If you have not yet created a GCE project and service account, the following sections take you through creating a GCE project, a service account, and then the static IP.

Create GCE project

1) Log into the Google Cloud Platform Dashboard at https://console.cloud.google.com.

2) Click Create a project… to create a GCE project.

screenshot

Create GCE service account

3) From the left pane menu, select API Manager.

screenshot

4) Select Credentials.

5) Click Add credentials and then select Service account key.

screenshot

6) In Service account, select New service account.

7) Enter the Name for the service account.

8) Click JSON.

9) Click Create.

You will be prompted to store the JSON formatted key file.
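
If you prefer the command line, newer versions of the gcloud CLI can create the service account and key as well; the account name here is only an example:

gcloud iam service-accounts create apcera-remote-ims --display-name="Apcera remote IMs"
gcloud iam service-accounts keys create account.json --iam-account=apcera-remote-ims@<gce_project>.iam.gserviceaccount.com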

Create Static IP on GCE

10) From the menu, select Networking.

11) Select External IP addresses.

screenshot

12) Enter the following values:

  • Name: <cluster_name>-vpn-addr
  • Region: <gce_region>

13) Leave Attached set to None and then click Reserve.
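
Alternatively, you can reserve the address with the gcloud CLI; the region shown is an example and must match the region where you deploy the IMs:

gcloud compute addresses create <cluster_name>-vpn-addr --region us-central1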

Upload the GCE base image

Provide Apcera support with the following information:

  • GCE project name, for example: apcera-platform-hybrid-aws-gce
  • GCE credentials JSON file, for example: apcera-platform-hybrid-aws-gce-xxxxXXXXxxxx.json

Once you provide this information, Apcera will upload to your GCE project a base Apcera image used to create the IM hosts.

Once the image is uploaded, go to Compute Engine > Images and select the image. Note that the image name starts with continuum-base-; you will enter this name in the terraform.tfvars file later in the GCE configuration process.

screenshot
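
You can also confirm the image name from the command line, assuming a gcloud release that supports the --filter flag:

gcloud compute images list --project <gce_project> --filter="name:continuum-base"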

Configure Terraform files for GCE deployment

You are now ready to generate the IMs on Google Cloud Platform using Terraform.

1) Edit the variables.tf file.

Uncomment the commented-out GCE variables section. The file should look as follows:

variable "key_name" {
    description = "Name of the SSH keypair to use in AWS."
}

#variable "key_path" {
#    description = "Path to the private portion of the SSH key specifieÂd."
#}

variable "aws_region" {
    description = "AWS region to launch servers."
}

variable "access_key" {}
variable "secret_key" {}
variable "cluster_name" {}
variable "monitoring_database_master_password" {}
variable "rds_postgres_database_master_password" {}

variable "gce_project" {}
variable "gce_region" {}
variable "gce_credentials" {}
variable "gce_machine_type" {}
variable "gce_image" {}
variable "gce_primary_subnet" {}
variable "gce_instance_manager_count" {}

2) Open the outputs.tf file, and:

a) Uncomment all commented-out GCE sections.

b) Add comments around the original output "instance-manager-addresses" block.

The file should look like the following:

/*
output "instance-manager-addresses" {
  value = "${module.apcera-aws.instance-manager-addresses}"
}
*/

output "instance-manager-addresses" {
  value = "${module.apcera-aws.instance-manager-addresses}, ${module.apcera-gce-im-only.instance-manager-addresses}"
}
output "gce-vpn-configuration" {
  value = "${aws_vpn_connection.gce.customer_gateway_configuration}"
}
output "gce-instance-manager-device" {
  value = "${module.apcera-gce-im-only.instance-manager-device}"
}
output "gce-instance-manager-addresses" {
  value = "${module.apcera-gce-im-only.instance-manager-addresses}"
}

3) Open the main.tf file, and:

a) Uncomment the commented-out section by deleting the /* at line 24 and the */ at line 68.

b) Replace the dummy IP address at line 33 with the GCE static IP address you reserved earlier.

# VPN Connection to GCE environment
resource "aws_customer_gateway" "gce_gateway" {
  tags = {
    Name = "${var.cluster_name}-gce-gw"
  }
  # BGP ASN unused, but required by API
  bgp_asn = 60001
  # IP address of remote VPN peer, must be provisioned manually in GCE console
  ip_address = "104.xxx.62.x"
  type = "ipsec.1"
}

4) Edit the cluster.conf.erb file.

a) Update the Instance Manager (IM) count.

You need to increase the IM count to include the IMs you are deploying to GCE.

For example, if you started with 3 IMs on AWS and you are adding 2 IMs on GCE, then you would increase the IM count to 5:

# Instance Managers will likely scale the most.
    instance-manager: 5

b) Comment out lines 225 through 228.

c) Uncomment lines 230 through 234.

The file should look like:

# "instance-manager": {
#  # TERRAFORM OUTPUT: instance-manager-device
#  "device": "<%= `terraform output instance-manager-device`.chomp %>"
# }

"instance-manager": {
# TERRAFORM OUTPUT: instance-manager-device & gce-instance-manager-device
"device": ["<%= `terraform output instance-manager-device`.chomp %>",
           "<%= `terraform output gce-instance-manager-device` %>"]
}
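
When the template is later rendered with erb (see the command summary at the end of this document), the block should expand to something like the following; the device paths here are illustrative and will come from your actual Terraform outputs:

"instance-manager": {
  "device": ["/dev/xvdh", "/dev/sdb"]
}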

5) Open the terraform.tfvars file and:

a) Uncomment the commented-out GCE variable block by deleting the /* at line 10 and the */ at line 18.

b) Set each GCE variable with correct information.

Example:

key_name = "aws-apcera"
aws_region = "us-west-2"
access_key = "YOURAWSACCESSKEYID"
secret_key = "YOURAWSSECRETKEYabcdefg"
cluster_name = "mycluster"
monitoring_database_master_password = "mymonitoringpw"
rds_postgres_database_master_password = "mydbpassword"

gce_project = "apcera-hybrid"
gce_region = "us-central1-a"
gce_credentials = "${file("/User/tom/account.json")}"
gce_machine_type = "n1-standard-2"
gce_image = "continuum-base-1449795462"  # Verify the image ID in your GCE project
gce_primary_subnet = "10.1.50.0/24"
gce_instance_manager_count = "2"

6) Update the credentials value in the instance manager module.

Open the file /terraform-module/apcera/apcera-terraform/terraform-module/apcera/gce/instance-manager/apcera-gce-instance-manager.tf and update the credentials value.

The file should look similar to:

provider "google" {
  account_file = "" # needed as of Terraform v0.6.8 to avoid prompt
  credentials  = "${file("/User/tom/account.json")}"
  project      = "${var.gce_project}"
  region       = "${var.gce_region}"
}

7) Update the credentials value in the network module.

Open the file /terraform-module/apcera/apcera-terraform/terraform-module/apcera/gce/network/apcera-gce-network.tf and update the credentials value.

See the example in the previous step.

8) Run Terraform commands.

The last step is to run Terraform to update the infrastructure and deploy the new VMs to GCE.

First, run the following command to pull the GCE module:

terraform get

Then, run the execution plan:

terraform plan --module-depth -1

Finally, run the following command to create the IMs on GCE:

terraform apply
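
After the apply completes, you can confirm that the new GCE outputs are populated before moving on; these output names come from the outputs.tf file you edited above:

terraform output gce-instance-manager-addresses
terraform output gce-vpn-configuration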

Configure VPN on GCE

This step requires the shared keys provided by AWS. Perform it after running terraform apply, using the output of terraform output gce-vpn-configuration. You will need to configure two VPN tunnels.

1) Retrieve the AWS shared key and store it in a text file for convenience:

terraform output gce-vpn-configuration > vpn-output.txt

2) Select Networking > VPN.

3) Click Create a VPN and enter the following:

  • Name: <cluster_name>-vpn
  • Network: Select <cluster_name>-primary-network from the drop-down
  • Region: us-central1 (must be same region as your static IP)
  • IP Address: name from prior step (<cluster_name>-vpn-addr)
  • Tunnels (configure as follows):
    • Remote peer IP address: Copy the first tunnel_outside_address from vpn-output.txt and paste it here (see example below).
    • IKE version: Select IKEv1 from the drop-down list.
    • Shared secret: Copy and paste the pre_shared_key matching the first tunnel (see example below).
    • Remote network IP range: Enter the CIDR of your AWS VPC, such as 10.0.0.0/16.

screenshot

First tunnel_outside_address:

<vpn_gateway>
  <tunnel_outside_address>
    <ip_address>198.51.100.0</ip_address>
  </tunnel_outside_address>

First pre-shared key:

<ike>
  <authentication_protocol>sha1</authentication_protocol>
  <encryption_protocol>aes-128-cbc</encryption_protocol>
  <lifetime>28800</lifetime>
  <perfect_forward_secrecy>group2</perfect_forward_secrecy>
  <mode>main</mode>
  <pre_shared_key>FIRST-SHARED-KEY</pre_shared_key>
</ike>
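
Rather than scrolling through vpn-output.txt, you can pull these values out with grep (GNU grep shown; -m1 stops at the first match and -A2 prints the two lines that follow it). The second command prints the keys for both tunnels:

grep -m1 -A2 '<tunnel_outside_address>' vpn-output.txt
grep '<pre_shared_key>' vpn-output.txt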

4) Create the second tunnel.

  • Click Add Tunnel and configure a second tunnel following the same procedure as above.

  • This time, copy and paste the second tunnel_outside_address and pre_shared_key from your vpn-output.txt.

Second tunnel_outside_address:

<tunnel_outside_address>
  <ip_address>203.0.113.0</ip_address>
</tunnel_outside_address>

Second pre-shared key:

<ike>
  <authentication_protocol>sha1</authentication_protocol>
  <encryption_protocol>aes-128-cbc</encryption_protocol>
  <lifetime>28800</lifetime>
  <perfect_forward_secrecy>group2</perfect_forward_secrecy>
  <mode>main</mode>
  <pre_shared_key>SECOND-SHARED-KEY</pre_shared_key>
</ike>

When both tunnels are configured, you should see the following:

screenshot

Configure Firewall Rules

1) Select Networking > Networks from the navigation.

2) Select the <cluster_name>-primary-network.

3) Click Add firewall rule and enter the following:

  • Name: fw-<cluster_name>-hybrid-cluster
  • Source filter: IP ranges
  • Source IP ranges: Enter the IP ranges for the AWS VPC (for example: 10.0.0.0/16).
  • Allowed protocols and ports: tcp;udp;icmp

4) Click Create.

screenshot
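
If you prefer the gcloud CLI, an equivalent rule can be created as follows; substitute your cluster name and the actual AWS VPC CIDR:

gcloud compute firewall-rules create fw-<cluster_name>-hybrid-cluster --network <cluster_name>-primary-network --source-ranges 10.0.0.0/16 --allow tcp,udp,icmp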

Deploy your hybrid cluster

The last step is to deploy the IMs to GCE and verify your hybrid cluster.

1) Generate the updated cluster.conf file.

2) Copy the updated cluster.conf to the Orchestrator host.

3) Deploy the cluster:

orchestrator-cli deploy -c cluster.conf --update-latest-release

4) Verify IM deployment to GCE.

Log in to your cluster. You should see that you now have IMs in AWS and IMs in GCE.

screenshot

Here is a summary of the commands:

$ erb cluster.conf.erb > cluster.conf
$ terraform output orchestrator-public-address
$ scp cluster.conf root@<orchestrator-IP>:~
$ ssh -A root@<orchestrator-IP>
# cp cluster.conf ~orchestrator
# sudo -u orchestrator -i
# orchestrator-cli deploy -c cluster.conf --update-latest-release

Verify deployment

See Post Installation Tasks.