Installing Apcera Platform EE on Azure

This document describes how to install Apcera Platform EE on Azure.

Review prerequisites

Before you begin, be sure to review the Azure installation requirements.

Set up Terraform

  1. Install the supported version of Terraform.

  2. Download the Terraform files for Azure.

  3. Set up your Terraform working directory.

    The downloaded file unzips to a directory named azure.

  4. Copy this directory to a known location, such as $HOME/azure (see the example commands after this list).

  5. Review the Terraform files.

    In the azure folder you should have the following files and subdirectory:

    • apcera_erb_helpers.rb
    • /azure
    • cluster.conf.erb
    • main.tf

    The /azure subdirectory contains the Apcera Terraform module for Azure; its files define the Azure machines and networking. For default deployments you do not need to modify these files. See Azure requirements for details.

    You will edit the main.tf and cluster.conf.erb files to deploy the Apcera Platform to Azure.
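
For example, steps 3 and 4 above might look like the following commands (the archive name is a placeholder for the file you downloaded):

     unzip TERRAFORM-FILES-FOR-AZURE.zip
     cp -r azure $HOME/azure
     cd $HOME/azure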

Capture Azure Account Information

Complete the following process to capture the information you need to populate Terraform. If necessary, refer to the Azure documentation.

  1. Review Azure requirements.

    To summarize, you will need the following Azure information:

    • Subscription ID
    • Client ID
    • Client Secret
    • Tenant ID

    The steps below guide you in obtaining this information.

  2. Obtain your Azure Subscription ID.

    Log in to https://portal.azure.com/.

    Select "My permissions” (beneath the account avatar in the upper right, as shown below):

    screenshot

    Select a subscription from those available in the dropdown.

    Record the "Azure Subscription ID" that is displayed.

  3. Install the Azure CLI.

    You need to install the Azure CLI so that you can get the Client ID (as described in the next step).

    sudo npm install azure-cli -g

    azure config mode arm

  4. Obtain the Azure Client ID and Azure Client Secret.

    Run the following Azure CLI command:

    azure login

    Sign in to the https://aka.ms/devicelogin page and enter the code to authenticate.

    Create a test application. The password you define is your "Azure Client Secret."

    azure ad app create --name "mytest" --home-page "https://www.my.test" --identifier-uris "https://www.my.test/example" --password AZURE-CLIENT-SECRET

    The above command returns the "AppId" string. This value is the "Azure Client ID."

  5. Obtain the Azure Tenant ID:

    Copy the "AppId" string returned from the above command and use it with the following command:

    azure ad sp create -a APP-ID-STRING

    This command returns the "ObjectId" string. Use it with the following command:

    azure role assignment create --objectId OBJECT-ID-STRING -o Owner -c /subscriptions/ENTER-YOUR-AZURE-SUBSCRIPTION-ID/

    Run the following command to display the "tenantId."

    azure account show -s ENTER-YOUR-AZURE-SUBSCRIPTION-ID --json

    For example:

     $ azure account show -s YOUR-AZURE-SUBSCRIPTION-ID --json
     [
       {
         "id": "YOUR-AZURE-SUBSCRIPTION-ID",
         "name": "Pay-As-You-Go",
         "user": {
           "name": "accts@email.com",
           "type": "user"
         },
         "tenantId": "YOUR-AZURE-TENNANT-ID",
         "state": "Enabled",
         "isDefault": true,
         "registeredProviders": [],
         "environmentName": "AzureCloud"
       }
     ]
    

Configure Azure infrastructure

Edit the main.tf file with the following information. A combined example sketch appears after this list.

  1. Source.

    Enter the full local path to where you have stored the Apcera Terraform module for Azure.

    For example: source = "~/my-user/apcera-azure/azure-module"

  2. Cluster name.

    The cluster name must contain only alphanumeric characters and be no longer than 14 characters. Microsoft naming rules allow for 24 characters, but we append the cluster name to one or more security group names, which uses 10 of the allotted 24 characters and leaves 14 for the cluster name itself. Note that we normalize the cluster name to lower case and strip out any non-alphanumeric characters.

  3. Azure credentials.

    Enter your Azure credentials:

    • Subscription ID
    • Client ID
    • Client secret
    • Tenant ID

    Refer to the Capture Azure Account Information section above to obtain these credentials.

  4. Cluster location.

    Provide the cluster location, which is the Azure region where you will deploy the cluster. Region quota limits may apply depending on your subscription level.

  5. Cluster subnet.

    Typically you will use the default network CIDR range for the cluster: 10.0.0.0/16.

    If necessary you can update this range.

    Note that the CIDR ranges for the cluster subnets are set in the module files. Typically you do not need to change the subnets unless you change the network CIDR range.

  6. Enter the Orchestrator user password.

    This is the password you will use to connect to the Orchestrator host.

    Enter a password that conforms to the Microsoft password complexity requirements.

    For convenience the default user name is ops, but you can change it if you want.

  7. Enter the cluster admin password.

    This is the password for the ops user that the orchestrator-cli tool will use to connect to cluster hosts.

    The user name must be ops, and the password must conform to the Microsoft password complexity requirements.

  8. Specify the component counts.

    For initial deployments the default component counts are sufficient.

     auditlog-count         = 2
     central-count          = 3
     gluster-count          = 0
     instance-manager-count = 2
     metricslogs-count      = 1
     monitoring-count       = 1
     nfs-count              = 1
     orchestrator-count     = 1
     riak-count             = 3
     router-count           = 2
     singleton-count        = 1
     splunk-indexer-count   = 0
     splunk-search-count    = 0
     tcp-router-count       = 1
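
Putting the settings above together, a completed main.tf might look like the following sketch. The variable names shown here are illustrative assumptions only; use the variable names defined in the Apcera Terraform module for Azure (see the /azure module files), together with the component counts listed in step 8.

     # Illustrative sketch only -- the variable names below are assumptions;
     # check the variables defined by the Apcera Terraform module for Azure.
     module "apcera-azure" {
       source = "~/my-user/apcera-azure/azure-module"

       # Cluster name: alphanumeric, 14 characters or fewer
       cluster-name = "example"

       # Azure credentials captured earlier
       subscription-id = "ENTER-YOUR-AZURE-SUBSCRIPTION-ID"
       client-id       = "ENTER-YOUR-AZURE-CLIENT-ID"
       client-secret   = "ENTER-YOUR-AZURE-CLIENT-SECRET"
       tenant-id       = "ENTER-YOUR-AZURE-TENANT-ID"

       # Azure region and cluster network CIDR
       cluster-location = "ENTER-AZURE-REGION"
       cluster-subnet   = "10.0.0.0/16"

       # Orchestrator and cluster host credentials
       orchestrator-user      = "ops"
       orchestrator-password  = "ENTER-A-COMPLEX-PASSWORD"
       cluster-admin-user     = "ops"
       cluster-admin-password = "ENTER-A-COMPLEX-PASSWORD"

       # Component counts (defaults shown in step 8)
       orchestrator-count     = 1
       central-count          = 3
       instance-manager-count = 2
       router-count           = 2
       # ... remaining counts as listed above
     }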
    

Create Azure resources

To create Azure resources you use the following Terraform commands:

terraform get                                   // Fetch the Terraform module (Azure)
terraform plan --module-depth -1                // View the changes Terraform will make
terraform apply                                 // Create the Azure resources

  1. CD to the /azure directory where you extracted the Terraform Azure module and files.

    cd $HOME/azure

  2. Run the command terraform get.

    The get command downloads the modules used by this Terraform configuration and caches them in the working directory. Changes to the upstream modules are imported only if you run the command again with the --update flag, so without that flag it is safe to automate this step.

  3. Run the command terraform plan.

    The plan command displays the changes Terraform will attempt. If you receive any errors, debug as necessary.

  4. Run the command terraform apply.

    Use the apply command to apply the planned changes. This command may take some time to complete.

    If you receive any errors, rerun the terraform apply command a second time.

  5. Verify creation of Azure resources

    When the terraform apply command completes successfully, log in to the Azure Console for your account.

    Select Resource groups. You should see that the -resource-group has been created, along with its underlying resources, including several machine instances, volumes, networks, and security groups.

  6. Configure Terraform to store state remotely

    This step is optional, but recommended for production clusters. See configure remote state for Terraform.

Configure DNS

With Azure you must configure DNS before you deploy the cluster because the current implementation of the Apcera Terraform module for Azure uses Riak-CS for the package store, and Riak communications go through the Apcera router. Refer to the Riak-CS Package Storage Backend Configuration for complete details.

After the terraform apply command completes successfully, Terraform outputs the public addresses of the routers. For example:

router-public-addresses = '40.77.105.40','40.77.109.35'

Create or update two DNS records as follows:

  • $base_domain to point to the http routers (usually an “A” record)
  • *.$base_domain to point to the http routers (usually a “CNAME” to $base_domain)

See Configuring DNS for guidance. Note that if you are using multiple routers, update DNS to point to the load balancer that fronts the routers.
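
For example, assuming the base domain example.mycompany.com (used later in this document) and the router addresses shown above, the two records might look like the following zone-file-style sketch; create the equivalent records in your DNS provider's console:

     example.mycompany.com.      A      40.77.105.40
     example.mycompany.com.      A      40.77.109.35
     *.example.mycompany.com.    CNAME  example.mycompany.com.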

Configure SSH

  1. Add your private SSH key to your local SSH agent:

     ssh-add apcera.pem
    

    You should see the message "Identity added: apcera.pem (apcera.pem)," indicating success.

    Run the following command and confirm that your key was successfully added:

     ssh-add -l
    

    If you get an error saying that "Permissions 0644 for 'apcera.pem' are too open," change the permissions on the file so that only you can read it (chmod 400 apcera.pem), then re-run the ssh-add command.

  2. Connect to the Orchestrator host via SSH.

    Obtain the public IP address for Orchestrator in the Azure portal by selecting the -orchestrator VM and looking at the "Essentials" pane.

    Connect remotely to Orchestrator as follows:

         ssh -A orchestrator@40.77.109.110
         orchestrator@40.77.109.110's password:
    

    Successful login:

         Last login: Thu Nov  3 17:05:04 2016 from c-73-70-35-45.hsd5.ca.netcast.com
         orchestrator@orchestrator:~$
    

Edit the cluster.conf.erb file

The cluster.conf.erb file is used to generate the cluster.conf file.

In general you can leave the provisioner, machines, and components sections set to the defaults.

In the chef section of the cluster.conf.erb, you must provide the values for the following parameters.

Cluster name and base domain (lines 201 and 202)

Provide a unique cluster name that matches the one you entered in main.tf and the base domain for which you have set up a DNS record. In the following example, "example" is the cluster_name and "example.mycompany.com" is the base domain:

"cluster_name": "example",
"base_domain": "example.mycompany.com",

Admin password (line 251)

In the chef.continuum.auth_server section of cluster.conf.erb, specify the basic auth password for the cluster admin user.

Monitoring passwords

In the chef.apzabbix section of cluster.conf.erb, specify the passwords for the Monitoring admin user and the Monitoring DB user.
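
The exact key names depend on your cluster.conf.erb template, so the following is only an illustrative sketch of where these values live; follow the comments in your template for the real parameter names:

     chef: {
       "continuum": {
         "auth_server": {
           # Hypothetical key name -- use the parameter your template defines
           "admin_password": "CLUSTER-ADMIN-PASSWORD",
           ...
         },
         ...
       },
       "apzabbix": {
         # Hypothetical key names -- use the parameters your template defines
         "admin_password": "MONITORING-ADMIN-PASSWORD",
         "db_password": "MONITORING-DB-PASSWORD",
         ...
       }
     }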

HTTPS certificate

For production clusters, you will need to enable HTTPS. See Configuring Apcera for HTTPS for guidance.

SSH public key

  1. Generate an SSH public-private key pair (an example ssh-keygen command follows this list).

  2. Add the contents of your public SSH key to cluster.conf.erb.

     chef: {
       "continuum": {
         "ssh": {
           "custom_keys":[ "ssh-rsa YOUR-PUBLIC-SSHKEY" ]
         },
         ...
       }
     }
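
For example, you can generate the key pair with ssh-keygen (a minimal sketch; the key type, size, and file name are illustrative):

     ssh-keygen -t rsa -b 4096 -f ~/.ssh/apcera_azure

The contents of the resulting ~/.ssh/apcera_azure.pub file are what you paste into the custom_keys array shown above.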
    

Generate cluster.conf

  1. Run the following command to generate the cluster.conf file:

     erb cluster.conf.erb > cluster.conf
    

    Run this command on your local machine, not on the Orchestrator host. CD to the working directory where the Azure installation files are located and run the command there.

    To run this command you must have Ruby installed.

    The cluster.conf file is used to deploy the cluster. If successful this command should exit silently.

    Verify that the generated cluster.conf file is output to your directory. If you encounter an error, run the command again.

    If you want to update your cluster, you should update the cluster.conf.erb file and generate an updated cluster.conf file. In other words, you should avoid editing the cluster.conf file directly.

  2. Copy cluster.conf to the Orchestrator host.

    Use SCP to copy cluster.conf to the orchestrator user's home directory using the Orchestrator IP address, for example:

     scp cluster.conf orchestrator@52.34.45.31:~
    

    If you see the message "Are you sure you want to continue connecting (yes/no)?", type "yes" to proceed.

    Verify that the cluster.conf file is copied to the Orchestrator host:

     cluster.conf 100% 9537 9.3KB/s 00:00
    
  3. Copy the release bundle to Orchestrator (optional).

    During the deployment (described below) you can pull the latest release from cloud storage. Alternatively, to speed deployment time and prevent various round-trips between the Orchestrator server and the internet, you can download the release bundle and copy it to the Orchestrator host.

    Download the Apcera software bundle from the Apcera Support Portal (https://support.apcera.com).

    To securely copy the release bundle tarball to the Orchestrator host:

     scp release-2.4.0-ec5a25b.tar.gz orchestrator@40.77.109.110:~
     orchestrator@40.77.109.110's password:
     release-2.4.0-ec5a25b.tar.gz                                                   100%  581MB 283.2KB/s   35:02
    

Deploy the cluster

  1. SSH to the Orchestrator host.

     ssh -A orchestrator@52.34.45.31
    

    You should be connected, indicated by orchestrator@<host>-orchestrator:~#.

  2. Initialize the Orchestrator DB.

    For first time deploys only (not updates), you must initialize the Orchestrator database:

     orchestrator-cli init
    

    If you do not initialize the DB before the initial deploy, you receive the error "Error: no tenant records found!"

  3. Perform a dry run to validate cluster.conf:

     orchestrator-cli deploy -c cluster.conf --update-latest-release --dry
    

    Or using a release bundle you copied to Orchestrator:

     orchestrator-cli deploy -c cluster.conf --release-bundle release-2.4.0-ec5a25b.tar.gz --dry-run
    

    Use ls to verify that the graph.png file is generated in the home directory.

    Type exit to log out of the Orchestrator host.

    Copy the graph.png file to your local computer and examine it for errors:

     scp orchestrator@40.77.109.110:graph.png ~/apcera-azure
     orchestrator@40.77.109.110's password:
     graph.png                                               100%  487KB 243.6KB/s   00:02
    
  4. Deploy the cluster.

    To deploy the latest promoted release from the internet:

     orchestrator-cli deploy -c cluster.conf --update-latest-release
    

    To deploy a release bundle you copied to Orchestrator:

     orchestrator-cli deploy -c cluster.conf --release-bundle release-2.4.0-ec5a25b.tar.gz
    

    Read and acknowledge any reboot warning messages.

    Successful deployment is indicated by the message "Done with cluster updates."

    NOTE: If this is an initial deploy, you may have to run the deploy a second time to ensure all hosts are monitored by Zabbix.

  5. Test the deployment

    See Post Installation Tasks for Enterprise Edition deployments.