Upgrading and Scaling

This section provides instructions for upgrading Apcera Platform Enterprise Edition deployments.

Apcera Platform software upgrades are on demand. If Apcera is managing your cluster, we will coordinate the upgrade with you. If you are managing your cluster, you choose when to perform an upgrade.

Latest promoted release

Apcera Platform version 2.6.2 is the latest promoted release. Refer to the What's New documentation for release information.

Security releases

Several Apcera Platform releases, including 2.2.3 and 2.4.1, include Linux kernel security patches. Apcera strongly recommends that you upgrade your cluster software and update the Orchestrator host OS to the latest promoted release.

For historical purposes, Apcera version 508 included a kernel update to address CVE-2015-7547, and version 504 included a kernel update to address CVE-2016-0728.

Upgrade paths

You can upgrade directly to release 2.6.x from Apcera Platform version 2.4.2. No other upgrade path is supported. You must use Orchestrator version 1.0.0 to deploy a 2.6.x cluster.

You can upgrade directly to release 2.4.x from Apcera Platform versions 2.2.2 or 2.2.3. No other upgrade paths are supported. You must use Orchestrator 0.5.3 to deploy the 2.4.x release and take advantage of new deployment features.

To get your cluster to the latest promoted release, the suggested upgrade path is 2.2.3 > 2.4.2 > 2.6.2.

Note the following cluster.conf changes across LTS releases.

For 2.2 > 2.4:

  • The events-server is added
  • The nats-server and auth-server can now be HA
  • Any flex-auth-server component must be explicitly enumerated to be HA

For 2.4 > 2.6:

  • The metrics-manager and health-manager can now be HA
  • The statsd-server is deprecated and must be removed if it is on the same host as the metrics-manager
  • There are only two singleton metricslogs components (graphite-server and redis-server), and no singleton central components other than stagehand.

To migrate components to HA, refer to the migration documentation.

Before upgrading across LTS releases, you should contact Apcera Support for assistance.

Upgrade prerequisites

This section lists the prerequisites for upgrading your cluster.

Backup data as necessary

Refer to the Backup and Restore documentation before you begin a cluster upgrade, and make sure you have backed up your cluster.conf file.

Ensure SSH is enabled

Before performing an upgrade, you must ensure that SSH access to cluster hosts has been correctly set up in your cluster.conf. For example, from the Orchestrator host you should be able to run orchestrator-cli ssh /auth-server and be automatically logged in to the host running the auth-server component. If this is not working, review the documentation on setting up SSH access to cluster hosts before proceeding.
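To verify, run the following from the Orchestrator host:

orchestrator-cli ssh /auth-server

You should land on the auth-server host without being prompted for credentials; type exit to return to the Orchestrator host.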

Make infrastructure changes (if necessary)

If you need to make infrastructure changes, in general you should make them before performing a cluster update. However, if you are taking down a machine, remove the Apcera component first (that is, run the update) before making the infrastructure change.

See also cluster scaling for more information.

Note that for Apcera Platform release 2.2.x, you will need to manually add several new components to your cluster.conf file for Enterprise Edition. Refer to the deployment sizing guidelines for details on these components and the machines to which to add them.

Read the release notes

Before updating, check the Release Notes for the release you are targeting for important information about that release. If there are changes that affect cluster configuration, you will need to update your cluster.conf file before performing the update. If necessary, obtain the latest version of the cluster.conf file from Apcera Support before updating.

General upgrade procedure

This section outlines the general upgrade procedure for upgrading to the latest promoted release or a specific release.

1) Complete the upgrade prerequisites.

2) Update Orchestrator software.

Before upgrading a cluster, you should always update Orchestrator to the latest software version. For security releases you must also update the Orchestrator host OS.

To check the Orchestrator version, ssh into Orchestrator (ssh orchestrator@X.X.X.X) and run the command orchestrator-cli version.
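For example, where X.X.X.X is the IP address of the Orchestrator host:

ssh orchestrator@X.X.X.X
orchestrator-cli version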

To update Orchestrator software:

(1) Run sudo apt-get update
(2) Run sudo apt-get install orchestrator-cli

If you have connected to the Orchestrator host as the root user, you do not need to run the sudo command.

When the process is complete, run the following command to verify the Orchestrator version:

orchestrator-cli version

See also Performing an Air-Gapped Orchestrator Update.

3) Copy the necessary files to the Orchestrator host.

Use SCP to copy the updated cluster.conf file to the Orchestrator host. Also, if applicable, copy the release bundle to the Orchestrator host. See Copying the cluster configuration file to Orchestrator and Copying the release bundle to Orchestrator.
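For example, from your workstation, where X.X.X.X is the IP address of the Orchestrator host and release-bundle.tar.gz is a placeholder for the actual release bundle file name:

scp cluster.conf orchestrator@X.X.X.X:~/
scp release-bundle.tar.gz orchestrator@X.X.X.X:~/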

4) Update the cluster software.

You can upgrade to the latest promoted point release or to a specific release. Note that the --update-latest-release option only updates to point releases, for example from 2.2.2 to 2.2.3. If you want to upgrade to a major release, such as from 2.0.0 to 2.2.0 or 3.0.0, you must specify the version. You should always perform a dry run first.

To list the available releases for your cluster, run the following command:
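orchestrator-cli releaseinfo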


To upgrade to the latest promoted point release:

orchestrator-cli deploy --config cluster.conf --update-latest-release --dry
orchestrator-cli deploy --config cluster.conf --update-latest-release

To upgrade to a specific release:

orchestrator-cli deploy -c cluster.conf --release 2.2.3 --dry
orchestrator-cli deploy -c cluster.conf --release 2.2.3

5) If necessary, reboot cluster hosts.

If you see the message below when you run the orchestrator-cli deploy command, you must reboot all cluster hosts following the reboot instructions.

This release of the Apcera platform includes a package that upgrades the Linux kernel with security fixes.  After the deploy completes, you will need to restart every server in the cluster, one at a time, during a maintenance window of your choice. The security update is NOT COMPLETE until the servers are restarted.
Proceed? [y/N]: y


NOTE: Releases 2.2.3, 2.0.0, 508 and 504 include kernel patches and require a cluster host reboot.

6) Update APC.

Lastly, update APC by running the apc update command on each APC client:
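apc update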

Orchestrator OS update procedure

If the release you are upgrading to includes a Linux kernel update (such as 2.2.3, 2.2.0, 508, and 504), you must manually update the Orchestrator OS following the procedure listed here.

The Orchestrator host uses the vendor kernel. You use apt-get to update the Orchestrator host OS.

To update the Orchestrator host OS kernel and the Orchestrator CLI:

1) SSH in to Orchestrator and run the following commands:

sudo su (in order to upgrade the Orchestrator kernel, you must log in as root)

apt-get update && apt-get dist-upgrade

This command updates the Orchestrator host OS and also updates orchestrator-cli, bringing it up to the latest version.

2) Reboot the Orchestrator host.

This can be accomplished by running reboot.

3) Verify Orchestrator kernel upgrade.

Run the command uname -r to see the current running kernel.

Alternatively, you can run dpkg -l | grep kernel to list the installed kernel packages. To verify which kernel will boot next, run cat /boot/grub/menu.lst.
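For example:

uname -r
dpkg -l | grep kernel
cat /boot/grub/menu.lst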

Importing an updated cluster host base image

When an update to the Apcera Platform host operating system image (base image) is available, you may need to import it to your cluster if you want to add Instance Managers (IMs) or other cluster hosts.

NOTE: New images were made available for the Apcera Platform releases 2.2.3, 2.0.0, 508 and 504.

To import an updated base image:

  1. Deploy the version of Apcera Platform corresponding to the base image (likely the latest release).
  2. Download the updated image from the Apcera Support Site.
  3. If you are using vSphere or OpenStack, import this image into your infrastructure. On vSphere you'll likely want to put it in a new template folder.
  4. If you are using AWS, download the most recent terraform-module and run your Terraform configuration against it. If you are using vSphere or OpenStack, update your cluster.conf to reference the new image: on vSphere, update the provisioner template_folder to the new folder to which you uploaded the base image; on OpenStack, update the machine_defaults image to the ID of the uploaded image (see the sketch after this list).
  5. To test, increase the instance-manager count by one and deploy.
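The following is an illustrative sketch only, not exact syntax; the structure of these settings depends on your existing cluster.conf, and the folder name and image ID shown are placeholders.

On vSphere:

provisioner: {
  template_folder: "apcera-base-templates/new-folder"
}

On OpenStack:

machine_defaults: {
  image: "UPLOADED-IMAGE-ID"
}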

Cluster scaling

This section provides considerations for scaling your cluster.

You scale a cluster by updating the components section of cluster.conf and deploying the platform using Orchestrator.

Scaling cluster components

The Orchestrator can scale up components with ease; however, there is currently a limitation on scaling down components. The Orchestrator can scale down a component when it is the only component active on a machine, since it simply terminates that machine. Scaling down a component on a machine that is also running other components is not currently supported, because the uninstall process for each component has not been implemented in the Chef cookbooks used to set up and deploy Apcera.

Scaling cluster resources

If you add computing resources to a virtual machine in an Apcera cluster (for example, add more RAM or CPUs to an Instance Manager node), those changes won't be detected until you run orchestrator-cli deploy. Hard-coded configuration files located in /opt/apcera/continuum/conf specify the cluster resources for each component, and these changes are detected only on deploy.
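For example, after resizing the VM, a dry run followed by a deploy picks up the new values (assuming your configuration file is named cluster.conf, as in the earlier examples):

orchestrator-cli deploy --config cluster.conf --dry
orchestrator-cli deploy --config cluster.conf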

It's more typical to grow clusters horizontally by adding more VMs, rather than scaling vertically by adding more compute resources. See also component scaling guidelines.

Changing components and system parameters

If you want to change VM parameters such as CPU, memory, etc., you need to do so as follows:

  • Set the component's count to 0 in the components section and perform a deploy (which removes that component)
  • Make the desired changes in cluster.conf and deploy again (which reinstalls the component with the changes you made); see the sketch after this list
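For example, here is a minimal sketch of the two deploys for an instance-manager, assuming your cluster.conf lists component counts in a components block (adjust to match the structure of your existing file; the counts shown are placeholders).

First deploy, with the component removed:

components: {
  instance-manager: 0
}

Second deploy, after making your changes, with the desired count restored:

components: {
  instance-manager: 3
}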

This same procedure applies to Zabbix changes, such as adding email or PagerDuty settings.