Cluster Administration

This section provides instructions for managing the Apcera Platform.

Orchestrator OS update procedure

The Orchestrator host uses the cloud vendor's kernel. It is important that you update the Orchestrator OS and install available kernel patches to protect it from being exploited by security vulnerabilities.

To update the Orchestrator host OS kernel and the Orchestrator CLI:

1) SSH in to Orchestrator and run the following commands:

sudo su (to upgrade the Orchestrator kernel, you must run the upgrade commands as root)

apt-get update && apt-get dist-upgrade

This command updates the Orchestrator host OS and also updates orchestrator-cli to the latest version.

2) Reboot the Orchestrator host.

This can be accomplished by running reboot.

3) Verify Orchestrator kernel upgrade.

Run the command uname -r to see the current running kernel.

Alternatively, you can run dpkg -l | grep kernel to verify the installed kernel packages. To confirm which kernel will boot next, run cat /boot/grub/menu.lst.
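The verification steps above can be run together; a minimal sketch (the dpkg listing and GRUB file paths assume a Debian/Ubuntu host, as used by Orchestrator, and the grub.cfg fallback path is an assumption for newer GRUB versions):

```shell
# Show the kernel version currently running
uname -r

# List installed kernel packages (Debian/Ubuntu)
dpkg -l | grep kernel || true

# Show the GRUB menu to confirm which kernel boots next
# (legacy GRUB uses menu.lst; newer GRUB versions use grub.cfg)
cat /boot/grub/menu.lst 2>/dev/null || cat /boot/grub/grub.cfg 2>/dev/null || true
```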

Upgrading your cluster

Apcera Platform software upgrades are on demand. If Apcera is managing your cluster, we will coordinate the upgrade with you. If you are managing your cluster, you choose when to perform an upgrade.

Refer to Upgrading to Apcera Release 3.0 if you are upgrading an existing pre-3.0.0 cluster to release 3.0.

Use the orchestrator-cli tool to find out the available release information.

Running the orchestrator-cli releaseinfo --show-constraints command shows the available releases along with the minimum required versions of orchestrator-cli and the Apcera Platform for each.

Deployed Release: 2.6.3
+-------------------------------------------------------------------+
|                        Available Releases                         |
+---------+-------------------------------+--------------+----------+
| Version | Release Date                  |  Minimum Required Ver.  |
+---------+-------------------------------+--------------+----------+
|         |                               | Orchestrator | Platform |
+---------+-------------------------------+--------------+----------+
| 2.6.3   | 2017-07-13 18:53:28 +0000 UTC | 1.0.0        | 2.4.0    |
| 2.6.2   | 2017-05-11 21:29:37 +0000 UTC | 1.0.0        | 2.4.0    |
| 2.6.1   | 2017-04-10 18:27:19 +0000 UTC | 1.0.0        | 2.4.0    |
| 2.6.0   | 2017-02-26 17:42:47 +0000 UTC | 1.0.0        | 2.4.0    |
| 2.4.2   | 2017-01-06 16:46:05 +0000 UTC | 0.5.3        | 2.2.2    |
| 2.4.1   | 2016-12-22 23:12:27 +0000 UTC | 0.5.3        | 2.2.2    |
| 2.4.0   | 2016-11-11 19:34:48 +0000 UTC | 0.5.3        | 2.2.2    |
| 2.2.3   | 2016-10-24 21:06:39 +0000 UTC | 0.4.1        | 2.0.0    |
| 2.2.2   | 2016-10-14 16:56:54 +0000 UTC | 0.4.1        | 2.0.0    |
| 2.2.1   | 2016-09-20 18:38:41 +0000 UTC | 0.4.1        | 2.0.0    |
| 2.2.0   | 2016-07-20 17:18:24 +0000 UTC | 0.4.1        | 2.0.0    |
| 2.0.2   | 2016-06-23 16:07:30 +0000 UTC | 0.4.1        | 2.0.0    |
| 2.0.1   | 2016-04-14 04:37:41 +0000 UTC | 0.4.1        | 2.0.0    |
| 2.0.0   | 2016-03-29 16:55:16 +0000 UTC | 0.4.1        | 508.1.5  |
| 508.1.5 | 2016-02-25 21:49:43 +0000 UTC | 0.3.3        | 506.1.0  |
| 506.1.0 | 2016-02-11 22:10:33 +0000 UTC | 0.3.3        | 504.1.8  |
| 504.1.8 | 2016-02-08 19:58:03 +0000 UTC | 0.3.3        | 449.3.1  |
+---------+-------------------------------+--------------+----------+

Backup data as necessary

Refer to the Backup and Restore documentation before you begin a cluster upgrade. Make sure you have backed up your cluster, and back up your cluster.conf file before making any configuration changes.

Ensure SSH is enabled

Before performing an upgrade, you must ensure that SSH access to cluster hosts has been correctly set up in your cluster.conf. For example, from the Orchestrator host you should be able to run orchestrator-cli ssh /auth-server and be automatically logged in to the host running the auth-server component. If this is not working, review the documentation for setting up SSH access to the cluster hosts before proceeding.

Make infrastructure changes (if necessary)

If you need to make infrastructure changes, in general you should make those first before performing a cluster update. However, if you are taking down a machine, you should remove the Apcera component first (that is, run the update) before making the infrastructure change.

See also cluster scaling for more information.

Read the release notes

Before updating, check the Release Notes for the release update you are targeting for important information about that release. If there are changes that affect cluster configuration, you will need to update your cluster.conf file before performing the update. If necessary, obtain the latest version of the cluster.conf file from Apcera Support before updating.

General upgrade procedure

This section outlines the general upgrade procedure for upgrading to the latest promoted release or a specific release.

1) Update Orchestrator software.

Before upgrading a cluster, you should always update Orchestrator to the latest software version.

To check the Orchestrator version, ssh into Orchestrator (ssh orchestrator@X.X.X.X) and run the command orchestrator-cli version.

To update Orchestrator software:

(1) Run sudo apt-get update
(2) Run sudo apt-get upgrade orchestrator-cli

If you have connected to the Orchestrator host as the root user, you do not need to run the sudo command.

When the process is complete, run the following command to verify the Orchestrator version:

orchestrator-cli version

See also Performing an Air-Gapped Orchestrator Update.

2) Copy the necessary files to the Orchestrator host.

Use SCP to copy the updated cluster.conf file to the Orchestrator host. Also, if applicable, copy the release bundle to the Orchestrator host. See Copying the cluster configuration file to Orchestrator and Copying the release bundle to Orchestrator.
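For example, from your workstation (the IP address and bundle file name below are hypothetical; substitute your Orchestrator host's address and your actual file names):

```shell
# Copy the updated cluster configuration to the Orchestrator host
scp cluster.conf orchestrator@203.0.113.10:~/

# If applicable, also copy the release bundle
scp release-bundle.tar.gz orchestrator@203.0.113.10:~/
```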

3) Update the cluster software.

To upgrade to the latest promoted point release:

orchestrator-cli deploy --config cluster.conf --update-latest-release --dry
orchestrator-cli deploy --config cluster.conf --update-latest-release

To upgrade to a specific release:

orchestrator-cli deploy -c cluster.conf --release 3.0.0 --dry
orchestrator-cli deploy -c cluster.conf --release 3.0.0

4) If necessary, reboot cluster hosts.

If you see the message below when you run the orchestrator-cli deploy command, run the orchestrator-cli reboot --config cluster.conf command to reboot the cluster hosts.

This release of the Apcera platform includes a package that upgrades the Linux kernel with security fixes.  After the deploy completes, you will need to restart every server in the cluster, one at a time, during a maintenance window of your choice. The security update is NOT COMPLETE until the servers are restarted.
Proceed? [y/N]: y


Cluster scaling

This section provides considerations for scaling your cluster.

You scale a cluster by updating the components section of cluster.conf and deploying the platform using Orchestrator.
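As an illustration only (the exact cluster.conf keys and layout depend on your provisioner template and release, so treat the names below as hypothetical rather than authoritative):

```
# Hypothetical cluster.conf excerpt: scaling up Instance Managers
machines: {
  instance-manager: {
    count: 3   # increase the count; Orchestrator provisions the new machine on deploy
  }
}
```

After editing, redeploy with orchestrator-cli deploy --config cluster.conf to apply the change.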

Scaling cluster components

Orchestrator can scale up components with ease; however, there is currently a limitation on scaling components down. Orchestrator can scale down a component when it is the only component active on a machine, since it simply terminates that machine. Scaling down a component on a machine that also runs other components is not currently supported, because the uninstall process for each component has not been implemented in the Chef cookbooks used to set up and deploy Apcera.

Scaling cluster resources

If you add computing resources to a virtual machine in an Apcera cluster (for example, add more RAM or CPUs to an Instance Manager node), those changes won't be detected until you run orchestrator-cli deploy. There are hard-coded configuration files that specify cluster resources for each component, located in /opt/apcera/continuum/conf. Only on deploy are these changes detected.

Changing components and system parameters

If you want to change VM parameters such as CPU, memory, etc., you need to do so as follows:

  • Set the component's count to 0 in the components section of cluster.conf and perform a deploy (which removes that component)
  • Make the desired changes in cluster.conf and deploy again (which reinstalls the component with the changes you made)
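The two deploys in the steps above can be sketched as follows (assuming cluster.conf is in the current directory on the Orchestrator host, and using the deploy command shown earlier in this document):

```shell
# 1. After setting the component's count to 0 in cluster.conf,
#    deploy to remove the component:
orchestrator-cli deploy -c cluster.conf

# 2. After restoring the count and making the CPU/memory changes
#    in cluster.conf, deploy again to reinstall the component:
orchestrator-cli deploy -c cluster.conf
```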

This same procedure applies to Zabbix changes, such as adding email or PagerDuty settings.