Managing the Apcera Platform

You use the apcera-install tool to manage your Apcera Platform cluster.

Cluster information

Run the apcera-install status command to display the status of each VM host as well as the cluster access information.

For example, on a Mac you run the command:

$ ./apcera-install status

Here is an example of the output:

[ Apcera Setup - Status ]
Please wait a moment while we query your cluster...
[ Cluster Status ]
+--------+-------------------------------+
| Status | Apcera Platform Version       |
+--------+-------------------------------+
| Up     | 3:3.0.0 (build: 6c0dcc9)      |
+--------+-------------------------------+

[ Machine Status ]
+----------+------------------+-------------------------+---------------+-------------+
| Provider | Role             | Name                    | Public IP     | Private IP  |
+----------+------------------+-------------------------+---------------+-------------+
| aws      | orchestrator     | subdomain-orchestrator  | 52.63.231.132 | 10.0.50.65  |
| aws      | central          | subdomain-central-1     | 54.206.88.220 | 10.0.50.203 |
| aws      | central          | subdomain-central-2     | 13.210.55.105 | 10.0.60.239 |
| aws      | central          | subdomain-central-3     | 52.65.13.0    | 10.0.70.213 |
| aws      | instance-manager | subdomain-im-1          | 52.64.159.153 | 10.0.50.249 |
| aws      | instance-manager | subdomain-im-2          | 13.54.141.13  | 10.0.60.209 |
| aws      | instance-manager | subdomain-im-3          | 13.55.31.146  | 10.0.70.130 |
| aws      | router           | subdomain-router-1      | 52.62.25.174  | 10.0.50.14  |
| aws      | router           | subdomain-router-2      | 52.65.16.53   | 10.0.60.32  |
| aws      | router           | subdomain-router-3      | 52.62.92.100  | 10.0.70.153 |
| aws      | monitoring       | subdomain-monitoring-1  | 13.54.98.216  | 10.0.50.222 |
| aws      | auditlog         | subdomain-auditlog-1    | 52.65.124.129 | 10.0.50.144 |
| aws      | metricslogs      | subdomain-metricslogs-1 | 54.66.188.233 | 10.0.50.197 |
+----------+------------------+-------------------------+---------------+-------------+

[ Access Info ]
Target: https://subdomain.apcera-platform.io:443
Web Console: https://console.subdomain.apcera-platform.io
Zabbix Monitoring: https://zabbix.subdomain.apcera-platform.io
Apcera Platform Version: 3:3.0.0 (build: 6c0dcc9)
Users: admin
DNS Token: cd08c16c-9bf9-479e-9ff3-1aaf06104c19
Provider: aws
Number of Centrals: 3
Number of IMs: 3
Number of Storage: 0
Number of Routers: 3
Number of TCP Routers: 0
Number of Monitoring Hosts: 1

Access any of your VMs using the "apcera-install ssh" command or use your
ssh client by logging in as the user "ops" (e.g. ssh ops@<ip>)
using the key located at "/Users/username/.ssh/id_rsa.pub". Use the "orchestrator" user when logging into the orchestrator machine.

Log in to Zabbix

Log in to your cluster's Zabbix dashboard using the URL provided by the apcera-install status command: https://zabbix.<subdomain>.<cluster-domain>

[ Access Info ]
Target: https://subdomain.apcera-platform.io:443
Web Console: https://console.subdomain.apcera-platform.io
Zabbix Monitoring: https://zabbix.subdomain.apcera-platform.io

Use the user name and password you provided during the cluster configuration. If you don't remember them, you can find them in the apcera-install.json file under the monitoring section.
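For example, assuming you have the jq utility installed (jq is not part of apcera-install, and the exact key layout in your file may differ), you can print the monitoring section directly:

$ jq '.monitoring' apcera-install.json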

Zabbix monitors the health of your Apcera cluster components.


Also, see Configuring Cluster Monitoring as a reference.

SSH to cluster host

You can SSH into a component host. To do this:

  1. Run the apcera-install status command to list the machine status.
  2. Access the component host using the command apcera-install ssh <role> or apcera-install ssh <name>.

    For example, run the following command to SSH into the Orchestrator host:

    ./apcera-install ssh orchestrator

NOTE: You can SSH into any of the nodes in your cluster by running the apcera-install ssh command and passing either the machine role (e.g. orchestrator, instance-manager, central) or the machine name (e.g. subdomain-orchestrator, subdomain-im-1, subdomain-central-2).

If you configured an SSH key for your cluster, you can access any of your VMs using your preferred SSH client: log in as the user "orchestrator" on the orchestrator host (e.g. ssh orchestrator@<public_ip>), and as the user "ops" (e.g. ssh ops@<public_ip>) on all others. See Configuring SSH Access.
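For example, assuming your private key is at ~/.ssh/id_rsa and using the example addresses from the status output above (substitute your own):

$ ssh -i ~/.ssh/id_rsa orchestrator@52.63.231.132   # the orchestrator host
$ ssh -i ~/.ssh/id_rsa ops@52.64.159.153            # an instance-manager host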

Type exit to quit the SSH session.

SCP files to/from cluster host

To securely copy a file between localhost and cluster hosts, run the apcera-install scp command.

Usage:

apcera-install scp <local_file> <host_name>:<destination_file>

Or

apcera-install scp <host_name>:<source> <local_destination>

Where host_name is the Name shown in the apcera-install status output. Alternatively, you can use the Role name, such as central. If your cluster has more than one host belonging to the role, the scp command picks the first machine (e.g. subdomain-central-1).

For example, the following command copies the /home/orchestrator/graph.png file from the orchestrator host and saves it in your current working directory with the same name.

apcera-install scp orchestrator:/home/orchestrator/graph.png ./graph.png
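Copying in the other direction works the same way. For example, to upload a hypothetical local file named settings.conf to the ops user's home directory on the first central host (destination path assumed):

apcera-install scp ./settings.conf central:/home/ops/settings.conf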

Update platform configuration

If you want to change the configuration of a cluster (for example, change the number of instance managers), you need to run the apcera-install config command. After you update the platform configuration you must redeploy the platform (run apcera-install deploy) with the new configuration.

$ ./apcera-install config

Your previous input is saved in the apcera-install.json file, which provides the default values when you re-run the config command.

Available Sub-commands

  auth         Configure auth parameters
  cluster      Configure cluster parameters
  domain       Configure domain parameters
  https        Configure https parameters
  monitoring   Configure monitoring parameters
  nameserver   Configure nameserver parameters
  os           Configure Operating System
  provider     Configure provider parameters
  splunk       Configure Splunk parameters
  user         Configure user parameters

If you want to reconfigure a specific area of configuration, run the config command followed by the sub-command listed above. For example, to make some Splunk configuration changes to your cluster, run the following command:
$ ./apcera-install config splunk

Run the apcera-install plan command to see if your configuration changes require updates to the infrastructure:

$ ./apcera-install plan

If the output indicates that no changes are required, you can proceed with the deploy.

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.
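Putting the pieces together, a typical reconfiguration pass might look like the following sketch (splunk is just an example sub-command; substitute the area you need to change):

$ ./apcera-install config splunk   # update the configuration area
$ ./apcera-install plan            # check for required infrastructure changes
$ ./apcera-install apply           # only if plan reports pending changes
$ ./apcera-install deploy          # redeploy with the new configuration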

Scaling the cluster

You can scale your cluster by adding or removing Instance Managers (IMs) and/or Storage nodes as your capacity requirements change.

1) Run apcera-install config provider and enter the desired number of IMs and storage nodes when prompted.

$ ./apcera-install config provider

[ Provider Configuration ]
Specify the number of Instance Managers for provider aws, the number of IMs is recommended not to be less than the number of zones. [3]: 5
Would you like to add storage nodes to provider aws? [y/N] y
Specify the number of Storage nodes for provider aws, the number of storage must be a multiple of 3 [3]:

2) Run apcera-install plan to calculate what changes need to be made. The output indicates how many cloud elements will be added, changed, or destroyed.

For example:

Plan: 2 to add, 1 to change, 0 to destroy.

3) Run apcera-install apply to execute the plan.
4) Run apcera-install deploy to deploy the changes.

Redeploy is required if you change hardware (disk, attached storage, RAM, etc.) on any cluster machine host, physical or virtual.
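For example, the full scaling pass from steps 1 through 4 above, as a single sketch:

$ ./apcera-install config provider   # set the new IM/storage counts
$ ./apcera-install plan              # review what will be added, changed, or destroyed
$ ./apcera-install apply             # create or modify the cloud resources
$ ./apcera-install deploy            # deploy the platform onto the new topology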

Deployment options

Use the apcera-install deploy command to deploy the Apcera Platform after you have configured and created the infrastructure. You also use the apcera-install deploy command to redeploy your Apcera Platform if you want to update the software or change the deployment configuration after running apcera-install config.

Usage:

   apcera-install deploy [flags]

Flags:

  --conf string          Deploy from specified cluster.conf
  --conf-only            Generate cluster.conf only and skip the deploy action
  -h, --help             Help for deploy
  -n, --nopass           Skip the cluster passphrase
  -r, --release string   Path to the release file to deploy

The --nopass option indicates that a system default will be used as the cluster passphrase. This option is provided for convenience during tests or when building a demo cluster. It is insecure and not recommended for production clusters.

Global Flags:

  -c, --config <config_file_path>: Config file (JSON supported)

If you only want to generate the cluster configuration file (cluster.conf), run the command with the --conf-only flag. Because the deploy command regenerates cluster.conf each time it runs, save any customization to the file under a different file name and pass it with the --conf <file-name> flag. See Deploy Apcera Platform with custom_cluster.conf.
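For example, one possible workflow for maintaining a customized configuration (the custom_cluster.conf name mirrors the referenced topic; any file name works):

$ ./apcera-install deploy --conf-only            # generate cluster.conf without deploying
$ cp cluster.conf custom_cluster.conf            # preserve your customizations separately
$ ./apcera-install deploy --conf custom_cluster.conf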

Deploy specific release

To deploy a specific release (other than the latest), use the --release (or -r) command option with the release bundle you want to deploy. Note that the --release option is a global command option that can be used with apcera-install deploy or apcera-install install.

For example:

The following command uses the --release option to deploy a specific Apcera Platform release bundle from the cloud:

$ ./apcera-install deploy --release https://s3.amazonaws.com/apcera-releases/continuum/bundles/release-3.0.0.tar.gz

The following command uses the --release option to deploy a specific Apcera Platform release bundle that you have downloaded to your local machine:

$ ./apcera-install deploy --release /Users/username/Downloads/release-3.0.0-f284c8e.tar.gz

If you encounter an error during the deploy, see troubleshooting.

Rebooting after deploy

Some releases may require one or more of the cluster hosts to be restarted after the deploy operation. In such a case, the deployment completion message indicates that the cluster needs to be rebooted.

For example:

Deploy requires a reboot to complete required software/kernel updates
To get the latest updates for your cluster, please run: "apcera-install reboot".

Run apcera-install reboot to restart your cluster hosts.

For example, on a Mac you would run:

$ ./apcera-install reboot

This is equivalent to running the orchestrator-cli reboot command directly on the Orchestrator host machine. For more information, refer to Restarting Cluster Hosts.

Get component logs for troubleshooting

Generally you can use the apcera-install logs command to troubleshoot issues. However, if the logs/apcera-install.log file does not provide enough troubleshooting detail, you can pull the component logs or tail a component log directly on the host, as described below.

Pull the component logs

If you receive errors during deployment and the logs/apcera-install.log does not provide enough troubleshooting detail, you can pull the component logs by executing the apcera-install logs command.

For example, on a Mac you run the command:

$ ./apcera-install logs

System logs from all Apcera components are downloaded to the logs directory under the directory from which you run apcera-install.


If you are troubleshooting a deployment failure, the first log to check is the logs/orchestrator-1/chef-client-XXXXXXXX.log file. If the chef-client log indicates an error with a specific Apcera component, you can find the component log at logs/<machine_name>/continuum-<component_name>/current.

For example, the API Server component runs on the central hosts, so its log is located at logs/central-1/continuum-api-server/current.

Refer to the architecture diagram in the Installation workflow section.
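For example, assuming the logs were pulled as described above, you can scan them with standard shell tools (grep and less are ordinary Unix utilities, not part of apcera-install):

$ grep -i error logs/orchestrator-1/chef-client-*.log
$ less logs/central-1/continuum-api-server/current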

Tail a component log

If you need additional troubleshooting detail, you can SSH into a component host by executing the apcera-install ssh <role> command. Component logs are located at /var/log/continuum-<component_name>/current.

For example, if you want to tail the HTTP Router log:

  1. SSH to the router host: apcera-install ssh router
  2. Locate the HTTP Router log: cd /var/log/continuum-router
  3. Run the tail command: tail -f current
  4. Press Ctrl + C to quit
  5. Type exit to quit the SSH session
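Alternatively, if you configured direct SSH access (see Configuring SSH Access), you can fetch the last lines of the same log in one step from your workstation; fill in the address placeholder with your router's public IP:

$ ssh ops@<router_public_ip> tail -n 50 /var/log/continuum-router/current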

Also, see Troubleshooting Apcera Deployments as a reference.

Use Splunk to query the system logs

If your cluster is configured with Splunk, you can launch the Splunk dashboard to query all system logs collected from the cluster. The Splunk dashboard URL looks like: https://splunk-indexer.<sub-domain>.<domain>.

For example, https://splunk-indexer.hello-world.apcera-platform.io, where hello-world is your subdomain under the Apcera default domain (apcera-platform.io).

You can run a query against a specific app (e.g. search "Failed to execute SSH") using Splunk's search capability.


Also, see Troubleshooting Apcera Deployments as a reference.

Tear down the cluster

To tear down a cluster, use the apcera-install destroy command.

For example, on a Mac you would run:

$ ./apcera-install destroy

You are prompted to confirm.

  [ Apcera Install - Destroy ]
  Do you really want to destroy?
    Terraform will delete all your managed infrastructure.
    There is no undo. Only 'yes' will be accepted to confirm.

    Enter a value: yes

Enter yes to proceed with the cluster tear down. This command terminates and deletes all cloud infrastructure components created by the apcera-install apply command.

Use the apcera-install destroy --force command to destroy the cluster without the confirmation prompt.

Syntax:

-f, --force: don't prompt for confirmation