Configuring SSH Access
This document describes SSH requirements for deploying the Apcera Platform Enterprise Edition in production, and provides instructions for generating SSH keys to access and manage cluster hosts.
- SSH requirements
- Generate SSH key pair
- Connect to Orchestrator Using SSH
- Enable SSH Access to Cluster Hosts
To deploy a cluster, you need to provide an SSH key pair for admin access to the:
- Orchestrator host
- All other cluster hosts
The Orchestrator host is a Linux VM instance created by the infrastructure template. To deploy the cluster, you connect to the Orchestrator instance using SSH and run the orchestrator-cli tool.
To connect to cluster hosts, you must add your public SSH key to the cluster.conf file and deploy the cluster. You can then connect to cluster hosts from the Orchestrator host.
Generate SSH key pair
1) Create a local SSH key pair.
On a Unix machine, run the ssh-keygen command. You will be prompted to enter a passphrase, among other options such as the key name, file path, and key type (RSA or DSA).
This command generates a matching private and public key pair (for example, id_rsa and id_rsa.pub).
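For example, a non-interactive invocation might look like the following sketch (the key size, comment, and file path are illustrative; run plain `ssh-keygen` to be prompted for each option, including the passphrase):

```shell
# Generate a 4096-bit RSA key pair (type, size, comment, and path are examples).
# Drop -N "" to be prompted for a passphrase, as described above.
KEYFILE="$(mktemp -d)/id_rsa"   # example path; in practice use ~/.ssh/id_rsa
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f "$KEYFILE" -N ""
```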
2) Add the SSH key to your local SSH agent.
You will be prompted to enter the passphrase you created in Step 1.
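The add-to-agent step can be sketched as follows (the key path is an example; use the path you chose in Step 1):

```shell
# Start an agent if one is not already running, then add the private key
# ($HOME/.ssh/id_rsa is an example path).
eval "$(ssh-agent -s)" > /dev/null
ssh-add "$HOME/.ssh/id_rsa"
```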
3) Verify that you added your SSH key to your local SSH agent.
For example, you should see a result similar to the following:
ssh-add -l
4096 32:14:63:00:80:22:ec:0f:6c:ac:97:f8:78:8e:9f:1f /Users/bobjohnson/.ssh/id_rsa (RSA)
Connect to Orchestrator Using SSH
4) SSH to the Orchestrator host.
Obtain the public IP address of the Orchestrator host and SSH to it as the admin user, forwarding your SSH agent with the -A option:
ssh -A email@example.com
Example session:
ssh -A firstname.lastname@example.org
Last login: Fri Oct 28 18:17:29 2016 from 192.168.46.1
ubuntu@vm:~$
5) To verify Orchestrator connectivity, execute the Orchestrator CLI.
If you cannot connect, exit the Orchestrator host, return to your local host, remove the key pair, and repeat the steps above.
Enable SSH Access to Cluster Hosts
6) Add the public SSH key to the cluster.conf file.
See Configuring SSH Access for guidance.
See also the installation instructions for your platform.
7) Deploy the cluster.
- Copy the cluster.conf file to Orchestrator using SCP.
- Connect to the Orchestrator host via SSH.
- Perform a dry run
- Run a deploy.
orchestrator-cli deploy --dry-run -c cluster.conf --update-latest-release
orchestrator-cli deploy -c cluster.conf --update-latest-release
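The copy step above can be sketched as follows (the user name and host address are placeholders for your admin user and the Orchestrator's public IP):

```shell
# Copy the cluster configuration to the Orchestrator host over SCP.
# "user" and "orchestrator-host" are placeholders; substitute your own.
scp cluster.conf user@orchestrator-host:~/
```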
8) Verify SSH access to cluster hosts.
- Run the orchestrator-cli ssh command.
- This Orchestrator command outputs the list of cluster hosts you can connect to.
- To connect to one of the cluster hosts, enter the number beside the host name in the Orchestrator output.
orchestrator-cli ssh
Starting up database... done
Multiple nodes matched your query, please select which:
1) IP: 192.168.46.134, Name: bd027361, Tags: monitored,health-manager,nfs-server,tcp-router,auditlog-database,ip-manager,package-manager,stagehand,metrics-manager,component-database,router,auth-server,job-manager,redis-server,nats-server,cluster-monitor,api-server,events-server,component-database-master,auditlog-database-master,basic-auth-server,app-auth-server,tcp-router-nginx
2) IP: 192.168.46.136, Name: 3029a939, Tags: monitored,graphite-server,instance-manager,statsd-server
Pick a host :