Installing Apcera on OpenStack
This section provides instructions for installing Apcera on OpenStack Liberty.
OpenStack Cluster Description
The example cluster installed on OpenStack as described here makes use of the following resources:
- 20 virtual machine hosts with NFS persistence, a remote package store, and 3 Instance Managers (IMs) that can be expanded to 10.
- 3 subnets
- 7 floating IP addresses (static IPs)
- 4 security groups
- 10 volumes (in addition to host root disks)
Optional expansions include:
- Add an auditlog-secondary host on subnet 2.
- Add HTTP routers on subnet 2 and subnet 3.
- Increase the IM count on each subnet from the default of 1 (for example, 3, 3, and 4 for a total of 10 IMs).
The associated OpenStack installation files deploy an Apcera Platform cluster (release 2.6.0 or later) with the following components:
Host Role | Count | Subnet | Flavor | Image | Volume (GB) | Security Group | Floating IP | Components |
---|---|---|---|---|---|---|---|---|
audit | 1 | 1 | large | base | 200 | default | no | auditlog-db |
central | 3 | 1, 2, 3 | medium | base | 100 | default | no | component-db, job-manager, package-manager, health-manager, cluster-monitor, metrics-manager, nats-server, events-server, auth_server, basic_auth_server, stagehand* |
gluster-nfs | 3 | 1, 2, 3 | medium | base | 250 | default | no | gluster-nfs |
graphite | 1 | 1 | medium | base | 200 | default | no | graphite |
router | 1 | 1 | medium | base | na | elb | yes | exposed http router |
instance-manager | 3 | 1, 2, 3 | im | base | 200 | default | yes | instance-manager |
ip-manager | 1 | 1 | small | base | na | dmz | no | ip-manager |
monitoring | 1 | 1 | medium | base | 80 | bastion | yes | monitoring-host, monitoring-database |
orchestrator | 1 | 1 | small | orch | na | bastion | yes | orchestrator |
redis | 1 | 1 | medium | base | 100 | default | no | redis |
riak | 3 | 1, 2, 3 | medium | base | 80 | default | no | riak |
tcp router | 1 | 1 | small | base | na | dmz | yes | tcp router |
See also
- List and description of Apcera cluster components.
- Sizing considerations for a minimum production deployment on OpenStack.
- Required Ports.
Installation Prerequisites
- Install and configure OpenStack Liberty, or access your hosted OpenStack Liberty installation.
The Apcera Platform installation files and instructions for OpenStack support the Liberty release.
Refer to OpenStack configuration tasks as necessary throughout these instructions.
- In OpenStack, create the following machine flavors:

Type | CPUs | RAM | Root Disk |
---|---|---|---|
apcera.small | 2 | 2 GB | 12 GB |
apcera.medium | 2 | 4 GB | 12 GB |
apcera.large | 4 | 8 GB | 12 GB |
apcera.IM | 16 | 32 GB | 12 GB |
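If you manage flavors from the command line, a sketch like the following creates the flavors in the table above (this assumes an admin-scoped `openstack` CLI; RAM is given in MB and disk in GB):

```
# Create the four Apcera flavors listed in the table above.
openstack flavor create --vcpus 2  --ram 2048  --disk 12 apcera.small
openstack flavor create --vcpus 2  --ram 4096  --disk 12 apcera.medium
openstack flavor create --vcpus 4  --ram 8192  --disk 12 apcera.large
openstack flavor create --vcpus 16 --ram 32768 --disk 12 apcera.IM
```

You can also create the flavors in the Horizon dashboard; only the names, CPU, RAM, and root disk values above matter.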
- Obtain the Apcera images and upload them to OpenStack.
Download the Orchestrator and Base Apcera images.
Upload both images to OpenStack.
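If you prefer the command line to the Horizon dashboard, the upload might look like the following sketch (the file names and the qcow2 disk format are assumptions; use whatever the downloaded Apcera image files actually are):

```
# Upload the Base and Orchestrator images. Note the UUIDs that are returned;
# main.tf needs them later. File names below are placeholders.
openstack image create --disk-format qcow2 --container-format bare \
  --file apcera-base.img apcera-base
openstack image create --disk-format qcow2 --container-format bare \
  --file apcera-orchestrator.img apcera-orchestrator
```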
- Create DNS records.
You will need to create at least two DNS records that map the cluster base_domain and Zabbix monitoring host to the public IP addresses for the HTTP router and for the Zabbix server. These records are populated after the hosts are created in OpenStack (described below), but you may want to register the domain and create the records now so you can populate them after the resources are provisioned.
- Create security keys and certificates.
You will need to provide an SSL key and certificate chain to use HTTPS for the router.
You will need to provide a public SSH key (the ssh-cert string) for remote access to cluster hosts using SSH.
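If you do not already have an SSH key pair for cluster access, a minimal sketch is shown below (the key path and comment are placeholders):

```
# Generate a key pair for SSH access to cluster hosts; the public key is the
# value you add to cluster.conf.erb later in these instructions.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/apcera_cluster -C "apcera cluster access"
cat ~/.ssh/apcera_cluster.pub
```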
- Install Terraform.
Apcera supports Terraform version 0.8.8 for installing Apcera on OpenStack Liberty.
To install Terraform, download v0.8.8, extract the contents of the ZIP file to a known directory and add it to your PATH.
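For example, on a Linux amd64 workstation the download and install might look like this (the release URL follows HashiCorp's standard layout; adjust the file name for your platform):

```
# Download Terraform 0.8.8, extract it, and add it to your PATH.
wget https://releases.hashicorp.com/terraform/0.8.8/terraform_0.8.8_linux_amd64.zip
unzip terraform_0.8.8_linux_amd64.zip -d "$HOME/bin"
export PATH="$HOME/bin:$PATH"
```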
To verify the Terraform installation, run the following command:
terraform version
Installation Steps
- Download the Apcera installation files for OpenStack and unzip them to a local directory.
- cd to the working directory where you extracted the installation files.
- Add your SSL key and cert chain for the HTTP router to the /CERTS directory.
For example:
/CERTS/your-cluster.key
/CERTS/your-cluster.crt
/CERTS/yourCA.pem
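For example, assuming your certificate files are in ~/certs (paths and file names are placeholders):

```
# Copy the router key, certificate, and CA chain into the CERTS directory
# under the working directory that ships with the installation files.
cp ~/certs/your-cluster.key ./CERTS/
cp ~/certs/your-cluster.crt ./CERTS/
cp ~/certs/yourCA.pem ./CERTS/
```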
- Populate terraform.tfvars with your OpenStack credentials and other values. Replace all values as described below:
openstack_auth_url = "Get from RC OS_AUTH_URL="
openstack_user_name = "Get from RC OS_USERNAME="
openstack_password = "Enter the OpenStack password for your project"
openstack_tenant_name = "Get from RC OS_TENANT_NAME="
openstack_region = "region-you-are-using"
To get these values, download the RC file for your project. See also populating OpenStack values.
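One way to read these values is to source the OpenStack RC file downloaded from the Horizon dashboard; the file name below is a placeholder, and sourcing the file prompts for your password:

```
# The RC file exports the OS_* environment variables referenced above.
source your-project-openrc.sh
echo "$OS_AUTH_URL"
echo "$OS_USERNAME"
echo "$OS_TENANT_NAME"
echo "$OS_REGION_NAME"
```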
- Populate main.tf. The main.tf Terraform file configures OpenStack resources. Update main.tf as follows (sections are commented "TODO" in the file):

Lines | Action |
---|---|
24, 25 | Add the UUIDs for the Base and Orchestrator images that you uploaded to OpenStack. |
99, 115, 125 | Bump up the number of IMs per subnet as needed (for example: 3, 3, 4). |
117 | Add auditlog-secondary as needed. |
118, 126 | Bump the number of HTTP routers (2 or 3) as needed. |

By default, main.tf reads the required Terraform files from the local /terraform-modules directory.
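To find the image UUIDs needed for lines 24 and 25, and to locate the sections you need to edit, commands like the following can help (this assumes the openstack CLI is configured for the same project):

```
# List uploaded images with their UUIDs (for lines 24 and 25 of main.tf),
# then find the sections of main.tf marked for editing.
openstack image list
grep -n "TODO" main.tf
```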
- Populate cluster.conf.erb. The cluster.conf.erb file is used to generate the cluster configuration file (cluster.conf). Update cluster.conf.erb as follows:

Lines | Action |
---|---|
10 - 18 | Update all values to match your cluster details. |
343 - 346 | Add the public SSH key for cluster host access via Orchestrator. |
357 - 359 | Configure basic auth user credentials and cluster admins. |
381 - 406 | Update the Zabbix passwords as necessary, or use the defaults. |

When it is time to update your cluster, modify cluster.conf.erb and regenerate cluster.conf as described below, rather than editing cluster.conf directly.
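As a quick way to review these sections before editing, you can print the referenced line ranges (the line numbers are those cited in the table above and may shift between releases):

```
# Show the cluster.conf.erb sections that need your values.
sed -n '10,18p'   cluster.conf.erb   # cluster details
sed -n '343,346p' cluster.conf.erb   # public SSH key for host access
sed -n '357,359p' cluster.conf.erb   # basic auth users and cluster admins
sed -n '381,406p' cluster.conf.erb   # Zabbix passwords
```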
- Run Terraform. Issue the following Terraform commands:
terraform get
terraform plan --module-depth=-1
The plan should add 57 resources, including host VMs, security groups, floating IPs, non-root disks (volumes), and subnets.
terraform apply
If you receive errors indicating that certain assets cannot be created, run the apply command again. OpenStack may have timing issues where it tries to use a resource whose creation has not finished (even if OpenStack reports that it is created).
- Create or update DNS records.
DNS records are required for the HTTP router and the monitoring host, using the external (public) IP addresses of these hosts.
DNS Entry | Description |
---|---|
base_domain | Public IP address of the HTTP router, or a list of IPs if multiple (get the value from terraform output router-external-address). |
*.base_domain | CNAME record to base_domain (alias or pointer, such as *.example.cluster.com). |
monitoring-host | Public IP address; cannot be under the base_domain entry (get the value from terraform output monitoring-external-address). |
For multiple routers, create a DNS record for each external IP address.
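For example, you can pull the addresses from Terraform and then confirm that the records resolve once they are created (example.cluster.com is a placeholder base_domain):

```
# Get the public IPs to put in the DNS records.
terraform output router-external-address
terraform output monitoring-external-address

# After creating the records, confirm they resolve.
dig +short example.cluster.com
dig +short anything.example.cluster.com   # exercises the wildcard CNAME
```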
- Generate cluster.conf:
erb cluster.conf.erb > cluster.conf
- Upload cluster.conf to Orchestrator.
Run the following command to get the Orchestrator IP address:
terraform output orchestrator-external-address
Securely copy cluster.conf to Orchestrator. For example:
scp cluster.conf orchestrator@147.47.247.74:~
Credentials: user orchestrator, password orchestrator.
- Connect to Orchestrator.
Connect to Orchestrator remotely using SSH. For example:
ssh orchestrator@147.47.247.74
Credentials: user orchestrator, password orchestrator.
Successful connection is indicated by the login prompt orchestrator@vm:~$.
- Update and initialize Orchestrator.
Orchestrator version 1.0.0 is required to deploy Apcera version 2.6.
To update Orchestrator:
sudo apt-get update
sudo apt-get install orchestrator-cli -y
To verify the Orchestrator update:
orchestrator-cli version
To initialize Orchestrator (required on initial deployment only):
orchestrator-cli init
- Deploy the cluster.
Deploy the latest promoted release of the Apcera Platform.
Perform a dry run to verify the syntax of cluster.conf:
orchestrator-cli deploy -c cluster.conf --update-latest-release --dry
Deploy the cluster:
orchestrator-cli deploy -c cluster.conf --update-latest-release
Successful deployment is indicated by the message "Done with cluster updates."
If deployment fails, run orchestrator-cli deploy again. If necessary, troubleshoot the deployment.
- Reboot the cluster.
Because the deployment installs a new kernel, a full cluster reboot is required.
orchestrator-cli reboot -c cluster.conf
- Verify and bootstrap the deployment.
Complete post-installation tasks, including:
- Log in to the web console.
- Download and install APC.
- Target the cluster and log in using APC (see the example after this list).
- Install Apcera packages.
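As a sketch of targeting and logging in with APC, the commands might look like the following (the base_domain and credentials are placeholders; use the values you configured in cluster.conf.erb):

```
# Point APC at the cluster's HTTP router, then log in with a configured user.
apc target https://example.cluster.com
apc login
```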