Installing Apcera on OpenStack
This section provides instructions for installing Apcera on OpenStack Liberty.
OpenStack Cluster Description
The example cluster installed on OpenStack as described here makes use of the following resources:
- 20 virtual machine hosts with NFS persistence, remote package store, and 3 IMs that can be expanded to 10.
- 3 subnets
- 7 floating IP addresses (static IPs)
- 4 security groups
- 10 volumes (in addition to host root disks)
Optional expansions include:
- Add auditlog-secondary on subnet 2.
- Add http router on subnet 2, subnet 3.
- The default IM count is 1 per subnet; you can increase the IM count on each subnet (for example 3, 3, and 4 for a total of 10 IMs).
The associated OpenStack installation files deploy an Apcera Platform cluster (release 2.6.0 or later) with the following components:
| Host Role | Count | Subnet | Flavor | Image | Volume | Security Group | Floating IP | Components |
|---|---|---|---|---|---|---|---|---|
| central | 3 | 1, 2, 3 | medium | base | 100 | default | no | component-db, job-manager, package-manager, health-manager, cluster-monitor, metrics-manager, nats-server, events-server, auth_server, basic_auth_server, stagehand* |
| gluster-nfs | 3 | 1, 2, 3 | medium | base | 250 | default | no | gluster-nfs |
| router | 1 | 1 | medium | base | n/a | elb | yes | exposed HTTP router |
| instance-manager | 3 | 1, 2, 3 | im | base | 200 | default | yes | instance-manager |
| riak | 3 | 1, 2, 3 | medium | base | 80 | default | no | riak |
| tcp router | 1 | 1 | small | base | n/a | dmz | yes | tcp router |
Before installing, review the following:

- List and description of Apcera cluster components.
- Sizing considerations for a minimum production deployment on OpenStack.
- Required ports.
Install and configure OpenStack Liberty, or access your hosted OpenStack Liberty installation.
The Apcera Platform installation files and instructions for OpenStack support the Liberty release.
Refer to OpenStack configuration tasks as necessary throughout these instructions.
In OpenStack, create the following machine flavors:

| Type | CPUs | RAM | Root Disk |
|---|---|---|---|
| apcera.small | 2 | 2 GB | 12 GB |
| apcera.medium | 2 | 4 GB | 12 GB |
| apcera.large | 4 | 8 GB | 12 GB |
| apcera.IM | 16 | 32 GB | 12 GB |
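The flavors above can be created with the OpenStack CLI. The following is a sketch only, assuming the `python-openstackclient` tooling and an authenticated session; the flavor specs come from the table (note that the CLI takes RAM in MB):

```shell
# Sketch: create the four Apcera flavors listed above.
openstack flavor create --vcpus 2  --ram 2048  --disk 12 apcera.small
openstack flavor create --vcpus 2  --ram 4096  --disk 12 apcera.medium
openstack flavor create --vcpus 4  --ram 8192  --disk 12 apcera.large
openstack flavor create --vcpus 16 --ram 32768 --disk 12 apcera.IM
```

These commands require a live OpenStack endpoint and credentials in your environment (for example, a sourced RC file).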
Obtain the Apcera Base and Orchestrator images and upload both to OpenStack.
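As a sketch, the upload can be done with the OpenStack CLI. The image file names and disk format below are placeholders; use the values Apcera supplies with the images:

```shell
# Sketch: upload the Base and Orchestrator images (names are placeholders).
openstack image create --disk-format qcow2 --container-format bare \
  --file apcera-base.img apcera-base
openstack image create --disk-format qcow2 --container-format bare \
  --file apcera-orchestrator.img apcera-orchestrator
# Note the UUID printed for each image; main.tf needs them later.
```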
Create DNS records.
You will need to create at least two DNS records that map the cluster base_domain and Zabbix monitoring host to the public IP addresses for the HTTP router and for the Zabbix server. These records are populated after the hosts are created in OpenStack (described below), but you may want to register the domain and create the records now so you can populate them after the resources are provisioned.
Create security keys and certificates.
You will need to provide an SSL key and certs chain to use HTTPS for the router.
You will need to provide a public SSH key string for remote access to cluster hosts using SSH.
Apcera supports Terraform version 0.8.8 for installing Apcera on OpenStack Liberty.
To verify Terraform installation run the following command:
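The verification command is not shown in this excerpt; the standard check, assuming `terraform` is on your PATH, is:

```shell
terraform version
# Confirm the output reports Terraform v0.8.8.
```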
Download the Apcera installation files for OpenStack and unzip them to a local directory.
Change (cd) to the working directory where you extracted the installation files.
Add your SSL key and cert chain for the HTTP router to the /CERTS directory.
```
/CERTS/your-cluster.key
/CERTS/your-cluster.crt
/CERTS/yourCA.pem
```
Edit `terraform.tfvars` with your OpenStack credentials and other values.
Replace all values as described below:
```
openstack_auth_url    = "Get from RC OS_AUTH_URL="
openstack_user_name   = "Get from RC OS_USERNAME="
openstack_password    = "Enter the OpenStack password for your project"
openstack_tenant_name = "Get from RC OS_TENANT_NAME="
openstack_region      = "region-you-are-using"
```
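A filled-in `terraform.tfvars` might look like the following; every value here is a placeholder, taken from your OpenStack RC file and project:

```
openstack_auth_url    = "https://keystone.example.com:5000/v2.0"
openstack_user_name   = "apcera-deploy"
openstack_password    = "your-project-password"
openstack_tenant_name = "apcera-project"
openstack_region      = "RegionOne"
```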
The `main.tf` Terraform file configures OpenStack resources. Edit `main.tf` as follows (sections are commented "TODO" in the file):
| Lines | Action |
|---|---|
| 24, 25 | Add the UUIDs for the Base and Orchestrator images that you uploaded to OpenStack. |
| 99, 115, 125 | Bump up the number of IMs per subnet as needed (for example: 3, 3, 4). |
| 117 | Add auditlog-secondary as needed. |
| 118, 126 | Bump the number of HTTP routers (2 or 3) as needed. |
By default main.tf reads the required Terraform files from the local /terraform-modules directory.
The `cluster.conf.erb` file is used to generate the cluster configuration file (`cluster.conf`). Edit `cluster.conf.erb` as follows:
| Lines | Action |
|---|---|
| 10 - 18 | Update all values to match your cluster details. |
| 343 - 346 | Add the public SSH key for cluster host access via Orchestrator. |
| 357 - 359 | Configure basic auth user credentials and cluster admins. |
| 381 - 406 | Update the Zabbix passwords as necessary, or use the defaults. |
When it is time to update your cluster, you should modify cluster.conf.erb and regenerate cluster.conf as described below, rather than editing cluster.conf directly.
Issue the following Terraform commands:
```
terraform get
terraform plan --module-depth=-1
```
The plan should add 57 resources, including host VMs, security groups, floating IPs, non-root disks (volumes), and subnets.
Apply the plan with `terraform apply`. If you receive errors indicating certain assets cannot be created, run the `apply` command again. OpenStack can have timing issues where it tries to use a resource whose creation has not finished (even if OpenStack reports that it is created).
Create or update DNS records.
DNS records are required for the HTTP router and monitoring host using the external (public) IP addresses for these hosts.
| DNS Entry | Description |
|---|---|
| base_domain | Public IP address of the HTTP router, or a list of IPs if multiple (get the value from `terraform output router-external-address`). |
| Alias | CNAME record pointing to the base_domain entry (alias or pointer). |
| Monitoring (Zabbix) host | Public IP address (cannot be under the base_domain entry; get the value from `terraform output monitoring-external-address`). |
For multiple routers, create a DNS record for each external IP address.
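As a sketch, BIND-style zone entries for the records above might look like this; all names and addresses are placeholders:

```
cluster.example.com.    IN A      203.0.113.10          ; base_domain -> HTTP router
*.cluster.example.com.  IN CNAME  cluster.example.com.  ; alias to base_domain
zabbix.example.com.     IN A      203.0.113.20          ; monitoring host
```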
Generate the cluster configuration file:

```
erb cluster.conf.erb > cluster.conf
```
Run the following command to get the Orchestrator IP address:
```
terraform output orchestrator-external-address
```
Copy `cluster.conf` to Orchestrator. For example:

```
scp cluster.conf firstname.lastname@example.org:~
```
Connect to Orchestrator.
Connect to Orchestrator remotely using SSH. For example:
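The example command is not shown in this excerpt; a typical invocation (user name and address are placeholders, with the address taken from `terraform output orchestrator-external-address`) is:

```shell
ssh firstname.lastname@203.0.113.30
```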
A successful connection is indicated by the login prompt.
Update and initialize Orchestrator.
Orchestrator version 1.0.0 is required to deploy Apcera version 2.6.
To update Orchestrator:
```
sudo apt-get update
sudo apt-get install orchestrator-cli -y
```
To verify Orchestrator update:
To initialize Orchestrator (required on initial deployment only):
Deploy the cluster.
Deploy the latest promoted release of the Apcera Platform.
Perform a dry run to verify the syntax of `cluster.conf`:

```
orchestrator-cli deploy -c cluster.conf --update-latest-release --dry
```
Deploy the cluster:
```
orchestrator-cli deploy -c cluster.conf --update-latest-release
```
Successful deployment is indicated by the message "Done with cluster updates."
If the deployment fails, troubleshoot it as necessary.
Reboot the cluster.
Because there is a new kernel, a full cluster reboot is required:

```
orchestrator-cli reboot -c cluster.conf
```
Verify and bootstrap the deployment.
Complete post-installation tasks, including:
- Log in to the web console.
- Download and install APC.
- Target the cluster and log in using APC.
- Install Apcera packages.