Upgrading to Apcera Release 3.0

This section provides instructions for upgrading an existing cluster to Apcera Platform release 3.0.0.

Supported Upgrade Path

To upgrade to Apcera Platform release 3.0.0, your cluster must be running the latest release of Apcera Platform 2.6 (version 2.6.3 as of this writing). No other upgrade path is supported, and you cannot downgrade once you have upgraded.

If you are not on the latest 2.6 release, refer to the 2.6 documentation for upgrade details.

Prepare for Upgrade

Before upgrading, it is recommended that you:

  1. Contact Apcera Support for additional upgrade guidance and planning.

  2. Read the 3.0.0 release notes and review what's new in this release.

    Apcera Platform release 3.0.0 is a forward-only release. Once you upgrade you cannot downgrade to a previous version.

  3. If you enabled the 2.6 beta release of local IPAM and subsequently created virtual networks, you must delete these networks, either before or after upgrading to release 3.0.0.

    After deletion and upgrade, you must recreate the networks; if you do not, jobs that were members of those networks may not function correctly.
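
    For example, if you manage virtual networks with APC, you can list and delete the affected networks, and recreate them after the upgrade. This is a sketch: the network name "my-net" is a placeholder, and the exact apc network subcommands available depend on your APC version.

     apc network list
     apc network delete my-net
     # after upgrading, recreate the network and rejoin the member jobs
     apc network create my-net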

Upgrade the orchestrator-cli Version

SSH into the Orchestrator host and run the following command, which updates the Orchestrator host OS and the orchestrator-cli tool:

apt-get update && apt-get dist-upgrade
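
Optionally, confirm that orchestrator-cli was updated by the dist-upgrade. This assumes orchestrator-cli is installed as an apt package under that name; adjust the package name if your installation differs:

apt-cache policy orchestrator-cli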

Update cluster.conf

Apcera release 3.0.0 adds new cluster components, including Vault (vault) and Consul (kv-store). To upgrade, you must add these components to your cluster configuration and redeploy the cluster. If you do not add the necessary components, the deploy fails with an error such as: "Error: failed to generate the dependency graph: machine with instance-manager tag cannot be deployed without a machine with vault tag."

In addition, if you want to use configurable virtual networks, you need to enable local IPAM.

Consul provides the HA backend for Vault, so even if you do not plan to migrate to store3, you must include kv-store in your deployment.

  1. Create a backup copy of your current cluster.conf file, if you have not already done so.
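
    For example (the backup filename here is arbitrary; any naming scheme works):

     cp cluster.conf cluster.conf.bak-2.6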

  2. Open the cluster.conf.erb file for editing.

  3. In the machines.central section, add the new components "vault" and "kv-store" to the central host.

    For example:

     machines: {
       ...
       central: {
         <%= terraform_output 'central-addresses'%>
         suitable_tags: [
           "component-database"
           "api-server"
           "job-manager"
           "router"
           "package-manager"
           "stagehand"
           "cluster-monitor"
           "health-manager"
           "metrics-manager"
           "nats-server"
           "events-server"
           "auth-server"
           "basic-auth-server"
           "google-auth-server"
           "vault"
           "kv-store"
         ]
       }
     }
    
  4. In the components section, add "vault" and "kv-store" and the count for each component. The count should be the same as the number of central hosts you are running. Note that the count must be an odd number (3 or 5) for full HA.

    For example:

     components: {
               monitoring: 1
       component-database: 3
               api-server: 3
              job-manager: 3
                   router: 3
          package-manager: 3
           health-manager: 3
          metrics-manager: 3
              nats-server: 3
            events-server: 3
          cluster-monitor: 1
              auth-server: 3
        basic-auth-server: 3
       google-auth-server: 3
        auditlog-database: 2
           gluster-server: 0
         instance-manager: 3
               tcp-router: 1
               ip-manager: 1
          graphite-server: 1
             redis-server: 1
               nfs-server: 1
                stagehand: 1
                    vault: 3
                 kv-store: 3
     }
    
  5. Enable local IPAM if you want to use configurable virtual networks.

     chef: {
       "continuum": {
         "enable_local_ipam": true,
       }
     }
    

    This step is optional. See Migrating to Configurable Networks for complete details.

    NOTE: If you previously enabled beta local IPAM, delete the virtual networks you created after enabling it, as described in Prepare for Upgrade above.

  6. Generate cluster.conf.

     erb cluster.conf.erb > cluster.conf
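
    As a quick sanity check, confirm that the new components appear in the generated file (plain grep, nothing Apcera-specific):

     grep -E 'vault|kv-store' cluster.conf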
    
  7. Copy cluster.conf to Orchestrator.

     scp cluster.conf orchestrator@x.x.x.x:~
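
    If you want to verify the copy, a plain SSH check is sufficient (x.x.x.x is the Orchestrator host address, as above):

     ssh orchestrator@x.x.x.x 'ls -l ~/cluster.conf'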
    

Upgrade Cluster Software to 3.0

  1. SSH to the Orchestrator host.

     ssh -A orchestrator@x.x.x.x
    
  2. Run the deploy (use the --dry flag first to perform a dry run).

    Latest promoted release:

     orchestrator-cli deploy --config cluster.conf \
     --update-latest-release [--dry]
    

    Specific promoted release (run orchestrator-cli releaseinfo to list available releases):

     orchestrator-cli deploy -c cluster.conf \
     --release 3.0.0 [--dry]
    

    Local release bundle that you copy to the Orchestrator host:

     scp release-3.0.0-xxxxxxx.tar.gz orchestrator@x.x.x.x:~
    
     ssh -A orchestrator@x.x.x.x
    
     orchestrator-cli deploy -c cluster.conf --release-bundle release-3.0.0-xxxxxxx.tar.gz [--dry]
    

    Remote release bundle:

     orchestrator-cli deploy -c cluster.conf \
     --release-bundle https://s3-us-west-2.amazonaws.com/test/release-3.0.0-xxxxxxx.tar.gz [--dry]
    

    Remote release base URL plus release name:

     orchestrator-cli deploy -c cluster.conf \
     --release-base-url https://s3.amazonaws.com/apcera-releases/latest --release 3.0.0-xxxxxxx [--dry]
    
  3. When prompted, enter and confirm the cluster passphrase.

    Alternatively, to use the default cluster passphrase, run the deploy command using the --nopass flag.

     orchestrator-cli deploy --config cluster.conf \
     --update-latest-release [--dry] --nopass
    
  4. Check cluster status and reboot cluster hosts (if necessary).

     orchestrator-cli status -c cluster.conf
    
     orchestrator-cli reboot -c cluster.conf
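
    After any reboot, re-run the status command until all components come back up (same command as above):

     orchestrator-cli status -c cluster.conf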
    

Complete Post-Upgrade Migration Tasks

Once you have successfully upgraded, you can perform additional migration tasks to bring your deployment current with the 3.0 release. Perform each migration separately, followed by a deploy; do not attempt to perform all migrations at once.
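
For example, after making the cluster.conf changes for a single migration, redeploy using the release you just installed (deploy flags as shown in the previous section):

orchestrator-cli deploy -c cluster.conf --release 3.0.0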

  1. Once you have upgraded, manually add the prompts you want to enforce in cluster.conf for the secrets you want to protect, such as database passwords.

    Refer to the prompt for secrets documentation for details.

  2. Optionally, migrate from Component Store 2 (Postgres) to Component Store 3 (Consul).

    See the store 3 migration documentation for details.