What's New in the Apcera Platform

This section summarizes what is new in Apcera Platform releases. Refer to the release notes for details on each release.

Release 3.2.15

The Apcera Platform 3.2.15 maintenance release includes several defect corrections.

Release 3.2.14

The Apcera Platform 3.2.14 maintenance release includes several defect corrections.

Release 3.2.13

The Apcera Platform 3.2.13 maintenance release includes several defect corrections.

Release 3.2.12

The Apcera Platform 3.2.12 maintenance release includes several defect corrections.

Release 3.2.11

The Apcera Platform 3.2.11 maintenance release includes several defect corrections.

Release 3.2.10

The Apcera Platform 3.2.10 maintenance release includes several defect corrections.

Release 3.2.9

Apcera Platform 3.2.9 was not released publicly.

Release 3.2.8

The Apcera Platform 3.2.8 maintenance release includes several defect corrections.

Release 3.2.7

Apcera Platform 3.2.7 was not released publicly.

Release 3.2.6

Apcera Platform 3.2.6 was not released publicly.

Release 3.2.5

The Apcera Platform 3.2.5 maintenance release includes several defect corrections.

Release 3.2.4

The Apcera Platform 3.2.4 hot fix release corrects an issue with DNS lookup failures on virtual networks that was introduced in the 3.2.3 release.

Release 3.2.3

The Apcera Platform 3.2.3 maintenance release includes defect corrections that resolve slow fetching of the network list from the web console and APC, jobs being incorrectly marked as errored, and Zabbix and GlusterFS log files filling up cluster disk space. See the release notes for details.

Release 3.2.2

The Apcera Platform 3.2.2 maintenance release corrects the Job Manager cache default settings and makes the cache settings configurable. It also updates the Linux kernel to provide the latest CVE fixes mitigating the Meltdown vulnerability. Miscellaneous documentation updates are also included in this maintenance release.

Release 3.2.1

The Apcera Platform 3.2.1 maintenance release includes a defect correction that allows the Apcera Platform to be deployed on AWS in regions where AWS S3 Signature version 4 is required.

Release 3.2

Apcera Platform 3.2.0 is an LTS release with the following features and enhancements:

Job Rolling Restarts

New in this release of the Apcera Platform, jobs can be configured for a rolling restart of their instances. Previously, restarting a job (for example, to deploy a new version or to apply changed properties) stopped all running instances and then restarted the job with the new version and/or changed properties. That is still the default behavior, but jobs can now be configured for rolling restarts, in which new instances (with the new version and/or changed properties) are started while existing instances are stopped. This restarts a job without loss of service or responsiveness.
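The rolling strategy amounts to a replace-then-retire loop: capacity never drops below the original instance count. The following sketch illustrates the idea only; `start_instance`, `stop_instance`, and `is_healthy` are hypothetical stand-ins, not Apcera APIs.

```python
def rolling_restart(old_instances, start_instance, stop_instance, is_healthy):
    """Replace each old instance with a new one before stopping it,
    so at least the original number of instances keeps serving."""
    new_instances = []
    for old in old_instances:
        new = start_instance()           # launch with the new version/properties
        if not is_healthy(new):          # never reduce capacity on a failed start
            stop_instance(new)
            raise RuntimeError("rollout aborted: new instance unhealthy")
        new_instances.append(new)
        stop_instance(old)               # retire the old instance only now
    return new_instances
```

Contrast this with the default behavior, which stops every old instance first and only then starts the new ones.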

UDP Routing

The Apcera Platform 3.2 release supports User Datagram Protocol (UDP) routing.

Job Health Monitoring Enhancements

In this release you can configure ongoing TCP health checks on job instances that have exposed non-optional ports, to ensure that a TCP service container is working as expected during its lifetime. If the TCP service in the container fails, the instances are automatically terminated and restarted.
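At its core, an ongoing TCP health check simply verifies that the exposed port still accepts connections. A minimal, platform-agnostic sketch of such a probe (not the platform's actual monitor code):

```python
import socket

def tcp_port_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds
    within the timeout; False on refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A supervisor would run a probe like this periodically and restart any instance whose probe fails.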

Release 3.0

Apcera Platform release 3.0.0 is an LTS release with the following features and enhancements:

If you are upgrading to release 3.0, review the upgrade documentation and contact Apcera Support.

HA Component Store

The component store is used by the Job Manager, Package Manager, and Auth (Policy) Server to store job metadata. Previously the component store was a PostgreSQL database with cold-backup. In release 3.0 the component store is a key-value store (Consul) that runs on the central host across an odd number of nodes (3, 5, etc.).

See Managing the Component Store for details.

All central components are now HA.
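The odd node count is a consequence of how a Raft-based store like Consul works: writes need agreement from a majority (quorum) of nodes, so the cluster stays available only while a majority survives. A quick sketch of the arithmetic (general Raft/Consul behavior, not Apcera-specific code):

```python
def quorum(n):
    """Smallest majority of an n-node cluster (Raft quorum size)."""
    return n // 2 + 1

def tolerated_failures(n):
    """How many nodes can fail while a majority still remains."""
    return n - quorum(n)

for n in (3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

Note that a 4-node cluster tolerates no more failures than a 3-node one (the quorum rises to 3), which is why odd counts such as 3 or 5 are recommended.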

Secure Secret Store

The secret store increases the security of the platform by encrypting and managing application secrets, including certificates, private keys, and passwords. The secret store also secures keys used by cluster components, and stores some cluster secrets such as third-party identity provider credentials.

The secret store runs on the central host and is backed by Consul for HA. In support of the secret store, Orchestrator version 2.0 requires a cluster passphrase to deploy and manage a cluster.

See Securing Cluster Secrets for more information.

Cert and Key Management

New certificate management features let you install and manage SSL/TLS certificates and private keys for use with routes, giving you the ability to:

  • Upload certificates and keys to the cluster and manage them
  • Store secrets securely in the secret store
  • Apply certificates and keys to routes for secure custom endpoints

See Managing SSL/TLS Certs for Custom Routes and Secret Store commands for details.

Route Management

Routes are now first-class job objects in the platform. Previously routes were job properties. You can now perform policy-controlled CRUD operations on routes.

In addition, sticky sessions can now be configured using APC instead of cluster.conf, and support is expanded to include all apps, not just Java.

See Working with Routes for details.

Encryption at Rest

You can now encrypt job application data that is local to the container (ephemeral) or persisted remotely (NFS or SMB). Encryption at rest uses standards-based encryption, LUKS (whole disk encryption) and EncFS (file-level encryption), respectively. Enablement is controlled by policy.

See the encryption at rest documentation for details.

OpenAPI Support

The Apcera REST API is versioned for all endpoints, and the version is incremented to v2. In addition, both versions of the API are captured in an OpenAPI (Swagger) specification that is available for download, and from which the API endpoint reference documentation is generated.

The endpoint version is implemented as an HTTP header parameter; it is no longer part of the URL. Existing /v1 endpoints continue to work as is, but you are encouraged to update your clients to use the header method.
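The difference between the two styles can be illustrated with a small request builder. The header name `X-Api-Version` and the `/jobs` path below are placeholders for illustration only; they are not taken from Apcera documentation.

```python
def build_request(path, version=2, use_header=True):
    """Build a (method, path, headers) triple for a versioned API call.

    use_header=True  -> version travels in an HTTP header (new style)
    use_header=False -> version embedded in the URL path (legacy /v1 style)
    """
    if use_header:
        return ("GET", path, {"X-Api-Version": str(version)})
    return ("GET", f"/v{version}{path}", {})
```

Header-based versioning keeps URLs stable across API versions, which is why clients are encouraged to migrate.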

Job Autoscaling

Job autoscaling lets you automatically scale job instances up or down based on CPU, request load, request latency, and custom metrics. The web console provides a complete interface for setting autoscale parameters.

See Auto-scaling Job Instances for details.
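The core of metric-driven autoscaling can be sketched as a proportional rule: scale the instance count by the ratio of the observed metric to its target, clamped to configured bounds. This is a generic pattern used by many autoscalers, not Apcera's documented algorithm.

```python
import math

def desired_instances(current, metric, target, min_n=1, max_n=20):
    """Scale the instance count by observed/target, clamped to [min_n, max_n].

    current: instances running now
    metric:  observed value (e.g. average CPU %) across those instances
    target:  the per-instance value the autoscaler tries to hold
    """
    wanted = math.ceil(current * metric / target)
    return max(min_n, min(max_n, wanted))
```

For example, 4 instances averaging 90% CPU against a 60% target scale up to 6; the same 4 instances averaging 30% scale down to 2.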

Configurable Virtual Networks

With configurable virtual networks you can define custom networks and dramatically scale the number of networks in your cluster. New virtual networks provide subnet pools that run on isolated Layer 2 Ethernet segments, and allow for configurable use of the IP address space, improving isolation and performance.

See Using Configurable Networks for details.
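A subnet pool implies carving per-network subnets out of a larger address block. Python's standard `ipaddress` module can illustrate the idea; the `10.0.0.0/16` pool and `/24` subnet size are example values, not Apcera defaults.

```python
import ipaddress

# Carve /24 subnets out of a /16 pool: each virtual network gets its own
# isolated slice of the address space.
pool = ipaddress.ip_network("10.0.0.0/16")
subnets = pool.subnets(new_prefix=24)    # generator yielding 256 /24 networks

first = next(subnets)
second = next(subnets)
print(first, second)    # 10.0.0.0/24 10.0.1.0/24
```

Configurable use of the address space means choices like the pool size and per-network prefix above become tunable rather than fixed.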

Storage Improvements

The software versions used for HA NFS (APCFS, based on Gluster and Ganesha) are updated to more current versions with upgrade support. In addition, Riak is replaced by native Gluster as the package store. If you are running Riak, you are encouraged to migrate to Gluster after upgrading.

See Migrating from Riak to Gluster for details.

Stock OS Support for Ubuntu

You can install the Apcera Platform on Ubuntu 14.04.5 (Trusty) using the stock OS. This eliminates the dependency on importing OS images and provides a unified approach to environments, including bare metal.

See Installing Apcera Platform for your provider.