Release Notes

Oct 26, 2018 (3.2.2)

  • Cluster changes
    • Fixed a regression where the errored_state_window was not being honored for flapping jobs, so such a job would never reach an errored state.

    • Increased the Job Manager cache size (for the Consul store), reducing contention that had been impacting the download of Docker images. The cache sizes for jobs and mapped FQNs (Providers, Services, Bindings) are now configurable through cluster.conf (a hypothetical example follows this list).

    • Updated the Apcera Platform Linux kernel to release 4.4.0-135, which includes fixes for CVE-2018-12233, CVE-2018-13094, and CVE-2018-13405, as described at https://www.ubuntuupdates.org/package/canonical_kernel_team/xenial/main/base/linux-source-4.4.0
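
      Regarding the cluster.conf cache settings mentioned above: these notes do not name the actual keys, so the snippet below is purely a hypothetical sketch of what such a stanza might look like (key names and nesting are assumptions, not the documented schema).

        chef: {
          "continuum": {
            "job_manager": {
              # Hypothetical keys -- check the cluster.conf documentation for the real names.
              "job_cache_size": 10000,
              "fqn_cache_size": 10000
            }
          }
        }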

  • Chef changes
    • See note above in the cluster section on updating the Linux kernel to release 4.4.0-135.

Aug 24, 2018 (3.2.1)

  • Cluster changes
    • NOTE: The 3.2.1 release has not been tested on OpenStack.

    • Corrected a defect so that clusters can be deployed in AWS regions that require AWS S3 Signature Version 4 for authentication.

  • Chef changes
    • No changes.

Mar 29, 2018 (3.2.0)

  • Upgrade Notes
    • Apcera Platform release 3.2.0 is an LTS release with significant platform changes. Before upgrading to this release, be sure to read the upgrade instructions.

  • Cluster changes
    • DEPRECATION NOTICE: The use of Gluster to provide cluster-configured NAS for user applications running in the cluster is deprecated in the Apcera Platform 3.2 release. For user applications requiring HA NAS in a cluster, Apcera's recommended configuration is to manage external NFS systems (for example, AWS EFS) and configure an NFS provider for jobs running in the cluster. (The use of Gluster in a cluster for package storage is unaffected by this announcement and is still supported.)

    • Job Rolling Restarts are now supported as an alternative job-update mechanism that supports updating jobs with no downtime or loss in level of service (see the rolling-restart sketch after this list).

    • Job routes now support UDP.

    • Enhanced job health monitoring is now available via a TCP-based "liveness" check for jobs with non-optional ports. The interval between checks is configurable via cluster.conf settings (see the liveness-check sketch after this list).

    • Fixed an Apcera installation error on the Azure platform (raw Terraform install).

    • Fixed an issue with the Terraform bundle for different providers.

    • Updated the API/data model for the new UDP port type and route, including support for a separate UDP router component.

    • Fixed a bug that degraded application performance when writing to STDOUT.

    • Fixed an events server performance issue that could overload the central host CPU and DoS the cluster.

    • Fixed an issue where active events server clients could cause a restart to time out and a deploy to fail.

    • Fixed an issue in v3.0 virtual networks with stale discovery addresses.

    • Fixed an issue where PM Gluster client log file names disrupted logrotate.

    • Apache packages are now configured to log the original client IP.

    • Improved the values added to job syslog output. When log forwarding is initialized, a 'marker' log message is now sent that includes the FQN of the job whose logs are being forwarded.

    • Improved netfilter logs.
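
      As referenced in the rolling-restart note above, the mechanism boils down to updating a job one instance at a time and waiting for each replacement to become healthy before continuing, so the job never loses capacity. The Go sketch below is a conceptual illustration only, not Apcera's implementation; restartInstance and waitHealthy are hypothetical stand-ins for platform operations.

        // Conceptual sketch of a rolling restart; not Apcera's implementation.
        package main

        import (
            "fmt"
            "time"
        )

        // rollingRestart restarts instances one at a time, waiting for each to
        // report healthy before moving on, so service is never interrupted.
        func rollingRestart(instances []string,
            restartInstance func(string) error,
            waitHealthy func(string, time.Duration) error) error {
            for _, id := range instances {
                if err := restartInstance(id); err != nil {
                    return fmt.Errorf("restart %s: %v", id, err)
                }
                if err := waitHealthy(id, 2*time.Minute); err != nil {
                    return fmt.Errorf("instance %s did not become healthy: %v", id, err)
                }
            }
            return nil
        }

        func main() {
            // Hypothetical no-op stand-ins, just to make the sketch runnable.
            restart := func(id string) error { fmt.Println("restarting", id); return nil }
            healthy := func(id string, _ time.Duration) error { return nil }
            if err := rollingRestart([]string{"i-1", "i-2", "i-3"}, restart, healthy); err != nil {
                fmt.Println("rolling restart failed:", err)
            }
        }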
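
      The TCP "liveness" check noted above amounts to periodically opening a connection to a job port and treating a successful connect as healthy. A minimal illustration in Go follows; the address and interval are made up (in the platform the interval comes from cluster.conf), and this is not the component's actual code.

        // Minimal illustration of a periodic TCP liveness probe.
        package main

        import (
            "log"
            "net"
            "time"
        )

        func main() {
            addr := "10.0.0.12:8080"     // hypothetical instance address and port
            interval := 30 * time.Second // hypothetical check interval
            for {
                conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
                if err != nil {
                    log.Printf("liveness check failed for %s: %v", addr, err)
                } else {
                    conn.Close()
                    log.Printf("liveness check passed for %s", addr)
                }
                time.Sleep(interval)
            }
        }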

  • Chef changes
    • Added a sanity check in Chef to ensure that remote_subnets entries are valid CIDRs (an illustrative check follows this list).

    • Implemented changes in Chef to support the UDP router as a separate component.

    • Added Chef support for CEP-MNO default route.

    • Added Chef support for the TCP liveness probe variable.
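
      The remote_subnets sanity check above is implemented in Chef; the Go snippet below is only an illustration of the validation being applied (example values, not real cluster configuration).

        // Illustration only: flag remote_subnets entries that are not valid CIDRs.
        package main

        import (
            "fmt"
            "net"
        )

        func main() {
            subnets := []string{"10.10.0.0/16", "192.168.1.0/24", "not-a-cidr"} // example values
            for _, s := range subnets {
                if _, _, err := net.ParseCIDR(s); err != nil {
                    fmt.Printf("invalid remote_subnets entry %q: %v\n", s, err)
                } else {
                    fmt.Printf("%q is a valid CIDR\n", s)
                }
            }
        }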

Mar 01, 2018 (3.0.3)

  • Cluster changes
    • NOTE: The 3.0.3 release has not been tested on OpenStack.

    • Updated the kernel to the OS vendor's latest release, which addresses the Meltdown vulnerability.

    • Improved timeout and retry logic when checking AWS S3 for package resources.

  • Chef changes
    • No changes.

Oct 23, 2017 (3.0.1)

  • Cluster changes
    • NOTE: The 3.0.1 release has not been tested on OpenStack.

    • Fixed issue in virtual networks where DNS entries for a job's discovery address would become stale. This would occur if a member job in a virtual network was deleted, recreated, and re-joined to the same virtual network.

    • Fixed an issue where PPIDs were incorrectly parsed from /proc/<pid>/stat.

    • Fixed a policy error that occurred when joining a job to a virtual network under some policy configurations.

  • Chef changes
    • No changes.

Sep 22, 2017 (3.0.0)

  • Upgrade Notes
    • Apcera Platform release 3.0.0 is an LTS release with significant platform changes. Before upgrading to this release, be sure to read the upgrade instructions.

    • The Apcera-provided Terraform modules have been updated, including the retirement of previous-generation instance types in favor of new ones (for example, replacing the M3 type with T2 for AWS). Note that these will be destructive changes if you download and use the updated modules to perform the upgrade. The recommendation is to upgrade using your existing Terraform modules and then migrate to the new instance types over time.

    • If you previously enabled local IPAM (beta) for your 2.6.x installation, you will need to disable local IPAM (by commenting it out) and revert to global IPAM before upgrading to release 3.0.

    • Apcera Platform release 3.0.0 features a new store component that improves availability. If you want to migrate from Store 2 to Store 3, you should upgrade to 3.0.0 first, then migrate at a later time.

  • Cluster changes
    • Added container log truncation, which prevents logs from growing larger than 10MB.

    • Job Autoscaling is now included as part of the platform.

    • Added subnet pools for virtual networks.

    • Fixed an issue where the JM would incorrectly state a job update contained no changes when certain environment variable changes were made.

    • Fixed an issue where soft negative scheduling tags were not being applied correctly.

    • Added new OvS driver for virtual networks.

    • Integrated with HashiCorp Vault, backed by Consul, for secure storage of cluster secrets. This first phase of integration stores component keys, database passwords, and (optionally) external auth server connection credentials.

    • The HTTP Router now returns empty HTML pages on errors; previously it served Apcera-branded pages.

    • Fixed an issue where instance errors could permanently penalize an IM (Instance Manager) and introduce scheduling artifacts.

    • Added an event message for decreasing a job's instance count (previously there was only one for increasing).

    • Added the "domain" endpoints for installing, uninstalling, and listing (POST, DELETE, and GET, respectively) certificates and private keys for domains on the router.

    • Added the subnet pool resource for configurable virtual networks. Supports POST, DELETE and GET actions on the resource.

    • Significantly sped up the /v1/version endpoint.

    • When updating a job (i.e., PUT /v1/jobs/:uuid:), if there are no changes to the job, you now receive an HTTP 200 response with the unmodified job instead of an error (an example request follows this list).

    • Added the 'secret' set of endpoints for certificate/secret functionality. They support POST, GET, and DELETE actions for importing, listing, and deleting secrets/certificates.
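
      To illustrate the PUT /v1/jobs/:uuid: behavior described above, here is a minimal client sketch in Go. The host, UUID, payload, and auth handling are hypothetical; only the endpoint shape and the 200-with-unmodified-job behavior come from these notes.

        // Sketch of a job update; as of 3.0.0 a 200 no longer implies anything changed.
        package main

        import (
            "bytes"
            "fmt"
            "io/ioutil"
            "net/http"
        )

        func main() {
            payload := bytes.NewBufferString(`{"state": "started"}`) // hypothetical job body
            req, err := http.NewRequest(http.MethodPut,
                "https://api.example.com/v1/jobs/0f6f0e6e-0000-0000-0000-000000000000", payload)
            if err != nil {
                panic(err)
            }
            req.Header.Set("Content-Type", "application/json")
            // Authentication headers omitted; token handling is deployment-specific.

            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()

            body, _ := ioutil.ReadAll(resp.Body)
            if resp.StatusCode == http.StatusOK {
                // A no-op update also returns 200 with the unmodified job, not an error.
                fmt.Println("job returned:", string(body))
            } else {
                fmt.Println("update failed:", resp.Status)
            }
        }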

  • Chef changes
    • If you have deployed an APCFS high-availability file system, this release will upgrade GlusterFS from version 3.7.8 to version 3.8.12 and Ganesha NFS from version 2.3.0 to version 2.4.5.

    • Added some missing certificate authorities to the system CA list; these are required for validating connections to external services whose certificates are signed by those CAs.

    • Corrected a typo in the splunk-forwarder tag when untagging.

    • Deploys, configures, and populates HashiCorp Vault. Migrates component keys and database passwords out of the orchestrator/Chef database and cluster file system and into Vault.

    • Updated Orchestrator to version 2.0, which includes Vault support.

    • Introduced new dynamic taint adjustment options.

    • Allowed for the forced rotation of router HTTP access logs.

    • Updated Splunk (where used) to version 6.5.3.