Working with Large Packages

This article describes how to work with large package files in Apcera, including uploading, creating, and managing. Refer to Creating and Managing Packages for a primer on general package creation and concepts.

Requirements for Large Packages

When you create an app or a package from source code, the system uploads the source to the cluster and creates an application package file in the form of a *.cntmp file. By default, the untarred source code directory must be less than 2GB in size and require less than 256MB of RAM for Apcera to process it and create the resulting package.

If you are deploying a large legacy app that has not been broken into microservices or ported to the cloud, the package size may exceed these defaults. In that case, package processing may fail in one or more ways, including HTTP 504 and 400 errors, "connection reset by peer" errors, idle timeouts, and connections closed mid-transfer by an intermediate component (such as a load balancer) because the connection appeared idle.

Processing Large Packages

To process a large package file that exceeds the default memory or disk quota, you must complete the following procedure:

Cluster administrator

1. Determine the maximum package size you will need.

Get the size of the largest untarred source code directory you plan to upload to Apcera. You will need to know what this is to set the parameters specified in this procedure.

For example, the files in a directory called example-store-6.1.0 are 3.38 GB on disk in an uncompressed state.
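A quick way to get this figure is du. The following is a generic sketch; it builds a small stand-in directory so it runs anywhere, but in practice you would point SRC_DIR at your real source tree (such as example-store-6.1.0):

```shell
# Sketch: measure the on-disk size of an untarred source directory.
# A stand-in directory is created here; substitute your real source tree.
SRC_DIR=$(mktemp -d)
printf 'placeholder payload' > "$SRC_DIR/app.bin"
du -sh "$SRC_DIR"                                  # human-readable total
SIZE_KB=$(du -s -k "$SRC_DIR" | awk '{print $1}')  # exact total in 1K blocks
echo "$SIZE_KB"
```

The -k flag gives a consistent unit on both GNU and BSD/macOS versions of du.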

2. Increase the amount of disk and memory allocated to the Staging Coordinator.

To deploy a large package that exceeds these defaults, you need to increase the amount of resources allocated to the Staging Coordinator.

The Staging Coordinator is responsible for coordinating the loading of the tarball file to the cluster. At a minimum you must allocate disk space and memory to the Staging Coordinator equal to the amount of disk space required by the tarball file.

Staging Coordinator allocation is set in the chef.continuum.package_manager.staging_coordinator section of the cluster.conf file (or the cluster.conf.erb file if you are deploying with Terraform). The defaults are 256MB memory and 2GB disk.

chef: {
  "continuum": {
    "package_manager": {
      "staging_coordinator": {
        "memory": 268435456,
        "disk": 2147483648
      }
    }
  }
}
For large packages you must increase at least the disk size; memory may not need to be raised as much. For example, for a 4GB tarball, you would raise the Staging Coordinator allocation to 4GB memory and 4GB disk:

chef: {
  "continuum": {
    "package_manager": {
      "staging_coordinator": {
        "memory": 4294967296,
        "disk": 4294967296
      }
    }
  }
}

3. Increase the idle timeout for the load balancer (if necessary).

The complete package upload process is a transmission among the following components:

(1) apc → (2) Nginx → (3) API server → (4) Package Manager → (5) Storage backend

If you are using multiple HTTP Routers (Nginx) with a load balancer (LB) frontend, the flow is as follows:

(1) apc → (2) LB → (3) Nginx → (4) API server → (5) Package Manager → (6) Storage backend

If you are using Riak for the storage backend, the transmission flows from the Package Manager to Nginx to Riak.

Because the transmission passes through the HTTP Router (Nginx), you must raise the idle connection timeout for this component. As of Apcera release 2.4.1, this timeout is 2 minutes and 30 seconds. If you are using a load balancer, you must raise its idle timeout to match.

If you are using Terraform, you can increase the load balancer timeout using the "idle_timeout" setting for the ELB. Note that this setting may not be supported for all providers, in which case you must change it manually.

You can also set this manually, for example in the AWS console for the ELB; the default idle connection timeout for an Amazon ELB is 60 seconds.
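If your cluster is fronted by an AWS Classic ELB managed through Terraform, the idle timeout is a standard argument on the aws_elb resource. A minimal fragment (the resource name "router" and the omitted listener/subnet settings are placeholders for your own configuration):

resource "aws_elb" "router" {
  # ... listeners, subnets, and other settings elided ...

  # Match (or exceed) the Nginx idle timeout of 2m30s
  idle_timeout = 150
}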


4. Redeploy the cluster.

Since you changed cluster.conf, you will need to redeploy the cluster for these settings to take effect.


Developer

1. Create a compressed tarball file of the source code directory.

For example:

tar -zcvf example-store.tar.gz example-store-6.1.0/
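To sanity-check the archive you just created, gzip -l reports the compressed and uncompressed sizes. One caveat: gzip stores the uncompressed size in a 32-bit field, so the reported figure wraps around for payloads over 4GB; for very large packages, trust the du measurement of the source directory instead. A self-contained sketch (a stand-in archive is built here; run gzip -l on your real tarball instead):

```shell
# Sketch: inspect compressed vs. uncompressed size of a gzip archive.
cd "$(mktemp -d)"
printf 'aaaaaaaaaa' > payload               # 10-byte stand-in payload
gzip -f payload                             # produces payload.gz
gzip -l payload.gz                          # columns: compressed, uncompressed, ...
UNCOMPRESSED=$(gzip -l payload.gz | awk 'NR==2 {print $2}')
echo "$UNCOMPRESSED"                        # 10 for this stand-in
```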

2. Target your cluster and log in using APC.

apc target

apc login [--google, --basic, etc]

3. Change the namespace to the packages namespace.

On the file system, the package is uploaded to the root directory. Specifying a namespace gives the file a logical location you can refer to.

apc -ns /apcera/pkg/packages

4. Create a package from a tarball file.

The 'package from file' command creates a new package from an existing file on your local machine. This could be a tarball, or a *.cntmp file produced by the 'package build' command.

apc package from file example-store.tar.gz example-store-tarball --batch

When this package is used to build an application package, the untarred file contents are copied to the root directory of the job. You can test this by creating a capsule, adding the package, connecting to the capsule, and navigating the root file system, where you will find the file directory.

5. Add a provides clause to the package.

apc package update example-store-tarball --provides-add package.example-store-tarball

6. Create the package script and refer to the tarball file.

For example, the following is the contents of the example-store-pkg.conf configuration file:

name:      "example-store"
namespace: "/apcera/pkg/packages"

depends [ { os: "ubuntu" },   { package: "example-store-tarball"} ]

provides [ { package: "example-store-6.1.0"} ]

build (
  export INSTALL_PATH=/opt/myapp/example-store-6.1.0

  mkdir -p $INSTALL_PATH
  mv /example-store-6.1.0 $INSTALL_PATH
)

7. Increase the amount of disk allocated to the Compiler Stager.

You must increase the amount of disk space allocated to the Compiler Stager so that it can stage (compile) the application package. You must set the disk size to be at least as large as the tarball file that you want to process. Do not increase the amount of memory allocated to the Compiler Stager to process large package files.

For example:

apc job update job::/apcera/stagers::compiler -d 6GB

8. Run the package script and create the application package.

apc package build example-store-pkg.conf

For example:

./apc package build example-store-pkg.conf
Build and upload package? [Y/n]: 
Creating json file from manifest... done
Packaging... done
Creating package "example-store"... done
Uploading package contents... 100.0% (237/237 B)
[staging] Subscribing to the staging process...
[staging] Beginning staging with 'stagpipe::/apcera::compiler' pipeline, 1 stager(s) defined.
[staging] Launching instance of stager 'compiler'...
[staging] Downloading package manifest for processing...
[staging] Validating dependencies...
[staging] Rerunning stager with new dependencies
[staging] Stager needs relaunching
[staging] Launching instance of stager 'compiler'...
[staging] + export INSTALL_PATH=/opt/myapp/example-store-6.1.0
[staging] + mkdir -p /opt/myapp/example-store-6.1.0
[staging] + mv /example-store-6.1.0 /opt/myapp/example-store-6.1.0
[staging] Downloading package manifest for processing...
[staging] Validating dependencies...
[staging] Beginning process...
[staging] Executing build script...
[staging] Build complete.
[staging] Setting provides for the package...
[staging] Cleaning up /tmp...
[staging] Taking a snapshot of the filesystem changes...
[staging] Staging is complete.

Built "package::/apcera/pkg/packages::example-store"

Run apc package list in the appropriate namespace to confirm:

./apc package list
Working in "/apcera/pkg/packages"
│ Name                     │ Namespace            │ State │
│ apache-2.2.31            │ /apcera/pkg/packages │ ready │
│ apache-2.4.23            │ /apcera/pkg/packages │ ready │
│ apache-ant-1.9.4         │ /apcera/pkg/packages │ ready │
│ apache-tomcat-8.0.33     │ /apcera/pkg/packages │ ready │
│ bzr-2.6.0                │ /apcera/pkg/packages │ ready │
│ example-store            │ /apcera/pkg/packages │ ready │
│ example-store-tarball    │ /apcera/pkg/packages │ ready │
│ git-2.8.0                │ /apcera/pkg/packages │ ready │
│ gnatsd-0.7.2             │ /apcera/pkg/packages │ ready │
│ imagemagick-6.8.9-6-apc2 │ /apcera/pkg/packages │ ready │
│ iperf-2.0.5              │ /apcera/pkg/packages │ ready │
│ maven-3.2.2              │ /apcera/pkg/packages │ ready │
│ memcached-1.4.20         │ /apcera/pkg/packages │ ready │
│ mercurial-3.0.2          │ /apcera/pkg/packages │ ready │
│ minio-2016-09-11         │ /apcera/pkg/packages │ ready │
│ minio-2016-10-24         │ /apcera/pkg/packages │ ready │
│ minio-client-2016-08-21  │ /apcera/pkg/packages │ ready │
│ minio-client-2016-10-07  │ /apcera/pkg/packages │ ready │
│ mysql-5.6.31             │ /apcera/pkg/packages │ ready │
│ mysql-client-5.6.31-apc1 │ /apcera/pkg/packages │ ready │
│ newrelic-java-3.20.0     │ /apcera/pkg/packages │ ready │
│ nginx-1.11.3-apc1        │ /apcera/pkg/packages │ ready │
│ postgres-9.4.8           │ /apcera/pkg/packages │ ready │
│ rabbitmq-3.5.5-apc2      │ /apcera/pkg/packages │ ready │
│ redis-2.8.24             │ /apcera/pkg/packages │ ready │
│ rsnapshot-1.4.2          │ /apcera/pkg/packages │ ready │
│ rsync-3.1.2-apc1         │ /apcera/pkg/packages │ ready │
│ subversion-1.9.3         │ /apcera/pkg/packages │ ready │
│ td-agent-1.1.17-amd64    │ /apcera/pkg/packages │ ready │
│ tokumx-2.0.1             │ /apcera/pkg/packages │ ready │
│ zsh-5.0.5                │ /apcera/pkg/packages │ ready │

Run apc package show on the package to confirm:

./apc package show example-store
│ Package:          │ example-store                                  │
│ FQN:              │ package::/apcera/pkg/packages::example-store   │
│ UUID:             │ 60d2d4af-c9c3-4022-918f-a1761da3514e           │
│ State:            │ ready                                          │
│                   │                                                │
│ Created by:       │                                                │
│ Created at:       │ 2016-12-10 03:58:52.607583387 +0000 UTC        │
│ Updated by:       │                                                │
│ Updated at:       │ 2016-12-10 04:07:38.086366253 +0000 UTC        │
│                   │                                                │
│ Staging Pipeline: │ stagpipe::/apcera::compiler                    │
│ Stagers:          │ job::/apcera/stagers::compiler                 │
│ Dependencies:     │ os: ubuntu                                     │
│                   │ package: example-store-tarball                 │
│ Provides:         │ package: example-store-6.1.0                   │

You can now proceed with using apc app from package to create the application from the app package.

Additional options

Alternatively, you can fetch (download) the package artifact (tarball file) using the sources.url array in the package configuration file.

To generate the SHA checksum you can do the following:

shasum -a 256 example-store.tar.gz
f43331270ab9f864db92039217a84af09c566ccb15d9a0042ec0923d34067361  example-store.tar.gz

Then reference the remote file:

name:      "example-store"
namespace: "/apcera/pkg/packages"

sources [{
    url: ""
    sha256: "f43331270ab9f864db92039217a84af09c566ccb15d9a0042ec0923d34067361"
}]

depends [ { os: "ubuntu" },   { package: "example-store-tarball"} ]

provides [ { package: "example-store-6.1.0"} ]

build (
  export INSTALL_PATH=/opt/myapp/example-store-6.1.0

  mkdir -p $INSTALL_PATH
  mv /example-store-6.1.0 $INSTALL_PATH
)

NOTE: Do not use the include_files parameter to process large package files over 128MB.

Lastly, you can create a capsule, use apc capsule filecopy to transfer the tarball file to the capsule, snapshot the capsule (thereby creating a package), and then use apc app from package to create the app.