Using Multi-Resource Manifests

A multi-resource manifest is a JSON file that declares and configures one or more jobs, services, or networks. When you deploy a manifest (using APC, the Web Console, or the POST /manifest API endpoint), each newly declared resource in the manifest is created and configured according to its declaration. If a resource already exists, its properties are updated with those in the manifest (with a few caveats; see About updating resources in a multi-resource manifest).

You can do the following with a multi-resource manifest:

  • Create jobs from Docker images or existing packages
  • Specify resources, number of instances, auto-scaling behavior, routes, package dependencies, and other job attributes
  • Create and delete job links and service bindings
  • Create and configure services and bind them to jobs
  • Create virtual networks and define member jobs

For example, the following manifest defines two jobs, /dev::nats-server and /dev::nats-client. The nats-server job is created from the public NATS Docker image on Docker Hub. Its state field is set to started, which specifies that the job should be started when it is ready. The nats-client job is created from an existing package on the cluster (package::/dev::nats-client-pkg). A job link is created on the nats-client to the nats-server job using the links field.

{
  "jobs": {
    "job::/dev::nats-server": {
      "docker": {
        "image": "nats:latest"
      },
      "state": "started"
    },
    "job::/dev::nats-client": {
      "packages": [
        {
          "fqn": "package::/dev::nats-client-pkg"
        }
      ],
      "links": {
        "NATS": {
          "fqn": "job::/dev::nats-server",
          "port": 4222
        }
      },
      "state": "started"
    }
  }
}

Deploying a multi-resource manifest

To deploy an application with a multi-resource manifest, use the apc manifest deploy command, passing it the location of the JSON manifest file. For example:

apc manifest deploy myapp.json

You can also deploy a manifest using the Web Console.


The specified manifest file is uploaded to the cluster where it is parsed and executed.

About updating resources in a multi-resource manifest

When you deploy a multi-resource manifest the Apcera cluster parses the manifest for all declared resources (jobs, services, and networks). In general, if a declared resource already exists its properties are updated according to the new values in the manifest. There are a couple of exceptions to this behavior:

  • Existing job links and service bindings are not updated during manifest deployment. You must first delete the existing link or service binding and then redeploy the manifest with the new link or binding configuration. You can delete a binding or job link from a manifest by setting the object's "delete" attribute to true and then re-deploying the manifest (see Updating an existing job link), or use APC or the Web Console.
  • You cannot delete resources from a manifest. For example, removing a job from a manifest does not cause the resource to be deleted when the manifest is deployed. You must use APC or the Web Console to delete those resources.
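For example, the first exception can be handled within the manifest itself. The following is a minimal job fragment (the MYDB binding name and the job and service FQNs are hypothetical) that marks an existing service binding for deletion the next time the manifest is deployed:

```json
"job::/dev::my-app": {
  "services": {
    "MYDB": {
      "fqn": "service::/dev::mydb",
      "delete": true
    }
  }
}
```

After the deploy removes the binding, you can re-deploy with a new binding configuration in its place.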

Checking the status of manifest jobs

The apc manifest status <manifest> command displays the status of each job resource declared in the specified manifest, including the number of job instances, errors with the job (if any), and the virtual network subnet each job belongs to (if any).

apc manifest status nats-jobs.json

╭────────────────────────────────────────┬─────────┬───────────┬────────────────┬────────╮
│ Job FQN                                │ Status  │ Instances │ Subnet         │ Errors │
├────────────────────────────────────────┼─────────┼───────────┼────────────────┼────────┤
│ job::/sandbox/admin::nats-client       │ started │ 1         │ 192.168.1.0/24 │        │
│ job::/sandbox/admin::nats-server       │ started │ 1         │ 192.168.1.0/24 │        │
╰────────────────────────────────────────┴─────────┴───────────┴────────────────┴────────╯

Comparison to application manifests

Application manifests (which precede multi-resource manifests) let you specify a set of deployment attributes for a single job. In general, multi-resource manifests provide more functionality than single-job manifests. However, single-job manifests provide some features not currently available with multi-resource manifests:

  • Deployment of source code from the local system. Multi-resource manifests can create applications only from Docker images or existing packages.
  • Support for defining runtime templates.
  • Support for creating log drains.

Job declaration and configuration

Each multi-resource manifest must contain a top-level jobs JSON object. This field contains one or more job configuration objects that describe the jobs to create (or update) when the manifest is executed. The key of each job configuration object is the FQN of the job to create or update.

Note: Some job attributes cannot be updated or deleted from a job manifest. See About updating resources in a multi-resource manifest for details.

Each job configuration object may contain the following fields:

  • affinity – Job's affinity tags.
  • autoscaling – Job's auto-scaling configuration.
  • docker – Specifies the URL of a Docker image from which the job's dependent package is derived. Each job definition must include either docker or packages.
  • drains – Adds log drain endpoint(s) to an app.
  • env – Environment variables to set on the job.
  • exposed_ports – Ports to expose on the job.
  • group – Group name used to run the job process.
  • instances – Number of desired job instances.
  • links – Job links defined on the job.
  • monitoring – Specifies a custom metric to monitor for job auto-scaling.
  • packages – Specifies a list of the job's dependent packages. Each job definition must include either packages or docker.
  • resources – Compute resources (RAM, CPU, disk, network) reserved for the job.
  • restart_mode – Job's restart mode.
  • required_ports – Ports exposed on the job that are included in health checks.
  • routes – HTTP and TCP routes defined on the job.
  • scheduling_tags - Job's scheduling tags.
  • services – Service bindings defined on the job.
  • ssh – Controls whether you can connect to the job via SSH.
  • start – Defines command to start job and start timeout length.
  • state – Job's desired state (started, stopped).
  • stop – Defines command to stop job process, and stop timeout length.
  • user – User used to run the job process.

affinity

Specifies the job's affinity to other jobs, including the affinity type (repel or attract) and affinity requirement (soft or hard). May contain the following fields:

  • repel – An object with two array fields, hard and soft, that list the FQNs of jobs to create hard and soft anti-affinity with, respectively.
  • attract – An object with two array fields, hard and soft, that list the FQNs of jobs to create hard and soft affinity for, respectively.

The following example creates a hard anti-affinity (repel) between the nats-client job and the nats-server, and a soft anti-affinity between nats-client and nats-ops. It also creates a soft attractive affinity between nats-client and nats-worker.

{
  "jobs": {
    "job::/sandbox/admin::nats-client": {
      "packages": [
        {
          "fqn": "package::/sandbox/admin::nats-client"
        }
      ],
      "affinity": {
        "repel": {
          "hard": [
            "job::/sandbox/admin::nats-server"
          ],
          "soft": [
            "job::/sandbox/admin::nats-ops"
          ]
        },
        "attract": {
          "soft": [
            "job::/sandbox/admin::nats-worker"
          ]
        }
      }
    }
  }
}

autoscaling

An object that specifies the job's auto-scaling behavior. See Examples. May contain the following fields:

  • max_instances – Integer that specifies the maximum number of instances that a job should be scaled up to. Default: 5.
  • min_instances – Integer that specifies the minimum number of instances that a job should be scaled down to. Default: 1.
  • observation_interval_secs – Integer that specifies the number of seconds over which the auto-scaler samples metric data before computing a delta. The minimum (and default) value is 10 seconds.
  • warmup_secs – Integer that specifies the number of seconds to allow new instances to start up before considering any further auto-scaling. Default: 10.
  • rule – Object that specifies the auto-scaling method and its configuration parameters. Each rule contains the following fields:
    • type – String that identifies the auto-scaling method to use. Valid values are threshold (Threshold auto-scaling method) or pid (PID auto-scaling method).
    • metric – The name of the metric whose value is monitored and upon which the auto-scaler bases its decisions. Supported values are cpu_per_second (the default), requests_per_second, request_latency, or a custom metric name specified in the manifest's monitoring block.
    • config – The configuration settings for the auto-scaling controller specified by type. See Threshold auto-scaling parameters and PID auto-scaling parameters.
Threshold auto-scaling parameters

Threshold auto-scalers support the following configuration parameters:

  • lower_threshold – Threshold value of the metric below which a scale-down action is triggered. The default depends on the observed metric: cpu_per_second: 100.0; requests_per_second: 1.0; request_latency: 1.0.
  • upper_threshold – Threshold value of the metric above which a scale-up action is triggered. No default.
  • scale_up_delta – Number of job instances to start during a scale-up action. Default: 1.
  • scale_down_delta – Number of job instances to stop during a scale-down action. Default: 1.
  • monitoring_time_window_secs – Number of seconds the metric must remain outside the lower or upper threshold before a delta is signaled. Default: 0.
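Putting these fields together, the following is a sketch of a job's autoscaling block using the threshold method (the metric choice, bounds, and threshold values are illustrative, not recommendations):

```json
"autoscaling": {
  "min_instances": 1,
  "max_instances": 10,
  "observation_interval_secs": 10,
  "rule": {
    "type": "threshold",
    "metric": "requests_per_second",
    "config": {
      "lower_threshold": 1.0,
      "upper_threshold": 50.0,
      "scale_up_delta": 2,
      "scale_down_delta": 1
    }
  }
}
```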
PID auto-scaling parameters

PID auto-scalers support the following configuration parameters:

  • setpoint – The target value of the metric to maintain. Must be a positive number. The default depends on the observed metric: cpu_per_second: 100.0; requests_per_second: 1.0; request_latency: 1.0.
  • kp – The proportional gain: the coefficient of the proportional term of the expression. This term dictates the magnitude of the corrective action in proportion to the magnitude of the error, and determines how fast or aggressively the auto-scaler reacts to changes in the value of the metric. Default: 0.45.
  • ki – The integral gain: the coefficient of the integral term of the expression. When the error is steady and small over a long period of time, the proportional term is not effective because its value is equally small. The integral term accumulates the error second by second; its value eventually becomes significant enough to eliminate steady small errors, and ki determines how significant it becomes. This is the key to stability in the long run. Default: 0.0013.
  • kd – The derivative gain: the coefficient of the derivative term of the expression. This term measures the rate of change of the error; by adding it to the delta, the controller tries to anticipate the next value of the error and act accordingly. Default: 0.0.
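As an illustrative sketch (the setpoint and instance bounds are examples only, not recommendations), a PID rule that tries to hold CPU usage at 100 ms of CPU time per second might look like:

```json
"autoscaling": {
  "min_instances": 2,
  "max_instances": 8,
  "rule": {
    "type": "pid",
    "metric": "cpu_per_second",
    "config": {
      "setpoint": 100.0,
      "kp": 0.45,
      "ki": 0.0013,
      "kd": 0.0
    }
  }
}
```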

docker

Object that specifies the URL of a Docker image, optional credentials to access the image, and a flag that specifies whether to pull the latest image manifest from the Docker registry or use the cached manifest.

The docker object may contain the following fields:

  • image – (Required) Location of the Docker image, including registry location, image name and version tag.
    • The image tag defaults to latest if not specified.
    • For public images on Docker Hub, you can omit the registry URL (apcera/gnatsd:latest, for example).
    • For images on a private Docker registry you must include the image's full URL, including the URI protocol (http:// or https://).
  • pull – Boolean that determines if the specified image's metadata is requested from the remote Docker registry (true) or if the locally cached image metadata is used (false), if one exists. If no package already references the requested image its manifest is pulled from the registry. Also see Docker image caching.

    • If pull is true and the target job doesn't exist, a new package is created with the image pulled from the registry. If the job does exist, a new package is created with the image pulled from the registry, replacing the job's existing package.
    • If pull is false and the target job doesn't exist, a package in the namespace providing the same image from the same registry is looked up. If the package exists, the image's metadata and configuration is extracted from the package and a new package is created with that configuration. If no such reference package exists, the image manifest is pulled from the registry and a new package is created.
    • If pull is false, the job exists, and it is being re-deployed with the same Docker image, the job's current package is reused; no new package is created. If a different Docker image is requested, a package in the same namespace that provides that image is looked up. If such a package exists, its Docker configuration and metadata are used to create a new package. If no such package exists, the image is pulled from the registry and a new package is created.
  • username – For private repositories, the username to access the image.
  • password – For private repositories, the password to access the image.

The following manifest declares a job created from the latest official NATS image on Docker Hub:

{
  "jobs": {
    "job::/dev::nats_server": {
      "docker": {
        "image": "nats:latest"
      }
    }
  }
}

The following manifest declares a job created from a private Docker image that requires authentication:

{
  "jobs": {
    "job::/name/space::local-name": {
      "docker": {
        "image": "https://quay.io/user/image:1.0",
        "username": "admin123",
        "password": "adminpassword"
      }
    }
  }
}

The following manifest declares a job from a Docker image and sets the "pull" flag to true so it always pulls the image metadata from the registry rather than from the local cache:

{
  "jobs": {
    "job::/sandbox/admin::whalesay": {
      "docker": {
        "image": "user/whalesay:latest",
        "pull": true
      }
    }
  }
}

Notes:

  • Policy must exist that allows the authenticating user (or job, if using app tokens) to pull images from the specified Docker registry for the job's FQN (see Using policy for Docker whitelisting).
  • The packages field is ignored if a job declaration includes a docker field.

drains

Array of objects that specify the drains for sending logs from an application.

"job::/dev::appPackage": {
  "package": {
    "fqn": "package::/dev::appPackage"
  },
  "drains": [
    { "url": "syslog://test.example.com:80", "max_size": 256 }
  ]
}

env

Key-value map of environment variable names to values to set on job instances. For example, the following creates environment variables named VAR_1 and VAR_2 with the values VALUE 1 and VALUE 2, respectively.

"job::/::job-A": {
  "env": {
    "VAR_1": "VALUE 1",
    "VAR_2": "VALUE 2"
  }
}

exposed_ports

Array of integers that specify the ports to expose on the job. Use required_ports to specify which of the exposed ports should be monitored for health checks by the system.

Example:

  "job::/::A": {
    "docker": {
      "image": "https://registry-1.docker.io/library/nats:latest"
    },
    "exposed_ports": [ 4222, 6222, 8222 ]
  }

group

String that specifies the group to use when running the job process. If not specified, the system runtime picks a default.

  "job::/::job-A": {
    "package": {
      "fqn": "package::/dev::appPackage"
    },
    "user": "root",
    "group": "root"
  }

instances

Number that specifies the desired number of job instances. If not specified, a single instance is created. The following example specifies that there should be three instances of the job running:

{
  "jobs": {
    "job::/sandbox/admin::guide-app": {
      "packages": [
        { "fqn": "package::/apcera::continuum-guide" }
      ],
      "instances": 3,
      "state": "started"
    }
  }
}

links

Object that defines one or more job link configuration objects. The key for each link configuration object is the base name of the environment variable set on the job instance. Each configuration object can contain the following fields:

  • fqn – FQN of the target job to link to (required). The specified job must either be declared elsewhere in the manifest, or already exist on the cluster.
  • port – Port on target job to link to (optional). Defaults to 0. The specified port must be exposed on the target job.
  • bound_ip – IP address that the source job should use to connect to the target job (optional). By default, an available IP is automatically selected.
  • bound_port – Port that the source job should use to connect to the target job (optional). By default, an available port is automatically selected.
  • delete – Boolean that specifies if this job link should be deleted.

The following example declares two jobs (capsules) named capsule-A and capsule-B. A link named CAPSULEB is defined on capsule-A that provides a link to capsule-B.

{
  "jobs": {
    "job::/sandbox/admin::capsule-A": {
      "packages": [
        {
          "fqn": "package::/apcera/pkg/os::ubuntu-14.04"
        }
      ],
      "ssh": true,
      "start": {
        "cmd": "/sbin/init"
      },
      "links": {
        "CAPSULEB": {
          "fqn": "job::/sandbox/admin::capsule-B",
          "port": 4567
        }
      },
      "state": "started"
    },
    "job::/sandbox/admin::capsule-B": {
      "packages": [
        {
          "fqn": "package::/apcera/pkg/os::ubuntu-14.04"
        }
      ],
      "ssh": true,
      "start": {
        "cmd": "/sbin/init"
      },
      "exposed_ports": [ 4567 ],
      "state": "started"
    }
  }
}

Notes:

  • To delete an existing job link add "delete": true to the link configuration object, e.g.:

    "links": {
      "CAPSULEB": {
        "fqn": "job::/sandbox/admin::capsule-B",
        "port": 4567,
        "delete": true
      }
    }

  • If the source job already has a binding with the same name it is not re-created.
  • The target job's FQN must be specified in full; FQNs are not currently expanded from the default namespace.

monitoring

Used for job auto-scaling. An object that defines a custom metric name and an HTTP endpoint to monitor its value. The named metric must be referenced within an autoscaling configuration block.

The monitoring object's keys are custom metric names. Each named metric is configured by an object with the following fields:

  • type – The type of custom monitoring rule. Currently, the only supported value is http_metric.
  • config – Object that contains a url field whose value is the HTTP endpoint to monitor.

The following is a (partial) manifest that defines a custom HTTP metric named queue_size with the URL endpoint of https://monitoring.example.com/queue.

{
  "jobs": {
    "job::/sandbox/admin::myapp": {
      ...,
      "monitoring": {
        "queue_size": {
          "type": "http_metric",
          "config": {
            "url": "https://monitoring.example.com/queue"
          }
        }
      },
      "auto-scaling": {
        "max_instances": 20,
        "min_instances": 1,
        "observation_interval_secs": 10,
        "rule": {
          "type": "pid",
          "metric": "queue_size",
          "config": {
            "setpoint": 50,
            "KP": 0.45,
            "KI": 0.01,
            "delta": 1,
          }
        }
      }
    }
  }
}

packages

The packages field is an array of objects that specify the job's dependent packages. Each object in the array must contain an fqn field that specifies the fully-qualified name of the package to add to the job's dependencies. This package must already exist on the cluster.

For example, the following adds package::/apcera/pkg/runtimes::node-0.10.33 and package::/dev::my-app-package as dependencies to job::/::job-B.

    {
      "jobs": {
        "job::/::job-B": {
          "packages": [
            { "fqn": "package::/apcera/pkg/runtimes::node-0.10.33" },
            { "fqn": "package::/dev/my-app-package" }
          ]
        }
      }
    }

Notes:

  • The apc manifest command does not support creating or importing packages into the cluster; referenced packages must have been imported previously using APC (for example, apc app create or apc package import).
  • Every job declaration must include at least one package or specify a Docker image (in which case the packages field is ignored).

resources

Object that specifies the CPU, network, disk, and memory to allocate to each job instance. May contain the following fields:

  • cpu – Milliseconds of CPU time per second of physical time. May exceed 1000 ms/second when CPU time is spread across multiple cores.
  • disk_space – Amount of disk space to allocate to each instance (default is 1GiB).
  • memory – Amount of memory to allocate to each instance (default is 256MiB)
  • network_bandwidth – Amount of network throughput to allocate (floor).
  • network_max – Amount of network throughput to allow (ceiling).

Example:

"job::/::job-A": {
  "resources": {
      "cpu": "1000",
      "network_bandwidth": "1Mbps",
      "memory": "1GB",
      "disk_space": "2GB",
      "network_max": "2Mbps"
  }
}

required_ports

Array of integers that specify which exposed ports (specified by exposed_ports) should be monitored for health checks.

The following example exposes three ports (4222, 6222, and 8222) and indicates that 4222 and 6222 should be monitored for health checks.

"job::/::A": {
  "docker": {
    "image": "https://registry-1.docker.io/library/nats:latest"
  },
  "exposed_ports": [ 4222, 6222, 8222 ],
  "required_ports": [ 4222, 6222]
}

restart_mode

String that specifies the job's restart mode. Must be one of the following values:

  • "always" – Restarts the container ignoring the exit code.
  • "failure" – Restarts the container whenever the exit code is non-zero.
  • "no" – Never restart the job.

Example:

{
  "jobs": {
    "job::/sandbox/admin::hello-world": {
      "docker": {
        "image": "tutum/hello-world:latest"
      },
      "restart_mode": "no",
      "state": "started"
    }
  }
}

routes

An array of route configuration objects that define routes to map to the job. Each route configuration object may contain the following fields:

  • type – String that specifies the route type. Valid values are "http" or "tcp".
  • endpoint – String that specifies the route's URL endpoint. Can be specified in the form of <host-name>.<cluster-name>.<domain-name> to choose a specific endpoint, or the string auto to have an endpoint selected by the system.
  • config – (Required) Configures the route. For HTTP routes this field is a map of URL paths relative to the route's endpoint. Each path (object key) must start with a forward slash (/). For TCP routes this field is an array of configuration objects. Each configuration object can contain the following fields:
    • https_only – Boolean that specifies if HTTPS only should be enforced on the route. Default is false. Only valid for HTTP routes. See Enforcing HTTPS on the router for details.
    • port – (Required) Physical port number exposed on the app to which the route is mapped. Set to 0 to have Apcera select a port.
    • weight – Proportion of traffic delivered to this route, normalized across all apps sharing the route.

For example, the following manifest creates a job from the NATS Docker image and defines an HTTPS-only route on port 8222, and a TCP route on port 4222.

{
  "jobs": {
    "job::/dev::nats-app": {
      "docker": {
        "image": "https://registry-1.docker.io/library/nats:0.8.0"
      },
      "state": "started",
      "exposed_ports": [
        4222,
        8222
      ],
      "routes": [
        {
          "type": "http",
          "endpoint": "nats-monitor.example.com",
          "config": {
            "/varz": [
              {
                "port": 8222,
                "https_only": true
              }
            ]
          }
        },
        {
          "type": "tcp",
          "endpoint": "auto",
          "config": [
            {
              "port": 4222
            }
          ]
        }
      ]
    }
  }
}

scheduling_tags

An object that contains two arrays, "soft" and "hard", that specify the soft tags and hard tags to apply to the job, respectively.

The following example assigns "openstack" as a soft tag, and "aws" and "vsphere" as hard tags to the specified job.

"jobs": {
  "job::/sandbox/demo::redis-server" : {
    "docker" {
      "image" : "redis:latest"
    },
    "scheduling_tags" : {
      "soft": [ "openstack" ],
      "hard": [ "aws", "vsphere"]
    }
  }
}

services

Object that defines service bindings on the job. Consists of one or more binding configuration objects. The key for each object is the base name of the environment variable set on the container. A configuration object supports the following fields:

  • fqn – FQN of the service to bind the job to. The service must either already exist on the cluster or be declared within the manifest's services block (see Service declaration and configuration).
  • params – A key-value map of parameters to configure the binding. Parameters are specific to each service gateway.
  • delete – Boolean that specifies if this service binding should be deleted when the manifest is deployed.

The following example binds my-app to the /apcera::outside service that allows network egress from the job, and a MongoDB service that is declared in the top-level services field (see Service declaration and configuration).

{
  "jobs": {
    "job::/sandbox/admin::my-app": {
      "packages": [
        {
          "fqn": "package::/apcera/pkg/os::ubuntu-14.04-apc3"
        }
      ],
      "start": {
        "cmd": "/sbin/init"
      },
      "services": {
        "TODOS": {
          "fqn": "service::/sandbox/admin::todos_db"
        },
        "EGRESS": {
          "fqn": "service::/apcera::outside"
        }
      },
      "state" : "started",
      "ssh": true
    }
  },
  "services": {
      "service::/sandbox/admin::todos_db": {
          "description": "Database for todo items",
          "name": "todo",
          "type": "mongodb",
          "params": {
            "persistence_provider": "provider::/apcera/providers::apcfs"
          }
      }
  }
}

start

Object that specifies the command to run when the instance is started. Contains the following fields:

  • cmd – String that specifies the command(s) to run when the instance starts (required).
  • timeout – Integer that specifies the number of seconds to allow the command to complete (optional).

Example:

"job::/sandbox/admin::capsule-1": {
  "packages": [
    {
      "fqn": "package::/apcera/pkg/os::ubuntu-14.04"
    }
  ],
  "start": {
    "cmd": "/sbin/init",
    "timeout" : 30
  }
}

state

String that specifies the desired job state. Valid values are "started" or "stopped".

Example:

"job::/sandbox/admin::nats-docker": {
  "docker": {
    "image": "https://registry-1.docker.io/library/nats:latest"
  },
  "start": {
    "cmd": "/gnatsd -c gnatsd.conf"
  },
  "state" : "started"
}

stop

Object that specifies the command to run when the instance is stopped. Contains the following fields:

  • cmd – String that specifies the command(s) to run when the instance stops (required).
  • timeout – Integer that specifies the number of seconds to allow the stop command to complete (optional).

Example:

"job::/sandbox/admin::capsule-1": {
  "packages": [
    {
      "fqn": "package::/apcera/pkg/os::ubuntu-14.04"
    }
  ],
  "stop": {
    "cmd": "echo 'capsule-1 stopping...'",
    "timeout" : 30
  }
}

ssh

Boolean that specifies if SSH access is permitted to the app's container. When creating a capsule, you must set this property to true (see Creating a capsule).

  "job::/::job-A": {
    "package": {
      "fqn": "package::/dev::appPackage"
    },
    "ssh": true
  }

user

String that specifies the user to use when running the instance. Defaults to root (id = 0). If a Docker image specifies a user in the image configuration, then that user is used.

"job::/::job-A": {
  "package": {
    "fqn": "package::/dev::appPackage"
  },
  "user": "root",
  "group": "root"
}

Service declaration and configuration

A multi-resource manifest may contain a top-level JSON field named services that declares and configures one or more service configuration objects. The key for each service configuration object is the FQN of the service to create. Jobs in a manifest can bind to services declared in the same manifest.

Service configuration object fields

Each service configuration object may contain the following fields:

  • type – The type of service to create (e.g., "mysql" or "http"). Required if provider_fqn is not specified.
  • provider_fqn – FQN of the provider to create the service on. Required for service gateways that support a provider tier (e.g. mysql, postgres).
  • description – Description of the service.
  • params – A map of service parameter names to values. See Apcera-provided service types for parameters supported by Apcera-provided services.

The following example manifest creates a new Postgres service on an existing provider and binds it to a job.

{
    "jobs": {
        "job::/sandbox/admin::my-todos-app": {
            "packages": [
                {
                    "fqn": "package::/sandbox/admin::my-todos-app"
                }
            ],
            "services": {
                "TODOS": {
                    "fqn": "service::/sandbox/admin::todos_db"
                }
            }
        }
    },
    "services": {
        "service::/sandbox/admin::todos_db_1": {
            "description": "Postgres Database for todo items",
            "name": "todos_db",
            "provider_fqn": "provider::/apcera/providers::postgres-provider",
            "type": "postgres"
        }
    }
}

Also see Examples.

Virtual network declaration and configuration

A multi-resource manifest may contain a top-level JSON field named networks that declares one or more virtual networks. The key for each network configuration object is the FQN of the network to create.

Each network configuration object may contain a jobs field, which is an array of jobs to join to the virtual network. Each member of the jobs array is an object with the following fields:

  • broadcast_enable – Boolean that enables or disables network broadcast over the network.
  • discovery_address – An optional name concatenated with ".apcera.local" by which other jobs in the virtual network can discover this job. A discovery address is case-insensitive and may consist of letters and numbers. Dots and hyphens are also allowed, except at the beginning and end of the discovery address. Also see Network discovery.
  • fqn – The FQN of the job to join to the network.
  • multicast_addresses – An array of network addresses in CIDR notation to enable for multi-cast communication over the network.

The following manifest declares two jobs and joins them to the net1 network created in the same manifest. See Example Manifests for more examples of creating and configuring virtual networks.

{
    "jobs": {
        "job::/sandbox/admin::app-A": { ...  },
        "job::/sandbox/admin::app-B": { ... }
    },
    "networks": {
        "network::/sandbox/admin::net1": {
            "jobs": [
                {
                    "fqn": "job::/sandbox/admin::app-A"
                },
                {
                    "fqn": "job::/sandbox/admin::app-B"
                }
            ]
        }
    }
}

Using substitution variables

A multi-resource manifest may contain variables that you can substitute with values you specify. A substitution variable is declared as ${VARIABLE_NAME} within the manifest text. For example, the following manifest defines three substitution variables: ${NAMESPACE}, ${JOB_NAME}, and ${IMAGE_URI}.

{
  "jobs": {
    "job::${NAMESPACE}::${JOB_NAME}": {
      "docker": {
        "image": "${IMAGE_URI}"
      }
    }
  }
}

The apc manifest deploy command takes the names of the variables declared in the manifest as extended command parameters after the double dash (--). You can specify values for each variable in the APC command, or create environment variables with matching names to hold the values. For example, the following command provides values for the JOB_NAME, NAMESPACE, and IMAGE_URI variables directly in the command string:

apc manifest deploy nats_manifest.json -- \
--JOB_NAME myjob --NAMESPACE /dev/test \
--IMAGE_URI nats

Equivalently, you can define values with environment variables with matching names. For example, the following is equivalent to the previous example:

JOB_NAME=myjob NAMESPACE=/dev/test IMAGE_URI=nats \
apc manifest deploy nats_manifest.json -- --JOB_NAME --NAMESPACE --IMAGE_URI

Notice that the APC command string includes the variable names used in the manifest (this is required) but the values are omitted, as they will be taken from the environment.

Note: Only those environment variables whose names match variables declared in the APC command string are read from your environment.
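
Conceptually, the substitution that APC performs behaves like a plain text replacement with an environment-variable fallback. The following Python sketch illustrates that behavior; it is an approximation for clarity, not Apcera's actual implementation:

```python
import os
import re

def substitute(manifest_text, values):
    """Replace ${NAME} placeholders with provided values, falling back
    to a matching environment variable, then to an empty string."""
    def expand(match):
        name = match.group(1)
        if name in values:
            return values[name]
        # Mirror APC's fallback: read a matching environment variable.
        return os.environ.get(name, "")
    # Variable names consist of letters, numbers, and underscores.
    return re.sub(r"\$\{([A-Za-z0-9_]+)\}", expand, manifest_text)

manifest = '{"jobs": {"job::${NAMESPACE}::${JOB_NAME}": {}}}'
print(substitute(manifest, {"NAMESPACE": "/dev/test", "JOB_NAME": "myjob"}))
# → {"jobs": {"job::/dev/test::myjob": {}}}
```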

A common use case for substitution variables is providing credentials for a private Docker registry, as shown below. This lets you share a single manifest with others without embedding sensitive information in it:

{
  "jobs": {
    "job::/sandbox/user::myapp": {
      "docker": {
        "image": "quay.io/user/busybox:latest",
        "username": "${USERNAME}",
        "password": "${PASSWORD}"
      }
    }
  }
}

The following APC command deploys the manifest with the provided username and password:

apc manifest deploy private_image.json -- \
--USERNAME myusername --PASSWORD mypassword

Note the following about substitution variables:

  • Variables can be used only in quoted string values in the manifest's text; number and Boolean values cannot be substituted.
  • Variable names may consist of letters, numbers, and underscores.
  • Variable names are case-sensitive (${VAR} is distinct from ${var}).
  • Variables are not expanded recursively (for example, ${${VAR}} is invalid).
  • Variable names cannot be escaped.
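
These rules can be illustrated with a small sketch. The regular expression below is an assumption that mirrors the documented naming rules, not APC's actual parser; note that ${IMAGE_URI} and ${image_uri} are found as two distinct variables:

```python
import re

# Names: letters, digits, and underscores; matching is case-sensitive.
VAR_PATTERN = re.compile(r"\$\{([A-Za-z0-9_]+)\}")

def declared_variables(manifest_text):
    """Return the distinct substitution variables, preserving case."""
    return sorted(set(VAR_PATTERN.findall(manifest_text)))

text = '{"image": "${IMAGE_URI}", "alt": "${image_uri}"}'
print(declared_variables(text))
# → ['IMAGE_URI', 'image_uri']
```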

Missing substitution variables

If a value for a substitution variable is not provided, APC displays a warning for each missing variable, for example:

Deploying manifest...
Warning: no value found for variable JOB_NAME

In general, if a value is not provided for a variable then it is assigned an empty string (""). The exception to this rule is for variables used in the namespace portion of FQNs used in the manifest. In that case, the FQN's namespace is set to the default namespace set by APC. This feature enhances portability of manifests by permitting different users to easily deploy the same manifest to their own namespace with minimal configuration. To demonstrate, consider the following manifest:

{
  "jobs": {
    "job::${NAMESPACE}::${JOB_NAME}": {
      "docker": {
        "image": "nats:latest"
      }
    }
  }
}

If a value is provided for ${NAMESPACE} but not for ${JOB_NAME}, the manifest fails to deploy with an Invalid job FQN error because the FQN is missing its local name:

apc manifest deploy nats_manifest.json -- --NAMESPACE /dev/test
Deploying manifest...

Error: Invalid job FQN: "job::/dev/test::"

However, if a value is provided for ${JOB_NAME} but not for ${NAMESPACE} then APC automatically inserts the current default namespace into the namespace portion of the job's FQN, for example:

apc namespace
Current namespace: '/sandbox/admin'

apc manifest deploy nats_manifest.json -- --JOB_NAME mynats
Deploying manifest...
...
[manifest] -- Deploy -- created "job::/sandbox/admin::mynats"

Notice that the deployed job's namespace was automatically set to /sandbox/admin (the current default namespace). If you change the default namespace before deploying the manifest then the job will be deployed to that namespace:

apc namespace /test/dev
Setting namespace to '/test/dev'... done

apc manifest deploy nats_manifest.json -- --JOB_NAME helloworld
Deploying manifest...
...
[manifest] -- Deploy -- created "job::/test/dev::helloworld"
...

Equivalently, you can use the global --namespace APC option to specify the target namespace:

apc manifest deploy nats_manifest.json --namespace /prod -- \
--JOB_NAME mynats
Deploying manifest...
...
[manifest] -- Deploy -- created "job::/prod::mynats"
...

Debugging variable substitution

To help debug issues related to variable substitution, you can pass the optional --expansion-file [FILENAME] parameter to the apc manifest deploy command. This writes a copy of your original manifest file with each variable replaced by its substituted value.

For example, the following deploys the manifest and creates a file named example_manifest_expanded.json that contains the full manifest with substituted values:

apc manifest deploy example_manifest.json --expansion-file example_manifest_expanded.json -- \
--NAMESPACE /sandbox/admin

You can then open example_manifest_expanded.json to view its contents.
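
A quick sanity check on the expanded file is to confirm it parses as JSON and contains no leftover placeholders. The following Python sketch illustrates the idea on a literal string (in practice you would read example_manifest_expanded.json from disk):

```python
import json
import re

def check_expanded(manifest_text):
    """Parse the expanded manifest and return any unexpanded
    ${...} placeholders that survived substitution."""
    json.loads(manifest_text)  # raises ValueError if the JSON is malformed
    return re.findall(r"\$\{[A-Za-z0-9_]+\}", manifest_text)

# A literal string keeps this sketch self-contained.
expanded = '{"jobs": {"job::/sandbox/admin::${JOB_NAME}": {}}}'
print(check_expanded(expanded))
# → ['${JOB_NAME}'] — a value for JOB_NAME was never supplied
```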

To expand the manifest variables in a new file, but not deploy the manifest, add the --expand-only flag:

apc manifest deploy example_manifest.json \
--expansion-file example_manifest_expanded.json --expand-only \
-- --NAMESPACE /sandbox/admin

This will create the example_manifest_expanded.json file but not deploy it.

Example manifests

Below are examples of application manifests.

Creating a capsule

The following manifest creates and starts a capsule running Ubuntu.

{
  "jobs": {
    "job::/sandbox/admin::my-capsule": {
      "packages": [
        {
          "fqn": "package::/apcera/pkg/os::ubuntu-14.04"
        }
      ],
      "ssh": true,
      "start": {
        "cmd": "/sbin/init"
      },
      "state" : "started"
    }
  }
}

Note: You must include "ssh": true in the job definition for the capsule to start.

Creating and binding to an NFS service

The following example creates an NFS service (service::/sandbox/admin::my-nfs) and binds it to a capsule. The service's optional mountpath parameter specifies where in the job's file system to mount the NFS file system (/data/capsuleB).

{
    "jobs": {
        "job::/sandbox/admin::capsuleB": {
            "packages": [
                {
                    "fqn": "package::/apcera/pkg/os::ubuntu-14.04-apc3"
                }
            ],
            "services": {
                "nfs": {
                    "fqn": "service::/sandbox/admin::my-nfs",
                    "params": {
                      "mountpath": "/data/capsuleB"
                    }
                }
            },
            "ssh": true,
            "start": {
                "cmd": "/sbin/init"
            },
            "state": "started"
        }
    },
    "services": {
        "service::/sandbox/admin::my-nfs": {
            "description": "An NFS service.",
            "provider_fqn": "provider::/apcera/providers::apcfs"
        }
    }
}

Creating a job link

This example manifest declares two jobs: a NATS server created from the public NATS Docker image, and a simple NATS client application. The client publishes a message to the server in a loop and writes the latency of each response to the log. A job link is created from the client to the server so the client can reach the server.

{
  "jobs": {
    "job::/sandbox/admin::nats-server": {
      "docker": {
        "image": "nats:latest"
      },
      "state": "started"
    },
    "job::/sandbox/admin::nats-client": {
      "docker":
        {
          "image": "apcerademos/nats-ping:latest"
        },
      "links": {
        "NATS": {
          "fqn": "job::/sandbox/admin::nats-server",
          "port": 4222
        }
      },
      "state": "started"
    }
  }
}

Deploy the manifest:

apc manifest deploy joblink.json
Deploying manifest...
[manifest] -- Deploy -- execution started
[manifest] -- Deploy -- checking if policy allows linking "job::/sandbox/admin::nats-client" to "job::/sandbox/admin::nats-server"
[manifest] -- Deploy -- creating "job::/sandbox/admin::nats-server"
[manifest] -- Deploy -- created "job::/sandbox/admin::nats-server"
[manifest] -- Deploy -- creating "job::/sandbox/admin::nats-client"
[manifest] -- Deploy -- created "job::/sandbox/admin::nats-client"
[manifest] -- Deploy -- linking "job::/sandbox/admin::nats-client" to "job::/sandbox/admin::nats-server"
[manifest] -- Finish -- execution was successful

╭────────────────────────────────────────┬─────────┬───────────┬────────┬────────╮
│ Job FQN                                │ Status  │ Instances │ Subnet │ Errors │
├────────────────────────────────────────┼─────────┼───────────┼────────┼────────┤
│ job::/sandbox/admin::nats-client       │ started │ 1         │ N/A    │        │
│ job::/sandbox/admin::nats-server       │ started │ 1         │ N/A    │        │
╰────────────────────────────────────────┴─────────┴───────────┴────────┴────────╯

To verify the client is reaching the server, view the logs on the client application. You should see a log entry for each ping:

apc app logs nats-client
[stderr] [PING] Latency: 332.159us
[stderr] [PING] Latency: 380.452us
[stderr] [PING] Latency: 471.316us

Creating a job from an existing package

The following manifest creates a new job from the /apcera::continuum-guide package installed on all Apcera clusters. Change the route's endpoint to point to an endpoint on your cluster, or specify auto to have the route endpoint generated for you (see Automatic HTTP route generation example).

{
  "jobs": {
    "job::/sandbox/admin::my-guide": {
      "packages": [
        {
          "fqn": "package::/apcera::continuum-guide"
        }
      ],
      "state": "started",
      "exposed_ports": [ 0 ],
      "routes": [
        {
          "type": "http",
          "endpoint": "guide-app.example.apcera-platform.io",
          "config": {
            "/": [
              {
                "weight": 100,
                "port": 0
              }
            ]
          }
        }
      ]
    }
  }
}

Automatic HTTP route generation example

Multi-resource manifests support automatic generation of both HTTP and TCP routes. The following example demonstrates both use cases:

{
    "jobs": {
        "job::/example::nats-docker-A": {
            "docker": {
                "image": "nats:0.7.2"
            },
            "exposed_ports": [
                4222,
                8222
            ],
            "routes": [
                {
                    "config": {
                        "/varz": [
                            {
                                "port": 8222
                            }
                        ]
                    },
                    "endpoint": "auto",
                    "type": "http"
                },
                {
                    "config": [
                        {
                            "port": 4222
                        }
                    ],
                    "endpoint": "auto",
                    "type": "tcp"
                }
            ],
            "state": "started"
        },
        "job::/example::nats-docker-B": {
            "docker": {
                "image": "nats:0.6.8"
            },
            "exposed_ports": [
                4222,
                8222
            ],
            "routes": [
                {
                    "config": {
                        "/*": [
                            {
                                "port": 8222
                            }
                        ]
                    },
                    "endpoint": "auto",
                    "type": "http"
                },
                {
                    "config": [
                        {
                            "port": 4222
                        }
                    ],
                    "endpoint": "auto",
                    "type": "tcp"
                }
            ],
            "state": "started"
        }
    }
}

Result:

apc manifest deploy example.json

Deploying manifest with 2 jobs
[manifest] -- Deploy -- execution started
[manifest] -- Deploy -- creating "job::/example::nats-docker-A"
[manifest] -- Deploy -- created "job::/example::nats-docker-A"
[manifest] -- Deploy -- creating "job::/example::nats-docker-B"
[manifest] -- Deploy -- created "job::/example::nats-docker-B"
[manifest] -- Finish -- execution was successful

╭──────────────────────────────┬─────────┬───────────┬────────╮
│ FQN                          │ Status  │ Instances │ Errors │
├──────────────────────────────┼─────────┼───────────┼────────┤
│ job::/example::nats-docker-A │ started │ 1         │        │
│ job::/example::nats-docker-B │ started │ 1         │        │
╰──────────────────────────────┴─────────┴───────────┴────────╯

Network discovery example

The following manifest creates two capsules, joins them to a virtual network, and assigns each capsule a network discovery name. The specified name is concatenated with ".apcera.local", so app-A can ping app-B at the address app-B.apcera.local.

{
    "jobs": {
        "job::/sandbox/admin::app-A": {
          "packages": [
            {
              "fqn": "package::/apcera/pkg/os::ubuntu-14.04-apc3"
            }
          ],
          "ssh": true,
          "start": {
            "cmd": "/sbin/init"
          },
          "state" : "started"
        },
        "job::/sandbox/admin::app-B": {
          "packages": [
            {
              "fqn": "package::/apcera/pkg/os::ubuntu-14.04-apc3"
            }
          ],
          "ssh": true,
          "start": {
            "cmd": "/sbin/init"
          },
          "state" : "started"
        }
    },
    "networks": {
        "network::/sandbox/admin::net1": {
            "jobs": [
                {
                    "fqn": "job::/sandbox/admin::app-A",
                    "discovery_address": "app-A"
                },
                {
                    "fqn": "job::/sandbox/admin::app-B",
                    "discovery_address": "app-B"
                }
            ]
        }
    }
}

To verify that the capsules can communicate, once the manifest has been deployed, connect to one capsule using APC and ping the other at its discovery address, as shown below.

apc capsule connect app-A
-bash-4.3# ping app-B.apcera.local
PING app-b.apcera.local (192.168.1.4) 56(84) bytes of data.
64 bytes from 192.168.1.4: icmp_seq=1 ttl=64 time=0.314 ms
64 bytes from 192.168.1.4: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 192.168.1.4: icmp_seq=3 ttl=64 time=0.036 ms

Network broadcast example

The following manifest creates two jobs, sender and listener. The sender application broadcasts messages to 255.255.255.255:6666, while the listener application listens for messages at the same address. The broadcast-virtual-net demo provides two applications that do this.

Note that since you can't deploy source code via a multi-resource manifest, you must deploy the sender and listener applications using APC (as described in the README), and then reference the application packages by FQN in the manifest, as shown below.

{
    "jobs": {
        "job::/sandbox/admin::master": {
          "packages": [
            {
              "fqn": "package::/sandbox/admin::sender"
            }
          ],
          "state" : "started"
        },
        "job::/sandbox/admin::listener": {
          "packages": [
            {
              "fqn": "package::/sandbox/admin::listener"
            }
          ],
          "state" : "started"
        }
    },
    "networks": {
        "network::/sandbox/admin::net1": {
            "jobs": [
                {
                    "fqn": "job::/sandbox/admin::sender",
                    "broadcast_enable": true
                },
                {
                    "fqn": "job::/sandbox/admin::slave1-app",
                    "broadcast_enable": true
                }
            ]
        }
    }
}

Network multi-cast example

The following manifest creates two jobs from existing packages (see Creating a job from an existing package), adds them to the net1 virtual network, and enables network multi-cast on both jobs.

{
    "jobs": {
        "job::/sandbox/admin::app1": {
          "packages": [
            {
              "fqn": "package::/sandbox/admin::app1"
            }
          ],
          "state" : "started"
        },
        "job::/sandbox/admin::app2": {
          "packages": [
            {
              "fqn": "package::/sandbox/admin::app2"
            }
          ],
          "state" : "started"
        }
    },
    "networks": {
        "network::/sandbox/admin::net1": {
            "jobs": [
                {
                    "fqn": "job::/sandbox/admin::app1",
                    "multicast_addresses": [
                      "225.1.1.0/24"
                    ]
                },
                {
                    "fqn": "job::/sandbox/admin::app2",
                    "multicast_addresses": [
                      "225.1.1.0/24"
                    ]
                }
            ]
        }
    }
}

Updating job links and service bindings

Updating an existing job link or service binding with a manifest is a two-step process. First, delete the existing job link (or binding) by setting a "delete" property to true on its JSON declaration and redeploying the manifest. Then update the job link (or service binding) declaration to its new values, remove the "delete" property (or set it to false), and redeploy to create the new link or binding.

Note: You can always delete job links and service bindings using APC or the Web Console. This example demonstrates how to accomplish the same thing using manifests only.
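
The walkthrough below uses a job link, but deleting a service binding works the same way: set "delete": true on the binding's declaration and redeploy. For example (the FQNs here are hypothetical):

```json
{
  "jobs": {
    "job::/sandbox/admin::myapp": {
      "services": {
        "db": {
          "fqn": "service::/sandbox/admin::old_db",
          "delete": true
        }
      }
    }
  }
}
```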

Suppose you've deployed the following manifest, which declares two jobs (nats-server-0.9.4 and nats-client) and defines a job link that the client uses to reach the NATS server.

{
  "jobs": {
    "job::/sandbox/admin::nats-server-0.9.4": {
      "docker": {
        "image": "nats:0.9.4"
      }
    },
    "job::/sandbox/admin::nats-client": {
      "docker":
        {
          "image": "apcerademos/nats-ping:latest"
        },
      "links": {
        "NATS": {
          "fqn": "job::/sandbox/admin::nats-server-0.9.4",
          "port": 4222
        }
      }
    }
  }
}

Later you decide you want to redeploy the same manifest but use a different version of the NATS server:

{
  "jobs": {
    "job::/sandbox/admin::nats-server-latest": {
      "docker": {
        "image": "nats:latest"
      }
    },
    "job::/sandbox/admin::nats-client": {
      "docker":
        {
          "image": "apcerademos/nats-ping:latest"
        },
      "links": {
        "NATS": {
          "fqn": "job::/sandbox/admin::nats-server-latest",
          "port": 4222
        }
      }
    }
  }
}

After deploying the manifest as shown above, the job link on the NATS client application still refers to the nats-server-0.9.4 job, not nats-server-latest. To resolve this, first delete the existing job link by adding "delete": true to the job link declaration in the manifest and redeploying:

{
  "jobs": {
    "job::/sandbox/admin::nats-server-latest": {
      ...
    },
    "job::/sandbox/admin::nats-client": {
      ...,
      "links": {
        "NATS": {
          "fqn": "job::/sandbox/admin::nats-server",
          "port": 4222,
          "delete": true
        }
      }
    }
  }
}

To create the desired job link, update the manifest to point to the nats-server-latest job and remove the "delete" field (or set its value to false), as shown below.

{
  "jobs": {
    "job::/sandbox/admin::nats-server-latest": {
      ...
    },
    "job::/sandbox/admin::nats-client": {
      ...,
      "links": {
        "NATS": {
          "fqn": "job::/sandbox/admin::nats-server-latest",
          "port": 4222,
          "delete": false
        }
      }
    }
  }
}

Microservice example

Check out the NATS based microservice multi-resource manifest example at the Apcera Sample Apps repository on GitHub.

Auto-scaling examples

Threshold example

The following manifest uses the threshold auto-scaling method to maintain a network request latency between 100 and 1000 milliseconds. The auto-scaler samples request latency metrics for 10 seconds before making an auto-scaling decision.

{
  "jobs": {
    "job::/sandbox/admin::myapp": {
      "packages": [
        { "fqn": "package::/sandbox/admin::myapp-pkg" }
      ],
      "routes": [
        {
          "type": "http",
          "endpoint": "auto",
          "config": {
            "/": [
              { "port": 0 }
            ]
          }
        }
      ],
      "instances": 1,
      "autoscaling": {
        "max_instances": 20,
        "min_instances": 1,
        "observation_interval_secs": 10,
        "rule": {
          "type": "threshold",
          "metric": "request_latency",
          "config": {
            "upper_threshold": 1000,
            "lower_threshold": 100,
            "scale_up_delta": 1,
            "scale_down_delta": 1
          }
        }
      }
    }
  }
}