APC CLI Reference

This section provides reference information for all APC commands.

apc

APC is the Apcera Platform command-line tool.

See also Using APC.

Usage

apc COMMAND [command-specific-options]

Global flags

  -ns, --namespace NS   - Run your command in a different namespace.

  --batch               - Disable interactive input for a command. Use the
                          APC_BATCH env var to implicitly set this mode.

  -q, --silent          - Disable all input and output. Automatically enables
                          batch mode.

  -h, --help            - View a specific command's help.

  --trace               - Write trace output from API calls into an 'apc.log'
                          file in your current directory.

  --html                - Show tables with HTML formatting.
  --ascii               - Show tables with ASCII formatting.
  --markdown            - Show tables with Markdown formatting.
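
For example, global flags can be combined with any command; the namespace and app name shown here are illustrative:

  $ apc app list --namespace /sandbox/example --markdown

  $ apc app delete sample1 --batch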

Subcommands

  app              - Manage apps
  audit            - View audit log entries
  capsule          - Manage capsules
  changelog        - See APC's recent product updates
  cluster          - View cluster related information
  cntmp info       - View .cntmp file contents
  docker           - Create and connect to Docker images
  drain            - Manage app syslog drains
  event            - Stream events from the Apcera Platform
  export all       - Export all cluster objects to a single local package file
  gateway          - Manage service gateways
  glossary         - Learn about the Apcera Platform's terms and concepts
  import           - Import local files into your cluster
  job              - Manage jobs
  login            - Sign in to your cluster
  logout           - Sign out of your cluster
  manifest         - Manage manifests
  namespace        - Set your cluster namespace
  network          - Manage Virtual Networks
  package          - Manage packages
  pipeline         - Manage semantic pipelines
  policy           - Manage policy
  provider         - Manage service providers
  revert           - Revert to an older APC version (if locally available)
  route            - Manage app routes
  rule             - Manage rules governing services and semantic pipelines
  secret           - Manage secrets
  service          - Manage services
  stager           - Manage stagers
  staging pipeline - Manage staging pipelines
  subnet pool      - Manage subnet pools
  target           - Target your cluster
  update           - Check for APC updates
  version          - View APC's current version

Settings

APC settings are saved in a .apc file, located by default in your home directory. To change this location, set the APC_HOME environment variable to the desired path.
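
For example, to keep APC settings somewhere other than your home directory (the path below is illustrative), set APC_HOME before running APC; subsequent commands then read and write the .apc file under that path:

  $ export APC_HOME=/opt/apc-home

  $ apc version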

help

The help command provides inline help for APC commands and subcommands.

Usage:

apc help COMMAND [command-specific-options]

apc COMMAND [command-specific-options] --help

apc COMMAND [command-specific-options] -h

Examples:

apc help app

apc help job

apc help app <subcommand>

apc help capsule <subcommand>

apc help docker <subcommand>

apc help cluster <subcommand>

apc help manifest <subcommand>

apc cluster tag <subcommand> --help

apc cluster instance-manager <subcommand> -h

app

The app command performs operations on apps running in the Apcera Platform.

See also Deploying Apps.

Usage

apc app <subcommand> <required args> [optional args]

Subcommands

  attract      - Establish a scheduling affinity between apps
  autoscale    - Updates an app's autoscaling configuration
  connect      - Connects to an app via SSH
  console      - Connect to a temporary capsule for your app
  create       - Creates a new app
  delete       - Deletes an existing app
  deploy       - Deploys updated code to an existing app
  export       - Export app(s) to a single package file
  filecopy     - Transfers a file securely between the app and your machine
  from package - Create an application from a package
  health       - View the health of an app
  instances    - View state of an app's instances
  link         - Link one app to another
  list         - Lists existing apps
  logs         - Tails an app's logs
  pipe         - Securely connect to an app as an alternative to ssh in programs like scp
  repel        - Remove a scheduling affinity from apps
  restart      - Restart an app
  run          - Runs a single command in your app context
  show         - Shows more detailed information about an app
  start        - Starts an app
  stop         - Stops an app
  stats        - Retrieves an app's current statistics
  update       - Updates an app's individual properties

app attract

The app attract command establishes an affinity for instances of <target-app> to start on instance managers running instances of <attracted-app>. Instances of <target-app> will be able to run on other instance managers if none satisfying this criterion are available, or those that do are sufficiently loaded. The --hard flag makes this affinity a hard requirement, such that only instance managers that are running <attracted-app> will be able to run <target-app>. Note that this command will not impact the scheduling of <attracted-app>.

See also Job Affinity.

Usage

apc app attract <target-app> --to <attracted-app> [optional-args]

Options

  -t, --to APP         - App to attract <target-app> to.

  --hard               - Establishes a hard requirement for instances of
                         <target-app> to run on instance managers which
                         are running <attracted-app>.

  -r, --remove         - Remove an existing attraction.

  --restart            - Restart the app after applying the update if the app
                         is currently running.

Example

  $ apc app attract foo --to bar --hard
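
To remove an existing attraction later, the --remove flag can be used with the same pair of apps (names are illustrative):

  $ apc app attract foo --to bar --remove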

app autoscale

The app autoscale command updates an app's autoscaling configuration.

Usage

apc app autoscale <app-name> [optional args]

Options

  --disable      - Disables autoscaling if enabled.

  --min-instances
                 - Minimum number of instances that the app will scale down to.

  --max-instances
                 - Maximum number of instances that the app will scale up to.

  --observation-seconds
  --interval
                 - Interval of time to monitor the value of the metric.

  --warmup-seconds
  --warmup
                 - Interval of time to wait once the application has started to
                   run and before proceeding to the next action.


  --metric       - Name of the metric whose value will be monitored and upon
                   which scaling decisions are based: cpu_per_second (avg ms/s),
                   requests_per_second (avg req/s), request_latency (avg ms).


  --threshold-upper
  --upper-threshold
                 - Threshold value of the metric above which a scaling action should
                   be triggered.

  --threshold-lower
  --lower-threshold
                 - Threshold value of the metric below which a scaling action should
                   be triggered

  --threshold-scaleup-delta
  --scaleup-delta
  --scaleup-by
                 - Magnitude of a scale-up action.

  --threshold-scaledown-delta
  --scaledown-delta
  --scaledown-by
                 - Magnitude of a scale-down action.

  --threshold-window
                 - Number of seconds during which the value of the metric must be
                   beyond the same threshold before an action gets triggered.

  --pid-setpoint
  --setpoint
                 - Target value of the metric to maintain using PID controller scaling method.
                   It is a strictly positive number: the metric must take values lower and higher
                   than the setpoint for PID control to work (it must be able to apply
                   corrective deltas in both directions).

  --pid-kp
  --kp
                 - Proportional gain: the coefficient of the proportional term of the
                   PID expression; this term dictates the magnitude of the corrective
                   action in proportion to the magnitude of the error. KP influences
                   the magnitude of the action and determines how fast or aggressively
                   the autoscaler will react to changes in the value of the metric.

  --pid-ki
  --ki
                 - Integral gain: the coefficient of the integral term of the PID
                   expression. When the error is steady and small over a long period
                   of time, the proportional term is not effective: its value will be
                   equally small. The integral term is an accumulation of the error,
                   second by second; its value will eventually be significant enough
                   to eliminate the steady small errors; KI influences how significant
                   it will be. This is the key to stability in the long run.

  --pid-kd
  --kd
                 - Derivative gain: the coefficient of the derivative term of the PID
                   expression; this term measures the rate of change of the error. By
                   adding it to the delta, the autoscaler tries to anticipate the next
                   value of the error and act accordingly.

Examples

  • Configure autoscaling using PID method and target cpu utilization.
    apc app autoscale sample --min-instances 1 --max-instances 10 \
    --method pid --setpoint 100 --metric cpu_per_second
    
  • Configure autoscaling using threshold method and upper/lower thresholds.
    apc app autoscale sample --min-instances 1 --max-instances 10 \
    --method threshold --metric request_latency --scaleup-delta 1 \
    --scaledown-delta 1 --lower-threshold 0.1 --upper-threshold 10
    
  • Remove autoscaling configuration.
    apc app autoscale sample --disable
    

app connect

The app connect command opens an SSH session with the specified app. If your environment is proxied, you should target your cluster over HTTPS before connecting to an app container.

Note: for apps with more than one instance, app connect does not reliably connect to the same instance; one instance will be chosen for you. To connect deterministically to a specific instance, use the --instance-id flag with a UUID obtained from the app instances command.

Usage

apc app connect <app-name>

Options

  -i, --instance-id UUID      - UUID of the instance to connect to
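
Examples

The app name below is illustrative; a specific instance UUID can be obtained from apc app instances:

  $ apc app connect sample1

  $ apc app connect sample1 -i <instance-uuid>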

app console

The app console command connects to a clone of your app running within a capsule. This includes all service bindings present on the original app, and provides an easy way to debug, set up, or bootstrap your app.

Single-binary apps, such as Apcera-provided apps or some Docker images, cannot be used with app console because they do not mount an operating system.

Usage

apc app console <app-name>

Examples

$ apc app console sample1

app create

The app create command creates a new application with the provided properties. <app-name> is required, and the default path is your current working directory. You can configure some of the options interactively, and use arguments for the others.

By default, applications are assigned a port at runtime, which is exposed via the "PORT" environment variable. If nothing is listening at that port, your application will fail to start. If your app is not a web app and is intended to have no listening ports, use the --disable-routes flag when creating your app.

You can place a manifest file in the app's directory, or provide a path to a manifest file via the --config flag. See apc help manifest for further details.

Usage

apc app create <app-name> [optional-args]

Options

  -p, --path PATH         - Path to the app you are deploying.
                            (default: current path)

  --package NAME          - Package name for the app you are deploying. Cannot be
                            provided if --path is set.

  --config PATH           - Path to the custom app manifest file, if present.
                            (default: '<app-directory>/continuum.conf')

  -sp, --staging NAME     - Staging Pipeline to use when deploying the app.
                            (default: auto-detected)

  -r, --routes ROUTES     - Routes to register with the application, as a comma-
                            separated list. Routes must not contain any characters
                            forbidden in URLs, per RFC 3986. Routes are applied on
                            the chosen port.
                            (default: <app-name>-<6chars>.<cluster-domain>)

  -dr, --disable-routes   - Disable auto-generation of routes and ports, if no
                            route is provided.

  -ho, --https-only bool  - Add routes as HTTPS only. This applies to
                            auto-generated routes and routes added via '--routes'.
                            HTTP requests will be redirected to HTTPS.
                            (default: false)

  -ae, --allow-egress     - Allow the app open outbound network access.

  --encrypt               - Encrypt the volume the container data is mounted on.

  --allow-ssh             - Allow SSH to the app's container.

  -do, --depends-on       - Specify dependencies for the application with the form
                            'TYPE.NAME', where 'TYPE' is 'os', 'package',
                            'runtime', or 'file'. For instance, to enforce a
                            dependency upon 'ubuntu-14.04' and 'ruby-2.0.0', try
                            '--depends-on os.ubuntu-14.04,ruby-2.0.0'
                            (default: stager picks dependencies for you)

  -i, --instances NUM     - Number of instances of the app
                            (default: 1)

  --restart-mode          - Restart behavior for instances, valid values are
                            "no", "failure", or "always".
                            (default: "always")

  -c, --cpus CPU          - Milliseconds of CPU time per second of physical time.
                            May be greater than 1000ms/second in cases where time
                            is across multiple cores.
                            (default: 0ms/second, uncapped)

  -m, --memory MEM        - Memory to use, in MB.
                            (default: 256MiB)

  -d, --disk DISK         - Amount of disk space to allocate, in MB.
                            (default: 1GiB)

  -n, --netmin NET        - Amount of network throughput to allocate (floor), in Mbps.
                            (default: 5Mbps)

  -nm, --netmax NET       - Amount of network throughput to allow (ceiling), in Mbps.
                            (default: 0Mbps, uncapped)

  -e, --env-set ENV=VAL   - Sets an environment variable on the app. Multiple
                            values can be supplied by invoking multiple times,
                            e.g. "--env-set 'HOME=/usr/local'
                                   --env-set 'PATH=/usr/bin:/opt/local'"

  -pe, --pkg-env ENV=VAL  - Sets environment variables on the app's package.
                            Multiple values can be supplied by invoking multiple
                            times.
                            (Providing a 'STAGER_DEBUG' variable here can activate
                            increased stager logging detail for built-in stagers)

  --package-name NAME     - Name of the package, if it should have a different
                            name than the app.

  --user UID              - Specifies the UID to run the process as.

  --group GID             - Specifies the GID to run the process as.

  --start-cmd CMD         - Command that starts the app, if different than the
                            one set by the stager.

  --stop-cmd CMD          - Command to run when stopping the app.

  --start                 - Starts the application after deploying.

  -t, --timeout           - Time allowed for the app to start, in seconds.
                            (default: 30 seconds)

  --stop-timeout          - Time to allow the stop command to run, in seconds.
                            (default: 5 seconds)

  -ht, --hard-tags        - Hard scheduling tags to add to the app.

  -st, --soft-tags        - Soft scheduling tags to add to the app.

  --client-acl            - Access control list by client IP CIDR.

  --sticky-session-cookies - Turn on browser stickiness using the specified
                             session IDs, comma-separated.

  --network NETWORK       - Network to join the app to.

  --autoscale             - Enable autoscaling. Use either autoscale-threshold or
                             autoscale-pid flags.

  --autoscale-min-instances
                          - Minimum number of instances that the app will scale down to.
                             (default: 1)

  --autoscale-max-instances
                          - Maximum number of instances that the app will scale up to.
                             (default: 1)

  --autoscale-observation-seconds
                          - Interval of time to monitor the value of the metric.
                             (default: 10 seconds)

  --autoscale-warmup-seconds
                          - Interval of time to wait once the application has started to
                             run and before proceeding to the next action.
                             (default: 10 seconds)

  --autoscale-metric      - Name of the metric whose value will be monitored and upon
                             which scaling decisions are based: cpu_per_second (avg ms/s),
                             requests_per_second (avg req/s), request_latency (avg ms).
                             (default: cpu_per_second)

  --autoscale-threshold-lower
                          - Threshold value of the metric below which a scaling action should
                             be triggered

  --autoscale-threshold-upper
                          - Threshold value of the metric above which a scaling action should
                             be triggered.

  --autoscale-threshold-scaleup-delta
                          - Magnitude of a scale-up action.

  --autoscale-threshold-scaledown-delta
                          - Magnitude of a scale-down action.

  --autoscale-threshold-window
                          - Number of seconds during which the value of the metric must be
                             beyond the same threshold before an action gets triggered.

  --autoscale-pid-setpoint
                          - Target value of the metric to maintain. It is a strictly positive
                             number: the metric must take values lower and higher than the
                             setpoint for PID control to work (it must be able to apply
                             corrective deltas in both directions).

  --autoscale-pid-kp      - Proportional gain: the coefficient of the proportional term of the
                             PID expression; this term dictates the magnitude of the corrective
                             action in proportion to the magnitude of the error. KP influences
                             the magnitude of the action and determines how fast or aggressively
                             the autoscaler will react to changes in the value of the metric.

  --autoscale-pid-ki      - Integral gain: the coefficient of the integral term of the PID
                             expression. When the error is steady and small over a long period
                             of time, the proportional term is not effective: its value will be
                             equally small. The integral term is an accumulation of the error,
                             second by second; its value will eventually be significant enough
                             to eliminate the steady small errors; KI influences how significant
                             it will be. This is the key to stability in the long run.

  --autoscale-pid-kd      - Derivative gain: the coefficient of the derivative term of the PID
                             expression; this term measures the rate of change of the error. By
                             adding it to the delta, the autoscaler tries to anticipate the next
                             value of the error and act accordingly.

Examples

  $ apc app create sample --start

  $ apc app create sample -r "sample.example.com" -sp "/dev::custom-ruby" --start

  $ apc app create sample --depends-on os.ubuntu-13.10,runtime.ruby-1.9.3-p547

  $ apc app create sample -e 'API_TOKEN="1234abcd"'

  $ apc app create sample -pe "STAGER_DEBUG=vv"

  $ apc app create sample --client-acl="128.66.3.0/24, deny 128.66.0.0/16"

app delete

The app delete command deletes an existing app from your cluster. It also attempts to clean up packages and bindings associated with the app. It will not remove any services to which the app was bound.

Usage

apc app delete <app-name> [optional-args]

Options

  --config PATH           - Path to the custom app manifest file, if present.
                            (default: '<app-directory>/continuum.conf')

  --delete-services, -ds  - Also delete services bound to this app.

Examples

  $ apc app delete sample

  $ apc app delete /prod/retail::sample-app

  $ apc app delete --config sample-app.conf

app deploy

The app deploy command lets you deploy a new code package to an existing app in your cluster. The modified app must pass through the indicated staging pipeline and successfully start before it replaces the previous version.

Usage

apc app deploy <app-name> [optional args]

Options

  -p, --path PATH         - Path to the app being deployed.
                            (default: current path)

  -sp, --staging PIPELINE - Staging pipeline to process the new app package
                            before restarting it.
                            (default: auto-detected)

  --package-name NAME     - Name of the new package, if different than app's name.

  --keep-previous         - Keeps the previous deployed package.
                            (default: false)

  --start-cmd CMD         - Command to start the app, if different than the one
                            set in its staging pipeline.
  --stop-cmd CMD          - Command to stop the app.


  --start                 - Starts the app after deploying.
                            (default: false)

Examples

  $ apc app deploy sample1

app export

The app export command exports one or more apps from an Apcera Platform cluster into a cntmp file, which can then be imported into other Apcera Platform clusters.

Usage

apc app export <app-name> [...]
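
For example, several apps can be exported in one invocation (names are placeholders):

  $ apc app export myapp1 myapp2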

app filecopy

The app filecopy command transfers a file securely between the specified app and your machine. If your environment is proxied, you should target your cluster over HTTPS before connecting to an app instance.

Note: for apps with more than one instance, app filecopy does not reliably connect to the same instance; one instance will be chosen for you. To connect deterministically to a specific instance, use the --instance-id flag with a UUID obtained from the app instances command.

Your app must have an scp binary installed and available on its $PATH to use the filecopy command.

Usage

apc app filecopy <app-name> <file-path> [optional-args]

Options

  -i, --instance-id UUID            - UUID of the instance to transfer to

  -r, --remote-path PATH            - Path on the app for the file

  -dl, --download                   - Transfer file from the app to your local machine.

Examples

  $ apc app filecopy myapp myfile

  $ apc app filecopy myapp myfile -r ./path/remotefile -dl

app from package

The app from package command creates a new application from an existing package within the Apcera Platform, applying any user-supplied options. Some options are required, and can be set interactively or via command-line flags.

This command can also be used to take a capsule snapshot package and turn that into a job within the Apcera Platform.

Usage

apc app from package <app-name> -p <pkg-name> [optional-args]

Options

  -p, --package NAME      - Package name for the app you are deploying.
                            (required)

  --config PATH           - Path to the custom app manifest file, if present.
                            Any package specific items like package environment
                            variables or staging pipelines will be ignored.
                            (default: '<app-directory>/continuum.conf')

  -r, --routes ROUTES     - Routes to register with the application, as a comma-
                            separated list. Routes must not contain any characters
                            forbidden in URLs, per RFC 3986. Routes are applied on
                            the chosen port.
                            (default: <app-name>-<6chars>.<cluster-domain>)

  -dr, --disable-routes   - Disable auto-generation of routes and ports, if no
                            route is provided.

  -ho, --https-only bool  - Add routes as HTTPS only. This applies to
                            auto-generated routes and routes added via '--routes'.
                            HTTP requests will be redirected to HTTPS.
                            (default: false)

  -ae, --allow-egress     - Allow the app open outbound network access.

  --allow-ssh             - Allow SSH to the app's container.

  -i, --instances NUM     - Number of instances of the app
                            (default: 1)

  --restart-mode          - Restart behavior for instances, valid values are
                            "no", "failure", or "always".
                            (default: "always")

  -c, --cpus CPU          - Milliseconds of CPU time per second of physical time.
                            May be greater than 1000ms/second in cases where time
                            is across multiple cores.
                            (default: 0ms/second, uncapped)

  -m, --memory MEM        - Memory to use, in MB.
                            (default: 256MiB)

  -d, --disk DISK         - Amount of disk space to allocate, in MB.
                            (default: 1GiB)

  -n, --netmin NET        - Amount of network throughput to allocate (floor), in Mbps.
                            (default: 5Mbps)

  -nm, --netmax NET       - Amount of network throughput to allow (ceiling), in Mbps.
                            (default: 0Mbps, uncapped)

  -e, --env-set ENV=VAL   - Sets an environment variable on the app. Multiple
                            values can be supplied by invoking multiple times,
                            e.g. "--env-set 'HOME=/usr/local'
                                   --env-set 'PATH=/usr/bin:/opt/local'"

  --package-name NAME     - Name of the package, if it should have a different
                            name than the app.

  --user UID              - Specifies the UID to run the process as.

  --group GID             - Specifies the GID to run the process as.

  --start-cmd CMD         - Command that starts the app, if different than the
                            one set by the stager.

  --stop-cmd CMD          - Command to run when stopping the app.

  --start                 - Starts the application after deploying.

  -t, --timeout           - Time allowed for the app to start, in seconds.
                            (default: 30 seconds)

  --stop-timeout          - Time to allow the stop command to run, in seconds.
                            (default: 5 seconds)

  -ht, --hard-tags        - Hard scheduling tags to add to the app.

  -st, --soft-tags        - Soft scheduling tags to add to the app.

  --client-acl            - Access control list by client IP CIDR.

  --network NETWORK       - Network to join the app to.

Examples

  $ apc app from package sample -p existing-package -r "sample.example.com"

app health

The app health command shows the status and health score of a started app. Apps must be started to view health.

Usage

apc app health <app-name>

Example

  $ apc app health myapp

app instances

The app instances command displays detailed information about the instances of an app, as well as its overall health score.

Usage

apc app instances <app-name>

Example

  $ apc app instances myapp

app link

The app link command links one app to another. The source app gets network connectivity to the given port on the target app.

The source app can connect to the target app by using a special environment variable <name>_URI, where <name> is an uppercase form of the --name argument.

If you are linking to an app with more than one instance, the source app may not reliably connect to the same instance each time.

Usage

apc app link <source-app> --to <target-app> [args]

Options

  -t, --to JOB           - Target app name. (required)

  -n, --name NAME        - Link name (base for the environment variable name).

  -p, --port PORT        - Port on target app to link to.

  -bi, --bound-ip IP     - Specify the IP address that the source app should use
                           to connect to the target app.

  -bp, --bound-port PORT - Specify the port that the source app should use to
                           connect to the target app.

Example

  $ apc app link myclient --to /staging::myserver -p 1234 -n server
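
As a further illustration (names and port are placeholders), linking with '-n database' exposes the target's address to the source app through a DATABASE_URI environment variable:

  $ apc app link web --to /prod::db -p 5432 -n database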

app list

The app list command lists existing apps in your Apcera Platform cluster.

Usage

apc app list

Options

  -l                               - Show apps' fully-qualified names, UUID,
                                     and version number.

  -fp, --filter-by-package PKG     - Only show apps that are using the package
                                     name PKG.

  -fl, --filter-by-label LBL[=VAL] - Only show apps with the given label named LBL present.
                                     If VAL is provided then the label must also have that
                                     value. Multiple values can be supplied by invoking
                                     repeatedly, e.g. "-fl 'X=y' -fl 'W'", resulting in
                                     only showing the apps meeting ALL the label requirements.

Examples

  $ apc app list

  $ apc app list -l --ns /prod

  $ apc app list --filter-by-package package::/apcera/pkg/os::ubuntu-13.10

app logs

Streams app logs to your terminal.

Usage

apc app logs <app-name> [optional-args]

Options

  -l, --lines N       - Number of most recent log lines to show.
  --no-tail           - Just show logs, don't tail.
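
Examples

The app name below is illustrative:

  $ apc app logs sample1

  $ apc app logs sample1 --lines 100 --no-tail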

app pipe

The app pipe command opens an SSH session with the specified app. If your environment is proxied, you should target your cluster over HTTPS before connecting to an app. It is meant to be used as a replacement for ssh in commands such as scp via the -S flag. To use this feature, create a script file that passes all command-line arguments to apc app pipe:

  apc app pipe "$@"

Then run any scp command with the -S flag, e.g.:

  scp -S ./myScript myfile myapp:myremotefile
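
A minimal sketch of such a wrapper script (the script, app, and file names are illustrative):

  # create an executable script that forwards its arguments to apc app pipe
  $ printf '#!/bin/sh\nexec apc app pipe "$@"\n' > apc-pipe.sh
  $ chmod +x apc-pipe.sh

  # use the script in place of ssh when copying a file to the app
  $ scp -S ./apc-pipe.sh myfile myapp:myremotefile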

See also Using SCP to Transfer Files.

app repel

The app repel command establishes an affinity for instances of <target-app> to start on instance managers NOT running instances of <repelled-app>. Instances of <target-app> will be able to run on instance managers running <repelled-app> if none satisfying this criterion are available, or all those that do are sufficiently loaded. The --hard flag makes this affinity a hard requirement, such that only instance managers not running <repelled-app> will be able to run <target-app>. Note that this command will not impact the scheduling of <repelled-app>.

See also Job Affinity.

Usage

apc app repel <target-app> --from <repelled-app> [optional-args]

Options

  --from APP       - App to repel <target-app> from.

  --hard           - Establishes a hard requirement for instances of
                     <repelled-app> to run on instance managers which
                     are not running <target-app>.

  -r, --remove     - Removes an existing repulsion.

  --restart        - Restart the app after applying the update if the app
                     is currently running.

Example

  $ apc app repel foo --from bar --hard

app restart

The app restart command restarts an app.

Usage

apc app restart <app-name>

Examples

  $ apc app restart sample1

  $ apc app restart /prod/dev::sample1

app run

The app run command runs a command within the context of your application. This includes all service bindings present on the original app, and provides an easy way to run one-off scripts or migrations within your app.

Usage

apc app run <app-name> -c <command>

Options

  -c, --command CMD     - Command to run within app context. (Required)

Examples

  $ apc app run sample1 -c "./runscript"

app show

The app show command displays detailed information about an individual app.

In the resulting output, package names with asterisks indicate user-supplied packages (e.g. the app's package), as opposed to system packages.

Usage

apc app show <app-name>

Options

  --compliance     - Adds a column describing whether or not the app is
                     compliant with current policy.

Examples

  $ apc app show sample1

  $ apc app show /::sample1

app start

The app start command starts an app, and sets its state to "running."

Usage

apc app start <app-name>

Examples

  $ apc app start sample1

  $ apc app start /prod/dev::sample-app

app stats

The app stats command shows an app's resource consumption.

Usage

apc app stats <app-name>

Examples

  $ apc app stats sample1

  $ apc app stats /dev::my-app

app stop

The "app stop" command is used to stop a running application.

Usage

apc app stop <app-name>

Options

  -i, --instance-id INSTANCE_UUID   - Stop a single instance of an app. The UUID
                                      can be obtained by running
                                      'apc app instances <name>'.

Examples

  $ apc app stop sample1

  $ apc app stop /prod/dev::website
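
To stop a single instance only (the UUID placeholder would be obtained from apc app instances):

  $ apc app stop sample1 -i <instance-uuid>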

app update

The app update command updates an app's properties. Specify options and values to update, and your changes will be applied.

For example, you can add (and delete) start commands and environment variables. If there is more than one process on the job to update, specify it with the --proc flag. Typically, there is only one process.

Some job properties cannot be updated while the job is running. If a property can't be updated, APC will instruct you to shut down the active job before it will allow the update.

Usage

apc app update <app-name> [optional args]

Options

  -proc, --process NAME         - Specifies the process to update, which is
                                  necessary for updating env variables or the
                                  start command. Only required in cases where
                                  the app has more than one process.

  --user UID                    - Specifies the UID to run the process as.

  --group GID                   - Specifies the GID to run the process as.

  Port options:
    -pa, --port-add NUMBER      - Expose a port.

    -pd, --port-delete NUMBER   - Remove an exposed port.

    -o, --optional              - Port is optional; app is considered
                                  healthy even if the port doesn't respond.

  -ae, --allow-egress           - Allow the app open outbound network access.
  --remove-egress               - Close open outbound network access. Note:
                                  cannot supply other flags with this.

  --allow-ssh                   - Allow SSH to the app's container.
  --remove-ssh                  - Remove ability to SSH to the app's container.

  --restart                     - Restart the app after applying the update if the app
                                  is currently running.

  --network NETWORK             - Restart the app to join the network specified.
    -leave                                  - Restart the app to leave the network specified.
    -ma, --multicast-add <CIDR-Address>     - Updates the app to add a multicast route.
    -md, --multicast-delete <CIDR-Address>  - Updates the app to remove a multicast route.
    -be, --broadcast-enable                 - Updates the app to add the broadcast route.
    -bd, --broadcast-disable                - Updates the app to remove the broadcast route.

  --restart-mode                - Restart behavior for instances, valid values are
                                  "no", "failure", or "always".
                                  (default: "always")

  --pkg-add NAME=PATH           - Adds the package specified by NAME to
                                  the app, installed at PATH. If no PATH
                                  specified, defaults to /.

  --pkg-delete NAME             - Removes the package specified by NAME
                                  from the app.

  -i, --instances NUM           - Updates the app's number of instances.
                                  (default: 0)

  -sc, --start-cmd CMD                - Adds a start command on a process.
  -remove-sc, --remove-start-cmd      - Remove the start command on a process.
  --stop-cmd CMD                      - Adds a stop command on a process.
  --remove-stop-cmd                   - Remove the stop command on a process.

  -e, --env-set ENV=VAL   - Sets an environment variable on the app. Multiple
                            values can be supplied by invoking multiple times,
                            e.g. "--env-set 'HOME=/usr/local'
                                   --env-set 'PATH=/usr/bin:/opt/local'"

  -eu, --env-unset ENV          - Unsets environment variables on a process.

  -t, --timeout                 - Number of seconds the system will wait for an
                                  app to start before triggering a failure. If
                                  the app is composed of more than one process,
                                  you must specify one using the '--proc' flag.

  --stop-timeout                 - Number of seconds the system will wait for an
                                  app to stop before tearing it down. If
                                  the app is composed of more than one process,
                                  you must specify one using the '--proc' flag.

  -c, --cpus CPU        - Milliseconds of CPU time per second of physical time.
                          May be greater than 1000ms/second in cases where time
                          is across multiple cores.
                          (default: 0ms/second, uncapped)

  -m, --memory MEM      - Memory the app will use, in MB.
                          (default: 256MiB)

  -d, --disk DISK       - Amount of disk space to allocate, in MB.
                          (default: 1GiB)

  -n, --netmin NET      - Amount of network throughput to allocate (floor), in Mbps.
                          (default: 5Mbps)

  -nm, --netmax NET     - Amount of network throughput to allow (ceiling), in Mbps.
                          (default: 0Mbps, uncapped)

  -ha, --hard-tags-add  - Hard scheduling tags to add to the app.

  -hd, --hard-tags-del  - Hard scheduling tags to delete from the app.

  -sa, --soft-tags-add  - Soft scheduling tags to add to the app.

  -sd, --soft-tags-del  - Soft scheduling tags to delete from the app.

  --client-acl          - Access control list by client IP CIDR.

Examples

  $ apc app update sample --port-add 3306

  $ apc app update sample --pkg-add ubuntu=/install/path,/dev/prod::node=/usr/www

  $ apc app update sample --env-set PATH=/usr/local/sbin --env-set HOME=/home/apcera

  $ apc app update sample --env-set START_CMD="./foo bar cmd" --env-set HOME=/usr/foo

  $ apc app update sample --client-acl="128.66.3.0/24, deny 128.66.0.0/16"

  $ apc app update sample --network mynetwork -ma 225.1.1.0/24

audit

The audit command queries the cluster for audit log entries. Use the command options below to filter the list for the entries of interest.

The cluster limits the number of entries returned for any one request; use the --limit and --offset options below to page through additional results.

See also Audit Logs.

Usage

apc audit [optional-args]

Options

  --start DATE           - Filter for audit entries after the given DATE.

  --end DATE             - Filter for audit entries before the given DATE.

  --event-type TYPE      - Filter for audit entries with the given TYPE.
                           See Audit Logs for more details.

  --fqn FQN              - Filter for audit entries with the given FQN.

  --limit NUM            - Limit returned result size to NUM.

  --offset NUM           - Ignore the first NUM results.

  --reverse              - Reverse the default ordering of the results.

  --json                 - Show raw JSON results from the server. No pagination
                           occurs if this option is set.

  --verbose              - View payloads for every entry.

Examples

  $ apc audit --start="Aug 16 2015" --event-type=job.stop
  $ apc audit --end Aug-16-2015-8:15pm-UCT --limit=20 --offset=120
  $ apc audit --fqn=job::/sandbox/admin/test --reverse

capsule

The capsule command lets you run capsules in your Apcera Platform cluster.

Capsules are a blend between the lightweight containers generally used for applications and the full servers traditionally provided by virtual machines.

See also Creating Capsules.

Usage

apc capsule <subcommand> <required args> [optional args]

For additional help, type 'apc help capsule <subcommand>'.

Subcommands

  attract      - Establish a scheduling affinity between capsules
  connect      - Connects to a capsule via SSH
  create       - Creates a new capsule with the specified operating system
  delete       - Shuts down and deletes an existing capsule
  export       - Exports capsule(s) to a single package file
  filecopy     - Transfers a file securely between the capsule and your machine
  health       - View the health of a capsule
  link         - Link one capsule to another
  list         - Lists your existing capsules
  logs         - Tail capsule logs
  pipe         - Securely connect to a capsule as an alternative to ssh in programs like scp
  repel        - Remove a scheduling affinity from capsules
  restart      - Restarts a capsule
  show         - Shows information about a capsule
  snapshot     - Creates a snapshot of a capsule, in order to persist changes
  start        - Starts a capsule
  stop         - Stops a capsule
  stats        - Displays a capsule's resource stats
  update       - Updates a capsule's individual properties

capsule attract

The capsule attract command establishes an affinity for instances of <target-capsule> to start on instance managers running instances of <attracted-capsule>. Instances of <target-capsule> will be able to run on other instance managers if none satisfying this criterion are available, or those that do are sufficiently loaded. The --hard flag makes this affinity a hard requirement, such that only instance managers that are running <attracted-capsule> will be able to run <target-capsule>. Note that this command will not impact the scheduling of <attracted-capsule>.

Usage

apc capsule attract <target-capsule> --to <attracted-capsule> [optional-args]

Options

  -t, --to CAPSULE     - Capsule to attract <target-capsule> to.

  --hard               - Establishes a hard requirement for instances of
                         <target-capsule> to run on instance managers which
                         are running <attracted-capsule>.

  -r, --remove         - Remove an existing attraction.

  --restart            - Restart the capsule after applying the update if the
                         capsule is currently running.

Examples

  $ apc capsule attract foo --to bar --hard

capsule connect

The capsule connect command opens an SSH session with the specified capsule. If your environment is proxied, you should target your cluster over HTTPS before connecting to a capsule.

If your capsule has more than one instance, one will be chosen to connect to. You may deterministically connect to the same instance by using the --instance-id flag, providing a UUID available from the job instances command.

Usage

apc capsule connect <capsule-name>

Options

  -i, --instance-id UUID      - UUID of the instance to connect to

Examples

  $ apc capsule connect mymachine

capsule create

The capsule create command creates a new capsule inside the Apcera Platform. A capsule is Apcera's term for a lightweight, VM-like container.

You must supply either an image or a package (e.g. a capsule snapshot). Images and packages will resolve within your current namespace.

You can find a list of available base images using this command:

  $ apc package list --type=os

Usage

apc capsule create <capsule-name> [optional-args]

Options

  -i, --image NAME      - Name of the base image to use when creating the capsule
                          (no default)

  -p, --package NAME    - Name of the package to use when creating the capsule
                          (no default)

  -s, --snapshot NAME   - Name of the snapshot to use when creating the capsule
                          (no default)

  -ae, --allow-egress   - Allow the capsule open outbound network access.

  --encrypt             - Encrypt the volume the container data is mounted on.

  -c, --cpus CPU        - Milliseconds of CPU time per second of physical time.
                          May be greater than 1000ms/second in cases where time
                          is across multiple cores.
                          (default: 0ms/second, uncapped)

  -m, --memory MEM      - Memory the capsule should use, in MB
                          (default: 256MiB)

  -d, --disk DISK       - Disk space to allocate, in MB
                          (default: 1GiB)

  -n, --netmin NET      - Network throughput to allocate (floor), in Mbps
                          (default: 5Mbps)

  -nm, --netmax NET     - Amount of network throughput to allow (ceiling), in Mbps.
                          (default: 0Mbps, uncapped)

  -net, --network NAME  - Name of network to be joined when job is started.

  -e, --env-set ENV=VAL - Sets an environment variable on the capsule. Multiple
                          values can be supplied by invoking multiple times,
                          e.g. "--env-set 'HOME=/usr/local'
                                 --env-set 'PATH=/usr/bin:/opt/local'"

  -ht, --hard-tags      - Hard scheduling tags to add to the capsule.

  -st, --soft-tags      - Soft scheduling tags to add to the capsule.

  --client-acl          - Access control list by client IP CIDR.

Examples

  $ apc capsule create mymachine --image ubuntu-14.04

  $ apc capsule create mymachine -p mymachine-snapshot -m 1GiB -c 400

capsule delete

The capsule delete command deletes an existing capsule from your cluster.

Usage

apc capsule delete <capsule-name>

Options

  --delete-services, -ds  - Also delete services bound to this capsule.

Examples

  $ apc capsule delete sample1

  $ apc capsule delete /dev::sample-capsule

capsule export

The capsule export command exports one or more capsules from an Apcera Platform cluster into a cntmp file, which can then be imported into other Apcera Platform clusters.

Usage

apc capsule export <capsule-name> [...]
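
As with app export, several capsules can be exported in one invocation (names are placeholders):

  $ apc capsule export mycapsule1 mycapsule2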

capsule filecopy

The capsule filecopy command transfers a file securely between the specified capsule and your machine. If your environment is proxied, you should target your cluster over HTTPS before connecting to a capsule.

If your capsule has more than one instance, one will be chosen to connect to. You may deterministically connect to the same instance by using the --instance-id flag, providing a UUID available from the job instances command.

Your capsule must have an scp binary installed and available on its $PATH to use the filecopy command.

Usage

apc capsule filecopy <capsule-name> <file-path> [optional-args]

Options

  -i, --instance-id UUID            - UUID of the instance to transfer to

  -r, --remote-path PATH            - Path on the capsule for the file

  -dl, --download                   - Transfer file from the capsule to your local machine.

Examples

  $ apc capsule filecopy mycapsule myfile
  $ apc capsule filecopy mycapsule myfile -r ./path/remotefile -dl

capsule health

The capsule health command shows the status and health score of a started capsule. Capsules must be started to view health.

Usage

apc capsule health <capsule-name>

Example

  $ apc capsule health myjob

capsule link

The capsule link command links one capsule to another. The source capsule gets network connectivity to the given port on the target capsule.

The source capsule can connect to the target capsule by using a special environment variable <name>_URI, where <name> is an uppercase form of the --name argument.

If you are linking to a capsule with more than one instance, the source capsule may not reliably connect to the same instance each time.

Usage

apc capsule link <source-capsule> --to <target-capsule> [args]

Options

  -t, --to JOB           - Target capsule name. (required)

  -n, --name NAME        - Link name (base for the environment variable name).

  -p, --port PORT        - Port on target capsule to link to.

  -bi, --bound-ip IP     - Specify the IP address that the source capsule should
                           use to connect to the target capsule.

  -bp, --bound-port PORT - Specify the port that the source capsule should use
                           to connect to the target capsule.

Example

  $ apc capsule link myclient --to /staging::myserver -p 1234 -n server

capsule list

The capsule list command lists your cluster's existing capsules.

Usage

apc capsule list

Options

  -l                               - Show capsules' fully-qualified names, UUID,
                                     and version number.

  -fp, --filter-by-package PKG     - Only show capsules that are using the package
                                     name PKG.

  -fl, --filter-by-label LBL[=VAL] - Only show capsules with the given label named LBL present.
                                     If VAL is provided then the label must also have that
                                     value. Multiple values can be supplied by invoking
                                     repeatedly, e.g. "-fl 'X=y' -fl 'W'", resulting in
                                     only showing the capsules meeting ALL the label requirements.

Examples

  $ apc capsule list

  $ apc capsule list --namespace baz -l

capsule logs

Streams capsule logs to your terminal.

Usage

apc capsule logs <capsule-name> [optional-args]

Options

  -l, --lines N       - Number of most recent log lines to show.
  --no-tail           - Just show logs, don't tail.
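
For example (the capsule name is illustrative):

  $ apc capsule logs mycapsule --lines 50 --no-tail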

capsule pipe

The capsule pipe command opens an SSH session with the specified capsule. If your environment is proxied, you should target your cluster over HTTPS before connecting to a capsule. It is meant to be used as a replacement for ssh in commands such as scp via the -S flag. To use this feature, create a script file that passes all command-line arguments to apc capsule pipe:

  apc capsule pipe "$@"

Then run any scp command with the -S flag, e.g.:

  scp -S ./myScript myfile mycapsule:myremotefile

See also Using SCP to Transfer Files.

capsule repel

The capsule repel command establishes an affinity for instances of <target-capsule> to start on instance managers NOT running instances of <repelled-capsule>. Instances of <target-capsule> will be able to run on instance managers running <repelled-capsule> if none satisfying this criterion are available, or all those that do are sufficiently loaded.

The --hard flag makes this affinity a hard requirement, such that only instance managers not running <repelled-capsule> will be able to run <target-capsule>. Note that this command will not impact the scheduling of <repelled-capsule>.

Usage

apc capsule repel <target-capsule> --from <repelled-capsule> [optional-args]

Options

  --from CAPSULE   - Capsule to repel <target-capsule> from.

  --hard           - Establishes a hard requirement for instances of
                     <repelled-capsule> to run on instance managers which
                     are not running <target-capsule>.

  -r, --remove     - Removes an existing repulsion.

  --restart        - Restart the capsule after applying the update if the
                     capsule is currently running.

Example

  $ apc capsule repel foo --from bar --hard

capsule restart

The capsule restart command restarts a capsule.

Usage

apc capsule restart <capsule-name>

Examples

  $ apc capsule restart sample1

capsule show

The capsule show command shows detailed information about individual capsules.

Usage

apc capsule show <capsule-name>

capsule snapshot

The capsule snapshot command creates a new snapshot of an existing capsule's changed files, and puts them in a timestamped package which can be used to create jobs or additional capsules.

If the client times out, the snapshot continues server-side until it either completes or fails.

See also Taking Capsule Snapshots.

Usage

apc capsule snapshot <capsule-name> [optional args]

Options

  -n, --name NAME         - Name of the snapshot package to be created.
                            (default: "snapshot-<capsule-name>-<time>")

  -d, --directory PATH    - Directory within the capsule to snapshot.
                            (default: "", for the root)

  -t, --timeout TIME      - Time (in minutes) after which the client will time
                            out. The snapshot will continue server-side.
                            (default: 2 minutes)

  -i, --instance-id UUID  - UUID of the instance to snapshot.
                            (default: selects single running instance)

  --link                  - Link the snapshot with the capsule. This will
                            trigger a restart of the capsule.

Examples

  $ apc capsule snapshot mymachine
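
For instance, to give the snapshot package a specific name and link it back to the capsule, which restarts the capsule (the name is illustrative):

  $ apc capsule snapshot mymachine --name mymachine-base --link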

capsule start

The capsule start command starts a capsule and sets its state to "running."

Usage

apc capsule start <capsule-name>

Examples

  $ apc capsule start sample1

  $ apc capsule start /prod::my-cap

capsule stop

The capsule stop command stops a running capsule.

Usage

apc capsule stop <capsule-name>

Options

  -i, --instance-id INSTANCE_UUID   - Stop a single instance of a capsule. The
                                      UUID can be obtained by running
                                      'apc job instances <name>'

Examples

  $ apc capsule stop sample1

  $ apc capsule stop /dev::my-capsule

capsule stats

The capsule stats command shows a capsule's resource consumption.

Usage

apc capsule stats <capsule-name>

Examples

  $ apc capsule stats sample1

capsule update

The capsule update command updates a capsule's properties. Specify the options and values to update, and your changes will be applied.

For example, you can add (and delete) start commands and environment variables. If there is more than one process on the capsule to update, specify it with the --proc flag. Typically, there is only one process.

Some capsule properties can't be updated while capsules are running. If a property can't be updated, APC will instruct you to shut down the capsule before the update can proceed.

Usage

apc capsule update <capsule-name> [optional args]

Options

  --name NAME                   - Updates the local name of the capsule.

  -proc, --process NAME         - Specifies the process to update, which is
                                  necessary for updating env variables or the
                                  start command. Only required in cases where
                                  the job has more than one process.

  --user UID                    - Specifies the UID to run the process as.

  --group GID                   - Specifies the GID to run the process as.

  Port options:
    -pa, --port-add NUMBER      - Expose a port.

    -pd, --port-delete NUMBER   - Remove an exposed port.

    -o, --optional              - Port is optional; app is considered
                                  healthy even if the port doesn't respond.

  -ae, --allow-egress           - Allow the capsule open outbound network
                                  access.
  --remove-egress               - Close open outbound network access. Note:
                                  cannot supply other flags with this.

  --restart                     - Restart the capsule after applying the
                                  update if the capsule is currently running.

  --network NETWORK             - Restart the capsule to join the network specified.
    -leave                                  - Restart the capsule to leave the network specified.
    -ma, --multicast-add <CIDR-Address>     - Updates the capsule to add a multicast route.
    -md, --multicast-delete <CIDR-Address>  - Updates the capsule to remove a multicast route.
    -be, --broadcast-enable                 - Updates the capsule to add the broadcast route.
    -bd, --broadcast-disable                - Updates the capsule to remove the broadcast route.

  --pkg-add NAME=PATH           - Adds the package specified by NAME to
                                  the job, installed at PATH. If no PATH
                                  specified, defaults to /.

  --pkg-delete NAME             - Removes the package specified by NAME
                                  from the job.

  -t, --timeout                 - Number of seconds the system will wait for a
                                  job to start before triggering a failure. If
                                  the job is composed of more than one process,
                                  you must specify one using the '--proc' flag.

  -i, --instances NUM         - Updates a capsule's number of instances.
                                (default 0)

  --stop-cmd CMD                      - Adds a stop command on a process.
  --remove-stop-cmd                   - Remove the stop command on a process.

  -e, --env-set ENV=VAL   - Sets an environment variable on the capsule. Multiple
                            values can be supplied by invoking multiple times,
                            e.g. "--env-set 'HOME=/usr/local'
                                   --env-set 'PATH=/usr/bin:/opt/local'"

  -eu, --env-unset ENV        - Unsets environment variables on a process.

Examples

  $ apc capsule update foo --port-add 3306

  $ apc capsule update foo --pkg-add ubuntu=/install/path,/dev/prod::node=/usr/www

  $ apc capsule update foo --env-set PATH=/usr/local/sbin --env-set HOME=/home/apcera
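
An update that joins a virtual network and adds a multicast route might look like the following (the network name and multicast CIDR are illustrative placeholders):

  $ apc capsule update foo --network mynetwork -ma 225.1.1.0/24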

changelog

The apc changelog command displays a URL where you can find recent changes to the Apcera Platform.

Usage

apc changelog

cluster

The cluster command lets you view cluster resources.

Cluster refers to the set of hosts that are part of the Apcera Platform. Resource refers to the virtual or physical equipment needed to run a job.

See also Using Job Scheduling Tags.

Usage

apc cluster <subcommand> <required args> [optional args]

Subcommands

  instance-manager - View information about the nodes that run job instances
  tag              - Lists/manages tags in the cluster
  info             - Displays the version of the software running in the Apcera
                     Platform and the name of the administrator.
  usage            - Print usage report

cluster info

The cluster info command reports the version and administrator of the currently targeted cluster.

Usage

apc cluster info

Examples

  $ apc cluster info

cluster instance-manager

The cluster instance-manager command lets you view cluster resources.

Instance managers are able to run jobs. Resource refers to the physical or virtual equipment needed to run a job, specifically memory, CPU, disk, and network. The command displays the reserved amount of each resource and the total available.

Usage

apc cluster instance-manager <subcommand> <required args> [optional args]

Subcommands

  list         - List the instance managers that are able to run job instances
                 and the resources used/available on each.
  show         - View resource details of a specific instance manager,
                 including the running job instances.

cluster instance-manager list

The cluster instance-manager list command displays all the current instance managers, the resources that have been reserved, and the maximum available on each instance manager.

Usage

apc cluster instance-manager list

Examples

  $ apc cluster instance-manager list

cluster instance-manager show

The cluster instance-manager show command displays details for the named instance manager node, and the job instances running on that node. The argument can be the full hostname or any part of the hostname or UUID. The command will return as many nodes as match the query. The resource information returned for memory, disk, and network is the reserved resource for each instance.

Usage

apc cluster instance-manager show <hostname-or-uuid>

Examples

  $ apc cluster instance-manager show 22d3b583-awshost

cluster tag

The cluster tag subcommand allows viewing of the scheduling tags assigned to nodes that can run instances of jobs.

Usage

apc cluster tag

Subcommands

  list         - List scheduling tags in the system

cluster tag list

The cluster tag list command lists all scheduling tags on your cluster's instance managers and the number of instance managers that implement each tag.

Usage

apc cluster tag list

Examples

  $ apc cluster tag list

cluster usage

The cluster usage command generates a usage report for the specified time period.

Note: When exporting CSV data, the memory values are listed in bytes. If the format is set to plain, memory is displayed in GB (as opposed to GiB).

Usage

apc cluster usage [optional args]

Options

  --format - Desired format of the output. Can be one of ['plain', 'csv']
             (default: plain)

  --start  - Start date of the report. The format of the time is: "Jan 2, 2006"
             You may use dashes instead of commas and blanks as in: "Feb-21-2014".
             (default: 1 month ago)

  --end    - End date of the report (inclusive). See the '--start' flag for details
             of the expected time format.
             (default: today)

Examples

  $ apc cluster usage

  $ apc cluster usage --format csv --start Dec-22-2016 --end Dec-25-2016

cntmp info

You must use the cntmp command with the info subcommand.

The cntmp info command shows you which packages, jobs, and other data will be imported from a given .cntmp file, without requiring it to be imported.

Usage

apc cntmp info </path/to/tarball-name> [...]
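
For example, to inspect a local export file before importing it (the file name here is an illustrative placeholder):

  $ apc cntmp info ./mypackages.cntmp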

docker

The docker command performs operations on Docker images running in the Apcera Platform.

Deployed Docker images can be managed just like regular Apcera Platform apps.

See also Deploying Docker Jobs.

Usage

apc docker <subcommand> <required args> [optional args]

Subcommands

  run          - Runs a Docker image as an Apcera Platform application
  pull         - Pull a Docker image and create a package from it
  connect      - Connect to Docker image over SSH
  filecopy     - Transfers a file securely between the Docker job and your machine
  link         - Link one Docker job to another
  attract      - Establish a scheduling affinity between Docker jobs
  repel        - Establish a scheduling anti-affinity between Docker jobs
  pipe         - Open an SSH session usable as an scp transport

docker attract

The docker attract command establishes an affinity for instances of <target-job> to start on instance managers running instances of <attracted-job>. Instances of <target-job> will be able to run on other instance managers if none satisfying this criterion are available, or those that do are sufficiently loaded. The --hard flag makes this affinity a hard requirement, such that only instance managers that are running <attracted-job> will be able to run <target-job>. Note that this command will not impact the scheduling of <attracted-job>.

See also Job Affinity.

Usage

apc docker attract <target-job> --to <attracted-job> [optional-args]

Options

  -t, --to JOB         - Job to attract <target-job> to.

  --hard               - Establishes a hard requirement for instances of
                         <target-job> to run on instance managers which
                         are running <attracted-job>.

  -r, --remove         - Remove an existing attraction.

  --restart            - Restart the job after applying the update if the job
                         is currently running.

Example

  $ apc docker attract foo --to bar --hard

docker connect

The docker connect command opens an SSH session with the specified Docker job. If your environment is proxied, you should target your cluster over HTTPS before connecting to a Docker container.

Note: for Docker jobs with more than one instance, docker connect will not reliably connect to the same instance. If your Docker job has more than one instance, one will be chosen to connect to. You may deterministically connect to the same instance by using the --instance-id flag, providing a UUID available from the job instances command.

Usage

apc docker connect <container-name>

Options

  -i, --instance-id UUID      - UUID of the instance to connect to
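
For example, assuming a Docker job named mydocker (an illustrative placeholder), you can connect to it, or to a specific instance of it, as follows:

  $ apc docker connect mydocker

  $ apc docker connect mydocker -i <instance-uuid>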

docker create

Do not use. This command is deprecated in favor of docker pull.

docker filecopy

The docker filecopy command transfers a file securely to the specified Docker job. If your environment is proxied, you should target your cluster over HTTPS before connecting to a Docker job.

Note: for Docker jobs with more than one instance, docker filecopy will not reliably connect to the same instance. If your Docker job has more than one instance, one will be chosen to connect to. You may deterministically connect to the same instance by using the --instance-id flag, providing a UUID available from the job instances command.

Your Docker image must have an scp binary installed and available in its $PATH to use the filecopy command.

Usage

apc docker filecopy <job-name> <file-path> [optional-args]

Options

  -i, --instance-id UUID            - UUID of the instance to transfer to

  -r, --remote-path PATH            - Path on the Docker job for the file

  -dl, --download                   - Transfer file from the Docker job to your local machine.

Examples

  $ apc docker filecopy mydocker myfile

  $ apc docker filecopy mydocker myfile -r ./path/remotefile -dl

docker link

The docker link command links one Docker job to another. The source Docker job gets network connectivity to the given port on the target Docker job.

The source Docker job can connect to the target Docker job by using a special environment variable <name>_URI, where <name> is an uppercase form of the --name argument.

If you are linking to a Docker job with more than one instance, the job may not reliably connect to the same instance each time.

Usage

apc docker link <source-docker-job> --to <target-docker-job> [args]

Options

  -t, --to JOB           - Target Docker job name. (required)

  -n, --name NAME        - Link name (base for the environment variable name).

  -p, --port PORT        - Port on target Docker job to link to.

  -bi, --bound-ip IP     - Specify the IP address that the source Docker job
                           should use to connect to the target Docker job.

  -bp, --bound-port PORT - Specify the port that the source Docker job should
                           use to connect to the target Docker job.

Example

  $ apc docker link myclient --to /staging::myserver -p 1234 -n server

docker pipe

The docker pipe command opens an SSH session with the specified Docker job. If your environment is proxied, you should target your cluster over HTTPS before connecting to a Docker instance. It is meant to be a replacement for ssh in commands like "scp" using the "-S" flag.

To utilize this feature, create a script file that passes all of its command-line arguments to apc docker pipe:

  apc docker pipe "$@"

Then run any scp command with the -S flag, e.g.:

  scp -S ./myScript myfile mydocker:myremotefile
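
A minimal wrapper script, saved here as ./myScript to match the example above (the shebang and comment are illustrative), could contain:

  #!/bin/sh
  # Forward all arguments supplied by scp to apc docker pipe
  apc docker pipe "$@"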

See also Using SCP to Transfer Files.

docker pull

The docker pull command creates a new package from an existing Docker image hosted on a Docker registry. It does not create a job running this package; it only preloads the package into the cluster for later use.

Supported registries include the public Docker Hub, private v1 and v2 Docker registries, Quay Enterprise Registry, Quay.io, Amazon EC2 Container Registry (ECR), and JFrog Artifactory.

Amazon EC2 Container Registry requires the "$ECR_AUTH_TOKEN" environment variable to be set with the authorization token retrieved from AWS for the registry.

See also Pulling Docker Images.

Usage

apc docker pull <package-name> -i <image-name> [args]

Options

  -i, --image IMAGE     - Docker image to create application from. Prefix with
                          an http:// or https:// URL to use a private registry,
                          e.g. http://localhost:5000/app.

  -t, --tag TAG         - Docker image tag (default: 'latest').

  -ru,                  - Username for Docker registry from which image will be pulled.
  --registry-username     Must be supplied in conjunction with registry password.
                          For Quay Enterprise Registries, registry username can be
                          "$oauthtoken".  Not applicable for Amazon Container Registry.

  -rp,                  - Password for Docker registry from which image will be pulled.
  --registry-password     Must be supplied in conjunction with registry username.
                          For Quay Enterprise Registries, if registry username is
                          "$oauthtoken", registry password must be a valid OAuth token.

docker repel

The docker repel command establishes an affinity for instances of <target-job> to start on instance managers NOT running instances of <repelled-job>. Instances of <target-job> will be able to run on instance managers running <repelled-job> if none satisfying this criterion are available, or all those that do are sufficiently loaded.
The --hard flag makes this affinity a hard requirement, such that only instance managers not running <repelled-job> will be able to run <target-job>. Note that this command will not impact the scheduling of <repelled-job>.

Usage

apc docker repel <target-job> --from <repelled-job> [optional-args]

Options

  --from JOB       - Job to repel <target-job> from.

  --hard           - Establishes a hard requirement for instances of
                     <target-job> to run on instance managers which
                     are not running <repelled-job>.

  -r, --remove     - Removes an existing repulsion.

  --restart        - Restart the job after applying the update if the job
                     is currently running.

Example

  $ apc docker repel foo --from bar --hard

docker run

The docker run command creates a new application from a provided Docker image. The Docker image may be pulled from any public or private Docker registry implementing the v1 or v2 Docker registry API.

Supported registries include the public Docker Hub, private v1 and v2 Docker registries, Quay Enterprise Registry, Quay.io, Amazon EC2 Container Registry (ECR), and JFrog Artifactory.

Amazon EC2 Container Registry requires the "$ECR_AUTH_TOKEN" environment variable to be set with the authorization token retrieved from AWS for the registry.

If the image spec for a Docker app specifies that the app requires persisted volumes, or if you specify volumes at the command line, then you must supply the --provider flag.

Note: created volumes must be deleted manually by the user when they are no longer needed.

Usage

apc docker run <app-name> -i <image-name> [args]

Options

  -i, --image IMAGE      - Docker image to create application from. Prefix with
                           an http:// or https:// URL to use a private registry,
                           e.g. http://localhost:5000/app.

  -t, --tag TAG          - Docker image tag (default: 'latest').

  -p, --port PORT        - Port to expose in the created application.

  -r, --routes ROUTES    - Routes to register with the application, as a comma-
                           separated list. Routes must not contain any characters
                           forbidden in URLs, per RFC 3986. Routes are applied on
                           the specified port, or the default (0) port.
                           (default: <app-name>-<6chars>.<cluster-domain>)

  -ho, --https-only bool - Allow only HTTPS routes to application. HTTP requests
                           will be redirected to HTTPS.
                           (default: false)

  -I, --interactive      - Connect to your Docker job in an interactive prompt.
                           Useful for running OS images.

  --unlock-root          - Unlock the root user account within the container.
                           Some Docker images have locked root users, disallowing
                           SSH access. This option must be supplied to enable SSH
                           to those containers.
                           Note: if this option is supplied, the start command
                           from the Docker image metadata will be ignored.

  -rm, --remove          - Remove container after the interactive shell is
                           closed. Must be used with '--interactive' flag.

  --no-start             - Do not start the job after it is created.

  -v, --volume PATH      - Mount a new persisted volume inside the container at
                           the specified path. May be supplied multiple times.

  --ignore-volumes       - Ignore volumes specified in the image spec. (no
                           data will be persisted)

  --provider NAME        - The provider from which volumes are created. NFS and SMB
                           providers are currently supported.

  -s, --start-cmd CMD    - Application start command.

  --restart-mode         - Restart behavior for instances, valid values are "no",
                           "failure", or "always".
                           (default: "no")

  --stop-cmd             - Command to run when app is stopped.

  --timeout              - Time until the app starts up when attempting to
                           start, in seconds.
                           (default: 30 seconds)

  --stop-timeout         - Time to allow the stop command to run, in seconds.
                           (default: 5 seconds)

  -ae, --allow-egress    - Allow fully open outbound network egress from the
                           job.

  --encrypt              - Encrypt the volume the container data is mounted on.

  -u, --user USER        - User to run Docker image as (defaults to user in the
                           image configuration, or 'root' if image doesn't have
                           user configured).

  -g, --group GROUP      - Group to run Docker image as (default: picked by
                           the Apcera Platform's runtime).

  -e, --env-set ENV=VAL    - Sets an environment variable on the app. Multiple
                             values can be supplied by invoking multiple times,
                             e.g. "--env-set 'HOME=/usr/local'
                                   --env-set 'PATH=/usr/bin:/opt/local'"

  -c, --cpus CPU         - Milliseconds of CPU time per second of physical time.
                           May be greater than 1000ms/second in cases where time
                           is across multiple cores.
                           (default: 0ms/second, uncapped)

  -m, --memory MEM       - Memory to use, in MB.
                           (default: 256MiB)

  -d, --disk DISK        - Amount of disk space to allocate, in MB.
                           (default: 1GiB)

  -n, --netmin NET       - Amount of network throughput to allocate (floor), in Mbps.
                           (default: 5Mbps)

  -nm, --netmax NET      - Amount of network throughput to allow (ceiling), in Mbps.
                           (default: 0Mbps, uncapped)

  -net, --network NAME   - Name of network to be joined when job is started.

  -ht, --hard-tags       - Hard scheduling tags to add to the docker workload.

  -st, --soft-tags       - Soft scheduling tags to add to the docker workload.

  -sp, --staging         - Staging Pipeline to use when deploying the docker workload.
                           (default: docker)

  -ru,                   - Username for Docker registry from which image will be pulled.
  --registry-username      Must be supplied in conjunction with registry password.
                           For Quay Enterprise Registries, registry username can be
                           "$oauthtoken".  Not applicable for Amazon Container Registry.

  -rp,                   - Password for Docker registry from which image will be pulled.
  --registry-password      Must be supplied in conjunction with registry username.
                           For Quay Enterprise Registries, if registry username is
                           "$oauthtoken", registry password must be a valid OAuth token.

  --client-acl           - Access control list by client IP CIDR.

  -pl, --pull            - Pull the image from the registry. Otherwise, a local image
                           will be used, if available.
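
For example, two simple invocations might look like the following (the app names, images, and port are illustrative placeholders):

  $ apc docker run myapp -i nginx -p 80

  $ apc docker run mytest -i ubuntu --interactive --remove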

drain

The drain command creates and manages log drains.

Usage

apc drain <subcommand> <required args> [optional args]

Subcommands

  add          - Adds a new drain
  delete       - Deletes an existing drain
  list         - Show an app's drains

drain add

The drain add command attaches a syslog drain to the specified app. <app-name> is required. URLs must be provided in the form of: <scheme>://<host>:<port>.

Usage

apc drain add <drain-url> --app <app-name> [optional-args]

Options

  -a, --app NAME        - Name of the app receiving the drain.
                          (default: none)

  -m, --max NUM         - Maximum bytes per log line to send.
                          (default: 2048 bytes)

Examples

  $ apc drain add syslog://your-drain.com:1234 --max 4096 --app site

drain delete

The drain delete command removes a syslog drain from the specified app. <app-name> is required.

Usage

apc drain delete <drain-url> --app <app-name> [optional-args]

Options

  -a, --app NAME        - Name of the app.
                          (default: none)

Examples

  $ apc drain delete syslog://your-drain.com --app site

drain list

The drain list command lists an app's syslog drains. <app-name> is required.

Usage

apc drain list --app <app-name>

Options

  -a, --app NAME        - Name of the app.
                          (default: none)

Examples

  $ apc drain list --app site

event subscribe

You must use the event command with the subscribe subcommand to create a subscription to an events stream.

The event subscribe command subscribes to the event stream associated with the given topic name and redirects it to standard output. The subscription is terminated when the stream is cancelled, or when the APC process dies.

Each event in the stream is composed of the following items:

  • FQN - FQN representing the resource
  • Source - Source of the event
  • Time - Time at which the event was generated
  • Type - Type of the event
  • Payload - Payload to be sent with the event

See also Events System API.

Usage

apc event subscribe <topic-name> [optional args]

Options

  -t, --timeout TIME     - Max time (in minutes) which an inactive event stream
                           remains connected before the subscription terminates.
                           (default: 2 minutes, min: 1 minute, max: 5 minutes)

  -f, --file FILE        - File into which events are written instead of
                           printing them to the terminal.

Examples

  $ apc event subscribe job::/dev::foo
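
To write the events to a local file instead of the terminal, use the --file flag (the file name here is an illustrative placeholder):

  $ apc event subscribe job::/dev::foo -f events.log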

export all

The export all command will export everything within a cluster into a single cntmp file, which can then be easily imported into another Apcera Platform cluster.

You can also export individual objects by specifying their type before the export command (e.g. job export or capsule export).

See also Exporting Cluster Data.

Usage

apc export all

gateway

The gateway command performs operations on service gateways.

See also Service Gateways.

Usage

apc gateway SUBCOMMAND [optional args]

Subcommands

  promote      - Promotes an app to be a service gateway
  delete       - Deletes a gateway
  export       - Exports gateway(s) to a single package file
  from package - Creates a new service gateway from an existing package
  demote       - Demotes a service gateway to be a regular app
  health       - View a service gateway's health
  list         - Lists the current service gateways
  logs         - Tail gateway logs
  restart      - Restarts a gateway
  show         - Shows detailed information about an individual gateway
  start        - Starts a gateway
  stats        - Shows a gateway's stats
  stop         - Stops a gateway

gateway delete

The gateway delete command deletes an existing gateway from your cluster. It also attempts to clean up user-sourced packages associated with the gateway.

Usage

apc gateway delete <gateway-name>

Options

  --delete-services, -ds  - Also delete services bound to this gateway.

Examples

  $ apc gateway delete sample1

  $ apc gateway delete /prod/dev::sample-gateway

gateway demote

The gateway demote command demotes a gateway to become a standard app.

Usage

apc gateway demote <app-name>

Examples

  $ apc gateway demote sample

  $ apc gateway demote /prod/retail::my-gateway

gateway export

The gateway export command exports one or many gateways from an Apcera Platform cluster into a cntmp file, which can then be imported into other Apcera Platform clusters.

Usage

apc gateway export <gateway-name> [...]
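
For example (the gateway names are illustrative placeholders):

  $ apc gateway export mygateway

  $ apc gateway export /prod::gateway-one /prod::gateway-two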

gateway from package

The gateway from package command creates a new Service Gateway from an existing package.

Gateway types determine the protocol for communicating, e.g. postgres or mysql. Types generally correspond with the service protocol.

Usage

apc gateway from package <pkg-name> <gateway-type> [optional args]

Options

  --name NAME          - Name of the job, if it should be different than the
                         gateway's. (default: <gateway-type>)

  --start              - Starts the new service gateway.

  -sc, --start-cmd CMD - Command to start the gateway, if different than the
                         one specified by the package.

  --stop-cmd CMD       - Command to run when the gateway is stopped.

  -ae, --allow-egress  - Allow the gateway open outbound network access.

  -c, --cpus CPU       - Milliseconds of CPU time per second of physical time.
                         May be greater than 1000ms/second in cases where time
                         is across multiple cores.
                         (default: 0ms/second, uncapped)

  -m, --memory MEM     - Memory the gateway should use, in MB.
                         (default: 256MiB)

  -d, --disk DISK      - Disk space to allocate, in MB.
                         (default: 1GiB)

  -n, --netmin NET     - Network throughput to allocate (floor), in Mbps.
                         (default: 5Mbps)

  -nm, --netmax NET     - Amount of network throughput to allow (ceiling), in Mbps.
                          (default: 0Mbps, uncapped)

  -r, --routes ROUTE    - Route to register with the gateway.
                         (defaults to a subdomain with the gateway name)

  -ht, --hard-tags      - Hard scheduling tags to add to the gateway.

  -st, --soft-tags      - Soft scheduling tags to add to the gateway.

Examples

  $ apc gateway from package mygatewaypkg postgres-gateway

  $ apc gateway from package mygatewaypkg mysql-gateway --start-cmd "./gateway"

  $ apc gateway from package /prod/retail::mygatewaypkg memcache -m 512MB \
      --start-cmd "./gateway"

gateway health

The gateway health command shows the status and health score of a started gateway. Gateways must be started to view health.

Usage

apc gateway health <gateway-name>

Example

  $ apc gateway health myapp

gateway list

The gateway list command lists a cluster's existing service gateways.

The 'Type' column shows the gateway's type, the same parameter used to look up a gateway during apc service create <name> --type <type>.

Usage

apc gateway list

Options

  -l                               - Show gateways' fully-qualified names, UUID,
                                     and version number.

  -fp, --filter-by-package PKG     - Only show gateways that are using the package
                                     name PKG.

  -fl, --filter-by-label LBL[=VAL] - Only show gateways with the given label named LBL present.
                                     If VAL is provided then the label must also have that
                                     value. Multiple values can be supplied by invoking
                                     repeatedly, e.g. "-fl 'X=y' -fl 'W'", resulting in
                                     only showing the gateways meeting ALL the label requirements.

Examples

  $ apc gateway list

  $ apc gateway list --namespace /prod -l

gateway logs

Streams gateway logs to your terminal.

Usage

apc gateway logs <gateway-name> [optional-args]

Options

  -l, --lines N       - How many latest log lines to show.
  --no-tail           - Just show logs, don't tail.
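
For example, to show the latest 100 log lines without tailing (the gateway name is an illustrative placeholder):

  $ apc gateway logs mygateway --lines 100 --no-tail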

gateway promote

The gateway promote command promotes an app to become a service gateway, and also restarts the app.

Usage

apc gateway promote <app-name>

Options

  -t, --type NAME      - Service type, e.g. postgres, mysql, etc.
                         (required and uniquely enforced; no default)

Examples

  $ apc gateway promote sample -t redis

  $ apc gateway promote /dev::my-app -t postgres

gateway restart

The gateway restart command restarts a gateway.

Usage

apc gateway restart <gateway-name>

Examples

  $ apc gateway restart sample1

gateway show

The gateway show command shows detailed information about individual service gateways.

Packages with an asterisk are user-specified, such as a gateway's package.

Usage

apc gateway show <gateway-name>

Options

  --compliance     - Adds a column describing whether or not the gateway is
                     compliant with current policy.

Example

  $ apc gateway show sample1

gateway start

The gateway start command starts a gateway, and sets its state to "running."

Usage

apc gateway start <gateway-name>

Examples

  $ apc gateway start sample1

  $ apc gateway start /sandbox/dev::mysql-sg

gateway stats

The gateway stats command shows a gateway's resource consumption.

Usage

apc gateway stats <gateway-name>

Examples

  $ apc gateway stats sample1

gateway stop

The gateway stop command stops a running service gateway.

Usage

apc gateway stop <gateway-name>

Option

  -i, --instance-id INSTANCE_UUID   - Stop a single instance of a gateway. The
                                      UUID can be obtained by running
                                      'apc job instances <name>'.

Examples

  $ apc gateway stop sample1

  $ apc gateway stop /dev::mysql-sg

glossary

The apc glossary command returns Apcera terms and definitions. See also the online glossary.

import

The import command imports one or many space-separated .cntmp files into your cluster. By default, the namespaces of the contents are preserved.

See also Importing Packages.

Usage

apc import </path/to/import-file-name> [...] [optional args]

Options

  -s, --skip-existing   - Warn, instead of error, if a package already exists.

  -o, --override        - Override the namespaces of the cntmp contents with your local namespace.

Examples

  $ apc import foo.cntmp bar.cntmp apcera.cntmp

job

The job command performs operations on Apcera Platform jobs.

In the Apcera Platform, almost everything is a job, including apps, gateways, capsules, pipelines, and stager modules.

Usage

apc job <subcommand> <required args> [optional args]

Subcommands

  attract    - Establish a scheduling affinity between jobs
  connect    - Connects to a job via SSH
  console    - Connects to a clone of your job running within a capsule
  delete     - Deletes an existing job
  export     - Export jobs to a single package file
  filecopy   - Transfers a file securely between the job and your machine
  health     - View a job's health
  instances  - View the state of a job's instances
  link       - Link one job to another
  list       - Displays a list of your cluster's jobs
  logs       - Tail job logs
  pipe       - Open an SSH session usable as an scp transport
  run        - Runs a single command in your job context
  repel      - Remove a scheduling affinity from jobs
  restart    - Restart a job
  show       - Shows detailed information about an individual job
  snapshot   - Snapshot a job's changed files as a package
  start      - Start a job
  stats      - Shows a job's resource consumption
  stop       - Stop a job
  unlink     - Unlink jobs
  update     - Update a job's properties

job attract

The job attract command establishes an affinity for instances of <target-job> to start on instance managers running instances of <attracted-job>. Instances of <target-job> will be able to run on other instance managers if none satisfying this criterion are available, or those that are running <attracted-job> are sufficiently loaded. The --hard flag makes this affinity a hard requirement, such that only instance managers that are running <attracted-job> will be able to run <target-job>. Note that this command will not impact the scheduling of <attracted-job>.

See also Job Affinity.

Usage

apc job attract <target-job> --to <attracted-job> [optional-args]

Options

  -t, --to JOB         - Job to attract <target-job> to.

  --hard               - Establishes a hard requirement for instances of
                         <target-job> to run on instance managers which
                         are running <attracted-job>.

  -r, --remove         - Remove an existing attraction.

  --restart            - Restart the job after applying the update if the job
                         is currently running.

Examples

  $ apc job attract foo --to bar --hard

job connect

The job connect command opens an SSH session with the specified job. If your environment is proxied, you should target your cluster over HTTPS before connecting to a job container.

Note: for apps with more than one instance, job connect will not reliably connect to the same instance. If your job has more than one instance, one will be chosen to connect to. You may deterministically connect to the same instance by using the --instance-id flag, providing a UUID available from the job instances command.

Usage

apc job connect <job-name>

Options

  -i, --instance-id UUID      - UUID of the instance to connect to
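
For example, assuming a job named myjob (an illustrative placeholder), you can connect to it, or to a specific instance of it, as follows:

  $ apc job connect myjob

  $ apc job connect myjob -i <instance-uuid>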

job console

The job console command connects to a clone of your job running within a capsule. This includes all service bindings present on the original job, and provides an easy way to debug and set up or bootstrap your job.

Single-binary jobs, such as Apcera-provided jobs or some Docker images, cannot be connected to with job console, as they do not mount an operating system.

Usage

apc job console <job-name>

Example

$ apc job console sample1

job delete

The job delete command deletes an existing job from your cluster. It also attempts to clean up user-sourced packages associated with the job.

Usage

apc job delete <job-name>

Options

  --delete-services, -ds  - Also delete services bound to this job.

Examples

  $ apc job delete sample1

  $ apc job delete /prod/dev::sample-job

job export

The job export command exports one or many jobs from an Apcera Platform cluster into a cntmp file, which can then be imported into other Apcera Platform clusters.

Usage

apc job export <job-name> [...]
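
For example (the job names are illustrative placeholders):

  $ apc job export myjob

  $ apc job export /prod::job-one /prod::job-two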

job health

The job health command shows the status and health score of a started job. Jobs must be started to view health.

Usage

apc job health <job-name>

Example

  $ apc job health myjob

job filecopy

The job filecopy command transfers a file securely to the specified job. If your environment is proxied, you should target your cluster over HTTPS before connecting to a job instance.

Note: for jobs with more than one instance, job filecopy will not reliably connect to the same instance. If your job has more than one instance, one will be chosen to connect to. You may deterministically connect to the same instance by using the --instance-id flag, providing a UUID available from the job instances command.

Your job must have an scp binary installed and available in its $PATH to use the filecopy command.

Usage

apc job filecopy <job-name> <file-path> [optional-args]

Options

  -i, --instance-id UUID            - UUID of the instance to transfer to

  -r, --remote-path PATH            - Path on the job for the file

  -dl, --download                   - Transfer file from the job to your local machine.

Examples

  $ apc job filecopy myjob myfile

  $ apc job filecopy myjob myfile -r ./path/remotefile -dl

job instances

The job instances command displays detailed information about the instances of a job, as well as its overall health score.

Usage

apc job instances <job-name>

Example

  $ apc job instances myjob

job link

The job link command links one job to another. The source job gets network connectivity to the given port on the target job.

The source job can connect to the target job by using a special environment variable <name>_URI, where <name> is an uppercase form of the --name argument.

If you are linking to a job with more than one instance, the job may not reliably connect to the same instance each time.

See also Linking Jobs.

Usage

apc job link <source-job> --to <target-job> [args]

Options

  -t, --to JOB           - Target job name. (required)

  -n, --name NAME        - Link name (base for the environment variable name).

  -p, --port PORT        - Port on target job to link to.

  -bi, --bound-ip IP     - Specify the IP address that the source job should use
                           to connect to the target job.

  -bp, --bound-port PORT - Specify the port that the source job should use to
                           connect to the target job.

Example

  $ apc job link myclient --to /staging::myserver -p 1234 -n server

job list

The job list command lists your cluster's existing jobs.

Usage

apc job list

Options

  -l                               - Show jobs' fully-qualified names, UUID,
                                     and version number.

  -fp, --filter-by-package PKG     - Only show jobs that are using the package
                                     name PKG.

  -fl, --filter-by-label LBL[=VAL] - Only show jobs with the given label named LBL present.
                                     If VAL is provided then the label must also have that
                                     value. Multiple values can be supplied by invoking
                                     repeatedly, e.g. "-fl 'X=y' -fl 'W'", resulting in
                                     only showing the jobs meeting ALL the label requirements.

  -type TYPE                       - The type of objects to show. Valid options are "app",
                                     "capsule", "job", "gateway", and "stager".
                                     (default: job)

Examples

  $ apc job list

  $ apc job list --namespace /prod/dev -l

  $ apc job list --filter-by-package ubuntu-13.10

  $ apc job list --filter-by-label workflowTrackingNum=3.14159

job logs

Streams job logs to your terminal.

Usage

apc job logs <job-name> [optional-args]

Options

  -l, --lines N       - How many latest log lines to show.
  --no-tail           - Just show logs, don't tail.
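
For example, to show the latest 100 log lines without tailing (the job name is an illustrative placeholder):

  $ apc job logs myjob --lines 100 --no-tail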

job pipe

The job pipe command opens an SSH session with the specified job. If your environment is proxied, you should target your cluster over HTTPS before connecting to a job. It is meant to be a replacement for ssh in commands like "scp" using the "-S" flag. To utilize this feature, create a script file that passes all of its command-line arguments to apc job pipe:

  apc job pipe "$@"

Then run any scp command with the -S flag, e.g.:

  scp -S ./myScript myfile myjob:myremotefile

See also Using SCP to Transfer Files.

job repel

The job repel command establishes an affinity for instances of <target-job> to start on instance managers NOT running instances of <repelled-job>. Instances of <target-job> will be able to run on instance managers running <repelled-job> if none satisfying this criterion are available, or all those that do are sufficiently loaded. The --hard flag makes this affinity a hard requirement, such that only instance managers not running <repelled-job> will be able to run <target-job>. Note that this command will not impact the scheduling of <repelled-job>.

See also Job Affinity.

Usage

apc job repel <target-job> --from <repelled-job> [optional-args]

Options

  --from JOB       - Job to repel <target-job> from.

  --hard           - Establishes a hard requirement for instances of
                     <target-job> to run on instance managers which
                     are not running <repelled-job>.

  -r, --remove     - Removes an existing repulsion.

  --restart        - Restart the job after applying the update if the job
                     is currently running.

Example

  $ apc job repel foo --from bar --hard

job restart

The job restart command restarts a job.

Usage

apc job restart <job-name>

Examples

  $ apc job restart sample1

  $ apc job restart /dev/test::sample1

job run

The job run command runs a command within the context of your job. This includes all service bindings present on the original job, and provides an easy way to run one-off scripts or migrations within your job.

Usage

apc job run <job-name> -c <command>

Options

  -c, --command CMD     - Command to run within job context. (Required)

Example

  $ apc job run sample1 -c "./runscript"

job snapshot

The job snapshot command creates a new snapshot of an existing job's changed files, and puts them in a timestamped package which can be used to create jobs.

If this times out for some reason, the snapshot will continue server-side until either completing or failing.

Usage

apc job snapshot <job-name> [optional args]

Options

  -n, --name NAME         - Name of the snapshot package to be created.
                            (default: "snapshot-<job-name>-<time>")

  -d, --directory PATH    - Directory within the job to snapshot.
                            (default: "", for the root)

  -t, --timeout TIME      - Time (in minutes) after which the client will time
                            out. The snapshot will continue server-side.
                            (default: 2 minutes)

  -i, --instance-id UUID  - UUID of the instance to snapshot. (default: selects
                            single running instance)

  --link                  - Link the snapshot with the job. This will trigger
                            a restart of the job.

Examples

  $ apc job snapshot mymachine

job show

The job show command shows detailed information about individual jobs.

Packages with an asterisk are user-specified, such as an app's package.

Usage

apc job show <job-name>

Options

  --compliance     - Adds a column describing whether or not the job is
                     compliant with current policy.

Examples

  $ apc job show sample1

  $ apc job show /::sample1

job stats

The job stats command shows a job's resource consumption.

Usage

apc job stats <job-name>

Examples

  $ apc job stats my-app

  $ apc job stats /dev::sample1

job start

The job start command starts a job and sets its state to "running."

Usage

apc job start <job-name>

Examples

  $ apc job start sample1

  $ apc job start /prod/dev::site

job stop

The job stop command stops a running job.

Usage

apc job stop <job-name>

Options

  -i, --instance-id INSTANCE_UUID   - Stop a single instance of a job. The UUID
                                      can be obtained by running 'apc job instances <name>'

Examples

  $ apc job stop sample1

  $ apc job stop /dev::sample-site

job unlink

The job unlink command unlinks one job from another. The source job loses network connectivity to the given port on the target job.

Usage

apc job unlink <source-job> --from <target-job> [args]

Options

  -f, --from JOB        - Target job name (can also be specified as a second
                          argument to the command).

  -p, --port PORT       - Port on target job to unlink from.

Example

  $ apc job unlink myclient --from /staging::myserver -p 1234

job update

The job update command updates a job's properties. Specify the options and values to update, and the changes will be applied.

For example, you can add (and delete) start commands and environment variables. If there is more than one process on the job to update, specify it with the proc flag. Typically, there is only one process.

Some job properties cannot be updated while the job is running. If a property can't be updated, APC will instruct you to shut down the active job before it will allow the update.

Usage

apc job update <job-name> [optional args]

Options

  --name NAME                   - Updates the local name of the job.

  -proc, --process NAME         - Specifies the process to update, which is
                                  necessary for updating env variables or the
                                  start command. Only required in cases where
                                  the job has more than one process.

  --user UID                    - Specifies the UID to run the process as.

  --group GID                   - Specifies the GID to run the process as.

  Port options:
    -pa, --port-add NUMBER      - Expose a port.

    -pd, --port-delete NUMBER   - Remove an exposed port.

    -o, --optional              - Port is optional; job is considered
                                  healthy even if the port doesn't respond.

  -ae, --allow-egress           - Allow the job open outbound network access.
  --remove-egress               - Close open outbound network access. Note:
                                  cannot supply other flags with this.

  --allow-ssh                   - Allow SSH to the job's container.
  --remove-ssh                  - Remove ability to SSH to the job's container.

  --restart                     - Restart the job after applying the update if
                                  the job is currently running.

  --network NAME                            - Restart the job to join the specified network.
    -leave                                  - Restart the job to leave the specified network.
    -ma, --multicast-add <CIDR-Address>     - Updates the job to add a multicast route.
    -md, --multicast-delete <CIDR-Address>  - Updates the job to remove a multicast route.
    -be, --broadcast-enable                 - Updates the job to add the broadcast route.
    -bd, --broadcast-disable                - Updates the job to remove the broadcast route.

  --pkg-add NAME=PATH           - Adds the package specified by NAME to
                                  the job, installed at PATH. If no PATH
                                  specified, defaults to /.

  --pkg-delete NAME             - Removes the package specified by NAME
                                  from the job.

  -i, --instances NUM           - Updates the number of job instances.
                                  (default: 0)

  -sc, --start-cmd CMD                - Adds a start command on a process.
  -remove-sc, --remove-start-cmd      - Remove the start command from a process.
  --stop-cmd CMD                      - Adds a stop command on a process.
  --remove-stop-cmd                   - Remove the stop command on a process.

  -e, --env-set ENV=VAL   - Sets an environment variable on the job. Multiple
                            values can be supplied by invoking multiple times,
                            e.g. "--env-set 'HOME=/usr/local'
                                  --env-set 'PATH=/usr/bin:/opt/local'"

  -eu, --env-unset ENV          - Unsets environment variables on a process.

  -t, --timeout                 - Number of seconds the system will wait for a
                                  job to start before triggering a failure. If
                                  the job is composed of more than one process,
                                  you must specify one using the '--proc' flag.

  -c, --cpus CPU        - Milliseconds of CPU time per second of physical time.
                          May be greater than 1000ms/second in cases where time
                          is across multiple cores.
                          (default: 0ms/second, uncapped)

  -m, --memory MEM      - Memory the job will use, in MB.
                          (default: 256MiB)

  -d, --disk DISK       - Amount of disk space to allocate, in MB.
                          (default: 1GiB)

  -n, --netmin NET      - Amount of network throughput to allocate (floor), in Mbps.
                          (default: 5Mbps)

  -nm, --netmax NET     - Amount of network throughput to allow (ceiling), in Mbps.
                          (default: 0Mbps, uncapped)

  -ha, --hard-tags-add  - Hard scheduling tags to add to the job.

  -hd, --hard-tags-del  - Hard scheduling tags to delete from the job.

  -sa, --soft-tags-add  - Soft scheduling tags to add to the job.

  -sd, --soft-tags-del  - Soft scheduling tags to delete from the job.

  -l, --label-set LBL[=VAL]  - Sets a label name and optional value on a job.
                               Multiple values can be supplied by invoking
                               multiple times, e.g. "-l 'X=y' -l 'W=z'". A given
                               label name can only occur once and providing a
                               value for an existing label will overwrite the
                               existing value.

  -lu, --label-unset LBL     - Label name to delete from job. Multiple
                               labels can be deleted by invoking multiple times.

  --client-acl          - Access control list by client IP CIDR.

  --cacert FILE         - The path to a file containing root certificate authorities
                          that semantic pipelines use when verifying server
                          certificates. The certificates are in PEM format.

Examples

  $ apc job update sample --port-add 3306

  $ apc job update sample --pkg-add ubuntu=/install/path,/dev/prod::node=/usr/www

  $ apc job update sample --env-set PATH=/usr/local/sbin --env-set HOME=/home/apcera

  $ apc job update sample -l "my label"="a label value" -l lbl=noSpaces

  $ apc job update sample --client-acl="128.66.3.0/24, deny 128.66.0.0/16"

  $ apc job update sample --network mynetwork -ma 225.1.1.0/24

login

The login command signs you into your Apcera Platform cluster.

The Apcera Platform supports multiple authentication providers according to your cluster's policies. Contact your administrator for details. Your cluster must be configured by your administrator to use a specific authentication method.

Specific flags only have to be used once. For example, if you run apc login --basic on your first login, your next apc login will use the built-in provider automatically.

APC settings are saved in a .apc file, by default in your $HOME directory. To modify this location, set an APC_HOME env var with said path.

See also Identity Providers.

Usage

apc login [optional args]

Options

  --basic      - Use the Apcera Platform's built-in auth provider

  -ad, --kerb  - Use Active Directory (Kerberos) authentication (deprecated)

  --ldap-basic - Use basic LDAP for authentication

  --google     - Use Google Device OAuth2

  --crowd      - Use basic Atlassian Crowd auth (deprecated)

  --keycloak   - Use Keycloak OAuth2 authentication

  --app-auth   - Used only when the apc application runs in an authenticated context
                (i.e. cluster provided application token)

Example

  $ apc login

login show

The login show command shows the name of the currently signed-in user.

Usage

apc login show

logout

The logout command signs you out of your current APC session.

Usage

apc logout

manifest

The manifest command takes a manifest file and deploys it for execution on top of the Apcera Platform.

See also Using Multi-Resource Manifest Files.

Usage

apc manifest <subcommand> [FILE]

Subcommands

  deploy       - Deploys a manifest
  status       - View status of manifest

manifest deploy

The manifest deploy command takes a multi-resource manifest in JSON containing the declaration of one or many resources, such as jobs, services, or networks.

It creates job links, service bindings, and routes as specified in the manifest, or fails if it is not possible to apply the requested actions.

Usage

apc manifest deploy [FILE]

Options

  --expansion-file, -ef  - File to write manifest with substitution
                           variables expanded.

  --expand-only          - Expand substitution variables only, do
                           not deploy. --expansion-file, -ef is required.

Example

  apc manifest deploy project.json

Variables

Substitution variables may be used in string fields. Substitution variables are of the form ${VAR} and have the following characteristics:

  • They are made up of 1 or more letters, decimal digits and underscores.
  • They are case-sensitive. ${VAR} is different than ${var}.
  • They may occur multiple times within and across declarations.
  • They are not expanded recursively. ${${VAR}} is invalid.
  • The sequence started by ${, followed by any characters, and ended by } cannot be escaped. \$\{\} is not interpreted as ${}.

To supply values for substitution variables, pass them after a -- on the command line, respecting the following format: --VAR value.

Values may be made up of any number of printable characters and may be empty, too. If a value for a variable is not supplied, its value will be taken from the environment variable of the same name (if undefined, its value will be the empty string). See the example.

Example

Where VAR4's value will be taken from $VAR4, the environment variable.

  apc manifest deploy project.json -ef expansion.json -- --VAR1 \
    value1 --VAR2 'value 2' --VAR3 '' --VAR4

manifest status

The manifest status command shows the current status of the jobs declared in a multi-resource manifest file.

Usage

apc manifest status [FILE]

Options

  --expansion-file, -ef  - File to write manifest with substitution
                           variables expanded.

Variables

Substitution variables may be used in string fields. Substitution variables are of the form ${VAR} and have the following characteristics:

  • They are made up of 1 or more letters, decimal digits and underscores.
  • They are case-sensitive. ${VAR} is different than ${var}.
  • They may occur multiple times within and across declarations.
  • They are not expanded recursively. ${${VAR}} is invalid.
  • The sequence started by ${, followed by any characters, and ended by } cannot be escaped. \$\{\} is not interpreted as ${}.

To supply values for substitution variables, pass them after a -- on the command line, respecting the following format: --VAR value.

Values may be made up of any number of printable characters and may be empty, too. If a value for a variable is not supplied, its value will be taken from the environment variable of the same name (if undefined, its value will be the empty string). See the example.

Examples

  apc manifest status project.json

  apc manifest status project.json -ef expansion.json -- --VAR1 \
    value1 --VAR2 'value 2' --VAR3 '' --VAR4

Where VAR4's value will be taken from $VAR4, the environment variable.

namespace

The namespace command lets you view, set, or default your current namespace.

Changing namespaces is similar to changing to a different directory in a Unix filesystem. Note that a namespace does not need to be created before changing
to it.

You can use these characters in namespace names: alphanumeric characters, /, _, -

See also FQNs and Namespaces.

Usage

apc namespace <namespace>

Options

  -d, --default      - Set the local namespace to your home namespace
                       as indicated by policy.

Example

  $ apc namespace

  $ apc namespace --default

  $ apc namespace /prod/retail

network

The network command lets you interact with your cluster's virtual networks.

See also Creating and Using Virtual Networks.

Usage

apc network <subcommand> [args]

Subcommands

  create     - Creates a new network
  delete     - Deletes a network
  join       - Join a job to a virtual network
  leave      - Remove job from a virtual network
  list       - Lists your cluster's networks
  show       - Show detailed information about an individual network

network create

The network create command lets you create a new virtual network. Unless otherwise specified, the virtual network is created in the default namespace and in the default user subnet pool (if one exists) or in the default system subnet pool.

Usage

apc network create <network-name> [args]

Examples

  $ apc network create net1

  $ apc network create /prod/retail::customernetwork

  $ apc network create net1 --pool my-pool-1

See subnet pools for details on using the --pool option.

network delete

The network delete command deletes an existing virtual network from your cluster. The virtual network must be empty; that is, it should not have any member jobs. In order to delete a virtual network with member jobs, all such jobs must first be removed from the network using apc network leave <network-name> --job <job-name>.

Usage

apc network delete <network-name>

Examples

  $ apc network delete sample1

  $ apc network delete /prod/dev::sample-network

network join

The network join command joins a job to a virtual network. The job gets network connectivity to all other jobs on the specified network.

The job is given an IP address from the subnet range allocated for the virtual network and it can access the network through a newly visible network interface.

A discovery address can be provided to configure a DNS address through which instances can be discovered by other instances belonging to the same virtual network. Discovery addresses are case insensitive and may consist of letters and numbers. Dots and hyphens are also allowed except at the beginning and end.

Note that if the job has multiple instances, all instances will be allocated their own IPs and be visible on the network.

Usage

apc network join <network> --job <job-name>

Options

  -j, --job NAME           - Name of the job joining the network. (required)

  -da, --discovery-address - Discovery address to search for the job's instances
                             in the virtual network.

Examples

  $ apc network join mynet --job myjob

  $ apc network join mynet --job myjob --discovery-address myjob

  $ apc network join mynet --job myjob --da myjob

  $ apc network join /prod::/net1 -j job1

network leave

The network leave command removes a job from a virtual network.

Note that if the job has multiple instances, all instances will be removed from the virtual network.

Usage

apc network leave <network> --job <job-name>

Options

  -j, --job NAME         - Name of the job leaving the network. (required)

Examples

  $ apc network leave mynet --job myjob

  $ apc network leave /prod::/net1 -j job1

network list

The network list command lets you view your cluster's available virtual networks. By default, this list will be filtered by your current namespace.

Usage

apc network list

Options

  -l                  - Show networks' fully-qualified-names and UUID

Examples

  $ apc network list

  $ apc network list -l

network show

The network show command shows detailed information about a specific virtual network.

Usage

apc network show <network-name>

Examples

  $ apc network show net1

package

The package command performs operations related to Apcera Platform packages.

See also Working with Packages.

Usage

apc package <subcommand> [optional args]

Subcommands

  build     - Build a package tarball from a conf file
  create    - Build a package from a directory using a staging pipeline
  delete    - Delete an existing package
  download  - Download the raw package contents to a local file
  export    - Export package(s) into a single package file
  import    - Import package(s) from local .cntmp files into your cluster
  from file - Create a new package from a tarball
  list      - List your cluster's packages
  replace   - Replace an existing package with a new one
  show      - Show detailed information about a package
  update    - Update a package's properties

package build

The package build command lets you build a runtime package from a .conf file. .conf files are human-readable manifests that include download links and information about what a particular package can provide and depend upon. Generated JSON files are staged and uploaded into the system automatically.

Usage

apc package build <file-name> [optional args]

Options

  -n, --name NAME               - Preferred name of the package being uploaded.
                                  (default: file name without suffix)

  -s, --staging PIPELINE        - Staging pipeline to use on the package.
                                  (default: Apcera-provided compiler stager)

Examples

  $ apc package build go-1.1.conf

  $ apc package build go-1.2.conf --name /sandbox/dev::test-go-pkg --staging /::compiler

package create

The package create command creates a new package with the provided properties. <package-name> is required, and the default path is your current working directory.

You can provide a manifest file in the package's directory, or provide a path to a manifest file via the --config flag.

Usage

apc package create <package-name> [optional-args]

Options

  -p, --path PATH         - Path to the package you are deploying.
                            (default: current path)

  --config PATH           - Path to the custom manifest file, if present.
                            (default: '<pkg-directory>/continuum.conf')

  -sp, --staging NAME     - Staging Pipeline to use when deploying the package.
                            (default: auto-detected)

  --provides PRVS         - Dependencies this package will provide, in the form
                            of 'TYPE.NAME'. You can specify multiple
                            comma-separated 'provides'.

  -do, --depends-on       - Specify dependencies for the package with the form
                            'TYPE.NAME', where 'TYPE' is 'os', 'package',
                            'runtime', or 'file'. For instance, to enforce a
                            dependency upon 'ubuntu-14.04' and 'ruby-2.0.0', try
                            '--depends-on os.ubuntu-14.04,runtime.ruby-2.0.0'
                            (default: stager picks dependencies for you)

  -pe, --pkg-env ENV=VAL  - Sets environment variables on the package.
                            Multiple values can be supplied by invoking multiple
                            times.
                            (Providing a 'STAGER_DEBUG' variable here can activate
                            increased stager logging detail for built-in stagers)

Examples

  $ apc package create sample

  $ apc package create sample -sp "/dev::custom-ruby"

  $ apc package create sample --depends-on os.ubuntu-13.10,runtime.ruby-1.9.3-p547

  $ apc package create sample -pe 'SOME_KEY="1234abcd"'

package delete

The package delete command deletes an existing package from the Apcera Platform.

Usage

apc package delete <package-name>

Examples

  $ apc package delete sample1

  $ apc package delete /prod/dev::app-pkg

package download

The package download command downloads the specified package from an Apcera Platform cluster into a tar.gz file.

Usage

apc package download <package-name> [optional-args]

Options

  -f, --file FILE      - Filename that the exported package should be saved to.
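
Examples

Download a package's contents to a local tar.gz file (illustrative; the package name and filename below are placeholders):

  $ apc package download /sandbox::mypackage --file mypackage.tar.gz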

package export

The package export command exports one or many packages from an Apcera Platform cluster into a cntmp file, which can then be imported into other Apcera Platform clusters.

Usage

apc package export <package-name> [...]
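
Examples

Export one or more packages into a .cntmp file (illustrative; the package names below are placeholders):

  $ apc package export /sandbox::mypackage

  $ apc package export pkg-one pkg-two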

package from file

The package from file command creates a new package from an existing file on your local machine. This could be a tarball, or a file produced by the package build command.

Usage

apc package from file <file> <package-name> [optional args]

Options

  -s, --staging PIPELINE - Staging pipeline to use on the package.
                           (by default, no staging pipeline is run)

  -p, --provides PRVS    - Dependencies this package will provide, in the form
                           of 'TYPE.NAME'. You can specify multiple
                           comma-separated 'provides'.

  -d, --dependencies DEP - The package's dependencies, in the form of
                           'TYPE.NAME'. You can specify multiple
                           comma-separated dependencies.

  -e, --env-set ENV=VAL   - Sets an environment variable on the package. Multiple
                            values can be supplied by invoking the flag multiple
                            times, e.g. "--env-set 'HOME=/usr/local'
                                         --env-set 'PATH=/usr/bin:/opt/local'"

Dependency formatting

You can specify dependencies and provides in the form TYPE.NAME, where type is either file, package, runtime, or os. The name is whatever the package is providing or depends on.

For example:

  'file./usr/bin/ruby'
  'package.apache-2.2'
  'runtime.ruby'
  'runtime.ruby-1.9.3'
  'os.ubuntu'

NOTE: The name is not tied to the package, and a package can fulfill many 'provides'. A package 'Ubuntu 12.10' may provide 'os.ubuntu-12.10', 'os.ubuntu', and 'os.linux'.

Examples

  $ apc package from file foo.txt sample1

  $ apc package from file foo.txt sample1 --provides package.wordpress

  $ apc package from file foo.txt sample1 --provides "package.foo, runtime.bar"
      --staging /dev::ruby

package import

The package import command imports one or many space-separated .cntmp files into your cluster. By default, the namespaces of the contents are preserved.

Usage

apc package import </path/to/import-file-name> [...] [optional args]

Options

  -s, --skip-existing   - Warn, instead of error, if a package already exists.

  -o, --override        - Override the namespaces of the .cntmp contents with
                          your local namespace.

Examples

  $ apc package import foo.cntmp bar.cntmp apcera.cntmp

package list

The package list command lists your cluster's existing packages.

Usage

apc package list

Options

  -l                   - Show packages' fully-qualified-names, version, and UUID

  -t, --type TYPE      - Returns a list of packages that provide dependencies of
                         the specified type. Type can be 'file', 'package',
                         'runtime', or 'os'.

  -pr, --provides NAME - Returns a list of packages that provide a certain
                         dependency. The value can be either the provides name,
                         or a combination of the type and name, such as
                         'os.ubuntu'.

  --docker-image NAME  - Returns a list of packages that provide the Docker image.

Examples

  $ apc package list

  $ apc package list --type os -l

  $ apc package list --namespace /prod/retail -l

  $ apc package list --docker-image nats -l

package replace

The package replace command replaces a cluster package with a tarball from your machine. Jobs that depend on <package-name> are stopped, <package-name> is replaced, and dependent jobs have their packages resolved before starting again.

To keep the old package's environment variables, provides, and dependencies, include the --copy flag; otherwise they will be discarded.

Note: this command stops any running jobs that depend on <package-name>, and will restart them if the command is successful.

Usage

apc package replace <package-name> <tar-path> [optional args]

Options

  -c, --copy             - If provided, the new package will copy all provides,
                           dependencies, and environment information from the
                           old package to the new one.

  -p, --provides PRVS    - Dependencies that this package will provide, in the
                           form of 'TYPE.NAME'. You can provide multiple
                           comma-separated provides. Only one provides of
                           each type may be added per replace.

  -d, --dependencies DEP - Package dependencies, in the form 'TYPE.NAME'. You
                           can provide multiple comma-separated dependencies.
                           Only one dependency of each type may be added per
                           replace.

  -e, --env-set ENV=VAL - Sets an environment variable on the package. Multiple
                          values can be supplied by invoking the flag multiple
                          times, e.g. "--env-set 'HOME=/usr/local'
                                       --env-set 'PATH=/usr/bin:/opt/local'"

  -s, --staging PIPELINE - Staging pipeline to use on the package.
                           (by default, no staging pipeline is run)

Dependency formatting

You can specify dependencies and 'provides' in the form 'TYPE.NAME', where type is either 'file', 'package', 'runtime', or 'os'. The name is whatever the package is providing or depends on.

Examples:

  'file./usr/bin/ruby'
  'package.apache-2.2'
  'runtime.ruby'
  'runtime.ruby-1.9.3'
  'os.ubuntu'

NOTE: The name is not tied to the package, and a package can fulfill many 'provides'. A package 'Ubuntu 12.10' may provide 'os.ubuntu-12.10', 'os.ubuntu', and 'os.linux'.

Examples

  $ apc package replace sample1 /path/to/file.tar.gz

  $ apc package replace sample1 pkg.tar.gz --provides package.wordpress,runtime.node

  $ apc package replace sample1 file.tar.gz --copy

package show

The package show command shows detailed information about an individual package, including its dependencies, what it provides, and its environment variables.

Usage

apc package show <package-name>

Examples

  $ apc package show sample1

  $ apc package show /::sample1

package update

The package update command updates a package's properties to a new value. Simply specify the updated field and new value, and the change will be applied.

The package's dependencies and 'provides' can only be modified if the package is not in the 'ready' state.

Usage

apc package update <name> [optional args]

Options

-pa, --provides-add TYPE.NAME  - Adds the provides 'NAME' of type 'TYPE' to the
                                 package in the format 'TYPE.NAME'. You can
                                 supply multiple comma-separated 'provides' at
                                 once.

-pd, --provides-del TYPE.NAME  - Removes the provides 'NAME' of type 'TYPE' from
                                 the package.

-da, --deps-add TYPE.NAME      - Adds the dependency package of type TYPE
                                 specified by NAME to the package.

-dd, --deps-delete TYPE.NAME   - Removes the dependency package specified by
                                 NAME from the package.

-e, --env-set ENV=VAL   - Sets an environment variable on the package. Multiple
                          values can be supplied by invoking the flag multiple
                          times, e.g. "--env-set 'HOME=/usr/local'
                                       --env-set 'PATH=/usr/bin:/opt/local'"

-eu, --env-unset ENV           - Unsets an environment variable on the package.

-n, --name                     - Update the local name of the package.
                                 Changing the namespace is not supported.

Dependency formatting

You can specify dependencies and 'provides' in the form 'TYPE.NAME', where type is either 'file', 'package', 'runtime', or 'os'. The name is whatever the package is providing or depends on.

Examples:

  'file./usr/bin/ruby'
  'package.apache-2.2'
  'runtime.ruby'
  'runtime.ruby-1.9.3'
  'os.ubuntu'

NOTE: The name is not tied to the package, and a package can fulfill many 'provides'. A package 'Ubuntu 12.10' may provide 'os.ubuntu-12.10', 'os.ubuntu', and 'os.linux'.

Examples

  $ apc package update mypackage -pa package.wordpress,runtime.node,package.nginx

  $ apc package update /dev::mypackage -e HOME=my_env_var,GOPATH=/apcera/go/src

pipeline

The pipeline command lets you interact with your cluster's semantic pipelines.

See also Working with Semantic Pipelines.

Usage

apc pipeline <subcommand> [optional args]

Subcommands

  delete       - Delete a semantic pipeline
  from package - Create a semantic pipeline from a package
  list         - List your cluster's semantic pipelines
  logs         - Tail a semantic pipeline's logs
  restart      - Restart a semantic pipeline
  start        - Start a semantic pipeline
  stats        - View a semantic pipeline's resource consumption
  stop         - Stop a semantic pipeline
  show         - Show a semantic pipeline's details

pipeline delete

The pipeline delete command deletes an existing pipeline from your cluster. It also attempts to clean up user-sourced packages associated with the pipeline.

Usage

apc pipeline delete <pipeline-name>

Option

  --delete-services, -ds  - Also delete services bound to this pipeline.

Examples

  $ apc pipeline delete sample1

  $ apc pipeline delete /prod/dev::sample-pipeline

pipeline from package

The pipeline from package command creates a new Semantic Pipeline from an existing package.

Usage

apc pipeline from package <pkg-name> <pipeline-scheme> [optional args]

Options

  -c, --cpus CPU       - Milliseconds of CPU time per second of physical time.
                         May be greater than 1000ms/second in cases where time
                         is across multiple cores.
                         (default: 0ms/second, uncapped)

  -m, --memory MEM     - Memory the pipeline will use, in MB.
                         (default: 256MiB)

  -d, --disk DISK      - Disk space to allocate, in MB.
                         (default: 1GiB)

  -n, --netmin NET     - Network throughput to allocate (floor), in Mbps.
                         (default: 5Mbps)

  -nm, --netmax NET    - Amount of network throughput to allow (ceiling), in Mbps.
                         (default: 0Mbps, uncapped)

  --start-cmd CMD      - Command to start the pipeline, if different than the
                         one set by the pipeline.
  --stop-cmd CMD       - Command to run when the pipeline is stopped.

  -name, --pipeline-name NAME - Name of the pipeline, if something other than
                                the desired scheme.

Examples

  $ apc pipeline from package mygatewaypkg mysql

  $ apc pipeline from package mygatewaypkg postgres \
      --start-cmd "./pipeline"

  $ apc pipeline from package /dev/prod::mygatewaypkg http -m 512MB \
      --start-cmd "./pipeline"

pipeline list

The pipeline list command lists your cluster's semantic pipelines.

Usage

apc pipeline list

Options

  -l                  - Show pipelines' fully-qualified-names, version number,
                        and UUID.

Example

  $ apc pipeline list -l

pipeline logs

Streams pipeline logs to your terminal.

Usage

apc pipeline logs <pipeline-name> [optional-args]

Options

  -l, --lines N       - Number of most recent log lines to show.
  --no-tail           - Just show logs, don't tail.
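
Examples

Show the 50 most recent log lines without tailing (illustrative; the pipeline name is a placeholder):

  $ apc pipeline logs mysql-sp --lines 50 --no-tail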

pipeline restart

The pipeline restart command restarts a pipeline.

Usage

apc pipeline restart <pipeline-name>

Examples

  $ apc pipeline restart sample1

pipeline show

The pipeline show command shows detailed information about a semantic pipeline.

Usage

apc pipeline show <pipeline-name>

Example

  $ apc pipeline show sample1

pipeline start

The pipeline start command starts a pipeline, and sets its state to "running."

Usage

apc pipeline start <pipeline-name>

Examples

  $ apc pipeline start sample1

  $ apc pipeline start /sandbox/dev::mysql-sp

pipeline stats

The pipeline stats command shows a pipeline's resource consumption.

Usage

apc pipeline stats <pipeline-name>

Examples

  $ apc pipeline stats sample1

pipeline stop

The pipeline stop command stops a running semantic pipeline.

Usage

apc pipeline stop <pipeline-name>

Examples

  $ apc pipeline stop sample1

  $ apc pipeline stop /dev::mysql-sp

policy

The policy command lets you interact with your cluster's policies, which govern cluster behavior.

See also Governing with Policy and Working with Policy.

Usage

apc policy <subcommand> [optional args]

Subcommands

  delete       - Delete a policy document
  export       - Export current policy
  import       - Import policy documents
  list         - List policy documents
  on           - Show policy rules by realms
  show         - Show a policy document
  sim          - Run a policy simulation

policy delete

Deletes policy document(s) from the cluster.

Usage

apc policy delete <doc> [<doc>...]

Examples

  $ apc policy delete myDoc

  $ apc policy delete myDoc anotherDoc

policy export

The policy export command lets you export policy documents from your cluster.

You can edit exported policy documents locally, and import them back into your cluster using policy import.

If no document name is provided, all policy documents are exported.

Usage

apc policy export [<document-name>] [options]

Options

  -d, --dir [DIR]      Put exported documents in a given directory.
                       Default is the current working directory.

  -f, --force          Overwrite existing files without asking.

Examples

Export all policy documents to the current directory

  $ apc policy export

Export authSettings document to /path/to/dir

  $ apc policy export authSettings --dir /path/to/dir

policy import

Imports policy documents into your cluster.

The --force flag overrides policy inconsistency warnings.

Usage

apc policy import [-f | --force]  <file> [<file>...]

Examples

  $ apc policy import /path/to/policyFile.pol
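
Import multiple documents, overriding policy inconsistency warnings (illustrative; the file paths are placeholders):

  $ apc policy import --force /path/to/policyFile.pol /path/to/otherFile.pol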

policy list

The policy list command shows your cluster's policy documents.

Usage

apc policy list
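
Example

List the policy documents in your cluster:

  $ apc policy list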

policy on

Shows policy rules currently existing in the system. The filter can be a namespace, an FQN, or a resource type.

Usage

apc policy on [<filter>] [-a]

Options

  -a, --all      Show all policy that applies to the given FQN, e.g.
                 for 'job::/prod/a' it will include policy on 'job::/prod/a',
                 'job::/prod', and 'job::/'.

Examples

All policy: $ apc policy on /

Policy on jobs only: $ apc policy on job

Policy on packages in /dev namespace: $ apc policy on package::/dev

Policy on anything in /prod namespace: $ apc policy on /prod

All policy that applies to job::/dev::myjob: $ apc policy on job::/dev::myjob -a

policy show

Shows the source of a policy document, optionally replacing variable references with variable values.

Usage

apc policy show <document> [-r [-c=count] [-f=row]]

Options

  -r, --reveal-variables Optional flag to indicate that any variable
                         references in the policy rules should be replaced
                         by the variable's value from the corresponding
                         data table. The output will show the variable
                         value in back quotes in place of the reference,
                         for example GC->var becomes `value`.
                         When a data table has multiple rows or a condition
                         reference has multiple values then the rule which
                         references it is shown repeatedly, once for each
                         revealed value.

  -c, --reveal-count     Optional flag when -r is present to control the number
                         of values to reveal (default 3). Since data tables
                         may have many rows and the returned policy is replicated
                         for each row, this argument allows a limit to be
                         placed on the count of values shown; "all" may be used
                         to remove the limit.

  -f, --reveal-from      Optional flag when -r is present to control the
                         starting row to use for revealing values (default
                         0). This provides a rudimentary paging operation to
                         move through the rows of a data table.

Examples

Return the contents of authSettings.pol:

$ apc policy show authSettings

Return the contents of bindingsPVuse.pol (which includes references to data table PV->SvcGrps):

$ apc policy show bindingsPVuse

Return the definition and rows for data table SvcGrps (contained in policy document systemDataTableSvcGrps.pol):

$ apc policy show systemDataTableSvcGrps

Return the contents of bindingsPVuse.pol but replace PV->SvcGrps.* references with values from the first three data rows of SvcGrps.

$ apc policy show bindingsPVuse -r

Return the contents of bindingsPVuse.pol but replace PV->SvcGrps.* references with values from the fourth and fifth data rows of SvcGrps.

$ apc policy show bindingsPVuse -r -f 4 -c 2
$ apc policy show bindingsPVuse --reveal-variables -f=4 -c=2

Return the contents of bindingsPVuse.pol, replicating the rules for all the values in the data rows of SvcGrps.

$ apc policy show bindingsPVuse --reveal-variables --reveal-count all

policy sim

Run a policy simulation against the policies in the existing system. The filter parameters specify one or more FQNs (realms) for the simulation. Specify "*" to process all FQNs (realms).

Usage

apc policy sim <filter> [<filter>...] [-i <input_claims> [-i ...]] [-c <claimtype>[=value] [-c ...]]
[-r <file> [-r ...]] [-d <dir>] [-f] [-y] [-e <epoch>] [-w <width>]

Options

  -i, --input     Input claims to establish the user identity and other input
                  claims for the simulation.  When using "*" to process all FQNs
                  this option may also specify template substitution values. For
                  example, [job::/sandbox/[name]]=bob will substitute bob for
                  template [name] under the realm job::/sandbox/[name], whereas
                  [name]=ned will substitute ned for all [name] templates in
                  all simulation realms. This option may be specified more
                  than once if multiple input claims are required.
                  Example inputs are
                  name=bob, Google->email=bob@gmail.com, ResType=gateway, etc.
                  Use "name=*" input claim value to run the simulation against
                  all userids.

  -c, --claimtype Claim types and optional values to return such as permit=read,
                  role, max.package.size, etc. This option may be specified more
                  than once if multiple claim types and/or specific values are
                  to be returned.  If not specified then all claim types and
                  their values are returned.

  -r, --replace   Policy files to replace or add to the simulation.  The file(s)
                  will replace existing policies in the system for this
                  simulation only.  They will not be saved in the system.  If
                  the filename does not match an existing policy document, then
                  it is simply added to the list of policy documents for the
                  simulation.  This option may be specified more than once if
                  multiple policy files are to be part of the simulation. The
                  document name reported will be the filename specified with
                  the .pol suffix stripped from the name.  The file must be a
                  standard policy text file such as one downloaded using the
                  "apc policy export" command.

  -d, --dir       Policy documents directory to add or replace policy documents
                  for the simulation.  All files with a ".pol" extension will
                  be uploaded.  Note that a warn limit of 2MB of policy text
                  is built into the command to avoid accidentally running
                  long simulation requests.  This can be overridden using the
                  force parameter.

  -f, --force     Forces the command to upload policy documents in excess of
                  the 2MB limit.

  -e, --epoch     The epoch parameter sets the system policy documents as of
                  a specific time.  Any changes to policy after that time are
                  NOT included in the simulation.  The format of the time is:
                  "Jan 2, 2006 3:04pm MST"
                  You may use dashes instead of commas and blanks as in:
                  Feb-21-2014-04:33am-CEST
                  You may use shorter forms of the date by eliminating fields
                  right to left until you reach the year (i.e. month, day, and
                  year are always required).  For example, Jan-28-1991-02:40 and
                  "Jul 7, 1996" are also valid epochs for the current timezone,
                  but "Jan-7", "2016", or "11:22am PDT" are not.

  -y, --hypothetical  Request a hypothetical analysis.  During a hypothetical
                  analysis, the simulation first determines any missing claims
                  which are required to satisfy all the policy output claims for
                  the realm.  The simulation then uses the original input claims
                  and the missing claims as the total set of input claims for
                  the simulation.

  -w, --width     Split the "Claim/Condition/..." column at the first blank
                  greater than the width value specified.  The default is 80.
                  You may specify 0 or a negative value to never split.

Output Fields:
  Context         Indicates the source of the claim, condition, FQN, or input
                  value. The majority of the lines contain the policy file and
                  line number of the claim, condition or FQN in the format:
                  (s) filename:nn
                  where (s) indicates that the document is one stored in the
                  system, filename is the name of the policy document, and
                  nn is the line number. When a policy document is specified on
                  the command line using the -r option, the indicator (a) (added
                  policy document) or (r) (replaces an existing policy document)
                  will precede the policy document name.  Context contains
                  <access_denied> if the requestor has access to the realm, but
                  does not have access to the policy file.  Context contains
                  INPUT when the source of the field is one of the input
                  parameters or implied from an input parameter.  For example:
                  Google->name=bob@gmail.com implies an input claim of AUTHTYPE
                  Google.  Context contains "ADD INPUT" when you have requested
                  a hypothetical analysis (the -y option) and the required input
                  claim was added to the simulation in order to determine why a
                  user may not have access to a resource.

  Type            Indicates the value field type: Claim, Condition, FQN, or Input

  Claim / Cond... Contains the value of the Claim, Condition, FQN, or Input field.

Examples

Return the policy tree where the claim is "role admin" when the userid name is fred@apcera.com on FQN stagpipe::/

$ apc policy sim "stagpipe::/" -c role=admin -i "name=fred@apcera.com"

Return the policy tree where the claim type is permit or role dev when the Google email and name are fred@apcera.com on all resources.

$ apc policy sim "*" -i "Google->email=fred@apcera.com" -i "Google->name=fred@apcera.com" -c permit -c role=dev

Determine why fred doesn't have permit update on policy::/sandbox/fred

$ apc policy sim "policy::/sandbox/fred" -i name=fred -c permit=update -y
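
Run the same simulation against the policy documents as they existed at a given time (illustrative; the realm, claims, and epoch are placeholders):

$ apc policy sim "policy::/sandbox/fred" -i name=fred -c permit=update -e "Jul 7, 1996"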

provider

The provider command lets you perform operations on your cluster's providers.

See also Registering Providers.

Usage

apc provider <subcommand> [optional args]

Subcommands

  list       - Lists the current providers
  register   - Registers a job or external source as a provider with a service gateway
  delete     - Deletes an existing provider
  show       - Show detailed information about a provider

provider delete

The provider delete command lets you delete a provider.

Usage

apc provider delete <provider-name>

Options

  --force             - Force the provider to be removed even if the backing provider raises errors.

Examples

  $ apc provider delete mysql001

  $ apc provider delete provider::/sandbox::mysql001

provider list

The provider list command shows your cluster's providers. By default, the list is filtered by your current namespace.

Usage

apc provider list

Options

  -l                  - Show providers' fully-qualified-names and UUID.

Examples

  $ apc provider list

  $ apc provider list --ns /dev/prod -l

provider register

The provider register command lets you register a provider with a service gateway. You can also use it to register apps (internal and external) as providers.

Some service gateways may require additional parameters. To supply those for provider registration, pass the params after a -- on the command line. See the example below.

Usage

apc provider register <provider-name> [optional args]

Options

  -t, --type SERVICE_TYPE  - Provider's service type, required to determine the
                             service gateway with which to register the
                             provider.
                             (required; defaults to URL scheme if URL is
                             supplied)

  -d, --description DESC   - Provider's description. (optional)

  -u, --url URL            - Administrative connection information, including
                             credentials, e.g.
                             'postgres://admin:password@example.com:5432'

  -j, --job APP_NAME       - Job name to promote to a provider. (required)

  -p, --port PORT          - Port exposed on job to which the service gateway
                             will connect. Use 0 to let system choose a port.

  --cacert FILE            - The path to a file containing root certificate
                             authorities that service gateways and semantic
                             pipelines use when verifying server certificates.
                             The certificates are in PEM format.

Examples

  $ apc provider register mysql1 --url mysql://admin:password@db.example.com:3306/

  $ apc provider register /prod::postgres -t postgres --job /dev/retail::site

  $ apc provider register postgres -t postgres --job jobname -- --customparam value

provider show

The provider show command shows details about your cluster's providers.

Usage

apc provider show <provider-name> [optional args]

Examples

  $ apc provider show mysql-capsule

revert

Downgrade APC to a previous version's binary (if available locally).

Usage

apc revert

route

The route command performs operations on your cluster's routes.

See also Defining Routes.

Usage

apc route <subcommand> <required args> [optional args]

Subcommands

  add          - Add a new route.
  delete       - Delete an existing route.
  list         - Lists routes.
  map          - Map job to route.
  show         - Show apps sharing a route.
  unmap        - Unmap job from route.

route add

The route add command adds new routes to your apps' exposed ports. This enables the Apcera Platform routers to serve traffic to the route. Apps may share common routes, and will automatically load-balance based on the routes' weights.

Routes are contained by exposed ports, and represent a link into an app's container at the specified port. Port 0 is exposed by default, which is automatically resolved to a free port.

Adding the same route endpoint to a different job will affect the proportion of traffic delivered to each job. If, for instance, both routes are created with 0% weight, then each job will receive half of the traffic.

Usage

apc route add <route-endpoint> [options]

Options

  -a,  --app NAME           - Name of the app.
                              (no default, required)

  -p,  --port NUMBER        - Physical port number exposed on the app.
                              (default: 0)

  -w,  --weight %           - Proportion of traffic delivered to this route,
                              normalized across all apps sharing the route.

  -ho, --https-only bool    - Allow only HTTPS requests. Valid only for HTTP type.
                              HTTP requests will be redirected to HTTPS.
                              (default: false)

Route type

Route type is specified with the --type flag, with --http and --tcp providing short options.

    --type STRING      - Specify a route type, generally 'http' or 'tcp'.
                        (see shortcuts below)

    --http BOOL        - Route is an HTTP route.
                         (default: false)

    --tcp BOOL         - Route is a TCP route.
                         (default: false)

Examples

  $ apc route add http://site.apcera.me --app mysite --weight 50

  $ apc route add 10.0.0.1:1212 --app mydb --tcp --port 4000

  $ apc route add 10.0.0.1:1212 --app mydb --type tcp --port 8080

To auto-provision a route (http or tcp) set the route to auto:

  $ apc route add auto --tcp --app <appname>

route delete

The route delete command removes a route from Apcera Platform. Only routes without job mappings can be deleted. To remove a job mapping use route unmap. This
causes the Apcera Platform routers to stop serving traffic over that route, so the route can then be deleted.

Usage

apc route delete <route-endpoint>

Route Type

  --type STRING      - Specify a route type. Generally 'http' or 'tcp'. (see
                        shortcuts below)

  --http BOOL        - Route is an HTTP route.
                        (default: false)

  --tcp BOOL         - Route is a TCP route.
                        (default: false)

Examples

  $ apc route delete http://site.apcera.me --app mysite

  $ apc route delete 10.0.0.1:1212 --type tcp

route list

The route list command lists your cluster's existing job routes.

Usage

apc route list

Options

  -l                               - Show route weights and the FQNs/UUIDs of
                                     the jobs that use them.

  -fp, --filter-by-package PKG     - Only show routes attached to jobs that are
                                     using the package name PKG.

  -fl, --filter-by-label LBL[=VAL] - Only show routes attached to jobs with the
                                     given label named LBL present. If VAL is
                                     provided then the label must also have that
                                     value. Multiple values can be supplied by
                                     invoking repeatedly, e.g.
                                     "-fl 'X=y' -fl 'W'", resulting in only
                                     showing the routes meeting ALL the label
                                     requirements.

Examples

  $ apc route list

  $ apc route list -l

route map

The route map command adds new mappings to your apps' exposed ports. This enables the Apcera Platform routers to serve traffic to the route. Apps may share common routes, and will automatically load-balance based on the routes' weights.

Routes are contained by exposed ports, and represent a link into an app's container at the specified port. Port 0 is exposed by default, which is automatically resolved to a free port.

Adding the same route endpoint to a different job will affect the proportion of traffic delivered to each job. If, for instance, both routes are created with 0% weight, then each job will receive half of the traffic.

Route type is specified with the '--type' flag, with '--http' and '--tcp' providing short options.

Usage

apc route map <route-endpoint> [options]

Options

  -a,  --job NAME           - Name of the job.
                              (no default, required)

  -p,  --port NUMBER        - Physical port number exposed on the app.
                              (default: 0)

  -w,  --weight %           - Proportion of traffic delivered to this route,
                              normalized across all apps sharing the route.

  -ho, --https-only bool    - Allow only HTTPS requests. Valid only for HTTP type.
                              HTTP requests will be redirected to HTTPS.
                              (default: false)

  -t,  --type STRING        - Specify a route type, either 'http' or 'tcp'.

Examples

  $ apc route map http://site.apcera.me --job mysite --weight 50

  $ apc route map 10.0.0.1:1212 --job mydb --type tcp --port 4000

  $ apc route map 10.0.0.1:1212 --job mydb --type tcp --port 8080

route show

The route show command shows which apps are using a route.

Usage

apc route show <route-endpoint>

Example

  $ apc route show site.apcera.me

route unmap

The route unmap command removes a route from an app's exposed port. This causes the Apcera Platform routers to stop serving traffic over that route.

Usage

apc route unmap <route-endpoint> --app <app-name> [optional args]

Options

  -a, --app NAME            - Name of the app.
                              (no default, required)

  -p, --port NUMBER         - Physical port number exposed on the app.
                              (default: 0)

Route Type:

  --type STRING      - Specify a route type, either 'http' or 'tcp'. (see
                        shortcuts below)

  --http BOOL        - Route is an HTTP route.
                        (default: false)

  --tcp BOOL         - Route is a TCP route.
                        (default: false)

Examples

  $ apc route unmap http://site.apcera.me --app mysite

  $ apc route unmap 10.0.0.1:1212 --app mydb --type tcp --port 4000

rule

The rule command lets you interact with your cluster's semantic pipeline rules, which govern interactions between apps and services.

See also Creating and Using Semantic Pipeline Rules.

Usage

apc rule <subcommand> [optional args]

Subcommands

  create  - Create a new semantic pipeline rule.
  delete  - Delete a semantic pipeline rule.
  list    - Lists your cluster's rules.
  show    - Show a rule's detailed information.

rule create

The rule create command lets you add new semantic pipeline rules, for governing a given provider's behavior.

Rules created against a given provider are enforced against all services created from that provider. Rules created against a given service will be scoped only to act upon consumers of that service.

You cannot supply both a provider and a service.

Usage

apc rule create <rule-name> [optional args]

Options

  -s, --service NAME      - Service upon which the event rule will act.

  -p, --provider NAME     - Provider upon which the event rule will act, after
                            it has been created.

  -j, --job NAME          - Job that the SP rule is enforced against.

  -t, --type TYPE         - Type of event registration to add to the semantic
                            pipeline. Can be either 'hook' or 'notification'.
                            (also accepts 'h' for "hook" and 'n' for
                            "notification")

  -u, --url URL           - URL to receive the hook or notification request.
                            Required for notifications, but optional for hooks.

  --commands COMMANDS     - Commands that will trigger the hook or notification.
                            Should be a comma-separated list.

Option for hooks

  --action ACTION         - If no URL is given, you can specify an action
                            directly. URL and action cannot be specified at the
                            same time. Action can be 'allow', 'deny', 'log', or
                            'drop'.
                            (default: 'deny')

Option for notifications

  --stage STAGE           - Stage when the notification URI should be called.
                            Stage can be 'pre', 'post', or 'roundtrip'.
                            (also accepts 'rt' for 'roundtrip')

Examples

  $ apc rule create denydelete-PG --service pgdb  -t hook --commands delete

  $ apc rule create my_pg_rule -t notification --provider /apcera/providers::postgres-provider -u http://site/url --stage post

rule delete

The rule delete command lets you delete an existing event rule by name.

Usage

apc rule delete <rule-name>

Example

  $ apc rule delete mySampleRule

rule list

The rule list command lets you list your cluster's service event rules.

Usage

apc rule list

Options

  -l                  - Show rules' fully-qualified-names, version number, and UUID.

Example

  $ apc rule list

rule show

The rule show command shows detailed information about a specific event rule.

Usage

apc rule show <rule-name>

Example

  $ apc rule show sample

secret

The 'secret' command performs operations on the Apcera Platform secret store.

In the Apcera Platform, a secret is a PEM file that contains an SSL certificate, a private key, or both a certificate and private key. A certificate/key pair can be installed to the Apcera cluster’s router for a specific domain (example.com, for instance). This allows applications to have routes with custom domains, instead of the domain where the cluster is installed.

Usage

apc secret <subcommand> <required args> [optional args]

Subcommands

  delete    - Delete a secret from the secret store.
  import    - Import secrets to the secret store.
  install   - Bind a secret to a domain.
  list      - List all secrets in the secret store.
  show      - Show details of a particular secret in the secret store.
  uninstall - Unbind secret from domain.

secret delete

Deletes a secret object from secret store.

Usage

apc secret delete <secret-name>

Example

apc secret delete mySecret

secret import

Import PEM-encoded secrets into your cluster. If no type is specified, Apcera will set the type based on the contents of the file.

Usage

apc secret import name <file>

Command options:

    -t, --type SECRET_TYPE  - Type of secret to upload. Possible type values are
                              "certificate", "private_key" and
                              "certificate_and_private_key".  If no type is
                              specified, the platform will determine the type
                              based on the file content. (optional)

    -d, --description DESC  - Secret's description. (optional)

    -p, --password          - Prompt for password for password-protected secret.
                              This field is REQUIRED if importing password-
                              protected PEM files. APC will prompt for the
                              password.

Examples

  $ apc secret import /path/to/certFile.pem

  $ apc secret import --type="certificate" /path/to/certFile.pem

  $ apc secret import --type="certificate" -d="Test Certificate file" /path/to/certFile.pem

  $ apc secret import --type="private_key" /path/to/secretFile.pem -p

secret install

Install certificate on the router. A certificate/key pair can be installed to the Apcera cluster’s router for a specific domain (example.com, for instance). This allows applications to have routes with custom domains, instead of the domain where the cluster is installed.

Usage

apc secret install <certificate-name> --privatekey <secret-name> --domain <domain>

Command options:

    -p, --privatekey  - Name of private-key to bind. This must be a secret
                        object of type "private_key". This parameter should only
                        be used if certificate is of type "certificate".

    -d, --domain       - The domain to bind the secret and certificate.
                         (required)

Examples

  $ apc secret install test-cert --privatekey test-key --domain test.com

  $ apc secret install test-cert-and-key --domain test.com

secret list

List secrets in secret store.

Usage

apc secret list

Command options

    -t, --type SECRET_TYPE  - List secrets of a particular secret type.
                              This value can be "certificate", "private", or
                              "all" (default).

    -i, --installed         - List all installed secrets on domains.

Examples

  $ apc secret list

  $ apc secret list --installed

  $ apc secret list --type "certificate"

secret show

Print details of a secret.

Usage

apc secret show <secret-name>

Example

  $ apc secret show mySecret

secret uninstall

Uninstall a certificate and secret from a domain.

Usage

apc secret uninstall <domain>

Example

  $ apc secret uninstall test.com

service

The service command lets you interact with your cluster's services.

See also Working with Services and Bindings.

Usage

apc service <subcommand> [args]

Subcommands

  create     - Creates a new service
  bind       - Binds a service to a job
  unbind     - Removes a binding from a job
  delete     - Deletes a service
  list       - Lists your cluster's services
  show       - Show detailed information about an individual service

service bind

The service bind command binds a job to an existing service.

The binding steps will create any credentials necessary for the application to connect to the service, and associate the connection details with the application.

The connection details are exposed through environment variables given to the application. If the specified application is already running, it will be restarted with the new connection information.

URIs for service consumption are replicated in three environment variables placed upon the job's container:

  • <SERVICE_TYPE>_URI (e.g., MYSQL_URI for a MySQL database)
    • In the case where an app is bound to multiple services of the same type,
      this variable cannot be relied upon. Instead, use the binding name below
  • <BINDING_NAME>_URI (e.g., MY_BINDING_URI)
  • <SERVICE_NAME>_URI (e.g., MY_DB_URI)

The specific binding environment variables are displayed after a successful bind.

Auto-generated binding names cannot be guaranteed to be unique since they are generated using service and job names. Hyphens are replaced with underscores. For instance, a binding name between a service "my-service" and job "myjob" will be auto-generated as "bind_my_service_myjob".

Some service gateways may require additional parameters. To supply those for service bindings, pass the params after a -- on the command line. See the example below.

NOTE: In batch mode, APC will automatically restart the job you are binding to when required.

Usage

apc service bind <service-name> --job <job-name> [optional args]

Options

  -j, --job NAME      - The job to bind to the above service
                        (required, no default)

  -n, --name NAME     - The name of the binding created between the service and
                        job, used to generate the connection environment
                        variable
                        (default: bind_<service-name>_<job-name>)

  --cacert FILE       - The path to a file containing root certificate
                        authorities that semantic pipelines use when verifying
                        server certificates. The certificates are in PEM format.

Examples

  $ apc service bind tododb --job todo

  $ apc service bind tododb --job todo -- --customparam value

  $ apc service bind /prod/retail::postgres-db -j /dev::app -n db

service create

The service create command lets you create a new service on a provider. Services are often databases, queues, or generic API services.

The related provider list command shows which providers are available in your cluster. Providers must be specified when a service gateway supports a provider tier.

When a job is bound to the created service, it must look for specific environment variables to know how to consume the service. URIs for service consumption are replicated in three environment variables placed upon the job's container:

  • <SERVICE_TYPE>_URI (e.g., MYSQL_URI for a MySQL database)
    • In the case where an app is bound to multiple services of the same type, this variable cannot be relied upon. Instead, use the binding name below
  • <BINDING_NAME>_URI (e.g., MY_BINDING_URI)
  • <SERVICE_NAME>_URI (e.g., MY_DB_URI)

Auto-generated binding names cannot be guaranteed to be unique since they are generated using service and job names. Hyphens are replaced with underscores.

For instance, a binding name between a service "my-service" and job "myjob" will be auto-generated as "bind_my_service_myjob".

Some service gateways may require additional parameters. To supply those for service creation, pass the params after a -- on the command line. See the example below.

NOTE: in batch mode, APC will automatically restart the job you are binding to when required.

Usage

apc service create <service-name> [args]

Options

 -p, --provider NAME    - Provider to create the service on. Service gateways
                          that support a provider tier (e.g. mysql, postgres)
                          require an explicit provider to be chosen.
                          (no default)

 -b, --binding NAME     - Binding name (on the job)
                          (generated by default if job is provided, with form:
                          bind_<service-name>_<job-name>)

 -d, --description DESC - Service description.
                          (optional)

 -j, --job NAME         - Name of a job to which you can (optionally) bind the
                          new service.

 -t, --type TYPE        - Service type of the provider. Required to determine
                          the service gateway with which to register the
                          provider. Not required when a provider is supplied.

Examples

  $ apc service create customerdb

  $ apc service create customerdb -p /dev::mysql

  $ apc service create customerdb -p /dev::mysql -- --customparam value

  $ apc service create customerdb -p mysql -j /prod/retail::my_app

  $ apc service create customernetwork --type network -- --domainname www.google.com --protocol tcp --portrange all

service delete

The service delete command lets you delete a service from your cluster.

Usage

apc service delete <service-name>

Command options:

  --force             - Force the service to be removed and allow errors from the
                        backing service. Can be useful if the service is no longer
                        available or is experiencing failures.

Examples

  $ apc service delete customerdb

  $ apc service delete /sandbox::customerdb

service list

The service list command lets you view your cluster's available services. By default, this list will be filtered by your current namespace.

Usage

apc service list

Options

  -l                  - Show services' fully-qualified-names

Examples

  $ apc service list

  $ apc service list -l

service show

The service show command shows details about your cluster's services, and the jobs bound to them.

Usage

apc service show <service-name> [args]

Example

  $ apc service show customerdb

service unbind

The service unbind command deletes the binding between a single service and a job running in the Apcera Platform.

Usage

apc service unbind <svc-name> --job <job-name> [optional args]

Options

  -j, --job NAME      - The name of the application to unbind a service from
                        (required, no default)

  --force             - Force the binding to be removed and allow errors from the
                        backing service. Can be useful if the service is no longer
                        available or is experiencing failures.

Example

  $ apc service unbind mydb --job webapp

stager

The stager command lets you interact with your cluster's stagers.

See also Application Staging.

Usage

apc stager <subcommand> [optional args]

Subcommands

  create       - Create and deploy a new stager
  delete       - Delete a stager
  export       - Export stager(s) to a single package file
  from file    - Create a new stager from a tarball
  from package - Create a new stager from a package
  health       - View a stager's health
  list         - List your cluster's stagers
  remove       - Remove a stager from a staging pipeline
  show         - Show detailed information about a stager

stager create

The stager create command creates a new stager from a file or directory. You can insert it into an existing staging pipeline, or create a new pipeline with the --pipeline flag.

Usage

apc stager create <stager-name> [optional args]

Options

  -p, --path PATH       - Path to the stager being deployed.
                          (default: current path)

  --pipeline            - Specifies that a staging pipeline should be created as
                          well. Its FQN will be constructed from its name.
                          (default: false)

  -sp, --staging NAME   - Staging pipeline to use when deploying the stager.
                          (default: do not use a staging pipeline)

  -ae, --allow-egress   - Allow the stager open outbound network access.

  -c, --cpus CPU        - Milliseconds of CPU time per second of physical time.
                          May be greater than 1000ms/second in cases where time
                          is across multiple cores.
                          (default: 0ms/second, uncapped)

  -m, --memory MEM      - Memory allocated to the stager, in MB.
                          (default: 256MiB)

  -d, --disk DISK       - Disk space to allocate, in MB.
                          (default: 1GiB)

  -n, --netmin NET      - Network throughput to allocate (floor), in Mbps.
                          (default: 5Mbps)

  -nm, --netmax NET     - Amount of network throughput to allow (ceiling), in Mbps.
                          (default: 0Mbps, uncapped)

  -e, --env-set ENV=VAL - Sets an environment variable on the stager. Multiple
                          values can be supplied by invoking the flag multiple
                          times, e.g. "--env-set 'HOME=/usr/local'
                                       --env-set 'PATH=/usr/bin:/opt/local'"

  --package-name NAME   - Package name, if it should be different than the
                          stager.

  --start-cmd CMD       - Command to start the stager.

  --additive            - Indicates that the stager should run within the
                          application's runtime environment.
                          (default: false)

  -ht, --hard-tags      - Hard scheduling tags to add to the stager.

  -st, --soft-tags      - Soft scheduling tags to add to the stager.

Examples

  $ apc stager create sample1

  $ apc stager create /prod/dev::sample2 --start-cmd "./mystager"

  $ apc stager create sample3 -m 512MB --start-cmd "./mystager"

stager delete

The stager delete command lets you delete a stager. Without the --force option, the command will fail if the stager is part of a staging pipeline.

The command will also clean up any packages associated with the deleted stager. If you supply the --force option, the stager will be removed from any pipelines regardless of namespace, policy permitting.

Usage

apc stager delete <stager-name> [optional args]

Options

  -f, --force         - Forcefully remove stager from all pipelines.
                        (default: false)
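
Examples

Delete a stager, forcing its removal from any pipelines it belongs to (illustrative; the stager name is a placeholder):

  $ apc stager delete mystager --force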

stager export

The stager export command exports one or many stagers from an Apcera Platform cluster into a cntmp file, which can then be imported into other Apcera Platform clusters.

Usage

apc stager export <stager-name> [...]
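
Example

Export a stager into a .cntmp file (illustrative; the stager name is a placeholder):

  $ apc stager export /sandbox::mystager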

stager from file

The stager from file command creates a new stager from the specified file, which may be a text file or a tarball.

Usage

apc stager from file <file> <stager-name> [optional args]

Options

  --pipeline          - Specifies that a staging pipeline should be created as
                        well. Staging pipeline's name will be the same as the
                        stager's.
                        (default: false)

  -s, --staging NAME  - Staging pipeline to use when creating the stager.
                        (default: do not use a staging pipeline)

  -ae, --allow-egress - Allow the stager open outbound network access.

  -c, --cpus CPU      - Milliseconds of CPU time per second of physical time.
                        May be greater than 1000ms/second when CPU time is
                        spread across multiple cores.
                        (default: 0ms/second, uncapped)

  -m, --memory MEM    - Memory to allocate to the stager, in MB.
                        (default: 256MiB)

  -d, --disk DISK     - Disk space to allocate, in MB.
                        (default: 1GiB)

  -n, --netmin NET    - Network throughput to allocate (floor), in Mbps.
                        (default: 5Mbps)

  -nm, --netmax NET   - Amount of network throughput to allow (ceiling), in Mbps.
                        (default: 0Mbps, uncapped)

  --package-name NAME - Name of the package, if it should be named differently
                        from the stager. Accepts names and FQNs.

  --start-cmd CMD     - Command to start the stager, if different from the
                        one set by the package.

  --additive          - Indicates that the stager should run within the
                        application's runtime environment.
                        (default: false)

  -ht, --hard-tags    - Hard scheduling tags to add to the stager.

  -st, --soft-tags    - Soft scheduling tags to add to the stager.

Examples

  $ apc stager from file mystager.tar.gz sample1

  $ apc stager from file mystager.tar.gz /foo/bar::sample2 --start-cmd "./mystager"

  $ apc stager from file mystager.tar.gz sample3 -m 512MB --start-cmd "./mystager" \
    -j /dev::my-sample3 --additive

stager from package

The stager from package command creates a new stager from a package that has already been staged and imported into your cluster. This package may already be in use by other apps or stagers.

Usage

apc stager from package <pkg-name> <stager-name> [optional args]

Options

  --pipeline          - Specifies that a staging pipeline should be created as
                        well. Staging pipeline's name will be the same as the stager's.
                        (default: false)

  -ae, --allow-egress - Allow the stager open outbound network access.

  -c, --cpus CPU      - Milliseconds of CPU time per second of physical time.
                        May be greater than 1000ms/second when CPU time is
                        spread across multiple cores.
                        (default: 0ms/second, uncapped)

  -m, --memory MEM    - Memory to allocate to the stager, in MB.
                        (default: 256MiB)

  -d, --disk DISK     - Disk space to allocate, in MB.
                        (default: 1GiB)

  -n, --netmin NET    - Network throughput to allocate (floor), in Mbps.
                        (default: 5Mbps)

  -nm, --netmax NET   - Amount of network throughput to allow (ceiling), in Mbps.
                        (default: 0Mbps, uncapped)

  --start-cmd CMD     - Command to start the stager, if different from the
                        one set by the package.

  --additive          - Indicates that the stager should run within the
                        application's runtime environment.
                        (default: false)

  -ht, --hard-tags    - Hard scheduling tags to add to the stager.

  -st, --soft-tags    - Soft scheduling tags to add to the stager.

Examples

  $ apc stager from package mystager sample1

  $ apc stager from package mystager sample2 --start-cmd "./mystager"

  $ apc stager from package mystager sample3 -m 512MB --start-cmd "./mystager"

stager health

The stager health command shows the status and health score of a started stager. A stager must be started before its health can be viewed.

Usage

apc stager health <stager-name>

Example

  $ apc stager health myjob

stager list

The stager list command shows your cluster's stagers.

Usage

apc stager list

Options

  -l                               - Show stagers' fully-qualified names, UUID,
                                     and version number.

  -fp, --filter-by-package PKG     - Only show stagers that are using the package
                                     name PKG.

  -fl, --filter-by-label LBL[=VAL] - Only show stagers that have the label LBL.
                                     If VAL is provided, the label must also have that
                                     value. Multiple filters can be supplied by invoking
                                     the flag repeatedly, e.g. "-fl 'X=y' -fl 'W'"; only
                                     stagers meeting ALL the label requirements are shown.

Examples

  $ apc stager list

  $ apc stager list --ns /prod -l
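
The following filtered invocations are illustrative; the package and label names are placeholders:

  $ apc stager list -fp ruby-package

  $ apc stager list -fl 'env=prod' -fl 'team'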

stager remove

The stager remove command removes one or more stagers from the named staging pipeline. The stagers are not deleted, and may be re-used. To delete a stager, use the stager delete command.

To remove multiple stagers, include them as a space-separated arg list.

Usage

apc stager remove <pipeline-name> <stager-names> [optional args]

Examples

  $ apc stager remove ruby-pipeline ruby-rspec

  $ apc stager remove ruby-pipeline ruby-rspec regression-test

stager show

The stager show command shows detailed information about individual stagers.

Usage

apc stager show <stager-name>

Options

  --compliance     - Adds a column describing whether or not the stager is
                     compliant with current policy.

Example

  $ apc stager show ruby-stager
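
To include the policy-compliance column (stager name illustrative):

  $ apc stager show ruby-stager --compliance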

staging pipeline

The staging command must be followed by the keyword pipeline.

The staging pipeline command lets you interact with your cluster's staging pipelines.

See also Application Staging.

Usage

apc staging pipeline <subcommand> [optional args]

Subcommands

  append  - Add stagers to the end of a staging pipeline
  create  - Create a staging pipeline from a stager
  clone   - Clone an existing staging pipeline
  prepend - Add stagers to the beginning of a staging pipeline
  export  - Export a staging pipeline to a package file
  delete  - Delete a staging pipeline
  list    - List your cluster's staging pipelines
  remove  - Remove stagers from a staging pipeline
  show    - Show detailed information about a staging pipeline

staging pipeline append

The staging pipeline append command lets you append stagers to the end of a staging pipeline. To append multiple stagers, include them as space-separated args.

Usage

apc staging pipeline append <pipeline-name> <stager-names> [...]

Examples

  $ apc staging pipeline append ruby-pipeline /prod::ruby-rspec
  Success!

  $ apc staging pipeline append /dev::ruby-pipeline /prod::ruby-rspec regression-test
  Success!

staging pipeline clone

The staging pipeline clone command clones an existing staging pipeline's properties into a new staging pipeline.

The new staging pipeline is created in your current namespace unless otherwise specified.

Usage

apc staging pipeline clone <staging-pipeline-name> [optional args]

Options

  -n, --name                  - The name of the new staging pipeline
                                (default: cloned staging pipeline name in user's namespace)

Examples

  $ apc staging pipeline clone ruby-pipeline

  $ apc staging pipeline clone /prod::node-pipeline --name /dev::node-pipeline

staging pipeline create

The staging pipeline create command creates a staging pipeline from a stager within the Apcera Platform. The new staging pipeline will contain the stager from which it was created.

Staging pipelines are created in your current namespace.

Usage

apc staging pipeline create <stager-name> [optional args]

Options

  -n, --name                  - The name of the new staging pipeline
                                (default: name of the stager)

Examples

  $ apc staging pipeline create ruby-stager

  $ apc staging pipeline create node-stager --name nodejs-pipeline

staging pipeline delete

The staging pipeline delete command lets you delete an existing staging pipeline from your cluster.

Usage

apc staging pipeline delete <pipeline-name>

Examples

  $ apc staging pipeline delete ruby

  $ apc staging pipeline delete /prod/retail::java

staging pipeline export

The staging pipeline export command exports a staging pipeline from an Apcera Platform cluster into a cntmp file, which can then be imported into other Apcera Platform clusters.

Usage

apc staging pipeline export <staging-pipeline-name> [...]
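
Example

A hypothetical invocation (pipeline name illustrative):

  $ apc staging pipeline export ruby-pipeline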

staging pipeline list

The staging pipeline list command lists your cluster's staging pipelines.

Usage

apc staging pipeline list

Options

  -l                  - Show pipelines' fully-qualified-names, version number, and UUID.

Examples

  $ apc staging pipeline list

  $ apc staging pipeline list -l

staging pipeline prepend

The staging pipeline prepend command lets you add stagers to the beginning of a staging pipeline. To prepend multiple stagers, include them as space-separated args.

Usage

apc staging pipeline prepend <pipeline-name> <stager-names> [...]

Examples

  $ apc staging pipeline prepend ruby-pipeline /prod/retail::virus-check

  $ apc staging pipeline prepend /site/dev::ruby-pipeline virus-check /dev::minify-js

staging pipeline remove

The staging pipeline remove command lets you remove stagers from a staging pipeline. To remove multiple stagers, include them as space-separated args.

Usage

apc staging pipeline remove <pipeline-name> <stager-names> [args]

Examples

  $ apc staging pipeline remove /foo/bar::ruby-pipeline ruby-rspec

  $ apc staging pipeline remove ruby-pipeline /dev::ruby-rspec /::regression-test

staging pipeline show

The staging pipeline show command shows detailed information about individual staging pipelines.

Usage

apc staging pipeline show <staging-pipeline-name>

Examples

  $ apc staging pipeline show ruby

subnet pool

The 'subnet pool' command lets you interact with your cluster's subnet pools.

Usage

apc subnet pool <subcommand> [args]

Subcommands

  create     - Creates a new subnet pool
  delete     - Deletes a subnet pool
  list       - Lists your cluster's subnet pools
  show       - Show detailed information about an individual subnet pool

subnet pool create

The 'subnet pool create' command lets you create a new subnet pool. Unless otherwise specified, a subnet pool is created in the default namespace and is private with the prefix 10.224.0.0/12. These default properties can be overridden with the options listed below.

Usage

apc subnet pool create <subnet-pool-name> [args]

Options

  -p, --prefix PREFIX           - Desired Subnet Prefix (Optional).

  -c, --max-containers-per-host - Max number of containers that can run on any
                                  single host (Optional).

  -d, --desc                    - Short description of the pool (Optional).

  --default                     - Make this the default pool (Optional).

Examples

  $ apc subnet pool create pool-1

  $ apc subnet pool create /prod/pool-a --prefix 10.1.0.0/16

  $ apc subnet pool create /prod/pool-a --prefix 10.0.0.0/8 --max-containers-per-host 16
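
To also mark a new pool as the cluster default and give it a description (pool name and prefix are illustrative):

  $ apc subnet pool create pool-default --prefix 10.2.0.0/16 --default --desc "default pool"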

subnet pool delete

The 'subnet pool delete' command deletes an existing subnet pool from your cluster. The subnet pool must be empty; that is, it must not have any member networks. To delete a subnet pool that still has networks, first delete those networks (see apc network delete).

Usage

apc subnet pool delete <subnet-pool-name>

Examples

  $ apc subnet pool delete pool-1

  $ apc subnet pool delete /prod/dev::pool-a

subnet pool list

The 'subnet pool list' command lets you view your cluster's available subnet pools. By default, this list will be filtered by your current namespace.

Usage

apc subnet pool list

Options

  -l                  - Show subnet pools' fully-qualified-names and UUID

Examples

  $ apc subnet pool list

  $ apc subnet pool list -l

subnet pool show

The 'subnet pool show' command shows detailed information about a specific subnet pool.

Usage

apc subnet pool show <subnet-pool-name>

Example

  $ apc subnet pool show pool-1

target

The target command targets APC to your Apcera Platform cluster. If you don't supply a URL, it will show your current target.

APC stores the target in a .apc file in your home directory (%USERPROFILE% on Windows), or in a directory specified by the APC_HOME env variable.

See also Targeting your cluster using APC.

Usage

apc target <url>

Options

  url - URL of your Apcera Platform cluster. The scheme defaults to "https" if
        none is given.

Examples

  $ apc target

  $ apc target apcera.io

  $ apc target http://apcera.io:8080

update

The apc update command checks for updates to your APC binary.

Usage

apc update

Options

  --force - Downloads APC from the cluster even if your local copy is
            compatible with the cluster.
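
Examples

Both invocations below are illustrative; the forced variant re-downloads APC even when your local copy is already compatible with the cluster.

  $ apc update

  $ apc update --force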

version

The version command shows APC's current version.

Usage

apc version

Example

  $ apc version