Using SSH and App Console

This section describes how to use SSH to connect to job instances in the platform, and how to use app consoles to clone and debug live apps.


All client traffic (APC, the web console, etc.) starts by connecting to the routing tier; clients never connect to any other endpoint. The routers hold routes to the API server(s) and forward client requests to them.

The following diagram shows the relationship among these components.


SSH flow

When you initiate an SSH connection, the client sends a request to the API server to get information about the job, including its FQN and other metadata. Once the client knows about the job, or, if specified on the client, the exact instance of the job (by instance UUID), it tells the API server to create an SSH connection to the job on a particular IM host.

The API server then connects to a public port opened on the IM host in the high port range (1024-65535), which forwards to the containerized job on port 222 via NAT. After the SSH connection is established from the API server to the IM, the API server upgrades the client connection to a WebSocket connection.
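For illustration only, the NAT forwarding described above behaves like an iptables DNAT rule on the IM host; the high port 40222 and the container IP 192.168.0.12 below are hypothetical values, not part of the platform's actual configuration:

```shell
# Illustrative only: forward TCP traffic arriving on public high port 40222
# of the IM host to sshd listening on port 222 inside the container at the
# (hypothetical) address 192.168.0.12.
iptables -t nat -A PREROUTING -p tcp --dport 40222 \
  -j DNAT --to-destination 192.168.0.12:222
```

In practice the API server makes this connection on the client's behalf; clients never reach the IM host directly.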

Now that the client connection is a WebSocket connection, data from the client goes to the API server and is forwarded to the appropriate job over the established SSH connection; responses return to the API server, which streams them back to the client over the WebSocket connection. The client never talks directly to the job on the IM; everything is proxied through the API server and routing tier.

Policy required for SSH

To connect to a job instance using SSH, you must have policy in place that permits it, for example:

job::/sandbox/tom {
  if (>name == "tom") {
    permit ssh
  }
}

Access to the job container OS via SSH is controlled through policy. If you receive a policy error (missing claim "permit ssh"), you do not have the appropriate level of permissions on job resources in the specified namespace to allow SSH.

Using SSH

You can connect via SSH to the live filesystem on the job instance using the following set of APC commands.

Enabling SSH

To enable SSH, use the following syntax (assuming you have policy permission to do so):

apc app update <app-name> --allow-ssh

For example:

apc app update my-app --allow-ssh --restart

You should see that the SSH port 222 is exposed, and the app is updated and restarted.

Verifying that SSH is enabled

To verify that SSH port 222 is exposed, use the following command:

apc app show <app-name>

For the Exposed Ports entry, you will see that port 222 is listed. In addition, the Tags entry will show “ssh: true”.

Connecting via SSH

Once you have exposed the SSH port, you can connect directly to the live instance using the command apc app connect. From there you can explore the file system.

For example, to connect to the job instance via SSH, issue the following command:

apc app connect <app-name>

Or, if the job is a capsule:

apc capsule connect <cap-name>
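Once connected, you are in a shell inside the job's container and can explore it with ordinary Linux utilities (the commands below are standard tools, not APC commands):

```shell
# Run inside the SSH session on the job instance.
ls /          # inspect the container filesystem layout
ps aux        # list the processes running inside the job
env | sort    # show environment variables, including any service bindings
```

Type exit when you are done to close the session and return to your local shell.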

Removing SSH

You can remove SSH ingress by issuing the following command:

apc app update <app-name> --remove-ssh --restart

To verify that SSH access is removed, issue the following command:

apc app show <app-name>

You should see that port 222 is no longer exposed and that the Tags entry shows ssh: false.

Using app console

While you can use SSH to connect to an app or capsule, this approach is not ideal if the app is live (in production). Furthermore, exposing SSH port 222 is not recommended for production apps.

In this case Apcera provides you with the apc app console command that gives you a convenient way to debug live apps without having to expose the SSH port and possibly disrupt the live app. This command creates a cloned copy of your app and connects you to the clone via SSH.

For example, to create a clone of an app and connect to the clone via SSH:

apc app console <app-name>

The clone is a copy of your app that runs in an Apcera capsule job instance. The clone includes any app service bindings, environment variables, etc. In addition, all packages are mounted, so you can explore the app filesystem. The connection is via SSH but the live app has not exposed this port. SSH is exposed for the cloned app only so that you can debug it.

For example, you can execute the following command to list all the environment variables:

env

Not all apps can have a console created from them. Single-binary apps, such as Apcera-provided apps or some minimal Docker images (such as NATS), may not support consoles because they do not mount an operating system.

To tear down the cloned copy of your app, simply exit the SSH session:

exit

You should see that exiting the SSH session stops and deletes the console app.

If you run the command apc app show <app-name>, you will see that there are no remnants of the console. The actual job instance, however, is still running, and its SSH port remains closed.

For a tutorial on app consoles, see Debugging Live App Using App Console.