Deploying Docker and Linking Jobs

This example demonstrates how to deploy a Docker image and a Go app, and how to create a job link between the client and server. The server is an instance of the NATS Docker image, a high-performance messaging server written in Go. The client is a simple Go application that periodically performs a NATS request to the server.

The tutorial also demonstrates how to tail application logs using APC and the web console, and explores the dynamic service binding capabilities of Apcera Platform.

Deploy the NATS server Docker image

You can deploy a Docker image to Apcera as easily as any other kind of workload.

To deploy and start the NATS Docker image, run the following command:

apc docker run nats-server --image nats

The system downloads the Docker image layers and creates a new package that contains each layer as a package resource. Example output from the command is shown below; when the command completes you should see a "Success!" message indicating that the server is running.

$ apc docker run nats-server --image nats
[nats-server] -- Pulling Docker image -- checking policy
[nats-server] -- Pulling Docker image -- checking if package FQN is taken
[nats-server] -- Pulling Docker image -- fetching image metadata
[nats-server] -- Pulling Docker image -- preparing download
[nats-server] -- Pulling Docker image -- fetching 6 layers
[nats-server] -- Pulling Docker image -- creating package
[nats-server] -- Pulling Docker image -- downloading layer ca5d6e29
[nats-server] -- Pulling Docker image -- downloading layer 4feba52e
[nats-server] -- Pulling Docker image -- downloading layer 6e14e425
[nats-server] -- Pulling Docker image -- downloading layer 6c56fb37
[nats-server] -- Pulling Docker image -- downloaded layer ca5d6e29
[nats-server] -- Pulling Docker image -- downloading layer 095806f3
[nats-server] -- Pulling Docker image -- downloaded layer 4feba52e
[nats-server] -- Pulling Docker image -- downloaded layer 6c56fb37
[nats-server] -- Pulling Docker image -- downloaded layer 6e14e425
[nats-server] -- Pulling Docker image -- downloaded layer 095806f3
[nats-server] -- Pulling Docker image -- downloaded all layers
[nats-server] -- Creating job
[nats-server] -- Configuring job -- tagging package
[nats-server] -- Starting job

You can tail the logs for the NATS server using APC or the web console.

To tail the Docker job log using APC, run the following command:

$ apc app logs nats-server

The output should appear as follows:

$ apc app logs nats-server
[stderr] [5] 2015/06/30 19:28:58.631339 [INF] Starting gnatsd version 0.6.0
[stderr] [5] 2015/06/30 19:28:58.631442 [INF] Starting http monitor on port 8333
[stderr] [5] 2015/06/30 19:28:58.631633 [INF] Listening for client connections on 0.0.0.0:4222
[stderr] [5] 2015/06/30 19:28:58.631714 [INF] gnatsd is ready

Note that the server is configured to listen on port 4222. This will be important when we later link jobs. Exit the log tail session view by pressing Ctrl + C in the command prompt session.

To tail the job log using the web console, do the following:

  1. Open the web console for your cluster by going to http://console.<cluster-name>.<top-level-domain> (for example, http://console.my-cluster.apcera.net).
  2. Click Jobs in the navigation, then select the nats-server Docker job you created.
  3. Click Tail Logs in the upper-right corner of the screen to display the same log items.

Deploy the NATS client application

Next you deploy a sample NATS client application written in Go that will act as a client to the nats-server you created.

Exploring the NATS client application

For the purposes of this tutorial the key implementation detail is how the client application obtains the URI to connect to the NATS server.

To explore the NATS client application:

  1. Clone the sample-apps GitHub repo, if you haven't already.
  2. Open sample-apps/nats-ping/nats-ping.go in a text editor and locate the following line of code:

     // Check env for NATS_URI
     if nuri := os.Getenv("NATS_URI"); nuri != "" { ... }
    

The NATS_URI environment variable is set automatically on the client application's container after you link the client and server jobs (in the next section) and contains the URI to the NATS server instance. If the nats-server job is restarted it will likely appear at a different location on the network. The value of NATS_URI on the client application is updated accordingly with this new location. This is an example of dynamic binding in Apcera.

Deploying the NATS client application

You use the apc app create command to deploy the NATS client application.

To deploy the nats-client app:

  1. Open a terminal and cd to the sample-apps/nats-ping directory.

     $ cd sample-apps/nats-ping
    
  2. Run the following command:

     $ apc app create nats-client --disable-routes --batch
    

    By default, the app create command creates a route (endpoint) where a user can access the application in a browser. In this case we disable automatic route generation by adding the --disable-routes option because we aren't deploying a web app. The --batch option deploys the app silently (without any further prompts).

As before, the correct staging pipeline for Go applications is automatically started and begins the staging process. When complete, you should see a "Success!" message in the terminal.

If you tried to start the nats-client application at this point, it would fail with start-up errors because the NATS_URI environment variable has not yet been set on the client's container; consequently, it can't connect to the server.

Next we create a job link between the NATS client and server applications. To do this you use the apc job link command, which has the following signature:

apc job link <source-job> --to <target-job> --name <link-name> --port <target-job-port>

This creates a link from <source-job> to <target-job> on the port specified by <target-job-port>. The value of the --name parameter (<link-name>) determines the name of the environment variable set on the source job: the value is upper-cased and concatenated with "_URI" to form the final variable name. For instance, if --name is assigned the value foo, an environment variable named FOO_URI is set on the client application.
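The name-to-variable mapping can be expressed as a one-line Go helper. This is a sketch of the naming convention only, not part of APC itself:

```go
package main

import (
	"fmt"
	"strings"
)

// linkEnvVar shows how the --name value maps to the environment
// variable injected on the source job: upper-case the name and
// append "_URI".
func linkEnvVar(linkName string) string {
	return strings.ToUpper(linkName) + "_URI"
}

func main() {
	fmt.Println(linkEnvVar("nats")) // NATS_URI
	fmt.Println(linkEnvVar("foo"))  // FOO_URI
}
```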

To create a job link between the NATS client and server:

  • In a terminal, run the following command:

      $ apc job link nats-client --to nats-server --name nats --port 4222
    

    If all goes well the output from this command should look like the following:

          Stopping the job... done
          Binding "job::/sandbox/tim::nats-client" to "job::/sandbox/tim::nats-server"... done
          Starting the application... done
          Waiting for the job to start...
          [stderr] [EVENT] Connected to tcp://169.254.0.16:10000 [ID:efde84a0b6aa903ebbe05b2982d03b22]
          [stderr] [INFO] Delay is 2s
          [stderr] [PING] Latency: 1.124473ms
          Success!
    

Note the following about this command:

  • In this example, the client expects the environment variable to be named NATS_URI, so --name is assigned the value "nats", as shown in the command above.
  • If the target job has only one port exposed you can omit the --port option. In this scenario the nats-server app has multiple ports exposed, so the --port argument is required.

You should see that the system successfully linked the nats-client job to the nats-server job. The nats-client starts and begins periodically sending NATS requests to the server; each round trip is logged as a [PING] Latency message.

Exploring Dynamic Bindings and Environment Variables

One of the main benefits of job linking is that connection string URIs aren't hard-coded into the client application. With a hard-coded URI, if the server is restarted and appears at a different network location, the client can no longer connect, and the Go code would need to be updated with the new connection string. With job linking in Apcera, however, if the target of a job link is restarted the client app can still connect, regardless of the target's actual location on the network.

A good way to see this type of dynamic binding in action is to stop and restart the nats-server while tailing the nats-client app's logs. This process is explained below.

To see dynamic binding in action:

  1. Use APC (or the Web Console) to tail the nats-client app's logs:

     $ apc app logs nats-client
    

    Assuming the nats-server is running (apc app start nats-server), you should now see PING ... messages streaming into the terminal. Note the IP address and port number of the nats-server instance (10.0.2.138:3963).

     $ apc app logs nats-client
     [system] Established job link to instance running at 10.0.2.138:3963
     [stderr] [PING] Latency: 1.061266ms
     [stderr] [PING] Latency: 1.089819ms
     [stderr] [PING] Latency: 1.269693ms
     ...etc..
    
  2. In another terminal window stop the nats-server.

     $ apc app stop nats-server
    
  3. Check the nats-client log tail again, which should now contain the messages "Got disconnected" and "Failed to find remote endpoint for job".

     [stderr] [PING] Latency: 1.106113ms
     [stderr] [PING] Latency: 1.017706ms
     [stderr] [EVENT] Got disconnected
     [system] Failed to find remote endpoint for job "05813cfb-91e1-494a-89d1-df5a8ba0bea2"
    
  4. Restart nats-server:

     apc app start nats-server
    
  5. Check the nats-client log tail again in the terminal window. Momentarily, the log output should indicate that the client has reconnected to the server at a different IP and port (10.0.2.138:3897).

     [system] Established job link to instance running at 10.0.2.138:3897
     [stderr] [PING] Latency: 23.522314883s
     [stderr] [EVENT] Reconnected to tcp://169.254.0.16:10000 [ID:fc50b7ef141d9ae438af38605fd19641]
     [stderr] [PING] Latency: 1.106113ms
     [stderr] [PING] Latency: 1.017706ms
    

Although the IP and port of the nats-server changed, this makes no difference to the client, because it simply uses the value of the NATS_URI environment variable to connect. The value of this URI does not change (tcp://169.254.0.16:10000); instead, Apcera re-maps the URI to the updated internal IP address and port.