File Share Services - NFS and APCFS

The Apcera Platform provides the NFS Service Gateway for accessing external file servers that support the NFS protocol.

Using NFS services

Users with data on existing external file servers can set up an NFS service to allow jobs running on the Apcera Platform to mount those file systems and access files stored on their file servers. The service can even be set up as read-only, so that jobs can read but not alter or delete data stored on external file servers.

NFS user credentials are added once, when a provider is registered or a service is created, without sharing those credentials with everyone who needs to access the data. Which jobs can connect to an NFS service is controlled within the Apcera Platform via policy.

An NFS service provides:

  • A persistent file system with customizable mount points.
  • The ability to encrypt data at rest on an NFS volume.
  • The ability to mount an external NFS share as read-write or read-only.
  • Good performance for apps that do not have I/O-heavy requirements.

NFS Services and Providers

An NFS provider produces NFS services on a registered NFS instance and NFS export path. Provider-backed services are created on demand from the provider. Each NFS service is a unique folder location on the NFS export path where a job can store data. See Creating Namespaced NFS storage for Jobs.

An NFS service may instead be created directly ("provider-less") from a URL, to access data on an existing NFS instance. A service cannot specify both a provider and a URL during creation. See Access Existing Data on an NFS server.
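For example, the two creation styles look like this (a minimal sketch; the host, port, export path, and service/provider names are placeholders):

# Provider-backed (namespaced): register a provider, then create services from it.
apc provider register /providers::nfs-provider --type nfs \
--url nfs://data.example.com:2500/datafiles
apc service create /services::namespaced-files --provider /providers::nfs-provider

# Provider-less (shared): create the service directly from a URL.
apc service create /services::shared-files --type nfs \
-- --url nfs://data.example.com:2500/datafiles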

Optional NFS URL query parameters

  • The ro query parameter, if present in the NFS URL, causes the NFS volume to be mounted in read-only mode, for example:
    apc service create /services::fileserver-ro --type nfs \
    -- --url nfs://data.example.com:2500/datafiles?ro
    

    Any job that is bound to a read-only NFS service cannot make any changes to that volume.

  • The version query parameter specifies the version of the NFS protocol to use to connect to the NFS instance. The NFS service gateway supports protocol versions 3, 4 (the default), and 4.1. See Support for NFS Protocols for more information. For example, the following creates an NFS service that uses the version 3 protocol and is read-only (the URL is quoted so that the shell does not interpret the & character):
    apc service create /services::fileserver-ro --type nfs \
    -- --url "nfs://data.example.com:2500/datafiles?version=3&ro"
    

Access Existing Data on an NFS server

A shared NFS service connects directly to the mount point exported by the NFS server, without any NFS provider. All jobs bound to a shared NFS service mount the same share (optionally at different mount paths). Compare this to namespaced NFS services, each of which represents a unique NFS mount point.

Note: Once you create a shared service for an NFS server mount point, you cannot create a namespaced service for the same NFS server and mount point. The reverse is also true: if you have a namespaced service for an NFS server mount point, you cannot create a shared NFS service for the same NFS server and mount point.

You use the apc service create command with the --url extended parameter to create an NFS service that connects to an existing NFS instance. The --url parameter specifies the location and export path of the NFS instance. You can optionally specify the NFS protocol version (the default is NFSv4) or mount the NFS volume as read-only (see Optional NFS URL query parameters).

You will minimally need the following information to create a shared NFS service:

  • The hostname or IP address of the NFS server.
  • The export path of the NFS directory.
  • If the share requires authentication, the domain, username, and password for accessing the volume.

For example, the following creates an NFS service (/services::fileserver) for the NFS instance located at data.example.com on port 2500 at the export path /datafiles:

apc service create /services::fileserver --type nfs \
-- --url nfs://data.example.com:2500/datafiles

You can then bind the service to one or more jobs, optionally at different mount paths:

apc service bind /services::fileserver --job /apps::filebrowser
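Because the service is shared, you can bind it to additional jobs, each with its own mount path if needed. For example (the job name /apps::reportviewer and the mount path are placeholders):

apc service bind /services::fileserver --job /apps::reportviewer \
-- --mountpath /data/reports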

Creating Namespaced NFS storage for Jobs

In addition to creating NFS services for existing NFS instances, you can also use an NFS provider to create individual NFS services. Each service created on an NFS provider creates a new, unique subdirectory on the NFS instance's export path. Jobs bound to the same service share the same storage. Multiple services may use the same provider, each one isolated from the others.

To create namespaced NFS storage, you use the apc provider register command to create a new NFS provider, passing it the URL of the NFS instance, including host name/IP address, port number, and export path. For example, the following creates an NFS provider (/providers::nfs-provider) for the NFS instance and export path at nfs://data.example.com:2655/files:

apc provider register /providers::nfs-provider --type nfs \
--url nfs://data.example.com:2655/files

You then use the apc service create command to create a service from the NFS provider, specified with the --provider parameter, for example:

apc service create app-data \
--provider /providers::nfs-provider --description 'NFS namespaced service'

Lastly, you bind the service to a job:

apc service bind app-data --job /apps::my-app

By default, the mount path of the NFS volume available to the job instance (container) is /nfs/<export-path>, where <export-path> is the path specified when registering the provider. You can also specify a unique mount path when creating the service binding (see Binding NFS services and Specifying Mount Paths).

Binding NFS services and Specifying Mount Paths

You use the apc service bind command to bind an NFS service to a job:

apc service bind /services::apcfs-ha-shared --job my-job

By default, the mount path of the NFS volume within the job instance is /nfs/<export-path>, where <export-path> is the path specified when creating the NFS service or provider. For example, the default mount path is /nfs/apcera-default for a default deployment of APCFS. You can also specify a unique mount path using the --mountpath extended parameter when creating the service binding. For example, the following service binding sets the mount path to /files:

apc service bind /services::apcfs-ha-shared --job my-job -- --mountpath /files

NFS Mount Path Environment Variables

For each NFS service binding, a set of environment variables is set on the job instance that specifies the mount path (or file URI) of the corresponding NFS volume. The mount path is provided both as a path (e.g., /nfs/files) and as a file URI (file:///nfs/files). The same values are duplicated in several environment variables named with different degrees of specificity, based on the service type (NFS), service name, and binding name (if specified when creating the binding). If your job is bound to more than one NFS service, you must use one of the more specifically named environment variables to access the correct mount path, as shown in the sketch after the following list.

The following environment variables are set on a job for each NFS service binding.

  • NFS_PATH (e.g. /nfs/files). The following equivalent environment variables are also set, based on the service name or binding name (if any):
    • <SERVICE-NAME>_PATH
    • <BINDING-NAME>_PATH
    • NFS_<SERVICE-NAME>_PATH
    • NFS_<BINDING-NAME>_PATH
  • NFS_URI (e.g., file:///nfs/files). The following equivalent environment variables are also set, based on the service name or binding name (if any):
    • <SERVICE-NAME>_URI
    • <BINDING-NAME>_URI
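For example, a job bound to two NFS services cannot rely on NFS_PATH, because both bindings set that variable. A start script should read the service-specific variables instead. A minimal sketch, assuming two hypothetical services named app-logs and app-media (hyphens become underscores in the variable names):

#!/bin/sh
# NFS_PATH is ambiguous when a job has two NFS bindings;
# use the service-specific variables to pick the right volume.
cp /tmp/app.log "$APP_LOGS_PATH/app.log"
cp /tmp/banner.png "$APP_MEDIA_PATH/banner.png"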

Using Encrypted NFS services

For namespaced NFS services created by an NFS provider, you have the option of enabling EncFS encryption on the target NFS volume. This feature is often referred to as "data encryption at rest". (It is not available when connecting to an existing NFS server.)

Any job bound to an encrypted APCFS service has its data (file contents and names) transparently encrypted before being written to disk, and decrypted during read operations. Encryption prevents access to the data on the volume if, for example, the host virtual machine were decommissioned and its drive space reassigned to another app, or if a hard disk were removed from a cluster and installed in a new environment. And because data can only be decrypted within the running job instance (container) on the Instance Manager, it also protects against someone reading your data "on the wire" by sniffing network packets.

This feature does not protect against application-level security failures, such as a SQL injection attack.

Note: Encryption is not supported on existing shared NFS volumes, only on "isolated" storage volumes created by an APCFS provider.

During service creation, a secret key is generated to perform the encryption and stored securely in the platform's secret storage. When a job that's bound to an encrypted APCFS service is started, the key is retrieved from the secret storage and used to start EncFS.

To create an encrypted APCFS service, pass --encrypt true as an extended parameter to the apc service create command, for example:

apc service create encrypted-file-service \
--provider /apcera/providers::apcfs -- --encrypt true

You can also pass the --encrypt true parameter when registering a (non-shared) APCFS provider. Any APCFS service created from that provider inherits the encrypt setting. For example, the following creates an APCFS provider with the --encrypt true extended parameter:

apc provider register /providers::encrypted-nfs-provider \
--type nfs --url nfs://192.168.1.99:2050/mount/path -- --encrypt true

You can then create an encrypted APCFS service from that provider:

apc service create encrypted-file-service \
--provider /providers::encrypted-nfs-provider
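Once a job is bound to either encrypted service, you can observe the encryption at rest. The following sketch (paths, prompts, and output are illustrative) writes a plaintext file from inside the bound job and then lists the same directory directly on the NFS server; inside the container the file reads normally, while the server stores only scrambled EncFS names and contents:

# Inside the bound job instance:
-bash-4.3# echo "plain text" > $NFS_PATH/notes.txt
-bash-4.3# cat $NFS_PATH/notes.txt
plain text

# Directly on the NFS server (illustrative EncFS output):
$ ls /mount/path
WRhIattfnsCz3zbQkHZGvOq1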

Note: Data on an encrypted APCFS volume can only be decrypted by a job that is bound to the corresponding APCFS service; you cannot use another tool to export and decrypt the data for use in another environment.

Note: File-locking does not work on remote encrypted volumes. If your application requires file-locking in order to work, you cannot use encryption.

You can automate the enforcement of application data encryption using policy. See policy for data at rest encryption.

Support for NFS Protocols

The NFS servers deployed by the Apcera Platform support NFSv3, NFSv4, and NFSv4.1. The NFS service gateway can connect to any NFS server using NFSv3, NFSv4, or NFSv4.1.

To create an NFSv4 provider, append 'version=4' to the URL options, for example:

apc provider register nfs-ubuntu-1 --type nfs --url "nfs://$IP/data?version=4"

Or, add --version as an extended parameter, for example:

apc provider register nfs-ubuntu-1 --type nfs --url "nfs://$IP/data" -- --version 4

The version number can be "3", "4", or "4.1".
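For example, to pin a provider to NFSv4.1, use the same pattern as the examples above (with $IP again standing in for your server's address):

apc provider register nfs-ubuntu-1 --type nfs --url "nfs://$IP/data?version=4.1"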

If you do not provide an NFS version number the version will default to NFSv4 for Apcera Platform Release 2.4.0 and later. The version defaults to NFSv3 in earlier releases.

Examples

The following sections walk you through creating NFS/APCFS providers, services, and bindings using capsules.

Also see Redis with APCFS and Using Docker with APCFS for Persistence.

Creating and Using Namespaced NFS Services

In this example you create a new NFS provider, create services from it, and bind the services to capsules. You'll create two services from the same NFS provider, demonstrating how each service has its own namespaced storage.

If you are targeting an APCFS instance, the default connection URL is nfs://apcfs-ha.localdomain/apcfs-default. If you are targeting another NFS instance, you will need the server's URL and NFS export path.

  1. Create an NFS provider on the target NFS instance, for example:
    apc provider register /providers::apcfs-ha --type nfs \
    --url nfs://apcfs-ha.localdomain/apcfs-default
    
  2. Create an NFS service on the new provider:
    apc service create /services::apcfs-service \
    --provider /providers::apcfs-ha
    
  3. Create a capsule:
    apc capsule create capsule01 --image linux
    
  4. Bind the capsule to the new NFS service. Note the binding environment variables that will be available to the capsule at runtime.
    apc service bind /services::apcfs-service --job capsule01
    ╭──────────────────────────────────────╮
    │        Service Bind Settings         │
    ├───────────┬──────────────────────────┤
    │ App Name: │ capsule01                │
    │  Service: │ /services::apcfs-service │
    ╰───────────┴──────────────────────────╯
    Is this correct? [Y/n]:
    Update requires job restart
    Automatically restart and proceed? [Y/n]:
    Stopping job... done
    Binding service "apcfs-service" to "capsule01"...
    ╭─────────────────────────────────────────╮
    │      Binding Environment Variables      │
    ├─────────────────────────────────────────┤
    │ "APCFS_SERVICE_PATH"                    │
    │ "APCFS_SERVICE_URI"                     │
    │ "BIND_APCFS_SERVICE_CAPSULE01_URI"      │
    │ "NFS_APCFS_SERVICE_PATH"                │
    │ "NFS_BIND_APCFS_SERVICE_CAPSULE01_PATH" │
    │ "NFS_PATH"                              │
    │ "NFS_URI"                               │
    ╰─────────────────────────────────────────╯
    
  5. Connect to the capsule and run some file system commands:
    apc capsule connect capsule01
    -bash-4.3#
    
    • Run df -k and you'll see the exported APCFS path mounted as /nfs/apcfs-default, for example:
      -bash-4.3# df -k
      Filesystem                                                                1K-blocks  Used Available Use% Mounted on
      /var/lib/continuum/instances/52a28f8c/data/remotemounts/nfs/apcfs-default    132096  1024    119808   1% /nfs/apcfs-default
      
    • List the contents of the mounted volume, which is empty:
      -bash-4.3# ls -al /nfs/apcfs-default
      total 8
      drwxr-xr-x 2 runner runner 4096 Oct 14 00:12 .
      drwxr-xr-x 3 runner runner 4096 Oct 13 23:31 ..
      
    • Create a text file on the NFS volume:
      -bash-4.3# echo "Example text." > /nfs/apcfs-default/myfile.txt
      
    • List the APCFS environment variables that provide the path/URI to the mount point:
      -bash-4.3# env | grep 'apcfs'
      NFS_URI=file:///nfs/apcfs-default
      NFS_BIND_APCFS_SERVICE_CAPSULE01_PATH=/nfs/apcfs-default
      BIND_APCFS_SERVICE_CAPSULE01_URI=file:///nfs/apcfs-default
      APCFS_SERVICE_PATH=/nfs/apcfs-default
      APCFS_SERVICE_URI=file:///nfs/apcfs-default
      NFS_PATH=/nfs/apcfs-default
      NFS_APCFS_SERVICE_PATH=/nfs/apcfs-default
      
  6. On your local system, create another capsule:
    apc capsule create capsule02 --image linux
    
  7. Create a new service using the same NFS provider you used to create the first service:
    apc service create /services::new-apcfs-service \
    --provider /providers::apcfs-ha
    

    And bind it to capsule02:

    apc service bind /services::new-apcfs-service \
    --job capsule02 -- --mountpath /data/new-apcfs-service
    
  8. Connect to the second capsule:
    apc capsule connect capsule02
    

    If you run df -k you'll see the remote NFS directory mounted as /data/new-apcfs-service:

    -bash-4.3# df -k
    Filesystem                                                                     1K-blocks  Used Available Use% Mounted on
    /var/lib/continuum/instances/e42ea741/data/remotemounts/data/new-apcfs-service    132096  1024    119808   1% /data/new-apcfs-service
    

    If you list the contents of the mounted volume you'll see it's empty, which is expected since creating a new service results in a new NFS volume:

    -bash-4.3# ls -al /data/new-apcfs-service
    total 8
    drwxr-xr-x 2 runner runner 4096 Oct 16 21:15 .
    drwxr-xr-x 3 runner runner 4096 Oct 16 21:31 ..
    

Creating and Using Shared NFS Services

To connect to an existing NFS server, you create a service (without a provider) that points to the NFS server's location and export path. Every job that binds to a shared NFS service can access files at its mount path.

  1. Create a shared NFS service, in this case on an AWS Elastic File System instance:
    apc service create /services::aws-efs-shared --type nfs -- --url nfs://10.0.0.95/
    
  2. Create a capsule and bind it to the service:
    apc capsule create capsule01 --image linux
    
    apc service bind /services::aws-efs-shared \
    --job capsule01 -- --mountpath /data/aws-efs-shared
    
  3. Connect to capsule01 and create a new file in the mounted NFS directory:
    apc capsule connect capsule01
    -bash-4.3#
    

    Use the $NFS_PATH environment variable to change to the NFS mount directory:

    -bash-4.3# cd $NFS_PATH/
    -bash-4.3# pwd
    /data/aws-efs-shared
    -bash-4.3# touch hello-from-capsule01.txt
    
  4. On your local system, create another capsule and bind it to the same service:
    apc capsule create capsule02 --image linux
    
    apc service bind /services::aws-efs-shared \
    --job capsule02 -- --mountpath /data/aws-efs-shared
    
  5. Connect to capsule02 and list the contents of the mounted NFS directory, which should show the file you created from capsule01:
    apc capsule connect capsule02
    -bash-4.3#
    

    Use the $NFS_PATH environment variable to list the contents of the NFS mount directory:

    -bash-4.3# ls $NFS_PATH
    hello-from-capsule01.txt
    

APCFS example using Redmine

Redmine is an open source project management application. This example shows how to use an NFS store for Redmine's issue attachments. The example assumes you have deployed a Redmine job named redmine to your cluster.

Create an NFS service for storing the Redmine data

  1. Create the redmine-apcfs service.
    apc service create redmine-apcfs -p /apcera/providers::apcfs
    
  2. Bind the Redmine job to the APCFS service.
    apc service bind redmine-apcfs -j redmine
    

    To override the default and mount to a different path, use the --mountpath flag. For example:

    apc service bind redmine-apcfs -j redmine -- --mountpath /mount/path
    

Test Redmine persistence

To test the NFS implementation, you create a directory for Redmine's attachments. From the URI used to register the provider, we know that the NFS volume will be mounted at /nfs/data in the container (assuming you did not override the default mount path). You do this from an app console session for the redmine job, and then redeploy the app.

  1. Create an app console session.
    apc app console redmine
    
  2. Connect to the console session.
    apc console connect redmine
    
  3. Update the Redmine job.
    Create the directory for Redmine attachments:
    root@ip-10-0-1-12:/# mkdir /nfs/data/files
    root@ip-10-0-1-12:/# chmod 777 /nfs/data/files
    root@ip-10-0-1-12:/# exit
    
  4. Update Redmine's config file to save attachments to this new directory:
    production:
      attachments_storage_path: /nfs/data/files
    
  5. Redeploy the Redmine app then create an issue with an attachment to see it working.

Troubleshooting connections to an NFS server

If you have your own NFS server and you want to use it for storage, you can create Apcera providers, services, and bindings that connect to your NFS servers.

If you have problems connecting to the NFS server from Apcera, try these troubleshooting steps:

  • Does the bind fail with the error failed to mount: protocol not supported? This happens if you created an NFSv4 provider for a server that only supports NFSv3. Either upgrade the server so that it supports NFSv4 or delete the provider and create a new provider that uses NFSv3.
  • Do Docker apps fail with lock errors? Newer Docker apps require NFSv4, and will only work when your server supports NFSv4 and the provider you created was set up to use NFSv4.
  • Can you mount the export from the NFS server on another server? If you can't mount the export from somewhere else, you won't be able to mount it from Apcera; see the mount test sketch at the end of this section.
  • Try logging into the NFS server and tail the NFS log. Do you see connection attempts from Apcera?
    • If the logs show nothing then Apcera isn't even contacting the server. Check firewall settings.
    • If the logs show an error message then Apcera is connecting but something else is wrong. Google the error message and try to fix the problem.
  • If the server is in a different subnet than the Apcera cluster, the connection request coming from Apcera has been NAT-ed and will come from a high port. By default, most NFS servers reject requests coming from high ports, so make sure that your server's export settings enable connections from high ports. On Linux servers this means adding the unfortunately named "insecure" flag to the NFS server's export line, e.g.:
# /etc/exports
/opt/nfsdata *(rw,insecure)

The * is the list of IP addresses and/or subnets that are allowed to mount the /opt/nfsdata directory. We recommend that you do not use *. If the export is just for your Apcera cluster, put the cluster's subnet in that spot, e.g.:

# /etc/exports
/opt/nfsdata 10.0.0.0/24(rw,insecure)

This is just an example. Use the subnet configured for your cluster.
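
The following is a sketch of the mount test mentioned above, using standard Linux NFS client tools on any host outside the cluster (replace nfs-server.example.com and /opt/nfsdata with your server and export path):

# List the exports the server offers:
showmount -e nfs-server.example.com

# Try to mount the export and read it:
sudo mount -t nfs nfs-server.example.com:/opt/nfsdata /mnt
ls /mnt
sudo umount /mnt

If this test fails outside Apcera, fix the server's export or firewall settings before revisiting the Apcera provider or service.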