File Share Services - Amazon Elastic File System (AWS EFS)

Amazon announced the development of the Amazon Elastic File System (AWS EFS) in 2015. EFS was designed to provide multiple EC2 instances with shared, low-latency access to a fully managed file system. On June 28, 2016, Amazon announced that EFS was available for production use in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) Regions.

Apcera's NFS Service Gateway can be used to access AWS EFS storage volumes within containers. You can use EFS to provide persistent storage to your containers running on AWS-hosted clouds in regions where EFS is available.

Gathering information

Before you begin, you will need to know:

  • The name of the AWS Region where your Apcera Platform is running.
  • The name/ID of the AWS VPC where your Apcera Platform is running.
  • The name/ID of the AWS security group for your Apcera Platform.
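
If you have the AWS CLI installed and configured for the account, the following commands are one way to look up these values (the VPC ID shown is a placeholder; the AWS console works just as well):

# Region of the active AWS CLI profile
aws configure get region

# List VPCs in the Region; note the VpcId of the VPC hosting your Apcera Platform
aws ec2 describe-vpcs

# List security groups in that VPC (replace vpc-xxxxxxxx with your VPC ID)
aws ec2 describe-security-groups --filters Name=vpc-id,Values=vpc-xxxxxxxx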

Setting up an EFS volume

  1. Log into your AWS console.
  2. In the upper right corner of the screen, select the AWS Region where your Apcera Platform is running.
  3. Select Elastic File System.
  4. Click Create File System.
  5. Configure the file system access:
    • Select the name of the VPC.
    • The availability zone and subnet should be selected for you automatically.
    • If your VPC has more than one subnet (unusual), select the subnet containing the Instance Managers that will connect to the EFS volume.
    • Leave IP address set to Automatic.
    • Set the security group to the same security group as your Apcera Platform.
    • Click Next Step
  6. Configure optional settings:
    • Set the name of the EFS volume.
    • Choose the performance mode.
    • Click Next Step
  7. Review and create:
    • If everything looks OK, click Create File System.
  8. You should see a "Success!" message and a new EFS volume with "Life Cycle State" = "Creating".
    • Write down the IP address of the EFS volume.
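
If you prefer to script this step, the same file system and mount target can be created with the AWS CLI (the creation token, IDs, and performance mode below are placeholders; substitute the values you gathered earlier):

# Create the file system
aws efs create-file-system --creation-token apcera-efs --performance-mode generalPurpose

# Create a mount target in the subnet used by your Instance Managers
aws efs create-mount-target --file-system-id fs-49d4d570 \
    --subnet-id subnet-xxxxxxxx \
    --security-groups sg-xxxxxxxx

# Look up the mount target's IP address and life cycle state
aws efs describe-mount-targets --file-system-id fs-49d4d570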

Create an NFS Provider for the EFS volume

We're going to create a single provider for the EFS volume. Each time you have a container or set of containers that need a persistent file system, just create a new service from the same provider. Each new service will carve out a new namespace on the EFS volume, keeping the files associated with that service separate from the files in all other services that use the same provider.

According to the EFS FAQ: "When you create a file system, you create endpoints in your VPC called 'mount targets.' Each mount target provides an IP address and a DNS name, and you use this IP address or DNS name in your mount command. Only resources that can access a mount target can access your file system." If your cluster is set up to use AWS DNS, use the DNS name of the EFS volume. If you're not sure, try the DNS name first; if it doesn't work, you can either update your cluster.conf to use the AWS DNS service and redeploy, or use the EFS IP address. AWS states that the IP address may change, even though it is a private address within your VPC, so the safest approach is to use the DNS name.
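
One way to check which form will work is to test name resolution and an NFS v4.1 mount from an instance inside the VPC (the DNS name below matches the example that follows; the mount test assumes the standard NFS client tools are installed):

# Does the EFS DNS name resolve from inside the VPC?
nslookup fs-49d4d570.efs.us-east-1.amazonaws.com

# Try a temporary NFS v4.1 mount, then clean up
sudo mount -t nfs4 -o nfsvers=4.1 fs-49d4d570.efs.us-east-1.amazonaws.com:/ /mnt
df -k /mnt
sudo umount /mnt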

To create the provider, you need to construct a URL describing the volume. In this case, we'll use the DNS name of the EFS volume as the hostname and / as the exported volume name. All EFS volumes use the NFS v4.1 protocol. If the DNS name of the EFS volume is fs-49d4d570.efs.us-east-1.amazonaws.com and the IP address is 10.0.0.112, we'd register the provider with:

apc provider register awsefs --type nfs \
    --url "nfs://fs-49d4d570.efs.us-east-1.amazonaws.com/" \
    --description 'Amazon EFS' \
    --batch \
    -- --version 4.1

If that doesn't work because your cluster isn't using AWS DNS and you want to use the EFS IP address instead:

apc provider register awsefs --type nfs \
    --url "nfs://10.0.0.112/" \
    --description 'Amazon EFS' \
    --batch \
    -- --version 4.1

Create a service from the provider:

apc service create efs-service-1 \
    --provider awsefs \
    --description 'Amazon EFS Service' \
    --batch

Create a capsule, bind the service to the capsule, and connect to the capsule:

apc capsule create efs-capsule1 --image linux -ae --batch
apc service bind efs-service-1 --job efs-capsule1 --batch -- --mountpath /an/unlimited/supply
apc capsule connect efs-capsule1

Once connected, type df -k to see the mounted file system.
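
A quick way to confirm that the mount is writable is to create a file under the mount path (the path matches the --mountpath used in the bind command above):

# The NFS mount should appear in the df output
df -k | grep /an/unlimited/supply

# Write and read back a test file on the EFS-backed mount
echo "hello from efs-capsule1" > /an/unlimited/supply/test.txt
cat /an/unlimited/supply/test.txt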

You can bind this service to any container that needs a shared, persistent file system. Each time you need a new shared, persistent file system for a container or group of containers, just create a new service from the same provider and bind the service to your job or jobs.
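
For example, a second capsule bound to the same service sees the same files (the capsule name below is illustrative; the commands mirror the ones above):

# Create another capsule and bind the existing EFS-backed service to it
apc capsule create efs-capsule2 --image linux -ae --batch
apc service bind efs-service-1 --job efs-capsule2 --batch -- --mountpath /an/unlimited/supply
apc capsule connect efs-capsule2

Files written under /an/unlimited/supply from either capsule are visible to both, because both bindings use the same service and therefore the same namespace on the EFS volume.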