Amazon S3 Services

Apcera provides the s3 service type and gateway for integrating with the Amazon S3 service through a registered service provider.

Using Amazon S3 services

Amazon S3 is a widely adopted object store in the cloud. You can integrate with Amazon S3 using the S3 service gateway to support workloads that use S3.

For each Apcera S3 service you define, the system creates a new user with a unique ID under the identified AWS account. Apcera recommends that you create a dedicated IAM account for this purpose and that you do not use the root user's access key.

AWS credentials

To integrate with Amazon S3 using the S3 service gateway, create an IAM user for the S3 provider with the following permissions:

  • IAMFullAccess
  • AmazonS3FullAccess

NOTE: It is strongly recommended that you do not use the root user's access key or an IAM user with full permissions on all AWS resources.

Create an AWS Identity & Access Management (IAM) user

  1. Log into the AWS console
  2. Click Identity & Access Management
  3. Click Users
  4. Click Create New Users
  5. Enter the user name(s) to create (NOTE: Ensure that the Generate an access key for each user check box is selected)
  6. Click Create
  7. Click Show User Security Credentials and then record the access key and secret access key
  8. Click Users again
  9. Select your newly created user
  10. Select the Permissions tab
  11. Click Attach Policy
  12. Select the check-box next to:
    • AmazonS3FullAccess
    • IAMFullAccess
  13. Click Attach Policy
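The console steps above can also be performed from a terminal. This is a sketch, assuming the AWS CLI is installed and configured with sufficient IAM permissions; the user name apcera-s3-provider is a placeholder, not a required value:

```shell
# Create the IAM user (the user name is a placeholder -- choose your own)
aws iam create-user --user-name apcera-s3-provider

# Generate an access key; record AccessKeyId and SecretAccessKey from the output
aws iam create-access-key --user-name apcera-s3-provider

# Attach the two managed policies listed above
aws iam attach-user-policy --user-name apcera-s3-provider \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name apcera-s3-provider \
    --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
```

As with the console flow, the secret access key is only shown once, when the key is created, so record it immediately.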

Registering an Amazon S3 provider

Now that you have created an IAM user, run the following APC command to register an S3 provider with Apcera.

Syntax:

apc provider register <provider name> --type s3 --url s3://<AWS_ACCESS_KEY>:<AWS_SECRET_KEY>@s3.amazonaws.com [--description <description>]

For example:

apc provider register s3_provider --type s3 --url s3://ABCDEOFGOHI7EXAMPLE:wJalrXUtnFEMI%2FK7MDENG%2FbPxRfiCYEXAMPLEKEY@s3.amazonaws.com --description 'Amazon S3 provider'

NOTE: The AWS credentials must be URL safe. If a key contains characters that are illegal in a URL, such as /, you must URL-encode the key before including it in the provider URL.
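For example, a secret key containing / can be URL-encoded with a one-liner. This is a sketch assuming Python 3 is available on your workstation; the key shown is AWS's documentation example, not a real credential:

```shell
# URL-encode the secret key so it can be embedded in the provider URL
# (safe='' ensures that / is escaped as %2F)
python3 -c "import urllib.parse; print(urllib.parse.quote('wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY', safe=''))"
# prints wJalrXUtnFEMI%2FK7MDENG%2FbPxRfiCYEXAMPLEKEY
```

The encoded value is what appears in the example register command above.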

Creating Amazon S3 services

The following example creates a new S3 service named mybucket and binds it to a job named myCapsule:

apc service create mybucket --provider s3_provider --job myCapsule

The myCapsule job will have an environment variable named $S3_URI set on its container with the following format: s3://<AWS_ACCESS_KEY>:<AWS_SECRET_KEY>@s3.amazonaws.com/<bucket name>
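Inside the container, the bucket name can be pulled out of $S3_URI with shell parameter expansion. A minimal sketch, assuming the URI follows the format above; the sample value below is illustrative, with placeholder credentials:

```shell
# Sample value in the documented format (credentials are placeholders)
S3_URI='s3://ABCDEOFGOHI7EXAMPLE:SECRET@s3.amazonaws.com/mybucket'

# Strip everything up to and including the last '/' to get the bucket name
BUCKET_NAME="${S3_URI##*/}"
echo "$BUCKET_NAME"
# prints mybucket
```

A variable like $BUCKET_NAME is convenient with tools such as s3cmd, which expect an s3://<bucket name> path rather than the full credentialed URI.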

NOTE: When you delete the S3 service, the S3 bucket created for the service is deleted as well. Therefore, if you wish to retain the data stored in the bucket, move it elsewhere before deleting the service.

Example use of a binding

Here is an example of binding a capsule job to an S3 service:

$ apc capsule create s3-builder --image linux --allow-egress
$ apc service create s3_service --provider s3_provider
$ apc service bind s3_service --job s3-builder
$ apc capsule connect s3-builder

If you want to verify that S3 is bound to this capsule, check for the S3 environment variables that were set when you bound the job to the capsule:

$ env | grep S3

Install s3cmd in the capsule:

$ apt-get update && apt-get install s3cmd -y

Configure s3cmd and enter the access key and secret access key. The keys in the $S3_URI environment variable may be URL-encoded; you can decode them in the capsule in the following way:

$ alias urldecode='python3 -c "import sys, urllib.parse as ul; \
    print(ul.unquote(sys.argv[1]))"'

$ urldecode $S3_URI

Configure s3cmd:

$ s3cmd --configure

Warning: s3cmd may display an error while testing the configuration. It is safe to ignore this error.

Now try putting some data in the bucket:

$ echo "Hi, I'm a file" > test.txt
$ s3cmd put test.txt s3://$BUCKET_NAME

At this point you should be able to click on the bucket name in the AWS S3 dashboard and see the test.txt file listed in the contents of the bucket.