Amazon S3 Services
Using Amazon S3 services
Amazon S3 is a widely adopted cloud object store. You can integrate with Amazon S3 using the S3 service gateway to support workloads that use S3.
For each Apcera S3 service you define, the system creates a new user with a unique ID under the identified AWS account. Apcera recommends that you create a dedicated IAM account for this purpose and that you do not use the root user's access key.
To integrate with Amazon S3 using the S3 service gateway, create an IAM user for the S3 provider with the following permissions:
NOTE: It is strongly recommended that you do not use the root user's access key or an IAM user with full permissions on all AWS resources.
Create an AWS Identity & Access Management (IAM) user
- Log into the AWS console
- Click Identity & Access Management
- Click Users
- Click Create New Users
- Enter the user names to create (NOTE: ensure that the Generate an access key for each user check box is selected)
- Click Create
- Click Show User Security Credentials and then record the access key and secret access key
- Click Users again
- Select your newly created user
- Select the Permissions tab
- Click Attach Policy
- Select the check-box next to:
- Click Attach Policy
Registering an Amazon S3 provider
Now that you have created an IAM user, run the following APC command to register an S3 provider with Apcera:
apc provider register <provider name> --type s3 --url s3://<AWS_ACCESS_KEY>:<AWS_SECRET_KEY>@s3.amazonaws.com [--description]
apc provider register s3_provider --type s3 --url s3://ABCDEOFGOHI7EXAMPLE:wJalrXUtnFEMI%2FK7MDENG%2FbPxRfiCYEXAMPLEKEY@s3.amazonaws.com --description 'Amazon S3 provider'
NOTE: The AWS credentials must be URL-safe. If a key contains characters that are not URL-safe, such as /, URL-encode the key before embedding it in the provider URL.
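The encoding step described in the note can be sketched from the shell using Python's standard library. The key below is the illustrative example value from this page, not a real credential:

```shell
# Hypothetical secret key containing '/' characters (example value only).
SECRET='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

# URL-encode the key so it can be embedded in the provider URL;
# safe='' forces '/' to be encoded as %2F as well.
ENCODED=$(python3 -c "import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=''))" "$SECRET")

echo "$ENCODED"   # wJalrXUtnFEMI%2FK7MDENG%2FbPxRfiCYEXAMPLEKEY
```

The encoded value is what goes between the `:` and `@` in the `--url` argument.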
Creating Amazon S3 services
The following example creates a new S3 service named mybucket and binds it to a job named myCapsule:
apc service create mybucket --provider s3_provider --job myCapsule
The myCapsule job will have an environment variable named $S3_URI set on its container with the following format:
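As a sketch, assuming $S3_URI follows the same s3://ACCESS_KEY:SECRET@host shape used when registering the provider, with a bucket path appended (both the example value and the bucket path are assumptions, not values from this page), the pieces can be pulled apart inside the container:

```shell
# Hypothetical $S3_URI value for illustration; the real value is set
# by the system on the job's container.
S3_URI='s3://ABCDEOFGOHI7EXAMPLE:wJalrXUtnFEMI%2FK7MDENG%2FbPxRfiCYEXAMPLEKEY@s3.amazonaws.com/mybucket'

# Extract the (URL-decoded) access key and the bucket name.
ACCESS_KEY=$(python3 -c "import sys, urllib.parse as up; print(up.unquote(up.urlparse(sys.argv[1]).username))" "$S3_URI")
BUCKET_NAME=$(python3 -c "import sys, urllib.parse as up; print(up.urlparse(sys.argv[1]).path.lstrip('/'))" "$S3_URI")

echo "$ACCESS_KEY $BUCKET_NAME"
```

This is useful when a tool wants the credentials and bucket as separate settings rather than a single URI.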
NOTE: When you delete the S3 service, the S3 bucket created for the service is deleted as well. If you want to retain the data stored in the bucket, move it elsewhere before deleting the service.
Example use of a binding
Here is an example of binding a capsule job to an S3 service:
$ apc capsule create s3-builder --image linux --allow-egress
$ apc service create s3_service --provider s3_provider
$ apc service bind s3_service --job s3-builder
$ apc capsule connect s3-builder
If you want to verify that S3 is bound to this capsule, check for the S3 environment variables that were set when you bound the job to the capsule:
$ env | grep S3
Install s3cmd in the capsule:
$ apt-get update && apt-get install s3cmd -y
Configure s3cmd and enter the access key and secret access key. The keys in the environment variables may be URL-encoded. This can be worked around in the capsule in the following way:
$ alias urldecode='python -c "import sys, urllib as ul; print ul.unquote(sys.argv[1])"'
$ urldecode $S3_URI
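The alias above assumes a Python 2 interpreter in the capsule. If your image ships Python 3 instead (an assumption about your image, not something this page specifies), a small shell function does the same decoding:

```shell
# urldecode for Python 3 images; takes the URL-encoded string as its argument.
urldecode() {
    python3 -c "import sys, urllib.parse as ul; print(ul.unquote(sys.argv[1]))" "$1"
}

# Example with the illustrative key from this page:
urldecode 'wJalrXUtnFEMI%2FK7MDENG%2FbPxRfiCYEXAMPLEKEY'   # prints the key with '/' restored
```

A function is used rather than an alias so it also works in non-interactive shells, where aliases are not expanded by default.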
$ s3cmd --configure
Warning: s3cmd may display an error during testing configuration. It is safe to ignore this error.
Now try putting some data in the bucket:
$ echo "Hi, I'm a file" > test.txt
$ s3cmd put test.txt s3://$BUCKET_NAME
At this point you should be able to click the bucket name in the AWS S3 dashboard and see the test.txt file listed in the contents of the bucket.