
Amazon ECS supports a native integration with Amazon EBS volumes for data-intensive workloads



Today we’re announcing that Amazon Elastic Container Service (Amazon ECS) supports an integration with Amazon Elastic Block Store (Amazon EBS), making it easier to run a wider range of data processing workloads. You can provision Amazon EBS storage for your ECS tasks running on AWS Fargate and Amazon Elastic Compute Cloud (Amazon EC2) without needing to manage storage or compute.

Many organizations choose to deploy their applications as containerized packages, and with the introduction of the Amazon ECS integration with Amazon EBS, organizations can now run more types of workloads than before.

You can run data workloads requiring storage that supports high transaction volumes and throughput, such as extract, transform, and load (ETL) jobs for big data, which need to fetch existing data, perform processing, and store this processed data for downstream use. Because the storage lifecycle is fully managed by Amazon ECS, you don’t need to build any additional scaffolding to manage infrastructure updates, and as a result, your data processing workloads are now more resilient while simultaneously requiring less effort to manage.

Now you can choose from a variety of storage options for your containerized applications running on Amazon ECS:

  • Your Fargate tasks get 20 GiB of ephemeral storage by default. For applications that need additional storage space to download large container images or for scratch work, you can configure up to 200 GiB of ephemeral storage for your Fargate tasks.
  • For applications that span many tasks that need concurrent access to a shared dataset, you can configure Amazon ECS to mount the Amazon Elastic File System (Amazon EFS) file system to your ECS tasks running on both EC2 and Fargate. Common examples of such workloads include web applications such as content management systems, internal DevOps tools, and machine learning (ML) frameworks. Amazon EFS is designed to be available across a Region and can be concurrently attached to many tasks.
  • For applications that need high-performance, low-cost storage that doesn’t need to be shared across tasks, you can configure Amazon ECS to provision and attach Amazon EBS storage to your tasks running on both Amazon EC2 and Fargate. Amazon EBS is designed to provide block storage with low latency and high performance within an Availability Zone.
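For the first option above, ephemeral storage beyond the 20 GiB default is requested directly in the task definition. A minimal sketch, assuming a Fargate task (the family, sizes, and image are illustrative placeholders, not values from this post):

```json
{
    "family": "scratch-heavy-job",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "1024",
    "memory": "4096",
    "ephemeralStorage": {
        "sizeInGiB": 100
    },
    "containerDefinitions": [
        {
            "name": "app",
            "image": "public.ecr.aws/docker/library/busybox:latest",
            "essential": true
        }
    ]
}
```

Registering a definition like this gives the task 100 GiB of scratch space instead of the default 20 GiB.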

To learn more, see Using data volumes in Amazon ECS tasks and persistent storage best practices in the AWS documentation.

Getting started with EBS volume integration for your ECS tasks
You can configure the volume mount point for your container in the task definition and pass Amazon EBS storage requirements for your Amazon ECS task at runtime. For most use cases, you can get started by simply providing the size of the volume needed for the task. Optionally, you can configure all EBS volume attributes and the file system you want the volume formatted with.

1. Create a task definition
Go to the Amazon ECS console, navigate to Task definitions, and choose Create new task definition.

In the Storage section, choose Configure at deployment to set EBS volume as a new configuration type. You can provision and attach one volume per task for Linux file systems.

When you choose Configure at task definition creation, you can configure existing storage options such as bind mounts, Docker volumes, EFS volumes, Amazon FSx for Windows File Server volumes, or Fargate ephemeral storage.

Now you can select a container in the task definition and the source EBS volume, and provide a mount path where the volume will be mounted in the task.

You can also use the $ aws ecs register-task-definition --cli-input-json file://example.json command line to register a task definition that adds an EBS volume. The following snippet is a sample; task definitions are saved in JSON format.

{
    "family": "nginx",
    ...
    "containerDefinitions": [
        {
            ...
            "mountPoints": [
                {
                    "containerPath": "/foo",
                    "sourceVolume": "new-ebs-volume"
                }
            ],
            "name": "nginx",
            "image": "nginx"
        }
    ],
    "volumes": [
       {
           "name": "new-ebs-volume",
           "configuredAtRuntime": true
       }
    ]
}

2. Deploy and run your task with an EBS volume
Go to your ECS cluster and choose Run new task. Note that you can select the compute options, the launch type, and your task definition.

Note: While this example goes through deploying a standalone task with an attached EBS volume, you can also configure a new or existing ECS service to use EBS volumes with the desired configuration.

You will see a new Volume section where you can configure the additional storage. The volume name, type, and mount points are the ones that you defined in your task definition. Choose your EBS volume type, size (GiB), IOPS, and the desired throughput.

You cannot attach an existing EBS volume to an ECS task. But if you want to create a volume from an existing snapshot, you have the option to choose your snapshot ID. If you want to create a new volume, then you can leave this field empty. You can choose the file system type: ext3, ext4, or xfs file systems on Linux.

By default, when a task is terminated, Amazon ECS deletes the attached volume. If you need the data in the EBS volume to be retained after the task exits, uncheck Delete on termination. You also need to create an AWS Identity and Access Management (IAM) role for volume management that contains the relevant permissions to allow Amazon ECS to make API calls on your behalf. For more information on this policy, see infrastructure role in the AWS documentation.
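The same runtime volume settings can be passed outside the console, as input to aws ecs run-task. The following is a sketch of that input under our reading of the RunTask API’s volumeConfigurations parameter; the account ID, role name, and sizes are placeholders:

```json
{
    "volumeConfigurations": [
        {
            "name": "new-ebs-volume",
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 100,
                "iops": 3000,
                "throughput": 125,
                "filesystemType": "ext4",
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
                "terminationPolicy": {
                    "deleteOnTermination": false
                }
            }
        }
    ]
}
```

Here deleteOnTermination is set to false so the volume is retained after the task exits, and a snapshotId could be supplied instead of creating an empty volume.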

You can also configure encryption by default for your EBS volumes using either Amazon managed keys or customer managed keys. To learn more about the options, see Amazon EBS encryption in the AWS documentation.

After configuring all task settings, choose Create to start your task.

3. See the details of the attached EBS volume
Once your task has started, you can see the volume information on the task details page. Choose a task and select the Volumes tab to find your created EBS volume details.

Your team can organize the development and operations of EBS volumes more efficiently. For example, application developers can configure the path where your application expects storage to be available in the task definition, and DevOps engineers can configure the actual EBS volume attributes at runtime when the application is deployed.

This allows DevOps engineers to deploy the same task definition to different environments with differing EBS volume configurations, for example, gp3 volumes in the development environments and io2 volumes in production.
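For instance, with the task definition unchanged, a production deployment could pass a runtime volume configuration like the following while development passes a cheaper gp3 equivalent (values, role name, and account ID are illustrative, and field names reflect our reading of the ECS RunTask API):

```json
{
    "volumeConfigurations": [
        {
            "name": "new-ebs-volume",
            "managedEBSVolume": {
                "volumeType": "io2",
                "sizeInGiB": 200,
                "iops": 16000,
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole"
            }
        }
    ]
}
```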

Now available
Amazon ECS integration with Amazon EBS is available in 9 AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm). You only pay for what you use, including EBS volumes and snapshots. To learn more, see the Amazon EBS pricing page and Amazon EBS volumes in ECS in the AWS documentation.

Give it a try now and send feedback to our public roadmap, AWS re:Post for Amazon ECS, or through your usual AWS Support contacts.

Channy

P.S. Special thanks to Maish Saidel-Keesing, a senior enterprise developer advocate at AWS, for his contribution in writing this blog post.

A correction was made on January 12, 2024: An earlier version of this post misstated some details. We changed 1) “both ext3 or ext4” to “ext3, ext4, or xfs”, 2) “check Delete on termination” to “uncheck Delete on termination”, 3) “configure encryption” to “configure encryption by default”, and 4) “task definition details page” to “task details page”.


