[Fargate] [request]: Increase maximum ephemeral storage for tasks beyond 200 GiB
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Tell us about your request
In 2021, the maximum ephemeral storage for tasks was increased to 200 GiB (ref: #384). I would like to further increase that limit. I don't currently have a concrete target in mind and would defer to the AWS team on what (if any) sort of increase is feasible.
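For concreteness, the current ceiling surfaces as the upper bound of the `ephemeralStorage` parameter on the task definition. A minimal boto3 sketch of requesting today's maximum (the family name and container details are illustrative, not from my actual service):

```python
import boto3

ecs = boto3.client("ecs")

# 200 GiB is the largest value ECS will accept here today; the ask in
# this issue is to raise that ceiling. (The valid range is currently 21-200.)
ecs.register_task_definition(
    family="data-worker",  # hypothetical
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="8192",
    containerDefinitions=[
        {"name": "worker", "image": "public.ecr.aws/docker/library/busybox:latest"}
    ],
    ephemeralStorage={"sizeInGiB": 200},
)
```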
Which service(s) is this request for?
Fargate
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
I run a Fargate service where each task pulls data from S3 upon starting up. The data must live in the task's ephemeral storage for performance reasons; operating on it across a network boundary is not an option. I love being able to use Fargate instead of managing my own fleet of EC2 instances and would like to continue down this path for as long as possible. However, the amount of data that each task pulls down will continue to grow over time and will at some point eclipse 200 GiB.

There are some mitigations I can implement to forestall that inevitability, and, in fairness, I will likely have to migrate off of Fargate at some point no matter how high the ephemeral storage limit is raised. My immediate goal for this request is to get a sense of whether 200 GiB is a hard cap for the foreseeable future (in which case I should start on my migration away from Fargate sooner) or whether there is some wiggle room, which would relieve some of the urgency to explore alternatives.
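For illustration, the startup path of each task looks roughly like the sketch below. The bucket, prefix, and `/data` mount point are hypothetical stand-ins; the free-space check is where a dataset larger than the 200 GiB ceiling would start failing:

```python
import os
import shutil
import boto3

BUCKET = "example-dataset-bucket"  # hypothetical
PREFIX = "snapshots/latest/"       # hypothetical
DEST = "/data"                     # directory on the task's ephemeral volume

s3 = boto3.client("s3")
os.makedirs(DEST, exist_ok=True)

# Enumerate the dataset and total its size before downloading anything.
objects, total_bytes = [], 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if not obj["Key"].endswith("/"):  # skip directory-marker keys
            objects.append(obj["Key"])
            total_bytes += obj["Size"]

# Fail fast if the dataset no longer fits; with a 200 GiB ceiling on
# ephemeral storage, this is the check that eventually starts tripping.
free = shutil.disk_usage(DEST).free
if total_bytes > free:
    raise RuntimeError(
        f"dataset needs {total_bytes / 2**30:.0f} GiB "
        f"but only {free / 2**30:.0f} GiB is free"
    )

for key in objects:
    dest = os.path.join(DEST, os.path.relpath(key, PREFIX))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    s3.download_file(BUCKET, key, dest)
```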
Are you currently working around this issue?
No.
Additional context
None.
Attachments
None.
Would be extremely beneficial.
EFS is out of the question since it's persistent and expensive.
EBS, is a "workaround" IMHO as it requires "root" privileges, doesn't scale well when you need lots of tasks (quota), adds more provisioning and de-provisioning time, additional ECS Infrastucture role and most importantly for me, it's not supported with AWS Batch service.
Second this: it would be extremely helpful. Totally agree with tamir-deep: using local storage has a number of advantages that I don't want to give up, but data sets keep getting bigger.
It's 2025 now, and I would propose 1 TB as a good upper limit.