Amazon Web Services has announced the general availability of Amazon Elastic File System (Amazon EFS), a fully managed service that makes it easy to set up and scale file storage in the AWS Cloud. The company says that, using the AWS Management Console, customers can create file systems that are accessible to multiple Amazon Elastic Compute Cloud (Amazon EC2) instances via the Network File System (NFS) protocol.
Amazon states that enterprises today are moving their workloads to the cloud, and many of these workloads depend on Network Attached Storage (NAS). Operating shared file systems is costly and time consuming: file growth is unpredictable, procurement times are long, and monitoring and patch management are administrative burdens. With Amazon EFS, enterprises can scale automatically without needing to provision storage or throughput, enabling file systems to grow seamlessly to petabyte scale while supporting thousands of concurrent client connections with consistent performance.
How will EFS help?
According to the company, EFS has been designed to support a broad range of file workloads – from big data analytics, media processing, and genomics analysis, which are massively parallelized and require high levels of throughput, to latency-sensitive use cases such as content management, home directory storage, and web serving.
Amazon states that, with EFS, customers can create and use shared file systems that are simple, scalable, and reliable. EFS is easy to set up and use, and doesn’t require customers to provision or manage file system software or storage hardware. When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system semantics, allowing customers to integrate Amazon EFS with their existing applications and tools.
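As a rough sketch of what that standard file system interface looks like in practice, mounting an EFS file system from an EC2 instance is an ordinary NFSv4 mount. The file system ID, Region, and mount point below are placeholders, not values from the announcement:

```shell
# Install an NFS client (Amazon Linux shown; the package name varies by distro).
sudo yum install -y nfs-utils

# Create a mount point and mount the file system over NFSv4.1.
# fs-12345678 and the Region in the DNS name are placeholders for your own file system.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs

# Verify the mount; the file system now behaves like any local directory.
df -h /mnt/efs
```

Because the mount presents POSIX file system semantics, existing applications and tools can read and write to `/mnt/efs` without modification.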
Amazon claims that EFS is designed to provide the throughput, Input/Output Operations per Second (IOPS), and low latency that file workloads require. Every file system can burst to at least 100 MB per second, and file systems larger than 1 TB can burst to higher throughput as capacity grows. For availability and durability, Amazon EFS redundantly stores each file system object across multiple Availability Zones. There is no minimum fee or setup cost, and customers pay only for the storage they use.
Peter DeSantis, Vice President, Compute Services, AWS says:
“As customers continue to move more and more of their IT infrastructure to AWS, they’ve asked for a shared file storage service with the elasticity, simplicity, scalability, and on-demand pricing they enjoy with our existing object (Amazon S3), block (Amazon EBS), and archive (Amazon Glacier) storage services.
“Initially, our customers most passionately asking for a file system were trying to solve for throughput-heavy use cases like data analytics applications, large-scale processing workloads, and many forms of content and web serving. Customers were excited about Amazon EFS’s performance for those workloads, and pretty soon they were asking if we could expand Amazon EFS to work excellently for more latency-sensitive and metadata-heavy workloads like highly dynamic web applications,” DeSantis adds.
Customers can currently create EFS file systems using the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. Amazon EFS is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions, and the company plans to expand to additional Regions in the coming months.
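A minimal sketch of the CLI path, assuming configured AWS credentials and a Region where EFS is available; the creation token, subnet, and security group IDs are illustrative placeholders:

```shell
# Create a new EFS file system; the creation token makes the request idempotent.
aws efs create-file-system \
    --creation-token my-efs-demo \
    --region us-east-1

# List file systems to confirm creation and note the FileSystemId.
aws efs describe-file-systems --region us-east-1

# Create a mount target in a subnet so EC2 instances in that VPC can reach it.
# fs-12345678, subnet-abcdef01, and sg-abcdef01 are placeholders.
aws efs create-mount-target \
    --file-system-id fs-12345678 \
    --subnet-id subnet-abcdef01 \
    --security-groups sg-abcdef01 \
    --region us-east-1
```

Once a mount target exists, instances in the associated subnet can mount the file system over NFS using its DNS name.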