The fifth annual AWS Storage Day took place on August 9, 2023, and you can watch the replay here. The first AWS Storage Day was hosted in 2019, and this event has grown into an innovation day that we look forward to delivering to you every year. In last year's Storage Day post, I wrote about the constant innovations in AWS Storage aimed at helping you put your data to work while keeping it secure and protected. This year, Storage Day is focused on storage for AI/ML, data protection and resiliency, and the benefits of moving to the cloud.
AWS Storage Day Key Themes
When it comes to storage for AI/ML, data volumes are increasing at an unprecedented rate, exploding from terabytes to petabytes and even to exabytes. With a modern data architecture on AWS, you can rapidly build scalable data lakes, use a broad and deep collection of purpose-built data services, scale your systems at a low cost without compromising performance, share data across organizational boundaries, and manage compliance, security, and governance, allowing you to make decisions with speed and agility at scale.
To train machine learning models and build generative AI applications, you must have the right data strategy in place. So, I'm happy to see that, among the list of sessions to look forward to at the live event, the Optimize generative AI and ML with AWS Infrastructure session will discuss how you can transform your data into meaningful insights.
Whether you're just getting started with the cloud, planning to migrate applications to AWS, or already building applications on AWS, we have resources to help you protect your data and meet your business continuity objectives. Our data protection and resiliency features and solutions can help you meet your business continuity goals and deliver disaster recovery during data loss events, across recovery point and recovery time objectives (RPO and RTO). With the unprecedented data growth happening in the world today, knowing where your data is stored, how it's secured, and who has access to it is a higher priority than ever. Be sure to join the Protect data in AWS amid a rapidly evolving cyber landscape session to learn more.
When moving data to the cloud, you need to understand where you're moving it for different use cases, the types of data you're moving, and the network resources available, among other considerations. There are many reasons to move to the cloud. Recently, Enterprise Strategy Group (ESG) validated that organizations reduced compute, networking, and storage costs by up to 66 percent by migrating on-premises workloads to AWS Cloud infrastructure. ESG confirmed that migrating on-premises workloads to AWS provides organizations with reduced costs, increased performance, improved operational efficiency, faster time to value, and improved business agility.
We have many sessions that discuss how to move to the cloud, based on your use case. I'm most looking forward to the Hybrid cloud storage and edge compute: AWS, where you need it session, which will discuss considerations for workloads that can't fully move to the cloud.
Tune in to learn from experts about new announcements, leadership insights, and educational content related to the broad portfolio of AWS Storage services and features that address all these themes and more. Today, we have announcements related to Amazon Simple Storage Service (Amazon S3), Amazon FSx for Lustre, Amazon FSx for Windows File Server, Amazon Elastic File System (Amazon EFS), Amazon FSx for OpenZFS, and more.
Let’s get into it.
15 Years of Amazon EBS
Not long ago, I was reading Jeff Barr's post titled 15 Years of AWS Blogging! In this post, Jeff mentioned a few posts he wrote for the earliest AWS services and features. Amazon Elastic Block Store (Amazon EBS) is on this list as a service that simplifies the use of Amazon EC2.
Well, it's been 15 years since the launch of Amazon EBS was announced, and today we celebrate 15 years of this service. If you were one of the original users who put Amazon EBS to good use and provided us with the very helpful feedback that helped us invent and simplify, iterate and improve, I'm sure you can't believe how time flies. Today, Amazon EBS handles more than 100 trillion I/O operations daily, and over 390 million EBS volumes are created every day.
If you're new to Amazon EBS, join us for a fireside chat with Matt Garman, Senior Vice President, Sales, Marketing, and Global Services at AWS, and learn the strategy and customer challenges behind the launch of the service in 2008. You'll also hear from long-term EBS customer, Stripe, about its growth with EBS since Stripe was launched 12 years ago.
Amazon EBS has continuously improved its scalability and performance to support more customer workloads as the direct storage attachment for Amazon EC2 instances. With the launch of Amazon EC2 M7i instances, powered by custom 4th Generation Intel Xeon Scalable processors, on August 2, you can attach up to 128 Amazon EBS volumes, an increase from 28 on a previous-generation M6i instance. The higher number of volume attachments means you can increase storage density per instance and improve resource utilization, reducing total compute cost.
You can host up to 127 containers per instance for larger database applications and scale them more affordably before needing to provision more instances, paying only for the resources you need. With a higher number of volume attachments, you can fully utilize the memory and vCPUs available on these powerful M7i instances as your database storage footprint grows. EBS is also increasing the number of multi-volume snapshots you can create, for up to 128 EBS volumes attached to an instance, enabling you to create crash-consistent backups of all volumes attached to an instance.
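To give a sense of what a multi-volume, crash-consistent backup looks like in practice, here is a minimal boto3 sketch using the EC2 CreateSnapshots API; the instance ID and tag values are hypothetical placeholders, not taken from this post.

```python
import boto3

ec2 = boto3.client("ec2")

# Take a point-in-time, crash-consistent snapshot of every EBS volume
# attached to a single instance (the instance ID below is a placeholder).
response = ec2.create_snapshots(
    InstanceSpecification={
        "InstanceId": "i-0123456789abcdef0",
        "ExcludeBootVolume": False,  # include the root volume as well
    },
    Description="Crash-consistent backup of all attached volumes",
    TagSpecifications=[
        {
            "ResourceType": "snapshot",
            "Tags": [{"Key": "backup-set", "Value": "storage-day-demo"}],
        }
    ],
)

# One snapshot is returned per attached volume.
for snapshot in response["Snapshots"]:
    print(snapshot["SnapshotId"], snapshot["VolumeId"], snapshot["State"])
```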
Join the 15 years of innovations with Amazon EBS session for a discussion about how the original vision for Amazon EBS has evolved to meet your growing demands for cloud infrastructure.
Mountpoint for Amazon S3
Now generally available, Mountpoint for Amazon S3 is a new open source file client that delivers high-throughput access, lowering compute costs for data lakes on Amazon S3. Mountpoint for Amazon S3 is a file client that translates local file system API calls to S3 object API calls. Using Mountpoint for Amazon S3, you can mount an Amazon S3 bucket as a local file system on your compute instance, to access your objects through a file interface with the elastic storage and throughput of Amazon S3. Mountpoint for Amazon S3 supports sequential and random read operations on existing files, and sequential write operations for creating new files.
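As a rough sketch of what that file interface looks like, the Python below reads and writes through a hypothetical mount point (a bucket assumed to be already mounted at ~/mnt/my-bucket with the Mountpoint client); the bucket name, directories, and file names are illustrative only.

```python
from pathlib import Path

# Hypothetical mount point: an S3 bucket already mounted locally with
# Mountpoint for Amazon S3 (bucket name and paths are placeholders).
mount_root = Path.home() / "mnt" / "my-bucket"

# Sequential read of an existing object through the file interface.
with open(mount_root / "datasets" / "train.csv", "rb") as f:
    first_line = f.readline()

# Random (seek-based) reads on existing files are also supported.
with open(mount_root / "datasets" / "train.csv", "rb") as f:
    f.seek(1024)
    chunk = f.read(4096)

# Sequential writes create new files, which become new S3 objects.
with open(mount_root / "results" / "summary.txt", "w") as f:
    f.write("processing complete\n")
```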
The Deep dive and demo of Mountpoint for Amazon S3 session demonstrates how to use the file client to access objects in Amazon S3 using file APIs, making it easier to store data at scale and maximize the value of your data with analytics and machine learning workloads. Read this blog post to learn more about Mountpoint for Amazon S3 and how to get started, including a demo.
Put Cold Storage to Work Faster with Amazon S3 Glacier Flexible Retrieval
Amazon S3 Glacier Flexible Retrieval improves data restore time by up to 85 percent, at no additional cost. Faster data restores automatically apply to the Standard retrieval tier when using Amazon S3 Batch Operations. These restores begin to return objects within minutes, so you can process restored data faster. Processing restored data in parallel with ongoing restores helps you accelerate data workflows and quickly respond to business needs. Now, whether you're transcoding media, restoring operational backups, training machine learning models, or analyzing historical data, you can speed up your data restores from archive.
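For a single object, a Standard-tier restore request looks roughly like the boto3 sketch below; the bucket, key, and retention period are hypothetical placeholders. For restores of millions of objects, the faster restore starts described above apply when you initiate the same Standard-tier restores through an S3 Batch Operations job.

```python
import boto3

s3 = boto3.client("s3")

# Initiate a Standard-tier restore of one archived object from
# S3 Glacier Flexible Retrieval (bucket, key, and days are placeholders).
s3.restore_object(
    Bucket="my-archive-bucket",
    Key="backups/2020/archive.tar",
    RestoreRequest={
        "Days": 7,  # how long the restored copy remains available
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# Poll the restore status; once complete, the Restore header reports
# the expiry date of the temporary restored copy.
head = s3.head_object(Bucket="my-archive-bucket", Key="backups/2020/archive.tar")
print(head.get("Restore"))
```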
Coupled with the S3 Glacier improvements to restore throughput by up to 10 times for millions of objects announced in 2022, S3 Glacier data restores of all sizes now benefit from both faster starts and shorter completion times.
Join the Maximize the value of cold data with Amazon S3 Glacier session to learn how Amazon S3 Glacier is helping organizations of all sizes and from all industries transform their data archiving to unlock business value, increase agility, and save on storage costs. Read this blog post to learn more about the Amazon S3 Glacier Flexible Retrieval performance improvements and follow step-by-step guidance on how to get started with faster standard retrievals from S3 Glacier Flexible Retrieval.
Supporting a Broad Spectrum of File Workloads
To serve a broad spectrum of use cases that rely on file systems, we offer a portfolio of file system services, each targeting a different set of needs. Amazon EFS is a serverless file system built to deliver an elastic experience for sharing data across compute resources. Amazon FSx makes it easier and more cost-effective for you to launch, run, and scale feature-rich, high-performance file systems in the cloud, enabling you to move to the cloud with no changes to your code, processes, or how you manage your data.
Power ML research and big data analytics with Amazon EFS
Amazon EFS offers serverless and fully scalable file storage, designed for high scalability in both storage capacity and throughput performance. Just last week, we announced enhanced support for faster read and write IOPS, making it easier to power more demanding workloads. We've improved the performance capabilities of Amazon EFS by adding support for up to 55,000 read IOPS and up to 25,000 write IOPS per file system. These performance improvements help you run more demanding workflows, such as machine learning (ML) research with KubeFlow, financial simulations with IBM Symphony, and big data processing with Domino Data Lab, Hadoop, and Spark.
Join the Build and run analytics and SaaS applications at scale session to hear how recent Amazon EFS performance improvements can help power more workloads.
File release on Amazon FSx for Lustre
File release for Amazon FSx for Lustre helps customers manage their high-performance Lustre file system and save costs by giving them the ability to move cold data to Amazon S3. File release extends FSx for Lustre's S3 integration by releasing file data that is synchronized with S3 from Lustre. You can release data from your FSx for Lustre file system by running or scheduling release data repository tasks, which let you specify criteria for which files to release. Use file release to tier colder data to Amazon S3 and let users and applications continue to write new data to FSx for Lustre. If needed, you can quickly and easily retrieve released file data and access files that have been released, because the file metadata for released files remains on your FSx for Lustre file system.
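A release data repository task can be created through the FSx API. The boto3 sketch below is illustrative only: the file system ID, directory path, and the 30-day last-access criterion are hypothetical, and it assumes the files under that path are already synchronized with the linked S3 data repository.

```python
import boto3

fsx = boto3.client("fsx")

# Release file data that is already synchronized with the linked S3 data
# repository and hasn't been accessed for 30 days (values are placeholders).
task = fsx.create_data_repository_task(
    FileSystemId="fs-0123456789abcdef0",
    Type="RELEASE_DATA_FROM_FILESYSTEM",
    Paths=["cold-data/"],          # limit the task to this directory
    Report={"Enabled": False},     # skip the completion report in this sketch
    ReleaseConfiguration={
        "DurationSinceLastAccess": {"Unit": "DAYS", "Value": 30}
    },
)

print(task["DataRepositoryTask"]["TaskId"], task["DataRepositoryTask"]["Lifecycle"])
```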
Join the Run compute-heavy workloads at cloud scale session to learn more about file release for Amazon FSx for Lustre.
Multi-AZ file systems on Amazon FSx for OpenZFS
You can now use a multi-AZ deployment option when creating file systems on Amazon FSx for OpenZFS, making it easier to deploy file storage that spans multiple AWS Availability Zones to provide multi-AZ resilience for business-critical workloads. With this launch, you can take advantage of the power, agility, and simplicity of Amazon FSx for OpenZFS for a broader set of workloads, including business-critical workloads like database, line-of-business, and web-serving applications that require highly available shared storage that spans multiple AZs.
The new multi-AZ file systems are designed to deliver high levels of performance to serve a broad variety of workloads, including performance-intensive workloads such as financial services analytics, media and entertainment workflows, semiconductor chip design, and game development and streaming, with up to 21 GB per second of throughput and over 1 million IOPS for frequently accessed cached data, and up to 10 GB per second and 350,000 IOPS for data accessed from persistent disk storage.
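Creating a multi-AZ file system comes down to choosing the new deployment type at creation time. The boto3 sketch below is a minimal illustration under assumed values: the subnet IDs, storage capacity, throughput, and retention settings are hypothetical placeholders.

```python
import boto3

fsx = boto3.client("fsx")

# Create an FSx for OpenZFS file system that spans two Availability Zones.
# Subnet IDs and sizing values below are placeholders.
fs = fsx.create_file_system(
    FileSystemType="OPENZFS",
    StorageCapacity=2048,  # GiB of SSD storage
    SubnetIds=["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],
    OpenZFSConfiguration={
        "DeploymentType": "MULTI_AZ_1",       # the new multi-AZ option
        "ThroughputCapacity": 1280,           # MB/s
        "PreferredSubnetId": "subnet-0aaa1111bbbb2222c",
        "AutomaticBackupRetentionDays": 7,
    },
)

print(fs["FileSystem"]["FileSystemId"], fs["FileSystem"]["Lifecycle"])
```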
Join the Migrate NAS to AWS to reduce TCO and gain agility session to learn more about multi-AZ support in Amazon FSx for OpenZFS.
New, Higher Throughput Capacity Levels on Amazon FSx for Windows File Server
Performance improvements for Amazon FSx for Windows File Server help you accelerate time-to-results for performance-intensive workloads such as SQL Server databases, media processing, cloud video editing, and virtual desktop infrastructure (VDI).
We're adding four new, higher throughput capacity levels to increase the maximum I/O available up to 12 GB per second, from the previous maximum of 2 GB per second. These throughput improvements come with correspondingly higher levels of disk IOPS, designed to deliver an increase of up to 350,000 IOPS.
In addition, with FSx for Windows File Server, you can now provision IOPS higher than the default 3 IOPS per GiB for your SSD file system. This lets you scale SSD IOPS independently from storage capacity, allowing you to optimize costs for performance-sensitive workloads.
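In boto3 terms, provisioned SSD IOPS would be expressed through a DiskIopsConfiguration on the Windows configuration, roughly as sketched below. This is a hedged illustration under assumed values: the directory ID, subnet, capacity, throughput, and IOPS figures are hypothetical placeholders and are not taken from this post.

```python
import boto3

fsx = boto3.client("fsx")

# Create an SSD-backed FSx for Windows File Server file system and provision
# SSD IOPS above the 3 IOPS/GiB included with storage capacity.
# All IDs and sizing values below are placeholders.
fs = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageType="SSD",
    StorageCapacity=2048,                      # GiB; default would be 6,144 IOPS
    SubnetIds=["subnet-0aaa1111bbbb2222c"],
    WindowsConfiguration={
        "ThroughputCapacity": 2048,            # MB/s
        "ActiveDirectoryId": "d-1234567890",   # hypothetical AWS Managed Microsoft AD
        "DiskIopsConfiguration": {
            "Mode": "USER_PROVISIONED",
            "Iops": 12000,                     # provisioned beyond the 3 IOPS/GiB default
        },
    },
)

print(fs["FileSystem"]["FileSystemId"])
```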
Join the Migrate NAS to AWS to reduce TCO and gain agility session to learn more about the performance improvements for Amazon FSx for Windows File Server.
Logically Air-Gapped Vault for AWS Backup
AWS Backup is a fully managed, policy-based data protection solution that enables customers to centralize and automate backup and restore across 19 AWS services (spanning compute, storage, and databases) and third-party applications such as VMware Cloud on AWS and on premises, as well as SAP HANA on Amazon EC2.
Today, we're announcing the preview of logically air-gapped vault as a new type of AWS Backup vault that adds an additional layer of protection to mitigate against malware events. With logically air-gapped vault, customers can recover their application data through a different trusted account.
Join the Deep dive on data recovery for ransomware events session to learn more about logically air-gapped vault for AWS Backup.
Copy Data to and from Other Clouds with AWS DataSync
AWS DataSync is an online data movement and discovery service that simplifies data migration and helps you quickly, easily, and securely transfer your file or object data to, from, and between AWS storage services. In addition to supporting data migration to and from AWS storage services, DataSync supports copying to and from other clouds such as Google Cloud Storage, Azure Files, and Azure Blob Storage. Using DataSync, you can move your object data at scale between Amazon S3 compatible storage on other clouds and AWS storage services such as Amazon S3. We're now expanding DataSync support for copying data to and from other clouds to include DigitalOcean Spaces, Wasabi Cloud Storage, Backblaze B2 Cloud Storage, Cloudflare R2 Storage, and Oracle Cloud Storage.
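At the API level, copying from an S3-compatible store on another cloud into Amazon S3 involves two DataSync locations and a task. The boto3 sketch below shows the shape of that setup; the hostname, bucket names, credentials, and ARNs are hypothetical placeholders, and it assumes a DataSync agent is already deployed.

```python
import boto3

datasync = boto3.client("datasync")

# Source: an S3-compatible object store on another cloud, reached through a
# DataSync agent (hostname, bucket, keys, and agent ARN are placeholders).
source = datasync.create_location_object_storage(
    ServerHostname="objects.example-cloud.com",
    BucketName="source-bucket",
    ServerProtocol="HTTPS",
    AccessKey="EXAMPLE_ACCESS_KEY",
    SecretKey="EXAMPLE_SECRET_KEY",
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"],
)

# Destination: an Amazon S3 bucket, accessed through an IAM role.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::destination-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

# Task that copies object data from the other cloud into Amazon S3.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="copy-from-other-cloud",
)

datasync.start_task_execution(TaskArn=task["TaskArn"])
```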
Join the Identify and accelerate data migrations at scale session to learn more about this expanded support for DataSync.
Join Us Online
Join us today for the AWS Storage Day virtual event on the AWS On Air channel on Twitch. The event will be live starting at 9:00 AM Pacific Time (12:00 PM Eastern Time) on August 9. All sessions will be available on demand approximately two days after Storage Day.
We look forward to seeing you on Twitch!
– Veliswa