Farewell EC2-Classic, it’s been swell

EC2-Classic in a museum gallery

Retiring services isn’t something we do often at AWS. It’s quite rare. Companies depend on our offerings – their businesses literally live on these services – and it’s something we take seriously. For example, SimpleDB is still around, even though DynamoDB is the “NoSQL” DB of choice for our customers.

So, two years ago, when Jeff Barr announced that we would be shutting down EC2-Classic, I’m sure there were at least a few of you who didn’t believe we would actually flip the switch, that we’d just let it run forever. Well, that day has come. On August 15, 2023, we shut down the last instance of Classic. And with all of the history here, I think it’s worth celebrating the original version of one of the services that started what we now know as cloud computing.

EC2 has been around for quite a while, almost 17 years. Only SQS and S3 are older. So, I wouldn’t blame you if you were wondering what makes an EC2 instance “Classic”. Put simply, it’s the network architecture. When we launched EC2 in 2006, it was one giant network: 10.0.0.0/8. All instances ran on a single, flat network shared with other customers. It exposed a handful of features, like security groups and public IP addresses that were assigned when an instance was spun up. Classic made the process of acquiring compute dead simple, even though the stack running behind the scenes was incredibly complex. “Invent and Simplify” is one of the Amazon Leadership Principles, after all…
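For readers who never saw it, here is a rough sketch of the shape of a Classic-style launch, written with today’s boto3 for illustration (the AMI ID and security group name are placeholders of my own, not values from this post). The telling detail is that you referenced security groups by name and never specified a subnet, because every instance landed on that shared, flat network.

# A rough, illustrative sketch of a Classic-style launch using boto3.
# The AMI ID and security group name are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # placeholder AMI ID
    InstanceType="m1.small",           # the original 2006-era instance type
    MinCount=1,
    MaxCount=1,
    SecurityGroups=["my-classic-sg"],  # referenced by name; no VPC or subnet involved
)
print(response["Instances"][0]["InstanceId"])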

If you had launched an instance in 2006, an m1.small, you would have gotten a virtual CPU equivalent to a 1.7 GHz Xeon processor, with 1.75 GB of RAM, 160 GB of local disk, and 250 Mb/second of network bandwidth. And it would have cost just $0.10 per clocked hour. It’s pretty incredible where cloud computing has gone since then, with a P3dn.24xlarge providing 100 Gbps of network throughput, 96 vCPUs, 8 NVIDIA V100 Tensor Core GPUs with 32 GiB of memory each, 768 GiB of total system memory, and 1.8 TB of local SSD storage, not to mention an EFA to accelerate ML workloads.

But 2006 was a different time, and that flat network and small collection of instances, like the m1.small, was “Classic”. And at the time it was truly revolutionary. Hardware had become a programmable resource that you could scale up or down at a moment’s notice. Every developer, entrepreneur, startup, and enterprise now had access to as much compute as they wanted, whenever they wanted it. The complexities of managing infrastructure (buying new hardware, upgrading software, replacing failed disks) were abstracted away. And it changed the way we all designed and built applications.

Of course, the first thing I did when EC2 launched was move this blog to an m1.small. It was running Movable Type, and that instance was good enough to run both the server and the local (no RDS yet) database. Eventually I turned it into a highly available service with RDS failover, etc., and it ran there for 5+ years until the Amazon S3 Website feature launched in 2011. The blog has now been “serverless” for the past 12 years.

Like we do with all of our services, we listened to what our customers needed next. This led us to add features like Elastic IP addresses, Auto Scaling, Load Balancing, CloudWatch, and various new instance types that could better suit different workloads. By 2013 we had enabled VPC, which allowed each AWS customer to manage their own slice of the cloud: secure, isolated, and defined for their business. And it became the new standard. It simply gave customers a new level of control that enabled them to build even more comprehensive systems in the cloud.
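To make the contrast concrete, here is a minimal sketch of the VPC model with boto3: you carve out your own address space, create a subnet and a security group inside it, and launch the instance into that private network. The CIDR blocks, names, instance type, and AMI ID here are illustrative assumptions, not values from this post.

# An illustrative sketch of launching into your own VPC with boto3.
# All CIDR blocks, names, and the AMI ID are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Your own slice of the cloud: a private address range and a subnet within it.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# A security group scoped to this VPC, allowing only HTTPS in.
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow HTTPS in",
    VpcId=vpc["VpcId"],
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# The instance now lives inside your network, not a shared flat one.
ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["SubnetId"],
    SecurityGroupIds=[sg["GroupId"]],  # referenced by ID, scoped to the VPC
)

The difference in the call shape tells the story: instead of launching onto one big shared network, you explicitly state which network, subnet, and security boundary the instance belongs to.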

We continued to support Classic for the next decade, even as EC2 evolved and we implemented an entirely new virtualization platform, Nitro, because our customers were still using it.

Ten years ago, during my 2013 keynote at re:Invent, I told you that we wanted to “support today’s workloads as well as tomorrow’s,” and our commitment to Classic is the best evidence of that. The amount of work that goes into an effort like this is not lost on me, but it’s exactly the kind of work that builds trust, and I’m proud of the way it has been handled. To me, this embodies what it means to be customer obsessed. The EC2 team kept Classic running (and running well) until every instance was shut down or migrated, providing documentation, tooling, and support from engineering and account management teams throughout the process.

It’s bittersweet to say goodbye to one of our original offerings. But we’ve come a long way since 2006, and we’re not done innovating for our customers. It’s a reminder that building evolvable systems is a strategy, and revisiting your architectures with an open mind is a must. So, farewell Classic, it’s been swell. Long live EC2.

Certificate of achievement

Now, go construct!
