November 28, 2017

What Will Amazon Announce at Re:Invent?

As Amazon Web Services’ annual re:Invent show kicks off in Las Vegas this week, the anticipation is building for what kinds of data services the cloud giant will announce next. Will it drive a giant data-slurping semi-truck onto the stage like it did last year? Or maybe drone-based data delivery is the hot new thing?

Nobody knows what the company will announce tomorrow morning, when AWS CEO Andy Jassy takes the stage for the first keynote address, or what Werner Vogels, the cloud giant’s resident technology genius, will discuss during his keynote on Thursday.

But there’s one thing for certain: It will be big.

It’s hard to overstate the influence that AWS is having on the computer industry as a whole. When it began 11 years ago as a way to monetize the extra computing power that Jeff Bezos assembled for his ecommerce operation, it was small. But over the years since, it has come to dominate the public cloud category.

The scale of AWS is unimaginably massive. Back in 2014, Timothy Prickett Morgan, then the editor of Datanami’s sister publication EnterpriseTech, estimated that AWS had anywhere from 2.8 million to 5.6 million servers running across 87 data centers in 28 availability zones. The company lashed them all together with its own custom-made networking gear to make it work like one humongous global cluster.

Since then, the company has expanded its global infrastructure footprint to 44 availability zones, with plans to add 17 more around the world. (And that’s not counting the GovCloud it built for the US government, and a second GovCloud now in the works.) If the ratio of servers to availability zones has stayed constant since 2014, AWS is now running anywhere from 4.4 million to 8.8 million servers across roughly 136 data centers. Mind-boggling numbers, to be sure.
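
For the curious, the back-of-the-envelope math looks like this (a minimal sketch in Python; the 2014 figures are Morgan’s estimates, and holding the servers-per-availability-zone ratio constant is purely an assumption, not anything AWS has confirmed):

    # Scale Morgan's 2014 estimates by the growth in availability zones (28 -> 44).
    # Assumption: servers and data centers grew in proportion to AZs.
    AZ_2014, AZ_2017 = 28, 44
    DATACENTERS_2014 = 87
    SERVERS_2014_LOW, SERVERS_2014_HIGH = 2_800_000, 5_600_000

    growth = AZ_2017 / AZ_2014  # ~1.57x

    print(f"Data centers: ~{int(DATACENTERS_2014 * growth)}")  # ~136
    print(f"Servers: {SERVERS_2014_LOW * growth:,.0f} to {SERVERS_2014_HIGH * growth:,.0f}")
    # -> Servers: 4,400,000 to 8,800,000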

And then there’s the data (it’s always about the data). While AWS isn’t saying how much data it stores on behalf of customers in S3 and other storage repositories, the volume is almost certainly measured in zettabytes — and it might even be moving toward the yottabyte range.

Think a petabyte is big? Please. Storing exabytes has become passé for the world’s largest cloud provider. “We have a lot of customers who have exabytes of data,” Jassy said during last year’s re:Invent keynote address. “You would not believe how many companies now have exabytes of data that they want to move to the cloud…”
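
For a sense of the ladder being climbed here, each rung is a factor of a thousand:

    # Each rung on the storage ladder is 1,000x the one below it (decimal units).
    for i, unit in enumerate(["petabyte", "exabyte", "zettabyte", "yottabyte"]):
        print(f"1 {unit} = 10^{15 + 3 * i} bytes")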

In the pantheon of data storage, there is “big data,” and then there is AWS data. Who else would develop a semi-truck that customers can load with 100 petabytes of data at a time and physically haul into the AWS cloud because the Internet is too slow? (AWS did.)
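
The “Internet is too slow” line survives a quick sanity check. Assuming a sustained 10 Gbps link, which is generous for most enterprises, the math runs roughly as follows:

    # How long would 100 PB take over the network instead of the truck?
    SNOWMOBILE_BYTES = 100 * 10**15       # 100 petabytes
    LINK_BITS_PER_SEC = 10 * 10**9        # sustained 10 Gbps (assumption)

    seconds = SNOWMOBILE_BYTES * 8 / LINK_BITS_PER_SEC
    print(f"~{seconds / 86_400:.0f} days ({seconds / (86_400 * 365):.1f} years)")
    # ~926 days, or about 2.5 years -- versus weeks for the truck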

Besides the Snowmobile semi-truck, last year’s AWS re:Invent brought a host of new services, including:

  • Amazon AI, which uses pre-built neural network models to automate computer vision, text-to-speech, and natural language processing (NLP) tasks;
  • Amazon Athena, an interactive SQL-based query service for analyzing data stored in S3 (see the sketch after this list);
  • AWS Greengrass, a framework for developing and running distributed applications across connected devices using the company’s AWS Lambda architecture;
  • Lambda@Edge, which runs Lambda functions at AWS edge locations, moving compute closer to end users.
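
For a flavor of what Athena looks like from the developer’s seat, here is a minimal sketch using the boto3 SDK for Python; the database, table, and bucket names are hypothetical, and the table is assumed to already be defined in Athena’s catalog:

    import time
    import boto3

    # Kick off an Athena query against data sitting in S3.
    athena = boto3.client("athena", region_name="us-east-1")
    response = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status",
        QueryExecutionContext={"Database": "example_db"},                # hypothetical
        ResultConfiguration={"OutputLocation": "s3://example-results-bucket/"},
    )
    query_id = response["QueryExecutionId"]

    # Poll until the query finishes, then print the result rows.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        results = athena.get_query_results(QueryExecutionId=query_id)
        for row in results["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])

Note there is no cluster to provision anywhere in the snippet: the client simply submits SQL, polls for completion, and reads results back from the S3 output location.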

Undoubtedly this year’s AWS re:Invent will bring updates to some of these products. Some insiders have heard rumblings that it could be cooking up an easy-to-use hosted machine learning service, something more akin to rival Microsoft’s Azure ML service.

AWS still has the scale advantage over its public cloud rivals, but can it keep a coterie of data-hungry developers, data scientists, and data engineers satisfied with ever more powerful capabilities? We’ll find out tomorrow.

Related Items:

Exabytes Hit the Road with AWS Snowmobile

Amazon Adds AI, SQL to Analytics Arsenal

A Rare Peek Into The Massive Scale of AWS (EnterpriseTech)
