January 19, 2012

Amazon Unveils Dynamo as a Service

Datanami Staff

Reliability and scalability top the enterprise priority list when it comes to evaluating NoSQL solutions, and Amazon is the latest to step in with a new offering.

The shopping giant and infrastructure-as-a-service provider is touting its new cloud data store as enterprise ready, telling organizations that they can (and should) run their mission-critical applications on its cloud and take advantage of its services, including Dynamo, Elastic MapReduce and more.

The company says that its new data store offering, Dynamo, provides the reliability and scaling capabilities Amazon has relied on internally to handle mission-critical, highly fluctuating customer workloads.

Amazon says that Dynamo is designed to “manage the state of services that have very high reliability requirements and need tight control over the tradeoffs between availability, consistency, cost-effectiveness and performance.” Additionally, this capability is offered as a service, meaning that organizations seeking to make use of it are spared the headaches of rapid manual provisioning; as a cloud service, Dynamo will scale to meet demand (automatically to a point, though very large customers will need to contact Amazon) and wind back down as well.
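In practice, capacity is something an application can dial up or down through the service's API rather than by provisioning hardware. As a rough illustration only, the following minimal sketch uses the modern boto3 Python SDK (which post-dates this article) and a hypothetical table name; the UpdateTable call with provisioned read and write capacity is the real DynamoDB API.

```python
import boto3

# Hypothetical table name; update_table/ProvisionedThroughput is the real DynamoDB API.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Raise provisioned read/write capacity ahead of an expected traffic spike,
# then issue a second call later to dial it back down and control cost.
dynamodb.update_table(
    TableName="shopping-carts",
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,
        "WriteCapacityUnits": 200,
    },
)
```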

They point to an example of Dynamo in action, noting that features on Amazon.com's shopping site, including customer preferences, shopping carts, sales rank of new titles and other customer-facing elements, would hit major scaling and availability walls if run on a standard relational database. Dynamo, however, provides a primary key-only interface that can adapt to these requirements and scale with demand.
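A primary key-only interface means every read and write addresses an item by its key rather than through joins or ad hoc SQL. The sketch below, again using the boto3 Python SDK with a hypothetical table keyed on a customer ID, shows what that access pattern looks like; the put_item and get_item calls are the real DynamoDB operations.

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
carts = dynamodb.Table("shopping-carts")  # hypothetical table keyed on customer_id

# Writes and reads go through the primary key alone -- no joins, no ad hoc SQL.
carts.put_item(Item={
    "customer_id": "c-1001",
    "items": ["B000FA64PK", "B000GCFCQE"],
})

response = carts.get_item(Key={"customer_id": "c-1001"})
print(response.get("Item"))
```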

According to Amazon, one of the biggest challenges it faces in its own daily operations is achieving reliability at massive scale. As the company noted this week, “even the slightest outage has significant financial consequences and impacts customer trust.”

Accordingly, the company says it has built its infrastructure of tens of thousands of servers and network components across several datacenters to create the most fault-tolerant environment possible, one in which components regularly fail but those failures are managed and mitigated. Amazon claims this approach ensures that “persistent state is managed in the face of these failures” and that this “drives the reliability and scalability of the software systems.”

“One of the lessons our organization has learned from operating Amazon’s platform is that the reliability and scalability of a system is dependent on how its application state is managed. Amazon uses a highly decentralized, loosely coupled, service oriented architecture consisting of hundreds of services. In this environment there is a particular need for storage technologies that are always available. For example, customers should be able to view and add items to their shopping cart, even if disks are failing, network routes are flapping, or data centers are being destroyed by tornadoes.  Therefore, the service responsible for managing shopping carts requires that it can always write to and read from its data store, and that its data needs to be available across multiple data centers.”

As Doug Henschen noted, “Another appeal of DynamoDB is that it’s closely tied to Amazon’s Hadoop-based Elastic MapReduce service. Customers using DynamoDB will be able to use data from within DynamoDB (and the AWS S3 storage service) in MapReduce processes and downstream analytic queries, something many Internet-scale businesses are routinely doing or anticipating doing as their businesses scale up. So it’s a single environment with scalable processing, scalable storage and scalable data-processing and analytic capabilities.”
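Elastic MapReduce can read from DynamoDB and S3 in the same job, which is what makes the combination appealing for downstream analytics. As a rough sketch of the idea, not of the EMR connector itself, the snippet below uses the boto3 Python SDK with hypothetical table and bucket names to pull items out of a DynamoDB table and stage them in S3, where a MapReduce job or analytic query could pick them up.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

table = dynamodb.Table("shopping-carts")   # hypothetical table name
bucket = "example-analytics-staging"       # hypothetical bucket name

# Scan the table page by page (following LastEvaluatedKey) and collect the items.
items, kwargs = [], {}
while True:
    page = table.scan(**kwargs)
    items.extend(page["Items"])
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

# Stage the rows in S3 for a downstream Elastic MapReduce job or analytic query.
s3.put_object(
    Bucket=bucket,
    Key="dynamodb-export/shopping-carts.json",
    Body=json.dumps(items, default=str),
)
```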
