I've been playing around with Amazon's Elastic Compute Cloud (EC2) for a while now to assess its utility as a software deployment environment. I found it's a game changer for the data center and a great model for a service catalog.
The two aspects I believe are revolutionary about Amazon's cloud offering are the predictability of IT infrastructure costs and the variability of the cost structure. Neither of these is true today for IT organizations. You may have predictability but not variability, and that predictability often comes at a high cost.
Amazon makes a different offer to its customers. And how does Amazon communicate that offer? Through a service catalog. Let me give you a tour of their offering. The whole page can be found here. I've broken it down into Structure, Benefits, Pricing and Actionability for simplicity.
The whole structure looks like this:
It covers what the service does, its benefits (highlights), details, major options and pricing, followed by what I call the fine print (a.k.a. the SLAs).
It doesn't skimp on benefits. In fact, benefits and outcomes are front and center. Here are a few that I've highlighted.
Elastic – Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously. Of course, because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs.
Completely Controlled – You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine. Instances can be rebooted remotely using web service APIs. You also have access to console output of your instances.
Flexible – You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, and instance storage that is optimal for your choice of operating system and application. For example, your choice of operating systems includes numerous Linux distributions, Microsoft Windows Server and OpenSolaris.
Reliable – Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned. The service runs within Amazon’s proven network infrastructure and datacenters.
Secure – Amazon EC2 provides web service interfaces to configure firewall settings that control network access to and between groups of instances.
Inexpensive – Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You pay a very low rate for the compute capacity you actually consume. Compare this with the significant up-front expenditures traditionally required to purchase and maintain hardware, either in-house or hosted. This frees you from many of the complexities of capacity planning, transforms what are commonly large fixed costs into much smaller variable costs, and removes the need to over-buy “safety net” capacity to handle periodic traffic spikes.
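The "Elastic" benefit above, an application scaling its own fleet via web-service calls, can be sketched in miniature. To be clear, every name below is my own illustration of the idea, not Amazon's actual API:

```python
# Sketch of API-driven elasticity: the application sizes its own fleet
# to current load and converges toward that target. These function
# names are illustrative, not part of Amazon's real web service API.

def desired_instances(requests_per_sec, capacity_per_instance=100, minimum=1):
    """Size the fleet to current load, never dropping below a floor."""
    needed = -(-requests_per_sec // capacity_per_instance)  # ceiling division
    return max(minimum, needed)

def reconcile(current, target):
    """Return the scaling action the web-service call would carry out."""
    if target > current:
        return ("launch", target - current)
    if target < current:
        return ("terminate", current - target)
    return ("noop", 0)
```

At 450 requests per second and 100 per instance, a fleet of 3 would grow by 2; when load drops, the same loop terminates the surplus. That feedback loop is the whole trick, and per-hour billing is what makes terminating the surplus worth doing.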
It absolutely delivers their unique differentiator, which is variable (elastic) cost, while reassuring you that you have complete control and flexibility and that, of course, it's inexpensive. In fact, if you read that section, it draws a direct comparison against your data center! And it gets to the heart of what customers don't like about IT costs: they're highly fixed, over-bought, hard to plan for, and so on.
It also covers the OS, database software and middleware choices.
Next, the catalog outlines the main packages: Standard and High-CPU. Two choices, each with further sub-choices. The Standard package contains:
- Small Instance (Default) 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform
- Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform
- Extra Large Instance 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform
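These standardized configurations reduce to a small lookup table, which is part of their appeal. As an illustration, here are the Standard types above as data (the specs are copied from the list; the selection helper is my own sketch, not part of the catalog):

```python
# The Standard instance types above as structured data (specs from the
# catalog; "ecu" abbreviates EC2 Compute Units). Ordered smallest first.
STANDARD_INSTANCES = [
    {"name": "Small",       "memory_gb": 1.7,  "ecu": 1, "storage_gb": 160,  "bits": 32},
    {"name": "Large",       "memory_gb": 7.5,  "ecu": 4, "storage_gb": 850,  "bits": 64},
    {"name": "Extra Large", "memory_gb": 15.0, "ecu": 8, "storage_gb": 1690, "bits": 64},
]

def smallest_fit(min_memory_gb, min_ecu, catalog=STANDARD_INSTANCES):
    """Pick the smallest listed instance meeting the requirements."""
    for inst in catalog:
        if inst["memory_gb"] >= min_memory_gb and inst["ecu"] >= min_ecu:
            return inst["name"]
    return None  # nothing in the catalog is big enough
```

A workload needing 4 GB of memory and 2 compute units lands on Large. Compare that one-line decision with a traditional hardware sizing exercise.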
There's a lot more description, links to explanations, FAQs, etc. The useful takeaway is the way they standardize these formerly complicated configurations.
Pricing follows, and there are three aspects to highlight. First, it's completely and easily understandable as a unit of measure: they price per hour.
| Instance | Linux/UNIX | Windows |
|---|---|---|
| Small (Default) | $0.10 per hour | $0.125 per hour |
| Large | $0.40 per hour | $0.50 per hour |
| Extra Large | $0.80 per hour | $1.00 per hour |
Think of all the complexity of running a data center: people, machines, facilities, etc. Amazon gets it down to a controllable unit of measure: hours. As a customer, I can choose to consume an hour or not. That's a level of control that appeals to me. Is this the right unit of measure for every customer? No. It depends on your customer and the benefit they want to buy. (More in future postings.)
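To make the per-hour unit concrete, here is a back-of-the-envelope comparison using the $0.10 Small rate from the table above (the function and the two scenarios are my own illustration):

```python
# What per-hour pricing buys you: pay for the hours you actually run,
# using the $0.10/hour Small instance rate from the table above.
SMALL_RATE = 0.10  # USD per instance-hour

def monthly_cost(instances, hours_per_day, rate=SMALL_RATE, days=30):
    """Cost of running `instances` servers `hours_per_day` each day."""
    return instances * hours_per_day * days * rate

# One Small instance around the clock: 720 hours, about $72/month.
full_time = monthly_cost(1, 24)
# The elastic alternative: a burst of ten instances for 2 hours a day
# is 600 hours, about $60/month -- 10x the peak capacity for less money.
burst = monthly_cost(10, 2)
```

That second scenario is exactly the "periodic traffic spike" case the catalog's Inexpensive benefit calls out: you stop over-buying safety-net capacity because the meter only runs while the machines do.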
Second, they include all the pricing units for network, storage and servers: your complete data center configuration (almost).
Third, some charges, like data transfer, are harder to map to controllable costs, so Amazon provides a pricing calculator that translates them into a potential bill, along with sample configurations and estimates.
Finally, this catalog is NOT STATIC. It's completely actionable. If you have an account and log in, Amazon provides:
- Self-service ordering, configuration and deployment.
- Account management functions.
- Consumption management and billing.
In other words, your complete service catalog experience. I like that they have brought the ease of their regular catalog to a more complex environment. And ease wins.
Amazon has redefined the expectations and pricing for data center services. Make no mistake, they are your competitors. Now the challenge is to respond with your own service catalog and differentiated service definitions. There's a lot more I'll be writing about this.