Amazon announced its first cloud product, Simple Storage Service, 10 years ago Monday.
S3, as it is known, is basically data storage for rent. Amazon soon followed with rentable computer servers and in the intervening years a bevy of other computing services, all running on shared Amazon infrastructure and all available by the hour (compute), gigabyte (storage), and bandwidth used (networking).
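That metered, pay-as-you-go model means a cloud bill is just usage multiplied by unit rates across the three dimensions. A minimal sketch of the arithmetic, using entirely hypothetical prices (real rates vary by region, instance type, and storage class):

```python
# Hypothetical unit rates for illustration -- NOT actual AWS prices.
RATE_PER_INSTANCE_HOUR = 0.10   # dollars per compute hour
RATE_PER_GB_MONTH = 0.03        # dollars per GB-month of storage
RATE_PER_GB_TRANSFER = 0.09     # dollars per GB of outbound bandwidth

def monthly_bill(instance_hours: float, stored_gb: float, transfer_gb: float) -> float:
    """Sum the three metered dimensions: compute, storage, networking."""
    return (instance_hours * RATE_PER_INSTANCE_HOUR
            + stored_gb * RATE_PER_GB_MONTH
            + transfer_gb * RATE_PER_GB_TRANSFER)

# One small server running all month (~730 hours), 500 GB stored,
# 200 GB served out: 730*0.10 + 500*0.03 + 200*0.09
print(round(monthly_bill(730, 500, 200), 2))
```

No capital outlay, no license: the startup's cost scales down to zero when usage does, which is exactly the shift described below.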
Along the way, Amazon Web Services pioneered the public cloud model: it aggregates a ton of servers, storage, and networking, which are then shared by businesses that don't want to build out more of their own data center capacity.
That was huge. Startups that once spent most of their venture funding on pricey servers and software licenses were, thanks to AWS, able to rent the resources they needed with a credit card. Great news for Amazon and the startups. Most definitely not great news for the Oracles, Sun Microsystems, and IBMs of the universe.
To commemorate the big day, Amazon's chief technology officer, Werner Vogels, recapped some lessons learned from AWS's first decade in a blog post.
He reiterated the familiar refrain that failure—of hard drives, servers, operating systems—will happen even with the best gear. It’s a question of when not if, so Amazon had to learn to anticipate bad scenarios when possible. In addition AWS “developed the fundamental skill of managing the ‘blast radius’ of a failure occurrence such that the overall health of the system can be maintained.”
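One common way to bound a failure's blast radius is cell-based partitioning: customers are deterministically assigned to small, independent cells, so when one cell fails only its share of customers is affected. A toy sketch of the idea (the cell count and hashing scheme here are illustrative, not AWS's actual design):

```python
import hashlib

NUM_CELLS = 8  # illustrative; more cells means a smaller blast radius per failure

def cell_for(customer_id: str) -> int:
    """Deterministically map a customer to one isolated cell."""
    digest = hashlib.sha256(customer_id.encode()).digest()
    return digest[0] % NUM_CELLS

def affected_customers(customers, failed_cell):
    """Only customers assigned to the failed cell are impacted."""
    return [c for c in customers if cell_for(c) == failed_cell]

customers = [f"cust-{i}" for i in range(1000)]
impacted = affected_customers(customers, failed_cell=3)
# Roughly 1/NUM_CELLS of customers land in any one cell,
# so a single cell failing leaves the overall system largely healthy.
print(len(impacted), "of", len(customers), "customers impacted")
```

The point is exactly Vogels' lesson: the failure still happens, but its scope is contained by construction rather than by luck.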
He also noted that because developers need a predictable, stable way to use its cloud services, AWS has to be very careful about the design and maintenance of its application programming interfaces, or APIs.
"Once customers started building their applications and systems using our APIs, changing those APIs becomes impossible, as we would be impacting our customer's business operations if we would do so. We knew that designing APIs was a very important task as we'd only have one chance to get it right."
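In practice, that discipline usually means purely additive change: new capabilities arrive as optional parameters with safe defaults, so code written against the original signature keeps working forever. A hypothetical sketch (the function and its parameters are invented for illustration, not a real AWS API):

```python
# Version 1 of a hypothetical storage API exposed:
#   put_object(bucket, key, data)
# Version 2 adds encryption -- but as an optional keyword with a
# backward-compatible default, so every existing caller is untouched.

def put_object(bucket: str, key: str, data: bytes, *, encrypt: bool = False) -> dict:
    """Store an object; the `encrypt` flag is additive and never breaking."""
    return {"bucket": bucket, "key": key, "bytes": len(data), "encrypted": encrypt}

# Old callers, written against version 1, keep working unchanged:
print(put_object("photos", "cat.jpg", b"\xff\xd8\xff"))
# New callers opt in to the new behavior explicitly:
print(put_object("photos", "cat.jpg", b"\xff\xd8\xff", encrypt=True))
```

Removing or renaming a parameter, by contrast, would break deployed customer code, which is the scenario Vogels says AWS cannot allow.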
No doubt AWS is the de facto leader in public cloud services. Oft-quoted Gartner research last year held that AWS runs 10 times more capacity than the next 14 cloud competitors combined, and most customers laud the steady stream of price cuts and new functionality that it produces.
But that power provokes anxiety even among many devout AWS users, who privately worry about its dominance. They do not want to replicate, with Amazon in the cloud, the vendor lock-in many of them saw with Oracle, Microsoft, or IBM.
One executive from a financial services company last year called AWS "a benevolent dictator" in cloud services; his concern is that even a benevolent dictator can go wrong. Several AWS customers, also speaking off the record, said they continue to consume EC2 compute, S3 storage, and other base services like crazy but are loath to adopt its higher-level database and other services for fear of locking into that cloud.
One thing has changed in the past three to four years: While AWS growth continues—last quarter it was on pace for a $10 billion-a-year run rate—Microsoft Azure is also growing gangbusters, and Google appears serious about making Google Cloud Platform a real competitor as well.
One thing many cloud watchers (including blogger Om Malik and yours truly) would like now is a uniform way to compare the price and usage of a given cloud resource across vendors.
Here’s hoping that happens in our lifetime.