There are many cloud companies that do a perfectly good job. You click, and they deliver a root login to a running instance. Some are even the best in particular niches. None of them, though, measures up to the breadth and depth of Amazon.
The reason is simple: AWS has built out so many products and services that it’s impossible to cover them all in a single article, or even a book. Many were remarkable innovations when they first appeared, and the hits keep coming. Every year Amazon adds new tools that make it harder and harder to justify keeping those old boxes pumping out heat and overstressing the air conditioner in the server room down the hall.
For all of its dominance, though, Amazon has strong competitors. Companies like Microsoft, Google, IBM, Oracle, SAP, Rackspace, Linode, and DigitalOcean know that they must establish a real presence in the cloud, and they are finding clever ways to compete and excel in what is less and less a commodity business. These rivals offer great products with different, and sometimes better, approaches. In many cases, they’re running neck and neck with AWS. And if what you’re after is a commodity machine, their commodity Linux instance will run the same code as AWS.
Sometimes the competitors not only match AWS on commodity products, but actually do a better job. These advantages often appear when the competitors link their cloud to parts of the computing ecosystem they already dominate. If you want to run .NET code, you’ll find it just a bit more at home on Microsoft’s cloud. If you want to use Google’s G Suite of web-based office productivity tools, it’s no surprise that they integrate most smoothly with Google’s cloud.
Still, for all of these competitors’ innovation and success, Amazon continues to outshine them in many ways—and the phrase “many ways” is a pretty good summary of Amazon’s approach. The company has evolved a strong, consistent style that might be described as overwhelming. The AWS cloud offers at least 10 different databases and another nine products lumped into a separate category called “storage.” There are dozens of machine types available in dozens of configurations of RAM and CPUs, and you can arrange for Amazon to scale them automatically when the load increases.
Indeed, the greatest advantage of AWS may be the sheer overwhelming number of options. Most of the time, someone there has faced the same problem that’s confounding you and they’ve set up a team to productize a solution. You just have to work your way through all of the options.
AWS’s developer tools work well with both worlds. Microsoft’s Visual Studio has many devotees, and so does Eclipse. AWS doesn’t play favorites and offers an integration toolkit for each of them.
Maybe you don’t need Amazon Connect, the AWS call-center service, but as AWS adds more of these targeted services, there’s a good chance that someone at AWS has already built most of what you need. The catalog runs from call centers all the way to, for those with their own network of orbiting satellites, a full-function “ground station as a service.”
All of the cloud companies understand the attractiveness of containers like Docker, but Amazon is pushing further by building a special, stripped-down version of Linux, Bottlerocket, that has just enough code to keep the machine running but not much more. Teams running microservices can choose it and quit worrying about extra cruft like FTP servers sitting around in the background. Amazon also intends to embrace containers even more by skipping over traditional packages for security and feature upgrades. These will be available as complete images instead, so upgrades can be done in one step and managed with many of the same tools you’re already using to juggle your own containers.
AWS Lambda started as a cute idea, a kind of simple shell script that could glue together all of the operations in the cloud. Users quickly turned to Lambda’s serverless functions to handle occasional computing tasks because they’re so much more efficient than dedicating a machine to work that arrives sporadically. That might be a background process that runs once an evening, a corner of your microservices architecture that isn’t used often, or maybe just that blog full of your rantings that still hasn’t found its audience.
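The sporadic tasks described above boil down to a single entry-point function. Here is a minimal sketch of what such a Lambda handler looks like in Python; the event shape and the byte-counting task are hypothetical, since Lambda itself only requires a function that accepts an event and a context:

```python
import json

def handler(event, context):
    # A sporadic background task: tally whatever records arrived today.
    # The "records"/"bytes" fields are illustrative, not a real AWS event shape.
    records = event.get("records", [])
    total = sum(item.get("bytes", 0) for item in records)
    return {
        "statusCode": 200,
        "body": json.dumps({"records": len(records), "total_bytes": total}),
    }

# Local smoke test; in production, Lambda invokes handler() for you
# in response to a trigger such as a schedule or an API call.
if __name__ == "__main__":
    fake_event = {"records": [{"bytes": 100}, {"bytes": 250}]}
    print(handler(fake_event, None))
```

Because the function only runs when an event arrives, you pay for those few milliseconds of compute rather than an idle instance.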
Broad AI platform
If your shop is obsessed with mixing AI into your stack, Amazon has almost too many options to consider—too many to list here. They begin with basic tools like Amazon SageMaker for training models on your data. These tools have attracted plenty of developers, which may be why the sales literature brags that “85% of TensorFlow projects in the cloud run on AWS.”
The basic training, though, is just the beginning, because AWS also offers a wide range of tools aimed at particular industries. Amazon Comprehend Medical plows through unstructured medical texts looking for lifesaving treatments. Amazon GuardDuty looks for malicious behavior.
Developers new to AI can begin exploring with the highly automated options coming out of the AWS labs, projects designed to make it simpler to dive in.
The Microsoft and Google clouds also have deep AI expertise and commitment, but Amazon’s wide range is hard to beat.
24 terabytes of RAM
If you’re running very big enterprise databases, or you just believe that whoever dies with the most RAM wins, the High Memory instances from Amazon are just what you need. Up to 24 terabytes of RAM can be yours in just a few clicks.
AWS gives you other ways to wear your big boy pants as well. It has worked with vendors like SAP to make sure their cloud instances can scale up as big as necessary. Are your Oracle databases getting too big? Amazon wants to host them, and it has expanded its RDS service to support instances with as much as 64 terabytes of SSD storage.
Distributed MySQL or PostgreSQL
To the programmer, Amazon Aurora looks just like either MySQL or PostgreSQL. You choose the syntax and, underneath the covers, Aurora stores the data in a fast, SSD-based, virtualized storage layer. That alone is a clever idea that lets programmers keep using their favorite open source flavor of SQL.
There’s even more magic, though, because Aurora distributes your data over multiple machines in multiple zones. Your data is split between hundreds of storage nodes in three different availability zones, ensuring reliability. Queries are spread across all of the storage nodes to speed up access, sometimes dramatically.
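The practical upshot of this wire compatibility is that nothing in your application needs to change: a standard connection URL and an ordinary MySQL or PostgreSQL client library work as-is. A small sketch, with hypothetical endpoint names (Aurora clusters expose a writer endpoint and a read-only endpoint that load-balances across replicas):

```python
def aurora_url(engine: str, endpoint: str, db: str, user: str) -> str:
    """Build a standard SQLAlchemy-style connection URL.

    Nothing here is Aurora-specific: the same URL shape works for any
    stock MySQL or PostgreSQL server, which is exactly the point.
    """
    default_ports = {"mysql": 3306, "postgresql": 5432}
    return f"{engine}://{user}@{endpoint}:{default_ports[engine]}/{db}"

# Hypothetical cluster endpoints: writes go to the writer endpoint,
# reads can go to the read-only endpoint backed by the replicas.
writer = aurora_url(
    "mysql", "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com", "shop", "app")
reader = aurora_url(
    "mysql", "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com", "shop", "app")
```

Splitting traffic this way is an application-level choice; the database engine, SQL dialect, and client driver stay exactly what they would be for a self-hosted MySQL or PostgreSQL server.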