The general public knows what the word “google” means. It’s a search engine, although there’s also Gmail and some services like Google Maps. But that’s about it.
Developers know that behind the scenes, some two decades ago, the company started building one of the original cloud services to support its massive sites that run at a world-wide scale. It was an awesome engineering feat. Now the same software and hardware that powers its search and advertising empire can also support your enterprise, small or large. It’s just a click away.
The cloud business may be dominated by Amazon and its ever-expanding collection of dozens of products and services, but Google is running hard. Sometimes they’re catching up and sometimes they might be said to be leading.
Leading? Well, it’s hard to say that some cloud products are head-and-shoulders above others because the products themselves are often commodities. A machine running the current version of Ubuntu or a cloud storage bucket that stores a few gigabytes are about as interchangeable as evenings spent at home during quarantine.
Still, the cloud companies are finding ways to differentiate themselves with extra features and, as is more often the case, slightly different approaches. Google’s cloud products have a style all their own, a style that echoes the powerful simplicity of many of Google’s consumer-facing products.
Some of this style is apparent as soon as you log in because many of the tools aren’t too different from the widely used G Suite. The user interface has the same primary colors and clean design as the major customer-facing apps like Gmail. Finding your way through the maze of configuration screens is much like finding your way through Google’s office applications or search screens. Less is more.
The openness runs deep too, with Ubuntu everywhere. And when Google built one of the first serverless tools, App Engine, it started with a toy scripting language, Python. Now the Python language and the serverless approach are everywhere. This tradition of clear and open tools is found around every corner.
Still, here are 13 ways Google Cloud shines just a bit brighter than AWS.
Many companies have found it a bit easier to get everyone working from home because all of the cloud services are built around an open standard, the web, running pretty much everywhere. There’s no need to limit your team to any particular brand or model. To make it even easier and more predictable, Google distributes Chrome for a wide range of operating systems for laptops, desktops, and mobile phones.
This same “web as operating system” strategy leaks over to the cloud services. While you’ll probably be looking for generic machines running generic operating systems, you’ll be dealing with tools and interfaces that play a bit better with the Chrome ecosystem. Your team doesn’t need to run a particular VPN or set up remote access. It doesn’t need to limit itself to a particular OS. Everything runs in the browser.
When cloud services get bigger, people want to link them together. In the past, that meant lots of custom code, but things are getting simpler for everyone. Google offers a “no code” option that will link together dominant services like Google Analytics and the G Suite. Calling it “no code” may not be completely correct, but the tools for workflows, field inspection, and reporting go a long way toward doing most of what you’ll probably want to do. The parts are all there in the Google cloud, and now it’s easier to link them together without worrying about the syntax and scope issues that go along with text-based code.
Healthcare API

Google’s Cloud Healthcare API has added a layer of well-tuned code to save you the time of writing it yourself. The API supports many of the important standards for guarding patient privacy and nurturing wellness, like HIPAA, HL7 FHIR, HL7 v2, and DICOM. Underneath lies the power of the more basic products like BigQuery in case you need to write code that connects at a lower level.
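The Cloud Healthcare API is organized around REST resources such as FHIR stores nested inside projects, locations, and datasets. A minimal sketch of building such a resource path, with the project and dataset names purely hypothetical:

```python
# Sketch: build the REST resource path for a FHIR store in the
# Cloud Healthcare API. The project, dataset, and store names
# below are hypothetical examples.
def fhir_store_path(project: str, location: str, dataset: str, store: str) -> str:
    """Return the resource path for a Cloud Healthcare FHIR store."""
    return (f"projects/{project}/locations/{location}"
            f"/datasets/{dataset}/fhirStores/{store}")

path = fhir_store_path("my-project", "us-central1", "clinic", "records")
```

Requests against that path (reading or writing FHIR resources, for example) go through Google’s standard authenticated HTTP endpoints, so the heavy lifting of standards compliance stays on Google’s side.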
Anthos

It’s a bit unfair to think of Anthos as something that’s unique to the Google Cloud Platform, since it is marketed as a platform for multicloud development and distribution. Still, the idea came from Google Cloud and it remains a big part of Google’s vision for the cloud future.
The Anthos brand is an umbrella for Google’s hybrid vision where virtual machines will morph into containers and then flow among Kubernetes pods. The tool will turn an old stack running in a VM into a container ready for modern Kubernetes deployment either in the local cloud in your server room or in any of the Anthos-running public clouds out there. A nice service mesh layered on top makes it easier to deploy and debug microservices. Security and identity policies are managed uniformly saving developers from shouldering that work.
Firebase

Google Cloud Platform offers a number of different ways to store information, but one of the options, Firebase, is a bit different from the regular database. Firebase doesn’t just store information. It also replicates the data to other copies of the database, which can include clients, especially mobile clients. In other words, Firebase handles all of the pushing (and pulling) of data between the clients and the servers. You can write your client code and just assume that the data it needs will magically appear when it’s available.
Pushing new versions of the data to everyone who needs a copy is one of the biggest headaches for mobile developers and really anyone building distributed and interconnected apps. You’ve got to keep all of these connections up just to push a bit of new information every so often.
Firebase may look and sound like a database, but really it’s a mobile development platform with an expanding collection of integrations with other Google services and with other platforms. It has much of the structure you might need to build distributed web or mobile apps. Or mobile web apps, for that matter.
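To see why that push model saves so much work, here is a toy in-process stand-in for it, not the Firebase API itself: clients register a callback once, and new data simply shows up.

```python
# Toy illustration of the push model Firebase handles for you:
# clients register a listener and updates "magically appear".
# This is a local stand-in, NOT the Firebase client library.
class ToyRealtimeStore:
    def __init__(self):
        self._data = {}
        self._listeners = []

    def listen(self, callback):
        # Every registered client is notified on each write.
        self._listeners.append(callback)

    def set(self, key, value):
        self._data[key] = value
        for cb in self._listeners:
            cb(key, value)

store = ToyRealtimeStore()
received = []
store.listen(lambda k, v: received.append((k, v)))
store.set("score", 42)   # the listener sees the update immediately
```

In the real Firebase, the hard part this toy skips, keeping the connections alive and delivering updates across flaky mobile networks, is exactly what the platform does for you.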
Embedded machine learning
Yes, BigQuery and Firebase are called databases, but they’re also machine learning powerhouses. You can start off storing your data and then, if your boss wants some analysis, just kick off the machine learning routines on the same tables. You won’t need to move the data or repack it for some separate machine learning toolkit. It all stays in one place, a feature that will save you from writing plenty of glue code. And then, as an added bonus for the SQL jockeys who drive databases, BigQuery ML is invoked with an added keyword to the SQL dialect. You can do the work of an AI scientist with the language of a DBA. In the meantime, Firebase developers can play to that database’s strength and invoke ML Kit, a tool that brings machine learning to the data stored locally on an Android or iOS device.
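The “added keyword” in BigQuery’s SQL dialect is the CREATE MODEL statement. A sketch of assembling one, with the dataset, table, and column names purely hypothetical:

```python
# Sketch of BigQuery ML's SQL extension: training a model is just a
# CREATE MODEL statement run like any other query. The dataset,
# table, and label column names here are hypothetical.
def create_model_sql(model: str, source_table: str, label_col: str) -> str:
    return (
        f"CREATE OR REPLACE MODEL `{model}` "
        f"OPTIONS(model_type='linear_reg', input_label_cols=['{label_col}']) "
        f"AS SELECT * FROM `{source_table}`"
    )

sql = create_model_sql("mydataset.churn_model", "mydataset.customers", "churned")
# Hand this string to google-cloud-bigquery's Client.query() to run it.
```

The point is that training happens where the data already lives; there is no export step and no separate toolkit to feed.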
G Suite integration
It’s no surprise that Google offers some integration of its cloud platform products with its basic office products. After all, the Googlers use the G Suite throughout the company and they need to get at the data too. BigQuery, for instance, offers a way to explore and analyze your data by turning it into a Sheets document in Google Drive. Or you can take your Sheets data and move it quickly into a BigQuery database. If your organization is already using the different G Suite apps, there’s a good chance it will be a bit simpler to store your data and code in the Google Cloud. There are dozens of connections and pathways that make the integration a bit simpler.
More virtual CPUs
The last time we looked at the documentation, Amazon’s general-purpose instances maxed out at 96 vCPUs. Google Cloud’s lineup includes some machine types where the number of virtual CPUs needs three digits. And then there’s the m2-ultramem-416, a monster with 416 virtual CPUs and 11,776 gigabytes of RAM. Of course, the CPUs are not exactly the same power and they almost certainly run some benchmarks at different speeds. Plus adding more virtual CPUs doesn’t always make your software go faster. The only true measure is the throughput on your problem. But if you want to brag about booting up a machine with 416 CPUs, now is your chance!
Custom cloud machines
Google lets you choose how many virtual CPUs and how much RAM your instance will get. The other clouds offer long menus of predefined instance types, and one of them is bound to be pretty close to what you need, but that’s not the same as being truly custom, is it? Google offers sliders that let you be a bit more particular and choose, say, 12 vCPUs and 74 GB of RAM. Or maybe you want 14 vCPUs. If so, Google will accommodate you.
There are limits to this flexibility, though. Don’t dream about infinite precision because the slider tends to stick on even numbers. You can’t choose 13 vCPUs, for instance, which would probably be unlucky anyway. Some machine types, like the N2D, require the count of virtual CPUs to be a multiple of four, and if you start getting greedy by building a big, fat machine, the rules insist that you consume vCPUs in blocks of sixteen. But this is still a great increase in flexibility if you need a strange configuration or you want to provision exactly the minimum amount of RAM to get the job done.
A premium network
Google and Amazon have huge networks that link their data centers but only Google has a separate “premium” network. It’s like a special fast lane for premium customers that comes with some reliability and performance guarantees like N+2 redundancy and at least three paths between data centers. If you want to rely upon the Google CDN and load balancing across different data centers, then opting for the premium network will make life a bit smoother for your data flows.
What about the non-premium users? Do their packets travel by burro and carrier pigeon? No, but accepting fewer guarantees comes with a lower price tag. By opening up this option, Google lets us decide whether we want to pay more for faster global data movement or save money by getting by with perhaps an occasional hiccup. It’s not another option to muddle our brains. It’s an opportunity.
Pre-emptible instances

New visits to websites need an immediate response, but a great deal of the background processing and housekeeping around most websites doesn’t need to get done right away. It doesn’t even need to get done in the next few hours. It just needs to be done eventually.
Google offers something called pre-emptible instances that come at a nice discount. The catch is that Google reserves the right to shut down your instance and put it aside if someone or something more important comes along and wants the resources. They’ll start it up again later when the demand for computation drops.
AWS offers a different way to save on compute costs, a marketplace where you can bid for resources and use them only if you submit a winning high bid. This is great if you’ve got the time to watch the auctions and adjust your bids if the work isn’t getting done, but it’s not great if you’ve got more important things to do. Google has come up with a mechanism that rewards you for volunteering to be bumped and matches it with a price that’s locked in. No confusion.
Sustained use discounts
Another nice feature of the Google Cloud Platform is the way the discounts just show up automatically the more you use your machines. Your compute instances start off the month at full price, and the discount grows the longer they run. You don’t need to keep the machines running continuously because the price is set by your usage for the month. If your instance is up for about half of the month, the discount is about 10 percent. If you leave your instance up from the first day to the last, you’ll end up saving 30 percent off the full price.
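Those percentages fall out of tiered rates. Assuming the quarter-of-the-month tiers Google has published for its standard machine types, billed at 100, 80, 60, and 40 percent of the base rate (other families may differ), the arithmetic looks like this:

```python
# Sketch of sustained use pricing, assuming quarter-month tiers
# billed at 100/80/60/40 percent of the base rate. Other machine
# families may use a different schedule.
TIERS = [1.0, 0.8, 0.6, 0.4]   # rate for each successive quarter of the month

def sustained_use_discount(fraction_of_month: float) -> float:
    """Effective discount vs. full price for running this fraction of a month."""
    billed, remaining = 0.0, fraction_of_month
    for rate in TIERS:
        used = min(remaining, 0.25)
        billed += used * rate
        remaining -= used
    return 1 - billed / fraction_of_month

round(sustained_use_discount(0.5), 2)   # half a month -> about 10% off
round(sustained_use_discount(1.0), 2)   # full month -> 30% off
```

Half a month of use averages the first two tiers to 90 percent of base price, a 10 percent discount; a full month averages all four tiers to 70 percent, the 30 percent discount quoted above.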
By the way, Google approaches this pricing like a computer scientist. It tracks how many vCPUs and how much memory you’ve used among the various machine types and then combines the different machines, where possible, to give you an even bigger discount. Google calls this “inferred instances.” The model is a bit complex but quite useful if you’re constantly starting and stopping multiple machines.
AWS also offers discounts, but the price breaks extend to those who pre-purchase reserved machines or bid on the spot markets. They also offer some tiered discounts that reward large users, and these might work, in some sense, like sustained use discounts. But you pretty much need to make a commitment. Google just rewards you.
Copyright © 2020 IDG Communications, Inc.