7 dark secrets of cloud costs

Is there anything more seductive than cloud machine price lists? There aren’t many of us old enough to remember paying a penny for a piece of candy, but cloud users enjoy prices that are even smaller.

Google’s N1 standard machine is $0.0475 per hour, but you can get it for just $0.0100 per hour for your batch processing needs—if you’re willing to be preempted by more important jobs. The crazy spenders can step up to the high-CPU version for $0.015 per hour – still less than two cents. Woo-hoo!

Azure charges a minuscule $0.00099 per gigabyte to store data for a month in its archival storage tier. Amazon, though, may offer the most eye-popping low prices – an infinitesimal $0.0000002083 per 100 milliseconds for 128 megabytes of memory to support a Lambda function. (Four digits of precision?)
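To see what those rates really mean, multiply them out over a month. Here’s a rough back-of-the-envelope sketch; the always-on workload and the 730-hour month are simplifying assumptions:

```python
# Rough monthly totals for the headline prices above.
# Assumes an always-on workload and a 730-hour month, both simplifications.

HOURS_PER_MONTH = 730

n1_on_demand = 0.0475 * HOURS_PER_MONTH      # ~ $34.68/month
n1_preemptible = 0.0100 * HOURS_PER_MONTH    # ~ $7.30/month

# Lambda's $0.0000002083 is per 100 ms at 128 MB, so a function that
# somehow ran continuously all month would cost:
lambda_per_second = 0.0000002083 * 10
lambda_month = lambda_per_second * HOURS_PER_MONTH * 3600  # ~ $5.47/month

print(f"N1 on-demand:    ${n1_on_demand:,.2f}")
print(f"N1 preemptible:  ${n1_preemptible:,.2f}")
print(f"Lambda, 128 MB:  ${lambda_month:,.2f}")
```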

Those tiny numbers throw us off our guard. The medical insurance and real estate bills may be crushing the budget, but when it comes to the cloud we can enjoy throwing money around like confetti. That’s because the prices for many cloud services are literally less than the cost of a piece of confetti.

Then the end of the month comes, and the cloud bill is much larger than anyone expected. How do those fractions of pennies add up so quickly?

Here are seven dark secrets of how the cloud companies turn fractions of cents into real money.

Retrieval fees

Amazon’s archival storage tier is priced seductively at $0.00099 per gigabyte, which works out to $1 per terabyte per month. It’s easy to imagine putting aside the backup tapes and the hassles for the simplicity of Amazon’s service.

But let’s say you want to actually look at that data. If you click through to a second tab on the price sheet, you can see the cost for retrieval is $0.02 per gigabyte. It’s 20 times more expensive to look at the data than to store it for a month. If a restaurant used this pricing model, they would charge you $2 for the steak dinner, but $40 for the silverware.
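Run the numbers on even a modest archive and the asymmetry jumps out. A minimal sketch, using the list prices above and a hypothetical 10 TB archive:

```python
# Storage vs. retrieval for a hypothetical 10 TB archive,
# using the list prices quoted above.

ARCHIVE_GB = 10_000          # 10 TB, illustrative
STORE_PER_GB = 0.00099       # per GB-month
RETRIEVE_PER_GB = 0.02       # per GB retrieved

monthly_storage = ARCHIVE_GB * STORE_PER_GB    # $9.90/month
one_retrieval = ARCHIVE_GB * RETRIEVE_PER_GB   # $200.00

print(f"Store for a month: ${monthly_storage:,.2f}")
print(f"Read it back once: ${one_retrieval:,.2f}")
print(f"Ratio: {one_retrieval / monthly_storage:.0f}x")
```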

Location matters

The same basic service can be priced at just $2.50 per month outside of China but jump to $7 per month in Hong Kong and $15 per month in mainland China.

It’s up to us to watch these prices and choose accordingly. We can’t pick data centers just because they seem more convenient or make ideal candidates for an inspection trip.
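If the price sheet is the deciding factor, the comparison is easy to automate. A minimal sketch, with hypothetical per-region prices standing in for a real price sheet:

```python
# Pick the cheapest region for a fixed workload.
# The prices and fleet size here are hypothetical stand-ins.

monthly_price = {
    "us-east": 2.50,
    "europe-west": 2.50,
    "hong-kong": 7.00,
    "china-mainland": 15.00,
}

INSTANCES = 40  # illustrative fleet size

for region, price in sorted(monthly_price.items(), key=lambda kv: kv[1]):
    yearly = price * INSTANCES * 12
    print(f"{region:16s} ${yearly:,.2f}/year for {INSTANCES} instances")

cheapest = min(monthly_price, key=monthly_price.get)
print(f"Cheapest region: {cheapest}")
```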

Sunk costs

The sunk cost fallacy – aka throwing good money after bad – is a big problem for gamblers, managers, and pretty much everyone but small children. The money we’ve spent is gone. It won’t ever come back. New spending, though, is something we can control.

It’s a bit different when you’re developing software. We often can’t be sure just how much memory or CPU a feature will require. We’re going to have to ratchet up the power of the machines some of the time. The real challenge is keeping our eye on the budget and controlling costs along the way. Just blithely adding a bit more CPU here or memory there is the path to a big bill at the end of the month.
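Keeping an eye on the budget doesn’t have to be elaborate. A minimal sketch of a month-end projection from month-to-date spend; the figures and the budget threshold are illustrative:

```python
# Project month-end spend from month-to-date spend and flag overruns.
# The spend figures and budget threshold are illustrative.

def projected_month_end(spend_to_date: float, day_of_month: int,
                        days_in_month: int = 30) -> float:
    """Naive linear run-rate projection."""
    return spend_to_date / day_of_month * days_in_month

budget = 5_000.00
projection = projected_month_end(spend_to_date=2_600.00, day_of_month=12)

print(f"Projected month-end spend: ${projection:,.2f}")
if projection > budget:
    print(f"Warning: projection exceeds the ${budget:,.2f} budget")
```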

Overhead

A cloud machine is not a machine per se, but a slice of a larger physical machine that’s been divided into N portions. The slices, though, aren’t powerful enough to handle the load on their own so we deploy tools like Kubernetes to keep N pieces working together. Why are we slicing a fat box into N pieces just to sew it back together? Why not just have the one fat machine handling one fat load?
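The slicing isn’t free, because every slice pays a fixed tax for its own OS and agents. A rough sketch of the arithmetic; the half-core-per-slice overhead is an assumed figure for illustration, not a measured one:

```python
# How much of a fat box survives being sliced into N virtual machines?
# The 0.5-core-per-slice overhead (guest OS, kubelet, agents) is an
# assumed figure for illustration, not a measured one.

BOX_CORES = 64
OVERHEAD_PER_SLICE = 0.5

for n_slices in (1, 8, 16, 32):
    usable = BOX_CORES - n_slices * OVERHEAD_PER_SLICE
    print(f"{n_slices:2d} slices: {usable:5.1f} of {BOX_CORES} cores doing "
          f"real work ({usable / BOX_CORES:.0%})")
```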

Cloud evangelists might say that people who ask impertinent questions like that don’t get the benefits of cloud. All of the extra layers and extra copies of the OS bring plenty of redundancy and flexibility. We should be grateful that all of these instances are booting and shutting down in an elaborate, orchestrated dance.

But the ease of recovery with Kubernetes encourages sloppy programming. A node failure isn’t a problem because the pod will sail on as Kubernetes replaces the instance. So we pay a bit more for all of the overhead to maintain the extra layers, thankful that we can just start a clean fresh machine without any of the cruft that seems to get in the way.

Cloud infinity

In the end, the tricky problem with cloud computing is that the best feature, its seemingly infinite ability to scale up to handle any demand, is also a budgetary minefield. Is each user going to average 10 gigabytes of egress or 20 gigabytes? Will each server need two gigabytes of RAM or four? When we start up the projects, it’s impossible to know.
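The width of that uncertainty is easy to demonstrate. A minimal sketch, with hypothetical per-user figures and unit prices, showing how far apart the best-case and worst-case bills can land:

```python
# Best-case vs. worst-case monthly bill when per-user estimates are
# uncertain. All figures and unit prices here are hypothetical.

USERS = 10_000
EGRESS_PER_GB = 0.09     # illustrative egress price
RAM_GB_MONTH = 5.00      # illustrative price per GB of RAM per month
SERVERS = 50

def bill(egress_gb_per_user: float, ram_gb_per_server: float) -> float:
    egress = USERS * egress_gb_per_user * EGRESS_PER_GB
    ram = SERVERS * ram_gb_per_server * RAM_GB_MONTH
    return egress + ram

low = bill(egress_gb_per_user=10, ram_gb_per_server=2)
high = bill(egress_gb_per_user=20, ram_gb_per_server=4)

print(f"Best case:  ${low:,.2f}/month")
print(f"Worst case: ${high:,.2f}/month  ({high / low:.1f}x)")
```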

The old solution of buying a fixed number of servers for a project may start to pinch when demand spikes, but at least the budget costs don’t skyrocket. The fans on the servers may whine from all of the load and the users may grouse about the slow response, but you’re not going to get a panicked call from the accounting team.

We can pencil out estimates, but no one really knows. Then the users show up and anything can happen. No one notices when the costs come in lower, but when the meter starts to spin faster and faster, the boss starts to pay attention. The deepest problem is that our bank accounts don’t scale like the cloud.

Copyright © 2020 IDG Communications, Inc.
