Expect a computational tidal wave by 2020.

By then, data traveling to and from connected devices will flood datacenters, outpacing the data flowing from datacenters to users by a factor of 275.

Yet there’s a wealth of computing power in underused devices that often lie dormant for long stretches of time, collectively hundreds of times as large as Amazon’s ubiquitous global cloud.

Fog computing, defined

For the uninitiated, fog computing extends centralized cloud datacenters to include existing computing devices, creating a network of low-latency computing and storage that can process data closer to where it is produced.
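To make the idea concrete, here is a minimal sketch of how a fog architecture might route a request to the nearest available node rather than a distant datacenter. The node names, latencies, and the `pick_node` helper are all hypothetical illustrations, not ActiveAether's actual routing logic.

```python
# Hypothetical sketch: in a fog network, a request is served by whichever
# participating device answers with the lowest latency, rather than always
# traveling to a centralized cloud datacenter.

def pick_node(nodes):
    """Return the node with the lowest round-trip latency (in milliseconds)."""
    return min(nodes, key=lambda n: n["latency_ms"])

# Illustrative latencies: a distant regional datacenter vs. nearby devices.
nodes = [
    {"name": "regional-cloud-dc", "latency_ms": 95},
    {"name": "neighborhood-server", "latency_ms": 12},
    {"name": "nearby-laptop", "latency_ms": 4},
]

print(pick_node(nodes)["name"])  # the closest device wins the request
```

The point of the sketch: the closer the compute sits to where data is produced, the shorter the round trip, which is the entire premise of fog computing.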

Not long ago, companies had little choice but to process and store network data in expensive, fully staffed on-premises server rooms.

In the mid-aughts, hosting providers realized they could rent services to customers by hosting all the expensive computational equipment, and handling its upkeep, themselves: in the cloud. The payment model charged customers only for what they used, dramatically increasing scalability and accelerating already-rapid adoption.

But centralization has a cost. Building massive datacenters where land and power are cheap, instead of near major population centers, means data often has to travel hundreds of miles before it's processed and sent back hundreds of miles more.

It's a problem, to say the least, as TechCrunch contributor Ben Dickson puts it: "The cloud paradigm is like having your brain command your limbs from miles away—it won't help you where you need quick reflexes."

The demand for lower latency across a broader swath of industries is already clear. Approximately 50 percent of cloud customers cite latency and performance concerns, according to IDC. And for IoT devices, often installed in remote locations at or beyond the edges of network connections, a third of businesses see a need for edge computing, according to Forrester analyst Jeffrey Hammond.

Meanwhile, a vast pool of computing resources on our servers, workstations, laptops, tablets, and even smartphones is simply waiting to be activated.

Compensating users for their computational resources, creating a fair market

Imagine harnessing the spare computing power in those existing devices. What if an architecture could tie those resources together to absorb the computational overload from 20 billion IoT devices?

Such an infrastructure can certainly be built (it's why we created ActiveAether). The bigger question is: how do you incentivize individuals to contribute to this global network of computing power?

By paying them.  

We believe the answer lies in designing a marketplace where host providers, software providers, and consumers can all contribute and receive computational resources according to their needs, aligned around a single currency.
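A marketplace like this can be sketched in a few lines. The matching rule below (cheapest host with enough spare capacity wins the job) is a simplified illustration under assumed names and numbers, not ActiveAether's actual market mechanism.

```python
# Hypothetical marketplace sketch: hosts offer spare compute at an asking
# price; a job is matched to the cheapest host that can actually run it.

def match(job, hosts):
    """Return the cheapest host with enough capacity for the job, or None."""
    candidates = [h for h in hosts if h["capacity"] >= job["size"]]
    if not candidates:
        return None
    return min(candidates, key=lambda h: h["price_per_unit"])

# Illustrative offers from three idle devices.
hosts = [
    {"id": "workstation-A", "capacity": 8,  "price_per_unit": 0.05},
    {"id": "laptop-B",      "capacity": 2,  "price_per_unit": 0.02},
    {"id": "server-C",      "capacity": 16, "price_per_unit": 0.04},
]

job = {"size": 4}
winner = match(job, hosts)
# laptop-B is cheapest but lacks capacity; server-C undercuts workstation-A.
```

Price competition among hosts is what keeps the market fair: a host that asks too much simply stops winning jobs.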

From there, the efficiencies of the free market take over, and together, individuals create a network that is faster and far more reliable than the centralized cloud.

So, as we look out to a world where billions of connected devices come online each year, let's not forget the critical question: do they have the right infrastructure in which to exist? Currently, no. But with fog computing, they soon could.

This article is published as part of the IDG Contributor Network.