In the era of digital transformation, omnichannel marketing, web-scale applications, and the Internet of Things (IoT), cost-effectively scaling the performance of existing applications is one of the most challenging issues facing enterprise architects and CTOs. In-memory data grids (IMDGs) meet this challenge, delivering massive speed and scalability gains without the need to rip and replace existing applications or data layers.

Until recently, the most common option for massively scaling applications was to purchase additional expensive hardware, over and over again. This severely limited the ability of most companies to aggressively pursue the major initiatives, such as digital transformation, that would most effectively increase their competitiveness. At best, the return on investment of these expensive projects was low.

Today, IMDGs offer a simple, cost-effective alternative. An IMDG is a cluster of servers that pools the available memory and CPU power of its nodes, distributes the dataset evenly across them, runs compute tasks in parallel on the nodes where the relevant data resides, and scales simply by adding new nodes to the cluster. Inserted between the application and data layers, the IMDG moves a copy of the disk-based data from RDBMS, NoSQL, or Hadoop databases into RAM, allowing processing to take place without the delays caused by continually reading and writing data to and from disk.
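The two core ideas above, hash-based data partitioning and running compute where the data lives, can be sketched in miniature. The class and method names below (ToyGrid, put, compute_on_key) are hypothetical illustrations, not the API of any specific IMDG product:

```python
# Illustrative sketch only: a toy "grid" showing how an IMDG partitions
# a dataset across nodes by key hash and runs compute colocated with data.
# All names here are hypothetical, not a real IMDG API.

class ToyNode:
    """One cluster node holding its share of the dataset in memory."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}  # this node's in-memory partition of the data

class ToyGrid:
    def __init__(self, num_nodes):
        self.nodes = [ToyNode(i) for i in range(num_nodes)]

    def _owner(self, key):
        # Hash-based partitioning: each key maps to exactly one node.
        return self.nodes[hash(key) % len(self.nodes)]

    def put(self, key, value):
        self._owner(key).store[key] = value

    def compute_on_key(self, key, fn):
        # Colocated compute: the function is "shipped" to the node that
        # owns the key, so the data itself never crosses the network.
        node = self._owner(key)
        return fn(node.store.get(key))

grid = ToyGrid(num_nodes=4)
grid.put("order:42", {"total": 99.50, "items": 3})
avg_item_price = grid.compute_on_key("order:42",
                                     lambda o: o["total"] / o["items"])
```

A real IMDG adds replication, failover, and partition rebalancing on top of this basic owner-per-key scheme, but the locality principle is the same: move the computation to the data, not the data to the computation.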

In a public or private cloud environment, nodes can be added to or removed from the IMDG cluster as needed, providing maximum flexibility and cost-effective scaling. Some IMDGs also support ANSI-99 SQL and full ACID transactions, advanced security, stream processing, machine learning, and Spark and Hadoop acceleration.
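The elastic scaling described above depends on the grid reassigning key ownership cheaply when membership changes. One common scheme for this is rendezvous (highest-random-weight) hashing, sketched below; the node names and helper functions are hypothetical, and production IMDGs use their own partition-assignment algorithms, but the property shown, that only a fraction of keys move when a node joins, is the one that matters:

```python
# Illustrative sketch only: rendezvous (highest-random-weight) hashing,
# one way a grid can reassign key ownership when nodes join or leave.
import hashlib

def _weight(key, node):
    # Deterministic pseudo-random weight for a (key, node) pair.
    h = hashlib.sha256(f"{key}:{node}".encode()).hexdigest()
    return int(h, 16)

def owner(key, nodes):
    # The node with the highest weight for this key owns it.
    return max(nodes, key=lambda n: _weight(key, n))

nodes = ["node-a", "node-b", "node-c"]
keys = [f"user:{i}" for i in range(1000)]
before = {k: owner(k, nodes) for k in keys}

# Scale out: add a fourth node and recompute ownership.
nodes.append("node-d")
after = {k: owner(k, nodes) for k in keys}

# Only keys now won by node-d relocate; everything else stays put,
# so roughly a quarter of the keys move rather than all of them.
moved = sum(1 for k in keys if before[k] != after[k])
```

This minimal-movement property is what lets a cloud-hosted grid grow or shrink without a disruptive full reshuffle of the in-memory dataset.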