Despite all the advances we’ve seen in data processing and database technology, there is no escaping data’s Public Enemy No. 1: latency, the time delay before a response is generated and returned. Even Gartner’s vision of the zero-latency enterprise acknowledges that latency can never actually be zero, because computers need time to “think.”

While you may never truly achieve zero latency, the goal is always to deliver information in the shortest time possible, so ensuring predictable, low-latency processing is key when building a real-time application. Often the hardest part, though, is identifying the sources of latency in your application and then eliminating them. Where you can’t remove a source entirely, there are steps you can take to reduce or manage its impact.
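
Before you can eliminate a latency source, you have to see it, so the first step is usually instrumentation. Below is a minimal Python sketch of one way to collect per-call latency percentiles; the `handle_request` handler and `payload` in the usage comment are hypothetical stand-ins for your own code, not part of any particular library.

```python
import time
import statistics

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

def profile(fn, *args, runs=1000, **kwargs):
    """Collect per-call latencies so outliers stand out, not just the mean."""
    samples = []
    for _ in range(runs):
        _, ms = timed(fn, *args, **kwargs)
        samples.append(ms)
    samples.sort()
    return {
        "p50": statistics.median(samples),
        "p99": samples[min(len(samples) - 1, int(0.99 * len(samples)))],
        "max": samples[-1],
    }

# Example: wrap a hypothetical request handler to see where the time goes.
# print(profile(handle_request, payload))
```

Reporting p99 and max alongside the median matters here: a real-time application is judged by its worst cases, and a healthy-looking average can hide long tail latencies.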

Before, during, and after computing the response, there are a number of areas that can add unwanted latency. Below are some common sources and tips for minimizing their impact.

Network I/O

Most applications use the network in some manner, whether between the client application and the server or between server-side processes and applications. The important thing to know here is that distance matters: the closer your client is to the server, the lower the network latency. For instance, a round trip between nodes within the same datacenter can cost around 500 microseconds, while a round trip between nodes in California and New York can add roughly 50 milliseconds on top of that.
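
To see this effect for yourself, you can time TCP connection handshakes, each of which costs roughly one network round trip. The sketch below is one way to do that in Python; the hostnames in the usage comment are placeholders, not real endpoints.

```python
import socket
import time

def tcp_rtt_ms(host, port=443, attempts=5):
    """Estimate round-trip latency by timing TCP connection handshakes.
    connect() returns after roughly one round trip (SYN, then SYN-ACK),
    so repeated handshakes give a coarse RTT estimate without needing
    ICMP (ping) access."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # handshake complete; we only care about the timing
        samples.append((time.perf_counter() - start) * 1000)
    return min(samples)  # the minimum filters out scheduling noise

# Compare a nearby server with a distant one to watch distance matter:
# print(tcp_rtt_ms("nearby.example.com"), tcp_rtt_ms("faraway.example.com"))
```

Taking the minimum of several attempts, rather than the average, is deliberate: the fastest observed handshake is the closest you can get to the raw propagation delay, while slower samples fold in queuing and scheduling noise.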