If we’ve learned anything in the technology business over the last 25 years, it is never to underestimate the Linux kernel. Why, then, have so many networking companies been so eager to bypass the Linux kernel, or more specifically, the Linux kernel networking stack? What could be so wrong with the packet arteries of the Linux kernel that motivates so many of us to bypass them?

There are two main reasons. First, the kernel networking stack is too slow, and the problem is only getting worse as servers and switches adopt higher-speed networking (10GbE, 25GbE, and 40GbE today, rising to 50GbE and 100GbE in the near future). Second, handling networking outside the kernel makes it possible to plug in new technology without changing core Linux kernel code.

For those two reasons, and with the added advantage that many kernel bypass technologies are open source, defined by standards bodies, or both, proponents of bypass solutions continue to push data center operators to adopt them.

Kernel bypass solutions

We have seen many kernel bypass solutions over the years, most notably RDMA (Remote Direct Memory Access), TOE (TCP Offload Engine), and OpenOnload. More recently, DPDK (Data Plane Development Kit) has been used in some applications to bypass the kernel, and emerging initiatives such as FD.io (Fast Data Input/Output), built on VPP (Vector Packet Processing), follow the same approach. More will likely emerge in the future.
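To make the bypass idea concrete, here is a minimal, illustrative sketch of the pattern using DPDK's poll-mode receive API. It is not a tuned or complete application: the port number, pool sizing, and single-queue setup are assumptions for illustration, and error handling and real packet processing are omitted.

```c
/* Illustrative kernel-bypass sketch using DPDK's poll-mode API.
 * The application polls the NIC receive queue from user space in a
 * busy loop, with no per-packet system calls or kernel stack traversal.
 * Port 0, one RX/TX queue, and the pool sizes are illustrative assumptions. */
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 1024
#define NUM_MBUFS    8191
#define MBUF_CACHE   250
#define BURST_SIZE   32

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer (hugepages, cores, PCI). */
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    /* Pool of packet buffers shared between the NIC and the application. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS,
        MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    uint16_t port = 0;                     /* assume the first DPDK-bound port */
    struct rte_eth_conf port_conf = {0};

    rte_eth_dev_configure(port, 1, 1, &port_conf);   /* 1 RX queue, 1 TX queue */
    rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE, rte_eth_dev_socket_id(port),
                           NULL, pool);
    rte_eth_tx_queue_setup(port, 0, RX_RING_SIZE, rte_eth_dev_socket_id(port),
                           NULL);
    rte_eth_dev_start(port);

    /* Busy-poll receive loop: packets go straight from the NIC into
     * user-space buffers, bypassing the kernel networking stack. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* ... process the packet in user space ... */
            rte_pktmbuf_free(bufs[i]);
        }
    }
    return 0;
}
```

The key point of the sketch is the busy-poll loop: the NIC's receive ring is mapped into the application's address space, so packets arrive in user-space buffers without interrupts or system calls on the data path, which is exactly what these bypass technologies trade kernel generality for.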
