IDG Contributor Network: Are you really building a serverless system?


Serverless computing (or simply “serverless”) is an up-and-coming architecture choice in software development organizations. There are two flavors of serverless computing: back end as a service, which provides app developers with APIs for common services such as push notifications, and function as a service (FaaS), which allows developers to deploy code (“functions”) in the cloud that executes in response to events, without having to worry about its execution environment, be it servers or containers.

When developers use FaaS, the cloud provider is responsible for automatically deploying the function and scaling it as the workload changes. In this article, I focus on the FaaS flavor of serverless and use the “FaaS” and “serverless” terms interchangeably.
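To make the FaaS model concrete, here is a minimal handler sketch. It follows AWS Lambda's Python conventions (a `handler(event, context)` entry point), which are an assumption for illustration; other providers use similar but not identical signatures.

```python
# Minimal FaaS-style handler sketch (AWS Lambda conventions assumed:
# the provider invokes handler(event, context) once per incoming event).
import json

def handler(event, context=None):
    # The cloud provider deploys this function, runs it in response to
    # events, and scales the number of concurrent executions with the
    # workload; there is no server process for you to manage.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Note that the function itself contains no code for provisioning, routing, or scaling; that is exactly the operational surface FaaS takes off your hands.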

Serverless proponents claim that it makes the development and deployment of software easier, faster, and cheaper than ever. Serverless architectures enable development teams to focus on business value, rather than operational overhead. But sometimes, in the rush to adopt a promising new architecture, organizations lose track of the goals of serverless architectures—a common problem with architectural paradigm shifts. As a result, the systems they build don’t quite deliver on the benefits or are more challenging to maintain than they should be. Just because your code implements a FaaS interface doesn’t mean you are doing serverless right.

Before I dive into some ways serverless implementations go wrong, let’s take a step back and ask what you are trying to achieve by using FaaS. What is really your goal here?

The answer: handling events as they arrive, rather than on arbitrary timelines. This is also known as event-driven design. You are using events to drive the functions, with events serving as the primary mechanism both for sending notifications and for sharing data. By designing an architecture that takes full advantage of the serverless framework, you are also taking your first steps toward an event-driven stream processing architecture.

Antipattern No. 2: Dependency on a database

Serverless functions are almost always stateless (with some interesting exceptions). You can’t reliably maintain any sort of state in the application itself between invocations. This means that any nontrivial function will rely on an external database to maintain state.
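The following sketch shows why in-process state is unreliable in FaaS. The `store` dict stands in for an external database or cache; it is a local simulation, not a real client, and `make_fresh_instance` simulates the provider spinning up a new, empty execution environment.

```python
# In-memory state looks like it works, but each invocation may land on a
# fresh instance, so the state silently resets.
def make_fresh_instance():
    # Simulates the provider creating a new execution environment.
    state = {"count": 0}
    def handler_in_memory(event):
        state["count"] += 1  # lost whenever this instance is recycled
        return state["count"]
    return handler_in_memory

def handler_external(event, store):
    # State survives because it lives outside the function, in `store`
    # (a stand-in here for an external database or cache).
    store["count"] = store.get("count", 0) + 1
    return store["count"]

# Two "cold starts": the in-memory counter restarts from 1 each time...
assert make_fresh_instance()({}) == 1
assert make_fresh_instance()({}) == 1
# ...while the externalized counter keeps counting across instances.
store = {}
assert handler_external({}, store) == 1
assert handler_external({}, store) == 2
```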

Relying on a fully managed data service is a good choice, assuming that there exists one that can handle the data you need, with APIs that work for you and elastic scaling. For example, if your function can read and write data to a managed service rather than to a database you operate yourself, that is a good idea. Using the pub/sub features of Apache Kafka, which scales as well as FaaS does, is also a good solution, provided that someone else is running it for you.

But keep in mind that if your application is unique enough that you need to write your own data layer service to act as a connection pool and data access layer for your database, serverless is probably the wrong architecture. By adding this service, you recreate the same operational complexity and fixed costs that you hoped to avoid by going serverless.

Antipattern No. 3: On-premises serverless

There are many on-premises serverless frameworks out there: OpenFaaS, Kubeless, OpenWhisk, Knative, and probably more. Using one is justified if you need to run the same applications in your own data center as in the cloud. But be aware that on-premises serverless has none of the benefits (operational simplicity, cost) that serverless promises.

No matter which framework you use, if you are running serverless in your own data center, you are paying for the infrastructure whether you use it or not. Your own operations team is running the infrastructure, monitoring it, maintaining it, scaling it, and planning its capacity. There are no cost savings, and operationally it is probably more complex than the alternatives, because your application runs on top of a serverless framework, which runs on top of your container orchestration framework, which runs on containers on top of your operating system. You could simplify a lot by stripping out a layer or two.


I recommend running on-premises serverless only if the benefits of using the same serverless architecture for both on-premises and in public cloud outweigh the pains of running an additional framework that isn’t necessary for the on-premises deployment.

There are strong benefits to building serverless architectures. Done right, they let your team focus on business value rather than on operations, and in the right use cases they can represent significant cost savings. Serverless seems particularly successful for simple, stateless applications: simple ETL transformations, serving static web content, handling simple business events. When the requirements become more complex, it is important to keep the big picture in mind and check whether the architecture as a whole still delivers the promised benefits. Just because you implemented a FaaS interface doesn’t mean you built a serverless architecture.

This article is published as part of the IDG Contributor Network.