Serverless computing is all the rage right now, and for several good reasons:
- It frees you from provisioning servers yourself; you simply write functions, and the resources each function needs are allocated automatically.
- You pay only for the resources you use. No more leaving servers up and running, then getting a big cloud bill at the end of the month.
- It can automatically scale, determining what cloud services need to scale with demand and then making it happen.
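To make the "you simply write functions" point concrete, here is a minimal sketch modeled on the AWS Lambda Python handler convention (an event dictionary in, a response dictionary out). The event fields and greeting logic are hypothetical illustrations, not from any real deployment:

```python
import json

def handler(event, context=None):
    # The platform invokes this function on demand; there is no server
    # for you to provision, patch, or keep running between requests.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for testing; in the cloud, the provider calls handler()
# in response to an HTTP request, queue message, or other trigger.
response = handler({"name": "serverless"})
print(response["body"])
```

You pay only for the milliseconds this function actually runs, and the platform spins up more concurrent copies as demand grows.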
Amazon Web Services’ Lambda and Microsoft’s Azure Functions are the best-known examples of serverless computing, and both have existed for a few years. Still, although serverless has delivered some great successes, other areas need work.
It’s not the technology that’s falling short with serverless computing, but its use. You can’t blame this one on AWS or Microsoft; enterprise development shops simply picked the wrong applications for serverless computing.
For example, while new applications are a good fit for serverless computing, old applications often are not, and migrating those old applications to serverless could be more work than enterprises bargained for. One big reason is that serverless platforms don’t support all programming languages.
Even if the languages are supported, it takes a huge effort to refactor an application to properly take advantage of serverless computing. That refactoring means redoing the application as sets of functions, which essentially means developing the application anew. The advantages of serverless may not justify that effort, especially if it delays new applications that could add significant business value in order to rework old applications that, while not optimal, provide sufficient value as is.
This high refactoring effort in no way takes anything away from the potential of serverless computing. It simply reinforces the fact that you need to prioritize where you spend your development resources. I’ve seen too many development shops, drawn by the promise of serverless computing, underestimate the effort to make old applications serverless, thus delaying efforts that make better business sense for their organizations.
Enterprise development shops have made mistakes like this before, such as the “everything must be a container” crowd, who quickly found out that’s not the case. For whatever reason, the IT community keeps forgetting these past errors in judgment when an enticing new technology comes along.
The inability to correctly apply technology seems to be in our genes, a human frailty that keeps repeating itself. Maybe one day we’ll really understand the saying “Those who fail to learn from history are doomed to repeat it.”