The term “serverless computing” has been bandied about a lot of late, and is often seen simply as a super-efficient way to make use of cloud resources, but there is much more to it than that. The concept has existed for a decade or so, with a long-forgotten company pioneering the process. Unfortunately, it didn’t work out for that pioneer, which subsequently folded. However, as with everything, what is old is new again.
Even so, the best way to think about serverless computing is as the latest step in the ongoing abstraction of services away from the virtual or physical infrastructure, even though that infrastructure obviously still has to be present. Alternatively, you can think of serverless as “the code, and just the code”, because it removes the dependencies on, and the patching of, the underlying host. Developers need not concern themselves with the maintenance, security and patching of the hosts where the code runs; there will still be a systems administrator managing and patching those hosts, but that responsibility sits with the service provider rather than the development team.
Defining serverless computing
The term serverless computing has one of two meanings. Some people use it to refer (incorrectly) to containers, microservices and other abstractions; others (more correctly) use it to refer to function as a service (FaaS).
Serverless computing is primarily designed for short-lived interactions, and a lot of serverless computing providers will terminate threads that last in excess of five minutes. This makes serverless infrastructure a very poor option for a classic web server.
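To make the model concrete, a FaaS workload is typically a single handler function that is invoked once per event and exits as soon as its work is done. Below is a minimal sketch in the style of AWS Lambda’s Python runtime; the event payload and the greeting logic are invented for illustration:

```python
import json

# A minimal FaaS-style handler: invoked per event, does its work,
# returns, and keeps no long-running process alive between calls.
def handler(event, context=None):
    """Handle one short-lived invocation and return an HTTP-shaped response."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because each invocation is independent and short-lived, the platform can start, run and discard instances of this function as demand dictates.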
What happens with a serverless framework is that you reduce everything down to code and application programming interface (API) calls, deconstructing the application into the components that make it up. For example, rather than building your own application authentication servers, running them and relying on them to authenticate users, you could use a third-party provider to perform the function.
The same can be said for other components. The third party would then provide the function through API calls and messaging between the application and its authentication system.
Each authentication instance would run separately, run for the duration of the authentication process and then finish, typically living less than a few seconds.
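As a sketch of what such a short-lived authentication function might look like, the snippet below validates one token per invocation and keeps no state between calls. The `verify_fn` parameter stands in for the API call to a hypothetical third-party identity provider; in a real deployment it would be an HTTPS request to that provider:

```python
# Sketch of a stateless, per-request authentication step. Each call
# handles exactly one token, returns a result, and then finishes.
def authenticate(token, verify_fn):
    """Authenticate a single request via an external verifier, then exit."""
    if not token:
        return {"authenticated": False, "reason": "missing token"}
    claims = verify_fn(token)  # stands in for a call to the provider's API
    if claims is None:
        return {"authenticated": False, "reason": "invalid token"}
    return {"authenticated": True, "user": claims.get("sub")}
```

Nothing here survives between invocations, which is precisely what lets the provider spin instances up and tear them down within seconds.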
This may sound like an extremely expensive way of doing things, but as each API call costs somewhere in the region of thousandths of a cent, it becomes very inexpensive.
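The arithmetic behind that claim is straightforward. Taking the article’s ballpark rate of a few thousandths of a cent per call (an illustrative figure, not any provider’s actual price list):

```python
# Back-of-the-envelope cost of per-call billing. All figures are
# illustrative assumptions, not real provider pricing.
def monthly_call_cost(calls_per_month, dollars_per_call):
    """Total monthly spend when billed per individual call."""
    return calls_per_month * dollars_per_call

# Two million calls at $0.00002 each (two thousandths of a cent):
cost = monthly_call_cost(2_000_000, 0.00002)  # about $40 a month
```

Even at web scale, per-call pricing keeps the bill proportional to actual usage rather than to provisioned capacity.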
Perhaps more importantly, it becomes fairly simple to do horizontal elastic scaling, because each function instance is stateless – at the risk of stating the obvious, any persistent data needs to be stored somewhere else, such as a managed database or object store.
Pay for what you use
Cutting the cost of the infrastructure reduces the overall expense of providing your web-scale application. There has to be some sort of billing mechanism, and with serverless computing it charges per application call and truly becomes a pay-for-what-you-consume model.
The billing also covers the cost of the hardware you consume, so the term “serverless” is a bit misleading. It can, however, work out vastly cheaper than having an excess of hardware just waiting to be used. Whether or not it is used, someone has to pay for that hardware – the same is not true for serverless architecture.
The best way to describe the approach is “outsourcing the mundane” while keeping the key business logic in-house and away from third parties.
To put the cost issue into simple terms, using an Amazon Web Services virtual machine (VM) will cost you multiple tens of dollars per VM per month. Doing the same in serverless computing, based on reasonable usage, could end up being around two-thirds cheaper. Mileage varies dramatically, so keep in mind that there is no specific right or wrong that suits all.
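A rough worked example shows how such an estimate can arise. The figures below are assumptions chosen to match the article’s ballpark numbers, not quoted prices from any provider:

```python
# Rough comparison of always-on VMs against per-call billing.
# Both sets of figures are assumptions for illustration only.
def vm_monthly_cost(vm_count, dollars_per_vm):
    """Monthly cost of VMs that are billed whether used or not."""
    return vm_count * dollars_per_vm

def serverless_monthly_cost(calls, dollars_per_call):
    """Monthly cost when billed only per call actually made."""
    return calls * dollars_per_call

vm = vm_monthly_cost(2, 30.0)                        # $60 for two always-on VMs
faas = serverless_monthly_cost(1_000_000, 0.00002)   # ~$20 for a million calls
saving = 1 - faas / vm                               # roughly two-thirds cheaper
```

Change the call volume or the per-call rate and the saving shifts dramatically, which is exactly why mileage varies and the sums have to be done case by case.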
That’s the upside. The downside is that because everything is done using API calls, there is a heavy dependency on ensuring the programs and calls are all correct.
Cost considerations for serverless deployments
While this all sounds like a great way to reduce costs, there are a few things that need to be taken into account before making the switch to serverless computing.
For example, switching to a serverless stack will require a lot of engineering time and talent, as you are re-architecting the application from the ground up, and that is no small undertaking for anything more than the most basic application.
On this point, implementing serverless may require enterprises to adopt new tools and techniques, as well as developers who know how to properly and efficiently use them – and they don’t come cheap.
It is also worth noting that only a subset of languages are supported in serverless. If you are thinking of making the jump, it will be important to look at which ones your chosen FaaS provider plays nicely with. AWS Lambda, for example, supports Go, Node.js, Java, C# and Python.
Serverless computing requires an extreme amount of testing prior to go-live, and debugging is inherently more difficult. It is also not suited to all usage types.
It is fair to say that a serverless approach would be seen as overkill for smaller companies. The key is to do the maths, including costs for everything from testing and redesign to, not least, developer skillset.
Serverless computing may be worthwhile for some companies, but it won’t make sense for everyone. It depends on whether or not the volume of traffic and resources warrants such an approach. For those doing auto-scaling at web scale, it may well help, but the journey to it will be neither cheap nor easy, even if it can save the organisation money in the long term.
In short, it really is a case of doing the sums before jumping on this one.
Read more about serverless computing
- Cloud industry price wars could enter new phase, as demand for serverless computing services grows, suggests 451 Research.
- The reality of cloud is failing to live up to the hype for some enterprises, but serverless computing could help firms attain the operational benefits they’re looking for.