The term “serverless computing” has been bandied about a lot of late, and is seen as a super-efficient way to make use of cloud resources, but there is much more to it than that.
The concept has actually existed for a decade or so, with a long-forgotten company pioneering the process. Unfortunately, it didn’t work out for them and they subsequently folded. However, as with everything, what is old is new again.
Even so, the best way to think about serverless computing is as the latest step in the ongoing abstraction of services away from the virtual or physical infrastructure, even though the underlying servers obviously still have to exist.
Alternatively, you can think of serverless as “the code, and just the code”, as it removes the need to worry about dependencies and patching on the underlying host.
As such, developers need not concern themselves with the maintenance, security and patching of the hosts where the code runs; the provider’s systems administrators manage and patch those hosts instead.
When the term “serverless” computing is used, it has one of two meanings. Some people use it (incorrectly) to refer to containers, microservices and other abstractions; more correctly, it refers to function as a service (FaaS).
Serverless computing is primarily designed for short-lived interactions. In fact, many serverless computing providers will terminate functions that run in excess of five minutes, which obviously makes serverless a very poor option for a classic, long-running web server.
What happens with serverless is that you reduce everything down to code and API calls, deconstructing the application into the components that make it up. For example, rather than building, running and relying on your own authentication servers, you could use a third-party provider to perform that function.
The same can be said for other components. The third party would provide the same function via API calls and messaging between the application and its authentication system. Each authentication instance would run separately, run for the duration of the authentication process and then finish, typically living for less than a few seconds.
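The shape of such a function can be sketched in a few lines of Python. This is purely illustrative: the event format, the `check_token` helper and the in-memory token store are all assumptions standing in for a real provider’s API and a real identity service, but the pattern is the same: receive an event, do one short-lived stateless task, return a result and exit.

```python
import json

# Stand-in for a real identity store; in practice this would be an
# external service reached via an API call.
VALID_TOKENS = {"abc123": "alice"}

def check_token(token):
    """Return the user for a token, or None. Runs for milliseconds and holds no state."""
    return VALID_TOKENS.get(token)

def handler(event, context=None):
    """A FaaS-style entry point: one authentication check per invocation."""
    user = check_token(event.get("token", ""))
    if user is None:
        return {"statusCode": 401, "body": json.dumps({"error": "unauthorised"})}
    return {"statusCode": 200, "body": json.dumps({"user": user})}
```

Each invocation of `handler` is independent, which is what lets the provider start and stop instances freely.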
You may be thinking this sounds like an extremely expensive way of doing things, but when you consider that each API call costs somewhere in the region of thousandths of a cent, it becomes very inexpensive.
Perhaps more importantly, horizontal elastic scaling becomes fairly simple, because the functions themselves hold no state; at the risk of stating the obvious, any persistent data needs to be stored somewhere else, such as a managed database.
Also, cutting the cost of the infrastructure reduces the overall expense of providing your web-scale application. Obviously there has to be some sort of billing mechanism. With serverless computing, it charges per application call. It truly becomes a pay-for-what-you-consume model.
The billing also covers the cost of the hardware you consume, so the term “serverless” is a bit misleading. It can, however, work out vastly cheaper than having an excess of hardware just waiting to be used. Whether that hardware is used or not, someone still has to pay for it. The same is not true for serverless architecture.
The best way to describe the approach is “outsourcing the mundane” whilst keeping the key business logic in-house and away from third parties.
To put the cost issue into simple terms, using an Amazon Web Services virtual machine (VM) will cost you multiple tens of dollars per VM per month. Doing the same in serverless computing, based on reasonable usage, could end up being around two-thirds cheaper. Obviously, mileage varies dramatically, so there is no specific right or wrong that suits all.
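A back-of-envelope calculation shows how the comparison works. The figures below are illustrative assumptions, not real price-list numbers: a small always-on VM at around $30 per month against per-invocation pricing of a couple of dollars per million calls.

```python
# Illustrative figures only -- check your provider's actual price list.
VM_MONTHLY_USD = 30.0               # assumed cost of a small always-on VM
COST_PER_MILLION_CALLS_USD = 2.0    # assumed all-in FaaS price per million calls

def serverless_monthly_cost(calls_per_month):
    """Monthly serverless bill under the assumed per-call pricing."""
    return calls_per_month / 1_000_000 * COST_PER_MILLION_CALLS_USD
```

At five million calls a month, the serverless bill comes to $10 against $30 for the VM, i.e. roughly two-thirds cheaper; at sufficiently high volumes, the always-on VM wins instead, which is why the sums have to be done case by case.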
That’s the upside. The downside is that, because everything is done using API calls, there is a heavy dependency on ensuring the programs and calls are all correct.
Cost considerations for serverless deployments
Should you be thinking that this all sounds like a super way to reduce costs, there are a few things that still need to be taken into account before making the switch to serverless.
For example, switching to a serverless stack will require a lot of engineering time and talent, as you are re-architecting the application from the ground up, and that is no small undertaking for anything more than the most basic application.
On this point, implementing serverless may require enterprises to adopt new tools and techniques, as well as developers who know how to properly and efficiently use them, and they don’t come cheap.
It is also worth noting that only a subset of languages is supported in serverless, and if you are thinking of making the jump, it will be important to look at which ones your chosen FaaS provider plays nicely with. AWS Lambda, for instance, supports Go, Node.js, Java, C# and Python.
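For Python, the entry point a FaaS provider expects is typically just a plain function. The `(event, context)` signature below matches the Lambda-style Python programming model, though the event fields used here are made up for illustration:

```python
# A minimal Lambda-style Python entry point. The provider supplies the
# event (request data) and context (runtime metadata); this sketch only
# uses an illustrative "name" field from the event.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}
```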
Serverless computing requires extensive testing prior to go-live, and debugging is inherently more difficult. It is also not suited to all usage types.
It is fair to say that, for smaller companies, serverless would very much be seen as overkill. The key is to do the maths, including the costs of everything: testing, redesign and, not least, developer skills.
For certain companies it can be worthwhile, but it won’t make sense for everyone, because the volume of traffic and resources used may not warrant such an approach.
If you are doing auto-scaling at web scale, it may well help, but the journey to it will be neither cheap nor easy, even if it can save the organisation money in the long term. In short, it really is a case of doing the sums before jumping on this one.