
Serverless: Weighing up the pros and cons for enterprises

The concept of serverless computing has been gaining traction within enterprises for several years now, but what is it and how do CIOs know if it is the right fit for their applications?


Serverless computing or serverless architecture has become one of the big trends in enterprise IT over the past several years. Along with other cloud-native development models, it has grown in importance in parallel with the increasing uptake of public cloud services over the past decade or so.

It is easy to see the attraction of serverless, since it holds out the promise of simply running code without having to worry about the underlying infrastructure, and paying only for the resources you use.

But every technology has its pros and cons, and serverless applications have pitfalls that developers need to be aware of in order to work around them.

What is serverless?

Serverless is something of a misnomer, since it does not remove the need for servers. It simply means the end user does not have to provision or manage the servers on which the code runs, as all that complexity is handled by the platform itself.

This may sound a lot like a platform as a service (PaaS), such as Red Hat’s OpenShift or Google App Engine, but in general, these are more traditional developer environments. The developer has much greater control over the deployment environment in a PaaS, including how the application scales, whereas in serverless scaling tends to be automatic.

The term ‘serverless’ can therefore apply to a range of services, notably AWS Lambda on Amazon’s cloud, which can be credited with bringing serverless computing to wider public attention when it launched back in 2014.

Lambda allows developers to create code that runs in response to an event or trigger, such as a new object being uploaded to a bucket in Amazon’s Simple Storage Service (S3), and performs some required function with it. For this reason, Lambda is often referred to as a function-as-a-service (FaaS) platform.
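To make this concrete, here is a minimal sketch of what such an event-triggered function might look like in Python. The processing logic is hypothetical; the event shape follows the standard S3 notification format.

```python
# Minimal sketch of a Lambda handler triggered by an S3 upload.
# The actual work done per object is a placeholder.
import urllib.parse

def handler(event, context):
    # An S3 notification can batch several records into one event
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in the notification payload
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object uploaded: s3://{bucket}/{key}")
        # ... perform the required work here, e.g. generate a thumbnail ...
    return {"status": "ok", "processed": len(event["Records"])}
```

The developer writes only this function; the platform decides when and where it runs.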

Under the hood, services such as Lambda typically use containers to host the user code, but the platform handles everything from the spawning of the containers to their retirement once their work is done.

However, other services can be regarded as serverless if they meet the criteria of scaling automatically as needed and not requiring the user to manage the underlying infrastructure. This includes many application programming interface-driven (API-driven) services provided by the major cloud platforms. It is therefore easy to see how entire applications could be constructed by linking serverless elements developed by the end user with functions and services provided by the cloud platform.

This kind of architecture can also be regarded as an example of microservices, whereby applications are implemented as a collection of loosely coupled services that communicate with each other.

The benefit of implementing an application this way is that key parts of the overall solution can be scaled up independently as required, rather than an entire monolithic application having to be scaled up. The independent parts can also be patched and updated separately from each other.

Weighing up the pros and cons of serverless

The advantages of a serverless architecture are therefore fairly easy to see: it removes the need for the developer to worry about provisioning resources, thereby improving productivity; the user pays only for the resources used when their code is actually running; and scaling should be handled automatically by the platform.

According to an IDC report, serverless platforms offer “a simplified programming model that completely abstracts infrastructure and application life-cycle tasks to concentrate developer efforts on directly improving business outcomes”. IDC projected that the higher productivity and lower costs of using serverless could result in an average return on investment of 409% over five years.

As with any architectural choice, there are downsides, and any developer or organisation considering using serverless should be aware of these before taking the plunge.

Perhaps the most obvious downside, if it can be considered one, is loss of control. With traditional software approaches, the user typically has control over some or all of the application environment, from the hardware used to the software stack supporting their application or service.

This loss of control can range from the trivial, such as serverless functions typically having few parameters that allow them to be tweaked to meet exact requirements, to having little or no control over the application’s performance. The latter is a problem reported by some serverless developers, who say processing times can differ wildly between runs, possibly because the code may be deployed to a different server, with different specifications, the next time it is executed.

A well-documented performance issue with some serverless platforms is the cold start: the time it takes to bring up a new container with a particular function in it when no instance is currently live. Containers are typically kept “alive” for a period of time, in case the function is needed again, but are eventually retired if not invoked. There are ways around this, with some developers coding a scheduled event function that regularly invokes any time-critical functions to ensure they stay live.
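A hedged sketch of that “keep warm” pattern, written in Python with boto3, might look like the following. The function names are placeholders, and the scheduled trigger (such as a rule firing every few minutes) is assumed to be configured separately.

```python
# Sketch of a scheduled "warmer" function that pings time-critical
# functions so their containers stay live and avoid cold starts.
import json
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical names of functions that must respond without cold starts
CRITICAL_FUNCTIONS = ["checkout-handler", "payment-webhook"]

def handler(event, context):
    for name in CRITICAL_FUNCTIONS:
        # Fire-and-forget invocation with a marker payload that the
        # target recognises and returns from immediately, doing no work
        lambda_client.invoke(
            FunctionName=name,
            InvocationType="Event",
            Payload=json.dumps({"warmup": True}),
        )
```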

Another issue that might be seen as loss of control is that age-old concern of supplier lock-in. Many serverless application environments are unique to the cloud platform that supports them, and organisations may find that changing hosts to a different cloud would require substantial rewriting of the application code.

“We still have the ‘over a barrel’ problem of one supplier’s serverless not being the same as that from another supplier,” says independent analyst Clive Longbottom.

However, Longbottom adds that as containers and Kubernetes become more prevalent, it is to be hoped that portability across serverless environments will emerge at a highly functional level, or that add-on capabilities for orchestration systems will be able to adapt a workload to run on a different serverless environment with little rejigging, possibly even in real time as and when required.

Keeping tabs on serverless deployments

Cost is also a potential downside to serverless applications. This might seem paradoxical, given that we have already mentioned that serverless applications only incur charges for the time that the code is actually running.

However, this behaviour can make it difficult for an end user to accurately forecast how much it is going to cost to operate serverless applications and services at a level that will deliver an acceptable quality of service. If there were an unexpected spike in demand, for example, then the auto-scaling nature of the serverless platform could lead to more resources being used than were anticipated.

“Where the overall use of serverless is based purely on resource usage and it is not clear as to what the workload will use, then the resulting cost may be surprising when the bill comes through,” says Longbottom.

However, if the workload is well understood and serverless is just being used for simplicity, then there should be no such surprises, he says: “If the serverless provider allows for tiering in resource costs, then that can also be a way of control – particularly if they allow the user to apply ceilings to costs as well.”
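On AWS, for instance, there is no hard spend ceiling for Lambda itself, but reserved concurrency can cap how far a function scales, which indirectly bounds resource consumption. A minimal sketch, with a placeholder function name:

```python
# Sketch: capping how far a function can auto-scale, as one way to
# bound unexpected costs. "order-processor" is a placeholder name.
import boto3

lambda_client = boto3.client("lambda")

# Limit the function to at most 50 concurrent executions; a demand
# spike beyond that is throttled rather than consuming unbounded
# resources
lambda_client.put_function_concurrency(
    FunctionName="order-processor",
    ReservedConcurrentExecutions=50,
)
```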

More downsides to serverless lie in the testing and monitoring of such applications. With traditional application models, developers often have locally installed versions of the software components their code links to, such as a database, allowing them to test and verify that the code works before it is uploaded to the production environment. Because serverless applications may consist of many separate components, it may not be possible to realistically replicate a production environment.
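One partial mitigation is to keep handlers as plain functions that can be exercised locally with synthetic events, even if the surrounding managed services cannot be fully replicated. A sketch, assuming the S3-triggered handler shown earlier:

```python
# Sketch: unit-testing a Lambda handler locally with a synthetic
# S3-style event, without replicating the full cloud environment.
# Assumes the handler(event, context) function defined earlier.
def test_handler_counts_records():
    fake_event = {
        "Records": [
            {"s3": {"bucket": {"name": "demo-bucket"},
                    "object": {"key": "uploads/report.csv"}}}
        ]
    }
    result = handler(fake_event, context=None)
    assert result["processed"] == 1
```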

Similarly, monitoring production workloads can be tricky because of the distributed nature of serverless applications and services. Whereas in a traditional application, monitoring would focus on the execution of code, the communication between all the different functions and components in a serverless application can make it much more complex to observe what is happening without specialist tools designed with this task in mind.

“Where composite apps are concerned, monitoring and reporting has to be across the whole workflow, which is a pain at the moment,” says Longbottom.

“AI [artificial intelligence] may help here, dealing with expected outcomes and exceptions and identifying any gaps that happen along the way – again providing the feedback loops that would be required to maintain an optimised serverless environment,” he adds.
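In the meantime, one common stopgap is to propagate a correlation identifier through every function in a workflow, so that logs emitted across the distributed pieces can be stitched into a single trace. A minimal sketch, with illustrative field names rather than any standard:

```python
# Sketch: carrying a correlation ID across serverless functions so
# distributed logs can be grouped into one end-to-end trace.
import json
import uuid

def handler(event, context):
    # Reuse the caller's correlation ID if present, otherwise mint one
    correlation_id = event.get("correlation_id") or str(uuid.uuid4())
    # Emit structured logs carrying the ID so a log aggregator can
    # group entries from every function in the workflow
    print(json.dumps({"correlation_id": correlation_id,
                      "message": "processing started"}))
    # Return the same ID so the next function in the chain can reuse it
    return {"correlation_id": correlation_id, "result": "done"}
```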

All of the issues highlighted here may sound off-putting, but every application deployment model has its drawbacks, and those mentioned above must be set against the great convenience that serverless deployment offers. This is especially true for developers who wish to delegate tiresome resource management tasks to the cloud provider and focus on the business logic of their application rather than on infrastructure.

Many of the downsides are also likely to be ironed out as serverless platforms evolve and as improved monitoring tools become available that are designed to cope with the challenges of serverless deployments. Organisations just need to be aware that serverless is a different kettle of fish from traditional development, and to carefully analyse the risks of adopting a serverless model for a specific application or service before committing to it.
