Setting up a cloud computing platform can be a little more complex than many organisations expect, as there are so many things to take into consideration.
For example, what size of virtual server should you start off with, and how does it compare with anything that exists in the real world? If it doesn't, how do you know what you are signing up to, and how do you even start to compare one provider's offer against another's?
And that’s just for starters. What type of storage and how many tiers will be required, and will you be penalised for going over and above your expected use limits?
Trying to carefully craft something that meets both the business and technical needs of an organisation seems to be just as hard, if not harder, than when systems architects acquired all the hardware directly and built their own platforms.
Surely, this isn’t what the promise of the cloud was meant to be? Indeed it is not.
The idea of cloud, and this is certainly true of how it has been sold to enterprises in the past, was to provide a simple environment based on five key principles:
- On-demand self-service
- Broad network access
- Resource pooling
- Rapid elasticity
- A measured service
While the offerings of many public cloud providers largely meet these requirements, the idea of on-demand self-service was more about having access to a complete service.
What has emerged, however, is a set of poorly defined and difficult-to-integrate building blocks that must be assembled into a platform before any useful service can run on it.
Even large organisations struggle when trying to figure out the intricacies of Amazon Web Services (AWS), Microsoft Azure and other public cloud offerings, while firms in the small to mid-market size range have little chance of getting a cost and performance-optimised result from the public cloud.
Cloud use is still not as mainstream as many industry commentators would have people believe, and the availability of customers that can be defined as easy targets for infrastructure and platform as a service (IaaS/PaaS) is drying up.
The lack of skills required to bring together all parts of a cloud platform is making many organisations look at their options and resolve to keep doing things the way they always have, while moving to adopt software-as-a-service (SaaS) offerings.
What is becoming obvious is that prospective cloud users need a different approach to consuming cloud. They need something simpler that allows them to move away from the complexity of setting up the initial base platform.
The answer to this could be serverless computing (a misnomer if ever there was one), which has nothing to do with getting rid of the hardware a service or application runs on. What it aims to do is offer the capacity to provide a function as a service in a simpler manner.
A lot of this harks back to earlier service provider technology provision, but it is now coming into its own due to improvements in standards and systems orchestration.
A who’s who of serverless computing
The first real instantiation of modern serverless computing can be seen in AWS's Lambda product, which the cloud giant debuted in 2014.
Lambda provides a means of running code without the need to provision servers on the AWS EC2 platform.
Instead, users load the code they want to run, and Lambda takes care of provisioning the resources needed to run the workload, as well as monitoring and management.
It also brings in true use-based charging. Rather than charging users for a base logical server that is deemed to be on at all times, Lambda only charges when a function is being used.
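The contract this creates is deliberately minimal. The sketch below follows AWS Lambda's Python handler convention (a function taking an `event` payload and a `context` object); the event contents and the local invocation at the bottom are illustrative, not any particular service's real payload.

```python
import json

def handler(event, context):
    """Entry point the platform invokes on a trigger.

    There is no server to provision or patch: the platform supplies the
    runtime, scales it, and bills only while this function executes.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoked locally with a synthetic event to show the shape of the contract.
if __name__ == "__main__":
    print(handler({"name": "serverless"}, None))
```

Everything outside the function body, from capacity to monitoring, is the provider's problem, which is precisely the simplification the article argues organisations have been missing.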
Presently, Lambda is focused on providing these capabilities at the compute level, but, as time goes on, it will also include storage and network resources in the intelligence of the overall build.
Microsoft, IBM and Google have also joined the serverless bandwagon, with the roll-out of their respective Azure Functions, Bluemix OpenWhisk and Cloud Platform Cloud Functions offerings.
Each one has pretty much the same charging mechanism in place as Lambda, in that charging is based on events and live usage.
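To see what event-based charging means in practice, the estimate below uses rates modelled on Lambda-style pricing of the period (a fee per million requests plus a fee per GB-second of compute actually consumed); the figures are assumptions for illustration, not a quote from any provider.

```python
# Illustrative per-use cost estimate. The rates are assumptions modelled
# on Lambda-style pricing, not a quote from any provider.
PRICE_PER_MILLION_REQUESTS = 0.20   # dollars
PRICE_PER_GB_SECOND = 0.00001667    # dollars

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Cost = request charge + compute charge for GB-seconds actually used."""
    request_charge = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return request_charge + gb_seconds * PRICE_PER_GB_SECOND

# A function called 2 million times a month, running 200ms at 512MB:
print(round(monthly_cost(2_000_000, 200, 512), 2))  # a few dollars, not a
                                                    # monthly server rental
```

The contrast with paying for an always-on logical server is the point: a function that never fires costs nothing.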
So what is serverless computing useful for? At the moment, attempting to run a full-function application in this manner would not work well. The live time and amount of base resource required would be too high, and the costs involved would be horrendous and unpredictable.
However, consider something event driven, such as a surveillance system with a network camera set to upload a video to a cloud storage system when movement is detected.
The act of transferring that video from the camera to the cloud could trigger an analysis algorithm within a serverless environment.
The action of transferring is the trigger, creating the trigger charge and causing the code to be run to analyse the video, which then creates the live time charge.
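That flow can be sketched as a handler wired to a storage upload notification. The event shape below follows the AWS S3 put-notification convention, and `analyse_video` is a hypothetical stand-in for the real analysis step.

```python
def analyse_video(bucket, key):
    """Hypothetical analysis step; a real system would call a vision service."""
    return {"bucket": bucket, "key": key, "motion_events": []}

def handler(event, context):
    """Runs only when an upload notification fires; idle time costs nothing."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(analyse_video(bucket, key))
    return results

# Synthetic notification mimicking a camera upload:
event = {"Records": [{"s3": {"bucket": {"name": "camera-footage"},
                             "object": {"key": "front-door/clip-0041.mp4"}}}]}
print(handler(event, None))
```

The upload generates the trigger charge; the handler's run time generates the live time charge; between uploads, nothing is billed.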
Similarly, an organisation that wants to analyse documents for data leak prevention (DLP) purposes could write code that scans documents for specific content and creates metadata around it.
The document is saved in a cloud store, triggering the code to run, and the metadata is sent back to the document store.
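A minimal sketch of that scanning step might look like the following; the two patterns are illustrative only, and a production DLP system would use far richer rules than a pair of regular expressions.

```python
import re

# Illustrative patterns only; real DLP rule sets are far more extensive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def extract_metadata(text):
    """Scan a saved document and return DLP metadata for the document store."""
    findings = {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}
    return {"sensitive": any(findings.values()), "findings": findings}

print(extract_metadata("Contact jo@example.com re card 4111 1111 1111 1111"))
```

As with the video example, saving the document is the trigger and the scan's run time is the only compute billed.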
Granted, it may not represent a nirvana of “tell me your business needs, and the cloud will provide”, but it is far closer to a self-defining system than organisations have become used to so far.
Of course, it remains to be seen how far the concept can be taken, and if we will get to see that nirvana being catered for in the next generation of serverless systems.
It is Quocirca’s view that 2017 will see organisations start to experiment with fully resourceless computing in the public cloud.