In the early days, cloud pricing was much simpler. With only a handful of services on offer, it was easy for the likes of Amazon Web Services (AWS) to say: “this is how much it costs – click here, submit your credit card details and you’re off”.
Now, there are so many services being offered by each cloud provider – and each has its own pricing model behind it – that anyone wanting to put together a cloud architecture could be forgiven for tearing their hair out as they work out what is required.
As a relatively basic example, let’s take a look at what an overall workload could potentially consist of, by breaking down the server, network and storage requirements.
- What “class” of logical server is likely to be needed?
- How many tiers of storage?
- Will the storage all be in one region?
- Will there be a need for data to traverse regions?
- Is there “deep” (longer term) storage being used?
- What classes of network are required?
- Will enhanced connectivity be used?
- Is priority and quality of service required?
- Is high availability a requirement?
- Is there a need for full, active mirroring?
- Will warm containers with mirrored data suffice?
- Could cold containers with mirrored data be OK?
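The questions above boil down to a cost model that multiplies each answer by its own metric. As a purely illustrative sketch – every component name, quantity and rate below is hypothetical, not any provider’s real price list – the arithmetic looks like this:

```python
# Hypothetical monthly cost sketch for a workload broken down by
# server, storage and network questions. All rates are illustrative.

components = {
    # name: (quantity, unit_rate_usd, unit)
    "compute (general-purpose class, 730 hrs)": (730, 0.10, "per hour"),
    "hot storage tier (500 GB)":                (500, 0.023, "per GB-month"),
    "deep/archive storage (2,000 GB)":          (2000, 0.004, "per GB-month"),
    "cross-region data transfer (100 GB)":      (100, 0.02, "per GB"),
    "warm standby mirror (730 hrs)":            (730, 0.05, "per hour"),
}

total = 0.0
for name, (qty, rate, unit) in components.items():
    cost = qty * rate
    total += cost
    print(f"{name}: {qty} x ${rate} {unit} = ${cost:.2f}")

print(f"estimated monthly total: ${total:.2f}")
```

Even this toy version mixes four different billing metrics – and a real architecture would add per-request, per-user and tiered rates on top.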
Each area has a possible impact on the overall price – and this is before considering whether pricing is going to be calculated per transaction, per volume of data stored/transferred, per user per month or whatever metric the cloud provider wants to push.
Even when a contract has been signed, it is not the end of the problem. The cloud providers are vying with each other through continuously lowering prices, and unwary customers can find themselves tied into high costs as prices tumble around them.
There are also new cost models being brought in, and the financial acumen of a technical person can be tested beyond breaking point.
Picking apart per-second billing models
A case in point is the per-second pricing model that the likes of AWS and Google have adopted of late.
Cloud tended to start off with per-hour pricing, which has then been lowered to per-minute charging as more granular workloads have come through – but does per-second really offer anything of value to a customer?
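The effect of billing granularity is easy to quantify: charge for the runtime rounded up to the nearest billing unit. A minimal sketch (the instance rate and job length are hypothetical) shows why the move from per-hour to per-minute mattered, while per-minute to per-second barely does:

```python
import math

def billed_cost(runtime_seconds, hourly_rate, granularity_seconds):
    """Round the runtime up to the billing granularity, then charge pro rata."""
    units = math.ceil(runtime_seconds / granularity_seconds)
    return units * granularity_seconds * (hourly_rate / 3600)

runtime = 200   # a short batch job: 3 minutes 20 seconds
rate = 0.10     # hypothetical $0.10/hour instance

for label, gran in [("per-hour", 3600), ("per-minute", 60), ("per-second", 1)]:
    print(f"{label}: ${billed_cost(runtime, rate, gran):.6f}")
```

For this job, per-hour billing charges the full $0.10; per-minute billing cuts that by more than 90%; moving on to per-second billing saves roughly a further tenth of a cent.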
A comparison – a false one, as will be shown – can be made with the telecommunications industry. Voice calls used to be billed as a connection fee combined with a per-minute charge for the call.
With so many calls lasting less than a minute, though, users found themselves paying a lot for such short conversations – and didn’t see the value in the model. It only took one telecommunications company (Orange) to opt for a per-second charging model for users to see the value and start switching, and – pretty soon – its telco rivals changed their pricing plans too.
Is the cloud market anything like the telecommunications one? In this instance, no.
Without forcing a square use case into a round cloud hole, I fail to see how per-second pricing really adds anything to a cloud model, and cannot think of a workload that is sensitive to per-second billing. Sure, there are plenty – such as those in financial trading – where response times have to be sub-second, but this has nothing to do with per-second billing for central processing unit (CPU), storage or network usage.
It simply appears to be a triumph of marketing hype over reality – some marketing bod’s “flash of brilliance” in coming up with an offer that no-one else had.
Oracle vs. AWS
Then there is Oracle, with its guarantee it will beat AWS’ cloud pricing by at least 50%. As always, the devil is in the fine print.
This is not a case of using Oracle’s cloud as a universal workload engine: the price promise is based on using Oracle’s new cloud-based database. With Oracle having tweaked its pricing model to make AWS twice as expensive as Oracle’s own cloud for licences, it’s pretty much a slam dunk that it can promise the 50% savings – at the licence level.
However, AWS could offer a migration service away from Oracle’s database – and so avoid the Oracle “tax” at the licence level, allowing it to provide highly competitive offers directly.
Cutting through the cloud pricing hype
In a market where things change so rapidly, and where fear, uncertainty and doubt drowns out the real messages, what should a customer be looking for? The idea of “serverless” computing promises one way out of such a mess.
AWS launched Lambda as a service where the customer provides a workload, and the service calculates what resources are required and quotes a cost, which the customer can then accept or decline as they see fit.
This has to be the future for cloud computing – rather than requiring the person architecting a platform to hold a couple of degrees in computer science, plus a professorship in economics, just to access the resources they need.
Cloud providers need to chase the general market with more of an approach of “tell us what you need – we’ll tell you how much it will cost”.
Sure, there will be additional payments needed, such as when a workload’s resource requirements grow beyond agreed limits, or when the workload changes and needs a different set of resources applied to it.
Cloud providers are talking heavily about how their platforms are ideal for artificial intelligence and machine learning, and it is time for them to use that to cut out the complexity from cloud pricing.