The benefits of using cloud are well documented, but many companies still struggle to keep their cloud costs in check, even though cost control is critical to a deployment of any size being declared a success.
The most critical aspect is understanding and implementing proper cloud governance from the outset, including who holds the purse strings and who is allowed to consume cloud resources.
This does not necessarily mean someone needs to know about and authorise every single virtual machine, but who is responsible for authorising projects and signing off resource limits definitely needs to be established from the outset.
And this is frequently where projects start to go awry: when too many people get involved, direction suffers, management becomes muddied, and the entire infrastructure starts to veer off course.
Good cloud governance
As part of the governance piece, the project should include details about the designs and configuration of its various requirements.
Right-sizing the virtual environment is key, and a good peer review of the design is important. That review should include a breakdown of the expected monthly costs, so no one can complain when the six-figure bill arrives by email.
Each project should be presented and approved by management. That authorisation translates into approval for the cloud implementation side. Basically, this makes sure the people and processes are in place before anything gets done in the cloud.
From a technical perspective, cloud automation can manage resources based on defined tags so it does not fall on developers to power off the infrastructure when they log off for the day.
Many cloud platforms offer this functionality out of the box, and learning how to use it can help save a lot of money.
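To illustrate the idea, here is a minimal sketch of tag-driven shutdown logic. The tag name, values and schedule hours are illustrative assumptions, not any provider's convention; in practice this would run as a scheduled function (an AWS Lambda or Azure Automation runbook, say) that calls the provider's stop API on the returned IDs.

```python
# Sketch: decide which instances to power off outside working hours based
# on a "Schedule" tag. Tag names, values and hours are assumptions for
# illustration; real enforcement would call the cloud provider's stop API.

def instances_to_stop(instances, current_hour):
    """Return IDs of running instances tagged for office-hours-only use."""
    to_stop = []
    for inst in instances:
        if inst.get("state") != "running":
            continue  # already off; nothing to do
        tags = inst.get("tags", {})
        # Office-hours machines run 08:00-19:00; everything else stays up
        if tags.get("Schedule") == "office-hours" and not 8 <= current_hour < 19:
            to_stop.append(inst["id"])
    return to_stop

fleet = [
    {"id": "vm-dev-01", "state": "running", "tags": {"Schedule": "office-hours"}},
    {"id": "vm-prod-01", "state": "running", "tags": {"Schedule": "always-on"}},
    {"id": "vm-dev-02", "state": "stopped", "tags": {"Schedule": "office-hours"}},
]
print(instances_to_stop(fleet, current_hour=22))  # only the running dev box
```

The point is that the decision lives in tags and automation, not in a developer remembering to log off.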
The voice of cloud reason
The person responsible for the cloud governance side of the equation needs to be as focused on the business issues as they are on the IT ones, as this helps inject a voice of reason into the projects.
A frequent issue is when a group starts a cloud project and collectively decides it needs everything in the cloud, redundancy and all the optional extras included. Then, when the bills roll in, the whole implementation suddenly gets drastically scaled back.
To put it into context, it is incredibly easy to burn through a million pounds a year without even blinking if the project is not properly thought out from the start.
Know your limits
With DevOps teams building out machines left, right and centre, resource limits need to be set from the outset. That way, developers and administrators can tread carefully within the provided resources, and teams can always come back and ask for more if needed.
All too often, individuals who do not pay the bills will go large on cloud machines, and restricting access to pre-defined templates prevents anyone from getting a bit free and easy with resources.
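The template-and-quota idea above can be sketched as a simple validation step before provisioning. The size names and vCPU figures are made up for illustration; in a real setup this check would sit in the provisioning pipeline or be expressed as a cloud policy, such as Azure Policy's allowed-SKU rules.

```python
# Sketch: enforce pre-defined templates and per-team quotas before any
# machine is provisioned. Sizes and quota numbers are illustrative only.

APPROVED_SIZES = {"small": 2, "medium": 4, "large": 8}  # template -> vCPUs

def validate_request(size, team_quota_vcpus, vcpus_in_use):
    """Reject sizes outside the approved templates or beyond the quota."""
    if size not in APPROVED_SIZES:
        raise ValueError(f"size '{size}' is not an approved template")
    if vcpus_in_use + APPROVED_SIZES[size] > team_quota_vcpus:
        raise ValueError("request exceeds the team's vCPU quota")
    return True

validate_request("small", team_quota_vcpus=16, vcpus_in_use=8)  # fine
```

Anyone wanting a machine outside the templates then has to make a case for it, which is exactly the conversation governance is meant to force.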
Taking pre-emptive action on such issues can create direct pounds and pence savings, and also helps manage expectations.
In organisations that use a considerable amount of resources, Amazon Web Services (AWS) provides engineering time to help optimise the company’s cloud setup.
There is usually no cost associated with this; what AWS understands is that happy customers spend more.
Greater enablement for existing cloud users
Now, that’s all well and good for companies embarking on a new cloud adventure, but what about those that are already there? Once again, establishing governance over the cloud is key.
Any decent cloud project should have a Configuration Management Database, which – among other things – details the ownership of each virtual machine or system.
Establishing the ownership makes life easier in the long run, but – without it – life can rapidly descend into machines left running because no one is keeping track of the virtual machines and services in question.
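A minimal CMDB-style audit along these lines is easy to sketch: flag every machine with no recorded owner, so orphaned instances can be chased up before they quietly run for months. The record fields here are assumptions for illustration.

```python
# Sketch: flag virtual machines with no recorded owner in a CMDB-style
# record set. Field names ("id", "owner") are illustrative assumptions.

def unowned_machines(cmdb_records):
    """Return IDs of machines whose owner field is missing or blank."""
    return [r["id"] for r in cmdb_records if not r.get("owner")]

records = [
    {"id": "vm-web-01", "owner": "ecommerce-team"},
    {"id": "vm-test-07", "owner": ""},
    {"id": "vm-old-03"},  # owner never recorded
]
print(unowned_machines(records))
```

Run periodically, a report like this gives someone a short list of machines to question rather than an estate of unknowns.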
Something that is not done often enough is periodically evaluating and moving to new generations of machine instances.
One example of this is the Azure B series, which is designed for virtual machines that work hard only in short bursts, such as web servers with low-level usage until lunchtime or evening, when peak browsing periods occur.
Using the B series means the same peak performance can be had for less than the equivalent conventionally sized virtual machine.
Other common, questionable activities include standing up dedicated database servers when it is usually much simpler and cheaper to use the cloud provider’s own managed database engines.
Along the same lines, controlling development machines is important: such machines should be powered off at night. It is just common sense. I know some companies that have, for better or for worse, resorted to powering off production machines in the evening to keep costs down.
Reserving instances in advance
If organisations know a certain amount of resources is going to be required at a certain point in time, it makes sense to leverage discounts for buying instances in advance.
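The arithmetic behind this is worth making explicit. The sketch below compares a pay-as-you-go machine with capacity bought in advance at a discount; the hourly rate and discount percentage are purely illustrative figures, not real provider rates.

```python
# Sketch: break-even arithmetic for buying instances in advance.
# The rate (0.10/hour) and discount (40%) are illustrative, not real prices.

def annual_cost_on_demand(hourly_rate, hours_per_day, days=365):
    """Pay-as-you-go: billed only for the hours actually run."""
    return hourly_rate * hours_per_day * days

def annual_cost_reserved(hourly_rate, discount, days=365):
    """Reserved capacity: billed for every hour, but at a discount."""
    return hourly_rate * (1 - discount) * 24 * days

on_demand = annual_cost_on_demand(0.10, hours_per_day=24)   # 876.00/year
reserved = annual_cost_reserved(0.10, discount=0.40)        # 525.60/year
# A machine that must run 24x7 is clearly cheaper reserved; one that only
# runs office hours may not be, which is why the workload must be known.
```

The flip side, of course, is that an advance commitment only pays off if the machine genuinely runs for most of the committed period.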
There are tools that allow right-sizing to be done at scale, while providing a real-time view of cloud usage and spend.
Suppliers of such tools, such as Embotics, can help right-size brownfield hybrid and cloud estates, using machine performance history and other metrics the virtual machine provides, which are fed into the application to generate suggestions.
These changes can then be implemented automatically if desired. Public cloud is a bit more restricted, as it is not possible to change the RAM or CPU of an instance directly, but for private cloud deployments this can be very useful indeed.
Really, controlling cloud cost comes down to cutting your cloth according to what you are prepared to pay, planning properly, and making judicious use of new cloud technologies designed to reduce spend while maintaining the desired performance.