Keeping cloud costs in check: What enterprises need to know

Organisations should not be rushed into moving to the cloud, because doing so could result in mistakes that are costly to put right

Enterprises are under pressure to move to the cloud, and to do so quickly for business agility and competitive reasons. But it is important, regardless of how big a risk of digital disruption they face, to take time over the creation and implementation of their cloud strategy.

After all, making a wrong decision – in terms of which provider to go with or how much of the IT estate should move to the cloud – can be extremely costly and potentially difficult to remediate further down the line.

Many of these wrong decisions are based on misconceptions about the economics of cloud.

While some businesses and administrators may think cloud can be expensive (and it can), they are often not comparing like with like in their cost assessments.

Many administrators would claim that using their own infrastructure is cheaper, but frequently they are not looking at the total cost of ownership over the life of the application. As such, developers and administrators need to think not in terms of separate hardware and software, but of the application as a whole over its useful lifetime.

It may seem inexpensive to house that hardware internally, but the cost is not just the hardware itself: there are also the hidden costs of systems administration, supporting infrastructure, maintenance, licensing, power, cooling and other less visible items.
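As a rough illustration of comparing like with like, the sketch below totals both options over the useful life of an application. Every figure is a hypothetical placeholder rather than vendor pricing, and the cost categories are only the ones mentioned above.

```python
# Minimal sketch: comparing on-premise and cloud costs over an application's
# useful lifetime rather than on hardware price alone. All figures are
# hypothetical placeholders, not vendor pricing.

LIFETIME_YEARS = 5

# On-premise: the visible hardware cost plus the hidden, recurring items.
onprem_hardware = 40_000                      # servers, storage, networking (one-off)
onprem_annual = {
    "systems administration": 15_000,
    "maintenance and support": 4_000,
    "software licensing": 6_000,
    "power and cooling": 3_000,
    "data centre space": 5_000,
}

# Cloud: a recurring per-year figure covering compute, storage and support,
# plus a one-off migration and refactoring cost.
cloud_migration = 20_000
cloud_annual = 28_000

onprem_tco = onprem_hardware + LIFETIME_YEARS * sum(onprem_annual.values())
cloud_tco = cloud_migration + LIFETIME_YEARS * cloud_annual

print(f"On-premise TCO over {LIFETIME_YEARS} years: £{onprem_tco:,}")
print(f"Cloud TCO over {LIFETIME_YEARS} years:      £{cloud_tco:,}")
```

Crude as it is, a model of this shape makes the point: the comparison only becomes meaningful once the recurring, hidden items are counted over the same lifetime as the hardware.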

Adopting cloud is not simply a case of lifting and shifting workloads to a designated cloud provider; it also means working out the cost of migrating infrastructure to the cloud. The applications earmarked for migration also need to be developed for use in the cloud, and companies trying to retrofit their existing ones to fit such an environment face a huge uphill battle.

For that reason, administrators working in greenfield sites have a major advantage over those dealing with brownfield infrastructure. And planning is the absolute make or break requirement for a successful cloud deployment.

It is important to be realistic about application requirements. It may be simple to say “scale as required”, but that usually comes with a cost that needs to be worked out ahead of time – not just the actual instance cost, but also the technological development and technical debt it will incur.

Scaling cannot simply be switched on ad hoc – testing, testing and more testing is key. Also, not everyone needs auto-scaling, so be honest about the organisation’s requirements. Features cost money, and waste money when they are not used.
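As a back-of-envelope illustration of working that instance cost out ahead of time, the sketch below compares a fleet sized permanently for peak load with one that follows a simple daily profile. The hourly rate and the load profile are invented placeholders, and the comparison deliberately ignores the development, testing and technical debt involved in making scaling work.

```python
# Back-of-envelope sketch: working out the instance cost of "scale as required"
# ahead of time. The hourly rate and the load profile are placeholder values.

HOURLY_RATE = 0.10          # hypothetical cost per instance-hour
HOURS_PER_MONTH = 730

# A simple load profile: instances needed for each hour of a typical day.
# Four instances off-peak, 12 during an eight-hour business peak.
daily_profile = [4] * 16 + [12] * 8

# Fixed provisioning has to cover the peak around the clock.
fixed_instances = max(daily_profile)
fixed_cost = fixed_instances * HOURS_PER_MONTH * HOURLY_RATE

# Auto-scaling only pays for what the profile actually uses.
avg_instances = sum(daily_profile) / len(daily_profile)
autoscale_cost = avg_instances * HOURS_PER_MONTH * HOURLY_RATE

print(f"Fixed at peak: {fixed_instances} instances, ~£{fixed_cost:,.0f}/month")
print(f"Auto-scaled:   ~{avg_instances:.1f} instances on average, ~£{autoscale_cost:,.0f}/month")
```

If the profile turns out to be flat, the two figures converge and auto-scaling buys very little – which is exactly the kind of honesty about requirements the design stage should capture.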

It is critical to have a signed-off design that is agreed and understood by all relevant parties. Up to this point, cloud provider selection is not really a consideration. Any design should include the appropriate sizing and costing of the infrastructure.

Everything as IaaS

A common issue that cloud administrators see when new administrators or developers try to spin up their own cloud environment for the first time is that they tend to concentrate on doing everything as infrastructure as a service (IaaS) rather than platform as a service (PaaS), and are essentially trying to reinvent the wheel (at great cost).

This means that, rather than standing up new, dedicated database servers, administrators should make use of shared, provider-managed infrastructure – such as shared database services and load balancing – delivered as PaaS.

Doing this has a lot of positives and a few negatives. One big positive is that the administrative burden for the high-availability database servers now rests on the shoulders of the cloud provider; the trade-off is that the administrator has to work around the provider’s maintenance schedules.
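From the application’s point of view, consuming a managed database is deliberately unexciting, which is rather the point. The minimal sketch below assumes a PostgreSQL-compatible managed service and the psycopg2 driver purely for illustration, and the DATABASE_URL variable name is an assumption, not a provider convention.

```python
# Minimal sketch: consuming a managed (PaaS) database rather than running your
# own database servers on IaaS. The provider handles provisioning, patching and
# high availability; the application only needs a connection string.
import os

import psycopg2


def get_connection():
    # The connection string points at the provider-managed endpoint and is
    # injected via configuration rather than hard-coded.
    dsn = os.environ["DATABASE_URL"]
    return psycopg2.connect(dsn)


if __name__ == "__main__":
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT version()")
            print(cur.fetchone()[0])
    finally:
        conn.close()
```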

The ability to use a PaaS infrastructure also depends very much on the company’s specific requirements.

If the company operates in a highly regulated industry, shared PaaS infrastructure may not really be an option. Your mileage may vary but it is essential to check, verify and check again. No one likes to be on the legal or technical naughty step.

Any project that has been costed out should have a designated person to manage resources (and users). All too often, utilisation that has not been planned results in the shock of a huge bill, especially if a developer has put their own credit card down as the billing source.

Also, to make life easier to manage from the start, give one person the role of granting resources to users. This helps stop costs spiralling out of control and prevents users deciding what they think they need, rather than what they really need.

At this point, it should be possible to look at the costing of the environment. Knowing how large the environment will be can be very useful in these ways: 

  • It is possible to set a budget for it (see the sketch after this list).
  • Knowing sizes in advance can enable the company to purchase reserved capacity or credits up front, rather than paying on-demand rates as it goes, which can help to keep a lid on costs.
  • Moving to the cloud comes with “invisible” costs, such as bandwidth egress. These need to be thought about from the outset, because outbound data transfer does not come cheap.
  • Development environments should be kept separate, if only because that makes them easier to turn off when not in use, such as after hours, saving money.
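A minimal costing sketch along those lines is shown below. The instance counts, hourly rates and egress charge are hypothetical placeholders rather than provider pricing; the point is simply to get the estimate, the egress line and the budget into one place before anything is provisioned.

```python
# Minimal sketch: rough monthly costing for a planned environment, including
# the "invisible" egress charge, checked against a budget. All rates and sizes
# are hypothetical placeholders, not provider pricing.

HOURS_PER_MONTH = 730

planned_instances = {
    # name: (count, hypothetical hourly rate)
    "web": (4, 0.10),
    "app": (2, 0.20),
    "db": (2, 0.35),
}

egress_gb_per_month = 2_000      # estimated outbound data transfer
egress_rate_per_gb = 0.08        # hypothetical egress charge
monthly_budget = 2_500

compute_cost = sum(count * rate * HOURS_PER_MONTH
                   for count, rate in planned_instances.values())
egress_cost = egress_gb_per_month * egress_rate_per_gb
total = compute_cost + egress_cost

print(f"Compute: £{compute_cost:,.0f}  Egress: £{egress_cost:,.0f}  Total: £{total:,.0f}")
if total > monthly_budget:
    print(f"Over budget by £{total - monthly_budget:,.0f} - revisit sizing before sign-off")
```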

Most cloud providers offer sizing tools to help calculate costs and potential savings.

Preparing to fail

Finally, amid all the jokes about the availability (or lack thereof) of some providers at critical times, it is important for the company as a whole to have a disaster recovery plan in place. How this is achieved is not nearly as important as simply having a tried-and-tested strategy in place.

One major weakness of cloud providers is that, with potentially millions of customers, you are just a number and will have to wait out any downtime issues that occur.

For organisations that remain entirely on-premise, this is a moot point, but for cloud users, being able to correctly (and quickly) fail over to an alternate cloud provider can save a lot of business heartache.

Many providers do offer resiliency, but when an underlying infrastructure service fails, the services that depend on it may become unavailable.

There are several disaster recovery-focused software providers that allow companies to fail over between different cloud infrastructures as required.

Depending on factors such as availability requirements and contractual obligations, such tools may be useful.

However, one major tip is to ensure that the disaster recovery instance itself can be stood up totally independently of the original infrastructure.

In other words, avoid dependencies at all costs. It should also go without saying that it should be possible to test these disaster recovery environments in isolation, ready for the day when failovers may be needed.
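What testing in isolation looks like will vary, but even a scheduled check that exercises the disaster recovery environment without touching the primary is a reasonable start. The sketch below is one minimal way of doing that; the health-check URLs are placeholders for whatever endpoints the DR environment actually exposes.

```python
# Minimal sketch: periodically confirming that the disaster recovery environment
# can be reached and exercised without touching the primary. The endpoint URLs
# are placeholders for whatever health checks the DR environment exposes.
import sys
import urllib.error
import urllib.request

DR_HEALTH_CHECKS = [
    "https://dr.example.internal/healthz",       # hypothetical DR front end
    "https://dr-db.example.internal/healthz",    # hypothetical DR database proxy
]


def check(url: str, timeout: int = 10) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    failures = [url for url in DR_HEALTH_CHECKS if not check(url)]
    for url in failures:
        print(f"DR check failed: {url}", file=sys.stderr)
    # A non-zero exit code lets a scheduler or monitoring job raise an alert.
    sys.exit(1 if failures else 0)
```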

To summarise, cloud projects tend to work best when they are well planned, well documented and thoroughly tested. In short, a proper plan helps not only the technical staff, but also the wider organisation to understand and control costs better.
