Datacentre evolution: delivering competitive advantage

Feature

Anthony Foy, Managing Director, Interxion Group

Once upon a time, datacentres were back-office domains controlled by a company’s in-house IT specialists; today, a new approach to server-room management is emerging to meet the computing and communications needs of modern organisations.

 

These new datacentres are very different to their predecessors, and even look physically distinct. Today’s datacentres are not merely larger in terms of space; they have far greater capacity in every respect. Most organisations today are underpinned by mission-critical technology, and have a far greater appetite for computing power and network bandwidth than could be foreseen when the first-generation spaces were built.

Power and heat

Today, datacentres are built to handle the unique challenges posed by the IT needs of modern organisations. Vast quantities of electricity must be reliably supplied, while the concentrated heat generated by racks of high-performance processors must be safely dissipated.

 

While first-generation datacentres relied on ample free space to cool monolithic mainframe computers, today’s datacentres pack in tens, hundreds or even thousands of ultra-thin servers, terabytes of disk storage, and dense equipment to provide fast communications. The resulting power draw and heat have become a major concern, as soaring energy prices have made electricity a hugely significant cost factor in IT and facility-management budgets.

 

Low-cost connectivity

Another recent change is that whereas the first datacentres were built to serve a single company, the trend today is to move away from internal consumption of resources and towards leveraging shared infrastructures – driving down costs whilst enhancing operational efficiency.

 

In part, this last trend can be seen as directly related to the increasing market penetration of broadband and its impact on the way businesses and individuals use computers. Organisations are increasingly relying upon third-party ‘multi-sourced’ partners to deliver the highly resilient and scalable IT infrastructure needed to provide and maintain connectivity.

 

This has led to the evolution of carrier-independent datacentres: facilities whose operators offer access to a range of connectivity providers in order to improve network resilience, drive down the cost of bandwidth, and deliver the benefits of a shared infrastructure model.

 

It is not only internet giants with large bandwidth requirements that have cottoned on to the idea that shared space in a carrier-independent datacentre can be an efficient way to meet their computing and connectivity needs. In the last five to eight years, increasing numbers of corporate organisations have begun to use carrier-independent datacentres to supplement (or even replace) their old in-house datacentres.

 

The benefits of carrier-independence

There are many compelling arguments for outsourcing your IT or communications infrastructure to a carrier-independent datacentre. As one example, if your company has its own datacentre, it will need to contract with a carrier – which means paying for exclusive physical cabling between your datacentre and the carrier’s exchange. Worse still, if that provider’s systems crash, you lose connectivity. And contracting with a second carrier for resilience purposes is usually too costly for most organisations.

By contrast, outsourced service providers can offer hosting with multiple carriers, so you are no longer tied to a single supplier or a single physical infrastructure. If one carrier’s system goes down, you can automatically switch to another – promoting resilience and ensuring availability.
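The switching logic described above can be sketched in code. The following Python fragment is purely illustrative – the carrier names and gateway addresses are hypothetical, not drawn from the article – but it shows the basic idea of probing each carrier’s gateway in turn and falling back to the next when one is unreachable.

```python
import socket

# Hypothetical carriers in order of preference; the names, addresses
# and ports below are illustrative placeholders, not real endpoints.
CARRIERS = [
    ("carrier-a", "198.51.100.1", 179),
    ("carrier-b", "203.0.113.1", 179),
]

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to the carrier's gateway succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_carrier(carriers, probe=reachable):
    """Return the name of the first carrier whose gateway answers.

    Walks the preference list in order, so an outage at the primary
    automatically routes traffic via the next carrier; returns None
    only if every carrier is down.
    """
    for name, host, port in carriers:
        if probe(host, port):
            return name
    return None
```

In a real multi-carrier facility this decision is made by routing protocols rather than application code, but the failover principle is the same: health-check each path and prefer the first live one.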

Equally, if you choose an outsourced datacentre with a large number of different carriers – Interxion houses around 700, with between 50 and 100 in each datacentre – they will have to compete for your business. The cost savings can be significant; in some cases, enough to pay for your first year’s hosting.

 

Service evolution

Third-party datacentre hosts are now offering a wide range of services to customers. Very often, they act as a safety net in companies’ disaster recovery policies, ensuring that data is mirrored offsite so that even in the event of a massive operational failure, key data can be accessed and restored.
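The offsite mirroring mentioned above is, at its simplest, a matter of copying new or changed files to a second location. The sketch below – a minimal, assumed implementation, not the method any particular host uses – compares checksums so that only files that differ are copied to the mirror.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's contents, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def mirror(source: Path, target: Path):
    """Copy new or changed files from source to target, preserving layout.

    Returns the list of destination paths that were (re)copied, so a
    second run over unchanged data copies nothing.
    """
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        if not dst.exists() or file_digest(src) != file_digest(dst):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            copied.append(dst)
    return copied
```

Production disaster-recovery mirroring adds encryption, scheduling and block-level replication, but the checksum-and-copy loop captures the core guarantee: the offsite copy converges on the live data.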

 

Security is another area into which hosting companies are starting to move. Firewalls, anti-spam services and virus protection are often provided by datacentres, which scan incoming traffic for attacks or potentially malicious code.

 

The number and range of mission-critical applications being managed by third-party datacentres has also increased. From sales force automation and customer relationship management applications to supply-chain management and human resources software, almost every kind of modern application can now be found housed and supported by third-party datacentres.

 

The emergence of the third-party datacentre is also playing its part in another trend, as companies come under pressure to show their environmental credentials. By sharing resources and increasing the utilisation of hardware, datacentres enable power consumption to be minimised. Arguably, this overall approach is “greener” than the traditional model, where servers often run at just 20 percent capacity and energy-hungry cooling equipment struggles to dissipate heat from poorly designed server rooms.

 

Taking IT off your hands

Yet the most common reason for the shift towards third-party datacentres is probably still the ability to dispense with the burden of IT administration. At a time when security patching and application updating have reached epidemic proportions, many companies will be more than happy to see the back of troublesome administrative duties that do nothing to improve underlying business performance or to differentiate them from competitors.

 

For companies that have experimented with total outsourcing, third-party datacentres offer a halfway-house approach: data can be stored in the same country, and direct access to equipment can often be provided. The model also provides an escape route from challenges such as overheating equipment, spiralling data storage volumes, and the complexities of regulatory change and its impact on data movement and storage.

 

At least in the short term, many companies will choose to continue investing in internal staff and equipment to manage IT tasks. Yet there is a clear trend towards passing the load on to specialists – providers with greater expertise, which can take advantage of economies of scale to deliver a better service and thereby maintain their market position.


This was first published in October 2007
