Dell is making all the right noises and the right bets. Will its magic work?

avenkatraman

I had a chance to see Michael Dell - in the flesh - for the first time yesterday in Brussels at the Dell Solutions Summit. He delivered a great keynote on Dell's datacentre strategy and investment plans, and spoke about all the hot IT topics - software-defined infrastructure, the internet of things, security and data protection.

Michael Dell, founder & CEO, Dell Inc.

Michael sounded optimistic about Dell's place in the future of IT, but what was new was how open Dell has become as a company, and its firm commitment to all the things that define new-age IT - software-defined infrastructure, cloud, security, mobile, big data, next-generation storage and IoT.

For one, Michael was candid with the numbers. He said:

  • The total IT market is worth $3 trillion and we have a 2% share of it. Only 10 companies have 1% or more share of that $3 trillion market.
  • Dell's business comprises 85% government and enterprise IT and just 15% is end-user focused.

This kind of number-feeding of the press and analysts is new at Dell, which, until now, like the rest of the industry's service providers, kept its business numbers close to its chest.

But that was not all: Michael didn't hold back from saying a few things that raised a few eyebrows:

  • "I wish we hadn't made some of the acquisitions we did."
  • "As ARM moves to 64-bit architecture, it becomes more interesting," Michael said. He said the company is open to working with its longstanding partner Intel's rival for mainstream datacentre products if that's where the market moved.
  • He also said, Dell is a big believer of the software-defined future. "We ourselves are moving our storage IT into a software defined environment."
  • And to those that wrote off the PC industry, Michael said: "We absolutely believe in the PC business, we are consolidating/growing".

Michael's optimism and confidence in the company's future are a far cry from last year, when the company's ailing business forced it to take itself out of the public eye.

"Going private has helped us," he said while speaking in Brussels. "It has enabled us to put our focus 100% on our customers. We have invested more in research, development, innovation and in channels in the past year."

Dell also seems to be striking the right chord with its customers, channel and analysts, as those I spoke to said they like the company a lot and are pleased with how quickly it adapts and listens to its users.

Dell Research will be focused on five areas - software-defined IT, next-generation storage (NVM, flash), next-generation cooling, big data/analytics and IoT. Analysts say that's a good bet.

"Dell's foray into research clearly designed to establish it as an IT innovator as well as a scale/efficiency player," says Simon Robinson from 451 Research group on Twitter.

Product-wise, too, it is making progress. Dell has been more creative than its competitors in designing its new servers around the latest Xeon chips. Its 13th-generation PowerEdge servers offer capabilities such as NFC for server inventory management, new flash options and more front sockets.

Dell is also being innovative in its enterprise cloud strategy. It is providing the reference architectures, proofs of concept and server technologies for its system integrators to do the cloud implementation for customers. Having catered to the likes of AWS in the past, Dell has used that cloud experience to build reference architectures, but gets the channel to implement them.

"We see private cloud as the future of cloud computing," Michael said. According to him, enterprises in Europe prefer "local" clouds for data sovereignty and privacy issues, so it is supporting local system integrators with local datacentres to build cloud for the customers.

Michael and his company are certainly making the right noises and investing in the right technologies. But whether that will improve Dell's ranking in the datacentre market (which I see as fourth, after Cisco, HP and IBM - in that order), only time will tell.

Tintin with Snowy

Also, is it symbolic that Dell held its Solutions Summit party at the world-famous Comic Strip Museum in Brussels - the home of Tintin, Captain Haddock, the Smurfs and Asterix? Don't know, but I sure did have fun!

Cloud wars just got spicier, thanks to Google

avenkatraman

Ambitious startups and developers around the world got a big treat from Google ahead of the weekend - $100,000 worth of Google cloud credits along with 24/7 support from the tech experts at Google.

Urs Hölzle, Google Fellow, launched the "Google Cloud Platform for Startups" initiative on Friday to help startups take advantage of its enterprise cloud offering and "get resources to quickly launch and scale their idea". The free cloud resources are aimed at helping developers focus on code without worrying about managing infrastructure. Google has also not set any restrictions on the type of cloud services users can spend their credits on, giving them complete flexibility to choose IaaS, PaaS or SaaS, or even data-related cloud offerings.

But to qualify, startups will need to have less than $5m in funding and less than $500,000 in annual revenue. And the cloud credits are available through incubators, accelerators and investors.

Cloud computing has always been a technology that democratised IT by giving startups a level playing field to compete with the big players. And cloud behemoth AWS is seen as the "go-to" cloud option for the cool, emerging poster-children of the web such as Netflix and Instagram (before it was acquired by Facebook).

Google Appliance as shown at RSA Expo 2008 in San Francisco (Photo credit: Wikipedia)

Google, AWS, Microsoft and IBM have so far been tripping over each other to announce price drops to lure more users to their cloud services. AWS launched a free programme called AWS Activate to help selected startups with resources for working with AWS. It includes services such as AWS web-based training, virtual office hours with an AWS Solutions Architect and credit for eight self-paced training labs. 

But Google has now upped the game by targeting startups with cloud credits of the size and scale that hasn't been seen before.

Cloud giants are targeting the ambitious startups because today's startups can become tomorrow's enterprises and the providers want these potential customers to use their platforms. 

Startups are lean and quick at adopting new technologies such as the cloud, but need some technical expertise so they can focus on their business rather than the underlying technology. Google is offering exactly that in a bid to get a footprint in enterprise IT.

It will be interesting to see how the UK's promising tech startups such as SwiftKey and Hailo use these cloud credits offered by Google. But more importantly, how quickly will AWS, Microsoft and IBM respond?

Cloud wars just got spicier!  

 

Want cloud success? Eat your greens!

avenkatraman

Cloud computing is becoming a default option for delivering IT services, but to reap all the benefits of the cloud, enterprises must do the boring stuff first.

On Thursday, I attended a Westminster eForum seminar on the future of cloud computing, where I witnessed very interesting conversations around cloud adoption, its risks and its future from speakers ranging from analysts, legal experts and industry association heads to cloud vendors and public sector professionals.

Boring but necessary! (Photo credit: Wikipedia)

When experts said cloud can be secure and cost-effective and can lead to innovation, it did not raise any eyebrows among the delegates. This suggests to me that users are fully convinced of cloud's benefits.

But even then, some cloud projects backfire. Why?

The excitement around cloud is leading enterprises to overlook the boring work they need to do beforehand to yield the full benefits of the cloud. Ovum analyst Gary Barnett illustrated this best in his (PowerPoint-free!) session. Here is an article where Gary shares instances where cloud has failed users.

"My mum made sure I ate my broccoli before I got my pudding," Gary said. But in the cloud world, no one's eating the broccoli, he said.

"If you don't clean up your data before putting it on the cloud platform, you will have cloudy rubbish." He also pointed that some users are finding cloud expensive because they are not building proper policies and guidelines around its use.

Experts at the seminar insisted cloud is a secure way of doing IT and that cloud breaches are usually down to users' "silly and predictable passwords" and their lack of awareness. Gary urged enterprises to educate users on the risks of predictable passwords.

"No one loves the boring stuff. But just like you have to eat your greens, you have to do all the boring stuff before adopting the cloud. Otherwise you're just transferring onsite mess offsite," Gary said.

The "Eat your greens" theme continued throughout the seminar and the floor roared out laughing when Microsoft's cloud director Maurice Martin said: "In my case, the greens were the cabbages, broccoli was too posh."

 

Five questions you must ask your cloud provider

avenkatraman

One of the main barriers to cloud adoption is data privacy. This is an issue because, for the majority of cloud providers, EU/EEA and US data privacy and information security standards are minefields that are very difficult to cross. And that is because their focus has been on the ease of use and functionality of their services rather than the all-important data privacy, information security, data integrity and reliability requirements around providing these services responsibly.

But, when looking through the plethora of cloud service providers, you can immediately sort the 'wheat from the chaff' once you start drilling down into the data privacy, information security, data integrity and reliability capabilities offered to ensure the protection of your and your customers' data.

In this guest blog post, Mike McAlpen, the executive director of security & compliance and data privacy officer at 8x8 Solutions outlines the questions cloud users must ask their providers before signing a contract.


Have you chosen the right cloud services provider?
- Mike McAlpen


By asking your cloud services provider the following questions you will be on the way to knowing whether you can entrust your data into its care.  

  • Compliance with EU/EEA data privacy standards

The most important question is whether your provider can produce third-party verification/audit assurance of its compliance with EU/EEA and/or US data privacy standards. It is not enough for the provider to simply produce this verification/audit assurance; it must show that it has fully implemented the UK Top 20 Critical Security Controls for Cyber Defence and/or ISO 27001, and/or rigorous US standards such as the Federal Information Security Management Act (FISMA) and the international PCI DSS v3.0 security standard.

If this verification/audit assurance is not available then your business is at peril of not meeting EU/EEA and/or US standards.

In the US, many EU/EEA countries and other jurisdictions, it can be a criminal offence if a breach of personal data privacy occurs and an individual employee or senior manager, depending on the circumstances of the breach, is deemed to be responsible.

  • Onward Transfer of Data

Does your provider work with third-party suppliers to deliver the cloud services it offers? If so, you must check that it has contracts in place with those third-party suppliers providing assurance that they are, and will continue to be, compliant with EU/EEA and/or US standards.

  • Data Encryption

Does the cloud solutions vendor provide the capability to encrypt sensitive data when it is being transferred across the internet and, importantly, again when it is 'at rest' (i.e. stored by your cloud services provider, or in files on a computer, laptop, USB flash drive or other electronic media)?

  • The Right to be Forgotten

Has your provider's solution been engineered to enable it to identify and associate each user's personal data? It must also provide the capability for each user to view and modify this personal data. In addition, if the user wishes this data to be deleted, the provider must be able to completely erase all of that person's personal data without affecting anyone else's data.

  • Service Level Agreements (SLAs)

Outside of compliance with data privacy standards, another key issue is asking your provider how you will determine and then document, within your services contract, the required service level agreements (SLAs). It's no use whatsoever having the cloud services you have always wanted if you have no way of measuring or monitoring if they are actually being delivered to an acceptable level or if there are no financial penalties for non-compliance.

If your provider cannot answer "yes" to the above questions and you cannot agree to mutually acceptable SLAs - look for another provider!

VMworld 2014: What happened on Day 1

avenkatraman

On Day 1 of its annual conference, VMworld 2014, themed "No Limits", VMware unveiled its strategies around the open cloud platform OpenStack and the container cluster management technology Kubernetes. It also launched new tools to extend its software-defined datacentre and hybrid cloud offerings.

Open software-defined datacentre

One of the significant announcements was VMware Integrated OpenStack - a distribution that gives enterprises - especially SMBs - the flexibility to build a software-defined datacentre on any technology platform (VMware or not).

The VMware Integrated OpenStack distribution is aimed at helping customers repatriate workloads from "unmanageable and insecure public clouds". Take that, AWS.

Container technology and VMware infrastructures; Kubernetes collaboration

VMware is collaborating with Docker, Google and Pivotal to allow enterprises to run and manage container-based applications on its platforms.

At the annual conference, VMware said it has joined the Kubernetes community and will make Kubernetes' patterns, APIs and tools available to enterprises. Kubernetes, currently in pre-production beta, is an open-source implementation of container cluster management.

With Google, VMware's efforts will focus on bringing Kubernetes' pod-based networking model to Open vSwitch to enable multi-cloud integration of Kubernetes.

"Not only will deep integration with the VMware product line bring the benefits of Kubernetes to enterprise customers, but their commitment to invest in the core open source platform will benefit users running containers," said Joerg Heilig, VP engineering, Google Cloud Platform. "Together, our work will bring VMware and Google Cloud Platform closer together as container-based technologies become mainstream."

With Docker, VMware will collaborate to enable the Docker Engine to work with VMware workflows. It will also work to improve interoperability between Docker Hub and VMware vCloud Air, VMware vCenter Server and VMware vCloud Automation Center.

New hybrid cloud capabilities

At VMworld, VMware released new hybrid cloud service capabilities and a new line-up of third-party mobile application services. The new capabilities include vCloud Air Virtual Private Cloud OnDemand, which offers customers on-demand access to vCloud Air. Another capability - VMware vCloud Air Object Storage - is aimed at providing users with scalable storage options for unstructured data. It will enable customers to easily scale to petabytes and only pay for what they use, according to the company.

It also launched mobile development services within the VMware vCloud Air service catalog.

Management as a service offerings

VMware also released two new IT management tools under its vRealize brand - for managing a software-defined datacentre and public cloud infrastructure services (IaaS).

VMware vRealize Air Automation is the cloud management tool that allows users to automate the delivery of application and infrastructure services while maintaining compliance with IT policies.

Meanwhile, VMware vRealize Operations Insight offers performance management, capacity optimisation and real-time log analytics. The tool also extends operations management beyond vSphere to an enterprise's entire IT infrastructure - another sign that VMware is opening up its ecosystem to accommodate other virtualisation platforms.

Partnership with Dell on software-defined services

VMware has extended its collaboration with Dell to combine its NSX network virtualisation platform with the latter's converged infrastructure products.

"Global organisations are adopting the software-defined datacentre as an open, agile, secure and efficient architecture to simplify IT and transition to the hybrid cloud," said Raghu Raghuram, executive vice president, SDDC division, VMware. "The software-defined datacentre enables open innovation at speeds that cannot be matched in the hardware-defined world. As partners, VMware and Dell will advance networking in the SDDC, and collaborate to make advanced network virtualisation available to mutual customers." 

Partnership with HP on hybrid cloud

VMware and HP have extended their collaboration to give momentum to users' SDDC and hybrid cloud adoption. As part of the partnership, HP Helion OpenStack will support enterprise-class VMware virtualisation technologies.

The companies will also make a standalone HP-VMware networking solution generally available. Together, these collaborative efforts can help simplify the adoption of the software-defined datacentre and hybrid cloud with less risk, greater operational efficiency and lower costs.

All in all, it looks like VMware is opening up to competitive platforms and warming up to open source technologies, but retains its standoffish traits when it comes to public cloud services.







Microsoft Azure goes down for users around multiple regions including Europe and Asia

avenkatraman

Just when I thought to myself that cloud services must be improving, as there have been fewer outages reported this year than last year, Microsoft's Azure cloud service went down for many users, including European ones, earlier this week.

Microsoft's Azure status page currently displays a chirpy: 

All good!

Everything is running great.


It also displays a bright green check beside its core Azure platform components such as Active Directory, and popular cloud services including its SQL Databases and storage services.

A snoop into its history page shows that all wasn't good aboard Azure on Monday and Tuesday. Users experienced full service interruptions and performance degradation across several services including StorSimple, storage services, website services, backup and recovery, and virtual machine offerings.

For a brief moment on Tuesday, August 19th, a subset of its customers in West Europe and North Europe using Virtual Machines, SQL Database, Cloud Services and Storage were unable to access Azure resources or perform management operations. Users accessing Azure's Websites cloud service in North Europe also faced connectivity issues.

WELCOME TO Microsoft® (Photo credit: Wikipedia)

The previous day, some of its customers across multiple regions were unable to connect to Azure Services such as Cloud Services, Virtual Machines, Websites, Automation, Service Bus, Backup, Site Recovery, HDInsight, Mobile Services, and StorSimple. 

Some of the services were down for almost five hours.

This week's global outage follows last week's (August 14th) Azure outage, when users across multiple regions experienced a full service interruption to Visual Studio Online. The news doesn't bode well for CEO Satya Nadella's "cloud-first" strategy.

Here is a detailed report on Azure's latest datacentre outage.

Well, I may have tempted fate. Resilience and reliability are two words I'll use sparingly to describe public cloud services. 

PUE - the benevolent culprit in the datacentre

avenkatraman

The internet of things, big data and social media are all creating an insatiable demand for scalable, sophisticated and agile IT resources, making datacentres a true utility. This is prompting big tech and telecoms companies to drift a little from their core competency and build their own customised datacentres - take Telefonica's €420m investment in its new Madrid datacentre.

But the mind-boggling growth of computing infrastructure is occurring amid shocking increases in energy prices. Datacentres consume up to 3% of global electricity and produce 200 million metric tons of carbon dioxide, at an annual cost of $60bn. No wonder IT energy efficiency is a primary concern for everyone from CFOs to climate scientists.

In this guest blog post, Dave Wagner, TeamQuest's director of market development, who has 30 years of experience in the capacity management space, explains why enterprises must not be too hung up on PUE alone to measure their datacentre efficiency.


Measuring datacentre productivity? Go beyond PUE
-by Dave Wagner


In their relentless pursuit of cost effectiveness, companies measure datacentre efficiency with power usage effectiveness (PUE). The metric divides the total amount of power coming onto the datacentre floor by how much of that power is actually used by the computing equipment.

PUE = total facility energy / IT equipment energy

PUE is necessary but not a sufficient indicator to gauge the costs associated with running or leasing datacentres.

While PUE is a detailed measure of datacentre electrical efficiency, it is one of several elements that actually determine total efficiency. In the bigger picture, focus should be on more holistic and accurate measures of business productivity, not solely on efficient use of electricity.

Gartner analyst Cameron Haight has talked about how a very large technology company owns the most efficient datacentre in the world, with a PUE of 1.06. This basically means that roughly 94% of every watt that comes onto the floor actually gets to the processing equipment. This remarkably efficient PUE achievement does not tell us what they do with all of that power, or how much total work is accomplished. If all that power is going to servers that are switched on but essentially idling and not actually accomplishing any useful work, what does PUE really tell us? Actual efficiency in terms of doing real-world work could be nearly zero even when the PUE metric, in isolation, indicates a well-run datacentre.
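To make that concrete, here is a minimal, illustrative Python sketch (the "useful work" share is a hypothetical figure assumed for illustration, not a measured one) of how a superb PUE can coexist with near-zero real-world efficiency:

# Illustrative only: PUE versus a rough "useful work" efficiency figure
def pue(total_facility_kw, it_equipment_kw):
    # Power usage effectiveness: total facility power divided by IT equipment power
    return total_facility_kw / it_equipment_kw

def useful_work_efficiency(total_facility_kw, it_equipment_kw, useful_share):
    # Fraction of all incoming power that ends up doing useful computing,
    # where useful_share is the (assumed) share of IT power not spent idling
    return (it_equipment_kw / total_facility_kw) * useful_share

total_kw, it_kw = 1060.0, 1000.0  # a facility with a PUE of 1.06
print(round(pue(total_kw, it_kw), 2))                           # 1.06 - about 94% of power reaches the IT gear
print(round(useful_work_efficiency(total_kw, it_kw, 0.10), 3))  # 0.094 - if only 10% of IT power does real work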

Datacenter (Photo credit: Wikipedia)

Boiled down, what companies end up measuring with PUE is how efficiently they are moving electricity around within the datacentre.

By some estimates, many datacentres are actually only using 10-15% of their electricity to power servers that are actually computing something. Companies should minimize costs and energy use, but nobody invests in a company solely based on how efficiently they move electricity.

Datacentres are built and maintained for their computing capacity, and for the business work that can be done thereupon. I recommend correlating computing and power efficiency metrics with the amount of useful work and with customer or end user satisfaction metrics. When these factors are optimised in a continuous fashion, true optimization can be realised.

I've talked about addressing power and thermal challenges in datacentres for over a decade, and have seen progress made - recent statistics show a promising slowdown in datacentre power consumption rates in the US and Europe due to successful efficiency initiatives. Significant improvements in datacentre integration have helped IT managers control the different variables of a computing system, maximising efficiency and preventing over- or under-provisioning, both having obvious negative consequences.

An integrated approach to planning and managing datacentres enables IT to automate and optimise performance, power and component management, with the goal of efficiently balancing workloads, response times and resource utilisation with business changes. Just as the IT side analyses the relationships between the components of the stack - networking, server, compute and applications - the business side of the equation must always be an integral part of these analyses. Companies should always ask how much work they are accomplishing with the IT resources they have; unfortunately, that is often easier said than done. In the majority of datacentres and connected enterprises, the promise of continuous optimisation has not been fully realised, leaving lots of room for improvement.

As datacentres grow in size and capabilities, so must the tools used to manage them. Advanced analytics have become essential to bridging IT and business demands, starting with relatively simple correlative and descriptive methods and progressing through predictive to prescriptive approaches. Predictive analytics are uniquely suited to understanding the nonlinear nature of virtualised datacentre environments.

These advanced analytic approaches enable enterprises to combine IT and non-IT metrics in such a powerful way that the data generated by the networked computing stack can become the basis for automated and embedded business intelligence. In the most sophisticated scenarios, analytics and machine learning algorithms can be applied in such a way that the datacentre learns from itself and generates insight and models for decision-making approaching the level of artificial intelligence.

 

What's making Oregon the datacentre capital

avenkatraman

I am just back from Oregon, where I attended a workshop at Intel's Hillsboro campus. What amazed me the most - apart from the delicious Peruvian cuisine I had in Portland, of course - is Intel's large presence in the area and the number of big datacentres in Oregon.

Intel is the biggest employer in the region and has multiple, vast campuses there. It even has its own airport in Hillsboro, Oregon, from where it operates regular flights to its Santa Clara headquarters for its employees. Several flights, each carrying up to 40 Intel employees, operate every day. The hotel I stayed in at Hillsboro told me that on any given day, about 70% of the people it serves are Intel-related.

Apart from Intel almost hijacking Oregon with its presence, the state is also home to many datacentre facilities. Facebook (Prineville), Google (its first datacentre, The Dalles), Amazon (Boardman), Apple (also Prineville) and Fortune Datacentres (Hillsboro) all have large facilities in Oregon.

Here's why:

Cost

One of the primary reasons many tech giants consider Oregon as the home for their datacentres is lower costs. Oregon does not have a sales tax, which means computer products, building materials and services are cheaper than elsewhere in the US. In addition, power - a main datacentre money-guzzler - is cheaper in Oregon. Furthermore, the local government lures tech giants by providing incentives such as tax breaks and subsidies. All these factors attract datacentre investment here.

Prineville, Oregon (Photo credit: Wikipedia)

Talented workforce

Because of the tech culture of the region, many professionals develop server management and virtualisation skills. The emphasis on IT skills in the universities and Silicon Valley's investment in regular training workshops make the workforce in the area more talented and skilled for datacentre management.

Climate

Oregon's weather is comparatively mild. This makes the tricky task of datacentre cooling a little easier. It is simpler to devise cooling strategies for a facility when the ambient temperature does not vary widely. Oregon does not get baking hot like Texas or Kansas in the summer, nor does it get overwhelmingly snowed under in winter.

Connectivity

The vast stretches of fibre optic cable that run even across Oregon's mountains, lakes and deserts provide fast connections and latency of just milliseconds. Its proximity to Silicon Valley is another draw for datacentre investment.

Geography, stability and security

Big cloud and IT service providers love political and economic stability, and physical security, and Oregon gives them that. The region is not too prone to natural disasters such as volcanic eruptions, earthquakes or hurricanes - another big attraction for datacentre builders. Take Iceland, for instance: despite its promise of 100% green geothermal energy and fibre optic connections to mainland Europe, many IT providers hesitate to set up datacentres there because of its vulnerability to natural disasters.

Oregon has seismically stable soil and, as part of the west coast, it has little to no lightning risk - one of the major causes of outages in the US.

As Google, which opened The Dalles in 2006 by investing $1.2bn, says, Oregon has the "right combination of energy infrastructure, developable land, and available workforce for the datacentre".

I wonder what Oregon's equivalent in Europe would be?

AWS is not the only pretty one in the room anymore

avenkatraman

It may be too early to conclude that the party at AWS towers is over, but the cloud provider is definitely feeling the heat of the competition and the commodity cloud price wars, as its quarterly earnings report showed.

Amazon's net sales increased 23% to $19.34bn, but it reported a second-quarter net loss of $126m and warned that sales could slow in the current quarter. Amazon's business segment that includes AWS also saw growth drop to 38% year-over-year, after consistent growth rates of between 50% and 60% over the past two years.

Beautiful Bride Barbie - OOAK reroot (Photo credit: RomitaGirl67)

I still remember how Amazon founder Jeff Bezos, at the first ever (2012) AWS re:Invent conference in Vegas, said that a high-margin business is not the right one for AWS.

There is no incentive to be efficient for businesses operating on high margins because they would make profits anyway, said Bezos.

"Operating a low-margin business is harder," he said adding that the AWS business model is very similar to the retailer's Kindle business model - where the money is not made when the device is sold, but when people use it and keep buying services for it. 

But the price cuts - which are becoming more frequent and deeper (65% cheaper) and are driven more by market forces than by internal decisions - are becoming its biggest problem. Since 2008, AWS has slashed cloud services prices 42 times.

AWS has been leading the public cloud price war, almost over-zealously, but other behemoths, including Microsoft and Google, which have equally deep pockets, have been quick to undercut one another in the race to the bottom in cloud services pricing.

Although the cloud market is still growing rapidly, AWS is finding that its share of the larger pie is shrinking, even while its user numbers are still growing. It looks like the growth is not enough to offset the price cuts - and this must be where the problem lies. Customers love discounts and price cuts, but investors don't.

"With Microsoft and Google apparently now serious about this market, AWS finally has credible competitors," says Gartner's public cloud expert Lydia Leong.

In May 2014, Synergy Research Group explained how Microsoft has grown its cloud infrastructure services "remarkably in the last year and is now pulling away from the pack of operators chasing Amazon".

"AWS is likely to continue to dominate this market for years, but the market direction is no longer as thoroughly in its control," Leong says.

AWS is no longer the only pretty one in the room. It is having to make space for Google Cloud Platform, Microsoft Azure, OpenStack, and IBM SoftLayer and also for the ferociously emerging players such as Digital Ocean and Profitbricks. 

Azure brings sunshine to Microsoft's lacklustre earnings. And how!

avenkatraman
Satya Nadella is going to be a happy man as his "mobile-first, cloud-first" strategy gathers momentum. Microsoft's cloud business has reported triple-digit year-on-year growth, the company's earnings report for Q4 ended June 30, 2014 showed.

Microsoft's commercial cloud revenue grew 147% with an annualised run rate that exceeds $4.4bn (£2.58bn) even as the company's overall profit was down 7%. 

"I'm proud that our aggressive move to the cloud is paying off," said chief exec Nadella.
Satya Nadella, Microsoft CEO (Photo credit: tecnomovida)


Other cloud highlights of the Azure provider's results included 11% revenue growth in its Windows volume licensing sales and similar double-digit revenue growth for server products including Azure, SQL Server and System Center.

Its Office 365 Home and Personal subscribers totaled more than 5.6 million, adding more than 1 million subscribers again this quarter. 

 "We are thrilled with the tremendous momentum of our cloud offerings with Office 365 and Azure both growing over 100% again," said Kevin Turner, chief operating officer at Microsoft. 

 As Gartner's research vice president, Merv Adrian told me, "In what was clearly a well-planned posture of demonstrating his command of the whole portfolio, Nadella delivered a strong, visionary picture of Microsoft's 'Digital work and life experiences' stressing the power of its portfolio in enterprise offerings old and new."

"There was good news in enterprise business -- from SQL Server, from "All-up Dynamics" growth, with CRM nearly doubling, and with a commitment to expand Azure footprint and capacity, launch new services and deliver more hybrid cloud tiering," Merv thinks.

While cloud offered a ray of sunshine to the company's earnings, Microsoft blamed the Nokia acquisition for the dent in its profits.

 Microsoft's profit for the quarter March to June 2014 was $4.6bn (£2.7bn), compared with $4.97bn for the same period last year. The company said the Nokia division, which it completed acquiring in April, lost $692m. 

 Last week, Microsoft said it will cut 18,000 jobs - more than 12,000 jobs related to the Nokia phone business division alone. This "restructuring plan to streamline and simplify its operations" is the most severe job cut in the company's 39-year history. 

Microsoft laid claim to impressive cloud revenues in the first quarter of 2014 too, with analysts insisting that the software giant is "now pulling away from the pack of operators chasing Amazon".

AWS was the lone leader in Gartner's Magic Quadrant until June this year, when Microsoft joined its arch-rival in the Leaders quadrant. AWS is beginning to face significant competition from Microsoft in the traditional business market, and from Google in the cloud-native market, noted Gartner analysts Lydia Leong, Douglas Toombs, Bob Gill, Gregor Petri and Tiny Haynes.

The biggest takeaway from Microsoft's earnings announced today is that it is indeed crushing it in cloud sales and riding the cloud momentum.






Cloud-first? Cabinet Office seeks £700m datacentre partner for 'top secret' data

avenkatraman

The Cabinet Office and GDS (Government Digital Service) have issued a service contract notice seeking a private partner that can provide datacentre colocation services to handle UK government's information classified as "official", "secret" and "top secret".

The government has earmarked up to £700m for the four-year datacentre infrastructure agreement.

"The operating environment is to be capable of housing computer infrastructure that initially handles information with UK Government security classification 'official' but there may be a future requirement for Data Centre Colocation Services that handle information with 'secret' and 'top secret' security classification," the government document read. "The provision of secret and top secret [information] would be subject to separate security accreditation and security classification," it added.

The facilities partner must be able to subscribe for a majority shareholding (up to 75% less one share) in the new private limited company established by the Cabinet Office to provide datacentre colocation services - DatacentreCo.

But under the government's Cloud First policy, many existing and new applications will move to the public cloud over the next few years. The Cabinet Office's cloud-first strategy, announced last year, mandated the cloud as the first choice for all new IT purchases in government.

The new potentially £700m datacentre will host 'legacy' applications "not suitable or not ready for cloud hosting or for which conversion to cloud readiness would be uneconomic," the document read.

Cabinet Office, 70 Whitehall, London (next to Downing Street) (Photo credit: Wikipedia)

The Cabinet Office wants the full spectrum of datacentre services - rack space, power facilities, network and security. The datacentre hosting the official and secret information will be spread across an area of 350 sq metres, hosting 150 standard 42U racks. This sounds like a modular datacentre requirement.

And it wants "at least two separate [facility] locations subject to appropriate minimum separation requirements".  

Also on the government wish-list are datacentre compliance with security requirements, scalability, a proven track record over the last three years, performance certificates and specific latency performance requirements (less than 0.5 milliseconds) - to cater to the needs of the initial users: the Department for Work and Pensions, the Home Office and the Highways Agency.

The main aim is to have a datacentre facility that offers a high-quality, efficient, scalable, transparent, service-based ('utility') model - basically cloud-like, but not the cloud.

How long do you reckon we'll have to wait before the government declares "serious over-capacity in datacentres" like it did in 2011?

Cloud's Hollywood moment - as a villain in Cameron Diaz's Sex Tape

avenkatraman

For those still wondering if cloud computing is really mainstream - even Hollywood thinks so. Cameron Diaz's rom-com Sex Tape, releasing next Friday, is all about the dangers of the cloud.

Cameron Diaz (Photo credit: Wikipedia)

The movie stars Diaz and Jason Segel as a couple making a sex tape in an attempt to spice up their boring lives. The video inevitably makes it to the cloud through Segel's iPad, on which it was filmed. The movie tracks how the couple desperately tries to get the video off the cloud while embarrassingly juggling comments from their parents, bosses and even the mailman, who all see it.

Here's some of the dialogue between Diaz (as Annie) and Segel (as Jay):

Annie: (walks in) Honey, that sounds familiar, is that our...

Jay: You know the Cloud?

Annie: Stares ominously before yelling F@#$.

Jay: It went up! It went up to the cloud

Annie: And you can't get it down from the cloud?

Jay: Nobody understands the cloud. It's a f#$@ing mystery.

Whether they succeed in wiping their content off the cloud or not, we'll know only on 18th July. But it looks like a big struggle, with Jay and Annie taking desperate measures like nicking devices belonging to their friends and families, and even breaking network infrastructure, to get the tape off the cloud.

Maybe Jay and Annie are showing, in a satirical manner, how the cloud is a one-way street - easy to get things up (even inadvertently) but damn hard to get them off!

Here's the trailer of Sex Tape starring Cameron Diaz, Jason Segel and The Cloud:

 


Amazon debuts Zocalo to hog SharePoint, Google Drive, Box and Dropbox market shares

avenkatraman

Almost 13 years after Microsoft launched the first version of SharePoint, Amazon has launched its own file sharing and collaboration tool, Zocalo, at the AWS Summit in New York today. Some AWS Summit followers on Twitter have billed Zocalo as a Google Drive and Dropbox killer.

Yes, it's called Zocalo which, according to Wikipedia, is the main plaza or meeting-point in the heart of the historic centre of Mexico City.

A late entrant in the document sharing space (Dropbox took off in 2007), Amazon will offer Zocalo for $5 per user per month for 200GB of storage (Dropbox costs $15), or even free (with only 50GB) with AWS WorkSpaces - the desktop computing service in the public cloud.

According to Amazon, "document sharing and collaboration is a challenge in today's enterprise". Take that SharePoint and Google Drive or even Office 365.

Zocalo has some pretty nifty features such as multi-device support, offline usage, and Word and PowerPoint collaboration, and it integrates with existing corporate directories (Active Directory). But there's a catch, and it's about vendor lock-in - users will have to put their data into Amazon S3 first.

Mexico City Zocalo (Photo credit: Wikipedia)

Will Zocalo really tempt users away from Evernote, SharePoint, Google Drive, Box and Dropbox? I don't know about that, but it is a pretty clear indication of SaaS, PaaS and IaaS convergence in the cloud segment - Zocalo is a purely SaaS service from a primarily IaaS provider. And it also proves how Amazon wants to provide everything that enterprise IT needs (scary?).

Moving to the cloud purely to save costs? Think again

avenkatraman

Organisations turning to the cloud with the sole intention of saving costs are the ones that are least happy with their cloud infrastructure and the ones most likely to give up on cloud adoption.

Recent Cloud Industry Forum research found that in the UK, large enterprises showed the highest rates of adoption, at just over 80%, followed by small and medium businesses. But the public sector's cloud adoption lagged at around 68%.

The study also explored the drivers of cloud adoption and found that the flexibility of cloud as a delivery model was the primary reason for adoption in the private sector while operational cost savings were the main motive for the public sector.

It reminds me of an interesting conversation I had at Cloud World Forum a month ago with Photobox CIO Graham Hobson. Photobox was one of the early adopters of public cloud services - AWS. "When we started, cloud cost was just a fraction (20%) of our total IT spend. Today it is almost equal and I won't be surprised if our cloud costs overtake our on-premises spend soon," Hobson told me.

But that doesn't worry Hobson. In fact he says that public cloud has yielded several benefits in terms of scalability, IT responsiveness and efficiencies for Photobox. "If I was starting a company today, I would have adopted more cloud services than I did a few years ago," he said.

Cloud services operate on a pay-as-you-go model, and although it may look attractively low-cost at the beginning, if your IT requires constant high capacity and high performance, your cloud bill can soar - the rough sketch below illustrates why.
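Here is a back-of-the-envelope sketch in Python - all prices and the fixed-cost figure are hypothetical assumptions for illustration, not any provider's actual rates - showing how a constantly busy workload erodes pay-as-you-go's cost advantage:

# Hypothetical numbers for illustration - not actual cloud provider pricing
HOURS_PER_MONTH = 730
ON_DEMAND_RATE = 0.50       # assumed cost in $/hour while an instance is running
FIXED_MONTHLY_COST = 220.0  # assumed all-in monthly cost of an equivalent owned or reserved server

def monthly_on_demand_cost(utilisation):
    # utilisation: fraction of the month the instance actually runs (0.0 to 1.0)
    return ON_DEMAND_RATE * HOURS_PER_MONTH * utilisation

for utilisation in (0.10, 0.50, 1.00):
    cost = monthly_on_demand_cost(utilisation)
    print(f"{utilisation:.0%} utilised: ${cost:.0f}/month vs ${FIXED_MONTHLY_COST:.0f} fixed")
# At low utilisation pay-as-you-go wins; run flat out all month and the bill soars past the fixed cost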

As I have argued before, cost savings on the cloud come over time, as businesses get the hang of capacity management and scalability, but the main aim of cloud use should be to grow the business and enable new revenue-generating opportunities.

Just as Netflix or CERN or BP did.

The main advantage of cloud computing isn't always cost saving. If anything, cost saving is usually a byproduct of the IT efficiencies found by running IT in the cloud.

The super world of supercomputers

avenkatraman

Last Thursday, I met AWS to learn how users are building supercomputers in the cloud and also to see one being created right in front of me!

Unfortunately, the demo didn't succeed. I don't know if it was buggy code or what, but Ian Massingham, technology evangelist at AWS, wasn't able to create a supercomputer, and he was as disappointed about it as I was, if not more.

But Ian had created one the previous evening -- "Just ran up my first HPC cluster on AWS using the newly released cfncluster demo," read Ian's tweet from the previous day. The link to a demo video AWS sent me subsequently also showed how to get started on cfncluster in 5 minutes.

Amazon cfncluster is a sample code framework, available for free, to help users run high-performance computing (HPC) clusters on AWS infrastructure.

I got to hear how enterprise customers, pharmaceutical companies, scientists, engineers and researchers are building cluster computers on AWS to do some pretty serious tasks, such as research on medicines and assessing the financial standing of companies, all while saving money (my feature article on how enterprises are exploiting public cloud capabilities for HPC will appear on the ComputerWeekly site soon).

And having spent the last two days at the International Supercomputing Conference (ISC 2014) in Leipzig, I feel that high-performance computing, hyperscale computing and supercomputers are the fastest growing subset of IT. HPC is no longer restricted to science labs; even enterprises such as Rolls-Royce and Pfizer are building supercomputers - to analyse jet engine compressors and to research diseases respectively.

Tianhe-2 (Photo credit: sam_churchill)

Take Tianhe-2, the supercomputer developed by China's National University of Defense Technology for research, which retained its position as the world's biggest supercomputer. It has 3,120,000 cores, delivers a performance of 33.86 petaflops (quadrillions of calculations per second) and uses 17,808kW of power. Or the US DoE's Titan - a Cray XK7 system running more than 560,000 cores - or any of the UK's top 30 supercomputers. They are all mind-boggling in their size, compute performance and uses.
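To put those quoted figures in perspective, here is a quick Python calculation of Tianhe-2's energy efficiency from the numbers above (a rough, headline figure only - real-world efficiency depends on the workload):

# Rough energy-efficiency calculation from the figures quoted above
performance_flops = 33.86e15   # 33.86 petaflops
power_watts = 17808e3          # 17,808 kW
gflops_per_watt = performance_flops / power_watts / 1e9
print(f"{gflops_per_watt:.1f} gigaflops per watt")  # roughly 1.9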

Whether in the cloud or on-premises, I didn't hear a single HPC use case in the last two days that wasn't cool or awe-inspiring. Imperial College London, Tromso University in Norway, the US Department of Energy, Edinburgh University and AWE all use supercomputers to do research and computation around things that matter to you and me. As one analyst told me: "From safer cars to shinier hair, supercomputers are used to solve real-life problems."

Now I know why Ian was having a hard time picking his favourite cloud HPC project - they're all cool.








What Cloud World Forum 2014 tells us about cloud

avenkatraman

The sixth annual Cloud World Forum wrapped up yesterday and here's what the event tells us about the state of cloud IT in the enterprise world.

OpenStack is gaining serious traction

OpenStack's big users and providers claimed the cloud technology is truly enterprise-ready because of its freedom from vendor lock-in and its portability features. Big internet companies such as eBay are running mission-critical workloads on OpenStack clouds. Even smaller players such as German company Centralway are using the open source cloud to power their infrastructure when TV adverts create load peaks.

HP says it is "all in" when it comes to OpenStack. It is investing over $1bn in cloud-related products and services, including an investment in the open source cloud. Red Hat has just acquired eNovance, a leader in OpenStack integration services, for $95m. Rackspace and VMware are ramping up their OpenStack services, and IBM has built its cloud strategy around OpenStack.

A skills shortage around building OpenStack APIs into a cloud infrastructure seems to be the only big barrier hindering its widescale adoption.

Rise of the cloud marketplace

The cloud marketplace is fast becoming an important channel for cloud transactions. According to Ovum analyst Laurent Lachal, the company JasperSoft gained 500 new customers in just six months through the AWS marketplace. Oracle, Rackspace, Cisco, Microsoft and IBM have all recently launched cloud services marketplaces.

What does it mean for users? Browsing the full spectrum of cloud services will become as easy for customers as browsing apps in the Apple App Store or Google Play. "As cloud matures, established marketplace seems like a logical evolution. It is a new trend but it gives users a wealth of options in a one-stop-shop kind of way," said Lachal.

Vendor skepticism on the rise

Bank of England CIO John Finch, in his keynote, warned users of "pesky vendors" and cloud providers' promises around "financial upside of using the cloud". Legal experts and top enterprise users urged delegates to understand the SLAs and contract terms very clearly before shaking hands with the cloud providers.

Changing role of CIOs

Cloud is leading to the rise of shadow IT, and it became apparent at the event that CIOs must take on the role of technology broker and educate enterprise users on compliance and security. Technology integration, IT innovation and service brokerage are some of the skills CIOs need to develop in the cloud era.

Questions around compliance, data protection, security on the cloud remain unanswered

Most speakers focusing on the challenges around cloud adoption mentioned security, data sovereignty, privacy, compliance and vendor-friendly SLAs as its biggest barriers.

Not all enterprises using cloud are putting mission-critical apps on public cloud

A lack of trust seems to be the main reason why enterprises are not putting mission-critical workloads on public cloud. Bank of England's Finch just stopped short of saying "never" to public cloud. Take Coca-Cola bottling company CIO Onyeke Nchege, for instance - he's planning to put mission-critical ERP systems on the cloud, but a private cloud. eBay runs its website on an OpenStack cloud - but a private version it built for itself. One reason customers cite is that mission-critical apps tend to be more static and don't need fast provisioning or high scalability.

"It is not always about the technology though. In our case our metadata is not sophisticated enough for us to take advantage of public cloud," said Charles Ewan, IT director at the Met Office.

But there are some enterprises, such as AstraZeneca (running payroll workloads on public cloud) and News UK, which manages its flagship newspaper brands on the AWS cloud.

Urgent need for cloud standards in the EU

A lack of standards and regulations around cloud adoption, data protection and sovereignty, and cloud exit strategies is making cloud adoption messy. Legal technology experts urged users to be "wise" in their cloud adoption until such time as regulations are developed. But regulators and industry bodies, including the European Commission, the FCA and the Bank of England, are inching closer to developing guidelines and regulatory advice to protect cloud users.

Everyone's trying to get their stamp on the cloud

The more-crowded-than-ever Cloud World Forum saw traditional heavyweights (IBM, HP, Dell, Cisco) rub shoulders with a slew of new, smaller entrants as well as public cloud poster-boys such as AWS, Google and Microsoft Azure. Technology players ranging from chip providers to sellers of datacentre cooling services were all there to claim their place in the cloud world.

Why do some cloud projects fail?

avenkatraman

I was at a roundtable earlier this week discussing the findings of an enterprise cloud research study. The findings are embargoed until June 24, but what struck me the most were the numbers around failed or stalled cloud projects.

And that led me to discuss it more with industry insiders. Here are a few reasons why cloud projects might fail:

  • Using cloud services but not using them to address business needs

One joke doing the rounds in the industry goes a bit like this - the IT head tells his team: "You lot start coding, I'll go out and ask them what they want."

But the issue of not aligning business objectives with IT is still prevalent. The latest study by Vanson Bourne found that as many as 80% of UK CIOs admit to significant gaps between what the business wants and when IT can deliver it. While the average gap cited was five months, it ranged from seven to 18 months.

  • Moving cloud to production without going through the SLAs again and again. And again

If one looks at the contracts of major cloud providers, it becomes apparent that the risk is almost always pushed onto the user and not the provider - be it around downtime, latency, availability or data regulations. It is one thing to test cloud services and quite another to put them into actual production.

  • Hasty adoption

Moving cloud to production hastily, without testing and piloting the technology enough and without planning management strategies, will also lead to failure or disappointment with cloud services.

  • Badly written apps

If your app isn't configured correctly, it shouldn't be on the cloud. Just migrating badly written apps to the cloud will not make them work. And unless you are a marquee customer, your cloud provider will not help you with it either.

  • Being obsessed with cost savings on the cloud

One expert says those who adopt cloud for cost savings fail; those who use it to do things they couldn't do in-house succeed. Cost savings on the cloud come over time, as businesses get the hang of capacity management and scalability, but the primary reason for cloud adoption should be to grow the business and enable new revenue-generating opportunities. For example, News UK adopted cloud services with the aim of transforming its IT and managing its paywall strategy. Its savings were a byproduct.

  • Early adoption of cloud services... Or leaving it too late

Ironic as it may sound, if you are one of the earliest adopters of cloud, chances are that your cloud is an early iteration and may not be as rich in features as newer versions. It may even be more complex than current cloud services. For instance, there is a lot of technical difference between the pre-OpenStack Rackspace cloud and its OpenStack version.

If you've left it too late, then your competitors are ahead of the curve and other business stakeholders influence IT's cloud buying decisions.

  • Biased towards one type of cloud

Hybrid IT is the way forward. Being too obsessed with private cloud services will lead to deeper vendor lock-in, while adopting too much public cloud will lead to compliance and security issues. Enterprises must not develop a private cloud or a public cloud strategy, but use the cloud elements that best solve their problems. Take Betfair, for instance: it uses a range of different cloud services. It uses the AWS Redshift warehouse service for data analytics, but uses VMware vCloud for automation and orchestration.

  • Relying heavily on reference architecture

Cloud services are meant to suit individual business needs. Replicating another organisation's cloud strategies and infrastructure is likely to be less helpful.

  • Lack of skills and siloed approach

Cloud may indeed have entered mainstream computing, but the success of a cloud project depends directly on the skills and experience of the team deploying it. Hiring engineers and cloud architects with AWS experience to build a private cloud may backfire. Experts have also called on enterprises to embrace DevOps and cut down on the siloed approach to succeed in the cloud. British Gas hired IT staff with the right skills for its Hive project, built on the public cloud.

  • Viewing it as in-house datacentre infrastructure or traditional IT

Cloud calls for new ways of IT thinking. Just replacing internal infrastructure with cloud services, but using the same IT strategies and policies to govern the cloud, might result in cloud failure.

There may be other enterprise-related problems, such as lack of budget, cultural challenges or legacy IT, that result in failed or stalled cloud projects, but more often it is the strategy (or the lack of it) that is to blame rather than the technologies.

My 10 minutes with Google's datacentre VP

avenkatraman

Google's Joe Kava speaking at the Google EU Data Center Summit (Photo credit: Tom Raftery)

At the Datacentres Europe 2014 conference in Monaco, I had a chance not just to hear Google's datacentre VP Joe Kava deliver a keynote speech on how the search giant uses machine learning to achieve energy efficiency, but also to speak to him individually for 10 minutes.

Here is my quick Q&A with him:

What can smaller datacentre operators learn from Google's datacentres? There is a feeling among many CIOs and IT teams that Google can afford to pump millions into its facilities to keep them efficient.

Joe Kava: That attitude is not correct. In 2011, we published an exhaustive "how to" guide explaining how datacentres can be made more energy efficient without spending a lot of money. We can demonstrate it through our own use cases. Google's network division, which is the size of a medium enterprise, had a technology refresh, and by spending between $25,000 and $50,000 per site we were able to improve its high-availability features and bring its PUEs down from 2.2 to 1.5. The savings were so high that the IT spend paid back in just seven months. You show me a CIO who wouldn't like a payback in seven months.
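(To put that PUE improvement in context, here is a minimal back-of-the-envelope sketch. The IT load and electricity price are illustrative assumptions of mine, not figures from the interview.)

    # Back-of-the-envelope estimate of savings from a PUE improvement.
    # PUE = total facility energy / IT equipment energy, so total
    # facility energy = IT energy * PUE. The IT load and tariff below
    # are illustrative assumptions, not figures from the interview.

    IT_LOAD_KW = 500            # assumed average IT load of one site
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.10        # assumed electricity price, $/kWh

    def annual_facility_kwh(pue, it_load_kw=IT_LOAD_KW):
        """Total facility energy per year implied by a given PUE."""
        return it_load_kw * HOURS_PER_YEAR * pue

    saved_kwh = annual_facility_kwh(2.2) - annual_facility_kwh(1.5)
    print(f"Energy saved per year: {saved_kwh:,.0f} kWh")
    print(f"Cost saved per year:   ${saved_kwh * PRICE_PER_KWH:,.0f}")

On those assumed numbers, the same IT load needs roughly a third less total energy, which is why the payback period can be so short.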

Are there any factors, such as strict regulations, that are stifling the datacentre sector?

It is always better for an industry to regulate itself than to have the government do it; self-regulation fosters innovation. Many players in the industry voluntarily regulate themselves on data security and carbon emissions. One example is how, since 2006, the industry has rallied behind the PUE metric and taken energy-efficiency tooling to heart.

What impact is IoT having on datacentres?

Joe Kava: IoT (internet of things) is definitely having an impact on datacentres. As greater volumes of data are created and mass adoption of the cloud takes place, IT will naturally have to think differently about datacentres and their efficiency. IoT brings a huge set of opportunities to datacentres.

What is your one piece of advice to CIOs?

You may think I am saying this because I am from Google, but I strongly feel that most people who operate their own datacentres shouldn't be doing it. It's not their core competency. Even if they do everything correctly, and even if they have a big budget to build a resilient, highly efficient datacentre, they cannot compete with the quick turnaround and scalability that dedicated third-party providers can offer.

Tell us something about Google's datacentres that we do not know

It is astounding what we can achieve in terms of efficiency with good old-fashioned testing, development and diligence. The datacentre team constantly questions the parameters and pushes the boundaries to find new ways to save money through efficiency. We design and build a lot of our own components, and I am not just talking about servers and racks. We even design and build our own cooling infrastructure and develop our own components for the power architecture that goes into a facility.

It is a better way of doing things.

Are you building a new datacentre in Europe?

(Smiles broadly) We are always looking at expanding our facilities.

How do you feel about the revelations of the NSA surveillance project and how they have affected third-party datacentre users' confidence?

It is a subject I feel very strongly about, but it is a question I will let Google's press and policy team handle.

Thank you, Joe.

Thank you!

 



No such thing as absolute freedom from vendor lock-in, even in open source, proves Red Hat

avenkatraman

OpenStack is a free, open source cloud computing platform that promises users freedom from vendor lock-in. When it was alleged that Red Hat would not support customers who run other distributions of OpenStack on its Linux operating system, its president Paul Cormier passionately set out the company's open source vision but steered clear of stating outright that it WILL support its users no matter which version of OpenStack they use.

Any CIO worth his salt will admit that support services can be a deal-breaker when deciding to invest in technology.

Red Hat customers opt for the vendor's commercial version of Linux (RHEL) over free Linux distributions because they want its support services to make their IT enterprise-class. This has helped Red Hat build a $10bn empire around Linux and become the dominant provider of commercial open source platforms.

OpenStack (Photo credit: Wikipedia)

So when Cormier says, "Users are free to deploy Red Hat Enterprise Linux with any OpenStack offering, and there is no requirement to use our OpenStack technologies to get a Red Hat Enterprise Linux subscription," and, separately, "Our OpenStack offerings are 100% open source. In addition, we provide support for Red Hat Enterprise Linux OpenStack Platform," customers are still likely to pick Red Hat's OpenStack cloud on Red Hat's operating system, resulting in supplier lock-in.

Cormier justified the position: "Enterprise-class open source requires quality assurance. It requires standards. It requires security. OpenStack is no different. To cavalierly 'compile and ship' untested OpenStack offerings would be reckless. It would not deliver open source products that are ready for mission-critical operations, and we would never put our customers in that position or at risk."

Yes, Red Hat has to seek growth from its cloud offerings, and as an open source leader it has to protect the reputation of the open cloud as enterprise-ready.

Red Hat's efforts in the open source industry are commendable. For instance, it acquired Ceph provider Inktank last month and said it would open source Inktank's closed source monitoring offering.

But as open source's poster child, it also has a responsibility to contribute to the spirit of the open cloud and to invest more in open source technology, giving users genuine freedom to choose the cloud they like.

Competition among cloud providers is getting fiercer. To grab a larger share of the growing market, some providers are slashing prices while others are differentiating with managed services. But snatching flexibility and freedom from cloud users is never a good idea.

But it would be unfair to single out Red Hat over opening up its ecosystem. HP, IBM, VMware and Oracle are all part of the OpenStack project, and all have their own versions of OpenStack cloud.

As Cormier says, "We would celebrate and welcome competitors like HP showing commitment to true open source by open sourcing their entire software portfolio."

Until then it's a murky world. What open source? What open cloud? 



Using cloud for test and development environments? Avoid this costly mistake

avenkatraman

Using cloud services for application testing or software development is becoming a common practice because of cloud's scalability, agility, ease of deployment and cost savings.

But some users are not reaping the cost-saving benefit, and in some cases are even seeing cloud costs soar, because of a simple error: they are not turning instances off when they are not in use.

Time and again, purveyors of cloud computing have highlighted scalability as its hallmark, and time and again users have listed the ability to scale resources up and down as one of the cloud's biggest cost-saving factors.

But when discussing cloud costs and myths with a public cloud consultancy recently, I was shocked to learn that many enterprises using cloud for testing and development forget to scale down their test environment at the end of the day and end up paying for idle IT resources, defeating the purpose of using cloud computing in the first place.

Building a test and dev lab in the cloud has its benefits: it saves the team the time of building the entire environment from the ground up, and should the new software not work, they can launch another iteration quickly. But the main benefit is the lower cost.

But busy app testers and software developers may leave those instances running, paying for compute and storage through the hours of the night when no activity takes place on the infrastructure.

On the public cloud, turning off unused instances and capacity does not delete the testing environment, so developers can simply scale the system back up the next day and pick up where they left off.
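As a minimal sketch of how that end-of-day shutdown might be automated on AWS using the boto3 SDK (the region and the "environment: test" tag are illustrative assumptions, not anything prescribed by AWS), a script along these lines stops every running instance tagged for test and dev without deleting it:

    import boto3  # AWS SDK for Python; assumed installed and configured with credentials

    ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

    def stop_tagged_instances(tag_key="environment", tag_value="test"):
        """Stop (not terminate) all running instances carrying the given tag.

        Stopped instances keep their volumes, so the test environment
        survives overnight and can be started again in the morning.
        """
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": f"tag:{tag_key}", "Values": [tag_value]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]

        instance_ids = [
            inst["InstanceId"]
            for res in reservations
            for inst in res["Instances"]
        ]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return instance_ids

    if __name__ == "__main__":
        stopped = stop_tagged_instances()
        print(f"Stopped {len(stopped)} test/dev instances: {stopped}")

A matching start_instances call, run first thing in the morning or on demand, brings the same environment back.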

But the practice of leaving programs running on the cloud is so common that cloud suppliers, management companies, and consultancies have all developed tools to help customers mitigate this waste.

For instance, AWS provides CloudWatch alarms, which let customers set thresholds on their instances so that they shut down automatically when idle or underutilised.
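As a rough illustration of such an alarm (the instance ID, region and thresholds below are placeholders of my choosing, not AWS defaults), CloudWatch's built-in EC2 stop action can be attached to a low-CPU alarm via boto3:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")  # assumed region

    # Stop the instance if average CPU utilisation stays below 5%
    # for six consecutive one-hour periods, i.e. an idle evening.
    cloudwatch.put_metric_alarm(
        AlarmName="stop-idle-test-instance",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
        Statistic="Average",
        Period=3600,
        EvaluationPeriods=6,
        Threshold=5.0,
        ComparisonOperator="LessThanThreshold",
        # Built-in CloudWatch action that stops the EC2 instance when the alarm fires.
        AlarmActions=["arn:aws:automate:eu-west-1:ec2:stop"],
    )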

Another tool it offers is AWS Trusted Advisor - available for free to customers on Business Level Support, or above. It looks at their account activity and actively shows them how they can save money by shutting down instances, buying Reserved Instances or moving to Spot Pricing.

"In 2013 alone, it generated more than a million recommendations for customers, helping customers realise over $207m in cost reductions," AWS spokesman told me.

Cloud costs can be slashed by following good practices in capacity planning and resource provisioning. But that operates at a strategic level; quicker savings can be achieved with simple, common-sense measures such as running instances only when necessary.

Perhaps it is time to think of cloud resources as utilities: if you don't leave the lights on when you leave work, why leave idle instances running on the pay-as-you-use cloud?

That's $207m in IT efficiency savings for customers of just one cloud provider. Imagine.

 
