Optimistic Cloud Forecast - IT Managers No Longer Left In The Shade...

Remember the early days of cloud computing? All the promises – low cost, scalable, simple to deploy and manage, secure, control of your own data and apps without the OnPrem hassle… It all sounded like nirvana to the overworked IT management team: “Finally, my prayers have been answered as to how to enable and manage digital transformation”.

The reality, however, has been somewhat different. The first stumbling block was the “simplicity” – moving resources to the cloud was often anything but simple, given the raft of choices involved, not just initially but on an ongoing basis, as the “scalability” then proved rather more limited than anticipated. And then came the bills! And as for repatriating data and apps, should that prove necessary or beneficial further down the line…

It’s not as if cloud computing was actually a new concept; but even those who understood it was a combination of reinvented outsourcing and a streamlined network architecture that had been previewed many times over (Frame Relay, ATM, BGP, VPLS, collapsed backbones anyone?) had to assume that it was going to be a better alternative than any of the options that had preceded it. Otherwise, why would it exist? It certainly wouldn’t exist without virtualisation – that being a fundamental of cloud – but even then, IBM invented virtual machines back in 1972 and VMware has been around since 1998. However, none of that would matter if cloud computing really did advance IT; what’s in a date and timeline after all?

The problem is that, to date, the cloud computing experience has been, for many, exactly as noted above: more expensive than anticipated, more restricted than anticipated and more complex than anticipated. For the CSP (Cloud Service Provider) giants, however, it can be argued that these are not issues, merely revenue-earning opportunities. But for the cloud computing consumer – aka the customer – there surely has to be a better interpretation or offering of the cloud computing concept? Recent conversations with US-based Qumulo have centred on precisely this topic – how to make the cloud concept affordable, truly scalable and flexible enough, in terms of the feature set, to let a company actually manage data and apps in the way it wants to, rather than in the fashion dictated by the CSP.

So, with perfect timing, Qumulo has announced a potential – and very promising sounding – solution to said problem, with its Scale Anywhere Platform. Speaking with Ryan Farris, VP of products at Qumulo, the idea is to take all the perceived issues and limitations associated with the current cloud computing offering and quite simply resolve them. Well, I say “simply”; from a development perspective it would certainly not have been, but the finished offering is designed to look exactly that. Fundamental to the new platform is the ability for a customer to manage and organise their data wherever they want and whenever they want. Some of the primary issues many cloud customers have encountered lie in the murky waters of data governance and data provenance, given the global data protection regulations now in force. This complete flexibility resolves those issues, as well as meaning that better – far better – economies are available.

Farris noted that: “The economic agility of the cloud is definitely a thing that is appropriate for some workloads, but if you are traditionally file, it’s not that attractive sometimes for customers to move their data in the cloud, for a couple reasons. Most of it is cost”. And, since the Qumulo solution is pure software, long-term hardware independence is another objective achieved; otherwise, we’re back to the days of being “locked-in” to a vendor, the biggest customer moan of the 90s and 2000s.

Here are a few examples of said flexibility and scalability. Firstly, Qumulo promises that it’s basically an “instant on” service – “You can spin it up in 12 to 15 minutes and just get started, and immediately scale”. That cannot be said of every (any?) cloud service currently. You want reliability? The platform claims 12 nines of durability – remember when five nines was seen as perfection (and cost appropriately)? It’s a truly hybrid offering, with object storage used underneath whichever access protocol – say NFS or SMB – is in play, and you can scale up AND down as required. Note that latter capability 😊 That’s not to say that performance is restricted; 1Gbps of throughput is included in the basic service; if you want 100Gbps – or any speed in between – Qumulo offers that too.

To cut to the chase, as they say, it means that you’re only paying for the throughput and performance you need, as and when you need it. Remember the old mantra of “bandwidth on demand”, followed by “scalability on demand”, and how many false – or very expensive – promises there were? This appears to be the antidote. Again – to the point – Qumulo is hiding nothing, to the extent of including a TCO calculator, allowing a customer to mix ’n’ match different speeds, feeds and storage and see what it costs. And the company can see and tell a customer what they are using and when, so it takes the guesswork out completely.
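To make that mix ’n’ match idea a little more concrete, here’s a minimal sketch of the sort of sum such a calculator performs – the rates below are purely illustrative assumptions of mine, not Qumulo’s actual pricing:

```python
# Hypothetical TCO sketch: illustrative rates only, not Qumulo's actual pricing.

def monthly_cost(capacity_tb: float, throughput_gbps: float,
                 capacity_rate: float = 20.0,      # assumed $/TB/month
                 throughput_rate: float = 50.0):   # assumed $/Gbps/month
    """Estimate a monthly bill from capacity plus provisioned throughput."""
    return capacity_tb * capacity_rate + throughput_gbps * throughput_rate

# Compare a modest 1Gbps setup against burstier 10Gbps and 100Gbps ones.
for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps, 500 TB -> ${monthly_cost(500, gbps):,.0f}/month")
```

The point being that, because you can scale down as well as up, the throughput term only has to be paid for while you actually need it.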

A key element of the new platform is what Qumulo is calling Global Namespace (Q-GNS). This sits at the heart of the Qumulo architecture – a unified data plane that can host a company’s entire unstructured data, from edge to core and cloud. The phrase “moving from data centres to centres of data” was used to define it, and it perfectly sums up the freedom the Qumulo solution potentially gives you with this underlying architecture.
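As a rough mental model – my own illustration, not Qumulo’s implementation – a global namespace amounts to one logical tree whose branches resolve to different physical locations; the paths and site names below are entirely made up:

```python
# Toy illustration of the "one namespace, many locations" idea.
# Paths and location names are hypothetical, for illustration only.

NAMESPACE = {
    "/projects/render":   "edge-site-london",
    "/projects/genomics": "core-datacentre",
    "/archive/2019":      "azure-cool-tier",
}

def resolve(path: str) -> str:
    """Return the backing location for the longest matching path prefix."""
    match = max((p for p in NAMESPACE if path.startswith(p)), key=len, default=None)
    return NAMESPACE[match] if match else "core-datacentre"

print(resolve("/projects/render/shot42.exr"))   # -> edge-site-london
print(resolve("/archive/2019/q3/report.pdf"))   # -> azure-cool-tier
```

The user just sees one tree; where each branch physically lives is a policy decision, not something baked into the application.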

From a user perspective, it means the ability to access remote data as if it were local and maintain a consistent user experience; real-time apps requiring latency-free data access are catered for just as well as “in your own time” archived data. From an IT manager perspective, it’s about reducing the cost and complexity of managing that global data footprint and increasing agility so that, put simply, any type and level of user or data owner can just grab the data when they need it. From a “where does that data need to reside?” perspective, it’s a case of whatever policy and strategy suits the customer: more cloud, less cloud, back OnPrem… that’s what Qumulo means by flexibility.

That data consistency is achieved using a broad spread of technical capabilities – for example, data pre-fetching and caching, plus locking semantics based on NFS leasing (guaranteeing that the user is always accessing the latest version of a file). Farris summed up the concept of Q-GNS noting: “Any instance can be shared. So, it doesn’t matter if it’s a Qumulo onsite in a Data Centre, if it’s at the edge, or if it’s a cloud footprint in Azure – it’s just a Qumulo instance”.
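For a feel of how lease-based locking keeps cached copies honest, here’s a stripped-down sketch of the general NFS-leasing idea – my simplification, not Qumulo’s code: a client serves reads from its cache only while it holds a valid lease, and goes back to the server for the authoritative copy once the lease lapses.

```python
import time

LEASE_SECONDS = 5  # assumed lease duration, purely for illustration

class CachedFile:
    """Toy model of lease-based read caching (the general idea, not Qumulo's code)."""

    def __init__(self, server_fetch):
        self._fetch = server_fetch       # callable returning the latest file contents
        self._data = None
        self._lease_expiry = 0.0

    def read(self):
        now = time.monotonic()
        if self._data is None or now >= self._lease_expiry:
            # Lease expired (or first read): fetch the authoritative copy
            # from the server and take out a fresh lease.
            self._data = self._fetch()
            self._lease_expiry = now + LEASE_SECONDS
        return self._data  # served from local cache while the lease is valid

# Usage: reads within the lease window hit the cache; later reads revalidate.
f = CachedFile(lambda: "contents as of " + time.strftime("%H:%M:%S"))
print(f.read())
time.sleep(1); print(f.read())   # same cached copy, lease still valid
```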

Farris cited archiving as a great example of that flexibility at work: data can be archived wherever it is best suited, from rapidly and regularly accessed at one extreme to rarely accessed at the other, whether OnPrem or in the cloud, whichever is the least-cost scenario in each case.

He noted: “If you’re looking at, instead of just TCO on a single cluster, a single unit, then you can broaden that out from primary storage that’s sitting OnPrem to an optional archive that you can pursue either in the cloud or in another archive OnPrem. And in either case, you’re starting to then play with these ratios. And if 80% of your data is cold, then your footprint might lower by half, or similar”.
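As a back-of-envelope illustration of that ratio argument – using my own made-up rates, not Farris’s figures – tiering the cold 80% of a footprint to an archive costing a fraction of primary storage can indeed roughly halve the bill:

```python
# Back-of-envelope tiering sketch: illustrative $/TB rates, not real pricing.

total_tb = 1000
cold_fraction = 0.80          # "if 80% of your data is cold"
primary_rate = 20.0           # assumed $/TB/month for primary storage
archive_rate = 7.5            # assumed $/TB/month for archive storage

all_primary = total_tb * primary_rate
tiered = (total_tb * (1 - cold_fraction) * primary_rate
          + total_tb * cold_fraction * archive_rate)

print(f"All on primary storage:        ${all_primary:,.0f}/month")
print(f"Hot on primary, cold archived: ${tiered:,.0f}/month")
print(f"Roughly {tiered / all_primary:.0%} of the original cost")
```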

Flexibility, scalability, (much) lower TCO. Sounds too good to be true? Well, as a hard-nosed IT veteran who has been there and worn every vendor’s freebie t-shirt through four decades of fashion change, of course it sounds too good to be true, but… it all pans out when you look at the rationale and underlying tech – here’s looking forward to getting my hands on the Scale Anywhere Platform in the near future. As they say, watch this space!

 
