Why We Should Maybe Get Hyper Over A Cloud...

This time last year I was basking in the icy cool, but mercifully dry, climate of Berlin, testing SoftIron’s HyperCloud product in the company’s labs there and very much appreciating the fantastic support of its resident techs.

If you read the report: https://softiron.com/resources/broadband-testing-hypercloud-product-review/ – you will see that I was more than impressed with what the SoftIron guys had created in what was a short space of time, and with an R&D team smaller than the reception staff at Microsoft’s HQ in Redmond. Probably. Twelve months on and it’s that time of year when we generate all kinds of predictions for the year ahead, create wish lists and make resolutions that we resolutely won’t stick to (unless it involves glue, maybe: note to old musician mates – glue can also be used for sticking things together).

But before we get onto what I imagine a next generation of HyperCloud might hopefully include (allowing for the reality that said R&D team is still not quite up there, size-wise, with AWS), let us take a look at what is actually important in the cloudy world of private and public cloud as we head into 2024 (which sounds like a year from a sci-fi movie I watched as a kid). Over the past few weeks, I’ve been in conversation with many vendors at the heart of, or at least on the fringes of, what I’ll call the cloud revolution – i.e., trying to take the concept as outlined by the likes of AWS, GCP, Azure et al., and turn it into something that is actually more relevant to the vast majority of businesses investing in IT and looking for a better ROI and lower TCO (enough TLAs already!).

Digression: many years ago, I recall reading an IBM press release which consisted of nothing other than TLAs connected with words such as “and”, “the” and “new”. The words “paradigm” and “shift” probably featured heavily too.

Going back a few years in my own blog world, I recall describing the first-generation cloud as an attempt to reinvent the mainframe – both in real and virtual terms. However, given that so many actual mainframes are not only still in use but underpin many businesses, it is clear that yer big computer (“our first mainframe was bigger than Texas and it had 20KB of storage and 1KB of memory”) hasn’t been usurped by the concept of a load of data centres scattered across the universe, accessed primarily via t’Interweb. The reality is that many companies that have moved towards at least a hybrid cloud strategy have been disappointed by a number of cloudy factors, notably cost, scalability and flexibility. And cost. Oh, did we say that already? Then there’s the issue of data sovereignty, provenance and governance, and the potentially hefty fines that come with a lack of compliance. But, as I always say to those companies somewhat shocked by the reality of their “cloud experience”, the likes of Amazon, Google and Microsoft aren’t charities – they exist to make (lots of) money – from your money. And then there’s the reality that, depending on the nature of the data and the business, moving everything to the cloud is simply a non-starter.

There’s been a lot of talk recently about the concept of technical debt. We’ve all, at some or many points in our lives, suffered what we might call buyer’s remorse: “why did I purchase that?”. Well, in a simplistic fashion, that’s kind of what technical debt is; generally, it’s an investment (often enormous) in technology that, unlike a fine wine, hasn’t aged well and/or simply wasn’t fit for purpose. There’s nothing new about technical debt; remember the old spectre of the “forklift upgrade”, when an investment in – say – Ethernet switches literally hit the buffers as port and switch limits were surpassed, with no means of expansion beyond them. The reason technical debt is seemingly so manifest and painful right now is because, well, IT really has changed from the ground up in recent times. Just look at DevOps, for example. The very real idea of “cloud native” applications being the only “dev” direction to go in is rendering entire IT infrastructures redundant and, no, that’s not being sensationalist or scaremongering – it is a very real situation.

In some cases, the very building blocks of technology that you based your infrastructure on get acquired and then basically ignored by the new owner. On that subject, there’s been a LOT of talk about the future of VMware since Broadcom acquired it, and why so many companies are contemplating dropping it like the proverbial hot potato, as if the maintenance and renewal costs were not sufficient reason in the first place… The question, therefore, is: what do you do about it? Well, throwing more money at an increasingly expensive hybrid cloud or VMware infrastructure doesn’t seem to be any kind of panacea, just a recipe for potentially more technical debt.

Increasingly, I’m thinking that the idea of reinventing the mainframe does still make sense, but that public cloud never was – and really isn’t – the answer. Going back 12 months to my first in-depth appraisal of SoftIron’s HyperCloud solution, my initial view was that it sounded like a great way of repatriating data and ownership of your IT, and thereby relying less on a hybrid cloud strategy and the offerings of AWS et al. Now I’m thinking: what if something like HyperCloud IS the new mainframe? Why should private cloud be just a bit-part player, rather than taking the lead role? After all, the basic promises of public cloud – simplification, near-limitless scalability and flexibility, efficiency, a secure environment, reduced reliance on ever-scarcer human skill sets and an associated reduction in costs while optimising productivity – are unarguable. The benefits of OnPrem – ownership, control, predictability – are as strong as ever, and are the reasons why OffPrem hasn’t taken over the world of IT. So, if you could combine the two…

Which brings me back to the starting point: what would I love to see evolve from the first generation of HyperCloud I saw 12 months ago? Increased performance, scalability and integration are all givens. More automation in the deployment and configuration of a HyperCloud would be great, be it via templates, wizards, mind-reading… Oh, and in tandem, a more extensive management interface, with more directly accessible features, more readily available operational stats and status checks, reports, maybe a bit of AI-generated haiku and an integrated sushi maker to go with it? Or am I asking too much? It is the month of the “C” word, after all…

Meantime, Merry Christmas And A Happy New Year (and since, by the time some of you read this, Easter eggs will be in the shops, Happy Easter too).
