The abstraction maelstrom: inside the vortex of real cloud planning 

This is a guest post for the Computer Weekly Developer Network written by Dick Morrell, who tweets at @ThatPodcastChap and describes himself as a father, podcaster, a Linux veteran and open source warrior. Morrell is also co-founder of SmoothWall, a provider of real-time, content-aware web filtering, monitoring and records management solutions.

Morrell uses his several decades of experience working as a technology evangelist and commentator to explain what’s really happening in cloud implementations and why software engineers might just be finding the magical promise of cloud simplicity rather tougher than expected. As well as the above noted positions, Morrell heads up cloud and security training at QA.com and is a cloud and security ambassador for OpenUK.

Cloud arrived… and, all of a sudden, the premise was simple, the memes were commonplace – and “cloud is just someone else’s computer”, they told us.

It was printed on adhesive stickers and it adorned the laptops of Agile types tasked with designing and building out the core ambitions of their masters, as the journey to cloud transformed from a dual carriageway into public cloud, which in turn gave way to a call to embrace edge computing and, subsequently, the enforced nature of hybrid and multi-cloud.

Recently at Red Hat Summit, my old boss Paul Cormier made the bold statement that every CIO “was now a cloud operator”… and within Red Hat I witnessed first-hand the development of a core cloud team as early as 2010; that team wanted to fundamentally understand the business drivers of the move from virtualisation to cloud.

We acquired the technologies that would become standard-bearers for provisioning and migrating hypervisors and applications: acquisitions like Qumranet (KVM) and Makara (OpenShift), and later bringing Ansible back into the fold.

Through my work with the Cloud Security Alliance and at Red Hat, I think I have gained an appreciation of the need for diplomacy when sitting in the boardroom of a FTSE 100 organisation whose cloud journey has stalled, become entirely entwined in a mess of multiple cloud locations each with significant challenges, or run into drastic cost overruns.

Cumbersome incumbents

There is often too much reliance on incumbent providers and too much reliance on DevOps teams who have their enshrined ways of working with chosen tools, platforms and silos within their organisations.

There was a cold realisation that many of these major household-name organisations (on both sides of the Atlantic) had “built a cloud out of what they already had” and so never really grasped the new world of virtualisation. Or, even more commonly, they had taken the misstep of migration and swallowed in full Amazon’s (and formerly Gartner’s) 6 R’s of cloud (Rehost, Replatform, Repurchase, Refactor, Retire and Retain), wielding them as a sledgehammer to crack a nut.
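To make the point concrete, here is a minimal, purely illustrative Python sketch of the 6 R’s treated as a workload-by-workload triage exercise rather than a blanket mandate; the workload attributes and decision rules below are invented for the example, not drawn from any real assessment.

```python
# Hypothetical workload inventory; in reality this would come from an
# application portfolio assessment, not a hard-coded list.
WORKLOADS = [
    {"name": "legacy-payroll", "os_supported": False, "business_critical": True, "usage": "high"},
    {"name": "internal-wiki", "os_supported": True, "business_critical": False, "usage": "low"},
    {"name": "order-api", "os_supported": True, "business_critical": True, "usage": "high"},
]

def suggest_r(workload: dict) -> str:
    """Return one of the 6 R's for a workload, using crude example rules."""
    if workload["usage"] == "low" and not workload["business_critical"]:
        return "Retire"      # nobody will miss it
    if not workload["os_supported"]:
        return "Retain"      # leave it where it is until it can be replaced
    if workload["business_critical"] and workload["usage"] == "high":
        return "Refactor"    # worth the engineering spend
    return "Rehost"          # default lift-and-shift

for w in WORKLOADS:
    print(f"{w['name']:15} -> {suggest_r(w)}")
```

The point of the sketch is simply that the 6 R’s are a per-workload decision, not a single answer to apply to an entire estate.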

With prices on storage and classes of service now so intensely competitive across global cloud providers, the temptation for major organisations to put all their eggs in one basket is enormous. Technical innovation and investment by Google, Microsoft and Amazon is creating a gulf between the three core worldwide players and the rest of an otherwise healthy cloud marketplace.

The great tear-up tear-down swindle

Cormier’s analyst-researched proclamation that many of the larger Red Hat customers now run as many as seven clouds (up from IDC analyst Gary Chen’s figure of four in 2016) means continued conformity and ownership issues in a fast-evolving marketplace.

We know that governance and certification have frankly not kept up with the tear-up and tear-down mentality afforded to us by the promise of CI/CD, and that overly complex single-pane-of-glass cloud management solutions simply do not pass muster when examined in detail.
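A minimal sketch of the kind of governance check that rarely keeps pace with that tear-up/tear-down cadence: scanning an environment inventory for resources that CI/CD spun up but never tore down. The inventory, field names and thresholds here are invented for illustration; a real version would read from the provider’s APIs or a CMDB.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of "temporary" environments created by pipelines.
INVENTORY = [
    {"id": "env-001", "owner": "team-payments", "ttl_hours": 24,
     "created": datetime(2022, 5, 1, tzinfo=timezone.utc)},
    {"id": "env-002", "owner": None, "ttl_hours": 4,
     "created": datetime.now(timezone.utc) - timedelta(hours=2)},
]

def stale_or_unowned(inventory, now=None):
    """Yield environments that have outlived their TTL or have no owner."""
    now = now or datetime.now(timezone.utc)
    for env in inventory:
        expired = now - env["created"] > timedelta(hours=env["ttl_hours"])
        if expired or env["owner"] is None:
            yield env["id"], "expired" if expired else "no owner"

for env_id, reason in stale_or_unowned(INVENTORY):
    print(f"{env_id}: flagged ({reason})")
```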

As a security practitioner, the fact that cloud security is always a post-mortem activity has concerned me for many years, and I see no change in my stance. The promise of service mesh and event log data has now become a heady mess of noise emanating from multiple platforms and on-premises applications and services.

Mind the SaaS security gap

Morrell: Time to wake up, CI/CD doesn’t tear up/down quite like it says on the packet.

There is a gap in the marketplace for what Gartner and others once mooted as Security-as-a-Service, a notion floated without really considering the cacophony of unstructured and mislabelled event and application data coming from so many control planes, applications and automated container platforms.

With metrics, trends, logs and sidecar-based proxies all pushing information at speed and consuming CPU power, the ethos is uptime and infrastructure-as-code to automate and to build. But that ethos does not stop to consider the pollution from the exhaust pipe of the communal cloud car, the very data that would allow us to learn more and to advance the security of our transitional journey across multiple clouds.
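To illustrate why that exhaust is so hard to learn from, here is a minimal Python sketch that tries to map security events from different platforms onto one tiny common schema; the records, field names and schema are invented for the example, not any vendor’s actual log format.

```python
import json

# Three hypothetical "login" events describing broadly the same thing,
# each shaped differently by the platform that emitted it.
RAW_EVENTS = [
    '{"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.7", "time": "2022-06-01T10:00:00Z"}',
    '{"operationName": "SignIn", "callerIp": "203.0.113.7", "timestamp": "2022-06-01T10:00:02Z"}',
    'login ok ip=203.0.113.7',   # a flat syslog-style line with no labels at all
]

def normalise(raw: str) -> dict:
    """Map one raw event onto a small common schema, or mark it as noise."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return {"action": None, "source_ip": None, "noise": True, "raw": raw}
    return {
        "action": event.get("eventName") or event.get("operationName"),
        "source_ip": event.get("sourceIPAddress") or event.get("callerIp"),
        "noise": False,
        "raw": raw,
    }

for raw in RAW_EVENTS:
    print(normalise(raw))
```

Even in this toy case, one event in three falls straight through as unparseable noise, which is exactly the gap the security tooling is not closing.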

While the march to Kubernetes brings with it the promise of predictable, rules-based and process-based compute goodness, the lessons of two decades or more of dealing with event log data push us beyond dashboards and scalable metrics. Although Grafana has empowered our ability to understand traces and metrics, as we now hit the next gear in cloud the real proof of the pudding, and the real navel-gazing, needs to be understanding the gap between the train and the platform. Right now the major players and security vendors are not aligned; the lack of joined-up thinking in how we deal with this has no sponsorship across the industry.

How we deal with event data in a more complex world, where proprietary vendors and cloud and cloud-based services post-Covid-19 now push CIOs and ultimately the CFOs of organisations, is a moot point. Cloud has often become a benchmark for on-premises cost rationalisation and a new definition of tech refresh. Storage of snapshots and backups, application data and disaster recovery is well established; however, cloud lifecycle and the introspection and learning from multiple platforms are handicapped by a focus on performance and compute.

Time for those painful conversations

Painful conversations around the hidden costs of cloud and around the ageing and ownership of data need to start becoming more commonplace. The talk of data lakes and lakehouses belies the fact that data science and AI, as solutions, are expensive to own; they drive compute and storage costs that were never part of predicted IT spend.
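A back-of-the-envelope Python sketch of how that unplanned spend compounds; the starting volume, growth rate and per-gigabyte price below are invented for illustration, not any provider’s actual price list.

```python
# Hypothetical figures: 10 TB of retained data growing 5% a month,
# at a notional $0.02 per GB-month of storage.
PRICE_PER_GB_MONTH = 0.02
START_TB = 10
MONTHLY_GROWTH = 0.05

size_gb = START_TB * 1024
total_cost = 0.0
for month in range(1, 37):
    total_cost += size_gb * PRICE_PER_GB_MONTH  # pay for what is held this month
    size_gb *= 1 + MONTHLY_GROWTH               # snapshots and datasets keep accruing

print(f"After 36 months: {size_gb / 1024:,.1f} TB held, "
      f"${total_cost:,.0f} spent on storage alone")
```

None of those numbers are exotic, yet the quiet compounding is precisely the line item that rarely appears in the original business case.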

Internet pioneer Vint Cerf was once quoted as saying, “In a small company, you often see a lot more of what goes on in a broader range of things. And that’s good.”

The irony is that it is the things we can’t see that, in years to come, we will revisit as the fundamental missteps in our proper utilisation of cloud. After all, in the Kingdom of the Blind, the one-eyed man [or woman] is king.

Evolution matters: expect to see a new tier of cloud services, offering specific security operations, security incident and event management (SIEM) and application broker management, appear either as free-standing certified services or as broker services sometime soon. It’s not if, it’s when.

 

 
