The rise of the data fabric

This is a guest blog post by James Corcoran, SVP of Customer Value, KX

The rapid – and ongoing – digital transformation of virtually every industry sector on the planet has supercharged the volume of data being created by people and machines. Analyst firm IDC estimates that by 2025, the global datasphere will total 181 zettabytes, and the amount of data created over the next five years will be more than twice the amount created since the advent of digital storage.

This explosion of data presents both a challenge and an opportunity. Data environments are becoming more diverse, more distributed, more complex and more costly to manage. Equally, more data means more insights that have the potential to transform operational efficiency and commercial performance.

Existing data management architectures have struggled to keep up with these new realities. Throw the rising demand for real-time and event-driven data into the mix and it’s clear that a new approach to data management is needed. Enter the data fabric.

Call me HAL

Research shows that businesses built on and driven by insights from data grow on average at more than 30% annually and at least eight times faster than global GDP. With stats like that, it’s not hard to see why a data strategy that enables rapid access to and analysis of data is becoming a strategic imperative for organisations, regardless of the industry sector they operate in.

However, to realise the benefits of being a data-driven business, companies must democratise access to data, along with the tools and skills needed to make sense of it. The power must be transferred from the few to the many, and for that to happen the behind-the-scenes complexity needs to be abstracted away so that data consumers can focus on extracting value rather than managing processes.

Analyst firm Gartner defines data fabric as a “design concept that serves as an integrated layer (fabric) of data and connecting processes.” Put simply, it simplifies the ingestion, access, sharing, governance and security of data across all applications and environments – cloud, multi-cloud, hybrid, edge and on-prem – while constantly improving data quality by using AI and machine learning models to automatically discover the business relationships between different data sets. It’s HAL without the bad stuff.

Rip it up and start again?

The ability to make faster, better-informed business decisions based on insights from data is the new competitive frontier for organisations the world over. So, should they rip up their existing data management architectures and replace them with a data fabric?

The first thing to say is that a data fabric is more than just a set of technologies and processes: it’s a new approach to data management that requires a shift in how organisations think about the way users access and use data. Around 2015, the concept of a data lake gained prominence as companies sought a solution that let them use existing technology frameworks to streamline the storage and management of data across applications and environments. Today, a data fabric takes that thinking a stage further by adding a semantic layer – informed by metadata – that standardises management, integration, governance and access while enabling deep analysis and continuous learning.
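To make the semantic layer idea concrete, here is a minimal, purely illustrative Python sketch (the SemanticCatalog and DatasetMeta names are invented for this post, not any vendor’s API): consumers ask for a dataset by its business name, and the metadata layer resolves where and how the data physically lives, so governance and access rules can be enforced in one place.

```python
from dataclasses import dataclass

# Hypothetical sketch of a metadata-informed semantic layer: consumers
# address data by business name; the catalog resolves the physical
# details (cloud, on-prem, edge) behind the scenes.

@dataclass
class DatasetMeta:
    name: str       # business-level name, e.g. "trades"
    location: str   # physical location: an object-store URI, a DSN, ...
    format: str     # storage format: parquet, CSV, a database table, ...
    owner: str      # governance: who stewards this dataset

class SemanticCatalog:
    def __init__(self) -> None:
        self._datasets: dict[str, DatasetMeta] = {}

    def register(self, meta: DatasetMeta) -> None:
        self._datasets[meta.name] = meta

    def resolve(self, name: str) -> DatasetMeta:
        # Access policies and lineage checks could be enforced here,
        # invisibly to the consumer.
        return self._datasets[name]

catalog = SemanticCatalog()
catalog.register(DatasetMeta("trades", "s3://bucket/trades", "parquet", "markets-team"))
print(catalog.resolve("trades").location)  # consumer never hardcodes the path
```

The point of the abstraction is that the consumer’s code stays the same when the physical location, format or environment changes; only the metadata is updated.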

While existing technologies can certainly be adapted to provide a data fabric, it’s widely acknowledged that the starting point for a successful project is a data management platform that can run anywhere, can manage all types of data regardless of volume, velocity and variety, and can integrate natively with the applications, tools and languages used by data consumers across an organisation. If that solid base is in place, the chances of success are high.

The era of big ideas

As digital transformation continues to drive and disrupt markets, we are entering an era where big ideas can be dreamt, developed, tested and released in time frames previously considered near impossible. From the metaverse to autonomous vehicles, rapid vaccine development to the latest cryptocurrency exchange, they all share one unifying narrative: their success depends on data, and much of that data will be generated and acted on in real time. It’s here that a data fabric perhaps offers the most promise.

Like a new car driven off the garage forecourt, data loses value rapidly from the instant it’s created. For organisations to gain maximum value from their data, they need to move from access to insight in as short a time window as possible. A data fabric reduces time to insight, and therefore to value, by both facilitating faster access and adding greater context – for example, by combining real-time and historic data to enable better decision making.
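As a simple, hypothetical illustration of that point (the data and function below are invented for this post): a live reading on its own is just a number, but judged against its history it becomes an actionable signal.

```python
import statistics

# Illustrative sketch: a decision gains context by combining a
# real-time observation with historic data.

def should_alert(live_value: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag a live reading that deviates sharply from its historical norm."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    # The live value alone says little; against history it becomes a signal.
    return abs(live_value - mean) > threshold * stdev

history = [100.2, 99.8, 100.5, 100.1, 99.9, 100.3]
print(should_alert(104.7, history))  # True: anomalous in historical context
```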

Despite the relative nascency of the concept, data fabric architectures are constantly evolving to keep pace with an ever-changing modern data landscape. A data fabric can now encompass four layers – speed, batch, access and meta – each providing the management and governance needed to support the provision of data and insights wherever they are needed across an organisation.
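Sketched very roughly in Python (all names below are invented, and real implementations differ widely), the access layer can split a single logical query across the speed layer, which holds recent data in memory, and the batch layer, which holds the historical archive, guided by metadata about where each time range lives:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the four-layer idea: the meta layer records that
# the speed layer holds the last 24 hours; the access layer routes one
# logical query across the speed and batch layers transparently.

CUTOFF = datetime.now() - timedelta(hours=24)  # meta: speed-layer boundary

def query_speed_layer(start, end):
    return [("live-tick", start, end)]       # stand-in for an in-memory lookup

def query_batch_layer(start, end):
    return [("archived-tick", start, end)]   # stand-in for a historical scan

def access_layer(start: datetime, end: datetime):
    """Split one logical query across the physical layers."""
    results = []
    if start < CUTOFF:
        results += query_batch_layer(start, min(end, CUTOFF))
    if end > CUTOFF:
        results += query_speed_layer(max(start, CUTOFF), end)
    return results

# One query spanning a week touches both layers; the consumer sees one result:
print(access_layer(datetime.now() - timedelta(days=7), datetime.now()))
```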

Ultimately, a data fabric seeks to help an organisation optimise the value of its data. At KX, we see this as being most apparent when applied to real-time analytics, where in-the-moment decision making relies on rapid access to high-quality, contextual data. Gartner believes that already, in 2022, more than half of major new business systems incorporate some form of continuous intelligence that uses real-time context data to improve decision making. IDC, again, estimates that by 2025, 30% of data will be real-time and 49% will reside in the public cloud.

As more and more businesses look to gain value from the data they create in the moment, the value of implementing a data fabric architecture will continue to rise.
