Is it time, for real-time (data)?

The Computer Weekly Developer Network reported on streaming real-time data specialist Confluent in line with its open source user and practitioner symposium this week. Alongside the company’s partner views and those of related advocates in this space, a plethora of other vendors have shared their views on the state of real-time data; this story features a hand-picked selection of commentators with experience in real-time information management.

Thinking about the effect of real-time data on contextual AI, Sven Oehme, CTO of AI data intelligence platform company DDN, says that the shift from batch to real-time was just the first wave.

“Now we’re entering an era where data must be actionable the instant it’s created, whether at the edge, in the core, or in the cloud,” said Oehme. “The real pressure isn’t just speed, it’s intelligence: systems have to self-optimise, move data efficiently and feed AI models continuously. At DDN, we see this driving a re-architecture of the entire data pipeline – from storage to compute – to eliminate latency, reduce data movement, and make real-time truly autonomous.”

Backwards-looking go forth

Eduardo Crespo, VP EMEA at PagerDuty, says that (whether feeding contextual AI or not) real-time data transforms how organisations process information, redefining how formerly backwards-looking teams operate.

“We see organisations moving from reactive incident response to proactive, event-driven operations. The goal isn’t simply to process data faster; they want to automate decisions that lead to actions the moment reliable signals emerge. Turning real-time insights into instant operational resilience allows global businesses to keep their critical systems up and customer experiences perpetually positive,” he said.
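
To make the event-driven pattern Crespo describes a little more concrete, here is a minimal sketch that consumes signals from a Kafka topic and triggers an automated action the moment a sufficiently reliable one arrives. The topic name, signal fields and trigger_remediation function are assumptions for illustration, not PagerDuty or Confluent APIs.

```python
# Minimal event-driven automation sketch (hypothetical topic and schema).
# Requires: pip install confluent-kafka
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed local broker
    "group.id": "ops-automation",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["service-health-signals"])  # hypothetical topic

def trigger_remediation(signal: dict) -> None:
    """Placeholder for an automated action, e.g. restarting a service."""
    print(f"Auto-remediating {signal['service']}: {signal['issue']}")

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        signal = json.loads(msg.value())
        # Act only on signals deemed reliable enough to automate.
        if signal.get("confidence", 0.0) >= 0.9:
            trigger_remediation(signal)
finally:
    consumer.close()
```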

NetApp has deep views on this topic, so we spoke to the company’s field CTO for AI & cyber security, Adam Gale.

“Modern data processing, especially in AI-driven environments, requires intelligent infrastructure. Like humans, organisations must act on the available data, storing and learning from it to shape future decisions. An AI Data Engine (AIDE) is critical to this. This curates metadata discovery, automated change detection and global data synchronisation across on-premises and multi-cloud environments. It enforces governance through built-in policy-based controls, supports compliance and converts unstructured data into vector embeddings for semantic search to support rapid retrieval and inference. Together, this creates a continuous feedback loop that accelerates analytics, inference and decision-making,” said Gale.
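
Gale’s point about converting unstructured data into vector embeddings for semantic search can be sketched in a few lines. The snippet below is not NetApp’s AIDE; it simply illustrates the general pattern with an open source embedding model and an invented document set and query.

```python
# Minimal semantic search sketch (illustrative only, not NetApp's AIDE).
# Requires: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open source embedding model

# Hypothetical unstructured documents and a query.
documents = [
    "Incident report: storage latency spike in the EU region",
    "Quarterly revenue summary for the retail division",
    "Runbook: recovering a degraded object storage cluster",
]
query = "how do I recover a slow storage cluster?"

doc_vectors = model.encode(documents, normalize_embeddings=True)
query_vector = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalised vectors.
scores = doc_vectors @ query_vector
best = int(np.argmax(scores))
print(f"Best match ({scores[best]:.2f}): {documents[best]}")
```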

Next up is Neo4j, the company known for its graph database that stores data as nodes and relationships, which allows it to effectively manage and query highly connected data for applications like fraud detection, recommendation engines and knowledge graphs.

“The next phase of innovation in data processing lies in the convergence between operational and analytical workloads. By linking data transactions with context, organisations can enhance reasoning and recommendations in real-time. For example, if teams can detect fraud and analyse fraud rings from the same dataset, they can generate real-time recommendations based on decades of customer behavioural data – much faster than if they were working across fragmented datasets,” said Andreas Kollegger, generative AI innovation lead at Neo4j.
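
To ground Kollegger’s example, here is a rough sketch of detecting a simple fraud-ring pattern (accounts sharing a device) with Neo4j’s Python driver. The node labels, relationship types and connection details are assumed for illustration rather than taken from any published schema.

```python
# Sketch of a fraud-ring query over an assumed graph schema.
# Requires: pip install neo4j
from neo4j import GraphDatabase

# Hypothetical schema: (:Account)-[:USED_DEVICE]->(:Device)
RING_QUERY = """
MATCH (a:Account)-[:USED_DEVICE]->(d:Device)<-[:USED_DEVICE]-(b:Account)
WHERE a <> b
RETURN d.id AS device, collect(DISTINCT a.id) AS accounts
ORDER BY size(accounts) DESC
LIMIT 10
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(RING_QUERY):
        # Each row is a shared device and the ring of accounts around it.
        print(record["device"], record["accounts"])
driver.close()
```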

Graph technology validation

Kollegger says that graph databases are well-positioned to facilitate this. By enabling teams to run both types of workloads together while modelling the dependencies between data entities, organisations can uncover not only why something happened but also what’s likely to follow. 

“Streaming agents add a new dimension here; integrating LLMs into real-time workflows brings flexibility, but also latency and reliability challenges. Context engineering, especially with a knowledge graph, helps mitigate these risks by delivering precise, dependable context to guide faster, more accurate responses. That’s why precise context at speed is becoming the key differentiator in real-time systems,” added Kollegger.
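
As a loose sketch of that context engineering idea, the snippet below pulls a small set of targeted facts from a knowledge graph lookup and packs them into a prompt, so a streaming agent reasons over precise context rather than a raw data dump. The retrieval function and prompt format are invented for the example.

```python
# Sketch: grounding an LLM prompt with targeted knowledge-graph context.
# The retrieval step and prompt format are illustrative assumptions.

def fetch_customer_context(customer_id: str) -> list[str]:
    """Stand-in for a graph query returning a handful of relevant facts."""
    return [
        f"Customer {customer_id} has used 3 devices shared with flagged accounts",
        f"Customer {customer_id} made 12 purchases in the last 24 hours",
    ]

def build_prompt(question: str, customer_id: str) -> str:
    # Keep the context small and specific: precise facts beat raw data dumps.
    facts = fetch_customer_context(customer_id)
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("Should this order be held for review?", "C-1042"))
```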

Eric Sammer, distinguished engineer at Redis and former CEO of Decodable, says that the trend in data continues toward high-quality, secure and compliant data as a shared resource powering multiple use cases.

“Over the past decade, those use cases have evolved from offline reporting to online services used by humans – and now by AI agents. These agents act on whatever data they have, regardless of its quality or compliance. As AI begins taking action itself, rather than merely informing humans, data quality becomes critical. Without high-quality, up-to-date data accessible to the right systems, companies are only making it easier to make bad decisions faster than ever before,” said Sammer.
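
Sammer’s warning can be expressed as a simple quality gate: before an automated action runs, the record it depends on is checked for completeness and freshness. The field names and staleness threshold below are assumptions for illustration.

```python
# Sketch of a data-quality gate in front of an automated (agent) action.
# Freshness threshold and required fields are illustrative assumptions.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"customer_id", "balance", "updated_at"}
MAX_STALENESS = timedelta(minutes=5)

def is_actionable(record: dict) -> bool:
    """Return True only if the record is complete and recent enough to act on."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    age = datetime.now(timezone.utc) - record["updated_at"]
    return age <= MAX_STALENESS

record = {
    "customer_id": "C-1042",
    "balance": 250.0,
    "updated_at": datetime.now(timezone.utc) - timedelta(minutes=2),
}

if is_actionable(record):
    print("Safe for the agent to act on this record")
else:
    print("Reject: stale or incomplete data")
```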