Redis explains why data architecture 'crumbles' at AI/ML model inferencing

Rolling out Artificial Intelligence (AI) and Machine Learning (ML) is tough.

It’s tough because getting these projects into production, and ultimately delivering the desired results, is hard.

What’s tougher is that we need more AI/ML to be applied inside modernised IT stacks.

Redis Labs has commissioned a study to gain more insight into how software application developers should think about the tools and platforms they use to create AI applications.

The open source database company thinks the study shows increasing use of models built on real-time data.

Well, no surprise there: AI/ML is at its most useful when applied to real-time information flows, especially if it is to deliver so-called ‘predictive analytics’ insight, right?

The problem is, between a third and a half of the companies that talk about this application surface believe their current data architectures won’t meet their future model inferencing requirements.

Most decision-makers (64%) say their firms are developing between 20% and 39% of their models on real-time data from data streams and connected devices.

As Hazelcast explains nicely here, “ML inference is the process of running live data points into a machine learning algorithm (or “ML model”) to calculate an output such as a single numerical score. This process is also referred to as operationalizing an ML model or putting an ML model into production.”
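To make that concrete, here is a minimal sketch in Python: a toy model is trained offline, then a single “live” data point is run through it to produce a numerical score. The model, features and values here are illustrative assumptions, not anything from the study.

```python
# Minimal sketch of ML inference: a trained model scoring one "live" data point.
# The model, features and threshold are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Offline: train a toy scoring model (stands in for any trained "ML model").
rng = np.random.default_rng(42)
X_train = rng.normal(size=(1000, 3))            # e.g. amount, velocity, account age
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Online: inference is running a live data point through the model
# to calculate an output such as a single numerical score.
live_point = np.array([[1.2, 0.7, -0.3]])       # one incoming data point
score = model.predict_proba(live_point)[0, 1]   # probability score, 0.0-1.0
print(f"risk score: {score:.3f}")
```

Operationalising the model is then a matter of keeping that scoring step fast and available wherever the live data arrives.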

Not-so-nifty network hops

As Forrester Consulting has said in relation to the Redis study, AI powered by ML models mustn’t slow down applications by necessitating a network hop to a service and/or microservice for an application to use an ML model and/or get reference data.

Most applications, especially transactional applications, can’t afford to lose those precious milliseconds if they are to meet service-level agreements (SLAs).
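As a back-of-the-envelope illustration, the sketch below compares in-process scoring with the cost of one extra network round trip. The hop and SLA figures are assumptions for illustration, not numbers from the study.

```python
# Sketch: why a network hop hurts a transactional SLA.
# In-process scoring takes microseconds; each remote hop adds round-trip time.
# The hop latency and SLA budget below are illustrative assumptions.
import time

def score_in_process(features):
    # Stand-in for a model already loaded in the application's memory.
    return 0.3 * features[0] + 0.7 * features[1]

N = 10_000
start = time.perf_counter()
for _ in range(N):
    score_in_process((1.2, 0.7))
in_proc_us = (time.perf_counter() - start) / N * 1e6

assumed_hop_ms = 2.0   # hypothetical round trip to a model microservice
sla_budget_ms = 10.0   # hypothetical per-request SLA

print(f"in-process: ~{in_proc_us:.2f} µs per score")
print(f"one network hop adds ~{assumed_hop_ms:.1f} ms, "
      f"i.e. {assumed_hop_ms / sla_budget_ms:.0%} of a {sla_budget_ms:.0f} ms SLA budget")
```

Even a single-digit-millisecond hop can consume a large slice of a tight per-request budget, which is the crux of Forrester’s point.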

“Companies are embracing AI/ML to deliver more value for their mission-critical applications, yet need a modern AI/ML infrastructure to support real-time serving and continuous training. There are still gaps that impede companies from making existing applications smarter and delivering new applications,” said Taimur Rashid, chief business development officer at Redis Labs.

It is perhaps no surprise to hear Rashid suggest that, given these challenges, the simplicity and versatility of an in-memory database can help organisations handle them.

What would be even better is if that in-memory database had an inferencing engine for low-latency, real-time data service needs… and yes, Redis does check that box. There’s a bit of contrived positioning here, then, but the wider story’s point, that data architectures crumble under model inferencing requirements, is the real takeaway.
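For a flavour of what serving a model from inside the database looks like, here is a hedged sketch using RedisAI-style commands via redis-py. The command names follow RedisAI v1.x (AI.TENSORSET, AI.MODELRUN, AI.TENSORGET) and vary between versions, and the model key "mymodel" is assumed to have been loaded beforehand, so treat this as illustrative rather than a definitive recipe.

```python
# Sketch: scoring inside an in-memory database (RedisAI module), so the
# application gets a score without a separate hop to a model microservice.
# RedisAI v1.x-style commands; "mymodel" is assumed to be loaded already.
import redis

r = redis.Redis(host="localhost", port=6379)

# Write the live data point as a tensor inside Redis.
r.execute_command("AI.TENSORSET", "in_tensor", "FLOAT", 1, 3,
                  "VALUES", 1.2, 0.7, -0.3)

# Run the already-loaded model where the data lives.
r.execute_command("AI.MODELRUN", "mymodel",
                  "INPUTS", "in_tensor", "OUTPUTS", "out_tensor")

# Read the score back from the output tensor.
score = r.execute_command("AI.TENSORGET", "out_tensor", "VALUES")
print("score:", score)
```

The design point is that the data point, the model and the result all live in the same in-memory process, which is precisely the network hop the Forrester commentary warns about removing.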

For more, read the Forrester Consulting opportunity snapshot.
