The future of enterprise AI is built on data, not applications

This is a guest blogpost by Andy MacMillan, CEO of Alteryx

Market consolidation has been rife in the enterprise IT market this year, the most significant move happening in late May with Salesforce’s $8 billion acquisition of Informatica. In truth, this could have been seen from a mile off. Business leaders are all-in on generative AI but struggling to scale the technology as they wrestle with data silos and the gigantic task of reimagining entire business processes with AI in mind.

That’s where things get complicated. Enterprise IT and application platforms that go down the M&A path to offer the “complete” AI data stack for IT departments risk overlooking the wider spectrum of business users who will be the driving force for enterprise AI adoption at scale. In today’s context, data needs to move more freely than ever before, and that requires more than an application reshuffle.

Untangling and unlocking the data lakehouse

The last five years have seen more and more organisations aggregate their data from business applications into data lakehouse infrastructure. This has certainly helped companies move faster toward becoming data-led. But rolling out AI across the enterprise is a whole other challenge.

First, boards are understandably cautious about feeding vast portions of their data lake into AI models or systems. A more strategic, selective approach, making only the most relevant data accessible to AI tools, is typically preferred. Second, data stored within a lakehouse architecture is often organised around specific business functions. For instance, data from ERP systems is fundamentally different in structure and purpose from data generated by CRM platforms. All of this data needs to be reorganised to be used by AI models and agents.

A key aspect of this reorganisation will be translating the business logic underlying processes into AI to accompany the relevant data. Whether it’s managing return policies or forecasting sales, the teams closest to daily business operations are best positioned to identify high-impact AI use cases. Enterprise-wide AI initiatives driven solely by IT often struggle to scale because they overlook the need for this deep, functional expertise.

An AI data clearinghouse comes into focus

Breaking this impasse will require fresh thinking. For agentic systems to deliver meaningful value, they must be grounded in business-specific, context-rich first-party data. Without it, the potential of co-pilots and AI agents remains fundamentally limited.

The “AI data clearinghouse” is a concept I’ve been discussing with business leaders across the globe over recent months. It resonates because it offers solutions to commonly felt struggles slowing down AI rollouts. The idea is a neutral, business-user-friendly software layer that pulls in data from all business applications and provides an interface for building visual workflows, with steps that factor in business logic, connectors and governance checks. These workflows are the processes behind AI applications, and they can be shaped by the very business users those applications are designed for. Building workflows through drag and drop, rather than code, makes the process accessible to as wide a set of business users as possible.
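To make the idea concrete, here is a minimal, hypothetical sketch of such a workflow expressed in code. Every name in it (the connector, the business rule, the blocked-field policy) is illustrative, not a real product API; in a clearinghouse tool these steps would be assembled visually rather than written by hand.

```python
# Hypothetical sketch of a clearinghouse-style workflow: a connector step,
# a business-logic step, and a governance check that runs before any data
# reaches an AI model. All names and fields here are invented for illustration.

BLOCKED_FIELDS = {"ssn", "card_number"}  # assumed governance policy

def pull_crm_records():
    # Stand-in for a connector to a CRM application.
    return [{"customer": "Acme", "region": "EMEA", "ssn": "000-00-0000"}]

def apply_business_logic(records):
    # Business-specific rule: only EMEA accounts feed this AI use case.
    return [r for r in records if r["region"] == "EMEA"]

def governance_check(records):
    # Strip fields the policy says must never reach an AI model.
    return [{k: v for k, v in r.items() if k not in BLOCKED_FIELDS}
            for r in records]

def run_workflow():
    records = pull_crm_records()
    records = apply_business_logic(records)
    return governance_check(records)

print(run_workflow())  # sensitive fields removed before any model sees them
```

The point of the sketch is the ordering: governance sits inside the workflow itself, so compliance teams can inspect exactly what a model will and won’t see.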

This approach brings multiple advantages. By empowering business users to design AI workflows through visual tools, it not only democratises AI development but also helps overcome common governance hurdles in enterprise deployments. Visual workflows make it easy for compliance teams, risk officers, and senior leaders to quickly grasp the data pipeline behind an AI use case and provide informed input. As a result, AI becomes less of a black box and more of a transparent, collaborative process, enabling faster, more productive decision-making.

However, it’s still common for CEOs to prioritise a vision of becoming an AI-first company while, at the same time, pushing down broad directives that no first-party data be fed into AI systems. Data is the most valuable asset in many modern enterprises, so the reservation is understandable. But for AI to make a serious impact in an enterprise there needs to be a compromise, and an AI data clearinghouse helps make progress here.

Data everywhere

One-stop shop data architecture platforms and co-pilot products are easily positioned as convenient. But it’s increasingly evident that they’re not going to be the breakthrough for at-scale enterprise AI adoption that our industry is waiting for.

The creation and embedding of AI workflows into enterprises can’t be a repeat of a past in which teams across a business put in requests to an over-stretched business intelligence team for any data-backed insight and waited days for a response. That’s not the future. Business users need a hand in shaping AI workflows, and the tooling to do so, in a manner that satisfies senior leaders who want to know that all internal uses of AI comply with internal usage policies and regulatory requirements.

We’re at a pivotal moment, as AI begins to move beyond isolated projects and limited use cases to broader enterprise adoption. The companies that lead this shift will be the ones bold enough to rearchitect their data foundations for AI, and forward-thinking enough to empower business users as active participants in the transformation.