Lasse Andresen
June 18, 2025

Closing the gap between AI ambition and enterprise reality

Closing the gap between AI ambition and enterprise reality

AI pilots are everywhere but production-grade systems are rare for a reason.

Across the enterprise, teams are experimenting, executives are making bold commitments, and expectations are rising fast. But as momentum builds, many organizations are hitting a wall. The leap from prototype to production is exposing limitations that strategy alone can’t solve.

The challenges are foundational: fragmented data, brittle infrastructure, and security models that weren’t built for systems that learn and adapt.

The race to operationalize AI is heating up - but most enterprises are still stuck at the starting line.

New power, new problems

The promise of AI for enterprise is built on its access to vast troves of data - customer records, internal communications, proprietary documents, and more. This unprecedented access is what makes AI so effective, but it’s also what makes it risky. With every new use case, organizations face the potential for data leaks, misdirected outputs, or even outright manipulation of business processes

A June 2025 analysis revealed that nearly 90% of analyzed AI tools had been exposed to data breaches, putting businesses at severe risk. The rise of shadow AI (AI tools adopted without employer approval or IT oversight) has introduced serious vulnerabilities. Many teams are deploying consumer-facing AI (like customer service chatbots) without proper security protocols, opening the door to credential theft, data leaks, and exposure of core infrastructure.

These challenges expose the need for appropriate data security and robust governance for AI tools - but this is easier said than done.

AI needs the right guardrails – not roadblocks

The challenge is not the AI tools or models, it’s the surrounding infrastructure. Data pipelines, compliance rules, access control models, and software integrations are often not ready to support enterprise-grade AI deployment. One practitioner (overheard on reddit) put it simply: deploying AI without a rock-solid data pipeline and proper governance is like hiring a superstar and giving them a locked file cabinet. No matter how powerful the model, it can’t deliver value if it can’t reach or safely use the right data.

This sentiment echoes across the industry. Senior engineers at leading AI labs cite integration with enterprise software, authorization infrastructure, and partner ecosystems as the true bottlenecks. Others point to overcautious or blanket-ban AI policies that effectively paralyze innovation.

Even when infrastructure improves, enterprises still face a fundamental hurdle: AI systems don’t behave like traditional software. They don’t follow fixed workflows or stay neatly within predefined access boundaries. AI blurs the lines between users, systems, and data, making risk harder to pinpoint, monitor, and control.

The perimeter is no longer the line of defense

Most security strategies are built on well-defined perimeters, fixed roles, and consistent workflows - a structure that has served enterprises well for decades. But AI is rapidly dissolving those boundaries, opening the door to new security challenges:

  • Prompt injection: Attackers can exploit the way AI models process language, slipping in instructions that cause the system to reveal confidential data or make unauthorized changes.
  • Data poisoning: The quality of AI output relies on the integrity of its input. If attackers manipulate training or operational data, they can quietly degrade decision quality or bias outcomes
  • Excessive data access – or complete lockout: AI systems often require broad data access to be effective, but without precise controls, enterprises face a dilemma. Either expose sensitive data and increase risk, or restrict access entirely and limit AI’s value. Many choose the latter, leaving critical data unused. The solution lies in enforcing dynamic, context-aware controls that make sensitive data safely usable.

These risks highlight a fundamental change: AI operates across shifting contexts and boundaries, making static defenses and rigid access models insufficient. Security must adapt in real time, just like the systems it protects.

Defend and control at the data level

Protecting the enterprise now means governing how data is used, not just accessed. That means shifting to dynamic, data-centric controls.

To support this, data must carry the information needed to evaluate whether it can be safely and appropriately activated. That includes indicators of sensitivity, such as whether the data contains personal information or confidential business details, as well as trust signals that reflect the data’s quality, origin, and handling. Usability depends on both: data must be reliable, compliant, and permitted for use in a given context.

By embedding these attributes directly into data flows, governance becomes enforceable at the point of use. Organizations gain the ability to apply precise controls in real time, restricting or enabling AI access based on the specific characteristics of the data, not just the identity of the user or system. With this level of visibility and control, enterprises can scale AI safely, without increasing exposure or limiting effectiveness.

From ambition to execution

Enterprises don’t lack vision for AI, they lack the infrastructure to deliver on it safely and at scale. Moving beyond pilots means rethinking how data is governed, secured, and activated.

That starts with shifting control to the data layer, embedding trust into every flow, and enabling real-time decisions about how information is used. With the right foundation, organizations can stop treating AI as an experiment, and start using it as a core part of how they operate.

Keep updated

Don’t miss a beat from your favourite identity geeks