Patricia Alfheim
October 20, 2025

The AI bubble isn’t just financial - it’s functional

It has become impossible to scroll through LinkedIn or open a news site without encountering a new theory about how and when the AI bubble will burst. Yale’s Jeffrey Sonnenfeld has warned of “dangerous over-investment” as the same few technology giants trade equity, compute, and customer revenue in an increasingly self-referential loop. Goldman Sachs anticipates “a lot of capital that doesn’t deliver returns,” while even OpenAI's Sam Altman has acknowledged that people will “over-invest and lose money” before the market stabilizes.

AI-related spending has already overtaken U.S. consumer demand as the primary driver of GDP growth while those same interdependent tech giants (Alphabet, Amazon, Apple, Broadcom, Meta, Microsoft, and NVIDIA) now account for over a third of the S&P 500’s market capitalisation. The scale is staggering and increasingly concentrated, with much of the apparent momentum circulating within a narrow corporate loop.

Inside enterprises, the same momentum shows up as urgency to realise value. Many are launching pilots to demonstrate progress and explore opportunity, regardless of the state of their data or architecture. Boardroom timelines are compressing delivery schedules, and resources are shifting from core projects into visible AI efforts. The result is a widening gap between what is being built and what will actually work.

This is the quieter instability shaping the market – a functional bubble growing out of the same forces driving financial exuberance. Innovation is accelerating faster than enterprises can turn it into dependable capability, and the market is valuing potential that still lacks the structure to stand on.

The enterprise ceiling

Rushed enterprise adoption sits at the centre of this imbalance. As organisations rush to turn AI ambition into delivery, the underlying data foundations and operating models are not yet equipped to support it.

Gartner research shows that many organisations overestimate their AI readiness, with most initiatives failing to progress beyond proof of concept or pilot. According to Riverbed AI’s 2025 State of AI Readiness for AIOps Report, just 12 percent of AI projects have been deployed across organisations. The result is a striking contrast: investment is skyrocketing while realised value remains limited.

Enterprises are hitting a functional ceiling: they want to be first movers in AI, but the underlying architecture cannot yet sustain the complexity, visibility, and flexibility required to get there.

The data foundation AI needs

While many enterprises are seeing valuable gains from generative AI tools, the scope is limited. These tools succeed because they operate within bounded contexts, drawing on pre-trained knowledge and curated inputs rather than live data ecosystems. Enterprise AI initiatives, by contrast, require access to real-time, interconnected data: systems that pull from multiple sources, reflect live context, and operate within governance frameworks applied consistently across domains.

This creates major challenges, as even modern enterprises still rely on a patchwork of architectures, point solutions, and legacy technologies. Data often remains trapped in source systems, a byproduct of historic priorities around security and departmental control rather than cross-system intelligence. This creates a data ecosystem that is fragmented and siloed, with integrations that duplicate data and strip away original context and lineage. Attempting to implement AI within an enterprise environment like this rarely succeeds.

This explains why so many pilots succeed but fail to scale. Success in isolation often reflects tightly controlled conditions that don’t translate to enterprise reality. Once a project leaves the pilot environment, it encounters tangled infrastructure, inconsistent data governance, and integration complexity that is difficult to navigate and overcome.

Defying the AI bubble

For AI initiatives to succeed, they need a data foundation that is connected, structured, contextualized, and well governed. Achieving this maturity requires a shift in mindset: treating intelligence as an operational discipline, an architectural capability that underpins how an organisation learns, decides, and acts.

Enterprises that build this kind of connected and governed data ecosystem create the resilience needed for scale. Granular control and dynamic security ensure that data remains transparent, compliant, and adaptable as AI capabilities expand. With consistent governance, shared context across domains, and preserved provenance, AI can evolve without constant rebuilding – each new capability reinforcing the system rather than straining it.

It is with this foundation of integrity and adaptability that AI can deliver what the market promised: consistent performance, validated results, and models capable of scaling securely. The organisations that master this balance will define the post-bubble landscape, where capability, not exuberance, determines value. It will also transform the AI economy: from capital chasing possibility to capability driving capital.

To learn more about how to create AI projects that scale, check out The 5% playbook: How the top 5% of AI projects deliver real business outcomes.