The AI Bubble: Economics, Hype Cycles, and the Missing Discourse
How our penchant for simplistic, one-sided narratives misleads us, wasting opportunities and resources.
The conversation surrounding artificial intelligence has become bifurcated into two seemingly irreconcilable camps: the evangelists proclaiming AI as the greatest technological breakthrough of our era, and the skeptics declaring we’re trapped in a speculative bubble destined to burst.
Both perspectives contain truth, yet both miss a crucial insight that should anchor this debate: what we’re witnessing is not unprecedented but rather a predictable manifestation of how transformative technologies move through capital markets and society.
The “AI bubble,” properly understood, is fundamentally about the concentration of capital being poured into AI infrastructure, roughly $1.1 trillion expected between 2026 and 2029, with total AI spending anticipated to surpass $1.6 trillion. This concentration is economically rational and worth examining through frameworks of investment allocation, not dismissal.
The disconnect lies in how daily discourse treats the bubble and the technology as mutually exclusive phenomena. They are not. A bubble in valuations can coexist with genuine technological transformation. Understanding this requires stepping back from the sensationalism and examining three foundational premises that should guide any serious analysis: what constitutes the bubble, how AI adoption will unfold across industries in disproportionate ways, and how this progression mirrors established patterns we’ve documented repeatedly throughout technology history.
The Bubble is Capital, Not Technology
When observers cite an “AI bubble,” they are largely pointing to one phenomenon: the extraordinary amount of capital being deployed. Microsoft committed $80 billion to AI in 2025. Google allocated $75 billion. Meta planned over $600 billion across three years. Amazon earmarked $100 billion. This capital concentration represents what economists at Allianz have more accurately termed “a boom underpinned by fundamentals”, not to suggest the fundamentals justify current valuations, but to distinguish between capital overallocation and technological illegitimacy.
The economics here are straightforward: companies with strong existing revenue streams are betting significant resources on AI infrastructure. The technology is demonstrably real; there is no serious case against that. The question is whether the scale of investment maps onto realistic revenue expectations. OpenAI reported $3.7 billion in revenue in 2024 against operating expenses of $8 to $9 billion. Projections suggest the company will reach $13 billion in revenue this year, yet forecasts indicate losses of $129 billion by 2029. This gap between investment and profitability forms the core of legitimate bubble concerns.
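The burn arithmetic implied by those figures is simple enough to make explicit. A rough sketch, using the article's reported numbers and treating the expense range as a midpoint estimate:

```python
# Rough burn arithmetic from the reported figures (point estimates only).
revenue_2024 = 3.7               # $ billions, reported 2024 revenue
expenses_2024 = (8.0 + 9.0) / 2  # midpoint of the reported $8-9B expense range

operating_loss_2024 = expenses_2024 - revenue_2024
expense_per_revenue_dollar = expenses_2024 / revenue_2024

print(f"2024 operating loss (midpoint estimate): ${operating_loss_2024:.1f}B")
print(f"Expenses per dollar of revenue: ${expense_per_revenue_dollar:.2f}")
```

On these numbers, every dollar of revenue cost roughly $2.30 to earn, which is the shape of the gap that bubble skeptics point to: growth is real, but so is the distance to profitability.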
Yet this framing deserves nuance. Historically, transformative infrastructure investments have required capital deployment that initially appeared excessive. The railroad boom of the 1840s, the automobile industry’s expansion, and the internet infrastructure buildout of the 1990s all involved periods where capital commitments exceeded near-term returns.
The economic justification for these investments lay not in immediate profitability but in eventual market transformation and the winner-takes-most dynamics of scaling platforms.
For well-capitalized entities like Microsoft, Google, and Amazon, the bet is asymmetric: if AI becomes as transformative as computing infrastructure itself, first-mover advantages in building capacity justify substantial capital deployment even with uncertain timelines to profitability.
This does not mean the capital is being deployed wisely across all actors. It means that for mega-cap companies with diverse revenue streams, the allocation is economically defensible even if the multiples placed on pure-play AI companies, a median of 29.7x revenue, are stratospheric.
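The asymmetric-bet logic can be sketched as a simple expected-value calculation. The probabilities and payoff multiples below are illustrative assumptions for the sake of the sketch, not figures from any company's disclosures:

```python
# Illustrative expected-value sketch of an asymmetric infrastructure bet.
# All probabilities and payoff multiples are hypothetical assumptions.

def expected_value(scenarios):
    """Sum of probability-weighted payoffs; probabilities must sum to 1."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in scenarios)

capex = 80.0  # hypothetical capital committed, in $ billions

# (probability, multiple of capex eventually recovered) -- assumptions
scenarios = [
    (0.15, 10.0),  # AI becomes core infrastructure; winner-takes-most payoff
    (0.45, 1.5),   # moderate success; modest return on capacity built
    (0.40, 0.3),   # overbuild; most of the capital is written down
]

ev = expected_value([(p, mult * capex) for p, mult in scenarios])
print(f"Expected value of ${capex:.0f}B deployment: ${ev:.1f}B")
```

Under these assumed numbers the most likely single outcome loses money, yet the small chance of a very large payoff dominates the expected value. That is the sense in which the bet is defensible, but only for firms that can survive the downside scenarios.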
The Industry Disparity: Where AI Creates Value, and Where It Becomes Theater
Here lies perhaps the most overlooked aspect of the AI boom: adoption and value creation are concentrated in specific industries and use cases, not distributed evenly. This matters because it reveals where the actual transformation is occurring and where capital is being wasted on implementation theater.
Financial services has achieved 4.2x returns on generative AI investments. Telecommunications and IT, the sectors leading adoption, have reached 38% AI implementation rates with projected gross value additions of $4.7 trillion by 2035. Retail has increased AI budget allocation to 20% of technology spending, resulting in measurable outcomes like 15% conversion rate increases during peak shopping periods. These are not abstract projections. They represent tangible productivity gains and revenue impact.
Yet across organizations broadly, the picture darkens considerably. An MIT study examining 300 AI deployments found that 95% of organizations implementing AI saw zero return on investment despite $30 to $40 billion in enterprise GenAI spending. More striking still: 42% of companies abandoned most AI initiatives in 2025, up from just 17% in 2024, with the average organization scrapping 46% of AI proofs-of-concept before reaching production. These statistics represent not the death of AI but rather the natural filtering of AI-as-tool versus AI-as-system.
The distinction matters. Companies deploying AI tools, ChatGPT, Copilot, and similar products, report productivity gains at the individual level but frequently see no bottom-line impact. These tools enhance what individuals can accomplish without necessarily improving organizational P&L. Enterprise-grade AI systems, by contrast, the custom solutions and sophisticated vendor implementations, are being “quietly rejected” according to research, precisely because they require orchestration, data infrastructure, governance, and organizational change management that most firms are not equipped to handle.
This disparity reveals a crucial insight: AI will not progress uniformly from tools to applications to systems. Rather, it will bifurcate. Certain sectors with well-defined problems, abundant data, and clear ROI metrics, financial services, logistics, telecommunications, healthcare diagnostics, will progress to sophisticated AI systems. Others, lacking these conditions, will plateau at the tool level or abandon AI initiatives entirely. This uneven progression is not a failure of the technology but a reflection of how technologies actually diffuse through economies.
The Hype Cycle: A Framework the Discourse Ignores
Here emerges the most frustrating element of contemporary AI discussion: the almost willful ignorance of established frameworks for understanding technology adoption and market dynamics. Gartner’s Hype Cycle, which has provided a remarkably consistent lens across decades of technological transformation, suggests five stages: Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity.
Generative AI has officially entered the Trough of Disillusionment phase as of 2024, as actual implementation challenges collided with the exuberant expectations of 2023. This is not a novel occurrence. The internet followed this exact arc. During the Trough of Disillusionment in 2000, investors suffered catastrophic losses, the Nasdaq fell 78% from its peak, yet emerged with Amazon, eBay, and Priceline, companies that redefined commerce. The technology didn’t fail; capital reallocation happened, inefficient players were eliminated, and remaining players scaled to transformative levels.
This same progression is happening with AI, yet media discourse and investor conversation rarely invoke this framework. Why? Perhaps because naming the pattern, acknowledging that we are at an expected, historical stage of technology adoption, removes the drama from the narrative. Sensationalism serves multiple actors: media outlets attract readership through apocalyptic headlines; startups attract funding by claiming urgency and disruption; venture capitalists feed FOMO by suggesting that missing this cycle means permanent irrelevance; established technology companies justify massive capital deployments as existential bets.
What gets lost is the banal truth: we are approximately where we should be in the adoption curve. Generative AI had its peak in 2023. The slide into the Trough is expected. The eventual Slope of Enlightenment will see second and third-generation products that actually solve specific, narrow problems rather than overpromise universal intelligence.
The Investor Substrate: Why Wasted Capital Doesn’t Matter to Everyone
This framework brings into focus another overlooked dimension: not all investors operate from the same constraints or with the same time horizons. A venture capitalist deploying early-stage capital can absorb significant loss rates if even a few investments achieve venture returns. A pension fund or retail investor cannot. A mega-cap tech company viewing AI as a multi-decade infrastructure play operates under entirely different assumptions than a startup attempting to achieve product-market fit within a five-year window.
The $4.4 billion in combined losses from AI implementations documented by EY represents genuine waste for the organizations absorbing those losses. Yet for some investors, institutional players with portfolio diversification, loss harvesting strategies, and the capacity to hold long-term positions, these losses are tolerable against the eventual multi-decade upside if AI does become as transformative as computing itself. Jamie Dimon, CEO of JPMorgan, captured this paradox perfectly: AI “is real” and will eventually “pay off,” much as automobiles and televisions eventually paid off, yet “most people involved in them didn’t do well.” The technology succeeds even as most market participants suffer losses.
This asymmetry deserves acknowledgment: capital waste across enterprises accumulates, but it does not uniformly harm all participants. For privileged investors and large-cap companies with existing cash flows, misspent capital in experimental AI initiatives is essentially a tax on future potential, unfortunate but not devastating. For smaller investors, startups, and organizations without substantial existing profitability, the same capital misallocation represents existential risk. The AI boom is simultaneously destroying capital and creating future dominant platforms, and those realities operate on different timescales and affect different participants dramatically differently.
The Missing Intellectual Synthesis
What troubles me most is not the existence of an AI bubble; bubbles are predictable features of capital markets encountering new technologies. What troubles me is the intellectual laziness of refusing to synthesize the multiple truths simultaneously: yes, there is capital misallocation; yes, the technology is genuinely transformative; yes, specific sectors will see AI mature into systems while others plateau at tools; yes, established hype cycle frameworks explain where we are; yes, some investors can afford losses that would devastate others.
The daily discourse, in media, in investor calls, in corporate strategy rooms, rarely holds these tensions together. Instead, it oscillates between unqualified enthusiasm (“AI will solve everything, invest massively”) and catastrophizing skepticism (“this is a bubble, it will all crash”). Both framings appeal to narrative simplicity. Both fail at intellectual rigor.
The sensationalism serves interests, certainly. But I suspect it also reflects a broader discomfort with complexity and time horizons. Admitting that we’re in an expected stage of a hype cycle, that capital misallocation and genuine transformation can coexist, that different actors face fundamentally different incentive structures, that outcomes will be highly sector-specific: this synthesis offers no clean story. It provides no simple call to action. It cannot be summarized in a headline or a thesis statement designed for social media amplification.
Yet this is precisely the analysis that should inform investment decisions, strategic planning, and policy formation. Understanding the AI landscape requires equal fluency in market dynamics, technology adoption patterns, industry-specific economics, and the behavioral incentives of different actor types. It requires distinguishing between “the technology is real” and “current valuations are justified” as separate questions. It requires recognizing that bubble dynamics and genuine transformation are not mutually exclusive but rather coexisting phenomena at different levels of analysis.
Establishing What We Know
My perspective on AI and investment is rooted in understanding technology adoption at multiple levels: the mathematics of S-curves and diffusion dynamics; the historical patterns of hype cycles across computing, telecommunications, and internet adoption; the microeconomics of where specific AI implementations generate measurable value and where they generate PowerPoints; the macro patterns of capital reallocation during technological transitions. I view the current moment not as unprecedented but as intelligible through established frameworks that have proven predictive across decades of technological change.
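The S-curve mathematics mentioned above can be made concrete with the standard logistic diffusion model. The parameter values here are illustrative assumptions, not values fitted to any AI adoption data:

```python
# Minimal sketch of logistic (S-curve) technology diffusion.
# growth_rate and midpoint are illustrative assumptions, not fitted values.
import math

def logistic_adoption(t, saturation=1.0, growth_rate=0.9, midpoint=6.0):
    """Fraction of the eventual market adopted at time t (in years)."""
    return saturation / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Adoption is slow at first, steepest near the midpoint, and flattens
# as the market saturates -- the same shape hype cycles ride on.
for year in range(0, 13, 3):
    share = logistic_adoption(year)
    print(f"year {year:2d}: {share:5.1%} of eventual adoption")
```

The disconnect between this curve and expectations is the whole story of the hype cycle: expectations peak during the slow early years, then collapse just as the curve's steep middle section, the real diffusion, is getting underway.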
The AI bubble is real. The value of AI is real. The industry disparities are real. The hype cycle progression is real. These are not contradictions requiring reconciliation but rather multiple truths at different levels that collectively explain the landscape.
The frustration lies in watching sophisticated actors, investors, executives, analysts, fail to synthesize these realities. That gap between the data-driven reality and the discourse-driven narrative is where genuine opportunity and risk lie, and it’s where the conversation should focus.