Is there an AI bubble? With massive sums flowing into AI infrastructure, such as OpenAI’s $1.4 trillion plan and Nvidia briefly reaching a $5 trillion market cap, many have asked whether speculation and hype have driven AI valuations above sustainable levels. However, AI isn’t monolithic, and different areas look bubbly to different degrees.

- AI application layer: There is underinvestment. The potential is still much greater than most realize.
- AI infrastructure for inference: This still needs significant investment.
- AI infrastructure for model training: I’m still cautiously optimistic about this sector, but there could also be a bubble.

Caveat: I am absolutely not giving investment advice!

AI application layer. There are many applications yet to be built over the coming decade using new AI technology. Almost by definition, applications built on top of AI infrastructure and technology (such as LLM APIs) have to be more valuable than the infrastructure, since they need to generate enough value to pay the infrastructure and technology providers. I am seeing green shoots across many businesses that are applying agentic workflows, and I am confident this will grow.

I have also spoken with many venture capital investors who hesitate to invest in AI applications because they feel they don’t know how to pick winners, whereas the recipe for deploying $1B to build AI infrastructure is better understood. Some have also bought into the hype that almost all AI applications will be wiped out merely by frontier LLM companies improving their foundation models. Overall, I believe there is significant underinvestment in AI applications. This area remains a huge focus for my venture studio, AI Fund.

AI infrastructure for inference. Despite AI’s low penetration today, infrastructure providers are already struggling to fulfill demand for the processing power needed to generate tokens. Several of my teams are worried about whether we can get enough inference capacity, and both cost and inference throughput are limiting our ability to use even more. It is a good problem to have that businesses are supply-constrained rather than demand-constrained; the latter is a much more common problem, when not enough people want your product. But insufficient supply is nonetheless a problem, which is why I am glad our industry is investing significantly in scaling up inference capacity.

As one concrete example of high demand for token generation, highly agentic coders are progressing rapidly. I’ve long been a fan of Claude Code; OpenAI Codex also improved dramatically with the release of GPT-5; and Gemini 3 has made Google CLI very competitive. As these tools improve, their adoption will grow. At the same time, overall market penetration is still low, and many developers are still using older generations of coding tools (and some aren’t even using any agentic coding tools).

[Truncated for length. Full text: https://lnkd.in/gnMYckzB ]
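To make the "applications built on top of LLM APIs" layering concrete, here is a minimal sketch of an agentic workflow in Python. It is illustrative only, not a description of any specific product: the `call_llm` helper is a hypothetical stand-in for any provider's completion API, and the canned response and token accounting exist purely so the sketch runs without credentials.

```python
import json
from typing import Callable, Dict

# Hypothetical stand-in for any hosted LLM completion API (OpenAI, Anthropic, Gemini, ...).
# In a real application this would be a metered network call; here it returns a canned
# plan so the sketch runs offline. The token count is what the application layer
# ultimately pays the infrastructure layer for.
def call_llm(prompt: str) -> dict:
    canned = {"action": "word_count", "argument": "agentic workflows create value"}
    return {"text": json.dumps(canned), "tokens_used": len(prompt.split()) + 20}

# The "application layer": domain-specific tools the model can invoke.
TOOLS: Dict[str, Callable[[str], str]] = {
    "word_count": lambda text: f"{len(text.split())} words",
}

def run_agent(task: str, max_steps: int = 3) -> None:
    total_tokens = 0
    for step in range(max_steps):
        # Ask the model which tool to use next for the task.
        response = call_llm(f"Task: {task}\nPick a tool from {list(TOOLS)} and an argument as JSON.")
        total_tokens += response["tokens_used"]
        decision = json.loads(response["text"])
        result = TOOLS[decision["action"]](decision["argument"])
        print(f"step {step}: {decision['action']} -> {result}")
        break  # a real agent would keep looping until the task is judged complete
    print(f"inference cost driver: ~{total_tokens} tokens consumed")

if __name__ == "__main__":
    run_agent("Count the words in the phrase 'agentic workflows create value'")
```

The point of the sketch is the cost structure: every iteration of an agentic loop consumes tokens, which is why heavier agentic use translates directly into demand for inference capacity.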
When the World Wide Web showed up and all the dot-coms appeared on top of it, it was a similar scenario. The middle tier of providers went out of business because all their customers went out of business, leaving only the most resilient. 25 years later we are still finding applications fueled by that web innovation. I have assumed AI will follow this pattern since it became accessible. I lived through the transformation of DSP and other voice-processing hardware into software (and had a hand in hastening it along with FreeSWITCH), so for me it's already in the plan to evolve the application layer. If there is a bubble, decades of innovation will rise from the ashes, focused on applied use cases.
AI technologies are being implemented across an ever-increasing number of sectors, creating a broad economic base that guards against any bubble burst. Unlike the dot-com era, where hype was confined to web startups, AI is penetrating enterprise systems, healthcare, automotive, and public services, generating real revenue streams. In 2025, 78% of global companies use AI in daily operations, with 90% either adopting it or planning to, spanning retail (personalized marketing), healthcare (early disease detection), finance (predictive analytics), and manufacturing (supply chain optimization). This diversification neutralizes risks: even if training infrastructure faces pressure, inference demand from agents (150% CAGR to $51.5B by 2028) and applications sustain the ecosystem. Leaders achieve 1.7x revenue growth and 3.6x shareholder returns through AI, proving fundamental value beyond speculation. Widespread adoption across industries ensures AI's "weighing machine" resilience over the long term.
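As a rough sanity check on what a 150% CAGR implies, here is a small back-of-the-envelope calculation. The comment does not state the base year or horizon, so the 2025 starting point and three-year window are assumptions, chosen only to illustrate the arithmetic.

```python
# Back out the implied starting market size for a 150% CAGR reaching $51.5B in 2028,
# assuming (not stated in the comment above) a 2025 base year, i.e. three years of growth.
cagr = 1.50          # 150% compound annual growth rate
end_value_b = 51.5   # projected 2028 market size in $B
years = 3            # assumed 2025 -> 2028

start_value_b = end_value_b / (1 + cagr) ** years
print(f"implied 2025 base: ~${start_value_b:.1f}B")  # ~$3.3B
for year in range(1, years + 1):
    print(f"202{5 + year}: ~${start_value_b * (1 + cagr) ** year:.1f}B")
```

Under those assumptions the market would need to grow roughly 15x in three years (about $3.3B to $51.5B), which shows how aggressive a 150% CAGR claim is.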
I have been monitoring all the hype around autonomous cars, and now AI. Based on what I have observed, everything will remain human-in-the-loop. The technology is here to stay, but only the monotonous tasks will be replaced; people working those jobs will evolve, use AI to assist with the repetitive parts, and focus more on adding value to the business. People still have not adopted earlier AI products, and new ones keep launching and competing without real uptake. This is going to be one of those things that only gets attention when there is an emergency or a pandemic. Cisco Webex had been on the market since the early 2000s, and consumers weren't using it as widely as they do now; then came Zoom and Teams. Even with these AI platforms, I think that rather than oversupplying, it's important to show people how the technology improves their way of living.
It is true that there is no demand problem, but that is because the services are mostly free. If everyone charged at least enough to cover their costs, would we still have this huge demand?
The "AI bubble vs. no bubble" framing hides a deeper problem: we are massively overfunding capacity while underfunding architecture, that is, how enterprises actually turn models and tokens into durable advantage. Most "AI applications" today are thin UX layers on top of LLM APIs. The real application-layer upside is in AI-native operating models, where data structures, workflows, and incentives are redesigned so that AI isn't a bolt-on tool but part of the firm's cognitive architecture. That's also where capital efficiency comes from when infra cycles turn and margins matter again. In that sense I agree: the application layer is wildly underinvested, but not in yet another assistant; rather, in the boring, deep-tech work of rebuilding enterprise intelligence from the ground up. https://c-cortex.com/deep-tech/ #AI #AIBubble #DeepTech #AIInfrastructure #AIApplications #AIEconomy #Architecture #DisruptingCapital
A very balanced breakdown. The real gap today isn't in infrastructure; it's in applications. Across our work with enterprises and builders, we see the same pattern: organizations are investing heavily in GPUs, clouds, and models… but not enough in agentic workflows, business use cases, and end-to-end AI apps that actually deliver ROI. Infrastructure creates possibility. Applications create value. This is why the next decade belongs to teams who can bridge Microsoft's AI stack, Power Platform, Copilot Studio, and real-world business processes. That's where scalable, sustainable impact will be created.
What interesting observations. I can’t help but feel the underinvestment in AI applications happens because investors understand how to build data centers but struggle to identify which business applications will actually win. I’m hoping that this will create huge opportunity for companies that can spot the right workflows early.
I was just having a discussion on this. At the infrastructure level I can definitely see the concern about a bubble, especially if the layers above it are underinvested. You can't just build data centers and stop there; people have to actually use them. Businesses need to be using AI and able to adopt AI automation into their operations to see the ROI. If you just look at how most SMBs operate, you will realize how little AI is actually being utilized. There are very few companies helping everyday businesses adapt. Most people are just using chat and starting to learn how to prompt. Only the larger enterprises are deploying agentic workflows and automation, as they can afford to deploy and maintain them. SMBs are frantically trying to figure out AI integration, but they are met with consultants and a fragmented ecosystem of apps that are just starting to integrate AI agents, with minimal actual assistance in adopting AI in a meaningful way. So, is there an infrastructure bubble? Perhaps. However, I echo the sentiment that the application layer is underfunded. Ultimately, AI infrastructure needs users to drive its value.
Thankfully, model intelligence is no longer the barrier at the application layer. There's more ROI in the less glamorous work of data collection, governance, and evaluation that makes applications reliable.
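A minimal sketch of what that "less glamorous" evaluation work can look like in practice; the `answer_question` function and the test cases below are placeholders for whatever LLM-backed application is being hardened, not a real product's API.

```python
# Tiny regression-style evaluation harness: a fixed set of cases with expected behaviour,
# run on every change so reliability is measured rather than assumed.
# `answer_question` is a placeholder for the real application under test.
def answer_question(question: str) -> str:
    return {"What is 2 + 2?": "4", "Capital of France?": "Paris"}.get(question, "I don't know")

EVAL_CASES = [
    {"question": "What is 2 + 2?", "must_contain": "4"},
    {"question": "Capital of France?", "must_contain": "Paris"},
    {"question": "Who won the 2031 World Cup?", "must_contain": "don't know"},  # refusal check
]

def run_evals() -> float:
    passed = sum(case["must_contain"] in answer_question(case["question"]) for case in EVAL_CASES)
    score = passed / len(EVAL_CASES)
    print(f"passed {passed}/{len(EVAL_CASES)} ({score:.0%})")
    return score

if __name__ == "__main__":
    assert run_evals() >= 0.9, "regression: block the release"
```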
This is a very confused post; the economic logic simply does not add up. You assert that applications must inherently be more valuable than infrastructure because they generate the revenue to pay for it, yet you acknowledge Nvidia briefly reaching a $5 trillion market cap while simultaneously calling the application layer underinvested. Mathematically, these two realities cannot coexist. You cannot value a superhighway at trillions of dollars if you admit there are hardly any cars on the road to pay the tolls. If the application layer is truly underinvested and small, then the infrastructure layer is not just a place for cautious optimism; it is a massive bubble waiting to burst, because there is currently no traffic volume large enough to justify the construction costs. You cannot have it both ways by claiming the asphalt is worth gold while admitting the drivers are broke. You also explicitly tell investors they are wrong to worry about model companies wiping out application startups, yet in the very same breath you praise Claude Code as an example of growth. Let me let you in on a secret: Claude Code is a tool developed by Anthropic, a model company. This shows that the model builders are winning the application space. Thx