The Illusion of Differentiation: Why the Real AI Game Is About Interface, Not Intelligence


Is Apple late to the game? What's up with the OpenAI x Google deal? Is there an end to hyperscalers and private equity hoarding land for data centers? Is there long-term differentiation at the chip level? Are investors warranted in continuing to throw billions at new AI model startups? Is the Disney lawsuit against Midjourney the beginning of the end for publicly trained AI models?

Bonkers.

So many big questions, and each warrants deep discussion. There are plenty of people you can follow who hit these questions (and more) daily, but I'm going to attempt to lay out the macro context behind my perspective, one that can cascade down to questions like the ones above (and the hundreds of others).

The world is obsessed with comparing large AI models. But step back from the benchmarks, marketing language, and model releases, and a deeper pattern emerges: We're training models to the point of convergence. We're burning compute to chase diminishing returns. And we’re missing the real game — the interface game.

The Coming Convergence of Models

If there’s no Moore’s Law equivalent for model convergence, maybe there should be. Every major AI lab is racing toward some form of general-purpose reasoning engine:

- Trained on massive swaths of the internet
- Fine-tuned with similar reinforcement methods
- Aligned via human feedback and usage patterns
- Optimized toward instruction-following and problem-solving

In other words, the inputs, objectives, and constraints are becoming uniform. It’s not hard to imagine a near future where all major foundation models are:

1. Equally capable for 90% of use cases
2. Equally fast and cost-optimized
3. Indistinguishable to most users

When that happens, the differentiator will no longer be intelligence; it will be the interface.

Compute Arms Race → Inference Plateau → Edge Intelligence

Today, hyperscalers are burning through unprecedented volumes of GPU compute to train new foundation models. But what happens when:

- The marginal utility of training new LLMs declines?
- The financial and energy costs of centralized training hit ceilings?
- Regulators and supply chains slow the scale of compute?

We enter the inference era, where value is created not by training better brains, but by applying intelligence better and faster at the edge. AMD’s CTO has stated that the majority of inference tasks will be executed on-device (phones, PCs, edge systems) by 2030. At the last Embedded World event, IoT Analytics interviewed the booth sponsors on their predictions, and their analysts estimate that 75% of AI inference (i.e., model execution) now happens on-device or in “thick-edge” servers rather than in centralized cloud systems.
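To make "edge-first" concrete, here's a minimal sketch of the routing logic that world implies: handle the common case on a local model and escalate only the hard tail to the cloud. Everything in it (local_complete, cloud_complete, the 4,096-character threshold) is illustrative, not a real SDK.

```python
# A toy edge-first inference router. The model calls are placeholders,
# not real APIs; the point is the routing policy, not the models.

from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    needs_long_context: bool = False


def local_complete(prompt: str) -> str:
    # Stand-in for a small model running on the device's NPU.
    return f"[local] {prompt[:32]}..."


def cloud_complete(prompt: str) -> str:
    # Stand-in for a hosted frontier-model API call.
    return f"[cloud] {prompt[:32]}..."


def route(req: Request) -> str:
    # Edge-first policy: serve the ~90% of everyday requests on-device,
    # and escalate only when the task exceeds local capability.
    if req.needs_long_context or len(req.prompt) > 4096:
        return cloud_complete(req.prompt)
    return local_complete(req.prompt)


if __name__ == "__main__":
    print(route(Request("Summarize today's calendar")))
```

The design choice that matters is the default: local first, cloud as the exception, which is the inverse of how most assistants are built today.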


Graphic by GCore: "Cloud, Edge, Security and AI Solutions"

OpenAI has already confirmed that the large majority of its revenue (75% in 2024) came from consumer subscriptions, and this week the FT reported that "OpenAI’s annual recurring revenue has surged to $10 billion… the group’s recurring revenue comes from consumers paying for ChatGPT, roughly 3 million business and education clients, and sales of OpenAI’s API". When the models are commoditized and priced down to the floor, and when device-level capabilities are perfectly adequate for 99% of all the AI things, that’s where NPUs (Neural Processing Units), federated learning, and local inference come in.
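Since federated learning gets name-dropped a lot, here's a toy sketch of its core move, federated averaging: each device trains on its own private data and ships back only weight updates, which a server averages. The linear model and all the numbers below are illustrative, a minimal sketch rather than a production recipe.

```python
# Toy federated averaging (FedAvg) on a linear regression problem.
# Devices never share raw data; they share locally trained weights.

import numpy as np


def local_update(weights, x, y, lr=0.01, steps=5):
    # A few steps of gradient descent on this device's private data.
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w


def fed_avg(global_w, device_data):
    # Each device trains locally; the server averages the results.
    updates = [local_update(global_w, x, y) for x, y in device_data]
    return np.mean(updates, axis=0)


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three "devices", each holding its own private dataset.
devices = []
for _ in range(3):
    x = rng.normal(size=(20, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((x, y))

w = np.zeros(2)
for _ in range(50):  # 50 communication rounds
    w = fed_avg(w, devices)
print("learned weights:", w)  # approaches [2.0, -1.0]
```

The point for this article: the raw data (your calendar, your voice, your ambient environment) never leaves the device; only the learning does.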

The intelligence doesn’t go away, but it does become ambient.

Why Interface Becomes the Battlefield

When every model is good enough, the interface becomes the differentiator:

- Who owns the context? (your calendar, your voice, your ambient environment)
- Who controls the modality? (voice, text, AR, gesture)
- Who integrates intelligence into everyday workflows?

This is why Apple’s silence hasn’t been weakness, but strategy. Apple already owns the interface: the device, the OS, the sensor stack, and the user. They don’t need to win the model game. They need to wrap intelligence in experience. And they can afford to take their time. Meanwhile, OpenAI, Google, Meta, and others are all beginning to realize: users don’t adopt models. They adopt moments. Whoever delivers the best moment wins. Again... interface things. The legalities of AI and copyright infringement will grind the value of public data to a halt, which means this is about private spaces. Maybe a device you wear all day that interoperates with your fav phone? Seems like that's already a heavy bet ($6B heavy).

Image via Sacra: "The private markets research you need to be a better investor." (https://sacra.com/research/ai-wearables-land-grab/)

So Where Should You Bet?

Investors pouring billions into “picking the winner” among model providers may be missing the point. The question isn’t: Who has the smartest AI? It’s: Who has the smartest access to AI? The game will be won by the company that makes intelligence feel effortless. The brand that personalizes intelligence without friction. The interface that wraps compute in emotion, trust, and intuition.

Final Thought: From Model to Moment

AI isn’t a model game. It’s an interface game. And that game is just beginning.

Also, I know we're talking about human interfaces here, but the same applies to machine interfaces. It's been estimated that up to 90% of all internet traffic will be machine-to-machine by 2030, and the data I'm using is two years old (I'm lazy, and the point stands).
