# AI Application Layer Squeeze — Pressure From Both Sides

## The four-layer AI stack
Graham Weaver divides the AI economy into four layers, each with a different investment profile:
| Layer | Examples | Investment outlook |
|---|---|---|
| Infrastructure | Chips, data centers, energy | Long, visible growth runway |
| LLMs | OpenAI, Anthropic, xAI | Few players, already priced for success |
| Applications | Vertical SaaS, AI-native tools | Most overhyped; where capital is most misallocated |
| Use cases | Enterprises deploying AI operationally | Where genuine value creation occurs |
## The squeeze mechanism
Application-layer companies face compression from two directions simultaneously:
**From above:** LLM providers are building their own interfaces, products, and features that directly compete with apps built on top of them. As models become more capable, the “wrapper” layer of value shrinks: the LLM itself can do what the app was doing, without the app.

**From below:** Corporate customers are increasingly capable of building their own tooling using the same underlying models. As AI fluency spreads through engineering organizations, the build-vs-buy calculus tilts toward build for any use case that touches proprietary workflows.
## The internet analogy
In the 1990s, companies that helped people complete government tasks online — getting marriage licenses, filing permits — grew at over 100% annually. Then Google arrived and absorbed those information-access rents by making the underlying capability free and universal, and the intermediary layer evaporated.
Weaver sees a direct parallel: early AI applications are capturing revenue pools that LLMs will gradually absorb. The timing is uncertain, but the structural dynamic is not. Application companies with $2M in revenue and $500M valuations are, in this framing, the marriage-license websites of the AI era.
## What survives
Two characteristics mark the application-layer companies that may hold durable positions:
- Proprietary data sets — data that cannot be replicated by the LLM provider or reconstructed by a customer. In practice this means data generated through the product’s own network effects, not data that is merely scraped or licensed.
- Deep customer interface lock-in — integration so embedded in the customer’s workflow that switching costs exceed the value of building internally. This typically requires years of deployment, not months.
Both conditions are harder to achieve than pitch decks suggest. Most AI startups have neither.
## Related Notes
- AI Stack Value Accrual - Chip, Infra, Intelligence, App — complementary framework on where value concentrates across the stack
- Distribution as the Remaining Moat - Why SaaS Incumbents Aren’t Dead — the case for incumbents retaining distribution advantage
- Graham Weaver - The Alpine Playbook — source clipping