The Inference Bottleneck: Antitrust and Neutrality Duties in the Age of Cognitive Infrastructure
New paper argues AI inference is becoming critical infrastructure requiring antitrust oversight.
A new academic paper titled 'The Inference Bottleneck: Antitrust and Neutrality Duties in the Age of Cognitive Infrastructure' argues that as generative AI commercializes, competitive advantage is shifting from model training to continuous inference, distribution, and routing. Authors Gaston Besanson and Marcelo Celani warn that large-scale inference services—controlled by firms like OpenAI, Google, and Anthropic—are becoming 'cognitive infrastructure,' a bottleneck input that downstream applications rely on to compete. These vertically integrated firms (which also compete through their own assistants and tooling) could foreclose competition not just through pricing, but through non-price discrimination: manipulating latency, throughput, error rates, or context limits for rivals, or steering AI agents toward their own services.
The paper makes three key moves: it defines 'cognitive infrastructure' as a measurable concept based on reliance and discrimination capacity; frames anti-competitive harm through a raising-rivals'-costs theory adapted to platform markets; and proposes a targeted regulatory approach called 'Neutral Inference.' This framework would impose auditable duties on gatekeepers, including quality-of-service parity, routing transparency, and FRAND-style (Fair, Reasonable, and Non-Discriminatory) treatment for similarly situated buyers. The proposal aims to prevent covert anti-competitive conduct in AI markets before it becomes entrenched, focusing on observable evidence of gatekeeper status rather than broad market definitions.
- Defines 'cognitive infrastructure' as AI inference services that become bottleneck inputs for downstream competition, controlled by vertically integrated firms.
- Identifies non-price discrimination risks: firms could throttle rivals via latency, throughput, error rates, or feature gating, covert degradation that is hard to detect and litigate.
- Proposes 'Neutral Inference' regulatory framework with three pillars: QoS parity, routing transparency, and FRAND-style non-discrimination for gatekeepers.
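The QoS-parity pillar implies something measurable in practice. As a loose illustration (not from the paper; the function names, synthetic data, and the 20% tolerance are all assumptions), an auditor could compare tail-latency distributions between a gatekeeper's own application traffic and similarly situated third-party API buyers:

```python
# Hypothetical sketch of a QoS-parity audit: compare p95 latency for a
# gatekeeper's own app vs. third-party buyers of the same inference API.
# All names, data, and the 20% threshold are illustrative assumptions.
from statistics import quantiles

def p95(latencies_ms):
    """95th-percentile latency of a sample (in milliseconds)."""
    return quantiles(latencies_ms, n=100)[94]

def qos_parity_gap(own_app_ms, third_party_ms):
    """Relative p95 latency gap: how much slower rivals are served
    compared to the gatekeeper's own application."""
    own, rival = p95(own_app_ms), p95(third_party_ms)
    return (rival - own) / own

# Synthetic measurements: third-party traffic is systematically ~30ms slower.
own = [100 + i % 50 for i in range(200)]
rival = [130 + i % 50 for i in range(200)]

gap = qos_parity_gap(own, rival)
print(f"p95 parity gap: {gap:.1%}")   # positive => rivals served worse
if gap > 0.20:  # illustrative tolerance before a duty is presumptively breached
    print("FLAG: possible non-price discrimination")
```

In a real audit the comparison would control for request size, region, and model version, since those confounders could produce latency gaps without any discriminatory intent.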
Why It Matters
Could shape future antitrust regulation for AI platforms, impacting how OpenAI, Google, and others operate their inference APIs.