The Economics of AI Supply Chain Regulation
Game theory study reveals which policies boost consumer surplus and profits as compute costs fall.
A new economic study provides a crucial framework for understanding and regulating the emerging AI supply chain. Authored by Sihan Qian, Amit Mehra, and Dengpan Liu, the paper models the dynamic between upstream foundation model providers (like those offering GPT-4o or Claude 3) and downstream firms that fine-tune these models for specific applications. The core finding is that not all regulatory interventions are equally effective; their impact depends heavily on the underlying costs of compute and data preprocessing.
Specifically, the analysis reveals a nuanced policy landscape. Policies that encourage downstream firms to compete on application quality consistently increase consumer surplus. In contrast, policies promoting price competition or providing compute subsidies boost consumer welfare only under certain cost conditions, making them complementary rather than universal tools. Notably, the research identifies potential "win-win-win" scenarios where pro-price-competition policies or compute subsidies increase profits for both providers and downstream firms while also benefiting consumers. However, as the cost of compute continues to decline (a central trend in AI), the effectiveness of these policies will shift, requiring adaptive regulatory approaches.
- Pro-quality competition policies (e.g., standards, benchmarking) always increase consumer surplus in the AI app market.
- Pro-price competition policies and compute subsidies can create win-win-win outcomes for providers, firms, and consumers, but only when compute and preprocessing costs fall in specific (high or low) ranges.
- Falling compute costs will dynamically alter policy effectiveness, making some tools obsolete while activating others.
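The last point, that a policy's bite depends on the prevailing compute cost, can be illustrated with a deliberately simple toy model. This is not the paper's actual formulation: here an upstream provider sets a per-unit API price, two symmetric downstream firms compete in quantities (Cournot) under linear demand, and a compute subsidy lowers their marginalal cost `k`. All functional forms and parameter values are illustrative assumptions.

```python
# Toy sketch (NOT the study's model): a two-tier AI supply chain.
# Upstream provider picks per-unit API price w; two downstream firms
# then play Cournot with marginal cost w + k, where k is compute cost.
# Inverse demand: p = a - b * Q.

def equilibrium(a: float, b: float, k: float):
    """Return (consumer_surplus, provider_profit, per_firm_profit)."""
    w = (a - k) / 2        # provider's profit-maximizing per-unit price
    c = w + k              # downstream firms' effective marginal cost
    q = (a - c) / (3 * b)  # each firm's Cournot-equilibrium quantity
    Q = 2 * q              # total downstream output
    cs = b * Q**2 / 2      # consumer surplus under linear demand
    return cs, w * Q, b * q**2

a, b = 10.0, 1.0   # illustrative demand parameters
subsidy = 0.5      # compute subsidy lowering effective k

for k in (6.0, 1.0):  # expensive vs. cheap compute
    cs0, _, _ = equilibrium(a, b, k)
    cs1, _, _ = equilibrium(a, b, k - subsidy)
    print(f"k={k}: subsidy lifts consumer surplus {100 * (cs1 - cs0) / cs0:.1f}%")
```

In this sketch, the same subsidy raises consumer surplus by roughly 27% when compute is expensive (k = 6) but only about 11% once compute is cheap (k = 1): a toy version of the claim that falling compute costs change which policy levers remain effective.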
Why It Matters
Provides a model-grounded guide for policymakers to stimulate innovation and protect consumers without stifling the AI economy.