Tech Employees Are Reportedly Being Evaluated by How Fast They Burn Through LLM Tokens
Engineers compete on internal leaderboards, with one burning through 210 billion tokens, roughly the equivalent of 33 Wikipedias.
A New York Times report by Kevin Roose reveals a controversial new performance metric at tech giants like Meta, OpenAI, and Shopify: how many LLM tokens employees consume. Internal leaderboards rank workers, with managers reportedly rewarding heavy AI users and criticizing those who fall short of token quotas. The scale is staggering: one OpenAI engineer burned through 210 billion tokens, equivalent to 33 entire Wikipedias, while a Swedish software engineer's Claude Code usage alone reportedly costs more than his salary.
This 'tokenmaxxing' trend is partly fueled by the rise of agentic AI platforms like OpenClaw, which automate tasks using large language models. The virality of such tools helped shift AI enthusiasts from OpenAI's GPT models to Anthropic's Claude; in response, OpenAI hired OpenClaw's creator. The trend reflects a broader industry focus on raw token volume as a success metric, exemplified by OpenAI President Greg Brockman's recent boast that the coding-focused GPT-5.4 processes 5 trillion tokens per day, generating $1B in annualized new revenue.
- Meta and OpenAI use internal leaderboards tracking employee LLM token consumption for performance reviews.
- One OpenAI engineer used 210 billion tokens, roughly 33 Wikipedias' worth of text.
- OpenAI's GPT-5.4 processes 5 trillion tokens daily, driving a new $1B revenue stream as companies monetize usage.
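A quick back-of-envelope check of the figures above. This sketch assumes (not from the report) that the "33 Wikipedias" comparison is by token count and that the $1B annualized revenue comes purely from per-token billing; both are simplifications for illustration.

```python
# Back-of-envelope arithmetic on the article's reported figures.
# Assumptions (not stated in the source): "33 Wikipedias" is measured
# in tokens, and revenue is billed purely per token.

TOKENS_USED = 210e9   # one engineer's reported consumption
WIKIPEDIAS = 33       # the article's comparison

# Implied size of one "Wikipedia" in tokens: ~6.4 billion
tokens_per_wikipedia = TOKENS_USED / WIKIPEDIAS
print(f"Implied tokens per 'Wikipedia': {tokens_per_wikipedia:.2e}")

DAILY_TOKENS = 5e12     # GPT-5.4's reported daily throughput
ANNUAL_REVENUE = 1e9    # reported annualized new revenue

# Implied revenue per million tokens: roughly $0.55
annual_tokens = DAILY_TOKENS * 365
implied_price_per_million = ANNUAL_REVENUE / annual_tokens * 1e6
print(f"Implied revenue per million tokens: ${implied_price_per_million:.2f}")
```

Under these assumptions, the numbers are at least internally plausible: a few billion tokens per Wikipedia-scale corpus, and an implied rate well under a dollar per million tokens, in line with commodity inference pricing.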
Why It Matters
Rewarding token volume rather than output quality could drive wasteful spending and misaligned incentives in AI development.