You Could Be Next
Freelancers train the AI models that replaced their jobs, while facing sudden project cancellations and workplace surveillance.
A collaborative investigation by The Verge and New York Magazine exposes the hidden human workforce behind advanced AI models like ChatGPT. Freelancers, often those whose jobs were automated by AI, are recruited by data-labeling firms such as Mercor to train the very systems that displaced them. One worker, 'Katya,' was hired at $45 per hour after an interview with an AI named 'Melvin.' Her task involved writing example prompts, crafting ideal chatbot responses, and creating detailed evaluation criteria—work she enjoyed, describing it as 'like having a real job.'
However, the work is notoriously unstable. Just two days after Katya started, her project was paused and then canceled without warning, leaving her financially stranded as she saved for an apartment. The article details a digital assembly line where hundreds of workers label and produce data, often under surveillance software, for anonymous 'clients.' This human feedback is crucial for training models via techniques like Reinforcement Learning from Human Feedback (RLHF), which gave ChatGPT its fluency, yet the work offers no job security.
The piece connects this precarious labor to a broader plateau in AI progress. While models excel in domains with clear feedback (like software engineering, where code either compiles or doesn't), most human tasks lack such objective metrics. This creates a relentless demand for human judgment to grade AI outputs on criteria like tone and helpfulness, trapping workers in a cycle of training their own replacements. The investigation underscores the ethical and economic tensions at the core of the AI boom, where technological advancement relies on an invisible, disposable workforce.
- Freelancers are hired by firms like Mercor at $45/hour to create training data via prompts and evaluations for anonymous AI clients.
- Projects are highly unstable, with one worker's gig canceled without warning just two days after starting, leaving workers with no job security.
- The work fuels techniques like RLHF but creates a paradox where humans train the AI models that automated their original jobs.
Why It Matters
The investigation reveals the precarious human labor and ethical dilemmas underpinning the AI models transforming the professional world.