Can We Please Stop Calling Every New AI Development “Terrifying”?
From GPT-3 to GPT-4, each breakthrough gets labeled "scary" before becoming normalized tech.
In a widely shared critique, an AI observer pushes back against the predictable media cycle that brands every major AI release as "terrifying." The article documents this pattern in detail, starting with GPT-3 in 2020, which The New York Times called "more than a little terrifying" and other outlets dubbed "the scariest deepfake of all." Within months, its outputs looked crude and the API became a commodity tool. The pattern repeated with ChatGPT's "scary good" launch in 2022 and the brief frenzy over Bing's unhinged "Sydney" persona in early 2023.
Each wave of alarm—including the Future of Life Institute's open letter, signed by 27,000 people, calling for a pause after GPT-4—subsides as the technology is integrated and understood. The author acknowledges the need for responsible caution but argues that reflexive terror is a distraction: it migrates from one model to the next, preventing a stable, nuanced public conversation about AI's real risks and benefits. The piece is a plea to move beyond sensationalism toward substantive evaluation of what each leap in capability, from companies like Anthropic and OpenAI, actually means for society.
- Traces the "terrifying" label from GPT-3 (2020) through ChatGPT and Bing's Sydney to GPT-4 (2023).
- Highlights how each model's perceived threat evaporated as it became normalized technology.
- Calls for more measured dialogue, noting that responsible stewardship from firms like Anthropic requires moving beyond hype cycles.
Why It Matters
Sensationalist framing hinders clear-headed risk assessment and productive policy discussions about transformative AI.