The researcher who coined the term "AGI" now says we have achieved it exactly as he envisioned.
The scientist who named AGI says today's AI agents match his original 1997 vision of human-level competence.
In a significant philosophical shift for the AI field, Mark Gubrud—the researcher who first coined the term "Artificial General Intelligence" (AGI) in a 1997 paper—has declared that the current generation of AI systems has, in fact, achieved the milestone he originally described. Gubrud argues that his initial vision was not about creating conscious superintelligence, but about building machines with "the capacity for human-level performance across a wide range of cognitive tasks." He contends that modern large language models (LLMs) like OpenAI's GPT-4 and Anthropic's Claude 3.5, especially when deployed as agents that can reason, use tools, and execute multi-step plans, fulfill this practical definition of general competence.
Gubrud's assertion is a direct challenge to the prevailing narrative in parts of the AI safety community, where AGI is often portrayed as a distant, existential risk or a system with superhuman, god-like capabilities. He criticizes this as "moving the goalposts," suggesting it obscures the real and present impact of today's powerful AI. His re-framing implies that the urgent task is no longer speculating about a future AGI, but responsibly governing and aligning the broadly capable, economically transformative AI agents that are already being integrated into global infrastructure and workflows.
This perspective carries major implications for policy and industry. If the threshold for AGI has been met, it strengthens arguments for accelerated regulatory frameworks focused on current systems' biases, security, and labor market effects, rather than hypothetical future risks. For developers, it shifts the benchmark from chasing an elusive "true AGI" to a focus on robustness, reliability, and safe deployment of systems that already exhibit a form of general intelligence. Gubrud's comments have ignited debate, forcing a re-examination of which milestones actually matter in the rapid evolution of AI.
- Mark Gubrud first defined AGI in 1997 as human-level performance across wide cognitive tasks, not consciousness.
- He argues today's AI agents (e.g., GPT-4, Claude 3.5) meet this benchmark through broad competence and tool-use.
- This challenges the AI community's tendency to "move the goalposts" and redefine AGI as distant superintelligence.
Why It Matters
Forces a critical shift from speculating about future super-AGI to managing the risks and impacts of the powerful, general-purpose AI we already have.