Media & Culture

The real skill gap isn't coding anymore, it's knowing when the AI is wrong

A senior engineer reveals juniors ship 3x faster with AI but freeze when production breaks, lacking system intuition.

Deep Dive

A viral Reddit post from engineer /u/CrafAir1220 is sparking industry-wide discussion about a subtle but dangerous shift in developer skills. The author observes that while juniors using AI coding assistants (like GitHub Copilot or ChatGPT) can ship features with "genuinely impressive" speed, often 3x faster, they hit a wall when that code breaks in production. The core issue is that AI-generated code is "usually like 85% right," which is dangerously close to correct, leading developers to assemble components without building a deep mental model of how the system actually works. When an error occurs, they're left staring at a stack trace with no intuition for debugging, having relied on the AI as a crutch rather than a tool.
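To make the "85% right" failure mode concrete, here is a minimal hypothetical sketch (not taken from the post): a pagination helper of the kind an assistant might generate, which reads cleanly and survives a casual check but hides an off-by-one that only surfaces as missing records in production.

```python
# Hypothetical sketch of "85% right" assistant output: a pagination
# helper that looks correct and passes a happy-path test, but ships
# an off-by-one bug.

def paginate(items, page, page_size=20):
    """Return the given 1-indexed page of items."""
    start = page * page_size   # BUG: should be (page - 1) * page_size
    return items[start:start + page_size]

records = list(range(100))
print(paginate(records, 0))   # [0..19] -- a quick test at page 0 "works"
print(paginate(records, 1))   # [20..39] -- but page 1 silently skips the
                              # first 20 records, despite the docstring
```

A developer who merely assembled this has nothing to reason from when the bug report arrives; one with a mental model of the indexing can say "no, that's wrong because pages are 1-indexed" straight from the diff.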

The author experimented with using AI specifically for debugging, not just generation, and found most models simply "throw new code at you." However, newer models like GLM-5 surprised them by being able to walk through logic, trace error chains, and even identify issues like a circular dependency that had resisted an hour of manual debugging. Despite these advanced tools, the post argues the fundamental skill gap is evolving: the developers who will thrive aren't the fastest code generators, but those who can critically evaluate AI output and identify flaws ("no, that's wrong because X") without needing another AI to explain why. The post concludes we are "training a generation to be really good at asking questions but not at evaluating answers," creating a new form of technical debt rooted in understanding, not just output.
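The circular-dependency example generalizes: the bug class is two components that each require the other, and the debugging move (human or AI) is tracing the dependency chain until it loops. Below is a minimal sketch with a hypothetical module graph (the module names are illustrative, not from the post) showing that trace as a depth-first search.

```python
# Hypothetical illustration of the bug class: two modules that import
# each other fail with a partially-initialized-module error, and the
# stack trace rarely points at the loop itself.
#
#   # services.py                 # models.py
#   from models import User       from services import get_db   # circular
#
# Tracing the import graph mechanically, as the post describes the
# model doing, reduces to finding a cycle via depth-first search:

def find_cycle(graph, node, path=(), seen=None):
    """Return the first dependency cycle reachable from node, or None."""
    seen = set() if seen is None else seen
    if node in path:                      # back-edge: the chain loops
        return list(path[path.index(node):]) + [node]
    if node in seen:                      # already fully explored, no cycle
        return None
    seen.add(node)
    for dep in graph.get(node, ()):
        cycle = find_cycle(graph, dep, path + (node,), seen)
        if cycle:
            return cycle
    return None

imports = {"services": ["models"], "models": ["db", "services"], "db": []}
print(find_cycle(imports, "services"))  # ['services', 'models', 'services']
```

The trace is mechanical; deciding where to break the cycle is the judgment call the post argues still belongs to a developer who understands the system.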

Key Points
  • Juniors using AI ship code 3x faster but lack system intuition, freezing on production errors because they only assembled AI-generated pieces.
  • The author notes AI code is "usually like 85% right," which is dangerously close to functional, masking a lack of deep understanding until it fails.
  • Testing revealed most AI models are poor debuggers, but newer ones like GLM-5 can trace logic chains and find complex issues like circular dependencies.

Why It Matters

This shift makes a new skill critical: the ability to audit and understand AI-generated code, not just produce it, with direct consequences for hiring, training, and system reliability.