Claude Code is making skill gaps impossible to hide.
Mark Z
|
Published on
January 22, 2026
|
2 min read

The engineers who were already good? They're shipping 3-4x faster. They know what to ask for, when to push back on suggestions, and which generated code will cause problems in six months.
The engineers who were struggling? They're now struggling faster. More code, same architectural mistakes, but now it's harder to trace where things went wrong.
AI coding tools are an amplifier, not a transformer.
The uncomfortable truth: if someone couldn't design a clean API before Claude Code, they're now generating messy APIs at scale. If they didn't understand state management, they're now drowning in AI-suggested useEffects they can't debug.
This isn't the AI's fault. And it's not a reason to avoid these tools. But it is a reason to rethink how we evaluate developers.
The old question was: "Can they write this code?"
The new question is: "Do they know if the code is right?"
That second question requires deeper understanding, not less.
For engineering leaders, this means:
Technical interviews need to shift toward evaluation and debugging, not greenfield implementation
Code review becomes even more critical (AI doesn't know your system's constraints)
The senior engineers who "slow things down with questions" are your most valuable asset
AI tools didn't create the skill gap. They just removed the fog that was hiding it.
The teams that acknowledge this are thriving. The ones pretending AI is a skills shortcut are accumulating debt they don't even know about yet.
Good or bad, we’d love to hear your thoughts. Find us on Twitter (@twitter)