A few months ago, I reviewed a pull request that, on the surface, looked flawless. The syntax was clean. The functions were well-structured. The tests passed. It checked every technical box – and yet, something felt off. There was no signature in the code, no clarity of intention, no trace of design thinking.
Curious, I asked the engineer about their approach. Their answers were vague and hesitant. Eventually, they admitted that most of the code had been drafted by GitHub Copilot.
That moment crystallized a quiet concern I’d been carrying for months: What happens when AI-generated code becomes the norm? Are we gaining speed but losing critical thinking? Are we trading depth for delivery? And – perhaps most worrying – what happens to the growth of our engineers when AI is doing the hard parts for them?
The rise of AI coding assistants is a cultural inflection point.