Key takeaways:
- AI writes more code (and makes it harder to review). AI-generated PRs have 1.7x more issues, and bugs are harder to spot because the code looks so polished.
- Manual code review is now the bottleneck.
- The developer’s job is moving, not disappearing. From line-by-line review to validating intent and guiding AI output.
The rise of AI-assisted code is forcing the industry to rethink one of its most fundamental practices: code review. LeadDev’s 2026 State of AI-Driven Software Releases report reveals just how fast that shift is happening.
AI-assisted development is all the rage, but the real impact is being felt at the review stage.
The use of AI tools to support, augment, or partially automate the process of reviewing code before it is merged or shipped is rapidly becoming standard practice. According to LeadDev’s 2026 State of AI-Driven Software Releases report, 68% of respondents say AI has already influenced their approach to code reviews, and among those, 86% use AI to identify issues before a human ever looks at the code.
Yet the human role is far from obsolete. GitHub CEO Thomas Dohmke has emphasized that while AI tools can identify potential issues and accelerate coding, developers must remain central to the process, making the final decisions on code quality, security, and architecture.
The real question is no longer whether AI can assist, but how the role of humans in the loop is evolving, and whether they can keep pace with the growing volume of AI-generated code.
Humans are essential
According to LeadDev’s 2026 report, 28% of respondents now use AI-powered tools for code reviews – up from 17% in the 2025 AI Impact Report.
This includes tools that help identify potential issues, prioritize fixes, or assist with initial verification before a human review.
Despite this increased adoption, efficiency gains remain limited – 29% of respondents say reviews take longer with AI, 24% report time savings, and the largest portion – 47% – see no change at all.
The reason? AI-generated code can be deceptively difficult to review. “LLMs are amazing at producing legit-looking content. Most of the time they do this by producing legit content, but sometimes they just make things look legit,” said Pete Hodgson, head of technology at Tribe AI. “This makes their output really hard to review. The bugs are harder to find, because everything looks so thorough and professional.”
The data backs this up. CodeRabbit’s State of AI vs Human Code Generation report, based on 470 open source pull requests, found that AI-generated code contains significantly more defects across logic, maintainability, security, and performance. On average, AI-generated pull requests include 10.83 issues each, compared to 6.45 in human-generated pull requests, roughly 1.7x more issues per review.
The problem isn’t just the volume of bugs, it’s their nature. “AI-generated code is very solid in terms of low-level details, but AI’s design taste is mediocre. This leads to maintainability issues over the medium term, as code slowly turns to spaghetti without human judgement,” Hodgson explained.
More AI code, more issues to fix
While human oversight remains crucial, it is also becoming a significant bottleneck. The issue isn’t just how much code AI tools enable developers to push out, but how large each change has become.
While just 5% have seen their releases get smaller when introducing AI-generated code, 32% saw an increase in release size. That creates a lot of code to review.
Line‑by‑line manual code reviews are struggling to keep pace with modern development workflows – particularly as agent-generated code increases both the volume and complexity of changes.
Of those respondents who said they still require a 100% manual ‘human-in-the-loop’ review for every line of AI code, 38% are spending more time on code review than before, suggesting organizations sticking to that approach are feeling the most pain.
“The biggest shift I see happening is that more of the automated tooling that used to be applied post-merge is going to be shifted left, either run pre-merge, before code is submitted for review, or even pulled directly into the inner loop where the agent can get feedback without any human involvement,” explained Hodgson.
Ankit Jain, founder of multiplayer AI-coding platform Aviator, echoes Hodgson’s sentiment. “As much as AI code reviews can be valuable, these will shift left in the dev cycle. There’s no reason to waste CI resources or manage versioning between review cycles,” he wrote.
The trend is supported by data: 56% of developers have already started using AI tools to catch issues before the formal review stage.
The developer’s changing role
The bottleneck in software development has shifted from writing code to verifying AI-generated code. This is reshaping the developer’s role.
Instead of reading every line, organizations are being encouraged to review intent and specifications before code is written. Humans focus on specs, plans, constraints, and acceptance criteria rather than diffs.
This is a big shift in where human judgment is exerted – moving from post‑implementation review to pre‑implementation validation.
Charity Majors, CTO of Honeycomb, argues that AI doesn’t eliminate code review work – it relocates it. Instead of reading code line by line, developers increasingly validate it in production, relying on instrumentation and monitoring to answer the question AI can’t: does it actually work?
“Developers aren’t just authors of code anymore – they’re curators, reviewers, and gatekeepers of what gets shipped,” Amna Anwar, software and analytics engineer at PullFlow, wrote in a blog post.
“This shift changes how we work and what matters. If you’re still focused on writing the perfect function from scratch, you might be missing the point. Reviewing, not authoring, is becoming the developer’s most critical skill,” she added.
Hodgson is clear on what this shift demands: developers who know how to work effectively alongside AI, not just with the code it produces.
He defines this as: “Collaborating with agents – creating prompts and agentic systems that maximize the amount of toil the AI can do while still ensuring that the human can guide the software design in as tight a feedback loop as possible.”
For code review, this includes using a separate LLM to perform an initial adversarial review to catch issues early, and then conversing with the AI to quickly understand and evaluate the proposed changes.
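The adversarial pre-review step Hodgson describes can be sketched in a few lines. This is a minimal illustration, not any vendor's API: `call_llm` is a stand-in for whichever chat-completion client a team actually uses, and the prompt wording is a hypothetical example of framing the second model as a hostile reviewer.

```python
# Sketch of an adversarial pre-review pass: a second LLM critiques a diff
# before any human looks at it. `call_llm` is a placeholder, not a real API.

ADVERSARIAL_PROMPT = """You are a hostile code reviewer. Assume the diff below
contains at least one defect. List every logic, security, maintainability,
or performance issue you can find, most severe first.

--- DIFF ---
{diff}
"""


def build_review_request(diff: str) -> str:
    """Wrap a unified diff in the adversarial review prompt."""
    return ADVERSARIAL_PROMPT.format(diff=diff)


def call_llm(prompt: str) -> str:
    # Placeholder: swap in your provider's client here.
    return "1. `limit` is never validated; a negative value bypasses the check."


def pre_review(diff: str) -> str:
    """Run the adversarial pass and return the model's issue list."""
    return call_llm(build_review_request(diff))


if __name__ == "__main__":
    diff = "+ def fetch(limit):\n+     return rows[:limit]"
    print(pre_review(diff))
```

The point of the adversarial framing is to counteract the "legit-looking" polish Hodgson warns about: telling the model to assume a defect exists makes it less likely to rubber-stamp plausible-looking code.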
Supporting this, 36% of respondents said they use multiple AI agents to refactor code until it meets internal standards before human review.
Humans remain central to AI-assisted code reviews, but their role is shifting from line-by-line checks to guiding, shaping, and validating AI output. AI tools now handle routine verification, allowing reviewers to focus on strategy, logic, and overall quality.