The End of Human Code Review
In the near future, human engineers will review close to zero pull requests. Where we're headed is much more interesting: code review will become always-on and increasingly automatic as code is being written. A new orchestration layer will emerge in which agents decide when PRs are ready to merge and escalate to humans only infrequently.
Let's unpack why.
Humans aren't great at code review
Human reviewers rarely exhaustively test and verify the code they're reviewing, can't hold full codebase context in their heads, and can't systematically traverse dependencies. Even the most diligent reviews are painstaking, time-consuming, and still error-prone.
AI review tools operate differently. Macroscope, for example, leverages an agentic pipeline that can simultaneously analyze the entire code graph by using the Abstract Syntax Tree, gather additional context from issue management systems, tirelessly research all related codebases, search git history, search the web, and synthesize all of this with machine-level thoroughness.
While AI code reviewers are far from perfect, they have already far surpassed human reviewers at detecting bugs. As for the aspects of code review where humans excel, like providing architectural feedback and coaching junior engineers: these are valuable, but doing them at the PR stage is generally an anti-pattern that everyone would prefer to move earlier.
Humans are already the bottleneck
Pull requests are a gravity well for human attention. Reviewing code isn't anyone's full-time job. It's a necessary tax engineers pay to ship responsibly, but it fragments focus. Reviewers have to pause their own work and switch into an evaluative mode: finding flaws, anticipating edge cases, and communicating feedback to a peer. Ask any engineer whether they'd rather review someone else's code or generate their own. The answer is obvious. Yet we've normalized a system where engineers spend large portions of their week on coordination instead of building.
Pull requests are also full of dead time. They move at the pace of human attention: reviews wait on reviewers, revisions wait on authors, and each back-and-forth introduces context switching. The wall-clock delay between steps often dwarfs the actual work, slowing how quickly code reaches production.
Coding agents are now ubiquitous, adding strain to an already burdened review process. As autonomous code contribution scales, this coordination bottleneck will grow exponentially and quickly become untenable.
The future
AI review tools have already moved the needle on bug detection, and that will continue to improve. When Macroscope launched in September, it demonstrated industry-leading bug detection according to our benchmarks. Since then, we've built a continuous optimization pipeline that selects the best model and prompt per language and repository, yielding even larger gains in recall (the share of real bugs Macroscope detects) and precision (correctly identifying a bug rather than flagging a false positive), while skewing results toward genuinely critical issues rather than low-value nitpicks.
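To make those two metrics concrete, here is the standard way they are computed. The numbers below are illustrative only, not Macroscope benchmark results:

```python
def recall(true_bugs_found: int, total_true_bugs: int) -> float:
    """Share of the real bugs present that the reviewer actually catches."""
    return true_bugs_found / total_true_bugs

def precision(true_bugs_found: int, total_flags: int) -> float:
    """Share of flagged issues that are real bugs rather than false positives."""
    return true_bugs_found / total_flags

# Illustrative run: the reviewer raises 40 flags, 32 of which are real bugs,
# out of 50 real bugs present in the code under review.
print(recall(32, 50))     # 0.64 -- 18 real bugs were missed
print(precision(32, 40))  # 0.8  -- 8 flags were false positives
```

The two pull in opposite directions: flagging more aggressively raises recall but tends to lower precision, which is why optimizing both per language and repository matters.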
While those gains will continue to deliver impact, we think an even more transformational shift is coming: one that will drastically reduce the need for humans to review code, relieve the review bottleneck, and, as a result, increase the pace at which code can merge to production.
Code Review will be always on and earlier
Code review will become always-on, automatic, and largely invisible – pulled closer to where code is being written.
Coding models will continue their remarkable pace of writing better and better code, but I'm a firm believer that distinct, specialized code review sub-agents will exist. These review agents will continuously analyze code as it's written and go to great lengths to verify correctness. They will automatically coordinate with coding agents to validate their findings and address issues in real time.
By the time a pull request is opened, its code will be far more likely to be correct and mergeable than today's pull requests are.
AI as the Orchestration Layer
The purpose of a pull request will stay the same: ensuring that the code being merged is correct and ready. But instead of humans managing these PRs and coordinating merge readiness, AI agents will orchestrate them. They will assess whether the code was sufficiently tested before the pull request was created (e.g. via an attestation from the sub-agent that reviewed it), perform additional reviews if needed, direct sub-agents to make changes, evaluate blast radius, assess approvability, and decide whether human escalation is required.
For example, a PR may be automatically approved if:
- No unresolved correctness issues are found by trusted AI review agents
- The change has a small blast radius
- The author or agent has a strong trust profile
- The change doesn't trigger any policy guardrails set by the organization (e.g. changes to auth handlers require human review even if no issues are found)
If any of these conditions aren't met, the agent "escalates" by requiring human approval. A growing number of PRs won't require human review. And this orchestration will happen continuously, automatically, and by default without requiring human attention.
Instead of reviewing and approving every pull request, humans will define the policies that govern when approval is safe, the criteria under which an AI agent is allowed to merge changes automatically, and when it must escalate. This is a better use of human bandwidth and judgment, and a far more scalable system, one that will be required to accommodate the influx of AI-written code from agents.
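A policy layer like this could be sketched as a simple rule check. Everything below is hypothetical: the type names, thresholds, and guarded paths are illustrative assumptions, not Macroscope's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    """Hypothetical output of a trusted AI review agent for one PR."""
    unresolved_correctness_issues: int
    blast_radius: int                 # e.g. downstream call sites affected
    author_trust_score: float         # 0.0 (unknown) to 1.0 (strong record)
    touched_paths: list[str] = field(default_factory=list)

# Organization-set guardrails: changes under these paths always require a
# human, even when the AI review finds no issues (e.g. auth handlers).
HUMAN_REVIEW_PATHS = ("src/auth/", "src/billing/")

def merge_decision(r: ReviewResult) -> str:
    """Return 'auto-merge' or 'escalate' per the policy rules above."""
    if r.unresolved_correctness_issues > 0:
        return "escalate"             # correctness issues block auto-merge
    if r.blast_radius > 20:
        return "escalate"             # change is too far-reaching
    if r.author_trust_score < 0.8:
        return "escalate"             # author/agent lacks a trust profile
    if any(p.startswith(HUMAN_REVIEW_PATHS) for p in r.touched_paths):
        return "escalate"             # policy guardrail triggered
    return "auto-merge"
```

The important design choice is that humans author the constants and guardrails once, while the decision itself runs continuously and by default without human attention; a clean, small, trusted change auto-merges, and anything touching `src/auth/` escalates regardless of review findings.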
This is already how modern AI customer support works. Products like Sierra and Decagon already deflect a substantial share of support volume with agents that have full context and the tools to understand customer concerns and resolve them – issuing refunds, updating accounts, and more. Humans encode the rules, like which issues can be resolved automatically, which require escalation, and under what conditions the agent is allowed to act.
Code review will follow the same arc.
The organizations that adapt to this new world will ship faster, with fewer bugs, and happier engineers. At Macroscope, we're working to pull this future closer.