Software Engineering

Reviewing AI-generated code, architecture, and spec alignment

AI supports review of generated or proposed changes: checking code against the intended spec, surfacing architectural risks, drafting pull request feedback, and helping engineers judge whether an implementation is actually production-ready.

Why the human is still essential here

Human judgment is essential to review AI-generated code, interpret architectural and spec tradeoffs, decide which findings matter, and catch confident but incorrect output before changes move forward.

How people use this

AI-assisted pull request review

Engineers use AI to draft or explain code changes in a pull request, then manually verify correctness, edge cases, and maintainability before approving the merge.

GitHub Copilot / Cursor

Architecture option critique

Teams use AI to propose service boundaries, data flows, or refactor paths, then rely on experienced engineers to judge tradeoffs, scalability, and operational fit.

Claude / ChatGPT

Debugging generated fixes

AI suggests likely root causes and code fixes from stack traces or repository context, but the engineer confirms the diagnosis against logs, tests, and system behavior.

Claude Code / OpenAI Codex


Community stories (2)

Medium

How I Use AI in My Code Review and PR Workflow Right Now

Not the idealized version. The actual one.

I lead a distributed engineering team at Kaz Software. We ship across multiple projects simultaneously, which means PR queues pile up fast. Two years ago, every review was entirely human. Today, roughly 60% of the first-pass review work is handled by AI before a human engineer ever opens the diff.


This is not a post about what AI could do for your review workflow. This is a walkthrough of what I actually do, the tools I rely on, and where I still refuse to let AI make the call.


Three tools do almost all the work:


GitHub Copilot handles in-editor, inline review and the PR summary layer directly inside GitHub.


Claude Code handles deep architectural review, spec alignment checks, and the pre-commit audit that I run before any significant PR goes up.


Codex CLI handles the command-line tasks: automated diff analysis, generating test scaffolding for uncovered paths, and batch processing when I need to review multiple files with a specific lens.
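The "automated diff analysis" input in that last step is just the branch diff. A rough sketch of collecting it with plain git, assuming the AI review tool accepts a diff as a file or on stdin (the throwaway repo, branch names, and `run` helper are illustrative, not from the author's setup):

```python
import subprocess
import tempfile
from pathlib import Path

def run(repo: Path, *args: str) -> str:
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(
        ["git", "-C", str(repo), *args],
        check=True, capture_output=True, text=True,
    ).stdout

# Build a throwaway repo so the example is self-contained.
repo = Path(tempfile.mkdtemp())
run(repo, "init", "-q", "-b", "main")
run(repo, "-c", "user.email=ci@example.com", "-c", "user.name=ci",
    "commit", "-q", "--allow-empty", "-m", "init")
run(repo, "checkout", "-q", "-b", "feature")
(repo / "calc.py").write_text("def add(a, b):\n    return a + b\n")
run(repo, "add", "calc.py")
run(repo, "-c", "user.email=ci@example.com", "-c", "user.name=ci",
    "commit", "-q", "-m", "add calc")

# The first-pass review input: everything the feature branch changed
# relative to its merge base with main (the triple-dot form).
diff = run(repo, "diff", "main...feature")
print("calc.py" in diff)
```

From here the saved diff can be handed to whichever CLI tool does the first pass; the triple-dot form matters because it diffs against the merge base rather than the current tip of main.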

Nur Farazi, Engineering Team Lead at Kaz Software
Apr 4, 2026
LinkedIn

I spent my first 6 years as an engineer debugging without Claude Code or Codex, reading docs that hadn't been summarised by AI, and struggling through concepts until they stuck.

That struggle built something AI can't replace: judgement.


Now I use AI every day. And it's the best thing that's happened to my productivity. But it only works because the fundamentals came first.


I can review AI-generated code because I understand system design. I can spot bad architecture because I've built and operated real systems. I can tell when the output looks confident but is completely wrong because I've been wrong enough times myself.


Pre-AI fundamentals plus post-AI speed. That's the combo.


AI makes fast engineers faster. But it also makes uninformed engineers more dangerous.


Don't skip the fundamentals to chase the tools. Learn both. But learn them in the right order.


---

ā™»ļø Repost to inspire another engineer to learn the fundamentals

āž• Follow Abdirahman Jama for software engineering tips

Abdirahman Jama, Software Development Engineer @ AWS
Apr 2, 2026