Engineering · 7 min read · March 3, 2026

Code Review Best Practices: Making Reviews Worth Everyone's Time

Code reviews are one of the highest-leverage engineering practices when done well — and a source of friction and resentment when done poorly. Here's how to do them right.

James Ross Jr.

Strategic Systems Architect & Enterprise Software Developer

The Code Review That Makes People Dread Sending PRs

I've worked in codebases where submitting a pull request felt like sending your work to a tribunal. Comments were harsh, nit-picks were prolific, the definition of "done" shifted with the reviewer's mood, and approval could take days of back-and-forth. People started avoiding reviews — submitting large PRs infrequently, self-merging minor changes, or just getting a teammate to rubber-stamp it without reading it.

I've also worked on teams where code review was genuinely useful — where reviews were fast, comments were constructive, and the back-and-forth made the final code noticeably better. The difference wasn't intelligence or experience. It was process and norms.

Good code review is a learnable practice. Here's what it looks like.


What Code Review Is Actually For

Before fixing the process, align on the purpose. Code review serves several functions, and teams that conflate them end up with confused review cultures:

Correctness and logic validation. Does the code actually do what it's supposed to do? Are there edge cases the author missed? Does the business logic match the requirements?

Maintainability and readability. Will the next developer who reads this code understand it? Is it unnecessarily complex? Are the variable names clear? Is the structure consistent with the rest of the codebase?

Security review. Are there injection vulnerabilities? Is user input properly validated? Are permissions being checked appropriately?

Knowledge sharing. Reviews are one of the main ways teams transfer knowledge about the codebase, the business domain, and engineering patterns. A thorough review comment teaches the author something they didn't know.

Style and conventions. Is this consistent with how the team writes code?

These are not equally important. Correctness and security issues must be addressed. Readability issues should be addressed if they're significant. Style should be handled by automated linting, not human reviewers.


The Reviewer's Responsibilities

Review promptly. The most common complaint about code review isn't the quality of feedback — it's the delay. A PR that sits unreviewed for three days creates a cascade of problems: merge conflicts accumulate, the author moves on to other work mentally, and the feedback arrives in a context where making changes feels disruptive.

Agree on a team norm for review turnaround. 24 hours is a reasonable target for most teams. For urgent changes, 4 hours.

Distinguish comment severity. Not all review comments are equal. A comment about a security vulnerability is different from a comment about variable naming. Train yourself to communicate the severity explicitly:

  • "Must fix before merge: This allows SQL injection by interpolating user input directly."
  • "Suggestion: Consider extracting this into a separate function for testability."
  • "Nit: Naming — userList could be users per our convention."

When everything is the same tone, authors can't tell which comments are blockers and which are optional. Label them.
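To make the "must fix" example above concrete, here's a minimal sketch of the difference between interpolating user input into SQL and passing it as a bound parameter. The `{ text, values }` shape mirrors parameterized-query APIs like node-postgres, but the function names here are illustrative, not any specific library's API:

```typescript
// Unsafe: the user's input becomes part of the SQL text itself.
function findUserUnsafe(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// Safe: the SQL text is fixed; the value travels separately as a parameter.
function findUserSafe(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}

const hostile = "' OR '1'='1";
// The unsafe query's WHERE clause now matches every row.
console.log(findUserUnsafe(hostile));
// The safe query's SQL is unchanged; the hostile string stays data.
console.log(findUserSafe(hostile).text);
```

A reviewer who pastes a two-line before/after like this into a "must fix" comment saves a full round-trip of clarification.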

Ask questions rather than making declarations. "This will cause a race condition" is a declaration that puts the author on the defensive. "Could this cause a race condition if two requests hit this endpoint simultaneously? What happens to the counter if they both read before either writes?" is a question that invites thought, and it lets the reviewer be wrong gracefully if the concern turns out to be unfounded. One of these builds trust. The other builds resentment.
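The race that question describes can be sketched in a few lines: two handlers both read the counter before either writes, so one increment is lost. The `await` here stands in for whatever real async work (a database read, a cache call) lets the second request interleave:

```typescript
let counter = 0;

async function racyIncrement(): Promise<void> {
  const current = counter;   // 1. read the current value
  await Promise.resolve();   // 2. yield, as real async I/O would
  counter = current + 1;     // 3. write the (now stale) value back
}

async function demo(): Promise<number> {
  counter = 0;
  // Both handlers run step 1 before either reaches step 3.
  await Promise.all([racyIncrement(), racyIncrement()]);
  return counter; // 1, not 2: one update was lost
}

demo().then((n) => console.log(`counter = ${n}`));
```

Phrased as a question with a concrete interleaving like this, the comment is checkable by the author in thirty seconds, whether or not the race is real.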

Explain the why, not just the what. "Don't do it this way" is unhelpful. "I'd avoid this approach because it creates a circular dependency between these two modules — here's the pattern we use instead" is a review comment that teaches something.

Approve when the code is good enough. Perfection is the enemy of shipping. If the code is correct, secure, and reasonably maintainable, approve it. Your personal preference for a different architecture or a different naming convention is not a blocking issue.


The Author's Responsibilities

Keep PRs small. The single biggest variable in review quality is PR size. A 50-line PR with a clear description gets thorough, fast review. A 1,200-line PR covering three features gets a rubber stamp or a three-day ordeal. Break large features into reviewable chunks. One logical change per PR.

Write a meaningful description. The PR description should answer: What does this change do? Why was this approach chosen? Is there anything the reviewer should pay special attention to? Are there known limitations or follow-up tasks?

"Fix bug" is not a description. "Fix off-by-one error in pagination that caused the last item on each page to be skipped — added a test covering this case" is a description.

Test your own code before requesting review. The reviewer's job is not to find your bugs. Run the tests. Manually verify the feature works. Check the obvious edge cases yourself. Review is for the things you couldn't find yourself.

Respond to feedback constructively. If you disagree with a comment, explain why in the PR thread. "I see the point, but I went this direction because..." is a legitimate response. Silently making changes you disagree with undermines the process; silently ignoring a blocking comment is worse.


Automating the Low-Value Work

Linters, formatters, and static analysis tools exist precisely so that humans don't have to spend their review attention on things machines can catch. If your team is still discussing spacing, quotation marks, trailing commas, import order, and unused variables in human reviews, you have an automation gap.

Set up ESLint (or the equivalent for your language), Prettier or similar formatters, and run them in CI. No PR merges if the automated checks fail. This removes an entire class of review friction at zero cost to human attention.
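As a sketch of what "no PR merges if the automated checks fail" looks like in practice, here's a hypothetical GitHub Actions workflow for a Node project. The script names (`lint`, `format:check`) are illustrative and assume corresponding entries in `package.json`; adapt to your stack and CI system:

```yaml
# .github/workflows/checks.yml
name: checks
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint          # e.g. eslint .
      - run: npm run format:check  # e.g. prettier --check .
```

Pair this with a branch-protection rule that requires the check to pass, and style debates disappear from human review entirely.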

Similarly, type systems catch a category of errors that don't need to be a human review responsibility. TypeScript in strict mode, with no untyped escape hatches, removes whole categories of bugs from the review queue.
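A minimal `tsconfig.json` sketch of that posture: `strict` turns on the core checks, and the two extra flags close common escape hatches. Which additional flags are worth it varies by codebase:

```jsonc
{
  "compilerOptions": {
    "strict": true,                   // noImplicitAny, strictNullChecks, etc.
    "noUncheckedIndexedAccess": true, // indexed access may be undefined
    "noImplicitOverride": true        // overrides must be marked explicitly
  }
}
```

Every bug class these flags catch is one a human reviewer no longer has to hunt for.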


Review Culture at the Team Level

Code review norms don't emerge spontaneously. They need to be set explicitly, usually in a CONTRIBUTING.md or team handbook, and reinforced by the most senior engineers modeling the behavior they want to see.

The behaviors to explicitly establish:

  • Review turnaround expectations (24 hours)
  • PR size expectations (under 400 lines as a default target)
  • Comment severity labeling (must fix / suggestion / nit)
  • The criteria for approval (correct, secure, maintainable — not perfect)
  • How to handle disagreements (thread discussion, escalate to tech lead if unresolved in 2 days)

Codifying these norms removes the ambiguity that causes most review friction. When everyone knows the rules, the rules aren't personal.


Code review is one of the highest-leverage engineering investments a team can make — it catches bugs, spreads knowledge, and maintains quality without adding a separate QA cycle. If you're building or reforming an engineering team and want to think through how to structure your review practice, book a call at calendly.com/jamesrossjr.

