Chapter 7: Code Review

Code review is a quality gate — not a formality, not a performance, and not a social ritual. Superpowers treats code review as a rigorous technical activity with explicit protocols for both requesting and receiving feedback.


Part 1: Requesting a Code Review

Why Structure Matters

An unstructured code review request ("hey can you review this?") produces unstructured feedback. The reviewer does not know what the code is supposed to do, what problem it solves, or what tradeoffs were made. The result is surface-level comments that miss important issues.

A structured request gives the reviewer everything they need to give you genuinely useful feedback.

Preparing Context

Before dispatching a reviewer, prepare the following three pieces of information:

WHAT — What does this change do? Write 2–4 sentences describing:

  • The problem being solved
  • The approach taken
  • Any key design decisions and why you made them
  • What you are not doing and why

PLAN — What was the implementation plan? Link to or summarize the plan that guided this work. This lets the reviewer verify that the implementation matches the intent.

BASE_SHA → HEAD_SHA — The exact commit range being reviewed:

# Get the base (where your branch diverged from main)
git merge-base main HEAD

# Get the current HEAD
git rev-parse HEAD

This ensures the reviewer looks at exactly the right diff — not unrelated commits, not the entire history.

Dispatching the Reviewer Subagent

With context prepared, dispatch a code review subagent with the following structure:

REVIEW REQUEST
--------------
WHAT: [2-4 sentence description of the change]

PLAN: [link or summary of implementation plan]

DIFF RANGE: [BASE_SHA]..[HEAD_SHA]

FOCUS AREAS: [optional: specific areas you want scrutinized]

The subagent will examine the diff, check against the plan, run relevant tests, and return structured feedback.
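As an illustrative sketch, the request above can be assembled mechanically from the three prepared pieces of context. The `build_review_request` helper here is hypothetical, not part of any tool, and the SHAs are placeholders:

```shell
# Hypothetical helper: assemble a review request from prepared context.
# Usage: build_review_request WHAT PLAN DIFF_RANGE [FOCUS_AREAS]
build_review_request() {
  printf 'REVIEW REQUEST\n--------------\n'
  printf 'WHAT: %s\n\n' "$1"
  printf 'PLAN: %s\n\n' "$2"
  printf 'DIFF RANGE: %s\n\n' "$3"
  printf 'FOCUS AREAS: %s\n' "${4:-none specified}"
}

# Placeholder values for illustration only:
build_review_request \
  "Adds retry with backoff to the fetch client." \
  "docs/plans/fetch-retry.md" \
  "abc1234..def5678"
```

The point of scripting it, even loosely, is that a request with an empty WHAT or a missing diff range fails visibly instead of silently producing a vague review.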

Handling Review Feedback by Severity

Not all feedback has the same urgency. Handle each severity level differently:

  • Critical: bug, security issue, data loss risk, or a broken contract. Blocks completion; fix immediately before any other work.
  • Important: significant quality issue, missing test, or performance problem. Fix before starting the next task; do not accumulate.
  • Minor: style preference, naming suggestion, or optional improvement. Log it and create a follow-up ticket; do not gold-plate the current PR.

The most common mistake is treating all feedback as equally urgent. Critical issues block. Minor issues get logged. Conflating them causes either delays on trivial style feedback or dismissal of real bugs.
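The three severity levels can be sketched as a small dispatch. The `triage` helper and its labels are illustrative assumptions, not part of any defined tooling:

```shell
# Illustrative sketch: map a review finding's severity to the required action.
triage() {
  case "$1" in
    critical)  echo "BLOCKS COMPLETION: fix immediately before any other work" ;;
    important) echo "FIX BEFORE NEXT TASK: do not accumulate" ;;
    minor)     echo "LOG: create a follow-up ticket; do not gold-plate this PR" ;;
    *)         echo "unknown severity: $1" >&2; return 1 ;;
  esac
}

triage critical   # prints: BLOCKS COMPLETION: fix immediately before any other work
triage minor      # prints: LOG: create a follow-up ticket; do not gold-plate this PR
```

Note that an unknown severity is an error rather than a default: forcing every finding into one of the three buckets is what prevents the "treat everything as equally urgent" failure mode.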


Part 2: Receiving a Code Review

The Core Problem: Performative Agreement

AI agents — and many developers — have a tendency toward performative agreement during code review. This is when the reviewer suggests something, and the implementer immediately says "You're absolutely right! I'll fix that right away!" and makes the change without genuinely evaluating whether the suggestion is correct.

Performative agreement is harmful because:

  • It applies incorrect suggestions to working code
  • It trains reviewers to expect unconditional compliance
  • It degrades code quality over time through accumulated bad suggestions
  • It provides false confidence that review is catching real problems

Prohibited Language

The following phrases indicate performative agreement and are not allowed:

  • "You're absolutely right!" (validates without evaluation) → "Fixed. Changed X to Y because [reason]."
  • "Great catch!" (social flattery, no content) → "Verified. The issue was Z."
  • "I'll fix that right away" (commitment before understanding) → "Reading this now."
  • "That makes total sense" (agreement without analysis) → "I understand the concern. Here's my evaluation: ..."
  • "You're correct, I should have..." (self-deprecation without action) → "Implemented. Here's what changed."

The replacement language focuses on facts: what changed, why it changed, and what was verified.
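As a sketch, a draft reply can be screened for these phrases before it is sent. The `check_reply` helper is hypothetical and its phrase list is a sample, not an exhaustive filter:

```shell
# Illustrative sketch: reject a draft reply that contains performative agreement.
check_reply() {
  for phrase in "You're absolutely right" "Great catch" \
                "I'll fix that right away" "That makes total sense"; do
    case "$1" in
      *"$phrase"*) echo "PROHIBITED: $phrase"; return 1 ;;
    esac
  done
  echo "OK"
}

check_reply "Fixed. Changed X to Y because the old path dropped errors."  # prints: OK
```

A phrase filter cannot detect genuine evaluation, of course; it only catches the reflexive openers that usually signal its absence.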

The Correct Response Pattern

When you receive a code review comment, follow this sequence precisely:

READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT

READ: Read the entire comment, including any referenced code or documentation. Do not respond while reading.

UNDERSTAND: Can you explain the reviewer's concern in your own words? If not, ask a clarifying question. Do not guess at intent.

VERIFY: Check if the concern is valid by looking at the actual code. Run the relevant tests if applicable. Look at the real behavior, not your mental model of it.

EVALUATE: Apply the YAGNI check (You Aren't Gonna Need It):

  • Is this change necessary for the current requirements?
  • Does it solve a real, demonstrated problem?
  • Does it add complexity that the codebase doesn't currently warrant?
  • Is this a legitimate best practice or a personal style preference?

RESPOND: Write a response based on your evaluation, not your instinct to agree.

IMPLEMENT: If the suggestion is valid, implement it. If it is not valid, explain your reasoning.

YAGNI Check on Reviewer Suggestions

A reviewer suggesting "you should add caching here" or "you should abstract this into a generic interface" may be right — but the YAGNI test must be applied:

  • Does the current requirement actually need caching? Is there measured evidence of a performance problem?
  • Is there more than one concrete use case for the abstraction right now?

If the answer is no, the correct response is: "Noted as a potential future improvement. For the current requirements, the simpler approach is sufficient. I've logged it as a follow-up item."

Valid Pushback Scenarios

You are not just permitted to push back on review feedback — you are required to when:

  1. The suggestion introduces a bug. Demonstrate with a test or concrete example why the suggested change produces incorrect behavior.

  2. The suggestion violates the established plan. If the implementation plan was reviewed and approved, deviations require explicit re-approval, not unilateral changes during code review.

  3. The suggestion is out of scope. The suggestion might be good, but it belongs in a separate task. Implementing it now expands scope, delays delivery, and creates mixed-purpose commits.

  4. The suggestion is a style preference with no correctness impact. Style preferences belong in a linter config, not in code review feedback. If it cannot be automated, it is probably not worth blocking a PR.

  5. You have verified the code is correct and the reviewer is mistaken. Provide your evidence clearly and respectfully. You may be wrong — but you must make the case.

Pushback Template

When pushing back, use this structure:

I've evaluated this suggestion. My assessment:

CONCERN: [Restate the reviewer's concern accurately]
INVESTIGATION: [What I checked to evaluate it]
FINDING: [What I found]
DECISION: [Implement / Log for later / Disagree because...]
EVIDENCE: [Test output, reference, or concrete example]

This makes your reasoning auditable and keeps the review conversation focused on facts.


Summary

Code review done well is one of the most powerful quality tools available. Code review done poorly, whether as a rubber stamp or as compliance theater, is worse than no review at all because it consumes time without producing quality.

The Superpowers protocol ensures that both sides of a review are rigorous: reviewers get the context they need, and implementers evaluate feedback with technical honesty rather than social compliance.