Chapter 9: Iron Laws & Anti-Patterns

The Iron Laws are non-negotiable rules that govern every session with a Superpowers-enabled agent. They are not guidelines, defaults, or suggestions. They are hard constraints that cannot be overridden by instructions, urgency, or convenience.

Understanding why these laws exist — and what happens when they are broken — is essential for using Superpowers effectively.


The 4 Iron Laws

Iron Law 1: TDD

"NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST"

Every feature, every bug fix, every behavioral change must be preceded by a failing test that describes the desired behavior. This is Test-Driven Development (TDD) applied strictly.

Why this is non-negotiable:

Without a failing test first, you have no proof that your code does what you think it does. You are implementing and hoping. With a failing test first, you have a precise specification that your implementation must satisfy — and a verification mechanism that keeps working for as long as the code lives.

What this prevents:

  • Implementing the wrong thing with confidence
  • Tests that always pass because they test nothing
  • "I'm sure it works" as the only verification
  • Regression when the code changes six months later

How to apply it:

1. Write the test → it fails (RED)
2. Write the minimum code to make it pass (GREEN)
3. Refactor without breaking the test (REFACTOR)

If you cannot write a failing test, you do not yet understand the requirement well enough to implement it. Stop and clarify.
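The RED → GREEN cycle above can be sketched in a few lines. This is an illustrative example only, not part of Superpowers; `slugify` is a hypothetical function invented for the demonstration.

```python
# RED: write the test first. It specifies the behavior before any code exists.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Running the test now fails with NameError: name 'slugify' is not defined.
# That failure is the proof the test actually tests something.

# GREEN: the minimum implementation that satisfies the specification.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

test_slugify_lowercases_and_hyphenates()  # now passes
print("green")
```

The REFACTOR step would then reshape `slugify` (for example, handling repeated spaces) while re-running the same test after every change.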


Iron Law 2: Verification

"NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE"

You may not claim that work is complete, fixed, or passing unless you have just run verification and have the output in front of you.

Why this is non-negotiable:

Mental models are unreliable. A change that "should work" frequently does not. Code that "was working earlier" may have been broken by a subsequent change. The only valid evidence is the output of running the actual verification command right now.

What this prevents:

  • Shipping broken code with high confidence
  • The "it worked on my machine" failure pattern
  • Stale test results being treated as current
  • Verbal assurances substituting for evidence

The verification must be fresh: Running tests five commits ago and claiming the current state is fine is not verification. Run the tests on the current state, read the output, then make the claim.
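One way to internalize "fresh" is that there is no cached result to consult: the only admissible evidence is the output of a run you just performed. The sketch below is illustrative, not a Superpowers component; `verify_fresh` is a hypothetical helper, and the trivial `python -c` check stands in for a real test suite.

```python
import subprocess
import sys

def verify_fresh(cmd: list[str]) -> bool:
    """Run the verification command NOW and report pass/fail.

    The command's actual output is printed, never summarized,
    and the claim rests solely on this run's exit code.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout, end="")   # show the real output to the reader
    return result.returncode == 0

# A trivially-true check stands in for running a real test suite.
ok = verify_fresh([sys.executable, "-c", "assert 1 + 1 == 2"])
print("verified:", ok)
```

The point of the design is that `verify_fresh` has no memory: there is nothing stale it *could* report, which is exactly the property the law demands of completion claims.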


Iron Law 3: Debugging

"NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST"

Before writing any fix code, you must complete a root cause investigation as described in Chapter 6. You must be able to state, in one sentence, exactly what is causing the bug.

Why this is non-negotiable:

Fixing symptoms without understanding causes is the primary driver of accumulating technical debt. Each symptom-fix adds complexity, masks the real problem, and makes the root cause harder to find later. The codebase becomes a palimpsest of patches that nobody understands.

What this prevents:

  • The same bug re-appearing in a different form
  • Fix chains where each fix breaks something else
  • Loss of architectural clarity
  • The 3 Failures pattern (see Chapter 6)

The root cause statement test: Before implementing a fix, you must be able to complete this sentence without hedging: "The bug is caused by [specific thing] because [evidence]." If you cannot, investigation is incomplete.
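In practice, "investigation complete" often means the hypothesized mechanism has been reproduced in isolation before any fix is written. The sketch below uses an invented example (currency totals drifting due to float accumulation); the bug, the test, and `add_cents` are all hypothetical.

```python
# Root cause statement: "The bug is caused by summing currency as floats,
# because 0.1 + 0.2 != 0.3 in binary floating point."
# The investigation is complete only when a minimal reproduction confirms it:

def test_reproduces_root_cause():
    assert 0.1 + 0.2 != 0.3   # confirms the hypothesized mechanism directly

# Only now is a fix justified — here, switching to integer cents:
def add_cents(a_cents: int, b_cents: int) -> int:
    return a_cents + b_cents

test_reproduces_root_cause()
assert add_cents(10, 20) == 30
print("root cause confirmed, fix verified")
```

Had the reproduction failed, the hypothesis would be wrong and the fix unjustified — which is precisely what the one-sentence test is designed to catch.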


Iron Law 4: Skills

"NO SKILL WITHOUT A FAILING TEST FIRST"

When creating or modifying a Superpowers skill, the same TDD rule applies. Write a test that demonstrates the skill's intended behavior, verify it fails, then implement the skill.

Why this is non-negotiable:

Skills are code. They have behavior that can be correct or incorrect. Without tests, skill development is just as unreliable as any other untested code — it works until it doesn't, and you have no way to know when it stopped working or why.


Master Red Flags Table

These are the warning signs that an Iron Law is about to be violated. When you observe any of these, STOP the current action and return to the correct protocol.

| #  | Red Flag | What It Signals | STOP Action |
|----|----------|-----------------|-------------|
| 1  | "This should work" | Verification has been skipped | Run the verification command now |
| 2  | "It's probably fine" | Assumption without evidence | Find the evidence or admit uncertainty |
| 3  | "The tests should pass" | Tests have not been run | Run the tests |
| 4  | "Let me just try this fix" | No root cause investigation | Complete Phase 1 of debugging first |
| 5  | "I'll add tests later" | TDD violation in progress | Write the failing test first, now |
| 6  | "This is a small change, no test needed" | TDD exception being invented | No exceptions. Write the test. |
| 7  | "You're absolutely right!" (immediate) | Performative agreement | Read → Understand → Verify → Evaluate first |
| 8  | Fixing the test to match broken behavior | Covering up a bug | Fix the code, not the test |
| 9  | Committing directly to main | Bypassing isolation protocol | Create a worktree and branch |
| 10 | Fourth fix attempt on same bug | Architecture problem being masked | Stop. Redesign. Do not continue patching. |
| 11 | Summarizing test output instead of showing it | Possible hallucination or selective reading | Show the actual output |
| 12 | Skipping baseline tests in a new worktree | Unknown starting state | Run full test suite before writing any code |

Anti-Pattern Summary

The following anti-patterns appear across all chapters of this guide. They are collected here as a reference.

From Chapter 1: Foundation

  • Tool Misconfiguration: Using Superpowers without reading the CLAUDE.md or setup documentation. Skills require proper configuration to trigger correctly.
  • Skill Bypassing: Asking the agent to "just do it" without invoking the appropriate skill. Skills exist for a reason; bypassing them produces lower-quality output.

From Chapter 2: Writing Plans

  • Implementation Without a Plan: Starting to write code before a plan is written and reviewed. Plans catch architectural problems cheaply; code does not.
  • Vague Plans: Writing plans in terms of outcomes rather than specific steps. "Implement authentication" is not a plan. A numbered sequence of concrete actions is a plan.

From Chapter 3: Executing Plans

  • Sequential Execution of Independent Tasks: Running tasks one by one when they could be parallelized. This multiplies wall-clock time unnecessarily.
  • Shared State Between Parallel Tasks: Running tasks in parallel that modify the same files or state. This produces merge conflicts and race conditions.

From Chapter 4: TDD

  • Test After Implementation: Writing tests after the code is written. These tests are likely to be shaped around the implementation rather than the requirement.
  • Testing Implementation Details: Writing tests that break every time the code is refactored, even when behavior is unchanged. Test behavior, not implementation.

From Chapter 5: Brainstorming

  • Skipping Brainstorm Before Implementation: Going straight to coding without exploring the problem space. The brainstorm is required before any creative work.
  • Treating Brainstorm as a Formality: Going through brainstorm motions without genuinely exploring alternatives. The value is in the exploration, not the ritual.

From Chapter 6: Debugging

  • Guessing and Checking: Trying fixes without root cause investigation. This is the primary source of the 3 Failures pattern.
  • Fixing Multiple Things Simultaneously: Making multiple changes to address a bug, making it impossible to know which change worked.
  • Premature Completion Claims: Saying "fixed" before running verification.

From Chapter 7: Code Review

  • Performative Agreement: Immediately agreeing with review feedback without evaluating it. This applies bad suggestions to working code.
  • Ignoring Severity Levels: Treating all feedback as equally urgent. Critical issues block; minor issues get logged.

From Chapter 8: Git Worktrees

  • Committing to Main Directly: Any change to main outside of a merge or PR. Main is always a deployable state.
  • Working Without Baseline Tests: Starting feature work without first verifying that all tests pass on the base branch.

Enforcement

The Iron Laws are enforced by agent behavior, not by tooling. When you observe a violation — whether by the agent or by instructions that would cause a violation — the correct response is to name the law being violated and return to the correct protocol.

This is not obstruction. This is how quality is maintained. Every exception to an Iron Law is a bet that this particular case is special enough to warrant it. That bet is almost always wrong.