
Chapter 10: Frequently Asked Questions


General

Where can I start?

If you're new to Superpowers, here's a recommended reading path:

  1. Chapter 0: Welcome — Understand what Superpowers is and who it's for
  2. Chapter 1: Getting Started — Set up your environment and run your first skill
  3. Chapter 2: Brainstorming — Learn how to explore ideas before writing code
  4. Chapter 3: Writing Plans — Turn ideas into structured implementation plans
  5. Chapter 9: Iron Laws — The core principles that make everything work

From there, dive into the chapters that match your workflow: TDD (Chapter 5), Debugging (Chapter 6), Code Review (Chapter 7), or Git Worktrees (Chapter 8).

Also check out the Real-World Use Cases section to see how Superpowers is applied in practical scenarios — from solo feature development to team debugging and parallel refactoring.


What platforms does Superpowers support?

Superpowers is designed to work with multiple AI coding platforms:

Platform      Status         Notes
Claude Code   Full support   Primary platform; all features available
Cursor        Full support   Official support added in v4.3.1 (Feb 2026)
Gemini CLI    Full support   Extension support added in v5.0.1 (Mar 2026)
Codex         Experimental   Available since v3.3.0 (Oct 2025)
OpenCode      Supported      Added in v3.5.0

Each platform uses the same skill files and CLAUDE.md configuration. Platform-specific differences (if any) are documented in the release notes for the version that introduced support.


How is Superpowers different from using a regular AI coding agent?

Using an AI coding agent without Superpowers is like using a powerful tool without a safety manual. The agent is capable, but it has no enforced discipline around quality, no systematic debugging protocols, and no safeguards against the most common failure modes.

With Superpowers:

  • Every feature starts with a failing test. There is no "implement and hope."
  • Every completion claim requires evidence. There is no "should work."
  • Every bug requires root cause investigation. There is no guessing.
  • Skills enforce domain-specific best practices. The agent follows the right approach for each task type automatically.
  • Plans are written before code is written. Architecture decisions happen on paper, not in production.

The result is not a slower agent — it is an agent whose work you can trust. The time saved on debugging, reverting, and untangling messy codebases more than compensates for the upfront rigor.
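The first two bullets describe the red-green cycle: test first, evidence of failure, then the minimal implementation. A small sketch of that discipline — the slugify function and its test are illustrative examples, not part of Superpowers itself:

```python
# Step 1 (red): write the failing test before any production code exists.
# Running this alone fails with NameError — that failure is the evidence
# that the test actually exercises the behavior.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("  Hello World  ") == "hello-world"

# Step 2 (green): only now write the minimal implementation that passes.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")
```

Running the test again and watching it pass is the "evidence" the second bullet demands: the claim "slugify works" is now backed by an observable transition from red to green.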


Can I use Superpowers with a team?

Yes. Superpowers is particularly well-suited to team environments because it enforces consistent practices across all team members using AI agents.

Recommended team setup:

  1. Commit CLAUDE.md to the repository. All team members use the same configuration. AI agents working on the project pick up the same rules.
  2. Commit the .claude/skills/ directory. Skills are shared. Custom skills created for your project are available to everyone.
  3. Enforce the worktree isolation policy. Each developer and each AI agent works in an isolated worktree. Main branch protection rules prevent direct commits.
  4. Use PR-based code review. The code review protocol (Chapter 7) integrates directly with GitHub/GitLab workflows.

With this setup, an AI agent working for one team member follows the same standards as an AI agent working for another — or a human developer following the team's coding standards.
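The four steps above can be sketched as a command sequence. This sketch runs in a throwaway directory so it is safe to try; the branch name, file contents, and the feature-login worktree path are illustrative, not prescribed by Superpowers:

```shell
set -e
# Work in a throwaway repo so the sketch is safe to run anywhere.
demo="$(mktemp -d)" && cd "$demo"
git init -q project && cd project
git config user.email "dev@example.com" && git config user.name "Dev"

# Steps 1-2: commit the shared CLAUDE.md and .claude/skills/ directory
# so every team member (and agent) picks up the same rules and skills.
echo "# Project rules for AI agents" > CLAUDE.md
mkdir -p .claude/skills
echo "Example project skill" > .claude/skills/example-skill.md
git add CLAUDE.md .claude/skills/
git commit -q -m "Share AI agent configuration and project skills"

# Step 3: isolate each piece of work in its own worktree and branch,
# never committing to main directly.
git worktree add ../feature-login -b feature/login

# Step 4: push the branch and open a PR for review
# (platform-specific, e.g. gh pr create or your GitLab equivalent).
```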


TDD & Quality

Is TDD mandatory? Are there exceptions?

Yes. TDD is mandatory. There are no exceptions.

This is Iron Law 1: "NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST."

Common arguments for exceptions and why they are rejected:

"It's just a small change." Small changes introduce bugs too. The time cost of writing a test for a small change is measured in minutes. The time cost of debugging a regression from an untested small change is measured in hours.

"We're moving fast, we'll add tests later." Tests added after implementation are shaped around the implementation, not the requirement. They test what the code does, not what it should do. They provide false confidence. "Later" rarely comes.

"The code is UI, it's hard to test." UI behavior can be tested. Component tests, snapshot tests, interaction tests, and end-to-end tests all exist. Difficulty is not exemption.

"I know this code is correct." This is the most dangerous argument. Your future self, or a colleague, does not share your current certainty. The test documents that certainty and preserves it.


What's the recommended model for each task type?

Model selection depends on the task requirements:

Task Type                                  Recommended Model                Reason
Architecture planning, complex reasoning   Claude Opus 4.6 (Thinking mode)  Extended reasoning produces better plans
Feature implementation, TDD                Claude Sonnet 4.5                Balanced speed and quality
Code review, debugging                     Claude Sonnet 4.5                Strong analysis capability
Simple refactoring, formatting             Claude Haiku 3.5                 Fast and cost-effective for mechanical tasks
Multi-agent parallel tasks                 Claude Haiku 3.5 (subagents)     Cost-efficient at scale

For Gemini CLI users: Gemini 2.0 Flash is recommended for implementation tasks; Gemini 2.5 Pro for architecture and planning.

For Cursor users: Use the Claude Sonnet model in Cursor's composer for feature work. Cursor's built-in chat can use smaller models for quick questions.



Troubleshooting

A skill is not triggering correctly. What do I do?

  1. Check the skill file exists in the correct location for your platform.
  2. Verify the trigger description in the skill file. The trigger condition must match the wording you actually use when invoking the skill.
  3. Be explicit. Instead of hoping the agent infers the right skill, name it directly: "Use the superpowers:systematic-debugging skill for this."
  4. Check for conflicts. If two skills have overlapping trigger conditions, the agent may choose the wrong one. Review your skill configuration.
  5. Re-read the skill file. Skills can become outdated when the codebase changes. The skill may be triggering correctly but producing advice that no longer applies.
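Steps 1 and 2 can be checked from the command line. The sketch below builds a throwaway skill so it runs anywhere; the .claude/skills/<name>/SKILL.md layout and the frontmatter fields follow the Claude Code convention and are an assumption here — other platforms may store skills elsewhere:

```shell
set -e
# Throwaway directory standing in for a project root.
cd "$(mktemp -d)"
mkdir -p .claude/skills/systematic-debugging
cat > .claude/skills/systematic-debugging/SKILL.md <<'EOF'
---
name: systematic-debugging
description: Use when investigating a bug to find its root cause
---
EOF

# Step 1: does the skill file exist where the platform looks for it?
test -f .claude/skills/systematic-debugging/SKILL.md

# Step 2: does the trigger description match how you invoke the skill?
grep -n "description:" .claude/skills/systematic-debugging/SKILL.md
```

If the grep output does not resemble the phrasing you use in your prompts, that mismatch is the most likely reason the skill never fires.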