Chapter 10: Frequently Asked Questions
General
What platforms does Superpowers support?
Superpowers is designed to work with multiple AI coding platforms:
| Platform | Status | Notes |
|---|---|---|
| Claude Code | Full support | Primary platform; all features available |
| Cursor | Full support | Official support added in v4.3.1 (Feb 2026) |
| Gemini CLI | Full support | Extension support added in v5.0.1 (Mar 2026) |
| Codex | Experimental | Available since v3.3.0 (Oct 2025) |
| OpenCode | Supported | Added in v3.5.0 |
Each platform uses the same skill files and CLAUDE.md configuration. Platform-specific differences (if any) are documented in the release notes for the version that introduced support.
How is Superpowers different from using a regular AI coding agent?
Using an AI coding agent without Superpowers is like using a powerful tool without a safety manual. The agent is capable, but it has no enforced discipline around quality, no systematic debugging protocols, and no safeguards against the most common failure modes.
With Superpowers:
- Every feature starts with a failing test. There is no "implement and hope."
- Every completion claim requires evidence. There is no "should work."
- Every bug requires root cause investigation. There is no guessing.
- Skills enforce domain-specific best practices. The agent follows the right approach for each task type automatically.
- Plans are written before code is written. Architecture decisions happen on paper, not in production.
The result is not a slower agent — it is an agent whose work you can trust. The time saved on debugging, reverting, and untangling messy codebases more than compensates for the upfront rigor.
Can I use Superpowers with a team?
Yes. Superpowers is particularly well-suited to team environments because it enforces consistent practices across all team members using AI agents.
Recommended team setup:
- Commit `CLAUDE.md` to the repository. All team members use the same configuration. AI agents working on the project pick up the same rules.
- Commit the `.claude/skills/` directory. Skills are shared. Custom skills created for your project are available to everyone.
- Enforce the worktree isolation policy. Each developer and each AI agent works in an isolated worktree. Main branch protection rules prevent direct commits.
- Use PR-based code review. The code review protocol (Chapter 7) integrates directly with GitHub/GitLab workflows.
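The first two points of the setup above can be sketched against a throwaway repository. This is a minimal illustration, not part of Superpowers itself: the placeholder skill file and commit message are invented, and the point is simply that `CLAUDE.md` and `.claude/skills/` are tracked files in the repo, not per-developer settings.

```shell
set -euo pipefail
cd "$(mktemp -d)"   # stand-in for your real repository
git init -q .

# Shared configuration lives in the repository itself
mkdir -p .claude/skills
echo "# Team-wide agent rules" > CLAUDE.md
echo "# Placeholder skill" > .claude/skills/example.md

git add CLAUDE.md .claude/skills
git -c user.name=example -c user.email=example@example.com \
    commit -qm "chore: share Superpowers config with the team"
git ls-files   # both files are now versioned alongside the code
```

Because the configuration is versioned, changes to team rules go through the same PR review as any other change.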
With this setup, an AI agent working for one team member follows the same standards as an AI agent working for another — or a human developer following the team's coding standards.
TDD & Quality
Is TDD mandatory? Are there exceptions?
Yes. TDD is mandatory. There are no exceptions.
This is Iron Law 1: "NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST."
Common arguments for exceptions and why they are rejected:
"It's just a small change." Small changes introduce bugs too. The time cost of writing a test for a small change is measured in minutes. The time cost of debugging a regression from an untested small change is measured in hours.
"We're moving fast, we'll add tests later." Tests added after implementation are shaped around the implementation, not the requirement. They test what the code does, not what it should do. They provide false confidence. "Later" rarely comes.
"The code is UI, it's hard to test." UI behavior can be tested. Component tests, snapshot tests, interaction tests, and end-to-end tests all exist. Difficulty is not an exemption.
"I know this code is correct." This is the most dangerous argument. Your future self, or a colleague, does not share your current certainty. The test documents that certainty and preserves it.
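The "failing test first" rule can even be mechanized. The guard below is a sketch, not part of Superpowers: `red_then_green` takes your real test command (for example `npm test`) and refuses to proceed to implementation while the new test still passes, because a test that passes up front pins nothing.

```shell
# Guard for Iron Law 1: confirm the "red" step before allowing "green".
red_then_green() {
  local test_cmd="$1"
  if eval "$test_cmd"; then
    # The test passed with no implementation: it does not test the new behavior.
    echo "refused: test already passes and does not pin the new behavior" >&2
    return 1
  fi
  echo "red confirmed: write the minimal implementation, then re-run: $test_cmd"
}
```

Usage: `red_then_green "npm test -- --filter new-feature"` before writing any production code.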
What's the recommended model for each task type?
Model selection depends on the task requirements:
| Task Type | Recommended Model | Reason |
|---|---|---|
| Architecture planning, complex reasoning | Claude Sonnet 4.5+ (Thinking mode) | Extended reasoning produces better plans |
| Feature implementation, TDD | Claude Sonnet 4.5 | Balanced speed and quality |
| Code review, debugging | Claude Sonnet 4.5 | Strong analysis capability |
| Simple refactoring, formatting | Claude Haiku 3.5 | Fast and cost-effective for mechanical tasks |
| Multi-agent parallel tasks | Claude Haiku 3.5 (subagents) | Cost-efficient at scale |
For Gemini CLI users: Gemini 2.0 Flash is recommended for implementation tasks; Gemini 2.5 Pro for architecture and planning.
For Cursor users: Use the Claude Sonnet model in Cursor's composer for feature work. Cursor's built-in chat can use smaller models for quick questions.
Installation & Updates
How do I update to a new version of Superpowers?
Superpowers is distributed as a set of files in your repository (`CLAUDE.md`, the skills directory, and related configuration). Updates are applied by pulling the latest version from the Superpowers repository.
Standard update process:
```shell
# 1. Create an update worktree (follow Chapter 8 protocol)
git worktree add ../myapp--superpowers-update -b chore/update-superpowers

# 2. In the update worktree, download the latest Superpowers files
#    (follow the installation instructions for your specific setup)

# 3. Run the full test suite to verify nothing broke
npm test

# 4. Review the changelog (Chapter 11) for breaking changes

# 5. Merge via PR following normal code review protocol
```
Before updating:
- Read the changelog for the version you are updating to
- Note any breaking changes that require configuration updates
- Ensure all current tests pass before beginning the update
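The last two checklist items can be scripted. The helper below is a sketch: the test command is passed in as arguments rather than hard-coded, so `npm test` is just the example runner from the update steps, not a requirement.

```shell
# Pre-update sanity check: refuse to update on top of uncommitted work,
# then confirm the current baseline is green.
pre_update_check() {
  if [ -n "$(git status --porcelain)" ]; then
    echo "refusing to update: working tree has uncommitted changes" >&2
    return 1
  fi
  # Run the full test suite; a red baseline makes update failures ambiguous
  "$@"
}
```

Usage: `pre_update_check npm test` before creating the update worktree.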
How do I add a new skill?
Skills are files in the `.claude/skills/` directory (for Claude Code) or the equivalent for your platform.
Process for adding a new skill:
1. Identify the domain. What specific area of development does this skill cover? Be precise — a skill for "React performance optimization" is better than a skill for "React."
2. Write a failing test first (Iron Law 4). Define what behavior the skill should produce before writing the skill.
3. Write the skill file. Skills are markdown files with a specific structure: trigger conditions, context, rules, and examples.
4. Test the skill. Invoke it in a real session. Verify it triggers on the right inputs and produces the right guidance.
5. Commit to the repository. Skills committed to the repo are available to all team members and agents.
For detailed skill creation guidance, use the `superpowers:writing-skills` skill.
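The "identify the domain" and "write the skill file" steps can be sketched as a scaffold. The directory location matches the Claude Code layout given above; the section headings mirror the structure the process describes (trigger conditions, context, rules, examples); the skill name is a hypothetical example.

```shell
set -euo pipefail
cd "$(mktemp -d)"   # stand-in for your repository root
skill="react-performance-optimization"

mkdir -p .claude/skills
cat > ".claude/skills/$skill.md" <<'EOF'
## Trigger conditions
Use when profiling or fixing render performance in React components.

## Context
...

## Rules
...

## Examples
...
EOF
ls .claude/skills   # the new skill file, ready to fill in and test
```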
Compatibility
Does Superpowers work on Windows?
Yes. Superpowers works on Windows, macOS, and Linux.
Windows users should be aware of a few platform-specific considerations:
- Path separators: Skills and configuration files use forward slashes. Windows handles these correctly in most contexts, but some shell scripts may require adjustment.
- Line endings: Configure git to handle line endings consistently: `git config core.autocrlf input`.
- Shell commands in examples: Some examples use bash syntax. Windows users should use WSL2 (Windows Subsystem for Linux) or Git Bash for the best experience.
- Worktree paths: Use paths without spaces. `C:\projects\myapp--feature` works; `C:\My Projects\myapp feature` may cause issues.
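The line-ending setting above is shown here against a throwaway repository so the effect is visible. `core.longpaths` is an additional git-for-windows setting worth knowing about, since long worktree paths can exceed Windows' `MAX_PATH`; it is an assumption on my part, not something Superpowers requires, and it is a harmless no-op outside git-for-windows.

```shell
set -euo pipefail
cd "$(mktemp -d)" && git init -q .

git config core.autocrlf input   # store LF in the repo; convert only on checkout
git config core.longpaths true   # git-for-windows only; avoids MAX_PATH issues

git config core.autocrlf         # verify the setting took effect
```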
Dedicated Windows fixes are included in the platform when applicable and are noted in the changelog.
Can I use Superpowers without Claude Code specifically?
Yes. While Superpowers was originally designed for Claude Code, the core principles — TDD, systematic debugging, verification before completion, plan-before-code — are platform-agnostic.
The skill system is implemented differently on each platform:
- Claude Code: `.claude/skills/` directory with markdown skill files
- Cursor: Cursor rules files with adapted skill content
- Gemini CLI: Extension configuration
The Iron Laws and protocols described in this guide apply regardless of which platform you use. The specific invocation mechanism differs, but the behavior they enforce is the same.
Troubleshooting
A skill is not triggering correctly. What do I do?
- Check the skill file exists in the correct location for your platform.
- Verify the trigger description in the skill file. The trigger condition must match the language you use to invoke it.
- Be explicit. Instead of hoping the agent infers the right skill, name it directly: "Use the `superpowers:systematic-debugging` skill for this."
- Check for conflicts. If two skills have overlapping trigger conditions, the agent may choose the wrong one. Review your skill configuration.
- Re-read the skill file. Skills can become outdated when the codebase changes. The skill may be triggering correctly but producing advice that no longer applies.
Tests pass locally but fail in CI. What do I do?
This is a root cause investigation problem. Follow Phase 1 of the debugging protocol (Chapter 6):
- Read the CI error output carefully — do not summarize it
- Identify what is different between local and CI (environment variables, Node version, OS, file permissions, network access)
- Reproduce the CI environment locally if possible (Docker, matching Node version)
- Check for tests that depend on global state, file system state, or timing
The most common causes: environment variables not set in CI, tests written with implicit dependencies on local file paths, and tests with timing assumptions that fail under CI load.
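The timing-assumption cause can be checked directly: run the suite repeatedly and see whether it is stable. `flake_check` is a sketch; pass your real test command (for example `npm test`) as the arguments after the run count.

```shell
# Run a test command N times; a failure on any iteration indicates an
# implicit timing or state dependency.
flake_check() {
  local runs="$1"; shift
  local i
  for i in $(seq 1 "$runs"); do
    "$@" || { echo "failed on run $i of $runs" >&2; return 1; }
  done
  echo "stable across $runs runs"
}
```

Usage: `flake_check 5 npm test`. A suite that is stable locally but only fails under CI load points at shared state or timing rather than environment configuration.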