One-Line Summary: Define /review, /review-strict, and /review-explain as slash commands — markdown templates Claude Code expands into prompts that dispatch the three sub-agents in parallel and aggregate their findings.

Prerequisites: Step 5 (MCP server)


How Slash Commands Work in Claude Code

A slash command is a markdown file in `.claude/commands/` (project), `~/.claude/commands/` (user), or a plugin's `commands/` directory. Frontmatter declares the command's metadata; the body is the prompt template, with `$ARGUMENTS` substituted for whatever the user typed after the command.

When the user types /review, Claude Code:

  1. Looks up `review.md` (project `.claude/commands/` first, then user `~/.claude/commands/`, then plugin `commands/`).
  2. Substitutes any arguments into `$ARGUMENTS`.
  3. Sends the resulting prompt to the model as if the user typed it.
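The lookup-and-substitute cycle can be emulated in a few lines of shell. This is a demo only (Claude Code does all of this internally); the sample command file and its contents are hypothetical:

```shell
# Emulate the expansion: find the command file, strip the frontmatter,
# substitute $ARGUMENTS. (Demo only -- Claude Code does this internally.)
mkdir -p demo/.claude/commands
cat > demo/.claude/commands/review.md <<'EOF'
---
description: demo
---
Review scope: $ARGUMENTS
EOF
cd demo
cmd="review"; args="HEAD~3..HEAD"
for dir in .claude/commands "$HOME/.claude/commands" commands; do
  if [ -f "$dir/$cmd.md" ]; then file="$dir/$cmd.md"; break; fi
done
# Everything after the closing frontmatter fence is the prompt template.
awk 'c==2{print} /^---$/{c++}' "$file" | sed "s|\$ARGUMENTS|$args|g"
# → Review scope: HEAD~3..HEAD
```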

/review

Create commands/review.md:

---
name: review
description: Run a full code review on the current working tree (style + correctness + security)
argument-hint: "[optional commit range]"
---
 
Run a comprehensive code review on the current working tree changes.
 
If the user provided an argument, treat it as a commit range or path; otherwise default to the diff between the working tree and HEAD.
 
Steps:
 
1. **Identify what to review**: get the current diff. If the user provided `$ARGUMENTS`, use that as the scope.
2. **Dispatch the three reviewers in parallel** using the Task tool:
   - `Task(subagent_type="style-reviewer", description="Style review of the current diff")`
   - `Task(subagent_type="correctness-reviewer", description="Correctness review of the current diff")`
   - `Task(subagent_type="security-reviewer", description="Security review of the current diff")`
3. **Wait for all three to complete**. Each will return a JSON object with findings.
4. **Acknowledge that the review is complete**. The Stop hook will produce the aggregated report — you do not need to manually format it.
 
Do NOT comment on the findings yourself; the sub-agents are the authority. Your role is dispatcher, not reviewer.

The `argument-hint` field is what shows in the autocomplete preview. `$ARGUMENTS` is the placeholder for whatever the user typed after /review.
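The scoping rule in step 1 can be sketched as a shell function. This is a hypothetical helper, not part of the plugin (the agent does this via its git tools), but it makes the fallback order concrete:

```shell
# Hypothetical sketch of step 1's scoping rule: no argument -> working tree
# vs HEAD; an argument that resolves as a revision -> commit range;
# anything else -> path filter.
review_scope() {
  scope="$1"
  if [ -z "$scope" ]; then
    git diff HEAD                     # default: working tree vs HEAD
  elif git rev-parse --quiet --verify "${scope%%..*}" >/dev/null 2>&1; then
    git diff "$scope"                 # argument is a commit or range
  else
    git diff HEAD -- "$scope"         # argument is a path
  fi
}
```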

/review-strict

Create commands/review-strict.md:

---
name: review-strict
description: Like /review, but flag low-severity findings as actionable too
argument-hint: "[optional commit range]"
---
 
Run a comprehensive code review on the current working tree changes, with **strict** severity thresholds.
 
Dispatch the three reviewers as in /review, but in the dispatch description tell each one:
 
> "Apply STRICT thresholds. Treat low-severity findings as actionable. Surface concerns you would normally suppress."
 
Steps:
 
1. Identify the diff (use `$ARGUMENTS` if provided).
2. Dispatch the three sub-agents in parallel, with the strict instruction.
3. Wait for completion. The Stop hook produces the aggregated report.
 
Do NOT comment on findings; dispatch only.

The "strict" mode is just a different dispatch instruction. The sub-agents themselves don't change; their behavior shifts because of the explicit instruction in their input description.

/review-explain

Create commands/review-explain.md:

---
name: review-explain
description: Explain a specific finding in detail (give it the file:line and category)
argument-hint: "<file>:<line> <category>"
---
 
The user wants more detail on a specific finding from a recent review.
 
Argument format: `<file>:<line> <category>`. Example: `/review-explain src/auth.ts:42 injection`.
 
Steps:
 
1. Parse `$ARGUMENTS` to extract the file, line, and category.
2. Read the file around the specified line (10 lines of context before and after).
3. Based on the category, dispatch the relevant single sub-agent with a focused description:
   - `naming` / `formatting` / `idiomaticity` / `readability` → `style-reviewer`
   - `bug` / `edge-case` / `contract` / `race` → `correctness-reviewer`
   - `injection` / `authz` / `secret` / `deserialization` / `ssrf` / `cve` → `security-reviewer`
4. The sub-agent's description should be: "Explain finding in {file}:{line} of category '{category}' in detail. Cover: what's wrong, why it matters, concrete fix."
5. Wait for the sub-agent's response. **Pass it through to the user** — this command surfaces the explanation.
 
This is the only review command where you DO show output yourself, because the user explicitly asked for an explanation.

This command is interactively useful: the user sees a finding in the report, wants more detail on one specific item, and uses /review-explain to get a focused explanation without re-running the whole review.
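The parse-and-route logic in steps 1 and 3 can be sketched as a shell function. It is a hypothetical helper (the real routing happens in the model's reasoning, not in code), but it pins down the argument format and the category table:

```shell
# Hypothetical sketch of steps 1 and 3: split "<file>:<line> <category>"
# and route the category to the sub-agent that owns it.
explain_route() {
  loc="${1%% *}"; category="${1#* }"
  file="${loc%:*}"; line="${loc##*:}"
  case "$category" in
    naming|formatting|idiomaticity|readability)       agent="style-reviewer" ;;
    bug|edge-case|contract|race)                      agent="correctness-reviewer" ;;
    injection|authz|secret|deserialization|ssrf|cve)  agent="security-reviewer" ;;
    *) echo "unknown category: $category" >&2; return 1 ;;
  esac
  echo "$agent should explain $file:$line ($category)"
}

explain_route "src/auth.ts:42 injection"
# → security-reviewer should explain src/auth.ts:42 (injection)
```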

Test the Slash Commands

Restart Claude Code in the test project. Type /:

You should see the three commands in the autocomplete: review, review-strict, review-explain.

Run a real test:

cd ~/some-project-with-changes
claude
> /review

The three sub-agents should fire in parallel. After they finish, the Stop hook produces the aggregated report.

If something doesn't work:

  • Check .claude/codereview-scratchpad.json — is it populated?
  • Check sub-agent outputs in the Claude Code transcript — are they returning valid JSON?
  • Check hook stderr — are the hooks firing without errors?
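The first check above can be run directly from the shell. The scratchpad path is the one used in earlier steps; `python3 -m json.tool` stands in for `jq` if you don't have it installed:

```shell
# Is the scratchpad present and valid JSON?
if python3 -m json.tool .claude/codereview-scratchpad.json >/dev/null 2>&1; then
  echo "scratchpad: valid JSON"
else
  echo "scratchpad: missing or invalid"
fi
```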

Commit

cd ~/dev/harness-codereview
git add commands/
git commit -m "Add /review, /review-strict, /review-explain slash commands"

What This Step Did

Exercised:

  • slash-commands.md — markdown templates as user-triggered shortcuts.
  • supervisor-pattern-deep-dive.md — supervisor (main agent) dispatching three workers.
  • role-based-orchestration.md — sub-agents are role-specialized.

You now have a working code-review workflow. The remaining steps are about making it production-shaped: a background worker (Step 7), strong permission scoping (Step 8), and packaging (Step 9).


Next: Step 7 - Add a Background Audit Worker →