implement-phase

A skill from mhylle/claude-skills-collection (18 items).

What it does

Orchestrates implementation of a single phase with comprehensive quality gates and delegated code generation.

Installation

```
./install.sh
./init-workflow.sh ~/projects/myapp          # Standard level (recommended)
./init-workflow.sh ~/projects/myapp minimal  # Lightweight reminder
./init-workflow.sh ~/projects/myapp strict   # Full enforcement
npm run dev
```
📖 Extracted from docs: mhylle/claude-skills-collection

Added Feb 4, 2026

Skill Details

SKILL.md

Execute a single phase from an implementation plan with all quality gates. This skill is the unit of work for implement-plan, handling implementation, verification, code review, ADR compliance, and plan synchronization for ONE phase. Triggers when implement-plan delegates a phase, or manually with "/implement-phase" and a phase reference.

Overview

# Implement Phase

Execute a single phase from an implementation plan with comprehensive quality gates. This skill is designed to be called by implement-plan but can also be invoked directly.

---

CRITICAL: Orchestrator Pattern (MANDATORY)

> THIS SESSION IS AN ORCHESTRATOR. YOU MUST NEVER IMPLEMENT CODE DIRECTLY.

What This Means

| DO (Orchestrator) | DO NOT (Direct Implementation) |
|-------------------|--------------------------------|
| Spawn subagents to write code | Write code yourself |
| Spawn subagents to create files | Use Write/Edit tools directly |
| Spawn subagents to run tests | Run tests yourself |
| Spawn subagents to fix issues | Fix code yourself |
| Read files to understand context | Read files to copy/paste code |
| Track progress with Task tools | Implement while tracking |
| Coordinate and delegate | Do the work yourself |

Enforcement

```
⛔ VIOLATION: Using Write/Edit/NotebookEdit tools directly
⛔ VIOLATION: Creating files without spawning a subagent
⛔ VIOLATION: Fixing code without spawning a subagent
⛔ VIOLATION: Running implementation commands directly

✅ CORRECT: Task(subagent): "Create the AuthService at src/auth/..."
✅ CORRECT: Task(subagent): "Fix the lint errors in src/auth/..."
✅ CORRECT: Task(subagent): "Run npm test and report results..."
```

Why Orchestration?

  1. Context preservation - Main session retains full plan context
  2. Parallelization - Independent tasks run concurrently
  3. Clean separation - Orchestration logic separate from implementation
  4. Better error handling - Failures don't pollute main context

Subagent Spawning Pattern

```
Task (run_in_background: true): "Create [file] implementing [feature].
Context: Phase [N] - [Name]
Requirements:
  • [Requirement 1]
  • [Requirement 2]

RESPONSE FORMAT: Be concise. Return only:
  • STATUS: PASS/FAIL
  • FILES: created/modified files
  • ERRORS: any issues (omit if none)

Write verbose output to logs/[task].log"
```
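
The spawning pattern above is ultimately a string template. A minimal Python sketch of a prompt builder (the helper name and fields are illustrative, not part of the skill itself):

```python
def build_subagent_prompt(file, feature, phase, requirements, log_name):
    """Assemble a subagent prompt following the spawning pattern above."""
    reqs = "\n".join(f"  • {r}" for r in requirements)
    return (
        f"Create {file} implementing {feature}.\n"
        f"Context: Phase {phase}\n"
        f"Requirements:\n{reqs}\n"
        "RESPONSE FORMAT: Be concise. Return only:\n"
        "  • STATUS: PASS/FAIL\n"
        "  • FILES: created/modified files\n"
        "  • ERRORS: any issues (omit if none)\n"
        f"Write verbose output to logs/{log_name}.log"
    )

# Example with a hypothetical auth task
prompt = build_subagent_prompt(
    "src/auth/auth.service.ts", "token refresh", "2 - Auth",
    ["Rotate refresh tokens", "Reject expired tokens"], "auth-task")
```

Centralizing the template guarantees every spawned subagent receives the mandatory RESPONSE FORMAT block.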

Subagent Communication Protocol (CRITICAL)

> Subagents MUST be concise. Context preservation is paramount.

Every subagent prompt MUST include the response format instruction. Verbose responses waste orchestrator context.

Required Response Format Block (include in EVERY subagent prompt):

```
RESPONSE FORMAT: Be concise. Return ONLY:
  • STATUS: PASS/FAIL
  • FILES: list of files created/modified
  • ERRORS: brief error description (omit if none)

DO NOT include:
  • Step-by-step explanations of what you did
  • Code snippets (they're in the files)
  • Suggestions for next steps
  • Restating the original task

For large outputs, WRITE TO DISK:
  • Test results → logs/test-[feature].log
  • Build output → logs/build-[phase].log
  • Error traces → logs/error-[task].log

Return only: "Full output: logs/[filename].log"
```

Good vs Bad Subagent Responses:

```
❌ BAD (wastes context):

"I have successfully created the SummaryAgentService. First, I analyzed
the requirements and determined that we need to implement three methods:
summarize(), retry(), and handleError(). I created the file at
src/agents/summary-agent/summary-agent.service.ts with the following
implementation: [300 lines of code]. The service uses dependency
injection to receive the OllamaService. I also updated the module file
to register the service. You should now be able to run the tests..."

✅ GOOD (preserves context):

"STATUS: PASS
FILES: src/agents/summary-agent/summary-agent.service.ts (created),
src/agents/summary-agent/summary-agent.module.ts (modified)
ERRORS: None"
```
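
A side benefit of the concise format is that the orchestrator can parse it mechanically. A sketch of such a parser (assumes single-line FILES entries; real responses may wrap across lines):

```python
def parse_subagent_response(text):
    """Parse a STATUS/FILES/ERRORS response into a dict.
    A missing ERRORS line means no errors, per the protocol."""
    result = {"STATUS": None, "FILES": [], "ERRORS": "None"}
    for line in text.splitlines():
        line = line.strip()
        for key in ("STATUS", "FILES", "ERRORS"):
            if line.startswith(key + ":"):
                value = line.split(":", 1)[1].strip()
                # FILES is a comma-separated list; the rest are scalars
                result[key] = ([f.strip() for f in value.split(",")]
                               if key == "FILES" else value)
    return result
```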

Disk-Based Communication for Large Data:

| Data Type | Write To | Return |
|-----------|----------|--------|
| Test output (>20 lines) | logs/test-[name].log | "Tests: 47 passed. Full: logs/test-auth.log" |
| Build errors | logs/build-[phase].log | "Build FAIL. Details: logs/build-phase2.log" |
| Lint results | logs/lint-[phase].log | "Lint: 3 errors. See logs/lint-phase2.log" |
| Stack traces | logs/error-[task].log | "Error in X. Trace: logs/error-task.log" |
| Generated code review | logs/review-[phase].md | "Review complete. Report: logs/review-phase2.md" |
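
The spill-to-disk rule can be captured in one small helper. A hedged sketch (the function name and the 20-line threshold follow the table above; nothing here is a real API of the skill):

```python
import os

def report_or_spill(output, log_path, limit=20):
    """Return short output verbatim; write anything longer to disk and
    return only a one-line pointer, per the disk-based protocol above."""
    if len(output.splitlines()) <= limit:
        return output
    os.makedirs(os.path.dirname(log_path), exist_ok=True)
    with open(log_path, "w") as f:
        f.write(output)
    return f"Full output: {log_path}"
```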

---

Architecture

```
implement-plan (orchestrates full plan)
│
└── implement-phase (this skill - one phase at a time)
    │
    ├── 1. Implementation (subagents)
    ├── 2. Exit Condition Verification (build, lint, unit tests)
    ├── 3. Automated Integration Testing (Claude tests via API/Playwright)
    ├── 4. Code Review (code-review skill)
    ├── 5. ADR Compliance Check
    ├── 6. Plan Synchronization
    ├── 7. Prompt Archival (if prompt provided)
    └── 8. Phase Completion Report
```

Design Principles

Single Responsibility

This skill does ONE thing: execute a single phase completely and correctly.

Extensibility

The phase execution pipeline is designed as a sequence of steps. New steps can be added without modifying the core logic. See [Phase Steps](#phase-steps-extensible).

Quality Gates

Each step is a gate. If any gate fails, the phase cannot complete.

Composability

This skill orchestrates other skills (code-review, adr) and can be extended to include more.

Mandatory Exit Conditions

> These conditions are NON-NEGOTIABLE. A phase cannot complete until ALL are satisfied.

| Condition | Requirement | Rationale |
|-----------|-------------|-----------|
| verification-loop PASS | All 6 checks pass (Build, Type, Lint, Test, Security, Diff) | Code must compile, type-check, pass linting, tests, and security checks |
| Integration tests PASS | All API/UI tests pass | Feature must work end-to-end |
| Code review PASS | Clean PASS status (not PASS_WITH_NOTES) | No outstanding issues |
| All recommendations fixed | Every recommendation addressed | Recommendations are blocking, not optional |
| ADR compliance PASS | Follows existing ADRs, new decisions documented | Architectural consistency |
| Plan verified | All work items confirmed complete | Specification fulfilled |

Why Recommendations Are Mandatory

```
❌ WRONG: "It's just a recommendation, we can fix it later"
❌ WRONG: "PASS_WITH_NOTES is good enough"
❌ WRONG: "We'll address it in the next phase"

✅ CORRECT: "Recommendations are blocking issues"
✅ CORRECT: "Only clean PASS allows phase completion"
✅ CORRECT: "Fix it now or the phase cannot complete"
```

The Clean Baseline Principle requires:

  • Each phase ends with zero outstanding issues
  • The next phase inherits a clean codebase
  • Technical debt is not accumulated across phases
  • Recommendations, if worth noting, are worth fixing

Input Context

When invoked, this skill expects:

```
Plan Path: [path to plan file]
Phase: [number or name]
Task ID: [task_id from implement-plan's TaskList]
Prompt Path: [optional - path to pre-generated prompt from prompt-generator]
Changed Files: [optional - auto-detected if not provided]
Skip Steps: [optional - list of steps to skip, e.g., for testing]
TDD Mode: [enabled/disabled - from plan metadata, CLI flag, or global settings]
Coverage Threshold: [percentage - default 80%, applies when TDD mode enabled]
```

Prompt Integration

If a Prompt Path is provided (from prompt-generator skill):

  1. Read the prompt file - Contains detailed orchestration instructions
  2. Use prompt as primary guidance - Follows established patterns and conventions
  3. Plan file as reference - For exit conditions and verification steps
  4. Archive on completion - Move prompt to completed/ subfolder

```

# Prompt provides:

  • Detailed orchestration workflow
  • Subagent delegation patterns
  • Specific task breakdowns
  • Error handling guidance

# Plan provides:

  • Exit conditions (source of truth)
  • Success criteria
  • Dependencies

```

Phase Execution Pipeline

⚡ AUTO-CONTINUE RULES (READ THIS FIRST)

```
┌────────────────────────────────────────────────────────────────┐
│ AUTOMATIC CONTINUATION ENGINE
├────────────────────────────────────────────────────────────────┤
│
│ RULE 1: After completing ANY step, IMMEDIATELY start the next step.
│ RULE 2: Do NOT output "waiting for input" or ask to continue.
│ RULE 3: Do NOT summarize and stop. Summarize and CONTINUE.
│ RULE 4: The ONLY valid stop point is after Step 8 completion.
│
│ EXECUTION ALGORITHM:
│
│   current_step = 1
│   while current_step <= 8:
│       result = execute_step(current_step)
│       if result == PASS:
│           current_step += 1            # AUTO-CONTINUE
│       elif result == FAIL:
│           fix_and_retry(current_step)  # Stay on step, fix, retry
│       elif result == BLOCKED:
│           return BLOCKED               # Only valid early exit
│   return COMPLETE                      # Only stop here
│
└────────────────────────────────────────────────────────────────┘
```
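
The algorithm in the box runs as real code. A minimal Python sketch, with the step executor and fixer as injected stand-ins (the actual steps are skill invocations, not Python functions):

```python
PASS, FAIL, BLOCKED = "PASS", "FAIL", "BLOCKED"

def run_pipeline(execute_step, fix_and_retry, last_step=8):
    """Drive Steps 1..8 per the auto-continue rules: PASS advances,
    FAIL stays on the same step after a fix, BLOCKED is the only
    valid early exit."""
    step = 1
    while step <= last_step:
        result = execute_step(step)
        if result == PASS:
            step += 1            # AUTO-CONTINUE
        elif result == FAIL:
            fix_and_retry(step)  # stay on this step; the loop re-runs it
        else:
            return BLOCKED       # only valid early exit
    return "COMPLETE"
```

Note that there is no branch that pauses for user input: the only exits are COMPLETE (after Step 8) and BLOCKED.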

WHAT TO DO AFTER EACH STEP

| After Step | Status | YOUR IMMEDIATE ACTION |
|------------|--------|----------------------|
| Step 1 | PASS | Execute Step 2 NOW (invoke verification-loop) |
| Step 2 | PASS | Execute Step 3 NOW (run integration tests) |
| Step 3 | PASS | Execute Step 4 NOW (invoke code-review skill) |
| Step 4 | PASS | Execute Step 5 NOW (check ADR compliance) |
| Step 5 | PASS | Execute Step 6 NOW (verify plan sync) |
| Step 6 | PASS | Execute Step 7 NOW (archive prompt) |
| Step 7 | PASS | Execute Step 8 NOW (generate completion report) |
| Step 8 | DONE | STOP - Present report, await user |

On FAIL at any step: Fix the issue, re-run the SAME step, get PASS, then continue.

NEVER DO THESE

```
❌ "Step 2 complete. Let me know when you want to continue."
❌ "Verification passed. Would you like me to proceed to Step 3?"
❌ "Code review done. Waiting for your input."
❌ "I've completed the exit conditions. What's next?"
❌ Outputting results without immediately starting the next step
❌ Asking permission to continue between steps 1-7
```

ALWAYS DO THESE

```
✅ "Step 2 PASS. Executing Step 3: Integration Testing..."
✅ "Code review PASS. Now checking ADR compliance..."
✅ "Exit conditions verified. Running integration tests now..."
✅ Immediately invoke the next step's tools/skills after reporting status
✅ Chain steps together without pause
```

---

Continuous Execution Details

> The entire pipeline (Steps 1-8) MUST execute as one continuous flow.

After EACH step completes (including skill invocations), IMMEDIATELY proceed to the next step WITHOUT waiting for user input.

Pause Points (ONLY these):

| Scenario | Action |
|----------|--------|
| Step returns BLOCKED status | Stop and present blocker to user |
| Step 8 (Completion Report) done | Await user confirmation before next phase |
| Maximum retries exhausted | Present failure and options to user |

DO NOT PAUSE after:

  • Implementation complete → Continue to Step 2
  • Exit conditions pass → Continue to Step 3
  • Integration tests pass → Continue to Step 4
  • Code review returns PASS → Continue to Step 5
  • ADR compliance returns PASS → Continue to Step 6
  • Plan sync complete → Continue to Step 7
  • Prompt archived → Continue to Step 8
  • Any successful step completion → Continue to next step
  • Fix loop completes with PASS → Continue to next step

Fix Loops (internal, no user pause):

  • Verification fails → Fix code, re-run verification, expect PASS
  • Integration tests fail → Fix code, re-run tests, expect PASS
  • Code review returns PASS_WITH_NOTES → Fix notes, re-run Step 4, expect PASS
  • Code review returns NEEDS_CHANGES → Fix issues, re-run Step 4, expect PASS
  • Any step has fixable issues → Spawn fix subagents, re-run step
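
Fix loops are internal, but they still need a bound so that "maximum retries exhausted" can become a valid pause point rather than an infinite loop. A sketch of such a bounded loop (the function names and the retry count of 3 are illustrative assumptions):

```python
def fix_loop(run_check, spawn_fix, max_retries=3):
    """Re-run a failing check after each fix attempt; escalate to
    BLOCKED only once retries are exhausted (a valid pause point)."""
    for attempt in range(max_retries + 1):
        if run_check():
            return "PASS"
        if attempt < max_retries:
            spawn_fix()  # spawn a fix subagent, then re-run the check
    return "BLOCKED"
```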

Continuous Flow Example:

```
Step 1: Implementation → PASS
  ↓ (immediately)
Step 2: Exit Conditions → PASS
  ↓ (immediately)
Step 3: Automated Integration Testing → PASS
  ↓ (immediately)
Step 4: Code Review Skill → PASS_WITH_NOTES
  ↓ (fix loop - spawn subagents to fix notes)
  → Re-run Code Review → PASS
  ↓ (now continue)
Step 5: ADR Compliance → PASS
  ↓ (immediately)
Step 6: Plan Sync → PASS
  ↓ (immediately)
Step 7: Prompt Archival → PASS
  ↓ (immediately)
Step 8: Completion Report → Present to user
  ↓ (NOW wait for user confirmation)
```

Goal: Clean PASS on all steps. PASS_WITH_NOTES means there's work to do.

---

Blocking Elements (ONLY Valid Reasons to Stop)

> A blocking element is something YOU cannot fix autonomously.

Do NOT stop for fixable issues. Only stop when you genuinely cannot proceed without user intervention.

Valid Blocking Elements:

| Blocker | Example | Action |
|---------|---------|--------|
| Permission denied | Subagent cannot write to protected directory | Ask user to adjust permissions or run in correct mode |
| Infrastructure unavailable | Cannot reach required LLM inference server | Report the connectivity issue, ask user to verify infrastructure |
| Missing credentials | API key not configured, auth token expired | Ask user to provide/refresh credentials |
| External service down | Third-party API returning 503 | Report the outage, ask if user wants to wait or skip |
| Ambiguous requirements | Plan says "integrate with payment system" but doesn't specify which | Ask user to clarify before proceeding |
| Destructive operation | Phase requires dropping production database | Confirm with user before executing |

NOT Blocking (fix these yourself):

| Issue | Action |
|-------|--------|
| Test fails | Fix the code, re-run test |
| Lint errors | Fix the code, re-run lint |
| Build errors | Fix the code, re-build |
| Type errors | Fix the types, re-check |
| Code review feedback | Fix the issues, re-run review |
| API returns error | Debug and fix the implementation |
| UI element not found | Fix selector or implementation |

Blocker Protocol:

When you hit a genuine blocker:

```
⛔ BLOCKED: [Brief description]

Phase: [N] - [Name]
Step: [Current step]
Blocker Type: [Permission | Infrastructure | Credentials | External | Ambiguous | Destructive]

Details:
[Specific details about what failed and why]

What I Need:
[Specific action required from user]

Options:
A) [Resolve the blocker and continue]
B) [Skip this verification and proceed with risk]
C) [Abort phase]
```

Resume After Blocker:

Once the user resolves the blocker, resume from the blocked step (not from Step 1).

---

Step Completion Checklist (MANDATORY)

> Before reporting phase complete, ALL steps must be executed.

Use this checklist internally. If any step is missing, execute it before completing:

```
PHASE COMPLETION VERIFICATION:
  • [ ] Step 1: Implementation - Subagents spawned, work completed
  • [ ] Step 2: Exit Conditions - Build, runtime, unit tests all verified
  • [ ] Step 3: Integration Testing - YOU tested via API calls or Playwright
  • [ ] Step 4: Code Review - Achieved PASS (not PASS_WITH_NOTES)
  • [ ] Step 5: ADR Compliance - Checked against relevant ADRs
  • [ ] Step 6: Plan Sync - Work items verified, phase status updated
  • [ ] Step 7: Prompt Archival - Archived or explicitly skipped (no prompt)
  • [ ] Step 8: Completion Report - Generated and presented

⛔ VIOLATION: Stopping before Step 8
⛔ VIOLATION: Waiting for user input between Steps 1-7
⛔ VIOLATION: Reporting "phase complete" with unchecked steps
⛔ VIOLATION: Proceeding with PASS_WITH_NOTES without fixing notes
⛔ VIOLATION: Asking user to "manually test" instead of testing yourself
```

Self-Check Protocol:

After invoking a skill (like code-review), ask yourself:

  1. Did the skill complete? → Check the result status
  2. Did it return PASS? → CONTINUE to next step immediately
  3. Did it return PASS_WITH_NOTES? → Spawn fix subagents, re-run step, expect PASS
  4. Did it return NEEDS_CHANGES? → Spawn fix subagents, re-run step, expect PASS
  5. Am I at Step 8? → If no, execute next step immediately
  6. Did I test the feature myself? → If no, go back to Step 3

The goal is always a clean PASS. PASS_WITH_NOTES is not "good enough" - fix the notes.

---

Progress Tracker (MANDATORY OUTPUT)

> After EVERY step, you MUST output a Progress Tracker before doing ANYTHING else.

This is not optional. The Progress Tracker forces explicit acknowledgment of state and next action.

Format (output after each step completes):

```
┌─────────────────────────────────────────────┐
│ PROGRESS: Step [N] → Step [N+1]
├─────────────────────────────────────────────┤
│ ✅ Step 1: Implementation     [DONE/SKIP]
│ ✅ Step 2: Exit Conditions    [DONE/SKIP]
│ ✅ Step 3: Integration Test   [DONE/SKIP]
│ ✅ Step 4: Code Review        [DONE/SKIP]
│ ⏳ Step 5: ADR Compliance     [CURRENT]
│ ⬚ Step 6: Plan Sync           [PENDING]
│ ⬚ Step 7: Prompt Archival     [PENDING]
│ ⬚ Step 8: Completion Report   [PENDING]
├─────────────────────────────────────────────┤
│ NEXT ACTION: [Describe what you do next]
└─────────────────────────────────────────────┘
```
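
Because the tracker is purely mechanical, it can be generated from the current step number. A hedged Python sketch of such a renderer (step names are taken from the pipeline above; the function itself is illustrative):

```python
STEPS = ["Implementation", "Exit Conditions", "Integration Test",
         "Code Review", "ADR Compliance", "Plan Sync",
         "Prompt Archival", "Completion Report"]

def render_tracker(current, next_action):
    """Render the mandatory Progress Tracker for the step about to run."""
    lines = [f"PROGRESS: Step {current - 1} → Step {current}"]
    for i, name in enumerate(STEPS, start=1):
        mark = ("DONE" if i < current else
                "CURRENT" if i == current else "PENDING")
        lines.append(f"  Step {i}: {name:<18} [{mark}]")
    lines.append(f"NEXT ACTION: {next_action}")
    return "\n".join(lines)
```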

Rules:

  1. Output this tracker IMMEDIATELY after each step completes
  2. Mark the CURRENT step you are about to execute
  3. The NEXT ACTION must describe executing the next step (not waiting for user)
  4. If NEXT ACTION says anything other than executing a step, you are VIOLATING the protocol

Example - After Step 4 Code Review Returns PASS:

```
┌─────────────────────────────────────────────┐
│ PROGRESS: Step 4 → Step 5
├─────────────────────────────────────────────┤
│ ✅ Step 1: Implementation     [DONE]
│ ✅ Step 2: Exit Conditions    [DONE]
│ ✅ Step 3: Integration Test   [DONE]
│ ✅ Step 4: Code Review        [DONE]
│ ⏳ Step 5: ADR Compliance     [CURRENT]
│ ⬚ Step 6: Plan Sync           [PENDING]
│ ⬚ Step 7: Prompt Archival     [PENDING]
│ ⬚ Step 8: Completion Report   [PENDING]
├─────────────────────────────────────────────┤
│ NEXT ACTION: Check ADR compliance now
└─────────────────────────────────────────────┘
```

⛔ VIOLATION Examples:

  • Not outputting the Progress Tracker after a step
  • NEXT ACTION: "Waiting for user confirmation" (before Step 8)
  • NEXT ACTION: "Let me know if you want me to continue"
  • NEXT ACTION: "Please manually verify the feature works"
  • Skipping to Step 8 without completing Steps 5-7

---

Execution Contract (READ BEFORE STARTING)

> ⚠️ THIS IS A BINDING CONTRACT. VIOLATION = FAILURE.

Before executing ANY step, internalize these rules:

```
┌──────────────────────────────────────────────────────────────────┐
│ EXECUTION CONTRACT
├──────────────────────────────────────────────────────────────────┤
│
│ I WILL execute Steps 1-8 as ONE continuous operation.
│ I WILL NOT stop between steps (except Step 8 or BLOCKED).
│ I WILL invoke the next step's tools IMMEDIATELY after each step.
│ I WILL output a Progress Tracker after EVERY step.
│ I WILL test the feature MYSELF in Step 3 (not ask the user).
│ I WILL NOT stop after Step 4 (code review) - there are 4 more steps.
│ I WILL NOT ask the user if they want me to continue.
│ I WILL NOT ask the user to manually verify anything.
│ I WILL only stop at Step 8 after presenting the Completion Report.
│ I WILL only stop early for genuine BLOCKING elements I cannot fix.
│
│ AFTER EACH STEP:
│ ├── Output status (PASS/FAIL)
│ ├── Output Progress Tracker
│ └── IMMEDIATELY execute next step (no pause, no waiting)
│
└──────────────────────────────────────────────────────────────────┘
```

If you find yourself about to stop before Step 8, RE-READ this contract.

---

Step 1: Implementation

```
╔════════════════════════════════════════════════════════════════╗
║ ⏩ ENTERING STEP 1 - This is part of a continuous 8-step pipeline.
║ When this step completes with PASS, IMMEDIATELY execute Step 2.
║ Do NOT stop. Do NOT wait for user input.
╚════════════════════════════════════════════════════════════════╝
```

Responsibility: Execute all tasks in the phase using subagent delegation.

> REMINDER: You are an orchestrator. Spawn subagents for ALL implementation work.

Process:

  1. Read phase requirements and tasks from plan (orchestrator reads)
  2. Read coding standards from docs/standards/CODING_STANDARDS.md (if exists)
  3. Identify independent tasks for parallelization
  4. SPAWN test subagents FIRST (verification-first)
  5. SPAWN implementation subagents (include coding standards reference)
  6. Monitor subagent progress and handle blockers
  7. Collect results and changed files list from subagent responses

> CODING STANDARDS: All subagent prompts MUST reference coding standards. Include size limits (services <500 lines, controllers <30 lines/method), interface requirements (DTOs typed), and forbidden patterns (no console.log, no empty catch blocks). See docs/standards/CODING_STANDARDS.md.

Subagent Spawning Examples:

```
# Writing tests (FIRST - verification-first pattern)
Task (run_in_background: true): "Write unit tests for SummaryAgentService.
Context: Phase 5b-ii - SummaryAgent Service
Location: agentic-core/src/agents/implementations/summary-agent/
Test scenarios:
  • Successful summarization
  • Retry with feedback
  • Error handling
RESPONSE FORMAT: STATUS, FILES created, test count. Write output to logs/."

# Implementation (AFTER tests exist)
Task (run_in_background: true): "Implement SummaryAgentService.
Context: Phase 5b-ii - SummaryAgent Service
Requirements from plan: [list requirements]
Must pass the tests at: [test file path]
CODING STANDARDS (MANDATORY):
  • Services: <500 lines, single responsibility
  • Interfaces: Required for DTOs and response types
  • Errors: Domain exceptions, no empty catch blocks
  • Logging: Use project logger, no console.log
Ref: docs/standards/CODING_STANDARDS.md
RESPONSE FORMAT: STATUS, FILES created/modified, ERRORS if any."

# Verification
Task (run_in_background: true): "Run build and test verification.
Commands: npm run build && npm run lint && npm test
Report: PASS/FAIL per command, error details if any.
Write full output to logs/verify-phase-5b-ii.log"
```

What You Do vs What Subagents Do:

| Orchestrator (You) | Subagents |
|--------------------|-----------|
| Read plan/prompt | Write code |
| Identify tasks | Create files |
| Spawn subagents | Run tests |
| Track progress | Fix issues |
| Handle blockers | Build/lint |
| Collect results | Report back |

Output:

```
IMPLEMENTATION_STATUS: PASS | FAIL
FILES_CREATED: [list]
FILES_MODIFIED: [list]
TEST_RESULTS: [summary]
ERRORS: [if any]
SUBAGENTS_SPAWNED: [count]
─────────────────────────────────────────────────
⚡ NEXT_STEP: EXECUTE STEP 2 NOW (verification-loop)
```

> CRITICAL: The output MUST include NEXT_STEP: EXECUTE STEP 2 NOW. This is not optional. When you see this in the output, you MUST immediately invoke the verification-loop skill without waiting for user input.

Gate: Implementation must PASS to proceed.

```
╔════════════════════════════════════════════════════════════════╗
║ ✅ STEP 1 COMPLETE → IMMEDIATELY EXECUTE STEP 2 NOW
║ Do NOT output results and wait. Do NOT ask "shall I continue?"
║ Your next action MUST be: invoke the verification-loop skill
╚════════════════════════════════════════════════════════════════╝
```

---

Step 2: Exit Condition Verification (verification-loop)

```
╔════════════════════════════════════════════════════════════════╗
║ ⏩ ENTERING STEP 2 - This is part of a continuous 8-step pipeline.
║ When this step completes with PASS, IMMEDIATELY execute Step 3.
║ Do NOT stop. Do NOT wait for user input.
╚════════════════════════════════════════════════════════════════╝
```

Responsibility: Verify all exit conditions using the comprehensive 6-check verification-loop.

> verification-loop is the DEFAULT exit condition verification. It provides comprehensive validation that goes beyond basic build/test checks.

Process:

  1. Read exit conditions from plan
  2. Invoke verification-loop skill with phase context
  3. verification-loop executes 6 checks:
     - Check 1: Build - Compilation, bundling, artifact generation
     - Check 2: Type - Type checking, interface compliance
     - Check 3: Lint - Code style, static analysis
     - Check 4: Test - Unit tests, integration tests, coverage
     - Check 5: Security - Dependency audit, secret scanning
     - Check 6: Diff - Review changes, detect unintended modifications
  4. Aggregate results and report

Invocation:

```
Skill(skill="verification-loop"): Verify Phase [N] implementation.

Context:
  • Plan: [plan file path]
  • Phase: [N] ([Phase Name])
  • Changed Files: [list of files modified in this phase]

Execute all 6 verification checks and return structured result.
```

Output:

```
VERIFICATION_LOOP_STATUS: PASS | FAIL
CHECKS_COMPLETED: 6/6
CHECK_RESULTS:
  BUILD: PASS | FAIL
  TYPE: PASS | FAIL
  LINT: PASS | FAIL
  TEST: PASS | FAIL
  SECURITY: PASS | FAIL
  DIFF: PASS | FAIL
FAILED_CHECKS: [list if any]
EVIDENCE: logs/verification-loop-phase-N.log
─────────────────────────────────────────────────
⚡ NEXT_STEP: EXECUTE STEP 3 NOW (Integration Testing)
```

> CRITICAL: The output MUST include NEXT_STEP: EXECUTE STEP 3 NOW. This is not optional. When you see this in the output, you MUST immediately execute Step 3 without waiting for user input.

Gate: ALL 6 verification checks must PASS to proceed.
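
The gate decision reduces to "every check is PASS". A sketch of that aggregation (function name is illustrative; the six check names come from the output block above):

```python
CHECKS = ("BUILD", "TYPE", "LINT", "TEST", "SECURITY", "DIFF")

def aggregate_verification(results):
    """Reduce the six verification-loop checks to the gate decision.
    `results` maps check name -> 'PASS'/'FAIL'; a missing or failing
    check fails the gate."""
    failed = [c for c in CHECKS if results.get(c) != "PASS"]
    return {
        "VERIFICATION_LOOP_STATUS": "PASS" if not failed else "FAIL",
        "FAILED_CHECKS": failed,
    }
```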

On Failure: Spawn fix subagents for failed checks, re-run verification-loop, repeat until all pass or escalate.

Disabling verification-loop (not recommended):

```yaml
# In plan metadata - only for special cases
phase_config:
  verification_loop: false  # Falls back to basic exit conditions
```

```
╔════════════════════════════════════════════════════════════════╗
║ ✅ STEP 2 COMPLETE → IMMEDIATELY EXECUTE STEP 3 NOW
║ Do NOT output results and wait. Do NOT ask "shall I continue?"
║ Your next action MUST be: run integration tests (Step 3)
╚════════════════════════════════════════════════════════════════╝
```

---

Step 3: Automated Integration Testing

```
╔════════════════════════════════════════════════════════════════╗
║ ⏩ ENTERING STEP 3 - This is part of a continuous 8-step pipeline.
║ When this step completes with PASS, IMMEDIATELY execute Step 4.
║ Do NOT stop. Do NOT wait for user input.
╚════════════════════════════════════════════════════════════════╝
```

Responsibility: Verify the implementation works end-to-end through automated testing performed by YOU, not the user.

> YOU are the tester. Do not ask the user to manually verify. Use tools to test the system yourself.

> For UI Testing: Use the browser-verification-agent - spawn ONE agent per test scenario for context preservation. The agent wraps Playwright MCP and returns structured evidence.

Process:

  1. Determine the testing approach based on implementation type:
     - Backend/API: Use curl, httpie, or spawn subagent to make API calls
     - Frontend/UI: Spawn browser-verification-agent for each test scenario
     - CLI tools: Execute commands and verify output
     - Libraries: Write and run integration test scripts
  2. Spawn testing subagents for each verification scenario (ONE test per agent for UI)
  3. Capture results and any failures
  4. On failure: spawn fix subagents, re-test

Testing by Implementation Type:

| Type | Testing Method | Tools/Agents |
|------|----------------|--------------|
| REST API | Make HTTP requests, verify responses | curl, httpie, fetch |
| GraphQL | Execute queries/mutations | curl with GraphQL payload |
| Web UI | Navigate, interact, assert | browser-verification-agent |
| Database | Query and verify data | psql, mysql, prisma |
| Background jobs | Trigger and verify completion | API calls + polling |
| File processing | Provide input, check output | Bash, Read tool |

Subagent Examples:

```
# API Testing (general-purpose subagent)
Task: "Test the new /api/users endpoint.
Make these API calls and report results:
  1. POST /api/users with valid payload - expect 201
  2. POST /api/users with invalid email - expect 400
  3. GET /api/users/:id - expect 200 with user data
  4. GET /api/users/nonexistent - expect 404
RESPONSE FORMAT: STATUS, test results summary, ERRORS if any."

# UI Testing (browser-verification-agent) - ONE test per agent spawn
Task(subagent_type="browser-verification-agent"): "Verify login with valid credentials.
base_url: http://localhost:3000
test_description: Navigate to /login, enter 'test@example.com' in email field,
enter 'password123' in password field, click Login button
expected_outcome: URL changes to /dashboard, welcome message visible
session_context: fresh"

Task(subagent_type="browser-verification-agent"): "Verify login with invalid credentials.
base_url: http://localhost:3000
test_description: Navigate to /login, enter 'test@example.com' in email field,
enter 'wrongpassword' in password field, click Login button
expected_outcome: Error message 'Invalid credentials' is displayed, URL stays on /login
session_context: fresh"
```

UI Testing Response Format (from browser-verification-agent):

```

STATUS: PASS | FAIL | FLAKY | BLOCKED

SCREENSHOT: logs/screenshots/2026-01-23-143022-login-test.png

OBSERVED: [what actually happened]

EXPECTED: [echo of expected_outcome]

ERRORS: [if any]

```

Aggregated Output (for Step 3 completion):

```

INTEGRATION_TEST_STATUS: PASS | FAIL

TESTS_RUN: [count]

TESTS_PASSED: [count]

TESTS_FAILED: [count]

FAILURE_DETAILS: [if any]

EVIDENCE: [log files, screenshots]

─────────────────────────────────────────────────

⚑ NEXT_STEP: EXECUTE STEP 4 NOW (Code Review)

```

> CRITICAL: The output MUST include NEXT_STEP: EXECUTE STEP 4 NOW. This is not optional. When you see this in the output, you MUST immediately invoke the code-review skill without waiting for user input.

Gate: Integration tests must PASS to proceed.
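The aggregation of per-subagent results into the Step 3 output can be sketched as a small fold. `TestResult` and `aggregate` are illustrative names, not part of the skill's contract:

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    status: str          # "PASS" | "FAIL" | "FLAKY" | "BLOCKED"
    evidence: str = ""   # log file or screenshot path, if any
    error: str = ""

def aggregate(results):
    """Fold per-subagent results into the aggregated Step 3 output block."""
    failed = [r for r in results if r.status != "PASS"]
    lines = [
        f"INTEGRATION_TEST_STATUS: {'PASS' if not failed else 'FAIL'}",
        f"TESTS_RUN: {len(results)}",
        f"TESTS_PASSED: {len(results) - len(failed)}",
        f"TESTS_FAILED: {len(failed)}",
    ]
    if failed:
        lines.append("FAILURE_DETAILS: " +
                     "; ".join(f"{r.name}: {r.error}" for r in failed))
    evidence = [r.evidence for r in results if r.evidence]
    if evidence:
        lines.append("EVIDENCE: " + ", ".join(evidence))
    lines.append("NEXT_STEP: EXECUTE STEP 4 NOW (Code Review)")
    return "\n".join(lines)
```

Any non-PASS status (FAIL, FLAKY, BLOCKED) counts against the gate, so a flaky UI test blocks progression until re-tested clean.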

On Failure:

  1. Analyze failure root cause
  2. Spawn fix subagents
  3. Re-run failed tests
  4. Repeat until pass or hit blocking element

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ βœ… STEP 3 COMPLETE β†’ IMMEDIATELY EXECUTE STEP 4 NOW β•‘

β•‘ Do NOT output results and wait. Do NOT ask "shall I continue?" β•‘

β•‘ Your next action MUST be: invoke the code-review skill β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

---

Step 4: Code Review

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ ⏩ ENTERING STEP 4 - This is part of a continuous 8-step pipeline. β•‘

β•‘ When this step completes with PASS, IMMEDIATELY execute Step 5. β•‘

β•‘ Do NOT stop. Do NOT wait for user input. β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

Responsibility: Validate implementation quality across all dimensions.

Process:

  1. Invoke code-review skill with phase context
  2. Provide: plan path, phase number, changed files
  3. Receive structured review result

Output:

```

CODE_REVIEW_STATUS: PASS | PASS_WITH_NOTES | NEEDS_CHANGES

BLOCKING_ISSUES: [count]

RECOMMENDATIONS: [list]

─────────────────────────────────────────────────

⚑ NEXT_STEP: EXECUTE STEP 5 NOW (ADR Compliance)

```

> CRITICAL: When CODE_REVIEW_STATUS is PASS, the output MUST include NEXT_STEP: EXECUTE STEP 5 NOW. This is not optional. When you see this in the output, you MUST immediately check ADR compliance without waiting for user input.

Gate: Code review must be PASS to proceed. PASS_WITH_NOTES is NOT acceptable.

> ⚠️ MANDATORY: All Recommendations Must Be Fixed

>

> This is a non-negotiable exit condition. PASS_WITH_NOTES means there are recommendations that MUST be addressed before the phase can complete.

>

> - Recommendations are NOT optional suggestions

> - Recommendations are NOT "nice to have"

> - Recommendations are blocking issues that must be resolved

> - The only acceptable code review status is PASS

On PASS_WITH_NOTES or NEEDS_CHANGES:

  1. Spawn fix subagents to address ALL issues (blocking issues AND recommendations)
  2. Re-run code review
  3. Repeat until clean PASS (max 3 retries)
  4. Escalate to user only if max retries exhausted
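The retry loop above can be sketched as follows; `run_review`, `fix_issues`, and `escalate` are placeholders for the actual code-review skill invocation, fix-subagent spawns, and user escalation:

```python
MAX_RETRIES = 3

def review_gate(run_review, fix_issues, escalate):
    """Drive code review to a clean PASS, spawning fixes between attempts.

    run_review() -> (status, issues); PASS_WITH_NOTES and NEEDS_CHANGES
    both block, so every returned issue is handed to fix_issues().
    """
    for attempt in range(1, MAX_RETRIES + 1):
        status, issues = run_review()
        if status == "PASS":
            return f"PASS after {attempt} attempt(s)"
        fix_issues(issues)  # blocking issues AND recommendations
    escalate(issues)        # max retries exhausted: only now involve the user
    return "ESCALATED"
```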

Why are recommendations mandatory?

  • Recommendations indicate pattern violations, missing tests, or technical debt
  • Leaving them unfixed accumulates debt that compounds across phases
  • The "Clean Baseline Principle" requires each phase to end clean
  • Future phases inherit our mess if we don't fix it now
  • Consistency: if it's worth noting, it's worth fixing

---

> ⚠️ CRITICAL TRANSITION POINT - DO NOT STOP HERE ⚠️

>

> After code-review skill returns, you MUST continue. This is the #1 failure point.

> - Code review returned PASS? β†’ Output Progress Tracker β†’ Execute Step 5 NOW

> - Code review returned PASS_WITH_NOTES? β†’ Fix issues β†’ Re-run β†’ Get PASS β†’ Execute Step 5

> - DO NOT report to user and wait. DO NOT ask if they want to continue.

> - The phase is NOT complete. You have 4 more steps to execute.

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ βœ… STEP 4 COMPLETE β†’ IMMEDIATELY EXECUTE STEP 5 NOW β•‘

β•‘ Do NOT output results and wait. Do NOT ask "shall I continue?" β•‘

β•‘ Your next action MUST be: check ADR compliance β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

---

Step 5: ADR Compliance Check

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ ⏩ ENTERING STEP 5 - This is part of a continuous 8-step pipeline. β•‘

β•‘ When this step completes with PASS, IMMEDIATELY execute Step 6. β•‘

β•‘ Do NOT stop. Do NOT wait for user input. β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

Responsibility: Ensure architectural decisions are followed and documented.

Process:

  1. Read docs/decisions/INDEX.md to identify relevant ADRs
  2. Check implementation against applicable ADRs
  3. Identify any new architectural decisions made during implementation
  4. If new decisions found, invoke adr skill to document them

Output:

```

ADR_COMPLIANCE_STATUS: PASS | NEEDS_DOCUMENTATION

APPLICABLE_ADRS: [list]

COMPLIANCE_RESULTS: [per-ADR status]

NEW_DECISIONS_DOCUMENTED: [list of new ADR numbers, if any]

─────────────────────────────────────────────────

⚑ NEXT_STEP: EXECUTE STEP 6 NOW (Plan Synchronization)

```

> CRITICAL: The output MUST include NEXT_STEP: EXECUTE STEP 6 NOW. This is not optional. When you see this in the output, you MUST immediately verify plan synchronization without waiting for user input.

Gate: ADR compliance must PASS to proceed.

On NEEDS_DOCUMENTATION: Invoke adr skill for each undocumented decision.

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ βœ… STEP 5 COMPLETE β†’ IMMEDIATELY EXECUTE STEP 6 NOW β•‘

β•‘ Do NOT output results and wait. Do NOT ask "shall I continue?" β•‘

β•‘ Your next action MUST be: verify plan synchronization β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

---

Step 6: Plan Synchronization

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ ⏩ ENTERING STEP 6 - This is part of a continuous 8-step pipeline. β•‘

β•‘ When this step completes with PASS, IMMEDIATELY execute Step 7. β•‘

β•‘ Do NOT stop. Do NOT wait for user input. β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

Responsibility: Verify work items completed and update plan status.

Process:

  1. Verify all work items for this phase were completed
  2. Add ADR references if new ADRs were created
  3. Note any deviations from original plan
  4. Mark phase status as complete (add βœ… to phase header)

Note: Per ADR-0001, plans are specification documents. Progress is tracked via Task tools, not by modifying checkboxes in the plan file.

Output:

```

PLAN_SYNC_STATUS: PASS | FAIL

WORK_ITEMS_VERIFIED: [count]

DEVIATIONS_NOTED: [count]

ADR_REFERENCES_ADDED: [count]

─────────────────────────────────────────────────

⚑ NEXT_STEP: EXECUTE STEP 7 NOW (Prompt Archival)

```

> CRITICAL: The output MUST include NEXT_STEP: EXECUTE STEP 7 NOW. This is not optional. When you see this in the output, you MUST immediately archive the prompt (or skip if none) without waiting for user input.

Gate: Plan sync must complete successfully.

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ βœ… STEP 6 COMPLETE β†’ IMMEDIATELY EXECUTE STEP 7 NOW β•‘

β•‘ Do NOT output results and wait. Do NOT ask "shall I continue?" β•‘

β•‘ Your next action MUST be: archive the prompt (if provided) β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

---

Step 7: Prompt Archival

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ ⏩ ENTERING STEP 7 - This is part of a continuous 8-step pipeline. β•‘

β•‘ When this step completes with PASS, IMMEDIATELY execute Step 8. β•‘

β•‘ Do NOT stop. Do NOT wait for user input. β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

Responsibility: Archive the phase prompt to the completed folder (if prompt was provided).

Process:

  1. Check if a prompt file was used for this phase
  2. If yes, move to completed/ subfolder:

```bash
# Create completed folder if it doesn't exist
mkdir -p docs/prompts/completed

# Move the prompt file
mv docs/prompts/phase-2-data-pipeline.md docs/prompts/completed/
```

  3. Log the archival

Output:

```

PROMPT_ARCHIVAL_STATUS: PASS | SKIPPED | FAIL

PROMPT_FILE: [original path]

ARCHIVED_TO: [new path in completed/]

─────────────────────────────────────────────────

⚑ NEXT_STEP: EXECUTE STEP 8 NOW (Completion Report)

```

> CRITICAL: The output MUST include NEXT_STEP: EXECUTE STEP 8 NOW. This is not optional. When you see this in the output, you MUST immediately generate the completion report without waiting for user input.

Gate: Non-blocking (failure logged but doesn't stop completion).

Why Archive?

  • Prevents re-using the same prompt accidentally
  • Creates a record of completed work
  • Keeps the prompts folder clean for pending work
  • Allows review of what instructions were used

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ βœ… STEP 7 COMPLETE β†’ IMMEDIATELY EXECUTE STEP 8 NOW β•‘

β•‘ Do NOT output results and wait. Do NOT ask "shall I continue?" β•‘

β•‘ Your next action MUST be: generate the completion report β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

---

Step 8: Phase Completion Report

```

╔═══════════════════════════════════════════════════════════════════════════════╗

β•‘ ⏩ ENTERING STEP 8 (FINAL STEP) - This completes the pipeline. β•‘

β•‘ After generating the completion report, the phase is DONE. β•‘

β•‘ You may NOW stop and await user input for the next phase. β•‘

β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

```

Responsibility: Generate summary for orchestrator and user.

Output Format:

```

═══════════════════════════════════════════════════════════════

● PHASE [N] COMPLETE: [Phase Name]

═══════════════════════════════════════════════════════════════

Implementation:

Files Created: [count] ([file list])

Files Modified: [count] ([file list])

Tests: [X passing, Y failing]

Exit Conditions:

Build: βœ… PASS

Runtime: βœ… PASS

Unit Tests: βœ… PASS

Integration Testing (performed by Claude):

API Tests: βœ… [X/Y passed] (or N/A)

UI Tests: βœ… [X/Y passed] (or N/A)

Evidence: logs/integration-test-phase-N.log

Code Review:

Status: βœ… PASS (all recommendations addressed)

Blocking Issues: 0

Recommendations Fixed: [count]

ADR Compliance:

Status: βœ… PASS

Applicable ADRs: [list]

New ADRs Created: [list or "None"]

Plan Updated:

Work Items Verified: [count]

Phase Status: βœ… Complete

Task Status: completed (via TaskUpdate)

Prompt:

Status: βœ… Archived (or ⏭️ Skipped - no prompt provided)

Archived To: docs/prompts/completed/phase-2-data-pipeline.md

User Verification (only if truly not automatable):

[None - all verification automated]

OR

- [ ] [Physical hardware check]

- [ ] [Third-party dashboard verification]

Learnings Captured:

Status: βœ… Extracted

Patterns Found: [count]

Saved To: ~/.claude/skills/learned/

═══════════════════════════════════════════════════════════════

PHASE STATUS: βœ… COMPLETE - Ready for next phase

═══════════════════════════════════════════════════════════════

```

> Note on User Verification: This section should almost always be empty. Claude performs integration testing in Step 3. Only include items here that genuinely require human eyes (e.g., "verify email arrived in inbox", "check physical device display").

πŸ›‘ STOP POINT: This is the ONLY valid stopping point in the entire pipeline.

After outputting the Completion Report:

  1. Output final Progress Tracker showing all steps βœ… DONE
  2. Invoke continuous-learning skill to capture patterns from this phase
  3. Present the report to the user
  4. NOW (and ONLY now) await user confirmation before proceeding to next phase

Continuous Learning at Phase Boundary

> Phase completion is a natural learning boundary. Always invoke continuous-learning here.

Why invoke here?

  • User may /clear or /compact before next phase
  • Patterns from implementation are fresh and detailed
  • Captures decisions, fixes, and approaches while context is complete
  • Prevents loss of valuable learnings

Invocation:

```

Skill(skill="continuous-learning"): Extract patterns from Phase [N] completion.

Context:

  • Phase: [N] ([Phase Name])
  • Files Changed: [list]
  • Key Decisions: [any architectural choices made]
  • Issues Resolved: [problems fixed during implementation]
  • Approaches Used: [patterns and techniques applied]

Extract valuable patterns for future sessions.

```

Output (appended to completion report):

```

Learnings Captured:

Patterns Extracted: [count]

Saved To: ~/.claude/skills/learned/

```

```

YOU HAVE REACHED THE END OF THE PHASE PIPELINE.

THIS IS THE ONLY PLACE WHERE YOU MAY STOP AND WAIT FOR USER INPUT.

```

---

Progress Tracking

Progress is tracked via Task tools, not checkboxes. See [ADR-0001](../../docs/decisions/ADR-0001-separate-plan-spec-from-progress-tracking.md).

  • Task status is managed by implement-plan (in_progress when starting, completed when done)
  • Plan files remain pure specification documents
  • Step 6 verifies work items but does not modify plan checkboxes

---

Phase Steps (Extensible)

The execution pipeline is defined as an ordered list of steps. This design allows easy extension:

```
PHASE_STEPS = [
  { name: "implementation",      required: true,  skill: null },
  { name: "exit_conditions",     required: true,  skill: null },
  { name: "integration_testing", required: true,  skill: null },  // Claude tests via API/Playwright
  { name: "code_review",         required: true,  skill: "code-review" },
  { name: "adr_compliance",      required: true,  skill: "adr" },
  { name: "plan_sync",           required: true,  skill: null },
  { name: "prompt_archival",     required: false, skill: null },
  { name: "completion_report",   required: true,  skill: null },
]
```
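A runner for this step list might look like the sketch below; `execute_step` stands in for the per-step logic, and the gate rule mirrors the gates above: a failed required step blocks the phase, while a failed optional step (like prompt archival) is merely logged:

```python
PHASE_STEPS = [
    {"name": "implementation",      "required": True},
    {"name": "exit_conditions",     "required": True},
    {"name": "integration_testing", "required": True},
    {"name": "code_review",         "required": True},
    {"name": "adr_compliance",      "required": True},
    {"name": "plan_sync",           "required": True},
    {"name": "prompt_archival",     "required": False},  # non-blocking gate
    {"name": "completion_report",   "required": True},
]

def run_pipeline(execute_step):
    """Run every step in order. Required failures block; optional
    failures are recorded but do not stop the pipeline."""
    log = []
    for step in PHASE_STEPS:
        ok = execute_step(step["name"])
        log.append((step["name"], "PASS" if ok else "FAIL"))
        if not ok and step["required"]:
            return "BLOCKED", log
    return "COMPLETE", log
```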

Adding New Steps

To add a new step (e.g., security scan, performance check):

  1. Define the step with its gate criteria
  2. Add to the pipeline at appropriate position
  3. Implement the step logic or delegate to a skill

Example - Adding Security Scan:

```

{ name: "security_scan", required: false, skill: "security-scan" }

```

Conditional Steps

Steps can be conditional based on:

  • Phase type (e.g., only run security scan on auth phases)
  • Configuration flags
  • Plan metadata

```
if (phase.metadata.security_sensitive) {
  run_step("security_scan")
}
```

Optional Skill Steps

Optional steps can be added to the phase execution pipeline when needed. These steps are not run by default and must be explicitly enabled via plan metadata or configuration.

Optional Steps Overview

| Step | Skill | Purpose | Default |
|------|-------|---------|---------|
| Security Review | security-review | Comprehensive OWASP security audit | Disabled |

> Note: verification-loop is NOT optional - it is the default exit condition verification in Step 2.

Key characteristics:

  • Not included in standard pipeline execution
  • Enabled via plan metadata optional_steps configuration
  • Can be enabled globally or per-phase
  • Integrate at specific points in the pipeline

Security Review Step

Skill: security-review

Purpose: Comprehensive security audit for implementations that handle sensitive operations, user data, or security-critical code paths.

When to enable:

  • Authentication and authorization code
  • User input handling and validation
  • API endpoints exposed to external clients
  • Secrets management and credential handling
  • Payment processing or financial transactions
  • Personal data processing (PII, PHI)
  • Cryptographic operations

How to enable:

```yaml
# In plan metadata
phase_config:
  optional_steps:
    security_review: true
```

Integration point: After Step 4 (Code Review), before Step 5 (ADR Compliance)

```

Step 1: Implementation

Step 2: Exit Conditions

Step 3: Integration Testing

Step 4: Code Review

Step 4.5: Security Review ← INSERTED HERE

Step 5: ADR Compliance

Step 6: Plan Sync

Step 7: Prompt Archival

Step 8: Completion Report

```

Expected output format:

```

SECURITY_REVIEW_STATUS: PASS | FAIL | NEEDS_REMEDIATION

VULNERABILITIES_FOUND: [count]

SEVERITY_BREAKDOWN:

CRITICAL: [count]

HIGH: [count]

MEDIUM: [count]

LOW: [count]

ISSUES:

- [severity] [category]: [description]

RECOMMENDATIONS: [list]

COMPLIANCE_CHECKS: [list of standards checked, e.g., OWASP Top 10]

```

Gate behavior: Security review must PASS to proceed. NEEDS_REMEDIATION triggers fix subagents, then re-review.
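The skill defines the three statuses but not the exact mapping from findings to status. One plausible mapping, stated purely as an assumption (any critical finding fails outright, any other finding triggers remediation), is:

```python
def security_gate(severity_counts):
    """Map a severity breakdown to a gate status.

    ASSUMPTION: criticals -> FAIL, any other finding -> NEEDS_REMEDIATION,
    clean scan -> PASS. The skill itself only names the statuses.
    """
    if severity_counts.get("CRITICAL", 0):
        return "FAIL"
    if sum(severity_counts.values()):
        return "NEEDS_REMEDIATION"
    return "PASS"
```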

Enabling Security Review

Security review can be enabled for phases that handle sensitive operations.


Execution order with security review enabled:

```

Step 1: Implementation

Step 2: Exit Conditions (verification-loop - always runs)

Step 3: Integration Testing

Step 4: Code Review

Step 4.5: Security Review (if enabled)

Step 5: ADR Compliance

Step 6: Plan Sync

Step 7: Prompt Archival

Step 8: Completion Report

```

Per-phase overrides:

```yaml
# Enable globally but override for specific phases
phase_config:
  optional_steps:
    security_review: true
  phases:
    - name: "Database Schema"
      # Inherits security_review: true from global config
    - name: "Static Content"
      optional_steps:
        security_review: false  # Override: skip for this phase
```
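The override semantics can be sketched as a small resolver; the dict shapes mirror the YAML above and are an assumption about how the plan metadata is parsed:

```python
def step_enabled(step, global_cfg, phase):
    """Resolve an optional-step flag: a phase-level optional_steps entry
    overrides the global phase_config; unknown steps default to disabled."""
    phase_steps = phase.get("optional_steps", {})
    if step in phase_steps:
        return phase_steps[step]
    return global_cfg.get("optional_steps", {}).get(step, False)
```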

Invocation

From implement-plan (primary use)

```

Skill(skill="implement-phase"): Execute Phase 2 of the implementation plan.

Context:

  • Plan: docs
