Phase 1: Discovery (Automated)
Auto-discovers Team ID from docs/tasks/kanban_board.md (see CLAUDE.md "Configuration Auto-Discovery").
Input: Story ID from orchestrator (ln-510)
Phase 2: Story + Tasks Analysis (NO Dialog)
Step 0: Study Project Test Files
- Scan for test-related files:
  - tests/README.md (commands, setup, environment)
  - Test configs (jest.config.js, vitest.config.ts, pytest.ini)
  - Existing test structure (tests/, __tests__/ directories)
  - Coverage config (.coveragerc, coverage.json)
- Extract: test commands, framework, patterns, coverage thresholds
- Ensures test planning aligns with project practices
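A minimal discovery sketch in TypeScript (assuming a Node.js environment; the candidate paths mirror the list above, and extracting commands, frameworks, and thresholds from the file contents is left to the agent):
```typescript
// Sketch of Step 0 file discovery. Candidate paths follow the bullets above;
// this only collects contents, it does not interpret them.
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

const TEST_FILE_CANDIDATES = [
  "tests/README.md",
  "jest.config.js",
  "vitest.config.ts",
  "pytest.ini",
  ".coveragerc",
  "coverage.json",
];

function discoverTestFiles(projectRoot: string): Record<string, string> {
  const found: Record<string, string> = {};
  for (const candidate of TEST_FILE_CANDIDATES) {
    const path = join(projectRoot, candidate);
    if (existsSync(path)) {
      found[candidate] = readFileSync(path, "utf8");
    }
  }
  return found; // keys indicate the framework; contents yield commands and thresholds
}
```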
Step 1: Load Research and Manual Test Results
- Fetch Story from Linear (must have label "user-story")
- Extract Story.id (UUID) - Use UUID, NOT short ID (required for Linear API)
- Load research comment (from ln-511): "## Test Research: {Feature}"
- Load manual test results comment (from ln-512): "## Manual Testing Results"
- If not found → ERROR: Run ln-510-test-planner pipeline first
- Parse sections: AC results (PASS/FAIL), Edge Cases, Error Handling, Integration flows
- Map to test design: PASSED AC → E2E, Edge cases → Unit, Errors → Error handling, Flows → Integration
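A sketch of this mapping (the section fields and PASS/FAIL shape are assumptions, not the actual ln-512 output schema):
```typescript
// Maps parsed manual-test sections onto the test design buckets listed above.
interface ManualTestResults {
  acResults: { id: string; title: string; status: "PASS" | "FAIL" }[];
  edgeCases: string[];
  errorHandling: string[];
  integrationFlows: string[];
}

interface TestDesignInput {
  e2eCandidates: string[];         // from PASSED acceptance criteria
  unitCandidates: string[];        // from edge cases
  errorCandidates: string[];       // from error-handling scenarios
  integrationCandidates: string[]; // from integration flows
}

function mapToTestDesign(results: ManualTestResults): TestDesignInput {
  return {
    e2eCandidates: results.acResults
      .filter((ac) => ac.status === "PASS")
      .map((ac) => ac.title),
    unitCandidates: results.edgeCases,
    errorCandidates: results.errorHandling,
    integrationCandidates: results.integrationFlows,
  };
}
```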
Step 2: Analyze Story + Tasks
- Parse Story: Goal, Test Strategy, Technical Notes
- Fetch all child Tasks (parentId = Story.id, status = Done) from Linear
- Analyze each Task:
  - Components implemented
  - Business logic added
  - Integration points created
  - Conditional branches (if/else/switch)
- Identify what needs testing
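For illustration, a hypothetical shape for the per-Task analysis result (not a schema defined by this spec):
```typescript
// Illustrative container for the Step 2 analysis of one Done child Task.
interface TaskAnalysis {
  taskId: string;               // Linear Task UUID
  components: string[];         // components implemented
  businessLogic: string[];      // business logic added
  integrationPoints: string[];  // integration points created
  conditionalBranches: number;  // if/else/switch branches worth covering
  needsTesting: string[];       // derived list of testable behaviors
}
```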
Phase 3: Parsing Strategy for Manual Test Results
Process: Locate Linear comment with "Manual Testing Results" header → Verify Format Version 1.0 → Extract structured sections (Acceptance Criteria, Test Results by AC, Edge Cases, Error Handling, Integration Testing) using regex → Validate (at least 1 PASSED AC, AC count matches Story, completeness check) → Map parsed data to test design structure
Error Handling:
- Missing comment → ERROR (run ln-512 first)
- Missing format version → WARNING (try legacy parsing)
- Required section missing → ERROR (re-run ln-512)
- No PASSED AC → ERROR (fix implementation)
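A minimal parsing and validation sketch (the exact comment layout, heading names, and AC id format produced by ln-512 are assumptions here):
```typescript
// Extracts a "## <heading>" section from the manual-test comment and applies
// the validation rules listed above. Heading text and "AC-n" ids are assumed.
const FORMAT_VERSION_RE = /Format Version:\s*1\.0/;

function extractSection(body: string, heading: string): string | null {
  // Capture everything between "## <heading>" and the next "## " heading.
  const re = new RegExp(`##\\s*${heading}\\s*\\n([\\s\\S]*?)(?=\\n##\\s|$)`);
  const match = body.match(re);
  return match ? match[1].trim() : null;
}

function validateManualResults(body: string, storyAcCount: number): string[] {
  const problems: string[] = [];
  if (!FORMAT_VERSION_RE.test(body)) {
    problems.push("WARNING: missing format version, trying legacy parsing");
  }
  const acSection = extractSection(body, "Test Results by AC");
  if (!acSection) problems.push("ERROR: required section missing, re-run ln-512");
  const passed = (acSection?.match(/PASS/g) ?? []).length;
  if (passed === 0) problems.push("ERROR: no PASSED AC, fix implementation");
  const acCount = (acSection?.match(/AC-\d+/g) ?? []).length;
  if (acCount !== storyAcCount) problems.push("ERROR: AC count does not match Story");
  return problems;
}
```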
Phase 4: Risk-Based Test Planning (Automated)
Reference: See references/risk_based_testing_guide.md for complete methodology.
E2E-First Approach: Prioritize by business risk (Priority = Impact x Probability), not coverage metrics.
Workflow:
Step 1: Risk Assessment
Calculate Priority for each scenario from manual testing:
```
Priority = Business Impact (1-5) x Probability (1-5)
```
Decision Criteria:
- Priority >=15 → MUST test
- Priority 9-14 → SHOULD test if not covered
- Priority <=8 → SKIP (manual testing sufficient)
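A compact sketch of the scoring and decision rule above:
```typescript
// Priority = Business Impact (1-5) x Probability (1-5), so scores range 1-25.
type Decision = "MUST" | "SHOULD" | "SKIP";

function priority(businessImpact: number, probability: number): number {
  return businessImpact * probability;
}

function decide(p: number): Decision {
  if (p >= 15) return "MUST";   // always automate
  if (p >= 9) return "SHOULD";  // automate if not already covered
  return "SKIP";                // manual testing is sufficient
}
```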
Step 2: E2E Test Selection (2-5): baseline of 2 (positive + negative) ALWAYS, plus 0-3 additional (Priority >=15 only)
Step 3: Unit Test Selection (0-15): DEFAULT 0. Add ONLY for complex business logic (Priority >=15): financial, security, algorithms
Step 4: Integration Test Selection (0-8): DEFAULT 0. Add ONLY if E2E leaves gaps AND Priority >=15: rollback, concurrency, external API errors
Step 5: Validation: limit is 2-28 tests total (realistic goal: 2-7). Auto-trim if >7 (keep the 2 baseline tests + top 5 by Priority)
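The auto-trim rule from Step 5, sketched (field names are illustrative):
```typescript
// Keeps the 2 baseline E2E tests plus the top remaining scenarios by Priority
// whenever the plan exceeds the realistic limit of 7 tests.
interface PlannedTest {
  name: string;
  priority: number;  // Impact x Probability
  baseline: boolean; // true for the 2 mandatory E2E tests
}

function autoTrim(tests: PlannedTest[], maxTotal = 7): PlannedTest[] {
  if (tests.length <= maxTotal) return tests;
  const baseline = tests.filter((t) => t.baseline);
  const rest = tests
    .filter((t) => !t.baseline)
    .sort((a, b) => b.priority - a.priority)
    .slice(0, maxTotal - baseline.length);
  return [...baseline, ...rest];
}
```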
Phase 5: Test Task Generation (Automated)
Generates complete test task per test_task_template.md (11 sections):
Sections 1-7: Context, Risk Matrix, E2E/Integration/Unit Tests (with Priority scores + justifications), Coverage, DoD
Section 8: Existing Tests to Fix (analysis of affected tests from implementation tasks)
Section 9: Infrastructure Changes (packages, Docker, configs - based on test dependencies)
Section 10: Documentation Updates (README, CHANGELOG, tests/README, config docs)
Section 11: Legacy Code Cleanup (deprecated patterns, backward compat, dead code)
Shows a preview for review.
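For illustration, a hypothetical outline of the generated task mirroring the 11 template sections (field names are not taken from test_task_template.md):
```typescript
// Illustrative outline of the generated test task, section by section.
interface GeneratedTestTask {
  context: string;                 // 1
  riskMatrix: string;              // 2
  e2eTests: string[];              // 3
  integrationTests: string[];      // 4
  unitTests: string[];             // 5
  coverage: string;                // 6
  definitionOfDone: string[];      // 7
  existingTestsToFix: string[];    // 8
  infrastructureChanges: string[]; // 9
  documentationUpdates: string[];  // 10
  legacyCleanup: string[];         // 11
}
```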
Phase 6: Confirmation & Delegation
Step 1: Preview generated test plan (always displayed for transparency)
Step 2: Confirmation logic:
- autoApprove: true (default from ln-510) → proceed automatically
- Manual run → prompt user to type "confirm"
Step 3: Check for existing test task
Query Linear: list_issues(parentId=Story.id, labels=["tests"])
Decision:
- Count = 0 → CREATE MODE (Step 4a)
- Count >= 1 → REPLAN MODE (Step 4b)
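A sketch of this decision; `listIssues` stands in for the Linear query above and is not a confirmed API signature:
```typescript
// Placeholder for the Linear list_issues query used in Step 3.
declare function listIssues(params: {
  parentId: string;
  labels: string[];
}): Promise<{ id: string }[]>;

async function chooseMode(storyId: string): Promise<"CREATE" | "REPLAN"> {
  const existing = await listIssues({ parentId: storyId, labels: ["tests"] });
  return existing.length === 0 ? "CREATE" : "REPLAN"; // 0 -> create, >=1 -> replan
}
```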
Step 4a: CREATE MODE (if Count = 0)
Invoke ln-301-task-creator worker with taskType: "test"
Pass to worker:
- taskType, teamId, storyData (Story.id, title, AC, Technical Notes, Context)
- researchFindings (from ln-511 comment)
- manualTestResults (from ln-512 comment)
- testPlan (e2eTests, integrationTests, unitTests, riskPriorityMatrix)
- infrastructureChanges, documentationUpdates, legacyCleanup
Worker returns: Task URL + summary
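A hypothetical payload shape for the worker invocation; field names follow the bullets above rather than a published worker contract:
```typescript
// Illustrative CREATE MODE payload for ln-301-task-creator.
interface TestTaskCreatePayload {
  taskType: "test";
  teamId: string;
  storyData: {
    id: string;                  // Story UUID
    title: string;
    acceptanceCriteria: string[];
    technicalNotes: string;
    context: string;
  };
  researchFindings: string;      // ln-511 comment body
  manualTestResults: string;     // ln-512 comment body
  testPlan: {
    e2eTests: string[];
    integrationTests: string[];
    unitTests: string[];
    riskPriorityMatrix: Record<string, number>;
  };
  infrastructureChanges: string[];
  documentationUpdates: string[];
  legacyCleanup: string[];
}
```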
Step 4b: REPLAN MODE (if Count >= 1)
Invoke ln-302-task-replanner worker with taskType: "test"
Pass to worker:
- Same data as CREATE MODE + existingTaskIds
Worker returns: Operations summary + warnings
Step 5: Return summary to orchestrator (ln-510)
---