# create-plan

Skill from ferueda/agent-skills (6 items).

Generates comprehensive implementation plans by thoroughly researching context, analyzing requirements, and collaboratively developing detailed technical specifications. Use when the user asks to build a feature, create a plan, or specifically invokes this skill.

Installation: `npx skills add ferueda/agent-skills`
# Implementation Plan
You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
## Initial Response
When this command is invoked:
- Check if parameters were provided:
  - If a file path was provided as a parameter:
    - Skip the default message
    - Immediately read any provided files FULLY
    - Begin the research process
  - If no parameters were provided, respond with:
```
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
- The task description (or reference to a file)
- Any relevant context, constraints, or specific requirements
- Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
Tip: You can also invoke this command with a file directly: /create_plan /dev/todos/eng_1234.md
For deeper analysis, try: /create_plan think deeply about /dev/research/eng_1234.md
```
Then wait for the user's input.
## Process Steps

### Step 1: Context Gathering & Initial Analysis
- Read all mentioned files immediately and FULLY:
  - Todo files (e.g., `/dev/todos/eng_1234.md`)
  - Research documents (e.g., `/dev/research/eng_1234.md`)
  - Related implementation plans
  - Any JSON/data files mentioned
  - IMPORTANT: Use the Read tool/ReadFileTool WITHOUT limit/offset parameters to read entire files
  - CRITICAL: DO NOT spawn sub-tasks before reading these files yourself in the main context
  - NEVER read files partially - if a file is mentioned, read it completely
- Spawn initial research tasks to gather context:
  Before asking the user any questions, use sub-agents/sub-tasks to research in parallel:
  - Use the Codebase Investigator sub-agent to find all files related to the task
  - Use the Codebase Investigator sub-agent to understand how the current implementation works
  - Find any existing thoughts documents about this feature

  These agents will:
  - Find relevant source files, configs, and tests
  - Trace data flow and key functions
  - Return detailed explanations with file:line references
- Read all files identified by research tasks:
  - After research tasks complete, read ALL files they identified as relevant
  - Read them FULLY into the main context
  - This ensures you have complete understanding before proceeding
- Analyze and verify understanding:
  - Cross-reference the requirements with actual code
  - Identify any discrepancies or misunderstandings
  - Note assumptions that need verification
  - Determine true scope based on codebase reality
- Present informed understanding and focused questions:
```
Based on the todo and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
```
Only ask questions that you genuinely cannot answer through code investigation.
### Step 2: Research & Discovery
After getting initial clarifications:
- If the user corrects any misunderstanding:
  - DO NOT just accept the correction
  - Spawn new research tasks to verify the correct information
  - Read the specific files/directories they mention
  - Only proceed once you've verified the facts yourself
- Create a research todo list using `write_todos`/`TodoWrite` to track exploration tasks
- Spawn parallel sub-tasks for comprehensive research:
  - Create multiple Task agents to research different aspects concurrently
  - Use the right agent for each type of research
  - Wait for ALL sub-tasks to complete before proceeding
- Present findings and design options:
```
Based on my research, here's what I found:
Current State:
- [Key discovery about existing code]
- [Pattern or convention to follow]
- [Existing functionality at file:line]
Design Options:
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
Open Questions:
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
```
### Step 3: Plan Structure Development
Once aligned on approach:
- Create initial plan outline:
```
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
```
- Get feedback on structure before writing details
### Step 4: Detailed Plan Writing

After structure approval:
- Write the plan to `dev/plans/YYYYMMDD-description.md`
- Format: `YYYYMMDD-description.md`, where:
  - `YYYYMMDD` is today's date
  - `description` is a brief kebab-case description
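The naming convention above can be sketched as a small Python helper (hypothetical — the skill only specifies the filename format, not any tooling):

```python
import re
from datetime import date

def plan_path(description: str, base_dir: str = "dev/plans") -> str:
    """Build a plan file path like dev/plans/YYYYMMDD-description.md.

    The description is normalized to kebab-case: lowercased, with each
    run of non-alphanumeric characters collapsed into a single hyphen.
    """
    kebab = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return f"{base_dir}/{date.today():%Y%m%d}-{kebab}.md"

# plan_path("Add OAuth Login") -> e.g. "dev/plans/20250101-add-oauth-login.md"
# (the date portion depends on the current day)
```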
- Use this template structure:
````markdown
# [Feature/Task Name] Implementation Plan

## Overview

[Brief description of what we're implementing and why]

## Current State Analysis

[What exists now, what's missing, key constraints discovered]

## Desired End State

[A specification of the desired end state after this plan is complete, and how to verify it]

### Key Discoveries:

- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]

## What We're NOT Doing

[Explicitly list out-of-scope items to prevent scope creep]

## Implementation Approach

[High-level strategy and reasoning]

## Phase 1: [Descriptive Name]

### Overview

[What this phase accomplishes]

### Changes Required:

#### 1. [Component/File Group]

**File**: `path/to/file.ext`
**Changes**: [Summary of changes]

```[language]
// Specific code to add/modify
```

### Success Criteria:

#### Automated Verification:

- [ ] Make check passes: `make check`

#### Manual Verification:

- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features

**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human that the manual testing was successful before proceeding to the next phase.

---

## Phase 2: [Descriptive Name]

[Similar structure with both automated and manual success criteria...]

---

## Testing Strategy

### Unit Tests:

- [What to test]
- [Key edge cases]

### Integration Tests:

- [End-to-end scenarios]

### Manual Testing Steps:

- [Specific step to verify feature]
- [Another verification step]
- [Edge case to test manually]

## Performance Considerations

[Any performance implications or optimizations needed]

## Migration Notes

[If applicable, how to handle existing data/systems]

## References

- Original ticket: `dev/todos/eng_XXXX.md`
- Related research: `dev/research/[relevant].md`
- Similar implementation: `[file:line]`
````
### Step 5: Sync and Review

- Sync the thoughts directory:
  - This ensures the plan is properly indexed and available
- Present the draft plan location:
```
I've created the initial implementation plan at:
dev/plans/YYYYMMDD-description.md
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
```
- Iterate based on feedback - be ready to:
  - Add missing phases
  - Adjust technical approach
  - Clarify success criteria (both automated and manual)
  - Add/remove scope items
- Continue refining until the user is satisfied
## Important Guidelines

- **Be Skeptical**:
  - Question vague requirements
  - Identify potential issues early
  - Ask "why" and "what about"
  - Don't assume - verify with code
- **Be Interactive**:
  - Don't write the full plan in one shot
  - Get buy-in at each major step
  - Allow course corrections
  - Work collaboratively
- **Be Thorough**:
  - Read all context files COMPLETELY before planning
  - Research actual code patterns using parallel sub-tasks
  - Include specific file paths and line numbers
  - Write measurable success criteria with a clear automated vs. manual distinction
- **Be Practical**:
  - Focus on incremental, testable changes
  - Consider migration and rollback
  - Think about edge cases
  - Include "what we're NOT doing"
- **Track Progress**:
  - Use `write_todos`/`TodoWrite` to track planning tasks
  - Update todos as you complete research
  - Mark planning tasks complete when done
- **No Open Questions in Final Plan**:
  - If you encounter open questions during planning, STOP
  - Research or ask for clarification immediately
  - Do NOT write the plan with unresolved questions
  - The implementation plan must be complete and actionable
  - Every decision must be made before finalizing the plan
## Success Criteria Guidelines

Always separate success criteria into two categories:

- **Automated Verification** (can be run by execution agents):
  - Commands that can be run: `make test`, `make check`, `make lint`, `make fix`, etc.
  - Specific files that should exist
  - Code compilation/type checking
  - Automated test suites
- **Manual Verification** (requires human testing):
  - UI/UX functionality
  - Performance under real conditions
  - Edge cases that are hard to automate
  - User acceptance criteria
Format example:
```markdown
Success Criteria:

#### Automated Verification:

- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
```
## Common Patterns

**For Database Changes:**
- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients
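The ordering above can be illustrated with a minimal Python sketch (entirely hypothetical — table name, store class, and business rule are invented for illustration), where each layer builds only on the one below it:

```python
import sqlite3

# 1. Schema/migration: the table definition comes first.
MIGRATION = """
CREATE TABLE IF NOT EXISTS todos (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    title TEXT NOT NULL,
    done INTEGER NOT NULL DEFAULT 0
);
"""

# 2. Store methods: a thin data-access layer over the schema.
class TodoStore:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.executescript(MIGRATION)

    def add(self, title: str) -> int:
        cur = self.conn.execute("INSERT INTO todos (title) VALUES (?)", (title,))
        self.conn.commit()
        return cur.lastrowid

    def complete(self, todo_id: int) -> None:
        self.conn.execute("UPDATE todos SET done = 1 WHERE id = ?", (todo_id,))
        self.conn.commit()

    def count_open(self) -> int:
        return self.conn.execute(
            "SELECT COUNT(*) FROM todos WHERE done = 0"
        ).fetchone()[0]

# 3. Business logic: rules live above the store, not inside SQL.
def close_out(store: TodoStore, todo_id: int) -> int:
    """Mark a todo done and report how many remain open."""
    store.complete(todo_id)
    return store.count_open()

# 4. API endpoints and client updates would wrap close_out() last (omitted).
```

Only once the lower layers exist and are tested would the API endpoint and client changes follow, matching the ordering of the list above.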
**For New Features:**
- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last
**For Refactoring:**
- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy
## Sub-task Spawning Best Practices

When spawning research sub-tasks:
- Spawn multiple tasks in parallel for efficiency
- Each task should be focused on a specific area
- Provide detailed instructions, including:
  - Exactly what to search for
  - Which directories to focus on
  - What information to extract
  - Expected output format
- Be EXTREMELY specific about directories:
  - Include the full path context in your prompts
- Specify read-only tools to use
- Request specific file:line references in responses
- Wait for all tasks to complete before synthesizing
- Verify sub-task results:
  - If a sub-task returns unexpected results, spawn follow-up tasks
  - Cross-check findings against the actual codebase
  - Don't accept results that seem incorrect
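A sub-task prompt following these guidelines might look like this (the directories, table name, and tool names are illustrative, not prescribed by this skill):

```
Task for Codebase Investigator (read-only tools only: Read, Grep, Glob):

Search ONLY in src/api/ and src/store/ for how todo items are persisted.
Extract:
- The store methods that write to the todos table
- Every caller of those methods
- Any validation applied before writes

Return a bulleted summary with a file:line reference for each finding.
Do not propose changes; report only what exists.
```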