🎯

create-plan

🎯Skill

from ferueda/agent-skills

VibeIndex|
What it does

Generates comprehensive implementation plans by thoroughly researching context, analyzing requirements, and collaboratively developing detailed technical specifications.

πŸ“¦

Part of

ferueda/agent-skills(6 items)

create-plan

Installation

Quick InstallInstall with npx
npx skills add ferueda/agent-skills
πŸ“– Extracted from docs: ferueda/agent-skills
7Installs
-
AddedFeb 4, 2026

Skill Details

SKILL.md

Create detailed implementation plans with thorough research and iteration. Use when the user asks to build a feature, create a plan, or specifically invokes this skill.

Overview

# Implementation Plan

You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.

Initial Response

When this command is invoked:

  1. Check if parameters were provided:

- If a file path was provided as a parameter, skip the default message

- Immediately read any provided files FULLY

- Begin the research process

  1. If no parameters provided, respond with:

```

I'll help you create a detailed implementation plan. Let me start by understanding what we're building.

Please provide:

  1. The task description (or reference to a file)
  2. Any relevant context, constraints, or specific requirements
  3. Links to related research or previous implementations

I'll analyze this information and work with you to create a comprehensive plan.

Tip: You can also invoke this command with a file directly: /create_plan /dev/todos/eng_1234.md

For deeper analysis, try: /create_plan think deeply about /dev/research/eng_1234.md

```

Then wait for the user's input.

Process Steps

Step 1: Context Gathering & Initial Analysis

  1. Read all mentioned files immediately and FULLY:

- Todo files (e.g., /dev/todos/eng_1234.md)

- Research documents (e.g., /dev/research/eng_1234.md)

- Related implementation plans

- Any JSON/data files mentioned

- IMPORTANT: Use the Read tool/ReadFileTool WITHOUT limit/offset parameters to read entire files

- CRITICAL: DO NOT spawn sub-tasks before reading these files yourself in the main context

- NEVER read files partially - if a file is mentioned, read it completely

  1. Spawn initial research tasks to gather context:

Before asking the user any questions, use sub agents/sub tasks to research in parallel:

- Use the Codebase Investigator sub agent to find all files related to the task

- Use the Codebase Investigator sub agent to understand how the current implementation works

- Find any existing thoughts documents about this feature

These agents will:

- Find relevant source files, configs, and tests

- Trace data flow and key functions

- Return detailed explanations with file:line references

  1. Read all files identified by research tasks:

- After research tasks complete, read ALL files they identified as relevant

- Read them FULLY into the main context

- This ensures you have complete understanding before proceeding

  1. Analyze and verify understanding:

- Cross-reference the requirements with actual code

- Identify any discrepancies or misunderstandings

- Note assumptions that need verification

- Determine true scope based on codebase reality

  1. Present informed understanding and focused questions:

```

Based on the todo and my research of the codebase, I understand we need to [accurate summary].

I've found that:

- [Current implementation detail with file:line reference]

- [Relevant pattern or constraint discovered]

- [Potential complexity or edge case identified]

Questions that my research couldn't answer:

- [Specific technical question that requires human judgment]

- [Business logic clarification]

- [Design preference that affects implementation]

```

Only ask questions that you genuinely cannot answer through code investigation.

Step 2: Research & Discovery

After getting initial clarifications:

  1. If the user corrects any misunderstanding:

- DO NOT just accept the correction

- Spawn new research tasks to verify the correct information

- Read the specific files/directories they mention

- Only proceed once you've verified the facts yourself

  1. Create a research todo list using write_todos/TodoWrite to track exploration tasks
  1. Spawn parallel sub-tasks for comprehensive research:

- Create multiple Task agents to research different aspects concurrently

- Use the right agent for each type of research

  1. Wait for ALL sub-tasks to complete before proceeding
  1. Present findings and design options:

```

Based on my research, here's what I found:

Current State:

- [Key discovery about existing code]

- [Pattern or convention to follow]

- [Existing functionality at file:line]

Design Options:

1. [Option A] - [pros/cons]

2. [Option B] - [pros/cons]

Open Questions:

- [Technical uncertainty]

- [Design decision needed]

Which approach aligns best with your vision?

```

Step 3: Plan Structure Development

Once aligned on approach:

  1. Create initial plan outline:

```

Here's my proposed plan structure:

## Overview

[1-2 sentence summary]

## Implementation Phases:

1. [Phase name] - [what it accomplishes]

2. [Phase name] - [what it accomplishes]

3. [Phase name] - [what it accomplishes]

Does this phasing make sense? Should I adjust the order or granularity?

```

  1. Get feedback on structure before writing details

Step 4: Detailed Plan Writing

After structure approval:

  1. Write the plan to dev/plans/YYYYMMDD-description.md

- Format: YYYYMMDD-description.md where:

- YYYYMMDD is today's date

- description is a brief kebab-case description

  1. Use this template structure:

````markdown

# [Feature/Task Name] Implementation Plan

Overview

[Brief description of what we're implementing and why]

Current State Analysis

[What exists now, what's missing, key constraints discovered]

Desired End State

[A Specification of the desired end state after this plan is complete, and how to verify it]

Key Discoveries:

  • [Important finding with file:line reference]
  • [Pattern to follow]
  • [Constraint to work within]

What We're NOT Doing

[Explicitly list out-of-scope items to prevent scope creep]

Implementation Approach

[High-level strategy and reasoning]

Phase 1: [Descriptive Name]

Overview

[What this phase accomplishes]

Changes Required:

#### 1. [Component/File Group]

File: path/to/file.ext

Changes: [Summary of changes]

```[language]

// Specific code to add/modify

```

Success Criteria:

#### Automated Verification:

  • [ ] Make check passes: make check

#### Manual Verification:

  • [ ] Feature works as expected when tested via UI
  • [ ] Performance is acceptable under load
  • [ ] Edge case handling verified manually
  • [ ] No regressions in related features

Implementation Note: After completing this phase and all automated verification passes, pause here for manual confirmation from the human that the manual testing was successful before proceeding to the next phase.

---

Phase 2: [Descriptive Name]

[Similar structure with both automated and manual success criteria...]

---

Testing Strategy

Unit Tests:

  • [What to test]
  • [Key edge cases]

Integration Tests:

  • [End-to-end scenarios]

Manual Testing Steps:

  1. [Specific step to verify feature]
  2. [Another verification step]
  3. [Edge case to test manually]

Performance Considerations

[Any performance implications or optimizations needed]

Migration Notes

[If applicable, how to handle existing data/systems]

References

  • Original ticket: dev/todos/eng_XXXX.md
  • Related research: dev/research/[relevant].md
  • Similar implementation: [file:line]

````

Step 5: Sync and Review

  1. Sync the thoughts directory:

- This ensures the plan is properly indexed and available

  1. Present the draft plan location:

```

I've created the initial implementation plan at:

dev/plans/YYYYMMDD-description.md

Please review it and let me know:

- Are the phases properly scoped?

- Are the success criteria specific enough?

- Any technical details that need adjustment?

- Missing edge cases or considerations?

```

  1. Iterate based on feedback - be ready to:

- Add missing phases

- Adjust technical approach

- Clarify success criteria (both automated and manual)

- Add/remove scope items

  1. Continue refining until the user is satisfied

Important Guidelines

  1. Be Skeptical:

- Question vague requirements

- Identify potential issues early

- Ask "why" and "what about"

- Don't assume - verify with code

  1. Be Interactive:

- Don't write the full plan in one shot

- Get buy-in at each major step

- Allow course corrections

- Work collaboratively

  1. Be Thorough:

- Read all context files COMPLETELY before planning

- Research actual code patterns using parallel sub-tasks

- Include specific file paths and line numbers

- Write measurable success criteria with clear automated vs manual distinction

  1. Be Practical:

- Focus on incremental, testable changes

- Consider migration and rollback

- Think about edge cases

- Include "what we're NOT doing"

  1. Track Progress:

- Use write_todos/TodoWrite to track planning tasks

- Update todos as you complete research

- Mark planning tasks complete when done

  1. No Open Questions in Final Plan:

- If you encounter open questions during planning, STOP

- Research or ask for clarification immediately

- Do NOT write the plan with unresolved questions

- The implementation plan must be complete and actionable

- Every decision must be made before finalizing the plan

Success Criteria Guidelines

Always separate success criteria into two categories:

  1. Automated Verification (can be run by execution agents):

- Commands that can be run: make test, make check, make lint, make fix etc.

- Specific files that should exist

- Code compilation/type checking

- Automated test suites

  1. Manual Verification (requires human testing):

- UI/UX functionality

- Performance under real conditions

- Edge cases that are hard to automate

- User acceptance criteria

Format example:

```markdown

Success Criteria:

#### Automated Verification:

  • [ ] Database migration runs successfully: make migrate
  • [ ] All unit tests pass: go test ./...
  • [ ] No linting errors: golangci-lint run
  • [ ] API endpoint returns 200: curl localhost:8080/api/new-endpoint

#### Manual Verification:

  • [ ] New feature appears correctly in the UI
  • [ ] Performance is acceptable with 1000+ items
  • [ ] Error messages are user-friendly
  • [ ] Feature works correctly on mobile devices

```

Common Patterns

For Database Changes:

  • Start with schema/migration
  • Add store methods
  • Update business logic
  • Expose via API
  • Update clients

For New Features:

  • Research existing patterns first
  • Start with data model
  • Build backend logic
  • Add API endpoints
  • Implement UI last

For Refactoring:

  • Document current behavior
  • Plan incremental changes
  • Maintain backwards compatibility
  • Include migration strategy

Sub-task Spawning Best Practices

When spawning research sub-tasks:

  1. Spawn multiple tasks in parallel for efficiency
  2. Each task should be focused on a specific area
  3. Provide detailed instructions including:

- Exactly what to search for

- Which directories to focus on

- What information to extract

- Expected output format

  1. Be EXTREMELY specific about directories:

- Include the full path context in your prompts

  1. Specify read-only tools to use
  2. Request specific file:line references in responses
  3. Wait for all tasks to complete before synthesizing
  4. Verify sub-task results:

- If a sub-task returns unexpected results, spawn follow-up tasks

- Cross-check findings against the actual codebase

- Don't accept results that seem incorrect