create-meta-prompts
Skill from cfircoo/claude-code-toolkit
Generates structured, multi-stage Claude-to-Claude prompts with XML outputs, automated summarization, and provenance tracking for complex workflow pipelines.
Part of cfircoo/claude-code-toolkit (15 items)
Installation
```bash
git clone https://github.com/cfircoo/claude-code-toolkit.git
./install.sh
./install.sh -i
./install-mac.sh
./install-linux.sh
# + 5 more commands
```
Skill Details
Create optimized prompts for Claude-to-Claude pipelines with research, planning, and execution stages. Use when building prompts that produce outputs for other prompts to consume, or when running multi-stage workflows (research -> plan -> implement).
Overview
Create prompts optimized for Claude-to-Claude communication in multi-stage workflows. Outputs are structured with XML and metadata for efficient parsing by subsequent prompts.
Every execution produces a SUMMARY.md for quick human scanning without reading full outputs.
Each prompt gets its own folder in .prompts/ with its output artifacts, enabling clear provenance and chain detection.
- Intake: Determine purpose (Do/Plan/Research/Refine), gather requirements
- Chain detection: Check for existing research/plan files to reference
- Generate: Create prompt using purpose-specific patterns
- Save: Create folder in `.prompts/{number}-{topic}-{purpose}/`
- Present: Show decision tree for running
- Execute: Run prompt(s) with dependency-aware execution engine
- Summarize: Create SUMMARY.md for human scanning
```
.prompts/
├── 001-auth-research/
│   ├── completed/
│   │   └── 001-auth-research.md      # Prompt (archived after run)
│   ├── auth-research.md              # Full output (XML for Claude)
│   └── SUMMARY.md                    # Executive summary (markdown for human)
├── 002-auth-plan/
│   ├── completed/
│   │   └── 002-auth-plan.md
│   ├── auth-plan.md
│   └── SUMMARY.md
├── 003-auth-implement/
│   ├── completed/
│   │   └── 003-auth-implement.md
│   └── SUMMARY.md                    # Do prompts create code elsewhere
└── 004-auth-research-refine/
    ├── completed/
    │   └── 004-auth-research-refine.md
    ├── archive/
    │   └── auth-research-v1.md       # Previous version
    └── SUMMARY.md
```
Prompts directory: `!ls -d .prompts 2>/dev/null`
Existing prompts: `!ls .prompts 2>/dev/null`
BEFORE analyzing anything, check if context was provided.
IF no context provided (skill invoked without description):
→ IMMEDIATELY use AskUserQuestion with:
- header: "Purpose"
- question: "What is the purpose of this prompt?"
- options:
- "Do" - Execute a task, produce an artifact
- "Plan" - Create an approach, roadmap, or strategy
- "Research" - Gather information or understand something
- "Refine" - Improve an existing research or plan output
After selection, ask: "Describe what you want to accomplish" (they select "Other" to provide free text).
IF context was provided:
→ Check if purpose is inferable from keywords:
- implement, build, create, fix, add, refactor → Do
- plan, roadmap, approach, strategy, decide, phases → Plan
- research, understand, learn, gather, analyze, explore → Research
- refine, improve, deepen, expand, iterate, update → Refine
→ If unclear, ask the Purpose question above as the first contextual question
→ If clear, proceed to adaptive analysis with the inferred purpose (see the sketch below)
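As a rough illustration, this keyword routing could be expressed as a first-match scan; this is a sketch only, not the skill's actual implementation (Claude does the real routing):

```bash
# Sketch: map a free-text description to an inferred purpose using the
# keyword lists above. First match wins, so "create a roadmap" routes to Do.
infer_purpose() {
  case "$1" in
    *implement*|*build*|*create*|*fix*|*add*|*refactor*) echo "Do" ;;
    *plan*|*roadmap*|*approach*|*strategy*|*decide*|*phases*) echo "Plan" ;;
    *research*|*understand*|*learn*|*gather*|*analyze*|*explore*) echo "Research" ;;
    *refine*|*improve*|*deepen*|*expand*|*iterate*|*update*) echo "Refine" ;;
    *) echo "Unclear" ;;
  esac
}
infer_purpose "implement JWT auth"   # -> Do
```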
Extract and infer:
- Purpose: Do, Plan, Research, or Refine
- Topic identifier: Kebab-case identifier for file naming (e.g., `auth`, `stripe-payments`)
- Complexity: Simple vs complex (affects prompt depth)
- Prompt structure: Single vs multiple prompts
- Target (Refine only): Which existing output to improve
If topic identifier not obvious, ask:
- header: "Topic"
- question: "What topic/feature is this for? (used for file naming)"
- Let user provide via "Other" option
- Enforce kebab-case (convert spaces/underscores to hyphens)
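For example, a straightforward normalization (one possible approach, not necessarily the skill's exact rule):

```bash
# Lowercase, then map spaces and underscores to hyphens.
echo "Stripe Payments_v2" | tr '[:upper:]' '[:lower:]' | tr ' _' '--'
# -> stripe-payments-v2
```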
For Refine purpose, also identify target output from .prompts/*/ to improve.
Scan `.prompts/*/` for existing `*-research.md` and `*-plan.md` files.
If found:
- List them: "Found existing files: auth-research.md (in 001-auth-research/), stripe-plan.md (in 005-stripe-plan/)"
- Use AskUserQuestion:
- header: "Reference"
- question: "Should this prompt reference any existing research or plans?"
- options: List found files + "None"
- multiSelect: true
Match by topic keyword when possible (e.g., "auth plan" → suggest auth-research.md).
Generate 2-4 questions using AskUserQuestion based on purpose and gaps.
Load questions from: [references/question-bank.md](references/question-bank.md)
Route by purpose:
- Do → artifact type, scope, approach
- Plan → plan purpose, format, constraints
- Research → depth, sources, output format
- Refine → target selection, feedback, preservation
After receiving answers, present decision gate using AskUserQuestion:
- header: "Ready"
- question: "Ready to create the prompt?"
- options:
- "Proceed" - Create the prompt with current context
- "Ask more questions" - I have more details to clarify
- "Let me add context" - I want to provide additional information
Loop until "Proceed" selected.
After "Proceed" selected, state confirmation:
"Creating a {purpose} prompt for: {topic}
Folder: .prompts/{number}-{topic}-{purpose}/
References: {list any chained files}"
Then proceed to generation.
Load purpose-specific patterns:
- Do: [references/do-patterns.md](references/do-patterns.md)
- Plan: [references/plan-patterns.md](references/plan-patterns.md)
- Research: [references/research-patterns.md](references/research-patterns.md)
- Refine: [references/refine-patterns.md](references/refine-patterns.md)
Load intelligence rules: [references/intelligence-rules.md](references/intelligence-rules.md)
All generated prompts include:
- Objective: What to accomplish, why it matters
- Context: Referenced files (@), dynamic context (!)
- Requirements: Specific instructions for the task
- Output specification: Where to save, what structure
- Metadata requirements: For research/plan outputs, specify XML metadata structure
- SUMMARY.md requirement: All prompts must create a SUMMARY.md file
- Success criteria: How to know it worked
For Research and Plan prompts, output must include:
- How confident in findings
- What's needed to proceed
- What remains uncertain
- What was assumed
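For instance, such a metadata block might look like the following; the tag names here are illustrative, and the authoritative structure lives in references/metadata-guidelines.md:

```xml
<metadata>
  <confidence>High for library comparison; medium for cookie strategy</confidence>
  <dependencies>Decision on token expiry needed before planning</dependencies>
  <open-questions>Does the existing session store support rotation?</open-questions>
  <assumptions>Node 18+ runtime; single-region deployment</assumptions>
</metadata>
```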
All prompts must create SUMMARY.md with:
- One-liner - Substantive description of outcome
- Version - v1 or iteration info
- Key Findings - Actionable takeaways
- Files Created - (Do prompts only)
- Decisions Needed - What requires user input
- Blockers - External impediments
- Next Step - Concrete forward action
- Create folder: `.prompts/{number}-{topic}-{purpose}/`
- Create `completed/` subfolder
- Write prompt to: `.prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md`
- Prompt instructs output to: `.prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md`
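Concretely, for a hypothetical plan prompt numbered 002 on topic auth, the save step produces:

```bash
# Example for {number}=002, {topic}=auth, {purpose}=plan
mkdir -p .prompts/002-auth-plan/completed
# The prompt is written here:
#   .prompts/002-auth-plan/002-auth-plan.md
# The prompt instructs its own output to land at:
#   .prompts/002-auth-plan/auth-plan.md
```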
After saving prompt(s), present inline (not AskUserQuestion):
```
Prompt created: .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md
What's next?
1. Run prompt now
2. Review/edit prompt first
3. Save for later
4. Other
Choose (1-4): _
```
```
Prompts created:
- .prompts/001-auth-research/001-auth-research.md
- .prompts/002-auth-plan/002-auth-plan.md
- .prompts/003-auth-implement/003-auth-implement.md
Detected execution order: Sequential (002 references 001 output, 003 references 002 output)
What's next?
1. Run all prompts (sequential)
2. Review/edit prompts first
3. Save for later
4. Other
Choose (1-4): _
```
Straightforward execution of one prompt.
- Read prompt file contents
- Spawn Task agent with subagent_type="general-purpose"
- Include in task prompt:
- The complete prompt contents
- Output location: .prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md
- Wait for completion
- Validate output (see validation section)
- Archive prompt to `completed/` subfolder
- Report results with next-step options
For chained prompts where each depends on previous output.
- Build execution queue from dependency order
- For each prompt in queue:
a. Read prompt file
b. Spawn Task agent
c. Wait for completion
d. Validate output
e. If validation fails → stop, report failure, offer recovery options
f. If success → archive prompt, continue to next
- Report consolidated results
Show progress during execution:
```
Executing 1/3: 001-auth-research... ✓
Executing 2/3: 002-auth-plan... ✓
Executing 3/3: 003-auth-implement... (running)
```
For independent prompts with no dependencies.
- Read all prompt files
- CRITICAL: Spawn ALL Task agents in a SINGLE message
- This is required for true parallel execution
- Each task includes its output location
- Wait for all to complete
- Validate all outputs
- Archive all prompts
- Report consolidated results (successes and failures)
Unlike sequential, parallel continues even if some fail:
- Collect all results
- Archive successful prompts
- Report failures with details
- Offer to retry failed prompts
For complex DAGs (e.g., two parallel research → one plan).
- Analyze dependency graph from @ references
- Group into execution layers:
- Layer 1: No dependencies (run parallel)
- Layer 2: Depends only on layer 1 (run after layer 1 completes)
- Layer 3: Depends on layer 2, etc.
- Execute each layer:
- Parallel within layer
- Sequential between layers
- Stop if any dependency fails (downstream prompts can't run)
```
Layer 1 (parallel): 001-api-research, 002-db-research
Layer 2 (after layer 1): 003-architecture-plan
Layer 3 (after layer 2): 004-implement
```
Scan prompt contents for @ references to determine dependencies:
- Parse each prompt for `@.prompts/{number}-{topic}/` patterns
- Build dependency graph
- Detect cycles (error if found)
- Determine execution order
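A minimal sketch of this scan with standard tools, assuming the folder layout above (the real engine runs inside Claude):

```bash
# Emit "dependency prompt" edges by scanning each prompt for @ references,
# then derive a valid sequential order with tsort (which errors on cycles).
edges() {
  for prompt in .prompts/*/[0-9][0-9][0-9]-*.md; do
    [ -f "$prompt" ] || continue
    this=$(basename "$(dirname "$prompt")")
    grep -o '@\.prompts/[0-9][0-9][0-9]-[a-z-]*' "$prompt" |
      sed 's|@\.prompts/||' |
      while read -r dep; do echo "$dep $this"; done
  done
}
edges | tsort   # prompts with no references emit no edges; they can run first
```

Grouping that order into parallel layers then amounts to repeatedly peeling off prompts whose dependencies have all completed.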
If no explicit @ references found, infer from purpose:
- Research prompts: No dependencies (can parallel)
- Plan prompts: Depend on same-topic research
- Do prompts: Depend on same-topic plan
Override with explicit references when present.
If a prompt references output that doesn't exist:
- Check if it's another prompt in this session (will be created)
- Check if it exists in `.prompts/*/` (already completed)
- If truly missing:
- Warn user: "002-auth-plan references auth-research.md which doesn't exist"
- Offer: Create the missing research prompt first? / Continue anyway? / Cancel?
After each prompt completes, verify success:
- File exists: Check output file was created
- Not empty: File has content (> 100 chars)
- Metadata present (for research/plan): Check for the required XML tags (confidence, dependencies, open questions, assumptions)
- SUMMARY.md exists: Check SUMMARY.md was created
- SUMMARY.md complete: Has required sections (Key Findings, Decisions Needed, Blockers, Next Step)
- One-liner is substantive: Not generic like "Research completed"
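A rough shell equivalent of these checks, with the metadata tag names assumed as above:

```bash
# Sketch: validate one research/plan output and its SUMMARY.md.
out=".prompts/001-auth-research/auth-research.md"
sum=".prompts/001-auth-research/SUMMARY.md"
[ -s "$out" ] && [ "$(wc -c < "$out")" -gt 100 ] || echo "output missing or too short"
for tag in confidence dependencies open-questions assumptions; do   # assumed tag names
  grep -q "<$tag>" "$out" || echo "missing <$tag> metadata"
done
for section in "Key Findings" "Decisions Needed" "Blockers" "Next Step"; do
  grep -q "$section" "$sum" 2>/dev/null || echo "SUMMARY.md missing: $section"
done
```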
If validation fails:
- Report what's missing
- Offer options:
- Retry the prompt
- Continue anyway (for non-critical issues)
- Stop and investigate
Stop the chain immediately:
```
✗ Failed at 2/3: 002-auth-plan
Completed:
- 001-auth-research ✓ (archived)
Failed:
- 002-auth-plan: Output file not created
Not started:
- 003-auth-implement
What's next?
- Retry 002-auth-plan
- View error details
- Stop here (keep completed work)
- Other
```
Continue others, report all results:
```
Parallel execution completed with errors:
✓ 001-api-research (archived)
✗ 002-db-research: Validation failed - missing required metadata
✓ 003-ui-research (archived)
What's next?
- Retry failed prompt (002)
- View error details
- Continue without 002
- Other
```
- Sequential: Archive each prompt immediately after successful completion
- Provides clear state if execution stops mid-chain
- Parallel: Archive all at end after collecting results
- Keeps prompts available for potential retry
Move prompt file to completed subfolder:
```bash
mv .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md \
.prompts/{number}-{topic}-{purpose}/completed/
```
Output file stays in place (not moved).
```
✓ Executed: 001-auth-research
✓ Created: .prompts/001-auth-research/SUMMARY.md
─────────────────────────────────────────────────
# Auth Research Summary
JWT with jose library and httpOnly cookies recommended
Key Findings
• jose outperforms jsonwebtoken with better TypeScript support
• httpOnly cookies required (localStorage is XSS vulnerable)
• Refresh rotation is OWASP standard
Decisions Needed
None - ready for planning
Blockers
None
Next Step
Create auth-plan.md
─────────────────────────────────────────────────
What's next?
- Create planning prompt (auth-plan)
- View full research output
- Done
- Other
```
Display the actual SUMMARY.md content inline so user sees findings without opening files.
```
✓ Chain completed: auth workflow
Results:
─────────────────────────────────────────────────
001-auth-research
JWT with jose library and httpOnly cookies recommended
Decisions: None • Blockers: None
002-auth-plan
4-phase implementation: types → JWT core → refresh → tests
Decisions: Approve 15-min token expiry • Blockers: None
003-auth-implement
JWT middleware complete with 6 files created
Decisions: Review before Phase 2 • Blockers: None
─────────────────────────────────────────────────
All prompts archived. Full summaries in .prompts/*/SUMMARY.md
What's next?
- Review implementation
- Run tests
- Create new prompt chain
- Other
```
For chains, show condensed one-liner from each SUMMARY.md with decisions/blockers flagged.
If user wants to re-run an already-completed prompt:
- Check if prompt is in `completed/` subfolder
- Move it back to parent folder
- Optionally backup existing output: `{output}.bak`
- Execute normally
If output file already exists:
- For re-runs: Backup existing → `{filename}.bak`
- For new runs: Should not happen (unique numbering)
- If conflict detected: Ask user - Overwrite? / Rename? / Cancel?
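For example, a re-run of 001-auth-research would roughly amount to:

```bash
dir=".prompts/001-auth-research"
# Restore the archived prompt so it can run again.
mv "$dir/completed/001-auth-research.md" "$dir/"
# Back up the previous output before it gets overwritten.
[ -f "$dir/auth-research.md" ] && cp "$dir/auth-research.md" "$dir/auth-research.md.bak"
```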
After successful execution:
- Do NOT auto-commit (user controls git workflow)
- Mention what files were created/modified
- User can commit when ready
Exception: If user explicitly requests commit, stage and commit:
- Output files created
- Prompts archived
- Any implementation changes (for Do prompts)
If a prompt's output includes instructions to create more prompts:
- This is advanced usage - don't auto-detect
- Present the output to user
- User can invoke skill again to create follow-up prompts
- Maintains user control over prompt creation
Prompt patterns by purpose:
- [references/do-patterns.md](references/do-patterns.md) - Execution prompts + output structure
- [references/plan-patterns.md](references/plan-patterns.md) - Planning prompts + plan.md structure
- [references/research-patterns.md](references/research-patterns.md) - Research prompts + research.md structure
- [references/refine-patterns.md](references/refine-patterns.md) - Iteration prompts + versioning
Shared templates:
- [references/summary-template.md](references/summary-template.md) - SUMMARY.md structure and field requirements
- [references/metadata-guidelines.md](references/metadata-guidelines.md) - Confidence, dependencies, open questions, assumptions
Supporting references:
- [references/question-bank.md](references/question-bank.md) - Intake questions by purpose
- [references/intelligence-rules.md](references/intelligence-rules.md) - Extended thinking, parallel tools, depth decisions
Prompt Creation:
- Intake gate completed with purpose and topic identified
- Chain detection performed, relevant files referenced
- Prompt generated with correct structure for purpose
- Folder created in `.prompts/` with correct naming
- Output file location specified in prompt
- SUMMARY.md requirement included in prompt
- Metadata requirements included for Research/Plan outputs
- Quality controls included for Research outputs (verification checklist, QA, pre-submission)
- Streaming write instructions included for Research outputs
- Decision tree presented
Execution (if user chooses to run):
- Dependencies correctly detected and ordered
- Prompts executed in correct order (sequential/parallel/mixed)
- Output validated after each completion
- SUMMARY.md created with all required sections
- One-liner is substantive (not generic)
- Failed prompts handled gracefully with recovery options
- Successful prompts archived to `completed/` subfolder
- SUMMARY.md displayed inline in results
- Results presented with decisions/blockers flagged
Research Quality (for Research prompts):
- Verification checklist completed
- Quality report distinguishes verified from assumed claims
- Sources consulted listed with URLs
- Confidence levels assigned to findings
- Critical claims verified with official documentation