# automatic-stateful-prompt-improver

Skill from erichowens/some_claude_skills

## Installation

Install the skill:

`npx skills add https://github.com/erichowens/some_claude_skills --skill automatic-stateful-prompt-improver`
Last updated: Jan 23, 2026

## Skill Details (SKILL.md)

Automatically intercepts and optimizes prompts using the prompt-learning MCP server. Learns from performance over time via embedding-indexed history. Uses APE, OPRO, DSPy patterns. Activate on "optimize prompt", "improve this prompt", "prompt engineering", or ANY complex task request. Requires prompt-learning MCP server. NOT for simple questions (just answer them), NOT for direct commands (just execute them), NOT for conversational responses (no optimization needed).


# Automatic Stateful Prompt Improver

## MANDATORY AUTOMATIC BEHAVIOR

When this skill is active, I MUST follow these rules:

### Auto-Optimization Triggers

I AUTOMATICALLY call mcp__prompt-learning__optimize_prompt BEFORE responding when:

  1. Complex task (multi-step, requires reasoning)
  2. Technical output (code, analysis, structured data)
  3. Reusable content (system prompts, templates, instructions)
  4. Explicit request ("improve", "better", "optimize")
  5. Ambiguous requirements (underspecified, multiple interpretations)
  6. Precision-critical (code, legal, medical, financial)

### Auto-Optimization Process

```
1. INTERCEPT the user's request
2. CALL: mcp__prompt-learning__optimize_prompt
   - prompt: [user's original request]
   - domain: [inferred domain]
   - max_iterations: [3-20 based on complexity]
3. RECEIVE: optimized prompt + improvement details
4. INFORM user briefly: "I've refined your request for [reason]"
5. PROCEED with the OPTIMIZED version
```
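In tool-call terms, step 2 might look like the sketch below. `call_mcp_tool` is a hypothetical stand-in for however your MCP client dispatches tool calls, and the complexity-to-budget mapping is an assumption for illustration; only the tool name and the three argument names come from the process above.

```python
from typing import Any, Dict


def call_mcp_tool(name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
    """Hypothetical stand-in for however your MCP client dispatches a tool call."""
    raise NotImplementedError("wire this to your MCP client")


def optimize_before_responding(user_request: str, domain: str, complexity: str) -> Dict[str, Any]:
    """Intercept a request (step 1) and ask the prompt-learning server to refine it (step 2)."""
    # Rough complexity-to-budget mapping; see the Iteration Decision table below.
    budget = {"low": 4, "medium": 8, "high": 15}.get(complexity, 8)
    return call_mcp_tool(
        "mcp__prompt-learning__optimize_prompt",
        {
            "prompt": user_request,    # the user's original request
            "domain": domain,          # inferred domain, e.g. "data analysis"
            "max_iterations": budget,  # 3-20 based on complexity
        },
    )
```

The response is expected to carry the optimized prompt, the improvement details, and the prompt_id that the learning loop below passes to record_feedback.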

### Do NOT Optimize

  • Simple questions ("what is X?")
  • Direct commands ("run npm install")
  • Conversational responses ("hello", "thanks")
  • File operations without reasoning
  • Already-optimized prompts
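Taken together, the trigger and exclusion lists could be approximated by a lightweight gate like the sketch below; the keyword patterns and length threshold are illustrative assumptions, not part of the skill.

```python
import re

EXPLICIT_TRIGGERS = ("optimize prompt", "improve this prompt", "prompt engineering",
                     "improve", "better", "optimize")
SIMPLE_QUESTION = re.compile(r"^\s*(what|who|when|where|which)\s+is\b", re.IGNORECASE)
DIRECT_COMMAND = re.compile(r"^\s*(run|execute|open|install|delete)\b", re.IGNORECASE)
SMALL_TALK = {"hello", "hi", "thanks", "thank you"}


def should_optimize(request: str) -> bool:
    """Return True if the request warrants a call to optimize_prompt."""
    text = request.strip().lower()
    # Exclusions: simple questions, direct commands, conversational responses.
    if text in SMALL_TALK or SIMPLE_QUESTION.match(text) or DIRECT_COMMAND.match(text):
        return False
    # Explicit requests to improve or optimize always trigger.
    if any(keyword in text for keyword in EXPLICIT_TRIGGERS):
        return True
    # Crude proxy for complexity or ambiguity: long or multi-part requests trigger.
    return len(text.split()) > 40 or text.count("\n") >= 2
```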

### Learning Loop (Post-Response)

After completing ANY significant task:

```
1. ASSESS: Did the response achieve the goal?
2. CALL: mcp__prompt-learning__record_feedback
   - prompt_id: [from optimization response]
   - success: [true/false]
   - quality_score: [0.0-1.0]
3. This enables future retrievals to learn from outcomes
```
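Step 2 of the loop could be wired up as in this sketch; `call_mcp_tool` is the same hypothetical dispatcher used in the optimization sketch above, and only the tool name and the three argument names come from the loop itself.

```python
def record_outcome(prompt_id: str, success: bool, quality_score: float) -> None:
    """Close the learning loop after a significant task."""
    call_mcp_tool(  # hypothetical dispatcher from the earlier sketch
        "mcp__prompt-learning__record_feedback",
        {
            "prompt_id": prompt_id,  # returned by optimize_prompt
            "success": success,      # did the response achieve the goal?
            "quality_score": max(0.0, min(1.0, quality_score)),  # clamp to 0.0-1.0
        },
    )
```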

## Quick Reference

### Iteration Decision

| Factor | Low (3-5) | Medium (5-10) | High (10-20) |
|--------|-----------|---------------|--------------|
| Complexity | Simple | Multi-step | Agent/pipeline |
| Ambiguity | Clear | Some | Underspecified |
| Domain | Known | Moderate | Novel |
| Stakes | Low | Moderate | Critical |
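One way to turn the table into a concrete max_iterations value is to score each factor and map the total onto the 3-20 range; the scoring and thresholds below are assumptions made for this sketch.

```python
def choose_max_iterations(complexity: int, ambiguity: int, novelty: int, stakes: int) -> int:
    """Each factor scored 0 (low column), 1 (medium), or 2 (high); returns 3-20."""
    total = complexity + ambiguity + novelty + stakes  # 0..8
    if total <= 2:
        return 3 + total                  # 3-5: simple, clear, known domain, low stakes
    if total <= 5:
        return 5 + total                  # 8-10: multi-step or somewhat ambiguous
    return min(20, 10 + 2 * (total - 5))  # 12-16, capped at 20 for the hardest cases


# Example: multi-step (1), underspecified (2), novel domain (2), critical (2) scores 7 -> 14 iterations.
```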

### Convergence (When to Stop)

  • Improvement < 1% for 3 iterations
  • User satisfied
  • Token budget exhausted
  • 20 iterations reached
  • Validation score > 0.95
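These criteria amount to a stopping rule like the one sketched here; the thresholds come straight from the list, while the per-iteration score history and token accounting are assumed interfaces.

```python
def should_stop(score_history: list[float], iteration: int, tokens_left: int,
                user_satisfied: bool = False, max_iterations: int = 20) -> bool:
    """Stop optimizing when any convergence criterion is met."""
    if user_satisfied or tokens_left <= 0 or iteration >= max_iterations:
        return True
    if score_history and score_history[-1] > 0.95:  # validation score > 0.95
        return True
    # Improvement < 1% over each of the last 3 iterations.
    if len(score_history) >= 4:
        last_four = score_history[-4:]
        if all(last_four[i + 1] - last_four[i] < 0.01 for i in range(3)):
            return True
    return False
```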

### Performance Expectations

| Scenario | Improvement | Iterations |
|----------|-------------|------------|
| Simple task | 10-20% | 3-5 |
| Complex reasoning | 20-40% | 10-15 |
| Agent/pipeline | 30-50% | 15-20 |
| With history | +10-15% bonus | Varies |

## Anti-Patterns

### Over-Optimization

| What it looks like | Why it's wrong |
|--------------------|----------------|
| Prompt becomes overly complex with many constraints | Causes brittleness, model confusion, token waste |

Instead: Apply Occam's Razor - the simplest sufficient prompt wins.

### Template Obsession

| What it looks like | Why it's wrong |
|--------------------|----------------|
| Focusing on templates rather than task understanding | Templates don't generalize; understanding does |

Instead: Focus on WHAT the task requires, not HOW to format it.

### Iteration Without Measurement

| What it looks like | Why it's wrong |
|--------------------|----------------|
| Multiple rewrites without tracking improvements | Can't know if changes help without metrics |

Instead: Always define success criteria before optimizing.

### Ignoring Model Capabilities

| What it looks like | Why it's wrong |
|--------------------|----------------|
| Assuming the model can't do things it can | Over-scaffolding wastes tokens |

Instead: Test capabilities before heavy prompting.

## Reference Files

Load for detailed implementations:

| File | Contents |
|------|----------|
| references/optimization-techniques.md | APE, OPRO, CoT, instruction rewriting, constraint engineering |
| references/learning-architecture.md | Warm start, embedding retrieval, MCP setup, drift detection |
| references/iteration-strategy.md | Decision matrices, complexity scoring, convergence algorithms |

---

Goal: Simplest prompt that achieves the outcome reliably. Optimize for clarity, specificity, and measurable improvement.

## More from this repository

- ai-engineer: Builds production-ready LLM applications with advanced RAG, vector search, and intelligent agent architectures for enterprise AI solutions.
- research-analyst: Conducts comprehensive market research, competitive analysis, and evidence-based strategy recommendations across diverse landscapes and industries.
- design-archivist: Systematically builds comprehensive visual design databases by analyzing 500-1000 real-world examples across diverse domains, extracting actionable design patterns and trends.
- skill-architect: Systematically creates, validates, and improves Agent Skills by encoding domain expertise and preventing incorrect activations.
- llm-streaming-response-handler: Manages real-time streaming responses from language models, enabling smooth parsing, buffering, and event-driven handling of incremental AI outputs.
- typography-expert: Analyzes and refines typography, providing expert guidance on font selection, kerning, readability, and design consistency across digital and print media.
- clip-aware-embeddings: Performs semantic image-text matching using CLIP embeddings for zero-shot classification, image search, and similarity tasks.
- dag-output-validator: Validates and enforces output quality by checking agent responses against predefined schemas, structural requirements, and content standards.
- orchestrator: Intelligently coordinates multiple specialized skills, dynamically decomposes complex tasks, synthesizes outputs, and creates new skills to fill capability gaps.
- color-theory-palette-harmony-expert: Generates harmonious color palettes using color theory principles, recommending complementary, analogous, and triadic color schemes for design projects.