skill-creator-pro

What it does

Generates production-grade skills by discovering domain expertise, asking clarifying questions, and creating adaptable, reusable intelligence.

Part of panaversity/agentfactory (23 items)

Installation

Install with npx:

npx skills add panaversity/agentfactory --skill skill-creator-pro

Skill Details: SKILL.md

# Skill Creator Pro

Create production-grade skills that extend Claude's capabilities.

## How This Skill Works

```
User: "Create a skill for X"
        ↓
Claude Code uses this meta-skill as guidance
        ↓
Follow Domain Discovery → Ask user clarifying questions → Create skill
        ↓
Generated skill with embedded domain expertise
```

This skill provides guidance and structure for creating skills. Claude Code:

  1. Uses this skill's framework to discover domain knowledge
  2. Asks user for clarifications about THEIR specific requirements
  3. Decides how to structure the generated skill based on domain needs

## What This Skill Does

  • Guides creation of new skills from scratch
  • Helps improve existing skills to production quality
  • Provides patterns for 5 skill types (Builder, Guide, Automation, Analyzer, Validator)
  • Ensures skills encode procedural knowledge + domain expertise

## What This Skill Does NOT Do

  • Handle skill versioning/updates after creation
  • Create requirement-specific skills (always create reusable intelligence)
  • Deploy skills to production (but DOES require local testing before delivery)

---

## Domain Discovery Framework

Key Principle: Users want domain expertise IN the skill. They may not BE domain experts.

### Phase 1: Automatic Discovery (No User Input)

Proactively research the domain before asking anything:

| Discover | How | Example: "Kafka integration" |
|----------|-----|------------------------------|
| Core concepts | Official docs, Context7 | Producers, consumers, topics, partitions |
| Standards/compliance | Search "[domain] standards" | Kafka security, exactly-once semantics |
| Best practices | Search "[domain] best practices 2025" | Partitioning strategies, consumer groups |
| Anti-patterns | Search "[domain] common mistakes" | Too many partitions, no monitoring |
| Security | Search "[domain] security" | SASL, SSL, ACLs, encryption |
| Ecosystem | Search "[domain] ecosystem tools" | Confluent, Schema Registry, Connect |

Sources priority: Official docs → Library docs (Context7) → GitHub → Community → WebSearch

### Phase 2: Knowledge Sufficiency Check

Before asking user anything, verify internally:

```
- [ ] Core concepts understood?
- [ ] Best practices identified?
- [ ] Anti-patterns known?
- [ ] Security considerations covered?
- [ ] Official sources found?

If ANY gap → Research more (don't ask user for domain knowledge)
Only if CANNOT discover (proprietary/internal) → Ask user
```
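
This gate is a judgment Claude applies during discovery, not a script shipped with this skill. Purely as an illustration, it can be modeled as a small decision function; the field names below are made up for the example:

```python
# Illustrative only: models the Phase 2 gate as data plus a decision rule.
sufficiency = {
    "core_concepts": True,
    "best_practices": True,
    "anti_patterns": False,   # a gap found during discovery
    "security": True,
    "official_sources": True,
}

def next_action(checks: dict, discoverable: bool = True) -> str:
    """Decide the next step when knowledge gaps remain."""
    if all(checks.values()):
        return "proceed to Phase 3 (ask the user for THEIR requirements)"
    if discoverable:
        return "research more (do not ask the user for domain knowledge)"
    return "ask the user (proprietary/internal knowledge only)"

print(next_action(sufficiency))
```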

### Phase 3: User Requirements (NOT Domain Knowledge)

Only ask about user's SPECIFIC context:

| Ask | Don't Ask |
|-----|-----------|
| "What's YOUR use case?" | "What is Kafka?" |
| "What's YOUR tech stack?" | "What options exist?" |
| "Any existing resources?" | "How does it work?" |
| "Specific constraints?" | "What are best practices?" |

The skill contains domain expertise. User provides requirements.

---

## Required Clarifications

Ask about SKILL METADATA and USER REQUIREMENTS (not domain knowledge):

### Skill Metadata

1. Skill Type - "What type of skill?"

| Type | Purpose | Example |
|------|---------|---------|
| Builder | Create artifacts | Widgets, code, documents |
| Guide | Provide instructions | How-to, tutorials |
| Automation | Execute workflows | File processing, deployments |
| Analyzer | Extract insights | Code review, data analysis |
| Validator | Enforce quality | Compliance checks, scoring |

2. Domain - "What domain or technology?"

### User Requirements (After Domain Discovery)

3. Use Case - "What's YOUR specific use case?"

  • Not "what can it do" but "what do YOU need"

4. Tech Stack - "What's YOUR environment?"

  • Languages, frameworks, existing infrastructure

5. Existing Resources - "Any scripts, templates, configs to include?"

6. Constraints - "Any specific requirements or limitations?"

  • Performance, security, compliance specific to user's context

Note

  • Questions 1-2: Ask immediately
  • Domain Discovery: Research automatically after knowing domain
  • Questions 3-6: Ask after discovery, informed by domain knowledge
  • Question pacing: Avoid asking too many questions in a single message. Start with most important, follow up as needed.

---

## Core Principles

### Reusable Intelligence, Not Requirement-Specific

Skills must handle VARIATIONS, not single requirements:

```
❌ Bad: "Create bar chart with sales data using Recharts"
✅ Good: "Create visualizations - adaptable to data shape, chart type, library"

❌ Bad: "Deploy to AWS EKS with Helm"
✅ Good: "Deploy applications - adaptable to platform, orchestration, environment"
```

Identify what VARIES vs what's CONSTANT in the domain. See references/reusability-patterns.md.
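
As a concrete illustration of that split, a requirement-specific helper bakes every choice in, while a reusable one exposes the variations as parameters and keeps the constants encoded in the skill. The function names below are hypothetical, not part of any existing skill:

```python
# ❌ Requirement-specific: every decision is baked into the function.
def create_sales_bar_chart_with_recharts(data):
    ...

# ✅ Reusable: what VARIES (data shape, chart type, library) becomes parameters;
# what is CONSTANT (labeling, sensible defaults) stays encoded in the skill.
def create_visualization(data, chart_type="auto", library="recharts", theme="default"):
    ...
```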

### Concise is Key

Context window is a public good (~1,500+ tokens per skill activation). Challenge each piece:

  • "Does Claude really need this explanation?"
  • "Does this paragraph justify its token cost?"

Prefer concise examples over verbose explanations.

### Appropriate Freedom

Match specificity to task fragility:

| Freedom Level | When to Use | Example |
|---------------|-------------|---------|
| High | Multiple approaches valid | "Choose your preferred style" |
| Medium | Preferred pattern exists | Pseudocode with parameters |
| Low | Operations are fragile | Exact scripts, few parameters |

### Progressive Disclosure

Three-level loading system:

  1. Metadata (~100 tokens) - Always in context (description ≤1024 chars)
  2. SKILL.md body (<500 lines) - When skill triggers
  3. References (unlimited) - Loaded as needed by Claude
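
As a rough sanity check on the Level 1 budget, the common ~4 characters per token heuristic can be applied to the description; this is only an approximation, and real tokenizers differ:

```python
def rough_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token (real tokenizers differ)."""
    return max(1, len(text) // 4)

# This skill's own description, used here as the example.
description = (
    "Generates production-grade skills by discovering domain expertise, "
    "asking clarifying questions, and creating adaptable, reusable intelligence."
)
# Level 1: the frontmatter description should stay around ~100 tokens.
print(f"description: ~{rough_tokens(description)} tokens (budget ~100)")
```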

---

## Anatomy of a Skill

Generated skills are zero-shot domain experts with embedded knowledge.

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter (name, description, allowed-tools?, model?)
│   └── Procedural knowledge (workflows, steps, decision trees)
└── Bundled Resources
    ├── references/ - Domain expertise (structure based on domain needs)
    ├── scripts/ - Executable code (tested, reliable)
    └── assets/ - Templates, boilerplate, images
```
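
Initializing this layout is mechanical and can be scripted; a minimal sketch, with the skill name and frontmatter contents as placeholders:

```python
from pathlib import Path

SKILL_NAME = "skill-name"  # placeholder: lowercase, hyphens, must match the frontmatter name

root = Path(SKILL_NAME)
for sub in ("references", "scripts", "assets"):
    (root / sub).mkdir(parents=True, exist_ok=True)

# Stub SKILL.md with the required frontmatter fields.
(root / "SKILL.md").write_text(
    "---\n"
    f"name: {SKILL_NAME}\n"
    "description: |\n"
    "  [What] Capability statement.\n"
    "  [When] Use when users ask to [triggers].\n"
    "---\n",
    encoding="utf-8",
)
```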

### SKILL.md Requirements

| Component | Requirement |
|-----------|-------------|
| Line count | <500 lines (extract to references/) |
| Frontmatter | See references/skill-patterns.md for complete spec |
| name | Lowercase, numbers, hyphens; ≤64 chars; match directory |
| description | [What] + [When]; ≤1024 chars; third-person style |
| Description style | "This skill should be used when..." (not "Use when...") |
| Form | Imperative ("Do X" not "You should X") |
| Scope | What it does AND does not do |

### What Goes in references/

Embed domain knowledge gathered during discovery:

| Gathered Knowledge | Purpose in Skill |
|--------------------|------------------|
| Library/API documentation | Enable correct implementation |
| Best practices | Guide quality decisions |
| Code examples | Provide reference patterns |
| Anti-patterns | Prevent common mistakes |
| Domain-specific details | Support edge cases |

Structure references/ based on what the domain needs.

Large files: If references >10k words, include grep search patterns in SKILL.md for efficient discovery.
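
For instance, SKILL.md might say "search references/ for 'consumer group'"; when grep is unavailable, the same lookup works in plain Python. A minimal sketch, assuming references/ sits next to SKILL.md and treating the search term purely as an example:

```python
from pathlib import Path

def search_references(pattern: str, refs_dir: str = "references") -> None:
    """Print file, line number, and line for every case-insensitive match."""
    needle = pattern.lower()
    for path in Path(refs_dir).rglob("*.md"):
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
            if needle in line.lower():
                print(f"{path}:{lineno}: {line.strip()}")

search_references("consumer group")  # example term from the Kafka example above
```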

### When to Generate scripts/

Generate scripts when domain requires deterministic, executable procedures:

| Domain Need | Example Scripts |
|-------------|-----------------|
| Setup/installation | Install dependencies, initialize project |
| Processing | Transform data, process files |
| Validation | Check compliance, verify output |
| Deployment | Deploy services, configure infrastructure |

Decision: If procedure is complex, error-prone, or needs to be exactly repeatable → create script. Otherwise → document in SKILL.md or references/.
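
As an example of a procedure worth scripting, the sketch below checks that expected outputs exist and are non-empty, the kind of repeatable validation step listed in the table above. The file paths are placeholders:

```python
#!/usr/bin/env python3
"""Example scripts/ entry: verify that expected output files exist and are non-empty."""
import sys
from pathlib import Path

EXPECTED = ["out/report.md", "out/summary.json"]  # placeholder paths

def main() -> int:
    failures = [p for p in EXPECTED if not Path(p).exists() or Path(p).stat().st_size == 0]
    for p in failures:
        print(f"FAIL: missing or empty: {p}", file=sys.stderr)
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```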

### When to Generate assets/

Generate assets when domain requires exact templates or boilerplate:

| Domain Need | Example Assets |
|-------------|----------------|
| Starting templates | HTML boilerplate, component scaffolds |
| Configuration files | Config templates, schema definitions |
| Code boilerplate | Base classes, starter code |

### What NOT to Include

  • README.md (SKILL.md IS the readme)
  • CHANGELOG.md
  • LICENSE (inherited from repo)
  • Duplicate information

### What the Generated Skill Does at Runtime

```
User invokes skill → Gather context from:

1. Codebase (if existing project)
2. Conversation (user's requirements)
3. Own references/ (embedded domain expertise)
4. User-specific guidelines

→ Ensure all information gathered → Implement ZERO-SHOT
```

### Include in Generated Skills

Every generated skill should include:

```markdown
Before Implementation

Gather context to ensure successful implementation:

| Source | Gather |
|--------|--------|
| Codebase | Existing structure, patterns, conventions to integrate with |
| Conversation | User's specific requirements, constraints, preferences |
| Skill References | Domain patterns from references/ (library docs, best practices, examples) |
| User Guidelines | Project-specific conventions, team standards |

Ensure all required context is gathered before implementing.

Only ask user for THEIR specific requirements (domain expertise is in this skill).
```

---

## Type-Aware Creation

After determining skill type, follow type-specific patterns:

| Type | Key Sections | Reference |
|------|--------------|-----------|
| Builder | Clarifications → Output Spec → Standards → Checklist | skill-patterns.md#builder |
| Guide | Workflow → Examples → Official Docs | skill-patterns.md#guide |
| Automation | Scripts → Dependencies → Error Handling | skill-patterns.md#automation |
| Analyzer | Scope → Criteria → Output Format | skill-patterns.md#analyzer |
| Validator | Criteria → Scoring → Thresholds → Remediation | skill-patterns.md#validator |

---

## Skill Creation Process

```
Metadata → Discovery → Requirements → Analyze → Embed → Structure → Implement → Validate
```

See references/creation-workflow.md for detailed steps.

### Quick Steps

  1. Metadata: Ask skill type + domain (Questions 1-2)
  2. Discovery: Research domain automatically (Phase 1-2 above)
  3. Requirements: Ask user's specific needs (Questions 3-6)
  4. Analyze: Identify procedural (HOW) + domain (WHAT) knowledge
  5. Embed: Put gathered domain expertise into references/
  6. Structure: Initialize skill directory
  7. Implement: Write SKILL.md + resources following type patterns
  8. Validate: Run scripts/package_skill.py and test
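
Step 8 refers to scripts/package_skill.py, which is not reproduced here; as a rough sketch of what a packaging step might involve, the snippet below zips a skill directory after confirming SKILL.md exists. The directory name is a placeholder, and the real script may do considerably more:

```python
import sys
import zipfile
from pathlib import Path

def package_skill(skill_dir: str) -> Path:
    """Zip a skill directory into <skill-dir>.zip after a basic sanity check."""
    root = Path(skill_dir)
    if not (root / "SKILL.md").is_file():
        sys.exit(f"error: {root}/SKILL.md not found")
    archive = root.with_suffix(".zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in root.rglob("*"):
            if path.is_file():
                zf.write(path, path.relative_to(root.parent))
    return archive

print(package_skill("skill-name"))  # placeholder directory name
```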

### SKILL.md Template

```yaml
---
name: skill-name  # lowercase, hyphens, ≤64 chars
description: |  # ≤1024 chars
  [What] Capability statement.
  [When] Use when users ask to [triggers].
allowed-tools: Read, Grep, Glob  # optional: restrict tools
---
```

See references/skill-patterns.md for complete frontmatter spec and body patterns.

---

## Output Checklist

Before delivering a skill, verify:

### Domain Discovery Complete

  • [ ] Core concepts discovered and understood
  • [ ] Best practices identified from authentic sources
  • [ ] Anti-patterns documented
  • [ ] Security considerations covered
  • [ ] Official documentation linked
  • [ ] User was NOT asked for domain knowledge

### Frontmatter

  • [ ] name: lowercase, hyphens, ≤64 chars, matches directory
  • [ ] description: [What]+[When], ≤1024 chars, clear triggers
  • [ ] allowed-tools: Set if restricted access needed

### Structure

  • [ ] SKILL.md <500 lines
  • [ ] Progressive disclosure (details in references/)

### Design Principles (see `references/design-principles.md`)

  • [ ] Modular (one skill = one responsibility)
  • [ ] Clear (explicit > clever)
  • [ ] Simple (minimal complexity)
  • [ ] Transparent (inspectable, debuggable)

### Knowledge Coverage

  • [ ] Procedural (HOW): Workflows, decision trees, error handling
  • [ ] Domain (WHAT): Concepts, best practices, anti-patterns

### Zero-Shot Implementation (in generated skill)

  • [ ] Includes "Before Implementation" section
  • [ ] Gathers runtime context (codebase, conversation, user guidelines)
  • [ ] Domain expertise embedded in references/ (structured per domain needs)
  • [ ] Only asks user for THEIR requirements (not domain knowledge)

### Reusability

  • [ ] Handles variations (not requirement-specific)
  • [ ] Clarifications capture variable elements (user's context)
  • [ ] Constants encoded (domain patterns, best practices)

### Type-Specific (see `references/skill-patterns.md`)

  • [ ] Builder: Clarifications, output spec, standards, checklist
  • [ ] Guide: Workflow, examples, official docs
  • [ ] Automation: Scripts, dependencies, error handling
  • [ ] Analyzer: Scope, criteria, output format
  • [ ] Validator: Criteria, scoring, thresholds, remediation

### Battle Testing (REQUIRED)

  • [ ] Deployment tested: make test or equivalent passes
  • [ ] Versions verified: Latest tool versions, no deprecated APIs
  • [ ] Real scenario tested: Skill answers domain questions, not just deploys tools
  • [ ] Assets executed: Every file in assets/ was actually run
  • [ ] No over-engineering: Uses native tools (Helm, kubectl), not Python wrappers

See references/validation-checklist.md for detailed validation process.
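
Several of the frontmatter and structure items above are mechanical and can be checked automatically. A minimal sketch, assuming PyYAML is available and the directory layout from "Anatomy of a Skill"; this is not the skill's own validation script:

```python
import re
from pathlib import Path
import yaml  # pip install pyyaml

def check_skill(skill_dir: str) -> list:
    """Return a list of problems for the mechanical frontmatter/structure checks."""
    problems = []
    root = Path(skill_dir)
    text = (root / "SKILL.md").read_text(encoding="utf-8")

    # Frontmatter is the YAML block between the first two '---' markers.
    parts = text.split("---", 2)
    meta = (yaml.safe_load(parts[1]) or {}) if len(parts) >= 3 else {}

    name = meta.get("name", "")
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name) or len(name) > 64:
        problems.append("name: must be lowercase/numbers/hyphens, <=64 chars")
    if name != root.name:
        problems.append("name: must match directory name")
    if len(meta.get("description", "")) > 1024:
        problems.append("description: must be <=1024 chars")
    if text.count("\n") + 1 >= 500:
        problems.append("SKILL.md: must be under 500 lines")
    return problems

print(check_skill("skill-name") or "all mechanical checks passed")  # placeholder dir
```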

---

## Reference Files

| File | When to Read |
|------|--------------|
| references/design-principles.md | Unix philosophy applied to skill design (foundational) |
| references/creation-workflow.md | Detailed step-by-step creation process |
| references/skill-patterns.md | Frontmatter spec, type-specific patterns, assets guidance |
| references/reusability-patterns.md | Procedural+domain knowledge, varies vs constant |
| references/quality-patterns.md | Clarifications, enforcement, checklists |
| references/technical-patterns.md | Error handling, security, dependencies |
| references/workflows.md | Sequential and conditional workflow patterns |
| references/output-patterns.md | Template and example patterns |

More from this repository (10 skills)

  • skill-validator - Validates skills comprehensively across 9 quality categories, scoring structure, content, interaction, documentation, and technical robustness to provide actionable improvement recommendations.
  • summary-generator - Generates concise, Socratic-style lesson summaries by extracting core concepts, mental models, patterns, and AI collaboration insights from educational markdown files.
  • canonical-format-checker - Checks and validates content formats against canonical sources to prevent inconsistent pattern implementations across platform documentation.
  • assessment-architect - Generates comprehensive skill assessments by dynamically creating evaluation frameworks, rubrics, and scoring mechanisms for educational and professional contexts.
  • concept-scaffolding - Designs progressive learning sequences by breaking complex concepts into manageable steps, managing cognitive load, and validating understanding across different learning tiers.
  • content-evaluation-framework - Evaluates educational content systematically using a 6-category weighted rubric, scoring technical accuracy, pedagogical effectiveness, and constitutional compliance.
  • chapter-evaluator - Evaluates educational chapters by analyzing them through student and teacher perspectives, generating structured ratings, identifying content gaps, and providing prioritized...
  • pptx - Generates, edits, and analyzes PowerPoint presentations with precise content control and narrative coherence.
  • docx - Generates, edits, and analyzes Microsoft Word documents (.docx) with advanced capabilities like tracked changes, comments, and text extraction.
  • content-refiner - Refines content that failed Gate 4 by precisely trimming verbosity, strengthening lesson connections, and ensuring targeted improvements based on specific diagnostic criteria.