prompt-engineering

Skill from hainamchung/agent-assistant
What it does

Guides users through advanced prompt engineering techniques to improve AI model performance and reliability.

Part of hainamchung/agent-assistant (227 items)

Installation

Install the npm package:

```shell
npm install -g @namch/agent-assistant
```

Or clone the repository:

```shell
git clone https://github.com/hainamchung/agent-assistant.git
```

Then install the skill into your agent:

```shell
node cli/install.js install cursor   # Cursor
node cli/install.js install claude   # Claude Code
node cli/install.js install copilot  # GitHub Copilot
```

+ 7 more commands

Extracted from docs: hainamchung/agent-assistant
Added Feb 4, 2026

Skill Details

SKILL.md

Expert guide on prompt engineering patterns, best practices, and optimization techniques. Use when user wants to improve prompts, learn prompting strategies, or debug agent behavior.

Overview

# Prompt Engineering Patterns

Advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.

Core Capabilities

1. Few-Shot Learning

Teach the model by showing examples instead of explaining rules. Include 2-5 input-output pairs that demonstrate the desired behavior. Use when you need consistent formatting, specific reasoning patterns, or handling of edge cases. More examples improve accuracy but consume tokens; balance the count against task complexity.

Example:

```markdown

Extract key information from support tickets:

Input: "My login doesn't work and I keep getting error 403"

Output: {"issue": "authentication", "error_code": "403", "priority": "high"}

Input: "Feature request: add dark mode to settings"

Output: {"issue": "feature_request", "error_code": null, "priority": "low"}

Now process: "Can't upload files larger than 10MB, getting timeout"

```
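The pairs above can also be assembled programmatically, which keeps the same examples consistent across every prompt that needs them. A minimal sketch; the helper name and the `EXAMPLES` list are illustrative, not part of any library:

```python
import json

# Curated input-output pairs demonstrating the desired extraction behavior
EXAMPLES = [
    ("My login doesn't work and I keep getting error 403",
     {"issue": "authentication", "error_code": "403", "priority": "high"}),
    ("Feature request: add dark mode to settings",
     {"issue": "feature_request", "error_code": None, "priority": "low"}),
]

def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task instruction, examples, then the new input."""
    parts = [task, ""]
    for text, output in examples:
        parts.append(f'Input: "{text}"')
        parts.append(f"Output: {json.dumps(output)}")
        parts.append("")
    parts.append(f'Now process: "{query}"')
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Extract key information from support tickets:",
    EXAMPLES,
    "Can't upload files larger than 10MB, getting timeout",
)
```

Storing examples as data rather than prose also makes it easy to trim or swap pairs when the token budget shrinks.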

2. Chain-of-Thought Prompting

Request step-by-step reasoning before the final answer. Add "Let's think step by step" (zero-shot) or include example reasoning traces (few-shot). Use for complex problems requiring multi-step logic, mathematical reasoning, or when you need to verify the model's thought process. Reported accuracy gains on analytical tasks are in the 30-50% range.

Example:

```markdown

Analyze this bug report and determine root cause.

Think step by step:

  1. What is the expected behavior?
  2. What is the actual behavior?
  3. What changed recently that could cause this?
  4. What components are involved?
  5. What is the most likely root cause?

Bug: "Users can't save drafts after the cache update deployed yesterday"

```
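Both variants — the zero-shot trigger phrase and the structured question list above — can be generated by one small helper. A sketch; the function is illustrative, not part of any library:

```python
def with_cot(task, steps=None):
    """Append chain-of-thought scaffolding to a task prompt.

    With no steps, fall back to the zero-shot trigger phrase;
    otherwise emit a numbered list of guiding questions.
    """
    if not steps:
        return f"{task}\n\nLet's think step by step."
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"{task}\n\nThink step by step:\n{numbered}"

cot_prompt = with_cot(
    "Analyze this bug report and determine root cause.",
    ["What is the expected behavior?",
     "What is the actual behavior?",
     "What changed recently that could cause this?"],
)
```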

3. Prompt Optimization

Systematically improve prompts through testing and refinement. Start simple, measure performance (accuracy, consistency, token usage), then iterate. Test on diverse inputs including edge cases. Use A/B testing to compare variations. Critical for production prompts where consistency and cost matter.

Example:

```markdown

Version 1 (Simple): "Summarize this article"

→ Result: Inconsistent length, misses key points

Version 2 (Add constraints): "Summarize in 3 bullet points"

→ Result: Better structure, but still misses nuance

Version 3 (Add reasoning): "Identify the 3 main findings, then summarize each"

→ Result: Consistent, accurate, captures key information

```
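The iteration loop can be automated with a small A/B harness. A sketch, assuming you supply your own `call_model` function; the consistency metric here (spread of output lengths) is a stand-in for the real accuracy, consistency, and token-cost metrics you would track in production:

```python
from statistics import pstdev

def evaluate_variants(call_model, variants, inputs):
    """Score each prompt variant by spread of output length across inputs.

    call_model(prompt_template, text) -> str is supplied by the caller.
    Lower spread means more consistent output; 0.0 is perfectly consistent.
    """
    scores = {}
    for name, template in variants.items():
        lengths = [len(call_model(template, text)) for text in inputs]
        scores[name] = pstdev(lengths)
    return scores

# Usage with a fake model that truncates its input (illustration only)
fake = lambda template, text: text[:50]
scores = evaluate_variants(
    fake,
    {"v1": "Summarize this article",
     "v2": "Summarize in 3 bullet points"},
    ["a" * 100, "b" * 100],
)
```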

4. Template Systems

Build reusable prompt structures with variables, conditional sections, and modular components. Use for multi-turn conversations, role-based interactions, or when the same pattern applies to different inputs. Reduces duplication and ensures consistency across similar tasks.

Example:

```python
# Reusable code-review template with placeholder variables
template = """
Review this {language} code for {focus_area}.

Code:
{code_block}

Provide feedback on:
{checklist}
"""

# Usage (user_code is supplied elsewhere)
prompt = template.format(
    language="Python",
    focus_area="security vulnerabilities",
    code_block=user_code,
    checklist="1. SQL injection\n2. XSS risks\n3. Authentication",
)
```
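The template above covers variables; the conditional sections and modular components mentioned in the description can be layered on with plain Python. A minimal sketch, where `render` and the section tuples are illustrative:

```python
def render(sections, **variables):
    """Join the template sections whose condition holds, then fill in variables.

    Each section is a (condition, text) pair; sections with a False
    condition are skipped entirely.
    """
    body = "\n\n".join(text for condition, text in sections if condition)
    return body.format(**variables)

focus = "security vulnerabilities"
sections = [
    (True, "Review this {language} code for {focus_area}."),
    (True, "Code:\n{code_block}"),
    # Only include the security checklist when reviewing for security
    ("security" in focus, "Checklist:\n1. SQL injection\n2. XSS risks\n3. Authentication"),
]
review_prompt = render(
    sections,
    language="Python",
    focus_area=focus,
    code_block="def f(): pass",
)
```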

5. System Prompt Design

Set global behavior and constraints that persist across the conversation. Define the model's role, expertise level, output format, and safety guidelines. Use system prompts for stable instructions that shouldn't change turn-to-turn, freeing up user message tokens for variable content.

Example:

```markdown

System: You are a senior backend engineer specializing in API design.

Rules:

  • Always consider scalability and performance
  • Suggest RESTful patterns by default
  • Flag security concerns immediately
  • Provide code examples in Python
  • Use early return pattern

Format responses as:

  1. Analysis
  2. Recommendation
  3. Code example
  4. Trade-offs

```
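In chat-style APIs, a system prompt like the one above lives in a dedicated system message so it persists across turns while user messages carry only the variable content. A generic sketch using the common role/content message convention, not tied to any specific SDK:

```python
SYSTEM_PROMPT = (
    "You are a senior backend engineer specializing in API design.\n"
    "Rules:\n"
    "- Always consider scalability and performance\n"
    "- Flag security concerns immediately\n"
    "Format responses as: 1. Analysis 2. Recommendation 3. Code example 4. Trade-offs"
)

def conversation(user_turns):
    """Build a message list: one stable system message, then the user turns."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += [{"role": "user", "content": turn} for turn in user_turns]
    return messages

msgs = conversation(["Design a pagination scheme for /orders"])
```

Keeping the rules in the system slot means a multi-turn session repeats them once per request, not once per user message.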

Key Patterns

Progressive Disclosure

Start with simple prompts, add complexity only when needed:

  1. Level 1: Direct instruction
     - "Summarize this article"
  2. Level 2: Add constraints
     - "Summarize this article in 3 bullet points, focusing on key findings"
  3. Level 3: Add reasoning
     - "Read this article, identify the main findings, then summarize in 3 bullet points"
  4. Level 4: Add examples
     - Include 2-3 example summaries with input-output pairs

Instruction Hierarchy

```

[System Context] → [Task Instruction] → [Examples] → [Input Data] → [Output Format]

```
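The hierarchy can be enforced in code by assembling sections in a fixed order, so no prompt accidentally puts input data before its instruction. A sketch; the section names mirror the diagram and the example values are illustrative:

```python
# Canonical section order from the hierarchy above
ORDER = ["system_context", "task_instruction", "examples", "input_data", "output_format"]

def assemble(**sections):
    """Concatenate the provided sections in canonical order, skipping missing ones."""
    return "\n\n".join(sections[key] for key in ORDER if sections.get(key))

hier_prompt = assemble(
    task_instruction="Classify the sentiment of the review.",
    input_data="Review: the battery died after two days",
    output_format="Answer with one word: positive, negative, or neutral.",
)
```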

Error Recovery

Build prompts that gracefully handle failures:

  • Include fallback instructions
  • Request confidence scores
  • Ask for alternative interpretations when uncertain
  • Specify how to indicate missing information
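Combined in a single prompt, those tactics might read as follows (wording is illustrative):

```markdown
Extract the order ID, customer name, and refund amount from the email below.

- If a field is not present, output null for it; never guess.
- After the JSON, state your confidence as high / medium / low.
- If the email could plausibly be read two ways, list both interpretations
  and mark which one you used.
```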

Best Practices

  1. Be Specific: Vague prompts produce inconsistent results
  2. Show, Don't Tell: Examples are more effective than descriptions
  3. Test Extensively: Evaluate on diverse, representative inputs
  4. Iterate Rapidly: Small changes can have large impacts
  5. Monitor Performance: Track metrics in production
  6. Version Control: Treat prompts as code with proper versioning
  7. Document Intent: Explain why prompts are structured as they are

Common Pitfalls

  • Over-engineering: Starting with complex prompts before trying simple ones
  • Example pollution: Using examples that don't match the target task
  • Context overflow: Exceeding token limits with excessive examples
  • Ambiguous instructions: Leaving room for multiple interpretations
  • Ignoring edge cases: Not testing on unusual or boundary inputs
