llm-tuning-patterns

Skill from parcadei/continuous-claude-v3

What it does

Provides evidence-based LLM parameter tuning strategies for theorem proving, code generation, and creative tasks.


Installation

Install the skill:

npx skills add https://github.com/parcadei/continuous-claude-v3 --skill llm-tuning-patterns

Last Updated: Jan 26, 2026

Skill Details

SKILL.md

LLM Tuning Patterns

Overview

Evidence-based patterns for configuring LLM parameters, based on APOLLO and Godel-Prover research.

Pattern

Different tasks require different LLM configurations. Use these evidence-based settings.

Theorem Proving / Formal Reasoning

Based on APOLLO parity analysis:

| Parameter   | Value | Rationale                                |
|-------------|-------|------------------------------------------|
| max_tokens  | 4096  | Proofs need space for chain-of-thought   |
| temperature | 0.6   | Higher creativity for tactic exploration |
| top_p       | 0.95  | Allow diverse proof paths                |

Proof Plan Prompt

Always request a proof plan before tactics:

```

Given the theorem to prove:

[theorem statement]

First, write a high-level proof plan explaining your approach.

Then, suggest Lean 4 tactics to implement each step.

```

The proof plan (chain-of-thought) significantly improves tactic quality.
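A minimal way to template this prompt (the function name and template constant are our own; the wording follows the block above):

```python
# Template for the proof-plan prompt shown above. The {theorem}
# placeholder is filled with the Lean 4 theorem statement.
PROOF_PLAN_TEMPLATE = """Given the theorem to prove:

{theorem}

First, write a high-level proof plan explaining your approach.
Then, suggest Lean 4 tactics to implement each step."""

def proof_plan_prompt(theorem: str) -> str:
    """Fill the proof-plan template with a theorem statement."""
    return PROOF_PLAN_TEMPLATE.format(theorem=theorem.strip())
```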

Parallel Sampling

For hard proofs, use parallel sampling:

  • Generate N=8-32 candidate proof attempts
  • Use best-of-N selection
  • Each sample at temperature 0.6-0.8
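A sketch of the best-of-N loop, assuming a `generate` callable that returns one candidate proof attempt and a `score` function (e.g. how many goals the proof checker closes); both callables are stand-ins, not part of the skill:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def best_of_n(generate: Callable[[float], str],
              score: Callable[[str], float],
              n: int = 8,
              temperature: float = 0.7) -> str:
    """Sample n candidate proofs in parallel and return the best-scoring one.

    generate(temperature) is assumed to produce one independent sample;
    score(candidate) is assumed higher-is-better.
    """
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda _: generate(temperature), range(n)))
    return max(candidates, key=score)
```

With N=8-32 this is embarrassingly parallel; the only sequential step is the final selection.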

Code Generation

| Parameter   | Value   | Rationale                     |
|-------------|---------|-------------------------------|
| max_tokens  | 2048    | Sufficient for most functions |
| temperature | 0.2-0.4 | Prefer deterministic output   |

Creative / Exploration Tasks

| Parameter   | Value   | Rationale             |
|-------------|---------|-----------------------|
| max_tokens  | 4096    | Space for exploration |
| temperature | 0.8-1.0 | Maximum creativity    |
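Taken together, the three tables above can be encoded as named presets (the preset names are ours, and ranged values are collapsed to a midpoint for illustration):

```python
# Presets taken from the tables above; ranged values (e.g. 0.2-0.4)
# are collapsed to an illustrative midpoint.
PRESETS = {
    "theorem_proving": {"max_tokens": 4096, "temperature": 0.6, "top_p": 0.95},
    "code_generation": {"max_tokens": 2048, "temperature": 0.3},
    "creative":        {"max_tokens": 4096, "temperature": 0.9},
}

def params_for(task: str) -> dict:
    """Return a copy of the tuning preset for a task class."""
    try:
        return dict(PRESETS[task])
    except KeyError:
        raise ValueError(f"unknown task class: {task!r}") from None
```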

Anti-Patterns

  • max_tokens too low for proofs: 512 tokens truncates the chain-of-thought
  • temperature too low for proofs: 0.2 misses creative tactic paths
  • No proof plan: jumping straight to tactics without planning reduces the success rate
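These anti-patterns can be caught mechanically before a run; a minimal lint sketch with thresholds taken from the bullets above (the helper name is our own):

```python
def lint_proof_config(config: dict, prompt: str) -> list[str]:
    """Return warnings for the proof-task anti-patterns listed above."""
    warnings = []
    if config.get("max_tokens", 0) <= 512:
        warnings.append("max_tokens <= 512 may truncate chain-of-thought")
    if config.get("temperature", 1.0) <= 0.2:
        warnings.append("temperature <= 0.2 misses creative tactic paths")
    if "proof plan" not in prompt.lower():
        warnings.append("prompt does not request a proof plan before tactics")
    return warnings
```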

Source Sessions

  • This session: APOLLO parity - increased max_tokens 512->4096, temp 0.2->0.6
  • This session: Added proof plan prompt for chain-of-thought before tactics
