# cc-history

Skill from solatis/claude-config.

## Installation

Install the skill:

```sh
npx skills add https://github.com/solatis/claude-config --skill cc-history
```

Added Jan 29, 2026

## SKILL.md

# My Claude Code Workflow

I use Claude Code for most of my work. After months of iteration, I noticed a pattern: LLM-assisted code rots faster than hand-written code. Technical debt accumulates because the LLM does not know what it does not know, and neither do you until it is too late.

This repo is my solution: skills and workflows that force planning before execution, keep context focused, and catch mistakes before they compound.

## Why This Exists

LLM-assisted coding fails long-term. Technical debt accumulates because the LLM cannot see it, and you are moving too fast to notice. I treat this as an engineering problem, not a tooling problem.

LLMs are tools, not collaborators. When an engineer says "add retry logic", another engineer infers exponential backoff, jitter, and idempotency. An LLM infers nothing you do not explicitly state. It cannot read the room. It has no institutional memory. It will cheerfully implement the wrong thing with perfect confidence and call it "production-ready".

Larger context windows do not help. Giving an LLM more text is like giving a human a larger stack of papers; attention drifts to the beginning and end, and details in the middle get missed. More context makes this worse. Give the LLM exactly what it needs for the task at hand -- nothing more.

## Principles

This workflow is built on four principles.

### Context Hygiene

Each task gets precisely the information it needs -- no more. Sub-agents start with a fresh context, so architectural knowledge must be encoded somewhere persistent.

I use a two-file pattern in every directory:

- **CLAUDE.md** -- Claude loads these automatically when entering a directory. Because they load whether needed or not, content must be minimal: a tabular index with short descriptions and triggers for when to open each file. When Claude opens `app/web/controller.py`, it retrieves just the indexes along that path -- not prose it might never need.

- **README.md** -- Invisible knowledge: architecture decisions, invariants not apparent from code. The test: if a developer could learn it by reading source files, it does not belong here. Claude reads these only when the CLAUDE.md trigger says to.

The principle is just-in-time context. Indexes load automatically but stay small. Detailed knowledge loads only when relevant.
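
As an illustration, a directory-level CLAUDE.md might be nothing more than a small index table. This sketch is hypothetical -- the file names and triggers are made up, not taken from the repo:

```markdown
# app/web/

| File | Contents | Open when |
|------|----------|-----------|
| controller.py | HTTP handlers, routing | adding or changing an endpoint |
| auth.py | session validation helpers | touching login or permissions |
| README.md | invariants: why sessions are server-side only | changing auth or session storage |
```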

The technical writer agent enforces token budgets: ~200 tokens for CLAUDE.md, ~500 for README.md, 100 for function docs, 150 for module docs. These limits force discipline -- if you are exceeding them, you are probably documenting what the code already shows. Function docs include "use when..." triggers so the LLM knows when to reach for them.
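
For instance, a README.md that fits the ~500-token budget records only decisions and invariants the source cannot show. This content is a hypothetical sketch:

```markdown
# app/web/ -- invisible knowledge

- Sessions are server-side only. JWTs were rejected: revocation
  latency violated the security policy. Do not reintroduce them.
- Handlers must be idempotent. The load balancer retries on timeout,
  so a non-idempotent handler will double-apply writes.
```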

The planner workflow maintains this hierarchy automatically. If you bypass the planner, you maintain it yourself.

### Planning Before Execution

LLMs make first-shot mistakes. Always. The workflow separates planning from execution, forcing ambiguities to surface when they are cheap to fix.

Plans capture why decisions were made, what alternatives were rejected, and what risks were accepted. Plans are written to files. When you clear context and start fresh, the reasoning survives.
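
A plan file can be plain markdown. This skeleton is an illustrative sketch, not the repo's actual template:

```markdown
# Plan: add retry logic to the payment client

## Decision
Exponential backoff with jitter, max 5 attempts.

## Alternatives rejected
- Fixed-interval retries: thundering herd during an outage.
- Circuit breaker alone: does not cover transient 5xx responses.

## Risks accepted
- Duplicate charges if the gateway is not idempotent; mitigated with
  idempotency keys.

## Milestones
1. Add idempotency keys to all POST calls.
2. Implement the backoff wrapper with unit tests.
3. Run integration tests against the sandbox gateway.
```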

### Review Cycles

Execution is split into milestones -- smaller units that are manageable and can be validated individually. This ensures continuous, verified progress. Without it, execution becomes a waterfall: one small oversight early on, and agents compound the mistake until the result is unusable.

Quality gates run at every stage. A technical writer agent checks clarity; a quality reviewer checks completeness. The loop runs until both pass.

Plans pass review before execution begins. During execution, each milestone passes review before the next starts.

### Cost-Effective Delegation

The orchestrator delegates to smaller agents -- Haiku for straightforward tasks, Sonnet for moderate complexity. Prompts are injected just-in-time, giving smaller models precisely the guidance they need at each step.
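
In Claude Code, a delegated agent is typically a markdown file with YAML frontmatter. A minimal sketch, assuming the standard `.claude/agents/` layout and a `model` field -- the agent name and prompt here are hypothetical:

```markdown
---
name: doc-writer
description: Rewrites CLAUDE.md and README.md files to fit token budgets.
model: haiku
---

You are a technical writer. Keep CLAUDE.md under ~200 tokens and
README.md under ~500. Document only what the code cannot show; if a
reader could learn it from the source, delete it.
```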

When quality review fails o