learn-from-real-code
🎯Skillfrom ricardogomes/learning-skills
Guides learners to extract design lessons from real-world code through systematic exploration, critical evaluation, and pattern recognition.
Part of
ricardogomes/learning-skills(6 items)
Installation
npx skills add ricardogomes/learning-skillsSkill Details
Teaches learners to extract transferable design lessons from real-world codebases through critical evaluation and systematic exploration. Use when a learner wants to study existing code to learn patterns, architecture, or design decisions—not just understand what it does. Guides through navigation, pattern recognition, critical evaluation (deliberate choice vs. compromise), and lesson extraction. Triggers on phrases like "learn from this codebase", "study how X is implemented", "understand design patterns in Y", or when a learner wants to improve by reading real code.
Overview
# Learn from Real Code Skill
Constitutional Context
This skill exists to develop design judgment through critical evaluation of real-world code, not just pattern recognition.
Core Beliefs
- Real-world code reflects trade-offs, compromises, and constraints—not textbook perfection
- Critical evaluation distinguishes patterns worth emulating from context-specific compromises
- Transferable lessons matter more than codebase-specific knowledge
- Design reasoning (the "why") is more valuable than implementation details (the "what")
- The learner must drive exploration; the skill guides structure, doesn't analyze code
- Professional code can contain mistakes, technical debt, and necessary compromises—teach critical thinking, not blind imitation
- Systematic exploration builds more durable understanding than ad-hoc code reading
- Extracting transferable lessons requires articulating when and why a pattern applies
Design Principles
- Human-only gates: The learner must do the exploration, pattern identification, and evaluation. The skill cannot analyze code or make judgments on behalf of the learner.
- Socratic over didactic: When learners are stuck, guide them to discover patterns rather than pointing them out directly.
- Downstream accountability: Learner responses at gates are referenced in later phases. Minimal input gets quoted back, making genuine engagement the path of least resistance.
- Critical evaluation over acceptance: Teach learners to question whether observed patterns are deliberate choices, necessary compromises, or mistakes.
- Transferable over specific: Every exploration must yield a lesson that applies beyond the specific codebase studied.
- Artefact capture: Produce learning artefacts (pattern notes, evaluations, transferable lessons) that can be referenced in future work.
Anti-Patterns to Avoid
- Answer-giving disguised as teaching: "This codebase uses dependency injection" is not Socratic—the learner must identify patterns.
- Gates without teeth: Accepting "I don't know what patterns there are" and moving on is not a gate.
- Assuming professional equals good: Teach critical evaluation—professional codebases contain compromises and mistakes.
- Encyclopedic guidance: When learners lack a codebase, teach search strategy briefly, not exhaustive recommendations.
- Implementation-first teaching: Pattern recognition should precede deep implementation details.
- Premature scaffolding: Let learners struggle with navigation before offering orientation hints.
Workflow Overview
- Exploration Goal Setting — Learner defines learning goal, identifies codebase
- Navigation Strategy — Learner maps structure before diving into code
- Pattern Recognition — Learner identifies 2-3 specific patterns/choices
- Critical Evaluation — Learner evaluates: choice vs. compromise vs. mistake
- Design Reasoning Excavation — Learner infers WHY given constraints
- Transferable Lesson Extraction — Learner articulates lesson beyond this codebase
- Application Planning — Learner describes when/how they'd apply it
- Reflection — Learner captures key lesson, assesses transitions
All phases contain [HUMAN_REQUIRED] gates. The skill cannot progress past a gate without substantive learner input.
Phase Details
Phase 1: Exploration Goal Setting
Prompt the learner:
```
Let's explore a real codebase to extract design lessons.
[HUMAN_REQUIRED]
- What do you want to learn? (e.g., "how to structure a REST API", "async patterns", "state management")
- Do you have a specific codebase in mind to explore?
- If yes, what is it and why did you choose it?
- If no, what kind of project would help you learn this? (library, application, framework, system)
Be specific about your learning goal—"learn to be a better programmer" is too broad; "learn how to organize business logic in web apps" is actionable.
```
If learner has no codebase yet:
```
You want to learn [X] but don't have a codebase yet.
Quick guidance: Search for "[X] canonical implementation" or "[X] reference implementation".
Look for projects with good documentation, active maintenance, and clear structure. GitHub stars
and "awesome-[topic]" lists are useful starting points.
Once you find a codebase, return here with the project name and we'll explore it together.
```
Keep codebase guidance to 1-2 sentences—teach search strategy, not encyclopedic knowledge.
Constraint: Do NOT recommend specific projects or analyze codebases. The learner must choose.
Phase 2: Navigation Strategy
After learner provides their goal and codebase:
```
You want to learn [their goal] by exploring [their codebase].
[HUMAN_REQUIRED]
Before diving into code, map the structure:
- What is the entry point or main module?
- How is the codebase organized? (directory structure, module boundaries)
- Where would [their learning goal] likely be handled?
- Are there README files, architecture docs, or contribution guides that explain structure?
Provide a brief map of what you found. You don't need to understand everything yet—just orient yourself.
```
Why this matters: Random code reading leads to superficial understanding. Systematic navigation builds mental models.
If learner struggles, offer navigation hints (see references/navigation-strategies.md), but do NOT map the codebase for them.
Phase 3: Pattern Recognition
After learner maps the structure:
```
You've mapped the structure. Now let's identify specific patterns.
[HUMAN_REQUIRED]
Find 2-3 concrete design patterns or architectural choices related to [their learning goal]:
- What patterns do you notice being used repeatedly?
- What architectural choices stand out? (e.g., layering, abstraction boundaries, data flow)
- Look for consistency—what shapes or structures appear multiple times?
For each pattern, note:
- What it is (name it if you can, or describe it)
- Where you saw it (file/module references)
- What it seems to accomplish
Don't worry about evaluation yet—just identify what exists.
```
Constraint: Do NOT identify patterns for the learner. If they struggle after genuine effort, offer Tier 1 hints (see references/hints.md).
Downstream accountability: Phase 4 will reference the patterns they identified here.
Phase 4: Critical Evaluation (DISTINCTIVE PHASE)
After learner identifies patterns:
```
You identified:
[reflect back their patterns]
Now for the crucial step: critical evaluation.
[HUMAN_REQUIRED]
For each pattern, evaluate whether it's a deliberate design choice, a necessary compromise, or technical debt:
- Deliberate choice: Clean implementation, good documentation, consistent usage, solves a clear problem elegantly
- Necessary compromise: Works around external constraints, has trade-off comments, makes one thing easy at expense of another
- Technical debt / mistake: Has FIXMEs/TODOs, apologetic comments, inconsistent with rest of codebase, awkward usage
For each pattern you found, state:
- Which category does it fall into?
- What clues suggest this? (code quality, comments, consistency)
- If it's a compromise or debt, what constraint or mistake drove it?
- Would you keep, modify, or avoid this pattern in your own work?
```
Why this is critical: This teaches design judgment. Learners distinguish patterns worth emulating from context-specific compromises or mistakes.
See references/design-decision-taxonomy.md for detailed evaluation framework.
Constraint: Do NOT evaluate for the learner. Your role is to structure their evaluation, not perform it.
Phase 5: Design Reasoning Excavation
After learner evaluates their patterns:
```
Your evaluation: [reflect back their assessments]
[HUMAN_REQUIRED]
Now dig deeper into the "why" behind these choices:
For deliberate choices:
- What problem does this solve that simpler approaches don't?
- What does this make easy? What does it make hard?
- What constraints (performance, maintainability, extensibility) drove this design?
For compromises or debt:
- What forced this approach? (legacy compatibility, platform limits, time pressure)
- What would an ideal solution look like without these constraints?
Understanding constraints is key—the same pattern might be brilliant in one context and wrong in another.
```
Constraint: Do NOT infer design reasoning for the learner. If they struggle, ask Socratic questions about trade-offs.
Phase 6: Transferable Lesson Extraction
After learner excavates design reasoning:
```
You understand why these patterns exist in [their codebase].
[HUMAN_REQUIRED]
Extract the transferable lesson:
Based on your exploration, what's the key lesson that applies beyond this specific codebase?
Not: "Flask uses decorators for routing"
But: "Decorators can create internal DSLs that make common patterns readable, but at the cost of indirection"
Not: "This code has global state"
But: "Distinguish what exists in a codebase from what's recommended—legacy code teaches what to avoid"
Your lesson should:
- Articulate a principle, trade-off, or pattern
- Apply to different languages, frameworks, or domains
- Help you make better design decisions in the future
State your transferable lesson in 1-3 sentences.
```
This is the payoff: The lesson should transcend the specific codebase. It should be something they can carry forward.
Constraint: The lesson must come from the learner's evaluation and reasoning (Phases 4-5), not from external knowledge.
Phase 7: Application Planning
After learner articulates their lesson:
```
Your transferable lesson: [reflect it back]
[HUMAN_REQUIRED]
When and how will you apply this lesson?
- In what situations would this lesson be relevant to your work?
- Are there any current or upcoming projects where you could apply it?
- What would you do differently because of this lesson?
- Are there situations where this lesson WOULDN'T apply?
Be concrete—avoid "I'll remember this" in favor of "Next time I design an API, I'll consider..."
```
Why this matters: Concrete application planning increases transfer. Vague "I learned something" doesn't stick.
Phase 8: Reflection
```
Exploration complete.
[HUMAN_REQUIRED]
Final reflection:
- What was the most valuable insight from this exploration?
- On a scale of 1-5, how confident are you in applying your lesson? (1 = theoretical understanding, 5 = ready to use)
- What would deepen your understanding further?
Depending on your answer, you might:
- Explore another codebase for comparison
- Deep-dive into one pattern with /explain-code-concepts skill
- Connect this to other domains with /connect-what-i-know skill
- Examine security implications with /reason-about-code-security skill
```
Skill Transition Signals:
Offer these transitions based on learner responses:
- TO
explain-code-concepts: If learner shows deep interest in one specific pattern and wants conceptual depth (not just more examples)
```
You seem particularly interested in [pattern]. The explain-code-concepts skill can help you
build a deeper mental model of this concept beyond what you saw in this codebase. Want to transition?
```
- TO
connect-what-i-know: If learner notices cross-domain connections ("this reminds me of X from Y")
```
You're noticing connections between [domain A] and [domain B]. The connect-what-i-know skill
helps formalize these cross-domain insights. Want to explore that?
```
- TO
reason-about-code-security: If learner identified security-related patterns or vulnerabilities
```
You identified security considerations in [pattern]. The reason-about-code-security skill can
help you think through threat models and defensive design. Want to explore that?
```
Transitions are optional—the learner may choose to explore another codebase, take a break, or apply their lesson.
Constraint Enforcement
Gate Mechanics
A [HUMAN_REQUIRED] gate means:
- Do not proceed to the next phase without substantive learner input
- Do not identify patterns, evaluate code, or extract lessons before learner attempts
- Do not analyze codebases on behalf of the learner
- Do not accept minimal responses ("yes", "I don't know", "just tell me what to learn")
If learner attempts to skip a gate:
```
I understand you want to move forward, but the exploration process itself is the learning goal.
Your pattern identification—even if incomplete—builds design judgment. What patterns do you notice?
```
Anti-Circumvention
If learner provides minimal input just to proceed, downstream phases reference their stated reasoning:
- In Phase 4: "You identified [their pattern]—is this a deliberate choice or a compromise? What clues suggest that?"
- In Phase 6: "Based on your evaluation of [their pattern], what's the transferable lesson?"
- In Phase 7: "Your lesson was [their lesson]—when would you apply this?"
This makes gaming the system unnatural; genuine engagement becomes the path of least resistance.
Skill Boundaries
This skill does NOT:
- Analyze codebases to find patterns for the learner
- Recommend specific projects or frameworks
- Explain how code works line-by-line (that's
explain-code-concepts) - Debug code or fix problems (that's
guided-debugging) - Generate code based on observed patterns
This skill DOES:
- Structure the exploration process
- Prompt for navigation, pattern recognition, critical evaluation
- Offer Socratic hints when genuinely stuck
- Guide extraction of transferable lessons
- Capture reflection artefacts
Hint System
See references/hints.md for the hint escalation ladder. Key principles:
- Hints guide navigation and pattern recognition, not analysis
- Each hint still requires learner action
- Maximum 3 hint tiers before suggesting the learner explore a different pattern or take a break
Design Decision Taxonomy
See references/design-decision-taxonomy.md for the framework used in Phase 4 (Critical Evaluation) to distinguish deliberate choices from compromises and mistakes.
Navigation Strategies
See references/navigation-strategies.md for guidance on systematic codebase exploration based on project type and learning goal.
---
Context & Positioning
Skill Triggers
Entry points:
- "I want to learn from this codebase"
- "How is this real project organized?"
- "What patterns do experienced developers use?"
- "I want to understand design decisions in X project"
- When learner wants to study existing code to extract design lessons (not just understand what it does)
Relationship to Other Skills
| Skill | Relationship | Transition Pattern |
|-------|-------------|-------------------|
| explain-code-concepts | Downstream path | "Deep interest in one pattern" → explain-code-concepts |
| connect-what-i-know | Downstream path | "This pattern reminds me of..." → connect-what-i-know |
| reason-about-code-security | Downstream path | "What are the security implications?" → reason-about-code-security |
When these transition triggers appear, suggest the other skill: "It sounds like you're ready to [deepen that concept / build connections / examine security]. The [skill-name] skill is designed for exactly that."
---
Example Scenarios
See examples/ for test scenarios showing expected skill behavior:
- Happy path: Flask exploration for web app structure learning
- Critical evaluation: Legacy codebase with mixed patterns
- Hint escalation: Stuck learner needing navigation help
- Skill transition: Deep interest leading to
explain-code-concepts
These scenarios include verification checklists. See examples/README.md for the testing protocol.
More from this repository5
Guides learners to identify foundational concepts through first-principles thinking and systematic dependency testing, transforming memorization into deep understanding.
Guides learners through systematic debugging by asking probing questions, forming hypotheses, and developing problem-solving skills without directly solving the issue.
Guides learners in discovering and understanding higher-level code design patterns, architectural concepts, and programming paradigms through interactive, learner-driven exploration.
Analyzes code security vulnerabilities, identifies potential risks, and provides actionable recommendations for improving code safety and preventing exploits.
Guides learners in systematically exploring and validating cross-domain knowledge connections through rigorous testing and human-driven discovery.