systematic-debugging

Skill from mjunaidca/mjs-agent-skills
What it does

Systematically investigates technical issues by tracing root causes, analyzing patterns, testing hypotheses, and implementing precise fixes without random guesswork.

πŸ“¦

Part of

mjunaidca/mjs-agent-skills(17 items)

systematic-debugging

Installation

Clone repository:
git clone https://github.com/mshaukat/mjs-skills.git

Run Python server:
python .claude/hooks/analyze-skills.py
πŸ“– Extracted from docs: mjunaidca/mjs-agent-skills
3Installs
-
AddedFeb 4, 2026

Skill Details

SKILL.md

Overview

# Systematic Debugging

Random fixes waste time and create new bugs. Quick patches mask underlying issues.

Core principle: ALWAYS find the root cause before attempting fixes. Fixing symptoms is failure.

The Iron Law

```
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
```

If you haven't completed Phase 1, you cannot propose fixes.

---

The Four Phases

Phase 1: Root Cause Investigation

BEFORE attempting ANY fix:

  1. Read Error Messages Carefully

- Don't skip past errors or warnings

- Read stack traces completely

- Note line numbers, file paths, error codes

  2. Reproduce Consistently

- Can you trigger it reliably?

- What are the exact steps?

- If not reproducible, gather more data - don't guess

  3. Check Recent Changes

- Git diff, recent commits

- New dependencies, config changes

- Environmental differences

  4. Gather Evidence in Multi-Component Systems

When the system has multiple components (CI -> build -> signing, API -> service -> database):

```
For EACH component boundary:
- Log what data enters component
- Log what data exits component
- Verify environment/config propagation

Run once to gather evidence showing WHERE it breaks
THEN analyze to identify failing component
```

  5. Trace Data Flow

See [references/root-cause-tracing.md](references/root-cause-tracing.md) for backward tracing technique.

Quick version: Where does the bad value originate? Keep tracing upward until you find the source. Fix at the source, not the symptom. (Both this and step 4's evidence gathering are sketched below.)
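Two minimal sketches, using hypothetical functions rather than anything from this skill's codebase. First, evidence gathering for step 4: a small decorator that logs what enters and exits each component of a pipeline, so a single run shows WHERE the data goes bad.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("evidence")

def traced(stage):
    """Log the data crossing one component boundary, in and out."""
    def wrap(fn):
        def inner(payload):
            log.info("ENTER %s: %s", stage, json.dumps(payload, default=str))
            result = fn(payload)
            log.info("EXIT  %s: %s", stage, json.dumps(result, default=str))
            return result
        return inner
    return wrap

# Hypothetical two-stage pipeline; run once and read the logs to locate the failing component.
@traced("validate")
def validate(order):
    return {**order, "valid": order.get("qty", 0) > 0}

@traced("price")
def price(order):
    return {**order, "total": order["qty"] * order.get("unit_price", 0)}

if __name__ == "__main__":
    price(validate({"qty": 3, "unit_price": 9.99}))
```

Second, backward tracing for step 5: the error surfaces in `render()`, but the bad value originates in `load_config()`, so that is where the fix belongs.

```python
def load_config(path):
    # ROOT CAUSE: swallows the error and returns None instead of raising.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None

def build_page(config):
    return {"header": config}       # silently propagates the None

def render(page):
    return page["header"].upper()   # SYMPTOM: AttributeError surfaces here

# Trace backward: render() -> build_page() -> load_config().
# Fix at the source (make load_config raise or return a default),
# not at the symptom (a None check inside render).
```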

Phase 2: Pattern Analysis

  1. Find Working Examples - Locate similar working code in same codebase
  2. Compare Against References - Read reference implementations COMPLETELY, don't skim
  3. Identify Differences - List every difference between working and broken
  4. Understand Dependencies - What settings, config, environment assumptions?

Phase 3: Hypothesis and Testing

  1. Form Single Hypothesis - "I think X is the root cause because Y"
  2. Test Minimally - SMALLEST possible change, one variable at a time
  3. Verify Before Continuing - Worked? Move to Phase 4. Didn't? Form a NEW hypothesis; don't stack fixes

Phase 4: Implementation

  1. Create Failing Test Case - Simplest reproduction, automated if possible (see the sketch after this list)
  2. Implement Single Fix - ONE change, no "while I'm here" improvements
  3. Verify Fix - Test passes? No regressions?
  4. If Fix Doesn't Work:

- Count: How many fixes have you tried?

- If < 3: Return to Phase 1, re-analyze

- If >= 3: STOP and question the architecture

  5. If 3+ Fixes Failed: Question Architecture

Patterns indicating an architectural problem:

- Each fix reveals new shared state/coupling

- Fixes require "massive refactoring"

- Each fix creates new symptoms elsewhere

STOP. Discuss with the user before attempting more fixes.
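For step 1 above, a minimal failing-test sketch, assuming pytest; the `pricing.parse_price` module and the empty-string bug are hypothetical stand-ins for the real reproduction:

```python
# test_parse_price_repro.py -- simplest automated reproduction of the reported bug.
import pytest

from pricing import parse_price  # hypothetical module under investigation

def test_parse_price_rejects_empty_string():
    # Currently parse_price("") returns 0 silently (the observed bug);
    # after the fix it should raise a clear ValueError instead.
    with pytest.raises(ValueError):
        parse_price("")
```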

---

Red Flags - STOP and Follow Process

If you catch yourself thinking:

  • "Quick fix for now, investigate later"
  • "Just try changing X and see"
  • "Add multiple changes, run tests"
  • "I'm confident it's X, let me fix that"
  • "One more fix attempt" (when already tried 2+)
  • Proposing solutions before tracing data flow

ALL of these mean: STOP. Return to Phase 1.

---

Supporting Techniques

Defense-in-Depth

When you fix a bug, validate at EVERY layer:

| Layer | Purpose | Example |
|-------|---------|---------|
| Entry Point | Reject invalid input at API boundary | `if (!dir) throw new Error('dir required')` |
| Business Logic | Ensure data makes sense for operation | Validate before processing |
| Environment Guards | Prevent dangerous ops in specific contexts | Refuse `git init` outside tmpdir in tests |
| Debug Instrumentation | Capture context for forensics | Log with stack trace before dangerous ops |

A single validation layer feels sufficient, but different code paths bypass it. Make bugs structurally impossible.
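A minimal sketch of stacking these layers in Python; `delete_workspace`, the tmp-directory rule, and the `env` flag are hypothetical:

```python
import logging
import os
import shutil
import traceback

log = logging.getLogger("guards")

def delete_workspace(workspace_dir, env="test"):
    # Layer 1 - Entry point: reject invalid input at the boundary.
    if not workspace_dir:
        raise ValueError("workspace_dir is required")

    # Layer 2 - Business logic: the value must make sense for this operation.
    workspace_dir = os.path.abspath(workspace_dir)
    if workspace_dir == os.path.abspath(os.sep):
        raise ValueError("refusing to delete the filesystem root")

    # Layer 3 - Environment guard: block dangerous ops in specific contexts.
    if env == "test" and not workspace_dir.startswith("/tmp/"):
        raise RuntimeError("tests may only delete directories under /tmp")

    # Layer 4 - Debug instrumentation: capture context before the dangerous op.
    log.warning("Deleting %s\n%s", workspace_dir, "".join(traceback.format_stack(limit=5)))
    shutil.rmtree(workspace_dir)
```

Any one of these checks alone can be bypassed by a code path that skips it; together they make the destructive bug structurally hard to trigger.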

Condition-Based Waiting

Flaky tests guess at timing. Wait for actual conditions instead:

```python
# BAD: Guessing at timing
await asyncio.sleep(0.05)
result = get_result()

# GOOD: Wait for condition
await wait_for(lambda: get_result() is not None)
result = get_result()
```

Pattern:

```python
import asyncio
import time

async def wait_for(condition, timeout_ms=5000):
    start = time.time()
    while True:
        if condition():
            return
        if (time.time() - start) * 1000 > timeout_ms:
            raise TimeoutError("Condition not met")
        await asyncio.sleep(0.01)  # Poll every 10ms
```

---

Common Rationalizations

| Excuse | Reality |
|--------|---------|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I see the problem, let me fix it" | Seeing symptoms != understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |

---

Verification

Run: python scripts/verify.py

References

  • [references/root-cause-tracing.md](references/root-cause-tracing.md) - Trace bugs backward through call stack
