systematic-debugging
Skill from nickcrew/claude-ctx-plugin
A four-phase debugging framework skill that ensures understanding precedes solutions, covering root cause investigation, pattern analysis, hypothesis testing, and implementation for any bug or test failure.
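The four phases described above can be sketched as an ordered checklist that must be worked through in sequence. This is an illustrative sketch, not code from the skill itself; the phase names come from the description, everything else is assumed.

```python
# Hypothetical sketch of the four-phase debugging workflow: each phase must
# be completed in order, so a fix is never attempted before the bug is
# understood. The helper names below are illustrative, not from the skill.
PHASES = [
    "root cause investigation",  # reproduce the failure and trace its origin
    "pattern analysis",          # compare against working code, spot differences
    "hypothesis testing",        # form one hypothesis and test it in isolation
    "implementation",            # apply the fix and verify it resolves the bug
]

def next_phase(completed):
    """Return the next phase to perform, or None when all phases are done.

    Raises ValueError if phases were skipped or completed out of order.
    """
    if completed != PHASES[:len(completed)]:
        raise ValueError("phases must be completed in order")
    remaining = PHASES[len(completed):]
    return remaining[0] if remaining else None
```

For example, `next_phase([])` points at root cause investigation first, and calling it with `["implementation"]` raises, since jumping straight to a fix skips the three understanding phases.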
Same repository
nickcrew/claude-ctx-plugin (100 items)
Installation
npx vibeindex add nickcrew/claude-ctx-plugin --skill systematic-debugging
npx skills add nickcrew/claude-ctx-plugin --skill systematic-debugging
Installs to: ~/.claude/skills/systematic-debugging/SKILL.md
More from this repository (10)
Cortex: a context management toolkit packaged as a Claude Code plugin with 50+ curated commands, 80+ skills, subagents with dependency metadata, and a Python CLI/TUI.
OWASP Top 10 security skill for identifying, preventing, and remediating critical web application security risks with secure coding practices and defense-in-depth strategies.
Generates high-quality, non-generic UI designs with distinctive aesthetics, focusing on performance and progressive disclosure to avoid generic AI-generated design patterns.
Provides Helm chart patterns and Kubernetes deployment best practices from Cortex, a context management toolkit packaged as a Claude Code plugin.
Provides Terraform infrastructure-as-code best practices and guidelines, from Cortex, a comprehensive context management toolkit for Claude Code.
Provides Kubernetes deployment pattern guidance and best practices from Cortex, a context management toolkit for Claude Code with curated agents, commands, and modes.
Skill for testing AI agent skills with subagents using a TDD approach: running baseline scenarios without the skill to identify failures, then writing and refining the skill to address those failures.