## 0. MANDATORY: Use context7 Before Any Code Changes
CRITICAL: Before creating, editing, or suggesting code changes, ALWAYS use context7 to look up current documentation.
When to use context7:
- Before suggesting fixes or improvements
- When reviewing library/framework usage
- Before writing code examples or snippets
- When uncertain about API behavior or best practices
How to use:
```
- Resolve library: context7 resolve <library-name>
- Get docs: context7 get-library-docs --library <library-id> --topic <topic>
```
Example:
```
context7 resolve react
context7 get-library-docs --library react --topic "useEffect dependencies"
```
## 1. Determine Review Scope
Identify what's being reviewed:
- Single file: Review that file
- Directory: Review all relevant files
- Diff/PR: Focus on changed lines with surrounding context
- Entire codebase: Start with entry points, follow dependencies
## 2. Select Review Depth
Choose automatically based on context, or accept user override:
| Depth | When to Use | Focus |
|-------|-------------|-------|
| Quick | Small changes, trivial files, time-sensitive | Critical issues only |
| Standard | Most reviews, single files, typical PRs | All categories, balanced |
| Deep | Pre-production, security-sensitive, complex systems | Exhaustive, security-focused |
## 3. Detect Languages and Load References
Identify languages present, then load relevant reference files:
- R → [references/r.md](references/r.md)
- Python → [references/python.md](references/python.md)
- JavaScript → [references/javascript.md](references/javascript.md)
- SQL → [references/sql.md](references/sql.md)
- C++ → [references/cpp.md](references/cpp.md)
- Rust → [references/rust.md](references/rust.md)
- Go → [references/go.md](references/go.md)
- Ansible → [references/ansible.md](references/ansible.md)
- Kubernetes/Kustomize → [references/kubernetes.md](references/kubernetes.md)
- Dockerfile → [references/dockerfile.md](references/dockerfile.md)
- Docker Compose → [references/docker-compose.md](references/docker-compose.md)
- Bash → [references/bash.md](references/bash.md)
## 4. Use context7 MCP for Documentation (MANDATORY)
ALWAYS query context7 when:
- Reviewing ANY library/framework usage (not just unfamiliar ones)
- Before suggesting code changes or fixes
- Checking if APIs are used correctly
- Verifying deprecated patterns
- Confirming best practices for specific versions
- Writing code examples or snippets in review feedback
Process:
- Identify libraries/frameworks in the code being reviewed
- Use context7 resolve for each one
- Use context7 get-library-docs to verify API usage and patterns
- Only THEN proceed with review findings
Example queries:
- context7 resolve react then context7 get-library-docs --library react --topic "hooks"
- context7 resolve tensorflow then context7 get-library-docs --library tensorflow --topic "layers"
- context7 resolve tidyverse for R tidyverse patterns
- context7 resolve kubernetes for K8s manifest validation
- context7 resolve express for Node.js API patterns
Never skip this step: relying on outdated or incorrect documentation leads to poor review suggestions.
## 5. Spawn Subagents for Parallel Review
Use subagents to parallelize review work:
Language Subagent (one per language detected):
```
Task: Review [language] code in [files] for idioms, patterns, and language-specific issues.
Focus: Style, idioms, language-specific performance, common pitfalls.
Reference: Load references/[language].md
Output: Structured findings list
```
Security Subagent:
```
Task: Analyze [files] for security vulnerabilities.
Focus: Injection, auth issues, secrets exposure, unsafe operations, dependency risks.
Output: Security findings with severity and remediation
```
Architecture Subagent:
```
Task: Review overall structure and design of [files/project].
Focus: Coupling, cohesion, separation of concerns, design patterns, testability.
Output: Architecture findings and recommendations
```
## 6. Review Categories
Each category produces findings with severity ratings.
#### Correctness
- Logic errors and bugs
- Edge cases not handled
- Off-by-one errors
- Null/undefined handling
- Type mismatches
- Race conditions
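To make the off-by-one and edge-case items concrete, here is a minimal, illustrative Python sketch; the function and data are hypothetical, not drawn from any reviewed codebase:

```python
def last_n(items, n):
    """Return the last n elements of items."""
    # Buggy version a review should catch: `return items[-n:]`.
    # When n == 0 the slice becomes items[-0:], and since -0 == 0
    # it returns the WHOLE list instead of the empty list.
    # Clamping the start index handles both n == 0 and n > len(items).
    return items[max(len(items) - n, 0):]
```

The fix works by computing an explicit start index rather than relying on negative-index slicing, which silently misbehaves at the zero boundary.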
#### Security
- Injection vulnerabilities (SQL, command, XSS)
- Authentication/authorization flaws
- Secrets in code
- Unsafe deserialization
- Path traversal
- Dependency vulnerabilities
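As an illustration of the first item (SQL injection), a short sketch using Python's standard-library sqlite3 module; the table and function names are hypothetical:

```python
import sqlite3

def find_user(conn, username):
    # Vulnerable pattern a review should flag:
    #   conn.execute(f"SELECT ... WHERE name = '{username}'")
    # A username like "' OR '1'='1" changes the query's logic.
    # Remediation: let the driver bind the value as a parameter.
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    )
    return cur.fetchone()
```

With parameter binding, the injection payload is treated as an ordinary (non-matching) string value rather than as SQL.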
#### Performance
- Algorithmic complexity issues
- Unnecessary allocations
- N+1 queries
- Missing caching opportunities
- Blocking operations
- Memory leaks
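To illustrate the N+1 item, a minimal sqlite3 sketch (schema and names are hypothetical): the first function issues one query per user, the second replaces the loop with a single aggregate query.

```python
import sqlite3

def totals_n_plus_1(conn):
    # N+1: one query for the user list, then one more query per user.
    user_ids = [row[0] for row in conn.execute("SELECT id FROM users")]
    return {
        uid: conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE user_id = ?",
            (uid,),
        ).fetchone()[0]
        for uid in user_ids
    }

def totals_single_query(conn):
    # Fix: one LEFT JOIN + GROUP BY returns every total in a single round trip.
    rows = conn.execute(
        "SELECT u.id, COALESCE(SUM(o.amount), 0)"
        " FROM users u LEFT JOIN orders o ON o.user_id = u.id"
        " GROUP BY u.id"
    )
    return dict(rows)
```

Both return the same mapping, but the second scales as one query regardless of the number of users.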
#### Testing
- Missing test coverage
- Untested edge cases
- Brittle tests
- Missing integration tests
- Inadequate mocking
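For the untested-edge-cases item, a small illustrative sketch (the function is hypothetical) showing the boundary values a review would expect tests to cover:

```python
def parse_port(value):
    """Parse a TCP port number from a string, rejecting out-of-range values."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# A thorough review expects tests at the boundaries (1 and 65535),
# just outside them (0 and 65536), and for non-numeric input ("abc").
```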
#### Documentation
- Missing function/class docstrings
- Outdated comments
- Unclear variable names
- Missing README updates
- Undocumented public APIs
#### Architecture
- Tight coupling
- God objects/functions
- Circular dependencies
- Layer violations
- Missing abstractions
- Poor separation of concerns
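The coupling and testability items above can be sketched in a few lines of hypothetical Python: the notifier receives its sender as a dependency instead of constructing a concrete transport internally, so tests can substitute a fake.

```python
class Notifier:
    # Loose coupling: the sender is injected, so Notifier never names a
    # concrete transport (SMTP, SMS, ...) and can be tested with a stub.
    def __init__(self, sender):
        self._sender = sender

    def notify(self, user, message):
        self._sender.send(user, message)

class RecordingSender:
    # Test double standing in for a real transport.
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))
```

The tightly coupled version would call something like `SmtpSender().send(...)` inside `notify`, making the class untestable without a mail server.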
## 7. Classify Findings
Rate each finding:
| Severity | Definition | Action |
|----------|------------|--------|
| Critical | Security vulnerability, data loss risk, crash in production | Must fix before merge |
| Major | Significant bug, performance issue, maintainability blocker | Should fix before merge |
| Minor | Code smell, style issue, minor inefficiency | Fix when convenient |
| Nitpick | Preference, very minor style, optional improvement | Consider fixing |
## 8. Generate Outputs
#### Output 1: Claude Code Action Format
Produce structured findings Claude Code can act on directly:
```