Step 1: Identify Changes to Review
If given a PR URL:
```bash
# Extract PR info
gh pr view PR_NUMBER --json title,body,additions,deletions,files
# Get the diff
gh pr diff PR_NUMBER
```
If reviewing current branch:
```bash
# Find the repo's default (base) branch; fall back to origin/master if unset
git symbolic-ref --short refs/remotes/origin/HEAD 2>/dev/null || echo origin/master
# Show what will be in the PR
git diff origin/master...HEAD --stat
git diff origin/master...HEAD
```
If reviewing uncommitted changes:
```bash
git diff --stat
git diff
# Staged-but-uncommitted changes are not shown by plain `git diff`
git diff --staged
```
Step 2: Gather Context
Before reviewing, understand the intent:
- Read the PR description or commit messages
- Check for linked issues or documentation
- Look for project-specific guidelines:
```bash
# Check for project CLAUDE.md or AGENTS.md
cat CLAUDE.md 2>/dev/null || cat AGENTS.md 2>/dev/null || echo "No project guidelines found"
```
Step 3: Review the Changes
For each file changed, evaluate these key areas:
- Implementation Completeness
  - Are all code paths handled?
  - Is any placeholder or stub code left behind?
  - Do error messages make sense?
- Test Quality
  - Are tests added for new functionality?
  - Do tests verify behavior, not just coverage?
  - Are edge cases tested?
  - Would these tests catch a regression?
- Complexity Impact
  - Does this add new abstractions? Are they justified?
  - Is there a simpler way to achieve the same goal?
  - Does it follow existing patterns in the codebase?
- Performance Considerations
  - Any new loops over large datasets?
  - Unnecessary memory allocations in hot paths?
  - I/O operations that could be batched?
- Duplication Check
  - Search for similar existing code:
```bash
# Look for similar function names or patterns
rg "similar_function_name" --type py
```
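Parts of the checklist above can be roughed out mechanically before reading the diff closely. A heuristic sketch, assuming an `origin/master` base and the convention that test files contain "test" in their path:

```shell
# Heuristic pre-pass over the diff; refine the findings by hand afterwards.
base=origin/master
changed=$(git diff "$base"...HEAD --name-only 2>/dev/null || true)

# Test Quality: warn when the diff touches no test files
if printf '%s\n' "$changed" | grep -iq 'test'; then
  echo "Diff includes test changes"
else
  echo "WARNING: no test files changed"
fi

# Performance: count added lines that introduce loops
loops=$(git diff "$base"...HEAD 2>/dev/null | grep '^+' | grep -cE '\b(for|while)\b' || true)
echo "Added lines introducing loops: ${loops:-0}"
```

A nonzero loop count is only a prompt to look closer, not evidence of a problem.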
Step 3.5: Synthesize Multi-Model Reviews (If Available)
If external reviews were collected in Step 0, synthesize them with your findings:
- If you saved to temp files, read them:
```bash
# Check for Gemini review
[ -f /tmp/claude-gemini-review.md ] && cat /tmp/claude-gemini-review.md
# Check for Codex review
[ -f /tmp/claude-codex-review.md ] && cat /tmp/claude-codex-review.md
```
- Cross-reference findings:
  - Issues found by multiple models → higher confidence; prioritize in "Must Address"
  - Unique findings from each model → evaluate independently; include if valid
  - Contradicting assessments → note the disagreement and provide your judgment
- Deduplicate and merge:
  - Combine similar issues into single entries
  - Use the clearest explanation from any source
  - Add a model-agreement indicator where multiple models agree
- Clean up temp files:
```bash
rm -f /tmp/claude-gemini-review.md /tmp/claude-codex-review.md
```
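Before deleting the temp files, the merge step can be sketched as follows. The two `/tmp` input paths come from the step above; the combined output path is a hypothetical scratch file:

```shell
# Sketch: concatenate whichever external reviews exist, one header per source,
# so the findings can be cross-referenced in a single file.
out=/tmp/claude-combined-reviews.md  # hypothetical scratch file
: > "$out"
for f in /tmp/claude-gemini-review.md /tmp/claude-codex-review.md; do
  if [ -f "$f" ]; then
    printf '## %s\n\n' "$(basename "$f")" >> "$out"
    cat "$f" >> "$out"
    printf '\n' >> "$out"
  fi
done
wc -l "$out"
```

If neither review file exists, the combined file is simply empty and synthesis proceeds from your own findings alone.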
Step 4: Provide Feedback
Structure your review as:
```markdown