Step 1: Define Cascade Levels
- Use architect agent to identify cascade levels
- Define PRIMARY approach (optimal solution)
- Define SECONDARY approach (acceptable degradation)
- Define TERTIARY approach (guaranteed completion)
- Set timeout for each level
- Document what degrades at each level
- CRITICAL: Ensure tertiary ALWAYS succeeds
Example Cascade Definitions:
Code Analysis with AI:
- PRIMARY: GPT-4 comprehensive analysis (timeout: 30s)
- SECONDARY: GPT-3.5 standard analysis (timeout: 10s)
- TERTIARY: Static analysis with regex (timeout: 5s)
External API Data Fetch:
- PRIMARY: Live API call (timeout: 10s)
- SECONDARY: Cached data (timeout: 2s)
- TERTIARY: Default values (timeout: 0s)
Test Execution:
- PRIMARY: Full test suite (timeout: 120s)
- SECONDARY: Critical tests only (timeout: 30s)
- TERTIARY: Smoke tests (timeout: 10s)
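The cascade definitions above can be captured in a small data structure. This is a sketch; the `CascadeLevel` type and the placeholder `run` callables are illustrative, not part of any existing API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CascadeLevel:
    name: str          # PRIMARY / SECONDARY / TERTIARY
    run: Callable      # the approach to execute at this level
    timeout_s: float   # per-level timeout
    degrades: str      # what is lost when this level is used

# Illustrative cascade for the "Code Analysis with AI" example above.
# The lambdas stand in for real analysis implementations.
code_analysis_cascade = [
    CascadeLevel("PRIMARY", lambda: "comprehensive", 30.0, "nothing"),
    CascadeLevel("SECONDARY", lambda: "standard", 10.0, "semantic depth"),
    CascadeLevel("TERTIARY", lambda: "regex-static", 5.0, "AI insight entirely"),
]
```

Keeping levels as ordered data (rather than hard-coded branches) makes the cascade easy to reconfigure per task.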
Step 2: Attempt Primary Approach
- Execute optimal solution
- Set timeout based on strategy configuration
- Monitor execution progress
- If completes successfully: DONE (best outcome)
- If fails or times out: Continue to Step 3
- Log attempt and reason for failure
```python
# Pseudocode for primary attempt
try:
    result = execute_primary_approach(timeout=PRIMARY_TIMEOUT)
    log_success(level="PRIMARY", result=result)
    return result  # DONE - best outcome achieved
except TimeoutError:
    log_failure(level="PRIMARY", reason="timeout")
    # Continue to Step 3
except ExternalServiceError as e:
    log_failure(level="PRIMARY", reason=f"service_error: {e}")
    # Continue to Step 3
```
Step 3: Attempt Secondary Approach
- Log degradation to secondary level
- Execute acceptable fallback solution
- Set shorter timeout (typically 1/3 of primary)
- Monitor execution progress
- If completes successfully: DONE (acceptable outcome)
- If fails or times out: Continue to Step 4
- Log attempt and reason for failure
```python
# Pseudocode for secondary attempt
log_degradation(from_level="PRIMARY", to_level="SECONDARY")
try:
    result = execute_secondary_approach(timeout=SECONDARY_TIMEOUT)
    log_success(level="SECONDARY", result=result, degraded=True)
    return result  # DONE - acceptable outcome
except TimeoutError:
    log_failure(level="SECONDARY", reason="timeout")
    # Continue to Step 4
```
Step 4: Attempt Tertiary Approach
- Log degradation to tertiary level
- Execute guaranteed completion approach
- Set minimal timeout (typically 1s)
- MUST succeed - no failures allowed
- Return minimal but functional result
- Log success (degraded but functional)
- DONE (guaranteed completion)
```python
# Pseudocode for tertiary attempt
log_degradation(from_level="SECONDARY", to_level="TERTIARY")
try:
    result = execute_tertiary_approach(timeout=TERTIARY_TIMEOUT)
    log_success(level="TERTIARY", result=result, heavily_degraded=True)
    return result  # DONE - minimal but functional
except Exception as e:
    # THIS SHOULD NEVER HAPPEN
    log_critical_failure("TERTIARY approach failed - this is a bug!")
    raise SystemError("Cascade safety violation: tertiary failed") from e
```
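Steps 2 through 4 can be folded into a single loop that walks the levels in order. This is a sketch, assuming each level exposes a callable and the final (tertiary) level is never expected to fail; `slow_primary` and the lambdas are stand-ins for real approaches:

```python
import logging

logger = logging.getLogger("cascade")

def run_cascade(levels):
    """Try each (name, fn, timeout_s) level in order; the last must always succeed."""
    for i, (name, fn, timeout_s) in enumerate(levels):
        is_last = i == len(levels) - 1
        try:
            # Placeholder: a real implementation would enforce timeout_s here.
            result = fn(timeout=timeout_s)
            logger.info("CASCADE: %s success", name)
            return name, result
        except Exception as exc:
            if is_last:
                # Tertiary failed: cascade safety violation, surface loudly.
                logger.critical("CASCADE: %s failed - this is a bug: %s", name, exc)
                raise SystemError("Cascade safety violation: tertiary failed") from exc
            logger.warning("CASCADE: %s failed (%s), degrading", name, exc)

def slow_primary(timeout):
    raise TimeoutError("model too slow")  # simulate the primary timing out

levels = [
    ("PRIMARY", slow_primary, 30),
    ("SECONDARY", lambda timeout: "standard_analysis", 10),
    ("TERTIARY", lambda timeout: "static_analysis", 5),
]
```

With `slow_primary` raising, `run_cascade(levels)` degrades once and returns the secondary result.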
Step 5: Report Degradation
- Determine notification level from configuration
- Silent: Log only, no user message
- Warning: Brief notification to user
- Explicit: Detailed degradation explanation
- Document which level succeeded
- Explain impact of degradation
- Log cascade path taken for analysis
Degradation Reporting Templates:
Silent Mode:
```
[LOG] CASCADE: PRIMARY timeout (30s) → SECONDARY success (6s)
Result: standard_analysis (degraded from comprehensive)
```
Warning Mode:
```
⚠️ Using cached data (less than 1 hour old)
Current real-time data unavailable.
```
Explicit Mode:
```
ℹ️ Analysis Quality Notice
We attempted to provide comprehensive code analysis using GPT-4,
but encountered slow response times (>30s timeout).
Fallback Applied:
- Used: GPT-3.5 standard analysis (completed in 6s)
- Quality: Standard (vs. Comprehensive)
- Impact: Advanced semantic insights not included
What You're Getting:
✅ Basic pattern detection
✅ Standard recommendations
✅ Code quality assessment
What's Missing:
❌ Complex architectural insights
❌ Deep semantic analysis
❌ Advanced refactoring suggestions
```
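The three notification levels can be handled by a single dispatch helper. A sketch; the `report_degradation` name and the mode strings are illustrative, not an existing API:

```python
def report_degradation(mode, level, detail):
    """Render a degradation notice for the configured notification level.

    mode is one of 'silent', 'warning', 'explicit' (names are illustrative).
    """
    if mode == "silent":
        # Log only, no user-facing message.
        return f"[LOG] CASCADE: degraded to {level} ({detail})"
    if mode == "warning":
        # Brief notification to the user.
        return f"Warning: {detail}"
    if mode == "explicit":
        # Detailed explanation of what degraded and why.
        return f"Quality Notice\nLevel used: {level}\nImpact: {detail}"
    raise ValueError(f"unknown notification mode: {mode}")
```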
Step 6: Log Cascade Metrics
- Record cascade path taken
- Document level reached (primary/secondary/tertiary)
- Log timing for each level attempted
- Track degradation frequency
- Identify patterns in failures
- Update cascade strategy if needed
Metrics to Track:
- Success rate by level
- Average response times
- Degradation frequency
- User impact assessment
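A minimal in-memory tracker for the metrics listed above might look like this (a sketch; the class name and fields are illustrative):

```python
from collections import Counter

class CascadeMetrics:
    """Track which level succeeded and how long each run took."""

    def __init__(self):
        self.success_by_level = Counter()  # success count per level
        self.timings = []                  # (level, elapsed seconds) pairs

    def record(self, level, elapsed_s):
        self.success_by_level[level] += 1
        self.timings.append((level, elapsed_s))

    def degradation_rate(self):
        """Fraction of runs that did not complete at PRIMARY."""
        total = sum(self.success_by_level.values())
        degraded = total - self.success_by_level["PRIMARY"]
        return degraded / total if total else 0.0
```

In practice these counters would be persisted so the Step 7 review can see trends across runs.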
Step 7: Continuous Optimization
- Use analyzer agent to review cascade metrics
- Identify optimization opportunities
- Adjust timeouts based on success rates
- Improve secondary approaches if frequently used
- Update tertiary if inadequate
- Store learnings in memory using store_discovery() from amplihack.memory.discoveries
Optimization Criteria:
- If PRIMARY succeeds < 50%: Timeout too aggressive → Increase timeout
- If SECONDARY used > 40%: Secondary is really the "normal" case → Swap primary and secondary
- If TERTIARY used > 10%: Secondary not reliable enough → Improve secondary
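The optimization criteria above translate directly into a rule check. A sketch, using the thresholds from the text; the function name and the rates-as-fractions convention are assumptions:

```python
def suggest_adjustments(primary_rate, secondary_rate, tertiary_rate):
    """Apply the three optimization criteria; rates are fractions in [0, 1]."""
    suggestions = []
    if primary_rate < 0.5:
        suggestions.append("increase primary timeout")
    if secondary_rate > 0.4:
        suggestions.append("swap primary and secondary")
    if tertiary_rate > 0.1:
        suggestions.append("improve secondary reliability")
    return suggestions
```

For example, a cascade where PRIMARY succeeds only 45% of the time and SECONDARY handles 45% of runs would trigger the first two suggestions.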