tldr-stats
Skill from parcadei/continuous-claude-v3
Generates a detailed dashboard showing token usage, API costs, TLDR savings, and hook activity for Claude sessions.
Installation
npx skills add https://github.com/parcadei/continuous-claude-v3 --skill tldr-stats
Skill Details
Show full session token usage, costs, TLDR savings, and hook activity
Overview
# TLDR Stats Skill
Show a beautiful dashboard with token usage, actual API costs, TLDR savings, and hook activity.
When to Use
- See how much TLDR is saving you in real $ terms
- Check total session token usage and costs
- Before/after comparisons of TLDR effectiveness
- Debug whether TLDR/hooks are being used
- See which model is being used
Instructions
IMPORTANT: Run the script AND display the output to the user.
- Run the stats script:
```bash
python3 $CLAUDE_PROJECT_DIR/.claude/scripts/tldr_stats.py
```
- Copy the full output into your response so the user sees the dashboard directly in the chat. Do not just run the command silently - the user wants to see the stats.
Sample Output
```
┌────────────────────────────────────────────────────────────────┐
│ 📊 Session Stats                                                │
└────────────────────────────────────────────────────────────────┘
You've spent $96.52 this session
Tokens Used
1.2M sent to Claude
416.3K received back
97.8K from prompt cache (8% reused)
TLDR Savings
You sent: 1.2M
Without TLDR: 2.5M
💰 TLDR saved you ~$18.83
(Without TLDR: $115.35 → With TLDR: $96.52)
File reads: 1.3M → 20.9K ██████████ 98% smaller
TLDR Cache
Re-reading the same file? TLDR remembers it.
██████░░░░░░░░░ 37% cache hits
(35 reused / 60 parsed fresh)
Hooks: 553 calls (✓ all ok)
History: ▃▅▇ ▁▃▅▇ avg 84% compression
Daemon: 24m up · 3 sessions
```
Understanding the Numbers
| Metric | What it means |
|--------|---------------|
| You've spent | Actual $ spent on Claude API this session |
| You sent / Without TLDR | Tokens actually sent vs. what would have been sent without TLDR compression |
| TLDR saved you | Money saved by compressing file reads |
| File reads X → Y | Raw file tokens compressed down to the TLDR summary size |
| Cache hits | How often TLDR reuses previously parsed file results |
| History sparkline | Compression % over recent sessions (▇ = high) |
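For reference, the headline ratios in the sample output can be reproduced with a few lines of arithmetic. This is an illustrative sketch using the sample numbers above, not the actual logic inside tldr_stats.py:

```python
# Illustrative arithmetic only; numbers are taken from the sample output
# above, not read from any real session data.
sent = 1_200_000          # "You sent"
without_tldr = 2_500_000  # "Without TLDR"
raw_file_tokens = 1_300_000
tldr_summary_tokens = 20_900
cache_reused, cache_fresh = 35, 60

compression = 1 - tldr_summary_tokens / raw_file_tokens
cache_hit_rate = cache_reused / (cache_reused + cache_fresh)
tokens_avoided = without_tldr - sent

print(f"File reads: {compression:.0%} smaller")       # ~98% smaller
print(f"Cache hits: {cache_hit_rate:.0%}")            # ~37%
print(f"Tokens avoided by TLDR: {tokens_avoided:,}")  # 1,300,000
```

The dollar figures additionally depend on the per-model rates listed under Notes below.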
Visual Elements
- Progress bars show savings and cache efficiency at a glance
- Sparklines show historical trends (▇ = high savings, ▁ = low); see the sketch after this list
- Colors indicate status (green = good, yellow = moderate, red = concern)
- Emojis distinguish model types (one icon each for Opus, Sonnet, and Haiku)
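The bars and sparklines above are plain Unicode block characters. A minimal sketch of how such visuals can be rendered, assuming simple proportional scaling (the real script's implementation may differ):

```python
# Minimal sketch of Unicode-based bars and sparklines, similar in spirit to
# the dashboard's visuals; not the actual rendering code in tldr_stats.py.
BLOCKS = "▁▂▃▄▅▆▇█"

def progress_bar(fraction: float, width: int = 15) -> str:
    """Render a fraction (0.0-1.0) as a filled/empty block bar."""
    filled = round(fraction * width)
    return "█" * filled + "░" * (width - filled)

def sparkline(values: list[float]) -> str:
    """Map each value to one of eight block heights, scaled to the max."""
    peak = max(values) or 1.0
    return "".join(BLOCKS[min(7, int(v / peak * 7))] for v in values)

print(progress_bar(0.37), "37% cache hits")
print(sparkline([55, 70, 84, 90]), "compression over recent sessions")
```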
Notes
- Token savings vary by file size (big files = more savings)
- Cache hit rate starts low, increases as you re-read files
- Cost estimates use: Opus $15/1M, Sonnet $3/1M, Haiku $0.25/1M
- Stats update in real-time as you work
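A hypothetical sketch of the cost-estimate arithmetic implied by the rates above. The function name and model keys are illustrative, and the real dashboard's totals also reflect output and cache tokens, which this sketch omits:

```python
# Illustrative only: rates are the per-million input-token prices listed in
# the Notes above. Output-token and cache pricing are omitted here.
RATE_PER_MTOK = {"opus": 15.00, "sonnet": 3.00, "haiku": 0.25}

def estimate_cost(tokens: int, model: str) -> float:
    """Estimate USD cost for a token count at the listed per-model rate."""
    return tokens / 1_000_000 * RATE_PER_MTOK[model]

# Input side of the sample session, assuming Opus pricing for illustration:
# 1.2M tokens sent with TLDR vs. 2.5M without.
print(f"input cost with TLDR:    ${estimate_cost(1_200_000, 'opus'):.2f}")  # $18.00
print(f"input cost without TLDR: ${estimate_cost(2_500_000, 'opus'):.2f}")  # $37.50
```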
More from this repository (10)
Enables seamless integration between Agentica agents and Claude Code CLI by managing proxy configurations, tool permissions, and response formatting.
Manages git commits by removing Claude attribution, generating reasoning documentation, and ensuring clean commit workflows.
Systematically diagnose and resolve hook registration, execution, and output issues in Claude Code projects by checking cache, settings, files, and manual testing.
Systematically researches, analyzes, plans, implements, and reviews migrations across frameworks, languages, and infrastructure with minimal risk.
Enables background agent execution with system-triggered progress notifications, avoiding manual polling and context flooding.
Provides comprehensive reference and infrastructure for building sophisticated multi-agent coordination patterns and workflows with precise API specifications and tracking mechanisms.
Generates a comprehensive summary of the current system's configuration, components, and key metrics across skills, agents, hooks, and other core systems.
Provides comprehensive CLI commands and flags for interacting with Claude Code, enabling headless mode, automation, and session management.
Traces and correlates Claude Code session events across parent and sub-agent interactions using comprehensive Braintrust instrumentation.
Rapidly edits files using AI-powered Morph Apply API with high accuracy and speed, without requiring full file context.