context-compression
Skill from muratcankoylan/agent-skills-for-context-engineering
Compresses and optimizes conversation context by strategically summarizing and preserving critical information while minimizing token usage across long-running agent sessions.
Part of muratcankoylan/agent-skills-for-context-engineering (21 items)
Installation
/plugin marketplace add muratcankoylan/Agent-Skills-for-Context-Engineering
/plugin install context-engineering-fundamentals@context-engineering-marketplace
/plugin install agent-architecture@context-engineering-marketplace
/plugin install agent-evaluation@context-engineering-marketplace
/plugin install agent-development@context-engineering-marketplace
(+ 1 more command)
Skill Details
This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.
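The compaction behavior this skill describes can be sketched in a few lines: when accumulated conversation history exceeds a token budget, collapse the older turns into a single structured summary message while preserving the most recent turns verbatim. The sketch below is illustrative only; the function names, the `summarize` callback, and the 4-characters-per-token heuristic are assumptions, not part of the skill's actual API.

```python
# Hypothetical sketch of context compaction (not the skill's real API).

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def compact_history(messages, summarize, max_tokens=1000, keep_recent=4):
    """Compress a conversation once it exceeds a token budget.

    The most recent `keep_recent` messages are preserved verbatim; older
    messages are collapsed into one summary message produced by the
    caller-supplied `summarize` callback (e.g. an LLM summarization call).
    """
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= max_tokens or len(messages) <= keep_recent:
        return messages  # under budget: nothing to compress
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize([m["content"] for m in older])
    return [{"role": "system",
             "content": f"Summary of earlier turns: {summary}"}] + recent
```

In a real session the `summarize` callback would be a model call that produces a structured summary (goals, decisions, open questions), which is what keeps tokens-per-task low without losing critical information.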
More from this repository (10)
Context Engineering skills for building production-grade AI agent systems