Quality over quantity: Each skill provides comprehensive, expert-level guidance with real code examples, troubleshooting guides, and production-ready workflows.
## 📦 Quick Install (Recommended)
Install skills to any coding agent (Claude Code, OpenCode, Cursor, Codex, Gemini CLI, Qwen Code) with one command:
```bash
npx @orchestra-research/ai-research-skills
```
This launches an interactive installer that:
- Auto-detects your installed coding agents
- Installs skills to `~/.orchestra/skills/` with symlinks to each agent
- Lets you install everything, a quickstart bundle, a whole category, or individual skills
- Updates installed skills to the latest versions
- Uninstalls all or selected skills
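For orientation, the resulting layout looks roughly like the sketch below. The shared store at `~/.orchestra/skills/` is described above; the agent-side path (`~/.claude/skills/`) and the category directory name are assumptions for illustration only.

```bash
# Hypothetical post-install layout (a sketch, not actual installer output).
# Skills are stored once in the shared directory:
#   ~/.orchestra/skills/fine-tuning/
#   ~/.orchestra/skills/inference-serving/
#   ...
# Each detected agent then gets symlinks pointing back to that store,
# conceptually equivalent to (the agent path is an assumption):
ln -s ~/.orchestra/skills/fine-tuning ~/.claude/skills/fine-tuning
```

Because every agent resolves the same store through symlinks, a single `update` refreshes the skills for all detected agents at once.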
### CLI Commands
```bash
# Interactive installer (recommended)
npx @orchestra-research/ai-research-skills
# Direct commands
npx @orchestra-research/ai-research-skills list # View installed skills
npx @orchestra-research/ai-research-skills update # Update installed skills
```
## Claude Code Marketplace (Alternative)
Install skill categories directly using the Claude Code CLI:
```bash
# Add the marketplace
/plugin marketplace add orchestra-research/ai-research-skills
# Install by category (20 categories available)
/plugin install fine-tuning@ai-research-skills # Axolotl, LLaMA-Factory, PEFT, Unsloth
/plugin install post-training@ai-research-skills # TRL, GRPO, OpenRLHF, SimPO, verl, slime, miles, torchforge
/plugin install inference-serving@ai-research-skills # vLLM, TensorRT-LLM, llama.cpp, SGLang
/plugin install distributed-training@ai-research-skills
/plugin install optimization@ai-research-skills
```
## All 20 Categories (82 Skills)
| Category | Skills | Included |
|----------|--------|----------|
| Model Architecture | 5 | LitGPT, Mamba, NanoGPT, RWKV, TorchTitan |
| Tokenization | 2 | HuggingFace Tokenizers, SentencePiece |
| Fine-Tuning | 4 | Axolotl, LLaMA-Factory, PEFT, Unsloth |
| Mech Interp | 4 | TransformerLens, SAELens, pyvene, nnsight |
| Data Processing | 2 | NeMo Curator, Ray Data |
| Post-Training | 8 | TRL, GRPO, OpenRLHF, SimPO, verl, slime, miles, torchforge |
| Safety | 3 | Constitutional AI, LlamaGuard, NeMo Guardrails |
| Distributed | 6 | DeepSpeed, FSDP, Accelerate, Megatron-Core, Lightning, Ray Train |
| Infrastructure | 3 | Modal, Lambda Labs, SkyPilot |
| Optimization | 6 | Flash Attention, bitsandbytes, GPTQ, AWQ, HQQ, GGUF |
| Evaluation | 3 | lm-eval-harness, BigCode, NeMo Evaluator |
| Inference | 4 | vLLM, TensorRT-LLM, llama.cpp, SGLang |
| MLOps | 3 | W&B, MLflow, TensorBoard |
| Agents | 4 | LangChain, LlamaIndex, CrewAI, AutoGPT |
| RAG | 5 | Chroma, FAISS, Pinecone, Qdrant, Sentence Transformers |
| Prompt Eng | 4 | DSPy, Instructor, Guidance, Outlines |
| Observability | 2 | LangSmith, Phoenix |
| Multimodal | 7 | CLIP, Whisper, LLaVA, BLIP-2, SAM, Stable Diffusion, AudioCraft |
| Emerging | 6 | MoE, Model Merging, Long Context, Speculative Decoding, Distillation, Pruning |
| ML Paper Writing | 1 | ML Paper Writing (LaTeX templates, citation verification) |
### 🏗️ Model Architecture (5 skills)
- [LitGPT](01-model-architecture/litgpt/) - Lightning AI's 20+ clean LLM implementations with production training recipes (462 lines + 4 refs)
- [Mamba](01-model-architecture/mamba/) - State-space models with O(n) complexity, 5× faster than Transformers (253 lines + 3 refs)
- [RWKV](01-model-architecture/rwkv/) - RNN+Transformer hybrid, infinite context, Linux Foundation project (253 lines + 3 refs)
- [NanoGPT](01-model-architecture/nanogpt/) - Educational GPT in ~300 lines by Karpathy (283 lines + 3 refs)
- [TorchTitan](01-model-architecture/torchtitan/) - PyTorch-native distributed training for Llama 3.1 with 4D parallelism
### 🔤 Tokenization (2 skills)
- [HuggingFace Tokenizers](02-tokenization/huggingface-tokenizers/) - Rust-based, <20s/GB, BPE/WordPiece/Unigram algorithms (486 lines + 4 refs)
- **[