🎯 verl-rl-training (Skill)

from orchestra-research/ai-research-skills
What it does

Trains and fine-tunes models with the verl reinforcement-learning framework, applying policy-optimization techniques for robust AI alignment.
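A typical verl training run is launched from the command line with Hydra-style config overrides. The sketch below is a hypothetical example, not taken from this listing: the module path and override keys follow verl's documented PPO entry point, but exact key names can differ across versions, and the dataset paths and model name are placeholders.

```shell
# Hypothetical verl PPO launch; config keys follow verl's Hydra-style
# overrides and may vary by version. Paths and model are placeholders.
python3 -m verl.trainer.main_ppo \
    data.train_files=data/gsm8k/train.parquet \
    data.val_files=data/gsm8k/test.parquet \
    actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \
    trainer.n_gpus_per_node=1 \
    trainer.total_epochs=1
```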

📦 Part of orchestra-research/ai-research-skills (84 items)


Installation

Run with npx:

npx @orchestra-research/ai-research-skills
npx @orchestra-research/ai-research-skills list     # View installed skills
npx @orchestra-research/ai-research-skills update   # Update installed skills

Add the marketplace to Claude Code:

/plugin marketplace add orchestra-research/AI-research-SKILLs

Install the plugin from the marketplace:

/plugin install fine-tuning@ai-research-skills  # Axolotl, LLaMA-Factory, PEFT, Unsloth

+ 4 more commands

📖 Extracted from docs: orchestra-research/ai-research-skills
1 install · Added Feb 7, 2026

More from this repository (10)

πŸͺ
orchestra-research-ai-research-skillsπŸͺMarketplace

Streamlines AI research workflows by providing curated Claude skills for data analysis, literature review, experiment design, and research paper generation.

🎯 ml-paper-writing (Skill)

Assists AI researchers in drafting, structuring, and generating machine learning research papers with academic writing best practices and technical precision.

🎯 ray-train (Skill)

Streamlines distributed machine learning training using Ray, optimizing hyperparameter tuning and parallel model execution across compute clusters.

🎯 ray-data (Skill)

Streamlines distributed data processing and machine learning workflows using Ray's scalable data loading and transformation capabilities.

🎯 fine-tuning-with-trl (Skill)

Streamlines parameter-efficient fine-tuning of large language models using Transformers Reinforcement Learning (TRL) techniques and best practices.

🎯 pytorch-fsdp2 (Skill)

Enables distributed training of large AI models using PyTorch's Fully Sharded Data Parallel (FSDP) with advanced memory optimization and scaling techniques.

🎯 nnsight-remote-interpretability (Skill)

Enables remote neural network interpretation and analysis through advanced visualization, layer probing, and activation tracking techniques.

🎯 speculative-decoding (Skill)

Accelerates AI model inference by predicting and parallel-processing multiple token candidates to reduce latency and improve generation speed.

🎯 moe-training (Skill)

Streamlines training and fine-tuning of Mixture of Experts (MoE) models with automated hyperparameter optimization and distributed learning strategies.

🎯 peft-fine-tuning (Skill)

Efficiently fine-tunes large language models using Parameter-Efficient Fine-Tuning (PEFT) techniques with minimal computational resources and memory overhead.