nemo-curator

Skill from orchestra-research/ai-research-skills

What it does

Curates large-scale text datasets for LLM training with NVIDIA NeMo Curator, covering extraction, cleaning, deduplication, and quality filtering of documents.
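For orientation, a minimal filtering pipeline with the nemo_curator Python package might look like the sketch below. It assumes the package is installed and that raw documents live in local JSONL files; the class names (DocumentDataset, Sequential, ScoreFilter, WordCountFilter) follow NeMo Curator's documentation, not this skill's own interface, and the paths are placeholders.

# Minimal sketch, assuming the nemo_curator package and a local JSONL corpus.
from nemo_curator import ScoreFilter, Sequential
from nemo_curator.datasets import DocumentDataset
from nemo_curator.filters import WordCountFilter

dataset = DocumentDataset.read_json("raw_docs/")                      # load raw documents
pipeline = Sequential([
    ScoreFilter(WordCountFilter(min_words=50), text_field="text"),    # drop very short documents
])
curated = pipeline(dataset)                                           # apply filters
curated.to_json("curated_docs/")                                      # write the cleaned corpus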

Part of orchestra-research/ai-research-skills (84 items)


Installation

Run with npx:
npx @orchestra-research/ai-research-skills
npx @orchestra-research/ai-research-skills list    # View installed skills
npx @orchestra-research/ai-research-skills update  # Update installed skills

Add the marketplace to Claude Code:
/plugin marketplace add orchestra-research/ai-research-skills

Install a plugin from the marketplace:
/plugin install fine-tuning@ai-research-skills  # Axolotl, LLaMA-Factory, PEFT, Unsloth


Extracted from docs: orchestra-research/ai-research-skills
Installs: 1
Added: Feb 7, 2026

More from this repository (10)

πŸͺ
orchestra-research-ai-research-skillsπŸͺMarketplace

Streamlines AI research workflows by providing curated Claude skills for data analysis, literature review, experiment design, and research paper generation.

ml-paper-writing (Skill)

Assists AI researchers in drafting, structuring, and generating machine learning research papers with academic writing best practices and technical precision.

torchforge-rl-training (Skill)

Streamlines reinforcement learning model training in PyTorch with automated hyperparameter tuning, environment setup, and advanced policy optimization techniques.

distributed-llm-pretraining-torchtitan (Skill)

Streamlines large-scale distributed pretraining of transformer models using torchtitan, optimizing GPU utilization and model performance.
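torchtitan itself is driven by torchrun and TOML job configs rather than a Python API; purely as a sketch of the underlying idea (sharded data-parallel training), a minimal PyTorch FSDP loop might look like the following. The model size, random data, and reliance on the LOCAL_RANK variable set by torchrun are illustrative assumptions, not torchtitan settings.

# Generic FSDP sketch, assuming a torchrun launch with one process per GPU.
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

model = nn.TransformerEncoderLayer(d_model=1024, nhead=16).cuda()
model = FSDP(model)                                    # shard parameters and gradients across ranks
optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(10):                                 # toy loop on random data
    x = torch.randn(8, 128, 1024, device="cuda")
    loss = model(x).float().pow(2).mean()
    loss.backward()
    optim.step()
    optim.zero_grad()

dist.destroy_process_group()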

quantizing-models-bitsandbytes (Skill)

Quantize large language models to reduce memory footprint and accelerate inference using efficient 8-bit and 4-bit compression techniques with bitsandbytes.
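As a rough illustration (assuming the transformers and bitsandbytes packages; the model id is a placeholder), 4-bit NF4 loading typically goes through BitsAndBytesConfig:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls run in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",              # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)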

awq-quantization (Skill)

Quantizes large language models using Activation-aware Weight Quantization (AWQ) to reduce model size and improve inference efficiency.
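One common route is the AutoAWQ package; a minimal sketch follows, where the model id, output directory, and quantization settings are illustrative assumptions rather than this skill's defaults.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-3.1-8B"       # placeholder model id
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

quant_config = {"w_bit": 4, "q_group_size": 128, "zero_point": True, "version": "GEMM"}
model.quantize(tokenizer, quant_config=quant_config)   # runs activation-aware calibration
model.save_quantized("llama-3.1-8b-awq")               # placeholder output directory
tokenizer.save_pretrained("llama-3.1-8b-awq")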

llama-factory (Skill)

Streamlines fine-tuning and deployment of Llama language models with automated configuration, dataset processing, and model optimization workflows.

dspy (Skill)

Automates complex AI prompt engineering and optimization using DSPy's programmatic framework for building reliable language model pipelines.
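A minimal sketch of DSPy's declarative style, assuming a recent DSPy release and an OpenAI-compatible API key in the environment; the model name and question are placeholders.

import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model name

qa = dspy.ChainOfThought("question -> answer")    # declarative module; DSPy builds the prompt
result = qa(question="What does AWQ stand for?")
print(result.answer)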

mlflow (Skill)

Streamline machine learning experiment tracking, model versioning, and deployment management with comprehensive MLflow integration and best practices.
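For example, basic experiment tracking with the mlflow package looks roughly like this; the experiment name, parameter values, and artifact path are placeholders.

import mlflow

mlflow.set_experiment("quantization-sweeps")   # placeholder experiment name
with mlflow.start_run():
    mlflow.log_param("bits", 4)
    mlflow.log_param("group_size", 128)
    mlflow.log_metric("perplexity", 6.42)      # illustrative value
    mlflow.log_artifact("config.yaml")         # assumes this file exists locally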

hqq-quantization (Skill)

Performs fast, calibration-free quantization of neural networks using HQQ (Half-Quadratic Quantization) to reduce model size and improve inference efficiency.
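A minimal sketch, assuming a recent transformers build with HQQ integration and the hqq package installed; the model id and group size are placeholders.

from transformers import AutoModelForCausalLM, HqqConfig

quant_config = HqqConfig(nbits=4, group_size=64)   # 4-bit weights with per-group scaling
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",                     # placeholder model id
    quantization_config=quant_config,
    device_map="auto",
)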