huggingface-accelerate
Skill from orchestra-research/ai-research-skills
Streamlines distributed training and inference for machine learning models across multiple GPUs, TPUs, and hardware configurations using Hugging Face Accelerate.
Part of orchestra-research/ai-research-skills (84 items)
Installation
npx @orchestra-research/ai-research-skills
npx @orchestra-research/ai-research-skills list    # View installed skills
npx @orchestra-research/ai-research-skills update  # Update installed skills
/plugin marketplace add orchestra-research/AI-research-SKILLs
/plugin install fine-tuning@ai-research-skills     # Axolotl, LLaMA-Factory, PEFT, Unsloth
(+ 4 more commands)
More from this repository (10)
Streamlines AI research workflows by providing curated Claude skills for data analysis, literature review, experiment design, and research paper generation.
Assists AI researchers in drafting, structuring, and generating machine learning research papers with academic writing best practices and technical precision.
Streamlines machine learning experiment tracking, visualization, and hyperparameter optimization using Weights & Biases platform integration.
Quantize large language models to GGUF format, reducing model size and improving inference performance across different hardware platforms.
Accelerates AI model inference by predicting and parallel processing multiple token candidates to reduce latency and improve generation speed.
Analyze and describe images using advanced multimodal AI, extracting detailed visual insights and contextual understanding across various domains.
Streamlines supervised fine-tuning of language models using Simple, Interpretable, Modular Policy Optimization (SIMPO) techniques.
Orchestrates collaborative AI agents using CrewAI to solve complex tasks through dynamic role assignment, task delegation, and intelligent workflow management.
Automates scientific literature curation by extracting, summarizing, and organizing research papers from the marine biology and oceanography domains.
Streamlines reinforcement learning model training in PyTorch with automated hyperparameter tuning, environment setup, and advanced policy optimization techniques.