pytorch-fsdp2
Skill from orchestra-research/ai-research-skills
Enables distributed training of large AI models using PyTorch's Fully Sharded Data Parallel (FSDP2) with advanced memory optimization and scaling techniques.
Part of orchestra-research/ai-research-skills (84 items)
Installation
```
npx @orchestra-research/ai-research-skills
npx @orchestra-research/ai-research-skills list     # View installed skills
npx @orchestra-research/ai-research-skills update   # Update installed skills
/plugin marketplace add orchestra-research/AI-research-SKILLs
/plugin install fine-tuning@ai-research-skills      # Axolotl, LLaMA-Factory, PEFT, Unsloth
```
+ 4 more commands
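Example
For orientation, here is a minimal FSDP2 training-step sketch, not taken from this skill's files. It assumes PyTorch 2.6+, where `fully_shard` is exported from `torch.distributed.fsdp` (the 2.4/2.5 releases ship it under `torch.distributed._composable.fsdp`), a multi-GPU CUDA machine, and a `torchrun` launch; the model shape and hyperparameters are illustrative.

```python
# Minimal FSDP2 sketch, assuming PyTorch >= 2.6, launched with e.g.:
#   torchrun --nproc_per_node=2 fsdp2_example.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import fully_shard  # torch.distributed._composable.fsdp on 2.4/2.5

def main():
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Illustrative model; real use would wrap e.g. transformer blocks.
    model = nn.Sequential(
        nn.Linear(4096, 4096),
        nn.ReLU(),
        nn.Linear(4096, 4096),
    ).cuda()

    # Shard parameters across ranks: apply fully_shard to each parameterized
    # submodule, then to the root, so each layer gathers and frees only its
    # own shard during forward/backward.
    for layer in model:
        if isinstance(layer, nn.Linear):
            fully_shard(layer)
    fully_shard(model)

    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # One illustrative training step on random data.
    x = torch.randn(8, 4096, device="cuda")
    loss = model(x).sum()
    loss.backward()
    optim.step()
    optim.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Unlike the FSDP1 wrapper class, `fully_shard` rewrites the module in place: parameters become DTensor shards, and the runtime all-gathers them before each wrapped module's forward/backward and reduce-scatters gradients afterward, which is what keeps per-GPU memory bounded.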
More from this repository (10)
Streamlines AI research workflows by providing curated Claude skills for data analysis, literature review, experiment design, and research paper generation.
Assists AI researchers in drafting, structuring, and generating machine learning research papers with academic writing best practices and technical precision.
Streamlines distributed data processing and machine learning workflows using Ray's scalable data loading and transformation capabilities.
Streamlines distributed machine learning training using Ray, optimizing hyperparameter tuning and parallel model execution across compute clusters.
Streamlines distributed training and inference for machine learning models across multiple GPUs, TPUs, and hardware configurations using Hugging Face Accelerate.
Automates scientific literature curation by extracting, summarizing, and organizing research papers from marine biology and oceanography domains.
Quantizes large language models to GGUF format, reducing model size and improving inference performance across different hardware platforms.
Generates structured document outlines and hierarchical content maps with customizable depth and formatting for research and writing workflows.
Trains sparse autoencoders on neural network activations to discover interpretable features and understand internal representations.
Optimizes and accelerates large language model inference using NVIDIA TensorRT-LLM for faster, more efficient model deployment.