nnsight-remote-interpretability
Skill from orchestra-research/ai-research-skills
Enables remote interpretation and analysis of neural network internals through layer probing, activation tracking, and visualization of intermediate states.
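The skill targets the nnsight library, which can run interpretability experiments either locally or against remotely hosted models via the NDIF backend. Below is a minimal sketch of the kind of remote activation probing it covers, assuming nnsight is installed; the model name, prompt, and layer index are illustrative, and remote execution requires an NDIF API key:

from nnsight import LanguageModel

# Illustrative model; any Hugging Face causal LM wrapped by nnsight behaves similarly.
model = LanguageModel("openai-community/gpt2", device_map="auto")

# trace() records an intervention graph over one forward pass.
# remote=True routes execution to the NDIF backend (needs an NDIF API key);
# set remote=False to run the same trace locally.
with model.trace("The Eiffel Tower is in the city of", remote=True):
    # Save the hidden states emitted by transformer block 5 (arbitrary layer choice).
    hidden = model.transformer.h[5].output[0].save()

# After the trace exits, .value holds the concrete activation tensor.
print(hidden.value.shape)  # (batch, seq_len, hidden_size)

The same tracing context also supports in-place interventions (e.g. overwriting an activation before the forward pass continues), which is what the layer-probing workflows in this skill build on.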
Part of orchestra-research/ai-research-skills (84 items)
Installation
npx @orchestra-research/ai-research-skills
npx @orchestra-research/ai-research-skills list      # View installed skills
npx @orchestra-research/ai-research-skills update    # Update installed skills
/plugin marketplace add orchestra-research/ai-research-skills
/plugin install fine-tuning@ai-research-skills       # Axolotl, LLaMA-Factory, PEFT, Unsloth
More from this repository (10)
Streamlines AI research workflows by providing curated Claude skills for data analysis, literature review, experiment design, and research paper generation.
Assists AI researchers in drafting, structuring, and generating machine learning research papers with academic writing best practices and technical precision.
Streamlines distributed machine learning training using Ray, optimizing hyperparameter tuning and parallel model execution across compute clusters.
Streamlines distributed data processing and machine learning workflows using Ray's scalable data loading and transformation capabilities.
Evaluates and benchmarks NVIDIA NeMo language models with comprehensive performance metrics, test suite generation, and model comparison tools.
Provides structured, context-aware advice and recommendations for complex problem-solving, research workflows, and strategic decision-making.
Quantizes large language models using Activation-aware Weight Quantization (AWQ) to reduce model size and improve inference efficiency.
Streamlines distributed training and inference for machine learning models across multiple GPUs, TPUs, and hardware configurations using Hugging Face Accelerate.
Streamlines fine-tuning and deployment of Llama language models with automated configuration, dataset processing, and model optimization workflows.
Quantizes large language models to GGUF format, reducing model size and improving inference performance across different hardware platforms.