transformers
Skill from itsmostafa/llm-engineering-skills
Simplifies fine-tuning, inference, and deployment of transformer models like BERT, GPT, and T5 with optimized workflows and best practices
Part of itsmostafa/llm-engineering-skills (11 items)
Installation
npx skills add https://github.com/itsmostafa/llm-engineering-skills --skill transformers

Need more details? View full documentation on GitHub.
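The sketch below is a rough illustration (not taken from the skill itself) of the kind of Transformers workflow this skill streamlines: quick inference via a pipeline and the skeleton of a fine-tuning run with the Trainer API. The model names, hyperparameters, and the `train_dataset` placeholder are illustrative assumptions.

```python
# Illustrative sketch of a typical Hugging Face Transformers workflow
# (inference + fine-tuning setup). Checkpoints and hyperparameters are
# assumptions for the example, not defaults provided by the skill.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    pipeline,
)

# Inference: a pipeline wraps tokenization, the forward pass, and decoding.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The fine-tuning run converged faster than expected."))

# Fine-tuning: load a base encoder (e.g. BERT) plus its tokenizer, then hand a
# tokenized dataset to the Trainer. `train_dataset` is assumed to be prepared
# elsewhere (e.g. with the `datasets` library).
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

training_args = TrainingArguments(
    output_dir="./checkpoints",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, tokenizer=tokenizer)
# trainer.train()
```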
More from this repository (10)
Claude skills for LLM engineering tasks including PyTorch, Transformers, LoRA fine-tuning, and MLX on Apple Silicon
Collection of LLM engineering skills for PyTorch, Transformers, LoRA, and MLX
Implements LoRA fine-tuning techniques for parameter-efficient model adaptation and transfer learning across large language models
Optimize AI prompts by strategically structuring context, improving response quality, relevance, and task-specific performance across language models
Craft precise, effective prompts to maximize AI model performance, optimize response quality, and control output for various use cases
Guides AI model fine-tuning through reinforcement learning from human feedback (RLHF), improving alignment and performance
Develop, configure, and orchestrate AI agents with advanced prompting, tool integration, and multi-agent collaboration strategies
Efficiently fine-tune large language models using quantized low-rank adaptation (QLoRA) for memory-efficient and performant model customization
Streamlines deep learning workflows with PyTorch, enabling efficient model design, training, and deployment across neural network architectures
Simplifies machine learning workflows with Apple's MLX framework, enabling efficient tensor operations and neural network development on Apple Silicon