Showing 12 of 45939 results
neversight/skills.sh_feed
Automates and manages execution of complex task sequences, allowing efficient workflow orchestration and sequential task execution.
marius-townhouse/effective-typescript-skills
Transforms callback-based asynchronous code into clean, readable async/await patterns for better type flow and error handling.
majesticlabs-dev/majestic-marketplace
Generates comprehensive test fixtures and mock data for software development, simplifying testing and development workflows.
mintlify/com
orchestra-research/ai-research-skills
Efficiently deploys and serves large language models using vLLM for high-performance inference with optimized GPU utilization and low-latency responses.
orchestra-research/ai-research-skills
Streamlines machine learning model optimization by reducing parameter count and computational complexity while preserving accuracy.
hummingbot/skills
Generates professional slide decks from markdown or text input, automatically creating visually appealing presentations with consistent design and layout.
orchestra-research/ai-research-skills
orchestra-research/ai-research-skills
Streamlines distributed training and inference for machine learning models across multiple GPUs, TPUs, and hardware configurations using Hugging Face Accelerate.
orchestra-research/ai-research-skills
Quantizes large language models to GGUF format, reducing model size and improving inference performance across different hardware platforms.
orchestra-research/ai-research-skills
Streamlines PyTorch model training with automated logging, distributed computing, and advanced callbacks for efficient deep learning workflows.
orchestra-research/ai-research-skills
Accelerates AI model inference by predicting multiple token candidates and processing them in parallel, reducing latency and improving generation speed.