distributed-llm-pretraining-torchtitan
Skill from davila7/claude-code-templates
Enables distributed large language model pre-training with torchtitan, PyTorch's native library for large-scale LLM training, supporting efficient and scalable training across multiple compute nodes.
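As a rough illustration of what such a workflow involves, the sketch below is a minimal multi-node pre-training loop using PyTorch's FSDP, assuming a launch via torchrun; the model, batch shapes, and loss are placeholders for illustration, not torchtitan's own training loop.

# Minimal multi-node pre-training sketch with PyTorch FSDP.
# Assumes launch via torchrun, which sets RANK, WORLD_SIZE, and LOCAL_RANK;
# the model, data, and loss below are placeholders, not torchtitan's own loop.
import os

import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main():
    torch.distributed.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder transformer stack; a real run would build a Llama-style model.
    model = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
        num_layers=4,
    ).cuda()
    model = FSDP(model)  # shard parameters, gradients, and optimizer state

    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    for _ in range(10):  # toy loop; real pre-training streams token batches
        batch = torch.randn(8, 128, 512, device="cuda")
        loss = model(batch).pow(2).mean()  # stand-in for a cross-entropy LM loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    torch.distributed.destroy_process_group()


if __name__ == "__main__":
    main()

A script like this would be started on each node with, e.g., torchrun --nnodes=2 --nproc_per_node=8 --rdzv_endpoint=<host>:29500 train.py, where the hostname and port are placeholders.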
Part of davila7/claude-code-templates (625 items)
Installation
npx claude-code-templates@latest
npx claude-code-templates@latest --agent development-team/frontend-developer --command testing/generate-tests --mcp development/github-integration --yes
npx claude-code-templates@latest --agent development-tools/code-reviewer --yes
npx claude-code-templates@latest --command performance/optimize-bundle --yes
npx claude-code-templates@latest --setting performance/mcp-timeouts --yes
+ 7 more commands
Skill Details
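One recurring concern at this scale is checkpointing sharded state without gathering it onto a single rank. The sketch below uses PyTorch's torch.distributed.checkpoint (DCP), the sharded-checkpoint machinery that stacks like torchtitan build on; the model object and checkpoint path are assumptions for illustration.

# Sharded checkpointing sketch with torch.distributed.checkpoint (DCP):
# each rank saves and restores only its own shard of an FSDP-wrapped model.
# The `model` argument and checkpoint path are illustrative assumptions.
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import (
    get_model_state_dict,
    set_model_state_dict,
)


def save_sharded(model, path="checkpoints/step_1000"):
    state = {"model": get_model_state_dict(model)}  # shard-aware state dict
    dcp.save(state, checkpoint_id=path)  # each rank writes only its shard


def load_sharded(model, path="checkpoints/step_1000"):
    state = {"model": get_model_state_dict(model)}
    dcp.load(state, checkpoint_id=path)  # shards are read back in place
    set_model_state_dict(model, state["model"])  # apply to the live model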
More from this repository (10)
Performance optimization suite with profiling, bundle analysis, and speed improvement tools
Enterprise security toolkit with auditing, penetration testing, and compliance automation
Automated documentation generation with API docs, technical writing, and content management
Complete Supabase workflow with specialized commands, data engineering agents, and MCP integrations
Project management toolkit with sprint planning, task automation, and team collaboration tools
Complete Next.js and Vercel development toolkit with deployment automation and performance optimization
DevOps automation suite with CI/CD, infrastructure management, and deployment orchestration
Git workflow automation: feature, release, and hotfix commands with git specialists
Comprehensive testing toolkit with E2E, unit, integration, and visual testing automation
AI and Machine Learning development suite with data engineering and model deployment tools