hugging-face-model-trainer

🔌 Plugin · huggingface/skills

What it does

Official Hugging Face skills defining AI/ML tasks like dataset creation, model training, and evaluation. Interoperable with Claude Code, OpenAI Codex, Gemini CLI, and Cursor using the standardized Agent Skill format.

Overview

Hugging Face Model Trainer is a plugin from the official Hugging Face Skills repository that provides AI/ML task definitions for model training workflows. It is part of a collection of interoperable skills following the standardized Agent Skill format, working with Claude Code, OpenAI Codex, Gemini CLI, and Cursor.

Key Features

  • Model Training Guidance - Provides structured instructions for training machine learning models within the Hugging Face ecosystem
  • Multi-Agent Compatibility - Works with Claude Code, OpenAI Codex, Gemini CLI, Cursor, and other AI coding assistants through the Agent Skills standard
  • Plugin Marketplace - Installable via Claude Code's plugin marketplace with /plugin marketplace add huggingface/skills followed by /plugin install
  • Standardized Format - Uses SKILL.md files with YAML frontmatter containing name, description, and detailed guidance for AI agents
  • Cross-Tool Fallback - Includes AGENTS.md and gemini-extension.json for compatibility with tools that do not natively support skills
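
To illustrate the standardized format mentioned above, a minimal SKILL.md might look like the following. This is a hypothetical sketch: the field names (`name`, `description`) follow the common Agent Skill conventions described in this listing, but the actual file shipped in this plugin may contain additional fields and far more detailed guidance.

```markdown
---
name: hugging-face-model-trainer
description: Guidance for training and fine-tuning models in the Hugging Face ecosystem.
---

# Model Training

Instructions the AI agent reads when this skill is invoked, e.g. how to
choose a training method, configure a run, and push the resulting model
to the Hugging Face Hub.
```

The YAML frontmatter gives agents a machine-readable summary for skill discovery, while the markdown body carries the detailed instructions consumed once the skill is triggered.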

Who is this for?

This plugin is designed for ML engineers and data scientists who train models using the Hugging Face ecosystem and want AI coding assistants to provide expert guidance during the training process. It is ideal for teams working on model fine-tuning, training configuration, and optimization who benefit from structured, agent-consumable instructions.

Part of: 🏪 huggingface-skills

Installation

Step 1. Add the marketplace in Claude Code:
/plugin marketplace add huggingface/skills

Step 2. Install the plugin:
/plugin install hugging-face-model-trainer@huggingface-skills
Last Updated: Jan 14, 2026

More from this repository (10)

  • 🏪 huggingface-skills (Marketplace) - Agent Skills for AI/ML tasks including dataset creation, model training, evaluation, and research paper publishing on Hugging Face Hub
  • 🔌 hugging-face-jobs (Plugin)
  • 🔌 hugging-face-evaluation (Plugin)
  • 🔌 hugging-face-tool-builder (Plugin)
  • 🔌 hugging-face-datasets (Plugin)
  • 🔌 hugging-face-cli (Plugin)
  • 🔌 hugging-face-paper-publisher (Plugin)
  • 🔌 hugging-face-trackio (Plugin)
  • 🎯 hf-cli (Skill)
  • 🎯 hugging-face-model-trainer (Skill) - Trains and fine-tunes language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure, supporting SFT, DPO, GRPO, reward modeling, and GGUF conversion for local deployment.