querying-mlflow-metrics
🎯 Skill from mlflow/skills
Enables AI coding assistants to query MLflow metrics for monitoring and evaluating GenAI application performance.
Same repository: mlflow/skills (8 items)
Installation
npx vibeindex add mlflow/skills --skill querying-mlflow-metrics
npx skills add mlflow/skills --skill querying-mlflow-metrics
Installed file: ~/.claude/skills/querying-mlflow-metrics/SKILL.md
More from this repository (7)
Agent evaluation skill using MLflow for systematically evaluating and improving LLM agent output quality. Covers tool selection accuracy, answer quality, cost reduction, and end-to-end evaluation with datasets, scorers, and tracing.
An MLflow skill for analyzing tracing sessions, giving AI coding assistants deep knowledge of MLflow's tracing, evaluation, and observability for debugging GenAI applications.
Provides onboarding guidance for MLflow, helping AI coding assistants get started with MLflow's tracing, evaluation, and observability features.
Enables AI coding assistants to search and navigate MLflow documentation for building, debugging, and evaluating GenAI applications.
Teaches AI coding assistants how to retrieve and query MLflow traces for analyzing GenAI application performance and behavior.
Teaches AI coding assistants how to analyze MLflow traces for debugging and evaluating GenAI applications with MLflow's observability features.
Guides AI coding assistants in instrumenting applications with MLflow tracing for observability and debugging of GenAI workflows.