arize-evaluator
Skill from arize-ai/arize-skills
Arize skill for creating **LLM-as-judge evaluators**, running evaluation tasks, and setting up continuous monitoring. It is part of the Arize platform's set of skills that guide AI coding agents to add observability, run experiments, and optimize prompts for LLM applications using the `ax` CLI.
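The pattern this skill builds on is straightforward: a second model grades an application's outputs against a rubric. Below is a minimal sketch of such a judge written against the plain `openai` client rather than the Arize `ax` CLI or SDK; the prompt, model name, and label rails are illustrative assumptions, not Arize's actual templates.

```python
# Minimal LLM-as-judge sketch (illustrative only; not the Arize ax CLI or SDK API).
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

JUDGE_PROMPT = """You are grading an LLM response for relevance.
Question: {question}
Response: {response}
Answer with exactly one word: "relevant" or "irrelevant"."""

client = OpenAI()

def judge_relevance(question: str, response: str) -> str:
    """Ask a judge model to label a response; returns 'relevant', 'irrelevant', or 'unlabeled'."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model; swap for whichever model you use
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(question=question, response=response),
        }],
        temperature=0,
    )
    label = completion.choices[0].message.content.strip().lower()
    # Constrain output to the expected rails; anything else is treated as unlabeled.
    return label if label in {"relevant", "irrelevant"} else "unlabeled"

if __name__ == "__main__":
    print(judge_relevance("What is Arize?", "Arize is an LLM observability platform."))
```

In practice the same idea is run over a dataset of traced spans and logged back to the platform for monitoring, which is what the evaluator skill automates.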
Installation
- `npx vibeindex add arize-ai/arize-skills --skill arize-evaluator`
- `npx skills add arize-ai/arize-skills --skill arize-evaluator`

The skill is installed to `~/.claude/skills/arize-evaluator/SKILL.md`.
More from this repository (8)
- Skills that guide AI coding agents to add observability, run experiments, and optimize prompts for LLM applications using the Arize platform and the `ax` CLI, with support for tracing, debugging, and production monitoring workflows.
- Skills for AI coding agents to manage datasets, add observability, and run experiments on LLM applications using the Arize platform, with workflows for tracing, debugging, and prompt optimization in production.
- Create and manage annotation configs (categorical, continuous, freeform); bulk-annotate project spans via the Python SDK.
- Create and manage LLM provider credentials (OpenAI, Anthropic, Azure, Bedrock, Vertex, and more).