Data-driven prompt optimization loop for LLM apps: extracts prompts from OpenInference trace spans (`attributes.llm.input_messages`, `llm.prompt_template.*`), joins them with annotation and LLM-as-judge evaluation signals, then iterates via the `ax` CLI. Use when improving, debugging, or optimizing prompts based on production trace data rather than guesswork.
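The extract-and-join step can be sketched with pandas. Everything below is an assumption for illustration, not the skill's actual code: the flat `context.span_id` key, the serialized `attributes.llm.input_messages` column, and the shape of the eval table are modeled loosely on a Phoenix-style spans export.

```python
import json
import pandas as pd

# Hypothetical spans export: one row per LLM span, keyed by "context.span_id"
# (both column names are assumptions about the export shape).
spans = pd.DataFrame([
    {"context.span_id": "a1",
     "attributes.llm.input_messages": json.dumps(
         [{"role": "user", "content": "Summarize the ticket."}])},
    {"context.span_id": "b2",
     "attributes.llm.input_messages": json.dumps(
         [{"role": "user", "content": "Classify the ticket."}])},
])

# Eval signals (human annotations or LLM-as-judge), keyed by the same span id.
evals = pd.DataFrame([
    {"context.span_id": "a1", "label": "correct", "score": 1.0},
    {"context.span_id": "b2", "label": "incorrect", "score": 0.0},
])

# Join prompts with their eval outcomes, then keep only the failing spans:
# these are the concrete production examples that drive the next prompt iteration.
joined = spans.merge(evals, on="context.span_id", how="inner")
failing = joined[joined["score"] < 1.0]
print(failing["attributes.llm.input_messages"].tolist())
```

The failing rows give the optimization loop real inputs that produced bad outputs, which is what makes the iteration data-driven rather than guesswork.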