{"id":"99f64468-145e-4a45-b8af-f81287e7a511","shortId":"TcGfnd","kind":"skill","title":"arize-prompt-optimization","tagline":"INVOKE THIS SKILL when optimizing, improving, or debugging LLM prompts using production trace data, evaluations, and annotations. Also use when the user wants to make their AI respond better or improve AI output quality. Covers extracting prompts from spans, gathering performance","description":"# Arize Prompt Optimization Skill\n\n> **`SPACE`** — All `--space` flags and the `ARIZE_SPACE` env var accept a space **name** (e.g., `my-workspace`) or a base64 space **ID** (e.g., `U3BhY2U6...`). Find yours with `ax spaces list`.\n\n## Concepts\n\n### Where Prompts Live in Trace Data\n\nLLM applications emit spans following OpenInference semantic conventions. Prompts are stored in different span attributes depending on the span kind and instrumentation:\n\n| Column | What it contains | When to use |\n|--------|-----------------|-------------|\n| `attributes.llm.input_messages` | Structured chat messages (system, user, assistant, tool) in role-based format | **Primary source** for chat-based LLM prompts |\n| `attributes.llm.input_messages.roles` | Array of roles: `system`, `user`, `assistant`, `tool` | Extract individual message roles |\n| `attributes.llm.input_messages.contents` | Array of message content strings | Extract message text |\n| `attributes.input.value` | Serialized prompt or user question (generic, all span kinds) | Fallback when structured messages are not available |\n| `attributes.llm.prompt_template.template` | Template with `{variable}` placeholders (e.g., `\"Answer {question} using {context}\"`) | When the app uses prompt templates |\n| `attributes.llm.prompt_template.variables` | Template variable values (JSON object) | See what values were substituted into the template |\n| `attributes.output.value` | Model response text | See what the LLM produced |\n| `attributes.llm.output_messages` | Structured model output (including tool calls) | Inspect tool-calling responses |\n\n### Finding Prompts by Span Kind\n\n- **LLM span** (`attributes.openinference.span.kind = 'LLM'`): Check `attributes.llm.input_messages` for structured chat messages, OR `attributes.input.value` for a serialized prompt. Check `attributes.llm.prompt_template.template` for the template.\n- **Chain/Agent span**: `attributes.input.value` contains the user's question. The actual LLM prompt lives on **child LLM spans** -- navigate down the trace tree.\n- **Tool span**: `attributes.input.value` has tool input, `attributes.output.value` has tool result. 
### Performance Signal Columns\n\nThese columns carry the feedback data used for optimization:\n\n| Column pattern | Source | What it tells you |\n|---------------|--------|-------------------|\n| `annotation.<name>.label` | Human reviewers | Categorical grade (e.g., `correct`, `incorrect`, `partial`) |\n| `annotation.<name>.score` | Human reviewers | Numeric quality score (e.g., 0.0 - 1.0) |\n| `annotation.<name>.text` | Human reviewers | Freeform explanation of the grade |\n| `eval.<name>.label` | LLM-as-judge evals | Automated categorical assessment |\n| `eval.<name>.score` | LLM-as-judge evals | Automated numeric score |\n| `eval.<name>.explanation` | LLM-as-judge evals | Why the eval gave that score -- **most valuable for optimization** |\n| `attributes.input.value` | Trace data | What went into the LLM |\n| `attributes.output.value` | Trace data | What the LLM produced |\n| `{experiment_name}.output` | Experiment runs | Output from a specific experiment |\n\n## Prerequisites\n\nProceed directly with the task — run the `ax` command you need. Do NOT check versions, env vars, or profiles upfront.\n\nIf an `ax` command fails, troubleshoot based on the error:\n- `command not found` or version error → see references/ax-setup.md\n- `401 Unauthorized` / missing API key → run `ax profiles show` to inspect the current profile. If the profile is missing or the API key is wrong, follow references/ax-profiles.md to create/update it. If the user doesn't have their key, direct them to https://app.arize.com/admin > API Keys\n- Space unknown → run `ax spaces list` to pick by name, or ask the user\n- Project unclear → ask the user, or run `ax projects list -o json --limit 100` and present as selectable options\n- LLM provider call fails (missing OPENAI_API_KEY / ANTHROPIC_API_KEY) → run `ax ai-integrations list --space SPACE` to check for platform-managed credentials. If none exist, ask the user to provide the key or create an integration via the **arize-ai-provider-integration** skill\n- **Security:** Never read `.env` files or search the filesystem for credentials. Use `ax profiles` for Arize credentials and `ax ai-integrations` for LLM provider keys. If credentials are not available through these channels, ask the user.\n\n## Phase 1: Extract the Current Prompt\n\n### Find LLM spans containing prompts\n\n```bash\n# Sample LLM spans (where prompts live)\nax spans export PROJECT --filter \"attributes.openinference.span.kind = 'LLM'\" -l 10 --stdout\n\n# Filter by model\nax spans export PROJECT --filter \"attributes.llm.model_name = 'gpt-4o'\" -l 10 --stdout\n\n# Filter by span name (e.g., a specific LLM call)\nax spans export PROJECT --filter \"name = 'ChatCompletion'\" -l 10 --stdout\n```\n\n### Export a trace to inspect prompt structure\n\n```bash\n# Export all spans in a trace\nax spans export PROJECT --trace-id TRACE_ID\n\n# Export a single span\nax spans export PROJECT --span-id SPAN_ID\n```\n\n### Extract prompts from exported JSON\n\n```bash\n# Extract structured chat messages (system + user + assistant)\njq '.[0] | {\n  messages: .attributes.llm.input_messages,\n  model: .attributes.llm.model_name\n}' trace_*/spans.json\n\n# Extract the system prompt specifically\njq '[.[] | select(.attributes.llm.input_messages.roles[]? 
== \"system\")] | .[0].attributes.llm.input_messages' trace_*/spans.json\n\n# Extract prompt template and variables\njq '.[0].attributes.llm.prompt_template' trace_*/spans.json\n\n# Extract from input.value (fallback for non-structured prompts)\njq '.[0].attributes.input.value' trace_*/spans.json\n```\n\n### Reconstruct the prompt as messages\n\nOnce you have the span data, reconstruct the prompt as a messages array:\n\n```json\n[\n  {\"role\": \"system\", \"content\": \"You are a helpful assistant that...\"},\n  {\"role\": \"user\", \"content\": \"Given {input}, answer the question: {question}\"}\n]\n```\n\nIf the span has `attributes.llm.prompt_template.template`, the prompt uses variables. Preserve these placeholders (`{variable}` or `{{variable}}`) -- they are substituted at runtime.\n\n## Phase 2: Gather Performance Data\n\n### From traces (production feedback)\n\n```bash\n# Find error spans -- these indicate prompt failures\nax spans export PROJECT \\\n  --filter \"status_code = 'ERROR' AND attributes.openinference.span.kind = 'LLM'\" \\\n  -l 20 --stdout\n\n# Find spans with low eval scores\nax spans export PROJECT \\\n  --filter \"annotation.correctness.label = 'incorrect'\" \\\n  -l 20 --stdout\n\n# Find spans with high latency (may indicate overly complex prompts)\nax spans export PROJECT \\\n  --filter \"attributes.openinference.span.kind = 'LLM' AND latency_ms > 10000\" \\\n  -l 20 --stdout\n\n# Export error traces for detailed inspection\nax spans export PROJECT --trace-id TRACE_ID\n```\n\n### From datasets and experiments\n\n```bash\n# Export a dataset (ground truth examples)\nax datasets export DATASET_NAME --space SPACE\n# -> dataset_*/examples.json\n\n# Export experiment results (what the LLM produced)\nax experiments export EXPERIMENT_NAME --dataset DATASET_NAME --space SPACE\n# -> experiment_*/runs.json\n```\n\n### Merge dataset + experiment for analysis\n\nJoin the two files by `example_id` to see inputs alongside outputs and evaluations:\n\n```bash\n# Count examples and runs\njq 'length' dataset_*/examples.json\njq 'length' experiment_*/runs.json\n\n# View a single joined record\njq -s '\n  .[0] as $dataset |\n  .[1][0] as $run |\n  ($dataset[] | select(.id == $run.example_id)) as $example |\n  {\n    input: $example,\n    output: $run.output,\n    evaluations: $run.evaluations\n  }\n' dataset_*/examples.json experiment_*/runs.json\n\n# Find failed examples (where eval score < threshold)\njq '[.[] | select(.evaluations.correctness.score < 0.5)]' experiment_*/runs.json\n```\n\n### Identify what to optimize\n\nLook for patterns across failures:\n\n1. **Compare outputs to ground truth**: Where does the LLM output differ from expected?\n2. **Read eval explanations**: `eval.*.explanation` tells you WHY something failed\n3. **Check annotation text**: Human feedback describes specific issues\n4. **Look for verbosity mismatches**: If outputs are too long/short vs ground truth\n5. **Check format compliance**: Are outputs in the expected format?\n\n## Phase 3: Optimize the Prompt\n\n### The Optimization Meta-Prompt\n\nUse this template to generate an improved version of the prompt. Fill in the three placeholders and send it to your LLM (GPT-4o, Claude, etc.):\n\n````\nYou are an expert in prompt optimization. 
Given the original baseline prompt\nand the associated performance data (inputs, outputs, evaluation labels, and\nexplanations), generate a revised version that improves results.\n\nORIGINAL BASELINE PROMPT\n========================\n\n{PASTE_ORIGINAL_PROMPT_HERE}\n\n========================\n\nPERFORMANCE DATA\n================\n\nThe following records show how the current prompt performed. Each record\nincludes the input, the LLM output, and evaluation feedback:\n\n{PASTE_RECORDS_HERE}\n\n================\n\nHOW TO USE THIS DATA\n\n1. Compare outputs: Look at what the LLM generated vs what was expected\n2. Review eval scores: Check which examples scored poorly and why\n3. Examine annotations: Human feedback shows what worked and what didn't\n4. Identify patterns: Look for common issues across multiple examples\n5. Focus on failures: The rows where the output DIFFERS from the expected\n   value are the ones that need fixing\n\nALIGNMENT STRATEGY\n\n- If outputs have extra text or reasoning not present in the ground truth,\n  remove instructions that encourage explanation or verbose reasoning\n- If outputs are missing information, add instructions to include it\n- If outputs are in the wrong format, add explicit format instructions\n- Focus on the rows where the output differs from the target -- these are\n  the failures to fix\n\nRULES\n\nMaintain Structure:\n- Use the same template variables as the current prompt ({var} or {{var}})\n- Don't change sections that are already working\n- Preserve the exact return format instructions from the original prompt\n\nAvoid Overfitting:\n- DO NOT copy examples verbatim into the prompt\n- DO NOT quote specific test data outputs exactly\n- INSTEAD: Extract the ESSENCE of what makes good vs bad outputs\n- INSTEAD: Add general guidelines and principles\n- INSTEAD: If adding few-shot examples, create SYNTHETIC examples that\n  demonstrate the principle, not real data from above\n\nGoal: Create a prompt that generalizes well to new inputs, not one that\nmemorizes the test data.\n\nOUTPUT FORMAT\n\nReturn the revised prompt as a JSON array of messages:\n\n[\n  {\"role\": \"system\", \"content\": \"...\"},\n  {\"role\": \"user\", \"content\": \"...\"}\n]\n\nAlso provide a brief reasoning section (bulleted list) explaining:\n- What problems you found\n- How the revised prompt addresses each one\n````\n\n### Preparing the performance data\n\nFormat the records as a JSON array before pasting into the template:\n\n```bash\n# From dataset + experiment: join and select relevant columns\njq -s '\n  .[0] as $ds |\n  [.[1][] | . as $run |\n    ($ds[] | select(.id == $run.example_id)) as $ex |\n    {\n      input: $ex.input,\n      expected: $ex.expected_output,\n      actual_output: $run.output,\n      eval_score: $run.evaluations.correctness.score,\n      eval_label: $run.evaluations.correctness.label,\n      eval_explanation: $run.evaluations.correctness.explanation\n    }\n  ]\n' dataset_*/examples.json experiment_*/runs.json\n\n# From exported spans: extract input/output pairs with annotations\njq '[.[] | select(.attributes.openinference.span.kind == \"LLM\") | {\n  input: .attributes.input.value,\n  output: .attributes.output.value,\n  status: .status_code,\n  model: .attributes.llm.model_name\n}]' trace_*/spans.json\n```\n\n### Applying the revised prompt\n\nAfter the LLM returns the revised messages array:\n\n1. Compare the original and revised prompts side by side\n2. Verify all template variables are preserved\n3. 
Check that format instructions are intact\n4. Test on a few examples before full deployment\n\n## Phase 4: Iterate\n\n### The optimization loop\n\n```\n1. Extract prompt    -> Phase 1 (once)\n2. Run experiment    -> ax experiments create ...\n3. Export results    -> ax experiments export EXPERIMENT_NAME --dataset DATASET_NAME --space SPACE\n4. Analyze failures  -> jq to find low scores\n5. Run meta-prompt   -> Phase 3 with new failure data\n6. Apply revised prompt\n7. Repeat from step 2\n```\n\n### Measure improvement\n\n```bash\n# Compare scores across experiments\n# Experiment A (baseline)\njq '[.[] | .evaluations.correctness.score] | add / length' experiment_a/runs.json\n\n# Experiment B (optimized)\njq '[.[] | .evaluations.correctness.score] | add / length' experiment_b/runs.json\n\n# Find examples that flipped from fail to pass\njq -s '\n  [.[0][] | select(.evaluations.correctness.label == \"incorrect\")] as $fails |\n  [.[1][] | select(.evaluations.correctness.label == \"correct\") |\n    select(.example_id as $id | $fails | any(.example_id == $id))\n  ] | length\n' experiment_a/runs.json experiment_b/runs.json\n```\n\n### A/B compare two prompts\n\n1. Create two experiments against the same dataset, each using a different prompt version\n2. Export both: `ax experiments export EXP_A --dataset DATASET_NAME --space SPACE`, then the same for `EXP_B`\n3. Compare average scores, failure rates, and specific example flips\n4. Check for regressions -- examples that passed with prompt A but fail with prompt B (swap the `incorrect`/`correct` labels in the flip query above to count them)\n\n## Prompt Engineering Best Practices\n\nApply these when writing or revising prompts:\n\n| Technique | When to apply | Example |\n|-----------|--------------|---------|\n| Clear, detailed instructions | Output is vague or off-topic | \"Classify the sentiment as exactly one of: positive, negative, neutral\" |\n| Instructions at the beginning | Model ignores later instructions | Put the task description before examples |\n| Step-by-step breakdowns | Complex multi-step processes | \"First extract entities, then classify each, then summarize\" |\n| Specific personas | Need consistent style/tone | \"You are a senior financial analyst writing for institutional investors\" |\n| Delimiter tokens | Sections blend together | Use `---`, `###`, or XML tags to separate input from instructions |\n| Few-shot examples | Output format needs clarification | Show 2-3 synthetic input/output pairs |\n| Output length specifications | Responses are too long or short | \"Respond in exactly 2-3 sentences\" |\n| Reasoning instructions | Accuracy is critical | \"Think step by step before answering\" |\n| \"I don't know\" guidelines | Hallucination is a risk | \"If the answer is not in the provided context, say 'I don't have enough information'\" |\n\n### Variable preservation\n\nWhen optimizing prompts that use template variables:\n\n- **Single braces** (`{variable}`): Python f-string / Jinja style. Most common in Arize.\n- **Double braces** (`{{variable}}`): Mustache style. Used when the framework requires it.\n- Never add or remove variable placeholders during optimization\n- Never rename variables -- the runtime substitution depends on exact names\n- If adding few-shot examples, use literal values, not variable placeholders
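\n\nA quick mechanical check for the first two rules -- a minimal sketch assuming the original and revised prompt text were saved to hypothetical `original.txt` and `revised.txt` files (an empty diff means the placeholders survived):\n\n```bash\n# Extract {variable} placeholders from each version and compare the sorted sets\ngrep -o '{[a-zA-Z0-9_]*}' original.txt | sort -u > vars_before.txt\ngrep -o '{[a-zA-Z0-9_]*}' revised.txt | sort -u > vars_after.txt\ndiff vars_before.txt vars_after.txt\n```\n\n## Workflows\n\n### Optimize a prompt from a failing trace\n\n1. Find failing traces:\n   ```bash\n   ax traces list PROJECT --filter \"status_code = 'ERROR'\" --limit 5\n   ```\n2. Export the trace:\n   ```bash\n   ax spans export PROJECT --trace-id TRACE_ID\n   ```\n3. 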
Extract the prompt from the LLM span:\n   ```bash\n   jq '[.[] | select(.attributes.openinference.span.kind == \"LLM\")][0] | {\n     messages: .attributes.llm.input_messages,\n     template: .attributes.llm.prompt_template,\n     output: .attributes.output.value,\n     error: .attributes.exception.message\n   }' trace_*/spans.json\n   ```\n4. Identify what failed from the error message or output\n5. Fill in the optimization meta-prompt (Phase 3) with the prompt and error context\n6. Apply the revised prompt\n\n### Optimize using a dataset and experiment\n\n1. Find the dataset and experiment:\n   ```bash\n   ax datasets list --space SPACE\n   ax experiments list --dataset DATASET_NAME --space SPACE\n   ```\n2. Export both:\n   ```bash\n   ax datasets export DATASET_NAME --space SPACE\n   ax experiments export EXPERIMENT_NAME --dataset DATASET_NAME --space SPACE\n   ```\n3. Prepare the joined data for the meta-prompt\n4. Run the optimization meta-prompt\n5. Create a new experiment with the revised prompt to measure improvement\n\n### Debug a prompt that produces wrong format\n\n1. Export spans where the output format is wrong:\n   ```bash\n   ax spans export PROJECT \\\n     --filter \"attributes.openinference.span.kind = 'LLM' AND annotation.format.label = 'incorrect'\" \\\n     -l 10 --stdout > bad_format.json\n   ```\n2. Look at what the LLM is producing vs what was expected\n3. Add explicit format instructions to the prompt (JSON schema, examples, delimiters)\n4. Common fix: add a few-shot example showing the exact desired output format, as sketched below
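\n\nFor example, a format instruction could look like this system message -- a minimal illustrative sketch, not taken from a real project:\n\n```json\n[\n  {\"role\": \"system\", \"content\": \"Respond with a single JSON object with keys sentiment (one of positive, negative, neutral) and confidence (a number from 0.0 to 1.0). Output only the JSON -- no prose before or after it.\"}\n]\n```\n\n### Reduce hallucination in a RAG prompt\n\n1. Find traces where the model hallucinated:\n   ```bash\n   ax spans export PROJECT \\\n     --filter \"annotation.faithfulness.label = 'unfaithful'\" \\\n     -l 20 --stdout\n   ```\n2. Export and inspect the retriever + LLM spans together:\n   ```bash\n   ax spans export PROJECT --trace-id TRACE_ID\n   jq '[.[] | {kind: .attributes.openinference.span.kind, name, input: .attributes.input.value, output: .attributes.output.value}]' trace_*/spans.json\n   ```\n3. Check if the retrieved context actually contained the answer\n4. Add grounding instructions to the system prompt: \"Only use information from the provided context. If the answer is not in the context, say so.\"\n\n## Troubleshooting\n\n| Problem | Solution |\n|---------|----------|\n| `ax: command not found` | See references/ax-setup.md |\n| `No profile found` | No profile is configured. See references/ax-profiles.md to create one. |\n| No `input_messages` on span | Check span kind -- Chain/Agent spans store prompts on child LLM spans, not on themselves |\n| Prompt template is `null` | Not all instrumentations emit `prompt_template`. Use `input_messages` or `input.value` instead |\n| Variables lost after optimization | Verify the revised prompt preserves all `{var}` placeholders from the original |\n| Optimization makes things worse | Check for overfitting -- the meta-prompt may have memorized test data. 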
Ensure few-shot examples are synthetic |\n| No eval/annotation columns | Run evaluations first (via Arize UI or SDK), then re-export |\n| Experiment output column not found | The column name is `{experiment_name}.output` -- check exact experiment name via `ax experiments get` |\n| `jq` errors on span JSON | Ensure you're targeting the correct file path (e.g., `trace_*/spans.json`) |","tags":["arize","prompt","optimization","skills","arize-ai","agent-skills","ai-agents","ai-observability","claude-code","codex","cursor","datasets"],"capabilities":["skill","source-arize-ai","skill-arize-prompt-optimization","topic-agent-skills","topic-ai-agents","topic-ai-observability","topic-arize","topic-claude-code","topic-codex","topic-cursor","topic-datasets","topic-experiments","topic-llmops","topic-tracing"],"categories":["arize-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/Arize-ai/arize-skills/arize-prompt-optimization","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add Arize-ai/arize-skills","source_repo":"https://github.com/Arize-ai/arize-skills","install_from":"skills.sh"}},"qualityScore":"0.456","qualityRationale":"deterministic score 0.46 from registry signals: · indexed on github topic:agent-skills · 13 github stars · SKILL.md body (18,637 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-24T01:02:56.712Z","embedding":null,"createdAt":"2026-04-23T13:03:47.637Z","updatedAt":"2026-04-24T01:02:56.712Z","lastSeenAt":"2026-04-24T01:02:56.712Z",
"prices":[{"id":"f1b29870-589f-46f0-9cd5-eb30fa9c1111","listingId":"99f64468-145e-4a45-b8af-f81287e7a511","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"Arize-ai","category":"arize-skills","install_from":"skills.sh"},"createdAt":"2026-04-23T13:03:47.637Z"}],"sources":[{"listingId":"99f64468-145e-4a45-b8af-f81287e7a511","source":"github","sourceId":"Arize-ai/arize-skills/arize-prompt-optimization","sourceUrl":"https://github.com/Arize-ai/arize-skills/tree/main/skills/arize-prompt-optimization","isPrimary":false,"firstSeenAt":"2026-04-23T13:03:47.637Z","lastSeenAt":"2026-04-24T01:02:56.712Z"}],"details":{"listingId":"99f64468-145e-4a45-b8af-f81287e7a511","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"Arize-ai","slug":"arize-prompt-optimization","github":{"repo":"Arize-ai/arize-skills","stars":13,"topics":["agent-skills","ai-agents","ai-observability","arize","claude-code","codex","cursor","datasets","experiments","llmops","tracing"],"license":"mit","html_url":"https://github.com/Arize-ai/arize-skills","pushed_at":"2026-04-24T00:52:08Z","description":"Agent skills for Arize — datasets, experiments, and traces via the ax CLI","skill_md_sha":"968255da13b33395210af8562f82df12265f319e","skill_md_path":"skills/arize-prompt-optimization/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/Arize-ai/arize-skills/tree/main/skills/arize-prompt-optimization"},"layout":"multi","source":"github","category":"arize-skills","frontmatter":{"name":"arize-prompt-optimization","description":"INVOKE THIS SKILL when optimizing, improving, or debugging LLM prompts using production trace data, evaluations, and annotations. Also use when the user wants to make their AI respond better or improve AI output quality. Covers extracting prompts from spans, gathering performance signal, and running a data-driven optimization loop using the ax CLI."},"skills_sh_url":"https://skills.sh/Arize-ai/arize-skills/arize-prompt-optimization"},"updatedAt":"2026-04-24T01:02:56.712Z"}}