{"id":"58746eb3-3790-4a56-8b92-246efb751878","shortId":"TBaU39","kind":"skill","title":"arize-annotation","tagline":"INVOKE THIS SKILL when creating, managing, or using annotation configs or annotation queues on Arize (categorical, continuous, freeform), or applying human annotations to project spans via the Python SDK. Configs are the label schema for human feedback; queues are review workflows that route records to annotators.","description":"# Arize Annotation Skill\n\n> **`SPACE`** — All `--space` flags and the `ARIZE_SPACE` env var accept a space **name** (e.g., `my-workspace`) or a base64 space **ID** (e.g., `U3BhY2U6...`). Find yours with `ax spaces list`.\n\nThis skill covers **annotation configs** (the label schema) and **annotation queues** (human review workflows), as well as programmatically annotating project spans via the Python SDK.\n\n**Scope:** Human labeling in Arize attaches values defined by configs to **spans**, **dataset examples**, **experiment-related records**, and **queue items** in the product UI. This skill covers `ax annotation-configs`, `ax annotation-queues`, and bulk span updates with `ArizeClient.spans.update_annotations`.\n\n---\n\n## Prerequisites\n\nProceed directly with the task — run the `ax` command you need. Do NOT check versions, env vars, or profiles upfront.\n\nIf an `ax` command fails, troubleshoot based on the error:\n- `command not found` or version error → see references/ax-setup.md\n- `401 Unauthorized` / missing API key → run `ax profiles show` to inspect the current profile. If the profile is missing or the API key is wrong, follow references/ax-profiles.md to create/update it. If the user doesn't have their key, direct them to https://app.arize.com/admin > API Keys\n- Space unknown → run `ax spaces list` to pick by name, or ask the user\n- **Security:** Never read `.env` files or search the filesystem for credentials. Use `ax profiles` for Arize credentials and `ax ai-integrations` for LLM provider keys. 
If credentials are not available through these channels, ask the user.\n\n---\n\n## Concepts\n\n### What is an Annotation Config?\n\nAn **annotation config** defines the schema for a single type of human feedback label. Before anyone can annotate a span, dataset record, experiment output, or queue item, a config must exist for that label in the space.\n\n| Field | Description |\n|-------|-------------|\n| **Name** | Descriptive identifier (e.g. `Correctness`, `Helpfulness`). Must be unique within the space. |\n| **Type** | `categorical` (pick from a list), `continuous` (numeric range), or `freeform` (free text). |\n| **Values** | For categorical: array of `{\"label\": str, \"score\": number}` pairs. |\n| **Min/Max Score** | For continuous: numeric bounds. |\n| **Optimization Direction** | Whether higher scores are better (`maximize`) or worse (`minimize`). Used to render trends in the UI. |\n\n### Where labels get applied (surfaces)\n\n| Surface | Typical path |\n|---------|----------------|\n| **Project spans** | Python SDK `spans.update_annotations` (below) and/or the Arize UI |\n| **Dataset examples** | Arize UI (human labeling flows); configs must exist in the space |\n| **Experiment outputs** | Often reviewed alongside datasets or traces in the UI — see arize-experiment, arize-dataset |\n| **Annotation queue items** | `ax annotation-queues` CLI (below) and/or the Arize UI; configs must exist |\n\nAlways ensure the relevant **annotation config** exists in the space before expecting labels to persist.\n\n---\n\n## Basic CRUD: Annotation Configs\n\n### List\n\n```bash\nax annotation-configs list --space SPACE\nax annotation-configs list --space SPACE -o json\nax annotation-configs list --space SPACE --limit 20\n```\n\n### Create — Categorical\n\nCategorical configs present a fixed set of labels for reviewers to choose from.\n\n```bash\nax annotation-configs create \\\n  --name \"Correctness\" \\\n  --space SPACE \\\n  --type categorical \\\n  --value correct \\\n 
 --value incorrect \\\n  --optimization-direction maximize\n```\n\nCommon binary label pairs:\n- `correct` / `incorrect`\n- `helpful` / `unhelpful`\n- `safe` / `unsafe`\n- `relevant` / `irrelevant`\n- `pass` / `fail`\n\n### Create — Continuous\n\nContinuous configs let reviewers enter a numeric score within a defined range.\n\n```bash\nax annotation-configs create \\\n  --name \"Quality Score\" \\\n  --space SPACE \\\n  --type continuous \\\n  --min-score 0 \\\n  --max-score 10 \\\n  --optimization-direction maximize\n```\n\n### Create — Freeform\n\nFreeform configs collect open-ended text feedback. No additional flags needed beyond name, space, and type.\n\n```bash\nax annotation-configs create \\\n  --name \"Reviewer Notes\" \\\n  --space SPACE \\\n  --type freeform\n```\n\n### Get\n\n```bash\nax annotation-configs get NAME_OR_ID\nax annotation-configs get NAME_OR_ID -o json\nax annotation-configs get NAME_OR_ID --space SPACE   # required when using name instead of ID\n```\n\n### Delete\n\n```bash\nax annotation-configs delete NAME_OR_ID\nax annotation-configs delete NAME_OR_ID --space SPACE   # required when using name instead of ID\nax annotation-configs delete NAME_OR_ID --force   # skip confirmation\n```\n\n**Note:** Deletion is irreversible. Deleting a config also removes its associations with annotation queues (the queues themselves may remain; fix their config associations in the Arize UI if needed).\n\n---\n\n## Annotation Queues: `ax annotation-queues`\n\nAnnotation queues route records (spans, dataset examples, experiment runs) to human reviewers. 
Each queue is linked to one or more annotation configs that define what labels reviewers can apply.\n\n### List / Get\n\n```bash\nax annotation-queues list --space SPACE\nax annotation-queues list --space SPACE -o json\n\nax annotation-queues get NAME_OR_ID --space SPACE\nax annotation-queues get NAME_OR_ID --space SPACE -o json\n```\n\n### Create\n\nAt least one `--annotation-config-id` is required.\n\n```bash\nax annotation-queues create \\\n  --name \"Correctness Review\" \\\n  --space SPACE \\\n  --annotation-config-id CONFIG_ID \\\n  --annotator-email reviewer@example.com \\\n  --instructions \"Label each response as correct or incorrect.\" \\\n  --assignment-method all   # or: random\n```\n\nRepeat `--annotation-config-id` and `--annotator-email` to attach multiple configs or reviewers.\n\n### Update\n\nList flags (`--annotation-config-id`, `--annotator-email`) **fully replace** existing values when provided — pass all desired values, not just the new ones.\n\n```bash\nax annotation-queues update NAME_OR_ID --space SPACE --name \"New Name\"\nax annotation-queues update NAME_OR_ID --space SPACE --instructions \"Updated instructions\"\nax annotation-queues update NAME_OR_ID --space SPACE \\\n  --annotation-config-id CONFIG_ID_A \\\n  --annotation-config-id CONFIG_ID_B\n```\n\n### Delete\n\n```bash\nax annotation-queues delete NAME_OR_ID --space SPACE\nax annotation-queues delete NAME_OR_ID --space SPACE --force   # skip confirmation\n```\n\n### List Records\n\n```bash\nax annotation-queues list-records NAME_OR_ID --space SPACE\nax annotation-queues list-records NAME_OR_ID --space SPACE --limit 50 -o json\n```\n\n### Submit an Annotation for a Record\n\nAnnotations are upserted by config name — call once per annotation config. 
Supply at least one of `--score`, `--label`, or `--text`.\n\n```bash\nax annotation-queues annotate-record NAME_OR_ID RECORD_ID \\\n  --annotation-name \"Correctness\" \\\n  --label \"correct\" \\\n  --space SPACE\n\nax annotation-queues annotate-record NAME_OR_ID RECORD_ID \\\n  --annotation-name \"Quality Score\" \\\n  --score 8.5 \\\n  --text \"Response was accurate but slightly verbose.\" \\\n  --space SPACE\n```\n\n### Assign a Record\n\nAssign a specific record for review:\n\n```bash\nax annotation-queues assign-record NAME_OR_ID RECORD_ID --space SPACE\n```\n\n### Delete Records\n\n```bash\nax annotation-queues delete-records NAME_OR_ID --space SPACE\n```\n\n---\n\n## Applying Annotations to Spans (Python SDK)\n\nUse the Python SDK to bulk-apply annotations to **project spans** when you already have labels (e.g., from a review export or an external labeling tool).\n\n```python\nimport os\n\nimport pandas as pd\nfrom arize import ArizeClient\n\nclient = ArizeClient(api_key=os.environ[\"ARIZE_API_KEY\"])\n\n# Build a DataFrame with annotation columns\n# Required: context.span_id + at least one annotation.<name>.label or annotation.<name>.score\nannotations_df = pd.DataFrame([\n    {\n        \"context.span_id\": \"span_001\",\n        \"annotation.Correctness.label\": \"correct\",\n        \"annotation.Correctness.updated_by\": \"reviewer@example.com\",\n    },\n    {\n        \"context.span_id\": \"span_002\",\n        \"annotation.Correctness.label\": \"incorrect\",\n        \"annotation.Correctness.updated_by\": \"reviewer@example.com\",\n    },\n])\n\nresponse = client.spans.update_annotations(\n    space_id=os.environ[\"ARIZE_SPACE\"],\n    project_name=\"your-project\",\n    dataframe=annotations_df,\n    validate=True,\n)\n```\n\n**DataFrame column schema:**\n\n| Column | Required | Description |\n|--------|----------|-------------|\n| `context.span_id` | yes | The span to annotate |\n| `annotation.<name>.label` | one of | Categorical or freeform 
label |\n| `annotation.<name>.score` | one of | Numeric score |\n| `annotation.<name>.updated_by` | no | Annotator identifier (email or name) |\n| `annotation.<name>.updated_at` | no | Timestamp in milliseconds since epoch |\n| `annotation.notes` | no | Freeform notes on the span |\n\n**Limitation:** Annotations apply only to spans within 31 days prior to submission.\n\n---\n\n## Troubleshooting\n\n| Problem | Solution |\n|---------|----------|\n| `ax: command not found` | See references/ax-setup.md |\n| `401 Unauthorized` | API key may not have access to this space. Verify at https://app.arize.com/admin > API Keys |\n| `Annotation config not found` | `ax annotation-configs list --space SPACE` (or use `ax annotation-configs get NAME_OR_ID --space SPACE`) |\n| `409 Conflict on create` | Name already exists in the space. Use a different name or get the existing config ID. |\n| Queue not found | `ax annotation-queues list --space SPACE`; verify the queue name or ID |\n| Record not appearing in queue | Ensure the annotation config linked to the queue exists; check `ax annotation-configs list --space SPACE` |\n| Span SDK errors or missing spans | Confirm `project_name`, `space_id`, and span IDs; use arize-trace to export spans |\n\n---\n\n## Related Skills\n\n- **arize-trace**: Export spans to find span IDs and time ranges\n- **arize-dataset**: Find dataset IDs and example IDs\n- **arize-evaluator**: Automated LLM-as-judge alongside human annotation\n- **arize-experiment**: Experiments tied to datasets and evaluation workflows\n- **arize-link**: Deep links to annotation configs and queues in the Arize UI\n\n---\n\n## Save Credentials for Future Use\n\nSee references/ax-profiles.md § Save Credentials for Future 
Use.","tags":["arize","annotation","skills","arize-ai","agent-skills","ai-agents","ai-observability","claude-code","codex","cursor","datasets","experiments"],"capabilities":["skill","source-arize-ai","skill-arize-annotation","topic-agent-skills","topic-ai-agents","topic-ai-observability","topic-arize","topic-claude-code","topic-codex","topic-cursor","topic-datasets","topic-experiments","topic-llmops","topic-tracing"],"categories":["arize-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/Arize-ai/arize-skills/arize-annotation","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add Arize-ai/arize-skills","source_repo":"https://github.com/Arize-ai/arize-skills","install_from":"skills.sh"}},"qualityScore":"0.456","qualityRationale":"deterministic score 0.46 from registry signals: · indexed on github topic:agent-skills · 13 github stars · SKILL.md body (10,234 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-24T01:02:56.117Z","embedding":null,"createdAt":"2026-04-23T13:03:46.975Z","updatedAt":"2026-04-24T01:02:56.117Z","lastSeenAt":"2026-04-24T01:02:56.117Z",
"prices":[{"id":"aa328478-f28c-46be-9a09-e7bf112fe174","listingId":"58746eb3-3790-4a56-8b92-246efb751878","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"Arize-ai","category":"arize-skills","install_from":"skills.sh"},"createdAt":"2026-04-23T13:03:46.975Z"}],"sources":[{"listingId":"58746eb3-3790-4a56-8b92-246efb751878","source":"github","sourceId":"Arize-ai/arize-skills/arize-annotation","sourceUrl":"https://github.com/Arize-ai/arize-skills/tree/main/skills/arize-annotation","isPrimary":false,"firstSeenAt":"2026-04-23T13:03:46.975Z","lastSeenAt":"2026-04-24T01:02:56.117Z"}],"details":{"listingId":"58746eb3-3790-4a56-8b92-246efb751878","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"Arize-ai","slug":"arize-annotation","github":{"repo":"Arize-ai/arize-skills","stars":13,"topics":["agent-skills","ai-agents","ai-observability","arize","claude-code","codex","cursor","datasets","experiments","llmops","tracing"],"license":"mit","html_url":"https://github.com/Arize-ai/arize-skills","pushed_at":"2026-04-24T00:52:08Z","description":"Agent skills for Arize — datasets, experiments, and traces via the ax CLI","skill_md_sha":"0e66ee4622d6cde086899ed35c8eab069566ae1c","skill_md_path":"skills/arize-annotation/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/Arize-ai/arize-skills/tree/main/skills/arize-annotation"},"layout":"multi","source":"github","category":"arize-skills","frontmatter":{"name":"arize-annotation","description":"INVOKE THIS SKILL when creating, managing, or using annotation configs or annotation queues on Arize (categorical, continuous, freeform), or applying human annotations to project spans via the Python SDK. 
Configs are the label schema for human feedback; queues are review workflows that route records to annotators. Triggers: annotation config, annotation queue, label schema, human feedback schema, bulk annotate spans, update_annotations, labeling queue, annotate record."},"skills_sh_url":"https://skills.sh/Arize-ai/arize-skills/arize-annotation"},"updatedAt":"2026-04-24T01:02:56.117Z"}}