{"id":"275005cc-339e-4dfc-aff1-e62edc4d9da6","shortId":"NWJQxz","kind":"skill","title":"Arize Dataset","tagline":"Awesome Copilot skill by Github","description":"# Arize Dataset Skill\n\n## Concepts\n\n- **Dataset** = a versioned collection of examples used for evaluation and experimentation\n- **Dataset Version** = a snapshot of a dataset at a point in time; updates can be in-place or create a new version\n- **Example** = a single record in a dataset with arbitrary user-defined fields (e.g., `question`, `answer`, `context`)\n- **Space** = an organizational container; datasets belong to a space\n\nSystem-managed fields on examples (`id`, `created_at`, `updated_at`) are auto-generated by the server -- never include them in create or append payloads.\n\n## Prerequisites\n\nProceed directly with the task — run the `ax` command you need. Do NOT check versions, env vars, or profiles upfront.\n\nIf an `ax` command fails, troubleshoot based on the error:\n- `command not found` or version error → see references/ax-setup.md\n- `401 Unauthorized` / missing API key → run `ax profiles show` to inspect the current profile. If the profile is missing or the API key is wrong: check `.env` for `ARIZE_API_KEY` and use it to create/update the profile via references/ax-profiles.md. If `.env` has no key either, ask the user for their Arize API key (https://app.arize.com/admin > API Keys)\n- Space ID unknown → check `.env` for `ARIZE_SPACE_ID`, or run `ax spaces list -o json`, or ask the user\n- Project unclear → check `.env` for `ARIZE_DEFAULT_PROJECT`, or ask, or run `ax projects list -o json --limit 100` and present as selectable options\n\n## List Datasets: `ax datasets list`\n\nBrowse datasets in a space. 
Output goes to stdout.\n\n```bash\nax datasets list\nax datasets list --space-id SPACE_ID --limit 20\nax datasets list --cursor CURSOR_TOKEN\nax datasets list -o json\n```\n\n### Flags\n\n| Flag | Type | Default | Description |\n|------|------|---------|-------------|\n| `--space-id` | string | from profile | Filter by space |\n| `--limit, -l` | int | 15 | Max results (1-100) |\n| `--cursor` | string | none | Pagination cursor from previous response |\n| `-o, --output` | string | table | Output format: table, json, csv, parquet, or file path |\n| `-p, --profile` | string | default | Configuration profile |\n\n## Get Dataset: `ax datasets get`\n\nQuick metadata lookup -- returns dataset name, space, timestamps, and version list.\n\n```bash\nax datasets get DATASET_ID\nax datasets get DATASET_ID -o json\n```\n\n### Flags\n\n| Flag | Type | Default | Description |\n|------|------|---------|-------------|\n| `DATASET_ID` | string | required | Positional argument |\n| `-o, --output` | string | table | Output format |\n| `-p, --profile` | string | default | Configuration profile |\n\n### Response fields\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `id` | string | Dataset ID |\n| `name` | string | Dataset name |\n| `space_id` | string | Space this dataset belongs to |\n| `created_at` | datetime | When the dataset was created |\n| `updated_at` | datetime | Last modification time |\n| `versions` | array | List of dataset versions (id, name, dataset_id, created_at, updated_at) |\n\n## Export Dataset: `ax datasets export`\n\nDownload all examples to a file. 
Use `--all` for datasets larger than 500 examples (unlimited bulk export).\n\n```bash\nax datasets export DATASET_ID\n# -> dataset_abc123_20260305_141500/examples.json\n\nax datasets export DATASET_ID --all\nax datasets export DATASET_ID --version-id VERSION_ID\nax datasets export DATASET_ID --output-dir ./data\nax datasets export DATASET_ID --stdout\nax datasets export DATASET_ID --stdout | jq '.[0]'\n```\n\n### Flags\n\n| Flag | Type | Default | Description |\n|------|------|---------|-------------|\n| `DATASET_ID` | string | required | Positional argument |\n| `--version-id` | string | latest | Export a specific dataset version |\n| `--all` | bool | false | Unlimited bulk export (use for datasets > 500 examples) |\n| `--output-dir` | string | `.` | Output directory |\n| `--stdout` | bool | false | Print JSON to stdout instead of file |\n| `-p, --profile` | string | default | Configuration profile |\n\n**Agent auto-escalation rule:** If an export returns exactly 500 examples, the result is likely truncated — re-run with `--all` to get the full dataset.\n\n**Export completeness verification:** After exporting, confirm the row count matches what the server reports:\n```bash\n# Get the server-reported count from dataset metadata\nax datasets get DATASET_ID -o json | jq '.versions[-1] | {version: .id, examples: .example_count}'\n\n# Compare to what was exported\njq 'length' dataset_*/examples.json\n\n# If counts differ, re-export with --all\n```\n\nOutput is a JSON array of example objects. 
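\n\nFor quick downstream checks it can help to separate the server-managed fields from the user-defined ones. A minimal Python sketch -- the inline payload is sample data in the exported shape, not output from a real export:\n\n```python\nimport json\n\n# Sample payload in the shape `ax datasets export DATASET_ID --stdout` emits\npayload = '[{\"id\": \"ex_001\", \"question\": \"What is 2+2?\", \"answer\": \"4\"}]'\nexamples = json.loads(payload)\n\n# `id`, `created_at`, `updated_at` are server-managed; everything else is user-defined\nSYSTEM_FIELDS = {\"id\", \"created_at\", \"updated_at\"}\nuser_fields = sorted(k for k in examples[0] if k not in SYSTEM_FIELDS)\nprint(user_fields)  # ['answer', 'question']\n```\n\n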
Each example has system fields (`id`, `created_at`, `updated_at`) plus all user-defined fields:\n\n```json\n[\n  {\n    \"id\": \"ex_001\",\n    \"created_at\": \"2026-01-15T10:00:00Z\",\n    \"updated_at\": \"2026-01-15T10:00:00Z\",\n    \"question\": \"What is 2+2?\",\n    \"answer\": \"4\",\n    \"topic\": \"math\"\n  }\n]\n```\n\n## Create Dataset: `ax datasets create`\n\nCreate a new dataset from a data file.\n\n```bash\nax datasets create --name \"My Dataset\" --space-id SPACE_ID --file data.csv\nax datasets create --name \"My Dataset\" --space-id SPACE_ID --file data.json\nax datasets create --name \"My Dataset\" --space-id SPACE_ID --file data.jsonl\nax datasets create --name \"My Dataset\" --space-id SPACE_ID --file data.parquet\n```\n\n### Flags\n\n| Flag | Type | Required | Description |\n|------|------|----------|-------------|\n| `--name, -n` | string | yes | Dataset name |\n| `--space-id` | string | yes | Space to create the dataset in |\n| `--file, -f` | path | yes | Data file: CSV, JSON, JSONL, or Parquet |\n| `-o, --output` | string | no | Output format for the returned dataset metadata |\n| `-p, --profile` | string | no | Configuration profile |\n\n### Passing data via stdin\n\nUse `--file -` to pipe data directly — no temp file needed:\n\n```bash\necho '[{\"question\": \"What is 2+2?\", \"answer\": \"4\"}]' | ax datasets create --name \"my-dataset\" --space-id SPACE_ID --file -\n\n# Or with a heredoc\nax datasets create --name \"my-dataset\" --space-id SPACE_ID --file - << 'EOF'\n[{\"question\": \"What is 2+2?\", \"answer\": \"4\"}]\nEOF\n```\n\nTo add rows to an existing dataset, use `ax datasets append --json '[...]'` instead — no file needed.\n\n### Supported file formats\n\n| Format | Extension | Notes |\n|--------|-----------|-------|\n| CSV | `.csv` | Column headers become field names |\n| JSON | `.json` | Array of objects |\n| JSON Lines | `.jsonl` | One object per line (NOT a JSON array) |\n| Parquet | `.parquet` | Column names become 
field names; preserves types |\n\n**Format gotchas:**\n- **CSV**: Loses type information — dates become strings, `null` becomes empty string. Use JSON/Parquet to preserve types.\n- **JSONL**: Each line is a separate JSON object. A JSON array (`[{...}, {...}]`) in a `.jsonl` file will fail — use `.json` extension instead.\n- **Parquet**: Preserves column types. Requires `pandas`/`pyarrow` to read locally: `pd.read_parquet(\"examples.parquet\")`.\n\n## Append Examples: `ax datasets append`\n\nAdd examples to an existing dataset. Two input modes -- use whichever fits.\n\n### Inline JSON (agent-friendly)\n\nGenerate the payload directly -- no temp files needed:\n\n```bash\nax datasets append DATASET_ID --json '[{\"question\": \"What is 2+2?\", \"answer\": \"4\"}]'\n\nax datasets append DATASET_ID --json '[\n  {\"question\": \"What is gravity?\", \"answer\": \"A fundamental force...\"},\n  {\"question\": \"What is light?\", \"answer\": \"Electromagnetic radiation...\"}\n]'\n```\n\n### From a file\n\n```bash\nax datasets append DATASET_ID --file new_examples.csv\nax datasets append DATASET_ID --file additions.json\n```\n\n### To a specific version\n\n```bash\nax datasets append DATASET_ID --json '[{\"q\": \"...\"}]' --version-id VERSION_ID\n```\n\n### Flags\n\n| Flag | Type | Required | Description |\n|------|------|----------|-------------|\n| `DATASET_ID` | string | yes | Positional argument |\n| `--json` | string | mutex | JSON array of example objects |\n| `--file, -f` | path | mutex | Data file (CSV, JSON, JSONL, Parquet) |\n| `--version-id` | string | no | Append to a specific version (default: latest) |\n| `-o, --output` | string | no | Output format for the returned dataset metadata |\n| `-p, --profile` | string | no | Configuration profile |\n\nExactly one of `--json` or `--file` is required.\n\n### Validation\n\n- Each example must be a JSON object with at least one user-defined field\n- Maximum 100,000 examples per request\n\n**Schema validation before 
append:** If the dataset already has examples, inspect its schema before appending to avoid silent field mismatches:\n\n```bash\n# Check existing field names in the dataset\nax datasets export DATASET_ID --stdout | jq '.[0] | keys'\n\n# Verify your new data has matching field names\necho '[{\"question\": \"...\"}]' | jq '.[0] | keys'\n\n# Both outputs should show the same user-defined fields\n```\n\nFields are free-form: extra fields in new examples are added, and missing fields become null. However, typos in field names (e.g., `queston` vs `question`) create new columns silently -- verify spelling before appending.\n\n## Delete Dataset: `ax datasets delete`\n\n```bash\nax datasets delete DATASET_ID\nax datasets delete DATASET_ID --force   # skip confirmation prompt\n```\n\n### Flags\n\n| Flag | Type | Default | Description |\n|------|------|---------|-------------|\n| `DATASET_ID` | string | required | Positional argument |\n| `--force, -f` | bool | false | Skip confirmation prompt |\n| `-p, --profile` | string | default | Configuration profile |\n\n## Workflows\n\n### Find a dataset by name\n\nUsers often refer to datasets by name rather than ID. Resolve a name to an ID before running other commands:\n\n```bash\n# Find dataset ID by name\nax datasets list -o json | jq '.[] | select(.name == \"eval-set-v1\") | .id'\n\n# If the list is paginated, fetch more\nax datasets list -o json --limit 100 | jq '.[] | select(.name | test(\"eval-set\")) | {id, name}'\n```\n\n### Create a dataset from file for evaluation\n\n1. Prepare a CSV/JSON/Parquet file with your evaluation columns (e.g., `input`, `expected_output`)\n   - If generating data inline, pipe it via stdin using `--file -` (see the Create Dataset section)\n2. `ax datasets create --name \"eval-set-v1\" --space-id SPACE_ID --file eval_data.csv`\n3. Verify: `ax datasets get DATASET_ID`\n4. 
Use the dataset ID to run experiments\n\n### Add examples to an existing dataset\n\n```bash\n# Find the dataset\nax datasets list\n\n# Append inline or from a file (see Append Examples section for full syntax)\nax datasets append DATASET_ID --json '[{\"question\": \"...\", \"answer\": \"...\"}]'\nax datasets append DATASET_ID --file additional_examples.csv\n```\n\n### Download dataset for offline analysis\n\n1. `ax datasets list` -- find the dataset\n2. `ax datasets export DATASET_ID` -- download to file\n3. Parse the JSON: `jq '.[] | .question' dataset_*/examples.json`\n\n### Export a specific version\n\n```bash\n# List versions\nax datasets get DATASET_ID -o json | jq '.versions'\n\n# Export that version\nax datasets export DATASET_ID --version-id VERSION_ID\n```\n\n### Iterate on a dataset\n\n1. Export current version: `ax datasets export DATASET_ID`\n2. Modify the examples locally\n3. Append new rows: `ax datasets append DATASET_ID --file new_rows.csv`\n4. Or create a new dataset from the revised data: `ax datasets create --name \"eval-set-v2\" --space-id SPACE_ID --file updated_data.json`\n\n### Pipe export to other tools\n\n```bash\n# Count examples\nax datasets export DATASET_ID --stdout | jq 'length'\n\n# Extract a single field\nax datasets export DATASET_ID --stdout | jq '.[].question'\n\n# Convert to CSV with jq\nax datasets export DATASET_ID --stdout | jq -r '.[] | [.question, .answer] | @csv'\n```\n\n## Dataset Example Schema\n\nExamples are free-form JSON objects. There is no fixed schema -- columns are whatever fields you provide. System-managed fields are added by the server:\n\n| Field | Type | Managed by | Notes |\n|-------|------|-----------|-------|\n| `id` | string | server | Auto-generated UUID. 
Required on update, forbidden on create/append |\n| `created_at` | datetime | server | Immutable creation timestamp |\n| `updated_at` | datetime | server | Auto-updated on modification |\n| *(any user field)* | any JSON type | user | String, number, boolean, null, nested object, array |\n\n\n## Related Skills\n\n- **arize-trace**: Export production spans to understand what data to put in datasets → use `arize-trace`\n- **arize-experiment**: Run evaluations against this dataset → next step is `arize-experiment`\n- **arize-prompt-optimization**: Use dataset + experiment results to improve prompts → use `arize-prompt-optimization`\n\n## Troubleshooting\n\n| Problem | Solution |\n|---------|----------|\n| `ax: command not found` | See references/ax-setup.md |\n| `401 Unauthorized` | API key is wrong, expired, or doesn't have access to this space. Fix the profile using references/ax-profiles.md. |\n| `No profile found` | No profile is configured. See references/ax-profiles.md to create one. |\n| `Dataset not found` | Verify dataset ID with `ax datasets list` |\n| `File format error` | Supported: CSV, JSON, JSONL, Parquet. Use `--file -` to read from stdin. 
|\n| `platform-managed column` | Remove `id`, `created_at`, `updated_at` from create/append payloads |\n| `reserved column` | Remove `time`, `count`, or any `source_record_*` field |\n| `Provide either --json or --file` | Append requires exactly one input source |\n| `Examples array is empty` | Ensure your JSON array or file contains at least one example |\n| `not a JSON object` | Each element in the `--json` array must be a `{...}` object, not a string or number |\n\n## Save Credentials for Future Use\n\nSee references/ax-profiles.md § Save Credentials for Future Use.","tags":["arize","dataset","awesome","copilot","github"],"capabilities":["skill","source-github","category-awesome-copilot"],"categories":["awesome-copilot"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/github/awesome-copilot/arize-dataset","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"install_from":"skills.sh"}},"qualityScore":"0.300","qualityRationale":"deterministic score 0.30 from registry signals: · indexed on skills.sh · published under github/awesome-copilot","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T03:40:38.250Z","embedding":null,"createdAt":"2026-04-18T20:36:13.469Z","updatedAt":"2026-04-22T03:40:38.250Z","lastSeenAt":"2026-04-22T03:40:38.250Z","prices":[{"id":"04738c8f-475f-4af8-846c-7a2e4a2eadd1","listingId":"275005cc-339e-4dfc-aff1-e62edc4d9da6","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"github","category":"awesome-copilot","install_from":"skills.sh"},"createdAt":"2026-04-18T20:36:13.469Z"}],"sources":[{"listingId":"275005cc-339e-4dfc-aff1-e62edc4d9da6","source":"github","sourceId":"github/awesome-copilot/arize-dataset","sourceUrl":"https://github.com/github/awesome-copilot/tree/main/skills/arize-dataset","isPrimary":false,"firstSeenAt":"2026-04-18T21:48:13.979Z","lastSeenAt":"2026-04-22T00:52:03.629Z"},{"listingId":"275005cc-339e-4dfc-aff1-e62edc4d9da6","source":"skills_sh","sourceId":"github/awesome-copilot/arize-dataset","sourceUrl":"https://skills.sh/github/awesome-copilot/arize-dataset","isPrimary":true,"firstSeenAt":"2026-04-18T20:36:13.469Z","lastSeenAt":"2026-04-22T03:40:38.250Z"}],"details":{"listingId":"275005cc-339e-4dfc-aff1-e62edc4d9da6","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"github","slug":"arize-dataset","source":"skills_sh","category":"awesome-copilot","skills_sh_url":"https://skills.sh/github/awesome-copilot/arize-dataset"},"updatedAt":"2026-04-22T03:40:38.250Z"}}