{"id":"9a2b7be1-30af-4474-b888-57c881491cd2","shortId":"43RQRb","kind":"skill","title":"hugging-face-datasets","tagline":"Create and manage datasets on Hugging Face Hub. Supports initializing repos, defining configs/system prompts, streaming row updates, and SQL-based dataset querying/transformation. Designed to work alongside HF MCP server for comprehensive dataset workflows.","description":"# Overview\nThis skill provides tools to manage datasets on the Hugging Face Hub with a focus on creation, configuration, content management, and SQL-based data manipulation. It is designed to complement the existing Hugging Face MCP server by providing dataset editing and querying capabilities.\n\n## When to Use\n- You need to create, configure, or update datasets on the Hugging Face Hub.\n- You want SQL-style querying, transformation, or export flows over Hub datasets.\n- You are managing dataset content and metadata directly rather than only searching existing datasets.\n\n## Integration with HF MCP Server\n- **Use HF MCP Server for**: Dataset discovery, search, and metadata retrieval\n- **Use This Skill for**: Dataset creation, content editing, SQL queries, data transformation, and structured data formatting\n\n# Version\n2.1.0\n\n# Dependencies\nThis skill uses PEP 723 scripts with inline dependency management; each script auto-installs its requirements when run with `uv run scripts/script_name.py`.\n\n- uv (Python package manager)\n- Getting Started: See \"Usage Instructions\" below for PEP 723 usage\n\n# Core Capabilities\n\n## 1. Dataset Lifecycle Management\n- **Initialize**: Create new dataset repositories with proper structure\n- **Configure**: Store detailed configuration including system prompts and metadata\n- **Stream Updates**: Add rows efficiently without downloading entire datasets\n\n## 2. 
SQL-Based Dataset Querying (NEW)\nQuery any Hugging Face dataset using DuckDB SQL via `scripts/sql_manager.py`:\n- **Direct Queries**: Run SQL on datasets using the `hf://` protocol\n- **Schema Discovery**: Describe dataset structure and column types\n- **Data Sampling**: Get random samples for exploration\n- **Aggregations**: Count, histogram, unique values analysis\n- **Transformations**: Filter, join, reshape data with SQL\n- **Export & Push**: Save results locally or push to new Hub repos\n\n## 3. Multi-Format Dataset Support\nSupports diverse dataset types through a template system:\n- **Chat/Conversational**: Chat templating, multi-turn dialogues, tool usage examples\n- **Text Classification**: Sentiment analysis, intent detection, topic classification\n- **Question-Answering**: Reading comprehension, factual QA, knowledge bases\n- **Text Completion**: Language modeling, code completion, creative writing\n- **Tabular Data**: Structured data for regression/classification tasks\n- **Custom Formats**: Flexible schema definition for specialized needs\n\n## 4. Quality Assurance Features\n- **JSON Validation**: Ensures data integrity during uploads\n- **Batch Processing**: Efficient handling of large datasets\n- **Error Recovery**: Graceful handling of upload failures and conflicts\n\n# Usage Instructions\n\nThe skill includes two Python scripts that use PEP 723 inline dependency management:\n\n> **All paths are relative to the directory containing this SKILL.md file.**\n> Scripts are run with: `uv run scripts/script_name.py [arguments]`\n\n- `scripts/dataset_manager.py` - Dataset creation and management\n- `scripts/sql_manager.py` - SQL-based dataset querying and transformation\n\n### Prerequisites\n- `uv` package manager installed\n- `HF_TOKEN` environment variable must be set with a Write-access token\n\n---\n\n# SQL Dataset Querying (sql_manager.py)\n\nQuery, transform, and push Hugging Face datasets using DuckDB SQL. 
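Under the hood this is plain DuckDB: the script swaps the `data` placeholder for an `hf://` Parquet glob before executing the query. A minimal sketch of that substitution (the `rewrite` helper here is hypothetical, not part of the script's public API):

```python
def rewrite(sql: str, dataset: str, config: str = 'default', split: str = 'train') -> str:
    # Hypothetical helper sketching what sql_manager.py does: the table
    # name `data` becomes a glob over the auto-converted Parquet files.
    path = f'hf://datasets/{dataset}@~parquet/{config}/{split}/*.parquet'
    # repr() wraps the path in the single quotes DuckDB expects.
    return sql.replace('FROM data', 'FROM ' + repr(path))

sql = rewrite('SELECT * FROM data LIMIT 10', 'cais/mmlu', split='test')
# duckdb.sql(sql) would then read only the matching Parquet files over HTTP.
```

Passing the rewritten string to `duckdb.sql` is roughly all the wrapper has to do; DuckDB itself handles the remote Parquet reads.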
The `hf://` protocol provides direct access to any public dataset (or private with token).\n\n## Quick Start\n\n```bash\n# Query a dataset\nuv run scripts/sql_manager.py query \\\n  --dataset \"cais/mmlu\" \\\n  --sql \"SELECT * FROM data WHERE subject='nutrition' LIMIT 10\"\n\n# Get dataset schema\nuv run scripts/sql_manager.py describe --dataset \"cais/mmlu\"\n\n# Sample random rows\nuv run scripts/sql_manager.py sample --dataset \"cais/mmlu\" --n 5\n\n# Count rows with filter\nuv run scripts/sql_manager.py count --dataset \"cais/mmlu\" --where \"subject='nutrition'\"\n```\n\n## SQL Query Syntax\n\nUse `data` as the table name in your SQL - it gets replaced with the actual `hf://` path:\n\n```sql\n-- Basic select\nSELECT * FROM data LIMIT 10\n\n-- Filtering\nSELECT * FROM data WHERE subject='nutrition'\n\n-- Aggregations\nSELECT subject, COUNT(*) as cnt FROM data GROUP BY subject ORDER BY cnt DESC\n\n-- Column selection and transformation\n-- (DuckDB lists are 1-based; MMLU's answer column holds a 0-based index)\nSELECT question, choices[answer + 1] AS correct_answer FROM data\n\n-- Regex matching\nSELECT * FROM data WHERE regexp_matches(question, 'nutrition|diet')\n\n-- String functions\nSELECT regexp_replace(question, '\\\\n', '') AS cleaned FROM data\n```\n\n## Common Operations\n\n### 1. Explore Dataset Structure\n```bash\n# Get schema\nuv run scripts/sql_manager.py describe --dataset \"cais/mmlu\"\n\n# Get unique values in column\nuv run scripts/sql_manager.py unique --dataset \"cais/mmlu\" --column \"subject\"\n\n# Get value distribution\nuv run scripts/sql_manager.py histogram --dataset \"cais/mmlu\" --column \"subject\" --bins 20\n```\n\n### 2. 
Filter and Transform\n```bash\n# Complex filtering with SQL\nuv run scripts/sql_manager.py query \\\n  --dataset \"cais/mmlu\" \\\n  --sql \"SELECT subject, COUNT(*) as cnt FROM data GROUP BY subject HAVING cnt > 100\"\n\n# Using transform command\nuv run scripts/sql_manager.py transform \\\n  --dataset \"cais/mmlu\" \\\n  --select \"subject, COUNT(*) as cnt\" \\\n  --group-by \"subject\" \\\n  --order-by \"cnt DESC\" \\\n  --limit 10\n```\n\n### 3. Create Subsets and Push to Hub\n```bash\n# Query and push to new dataset\nuv run scripts/sql_manager.py query \\\n  --dataset \"cais/mmlu\" \\\n  --sql \"SELECT * FROM data WHERE subject='nutrition'\" \\\n  --push-to \"username/mmlu-nutrition-subset\" \\\n  --private\n\n# Transform and push\nuv run scripts/sql_manager.py transform \\\n  --dataset \"ibm/duorc\" \\\n  --config \"ParaphraseRC\" \\\n  --select \"question, answers\" \\\n  --where \"LENGTH(question) > 50\" \\\n  --push-to \"username/duorc-long-questions\"\n```\n\n### 4. Export to Local Files\n```bash\n# Export to Parquet\nuv run scripts/sql_manager.py export \\\n  --dataset \"cais/mmlu\" \\\n  --sql \"SELECT * FROM data WHERE subject='nutrition'\" \\\n  --output \"nutrition.parquet\" \\\n  --format parquet\n\n# Export to JSONL\nuv run scripts/sql_manager.py export \\\n  --dataset \"cais/mmlu\" \\\n  --sql \"SELECT * FROM data LIMIT 100\" \\\n  --output \"sample.jsonl\" \\\n  --format jsonl\n```\n\n### 5. Working with Dataset Configs/Splits\n```bash\n# Specify config (subset)\nuv run scripts/sql_manager.py query \\\n  --dataset \"ibm/duorc\" \\\n  --config \"ParaphraseRC\" \\\n  --sql \"SELECT * FROM data LIMIT 5\"\n\n# Specify split\nuv run scripts/sql_manager.py query \\\n  --dataset \"cais/mmlu\" \\\n  --split \"test\" \\\n  --sql \"SELECT COUNT(*) FROM data\"\n\n# Query all splits\nuv run scripts/sql_manager.py query \\\n  --dataset \"cais/mmlu\" \\\n  --split \"*\" \\\n  --sql \"SELECT * FROM data LIMIT 10\"\n```\n\n### 6. 
Raw SQL with Full Paths\nFor complex queries or joining datasets:\n```bash\nuv run scripts/sql_manager.py raw --sql \"\n  SELECT a.*, b.* \n  FROM 'hf://datasets/dataset1@~parquet/default/train/*.parquet' a\n  JOIN 'hf://datasets/dataset2@~parquet/default/train/*.parquet' b\n  ON a.id = b.id\n  LIMIT 100\n\"\n```\n\n## Python API Usage\n\n```python\nfrom sql_manager import HFDatasetSQL\n\nsql = HFDatasetSQL()\n\n# Query\nresults = sql.query(\"cais/mmlu\", \"SELECT * FROM data WHERE subject='nutrition' LIMIT 10\")\n\n# Get schema\nschema = sql.describe(\"cais/mmlu\")\n\n# Sample\nsamples = sql.sample(\"cais/mmlu\", n=5, seed=42)\n\n# Count\ncount = sql.count(\"cais/mmlu\", where=\"subject='nutrition'\")\n\n# Histogram\ndist = sql.histogram(\"cais/mmlu\", \"subject\")\n\n# Filter and transform\nresults = sql.filter_and_transform(\n    \"cais/mmlu\",\n    select=\"subject, COUNT(*) as cnt\",\n    group_by=\"subject\",\n    order_by=\"cnt DESC\",\n    limit=10\n)\n\n# Push to Hub\nurl = sql.push_to_hub(\n    \"cais/mmlu\",\n    \"username/nutrition-subset\",\n    sql=\"SELECT * FROM data WHERE subject='nutrition'\",\n    private=True\n)\n\n# Export locally\nsql.export_to_parquet(\"cais/mmlu\", \"output.parquet\", sql=\"SELECT * FROM data LIMIT 100\")\n\nsql.close()\n```\n\n## HF Path Format\n\nDuckDB uses the `hf://` protocol to access datasets:\n```\nhf://datasets/{dataset_id}@{revision}/{config}/{split}/*.parquet\n```\n\nExamples:\n- `hf://datasets/cais/mmlu@~parquet/default/train/*.parquet`\n- `hf://datasets/ibm/duorc@~parquet/ParaphraseRC/test/*.parquet`\n\nThe `@~parquet` revision provides auto-converted Parquet files for any dataset format.\n\n## Useful DuckDB SQL Functions\n\n```sql\n-- String functions\nLENGTH(column)                    -- String length\nregexp_replace(col, '\\\\n', '')    -- Regex replace\nregexp_matches(col, 'pattern')    -- Regex match\nLOWER(col), UPPER(col)            -- Case conversion\n\n-- Array functions\nchoices[1]                        -- Array indexing (1-based)\narray_length(choices)             -- Array length\nunnest(choices)                   -- Expand array to rows\n\n-- Aggregations\nCOUNT(*), SUM(col), AVG(col)\nGROUP BY col HAVING condition\n\n-- Sampling\nUSING SAMPLE 10                   -- Random sample\nUSING SAMPLE 10 (RESERVOIR, 42)   -- Reproducible sample\n\n-- Window functions\nROW_NUMBER() OVER (PARTITION BY col ORDER BY col2)\n```\n\n---\n\n# Dataset Creation (dataset_manager.py)\n\n### Recommended Workflow\n\n**1. Discovery (Use HF MCP Server):**\n```python\n# Use HF MCP tools to find existing datasets\nsearch_datasets(\"conversational AI training\")\nget_dataset_details(\"username/dataset-name\")\n```\n\n**2. Creation (Use This Skill):**\n```bash\n# Initialize new dataset\nuv run scripts/dataset_manager.py init --repo_id \"your-username/dataset-name\" [--private]\n\n# Configure with detailed system prompt\nuv run scripts/dataset_manager.py config --repo_id \"your-username/dataset-name\" --system_prompt \"$(cat system_prompt.txt)\"\n```\n\n**3. Content Management (Use This Skill):**\n```bash\n# Quick setup with any template\nuv run scripts/dataset_manager.py quick_setup \\\n  --repo_id \"your-username/dataset-name\" \\\n  --template classification\n\n# Add data with template validation\nuv run scripts/dataset_manager.py add_rows \\\n  --repo_id \"your-username/dataset-name\" \\\n  --template qa \\\n  --rows_json \"$(cat your_qa_data.json)\"\n```\n\n### Template-Based Data Structures\n\n**1. Chat Template (`--template chat`)**\n```json\n{\n  \"messages\": [\n    {\"role\": \"user\", \"content\": \"Natural user request\"},\n    {\"role\": \"assistant\", \"content\": \"Response with tool usage\"},\n    {\"role\": \"tool\", \"content\": \"Tool response\", \"tool_call_id\": \"call_123\"}\n  ],\n  \"scenario\": \"Description of use case\",\n  \"complexity\": \"simple|intermediate|advanced\"\n}\n```\n\n**2. 
Classification Template (`--template classification`)**\n```json\n{\n  \"text\": \"Input text to be classified\",\n  \"label\": \"classification_label\",\n  \"confidence\": 0.95,\n  \"metadata\": {\"domain\": \"technology\", \"language\": \"en\"}\n}\n```\n\n**3. QA Template (`--template qa`)**\n```json\n{\n  \"question\": \"What is the question being asked?\",\n  \"answer\": \"The complete answer\",\n  \"context\": \"Additional context if needed\",\n  \"answer_type\": \"factual|explanatory|opinion\",\n  \"difficulty\": \"easy|medium|hard\"\n}\n```\n\n**4. Completion Template (`--template completion`)**\n```json\n{\n  \"prompt\": \"The beginning text or context\",\n  \"completion\": \"The expected continuation\",\n  \"domain\": \"code|creative|technical|conversational\",\n  \"style\": \"description of writing style\"\n}\n```\n\n**5. Tabular Template (`--template tabular`)**\n```json\n{\n  \"columns\": [\n    {\"name\": \"feature1\", \"type\": \"numeric\", \"description\": \"First feature\"},\n    {\"name\": \"target\", \"type\": \"categorical\", \"description\": \"Target variable\"}\n  ],\n  \"data\": [\n    {\"feature1\": 123, \"target\": \"class_a\"},\n    {\"feature1\": 456, \"target\": \"class_b\"}\n  ]\n}\n```\n\n### Advanced System Prompt Template\n\nFor high-quality training data generation:\n```text\nYou are an AI assistant expert at using MCP tools effectively.\n\n## MCP SERVER DEFINITIONS\n[Define available servers and tools]\n\n## TRAINING EXAMPLE STRUCTURE\n[Specify exact JSON schema for chat templating]\n\n## QUALITY GUIDELINES\n[Detail requirements for realistic scenarios, progressive complexity, proper tool usage]\n\n## EXAMPLE CATEGORIES\n[List development workflows, debugging scenarios, data management tasks]\n```\n\n### Example Categories & Templates\n\nThe skill includes diverse training examples beyond just MCP usage:\n\n**Available Example Sets:**\n- `training_examples.json` - MCP tool usage examples (debugging, project setup, database 
analysis)\n- `diverse_training_examples.json` - Broader scenarios including:\n  - **Educational Chat** - Explaining programming concepts, tutorials\n  - **Git Workflows** - Feature branches, version control guidance\n  - **Code Analysis** - Performance optimization, architecture review\n  - **Content Generation** - Professional writing, creative brainstorming\n  - **Codebase Navigation** - Legacy code exploration, systematic analysis\n  - **Conversational Support** - Problem-solving, technical discussions\n\n**Using Different Example Sets:**\n```bash\n# Add MCP-focused examples\nuv run scripts/dataset_manager.py add_rows --repo_id \"your-username/dataset-name\" \\\n  --rows_json \"$(cat examples/training_examples.json)\"\n\n# Add diverse conversational examples\nuv run scripts/dataset_manager.py add_rows --repo_id \"your-username/dataset-name\" \\\n  --rows_json \"$(cat examples/diverse_training_examples.json)\"\n\n# Mix both for comprehensive training data\nuv run scripts/dataset_manager.py add_rows --repo_id \"your-username/dataset-name\" \\\n  --rows_json \"$(jq -s '.[0] + .[1]' examples/training_examples.json examples/diverse_training_examples.json)\"\n```\n\n### Commands Reference\n\n**List Available Templates:**\n```bash\nuv run scripts/dataset_manager.py list_templates\n```\n\n**Quick Setup (Recommended):**\n```bash\nuv run scripts/dataset_manager.py quick_setup --repo_id \"your-username/dataset-name\" --template classification\n```\n\n**Manual Setup:**\n```bash\n# Initialize repository\nuv run scripts/dataset_manager.py init --repo_id \"your-username/dataset-name\" [--private]\n\n# Configure with system prompt\nuv run scripts/dataset_manager.py config --repo_id \"your-username/dataset-name\" --system_prompt \"Your prompt here\"\n\n# Add data with validation\nuv run scripts/dataset_manager.py add_rows \\\n  --repo_id \"your-username/dataset-name\" \\\n  --template qa \\\n  --rows_json '[{\"question\": \"What is AI?\", \"answer\": \"Artificial 
Intelligence...\"}]'\n```\n\n**View Dataset Statistics:**\n```bash\nuv run scripts/dataset_manager.py stats --repo_id \"your-username/dataset-name\"\n```\n\n### Error Handling\n- **Repository exists**: Script will notify and continue with configuration\n- **Invalid JSON**: Clear error message with parsing details\n- **Network issues**: Automatic retry for transient failures\n- **Token permissions**: Validation before operations begin\n\n---\n\n# Combined Workflow Examples\n\n## Example 1: Create Training Subset from Existing Dataset\n```bash\n# 1. Explore the source dataset\nuv run scripts/sql_manager.py describe --dataset \"cais/mmlu\"\nuv run scripts/sql_manager.py histogram --dataset \"cais/mmlu\" --column \"subject\"\n\n# 2. Query and create subset\nuv run scripts/sql_manager.py query \\\n  --dataset \"cais/mmlu\" \\\n  --sql \"SELECT * FROM data WHERE subject IN ('nutrition', 'anatomy', 'clinical_knowledge')\" \\\n  --push-to \"username/mmlu-medical-subset\" \\\n  --private\n```\n\n## Example 2: Transform and Reshape Data\n```bash\n# Transform MMLU to QA format with correct answers extracted\n# (DuckDB lists are 1-based; MMLU's answer column holds a 0-based index)\nuv run scripts/sql_manager.py query \\\n  --dataset \"cais/mmlu\" \\\n  --sql \"SELECT question, choices[answer + 1] as correct_answer, subject FROM data\" \\\n  --push-to \"username/mmlu-qa-format\"\n```\n\n## Example 3: Merge Multiple Dataset Splits\n```bash\n# Export multiple splits and combine\nuv run scripts/sql_manager.py export \\\n  --dataset \"cais/mmlu\" \\\n  --split \"*\" \\\n  --output \"mmlu_all.parquet\"\n```\n\n## Example 4: Quality Filtering\n```bash\n# Filter for high-quality examples\nuv run scripts/sql_manager.py query \\\n  --dataset \"squad\" \\\n  --sql \"SELECT * FROM data WHERE LENGTH(context) > 500 AND LENGTH(question) > 20\" \\\n  --push-to \"username/squad-filtered\"\n```\n\n## Example 5: Create Custom Training Dataset\n```bash\n# 1. 
Query source data\nuv run scripts/sql_manager.py export \\\n  --dataset \"cais/mmlu\" \\\n  --sql \"SELECT question, subject FROM data WHERE subject='nutrition'\" \\\n  --output \"nutrition_source.jsonl\" \\\n  --format jsonl\n\n# 2. Process with your pipeline (add answers, format, etc.)\n\n# 3. Push processed data\nuv run scripts/dataset_manager.py init --repo_id \"username/nutrition-training\"\nuv run scripts/dataset_manager.py add_rows \\\n  --repo_id \"username/nutrition-training\" \\\n  --template qa \\\n  --rows_json \"$(cat processed_data.json)\"\n```\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["hugging","face","datasets","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding"],"capabilities":["skill","source-sickn33","skill-hugging-face-datasets","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/hugging-face-datasets","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34768 github stars · SKILL.md body (16,662 
chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T18:51:29.291Z","embedding":null,"createdAt":"2026-04-18T21:38:44.413Z","updatedAt":"2026-04-23T18:51:29.291Z","lastSeenAt":"2026-04-23T18:51:29.291Z","tsv":"'/dataset-name':1167,1183,1210,1228,1542,1561,1582,1616,1633,1648,1668,1693 '0':1069,1072,1587 '0.95':1295 '1':199,616,1125,1240,1588,1730,1738,1882 '10':496,556,708,861,920,967,1099,1104 '100':683,803,897,998 '123':1269,1381 '2':229,655,1149,1279,1757,1785,1905 '2.1.0':160 '20':654,1870 '3':294,709,1188,1301,1822,1914 '4':357,763,1332,1843 '42':933,1106 '456':1386 '5':516,808,830,931,1358,1876 '50':758 '500':1866 '6':862 '723':166,195,395 'a.id':894 'access':447,467,1008 'actual':547 'add':222,1213,1221,1527,1535,1547,1554,1575,1654,1661,1910,1928 'addit':1319 'advanc':1278,1390 'aggreg':270,564,1085 'ai':1143,1405,1676 'alongsid':31 'analysi':275,320,1478,1497,1514 'anatomi':1776 'answer':327,586,589,754,1314,1317,1323,1677,1798,1810,1813,1911 'api':899 'architectur':1500 'argument':417 'array':1066,1070,1074,1077,1082 'artifici':1678 'ask':1313,1972 'assist':1254,1406 'assur':359 'auto':174,1029 'auto-convert':1028 'auto-instal':173 'automat':1715 'avail':1417,1466,1594 'avg':1089 'b':882,892,1389 'b.id':895 'base':25,63,232,333,426,1073,1237 'bash':478,620,659,716,768,813,874,1154,1194,1526,1596,1605,1621,1683,1737,1790,1827,1846,1881 'basic':550 'batch':368 'begin':1340,1725 'beyond':1462 'bin':653 'boundari':1980 'brainstorm':1507 'branch':1492 'broader':1480 'cais/mmlu':487,505,514,526,628,639,650,669,692,728,777,797,838,854,912,925,929,937,944,953,975,991,1748,1754,1767,1805,1838,1891 'call':1266,1268 'capabl':83,198 'case':1064,1274 'cat':1186,1233,1545,1564,1937 'categor':1375 
'categori':1444,1454 'chat':308,1241,1244,1429,1484 'chat/conversational':307 'choic':585,1068,1076,1080,1809 'clarif':1974 'class':1383,1388 'classif':318,324,1212,1280,1283,1292,1618 'classifi':1290 'clean':611 'clear':1707,1947 'clinic':1777 'cnt':569,577,675,682,697,705,958,964 'code':338,1349,1496,1511 'codebas':1508 'col':1050,1056,1061,1063,1088,1090,1093,1116 'col2':1119 'column':261,579,633,640,651,1045,1364,1755 'combin':1726,1832 'command':686,1591 'common':614 'complement':70 'complet':335,339,1316,1333,1336,1344 'complex':660,869,1275,1439 'comprehens':36,329,1569 'concept':1487 'condit':1095 'confid':1294 'config':750,815,823,1014,1177,1642 'configs/splits':812 'configs/system':17 'configur':57,91,211,214,1169,1635,1704 'conflict':383 'contain':406 'content':58,117,149,1189,1249,1255,1262,1502 'context':1318,1320,1343,1865 'continu':1347,1702 'control':1494 'convers':1065,1142,1352,1515,1549 'convert':1030 'core':197 'correct':588,1797,1812 'count':271,517,524,567,673,695,843,934,935,956,1086 'creat':5,90,204,710,1731,1760,1877 'creation':56,148,420,1121,1150 'creativ':340,1350,1506 'criteria':1983 'custom':349,1878 'data':64,153,157,263,280,343,345,364,491,534,554,560,571,591,596,613,677,732,781,801,828,845,859,915,980,996,1214,1238,1379,1399,1450,1571,1655,1771,1789,1816,1862,1885,1897,1917 'databas':1477 'dataset':4,8,26,37,46,79,94,112,116,126,137,147,200,206,228,233,240,251,258,298,302,374,419,427,450,459,471,481,486,498,504,513,525,618,627,638,649,668,691,722,727,748,776,796,811,821,837,853,873,1009,1010,1011,1035,1120,1139,1141,1146,1157,1681,1736,1742,1747,1753,1766,1804,1825,1837,1857,1880,1890 'dataset_manager.py':1122 'datasets/cais/mmlu':1018 'datasets/dataset1':884 'datasets/dataset2':889 'datasets/ibm/duorc':1021 'debug':1448,1474 'defin':16,1416 'definit':353,1415 'depend':161,170,397 'desc':578,706,965 'describ':257,503,626,1746,1951 'descript':1271,1354,1369,1376 'design':28,68 'detail':213,1147,1171,1433,1712 'detect':322 
'develop':1446 'dialogu':313 'diet':602 'differ':1523 'difficulti':1328 'direct':120,246,466 'directori':405 'discoveri':138,256,1126 'discuss':1521 'dist':942 'distribut':644 'divers':301,1459,1548 'diverse_training_examples.json':1479 'domain':1297,1348 'download':226 'duckdb':242,461,1003,1038 'easi':1329 'edit':80,150 'educ':1483 'effect':1412 'effici':224,370 'en':1300 'ensur':363 'entir':227 'environ':438,1963 'environment-specif':1962 'error':375,1694,1708 'etc':1913 'exact':1425 'exampl':316,1017,1422,1443,1453,1461,1467,1473,1524,1531,1550,1728,1729,1784,1821,1842,1852,1875 'examples/diverse_training_examples.json':1565,1590 'examples/training_examples.json':1546,1589 'exist':72,125,1138,1697,1735 'expand':1081 'expect':1346 'expert':1407,1968 'explain':1485 'explanatori':1326 'explor':269,617,1512,1739 'export':108,283,764,769,775,789,795,986,1828,1836,1889 'extract':1799 'face':3,11,50,74,98,239,458 'factual':330,1325 'failur':381,1719 'featur':360,1371,1491 'feature1':1366,1380,1385 'file':409,767,1032 'filter':277,520,557,656,661,946,1845,1847 'find':1137 'first':1370 'flexibl':351 'flow':109 'focus':54,1530 'format':158,297,350,787,806,1002,1036,1795,1903,1912 'full':866 'function':604,1040,1043,1067,1110 'generat':1400,1503 'get':187,265,497,543,621,629,642,921,1145 'git':1489 'grace':377 'group':572,678,699,959,1091 'group-bi':698 'guidanc':1495 'guidelin':1432 'handl':371,378,1695 'hard':1331 'hf':32,129,133,436,1000,1128,1133 'hfdatasetsql':906,908 'high':1396,1850 'high-qual':1395,1849 'histogram':272,648,941,1752 'hub':12,51,99,111,292,715,970,974 'hug':2,10,49,73,97,238,457 'hugging-face-dataset':1 'ibm/duorc':749,822 'id':1012,1163,1179,1206,1224,1267,1538,1557,1578,1612,1629,1644,1664,1689,1923,1931 'import':905 'includ':215,388,1458,1482 'index':1071 'init':1161,1627,1921 'initi':14,203,1155,1622 'inlin':169,396 'input':1286,1977 'instal':175,435 'instruct':191,385 'integr':127,365 'intellig':1679 'intent':321 'intermedi':1277 'invalid':1705 
'issu':1714 'join':278,872,888 'jq':1585 'json':361,1232,1245,1284,1306,1337,1363,1426,1544,1563,1584,1672,1706,1936 'jsonl':791,807,1904 'knowledg':332,1778 'label':1291,1293 'languag':336,1299 'larg':373 'legaci':1510 'length':756,1044,1047,1075,1078,1864,1868 'lifecycl':201 'limit':495,555,707,802,829,860,896,919,966,997,1939 'list':1445,1593,1600 'local':287,766,987 'lower':1060 'manag':7,45,59,115,171,186,202,398,422,434,904,1190,1451 'manipul':65 'manual':1619 'match':593,599,1055,1059,1948 'mcp':33,75,130,134,1129,1134,1410,1413,1464,1470,1529 'mcp-focus':1528 'medium':1330 'merg':1823 'messag':1246,1709 'metadata':119,141,219,1296 'miss':1985 'mix':1566 'mmlu':1792 'mmlu_all.parquet':1841 'model':337 'multi':296,311 'multi-format':295 'multi-turn':310 'multipl':1824,1829 'must':440 'n':515,609,930,1051 'name':538,1365,1372 'natur':1250 'navig':1509 'need':88,356,1322 'network':1713 'new':205,235,291,721,1156 'notifi':1700 'number':1112 'numer':1368 'nutrit':494,529,563,601,735,784,918,940,983,1775,1900 'nutrition.parquet':786 'nutrition_source.jsonl':1902 'oper':615,1724 'opinion':1327 'optim':1499 'order':575,703,962,1117 'order-bi':702 'output':785,804,1840,1901,1957 'output.parquet':992 'overview':39 'packag':185,433 'paraphraserc':751,824 'parquet':771,788,886,891,990,1016,1020,1023,1025,1031 'parquet/default/train':885,890,1019 'parquet/paraphraserc/test':1022 'pars':1711 'partit':1114 'path':400,548,867,1001 'pattern':1057 'pep':165,194,394 'perform':1498 'permiss':1721,1978 'pipelin':1909 'prerequisit':431 'privat':473,740,984,1168,1634,1783 'problem':1518 'problem-solv':1517 'process':369,1906,1916 'processed_data.json':1938 'profession':1504 'program':1486 'progress':1438 'project':1475 'prompt':18,217,1173,1185,1338,1392,1638,1650,1652 'proper':209,1440 'protocol':254,464,1006 'provid':42,78,465,1027 'public':470 'push':284,289,456,713,719,737,743,760,968,1780,1818,1872,1915 'push-to':736,759,1779,1817,1871 'python':184,390,898,901,1131 
'qa':331,1230,1302,1305,1670,1794,1934 'qualiti':358,1397,1431,1844,1851 'queri':82,105,152,234,236,247,428,451,453,479,485,531,667,717,726,820,836,846,852,870,909,1758,1765,1803,1856,1883 'querying/transformation':27 'question':326,584,600,608,753,757,1307,1311,1673,1808,1869,1894 'question-answ':325 'quick':476,1195,1203,1602,1609 'random':266,507,1100 'rather':121 'raw':863,878 'read':328 'realist':1436 'recommend':1123,1604 'recoveri':376 'refer':1592 'regex':592,1052,1058 'regexp':598,606,1048,1054 'regression/classification':347 'relat':402 'replac':544,607,1049,1053 'repo':15,293,1162,1178,1205,1223,1537,1556,1577,1611,1628,1643,1663,1688,1922,1930 'repositori':207,1623,1696 'reproduc':1107 'request':1252 'requir':176,1434,1976 'reservoir':1105 'reshap':279,1788 'respons':1256,1264 'result':286,910,949 'retri':1716 'retriev':142 'review':1501,1969 'revis':1013,1026 'role':1247,1253,1260 'row':20,223,508,518,1084,1111,1222,1231,1536,1543,1555,1562,1576,1583,1662,1671,1929,1935 'run':178,181,248,412,415,483,501,510,522,624,635,646,665,688,724,745,773,793,818,834,850,876,1159,1175,1201,1219,1533,1552,1573,1598,1607,1625,1640,1659,1685,1744,1750,1763,1801,1834,1854,1887,1919,1926 'safeti':1979 'sampl':264,267,506,512,926,927,1096,1098,1101,1103,1108 'sample.jsonl':805 'save':285 'scenario':1270,1437,1449,1481 'schema':255,352,499,622,922,923,1427 'scope':1950 'script':167,172,391,410,1698 'scripts/dataset_manager.py':418,1160,1176,1202,1220,1534,1553,1574,1599,1608,1626,1641,1660,1686,1920,1927 'scripts/script_name.py':182,416 'scripts/sql_manager.py':245,423,484,502,511,523,625,636,647,666,689,725,746,774,794,819,835,851,877,1745,1751,1764,1802,1835,1855,1888 'search':124,139,1140 'see':189 'seed':932 'select':489,551,552,558,565,580,583,594,605,671,693,730,752,779,799,826,842,857,880,913,954,978,994,1769,1807,1860,1893 'sentiment':319 'server':34,76,131,135,1130,1414,1418 'set':442,1468,1525 'setup':1196,1204,1476,1603,1610,1620 'simpl':1276 
'skill':41,145,163,387,1153,1193,1457,1942 'skill-hugging-face-datasets' 'skill.md':408 'solv':1519 'sourc':1741,1884 'source-sickn33' 'special':355 'specif':1964 'specifi':814,831,1424 'split':832,839,848,855,1015,1826,1830,1839 'sql':24,62,103,151,231,243,249,282,425,449,462,488,530,541,549,663,670,729,778,798,825,841,856,864,879,903,907,977,993,1039,1041,1768,1806,1859,1892 'sql-base':23,61,230,424 'sql-style':102 'sql.close':999 'sql.count':936 'sql.describe':924 'sql.export':988 'sql.filter':950 'sql.histogram':943 'sql.push':972 'sql.query':911 'sql.sample':928 'sql_manager.py':452 'squad':1858 'start':188,477 'stat':1687 'statist':1682 'stop':1970 'store':212 'stream':19,220 'string':603,1042,1046 'structur':156,210,259,344,619,1239,1423 'style':104,1353,1357 'subject':493,528,562,566,574,641,652,672,680,694,701,734,783,917,939,945,955,961,982,1756,1773,1814,1895,1899 'subset':711,816,1733,1761 'substitut':1960 'success':1982 'sum':1087 'support':13,299,300,1516 'syntax':532 'system':216,306,1172,1184,1391,1637,1649 'system_prompt.txt':1187 'systemat':1513 'tabl':537 'tabular':342,1359,1362 'target':1373,1377,1382,1387 'task':348,1452,1946 'technic':1351,1520 'technolog':1298 'templat':305,309,1199,1211,1216,1229,1236,1242,1243,1281,1282,1303,1304,1334,1335,1360,1361,1393,1430,1455,1595,1601,1617,1669,1933 'template-bas':1235 'test':840,1966 'text':317,334,1285,1287,1341,1401 'token':437,448,475,1720 'tool':43,314,1135,1258,1261,1263,1265,1411,1420,1441,1471 'topic':323 'topic-agent-skills' 'topic-agentic-skills' 'topic-ai-agent-skills' 'topic-ai-agents' 'topic-ai-coding' 'topic-ai-workflows' 'topic-antigravity' 'topic-antigravity-skills' 'topic-claude-code' 'topic-claude-code-skills' 'topic-codex-cli' 'topic-codex-skills' 'train':1144,1398,1421,1460,1570,1732,1879 'training_examples.json':1469 'transform':106,154,276,430,454,582,658,685,690,741,747,948,952,1786,1791 'transient':1718 'treat':1955 'true':985 'turn':312 'tutori':1488 'two':389 
'type':262,303,1324,1367,1374 'uniqu':273,630,637 'unnest':1079 'updat':21,93,221 'upload':367,380 'upper':1062 'url':971 'usag':190,196,315,384,900,1259,1442,1465,1472 'use':86,132,143,164,241,252,393,460,533,684,1004,1037,1097,1102,1127,1132,1151,1191,1273,1409,1522,1940 'user':1248,1251 'usernam':1166,1182,1209,1227,1541,1560,1581,1615,1632,1647,1667,1692 'username/dataset-name':1148 'username/duorc-long-questions':762 'username/mmlu-medical-subset':1782 'username/mmlu-nutrition-subset':739 'username/mmlu-qa-format':1820 'username/nutrition-subset':976 'username/nutrition-training':1924,1932 'username/squad-filtered':1874 'uv':180,183,414,432,482,500,509,521,623,634,645,664,687,723,744,772,792,817,833,849,875,1158,1174,1200,1218,1532,1551,1572,1597,1606,1624,1639,1658,1684,1743,1749,1762,1800,1833,1853,1886,1918,1925 'valid':362,1217,1657,1722,1965 'valu':274,631,643 'variabl':439,1378 'version':159,1493 'via':244 'view':1680 'want':101 'window':1109 'without':225 'work':30,809 'workflow':38,1124,1447,1490,1727 'write':341,446,1356,1505 'write-access':445 'your-usernam':1164,1180,1207,1225,1539,1558,1579,1613,1630,1645,1665,1690 
'your_qa_data.json':1234","prices":[{"id":"6e9360f2-8dfc-462d-bf5e-0bc806791f93","listingId":"9a2b7be1-30af-4474-b888-57c881491cd2","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:38:44.413Z"}],"sources":[{"listingId":"9a2b7be1-30af-4474-b888-57c881491cd2","source":"github","sourceId":"sickn33/antigravity-awesome-skills/hugging-face-datasets","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/hugging-face-datasets","isPrimary":false,"firstSeenAt":"2026-04-18T21:38:44.413Z","lastSeenAt":"2026-04-23T18:51:29.291Z"}],"details":{"listingId":"9a2b7be1-30af-4474-b888-57c881491cd2","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"hugging-face-datasets","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34768,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-23T06:41:03Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"8edfeb45ee3fbaa01e724bfc5e427b7c80b990cb","skill_md_path":"skills/hugging-face-datasets/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/hugging-face-datasets"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"hugging-face-datasets","description":"Create and manage datasets on Hugging Face Hub. Supports initializing repos, defining configs/system prompts, streaming row updates, and SQL-based dataset querying/transformation. Designed to work alongside HF MCP server for comprehensive dataset workflows."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/hugging-face-datasets"},"updatedAt":"2026-04-23T18:51:29.291Z"}}