{"id":"f79d9429-a90f-4d8d-964b-1d5f4b538209","shortId":"qzwuj7","kind":"skill","title":"voc-analysis","tagline":"VOC (Voice of Customer) analysis and competitive intelligence research. Collects user feedback from Reddit, Twitter, Amazon, App Store, YouTube, Facebook Groups, Discord, and other platforms via Apify Agent Skills and Web Search, performs semantic tagging, pain point mining, sent","description":"# VOC Analysis — Market Research and Voice of Customer Analysis\n\n## Description\n\nSenior market research analyst workflow for comprehensive, data-driven VOC (Voice of Customer) analysis and competitive intelligence. Covers the full pipeline: multi-platform data collection (Reddit, Twitter, Amazon, App Store, YouTube, etc.) via Apify Agent Skills and web search, LLM semantic tagging, Python statistical counting, and Chinese report generation with English quotes preserved.\n\n## Prerequisites: Apify API Key (MANDATORY — must be completed first)\n\n**Apify is the core dependency of this Skill. VOC analysis cannot be performed without Apify.**\n\nBefore starting ANY research, you MUST verify Apify is configured:\n\n1. Check `APIFY_TOKEN` environment variable: `echo $APIFY_TOKEN`\n2. Or check `.env` file for `APIFY_TOKEN=...`\n\n**If no API key is found, STOP IMMEDIATELY. Do NOT proceed. Tell the user:**\n\n> \"**This Skill requires Apify for multi-platform data collection. Research cannot begin without an API Key.**\n>\n> Please follow these steps:\n> 1. Contact an admin to obtain an APIFY_TOKEN\n> 2. Install Apify Agent Skills: `npx skills add apify/agent-skills`\n> 3. Configure environment variable: add `APIFY_TOKEN=<your-token>` to the `.env` file\n> 4. Come back to start the research after configuration is complete.\"\n\n**Apify is not optional.** Web Search alone cannot complete a full VOC analysis (insufficient data volume, no structured collection, unable to cover multiple platforms). 
Apify must be available before work can begin.\n\nRead `references/apify-setup.md` for detailed installation and configuration steps.\n\n## Rules\n\n1. **Data coverage is everything.** Incomplete data → biased conclusions. Always collect from multiple platforms, build diverse queries, and cross-validate findings. Aim for 5+ query layers and 3+ platforms minimum.\n\n2. **Be a detective, not a tourist.** Don't just check what exists — actively construct 10+ diverse queries. Explore adjacent topics, competitor spaces, and problem-space queries where users describe issues without mentioning the brand.\n\n3. **Analysis is iterative.** Initial data → preliminary analysis → discover gaps/questions → collect targeted data → deeper analysis. It's normal (and expected) to collect more data mid-analysis when patterns or questions emerge.\n\n4. **LLM reads ALL raw data directly.** Point the LLM to raw JSON files and process every single item — no sampling, no pre-filtering, no manual copy-paste. This eliminates bias and ensures complete coverage.\n\n5. **Semantic tagging over keyword matching.** LLM tags items by understanding context (e.g., \"I gave up trying to connect it\" → `connectivity_issue` + `setup_difficult`). 
Python is ONLY used to count those tags statistically.\n\n## High-Level Workflow\n\n```\nStep 0: Check existing Apify runs (cache-first)\n    → Evaluate relevance of each run's input parameters\n    → Download only relevant datasets\nStep 0.5: Gap analysis (MANDATORY)\n    → Map existing data to 5-layer query strategy\n    → Identify missing platforms, angles, competitors, time ranges\n    → Build 5-10+ new queries to fill gaps\nStep 1: Plan query strategy (5 layers)\n    → Direct → Problem-space → Competitor → Use-case → Long-tail\nStep 2: Collect data (Apify + Web Search fallback)\n    → Store in reference/<scraper>/dataset/<id>.json\nStep 3: LLM analyzes ALL raw data\n    → Reads every item, tags semantically, extracts quotes with URLs\n    → Tags feature-level data: features_mentioned, feature_sentiment, is_noise\n    → Saves tagged data to reference/tagged/\nStep 4: Python counts tags (ONLY statistical counting)\n    → Frequencies, percentages, cross-tabs, visualizations\nStep 4.5: Feature-Level Analysis (for product comparison)\n    → Per-feature metrics: mention rate, positive rate, negative rate, avg rating\n    → Noise filtering: exclude subscription/pricing complaints from quality metrics\n    → Competitive benchmarking tables with pp differences\nStep 5: Generate report in Chinese (English quotes preserved)\n    → Save as .md in docs/\n```\n\nEach step is detailed in the reference files below.\n\n## Reference Files\n\nRead the relevant reference file when you reach that phase of the workflow:\n\n| File | When to Read | Content |\n|------|-------------|---------|\n| `references/apify-setup.md` | Before starting | MCP installation, verification, troubleshooting |\n| `references/data-collection.md` | Steps 0–2 | Cache management, Python scripts, run checking, dataset download, gap analysis |\n| `references/query-strategy.md` | Step 1 | 5-layer query design, platform selection, query examples |\n| `references/analysis-methodology.md` | Steps 3–4 | LLM 
tagging workflow, Python counting, analysis pipeline |\n| `references/report-template.md` | Step 5 | Complete report template with all sections |\n| `references/failure-recovery.md` | When runs fail | Failure analysis, alternative actors, web search fallback |\n| `references/amazon-guide.md` | When scraping Amazon | ASIN search methods, review scraper input formats |\n\n## Data Storage Structure\n\n```\nreference/\n├── apify_runs_cache.json           # Central cache (keep updated)\n├── scripts/                         # Automation scripts\n│   ├── build_apify_cache.py        # Fetch all runs + details\n│   └── download_datasets.py        # Batch download relevant datasets\n├── tagged/                          # LLM-tagged datasets\n│   └── <platform>_<id>_tagged.json\n├── reddit_scraper/dataset/\n├── twitter_scraper/dataset/\n├── amazon_reviews/dataset/\n├── appstore_reviews/dataset/\n├── google_play_reviews/dataset/\n├── youtube_scraper/dataset/\n└── ...other platforms.../dataset/\n```\n\n## Tools\n\n### Primary: IDE Built-in Web Search\n- Free, unlimited, real-time — use as primary discovery tool\n- Best for: validation, niche platforms, recent events, quick checks\n- Always start with web search for discovery before Apify scraping\n\n### Secondary: Apify Agent Skills\n\nApify provides AI-native agent skills for web scraping and data extraction. Source: https://github.com/apify/agent-skills\n\n**Installation options (choose one):**\n\n1. **npx one-line install (recommended):**\n   ```bash\n   npx skills add apify/agent-skills\n   ```\n\n2. 
**Global install to skills directory:**\n   ```bash\n   # Claude Code\n   /plugin marketplace add https://github.com/apify/agent-skills\n   /plugin install apify-ultimate-scraper@apify-agent-skills\n   ```\n\n**Available Skills include:**\n- **Universal Scraper** — AI-powered general web scraper (Instagram, Facebook, TikTok, YouTube, Google Maps, and 50+ other platforms)\n- **E-Commerce** — E-commerce data collection (Amazon, eBay price intelligence, reviews, product research)\n- **Social Media Analytics** — Audience analysis, content analysis, influencer discovery, trend analysis\n- **Competitor Analysis** — Competitive analysis, brand reputation monitoring, market research\n- **Lead Generation** — B2B/B2C lead collection (Google Maps, LinkedIn, etc.)\n\n**Environment requirements:**\n- Node.js 20.6+\n- `APIFY_TOKEN` environment variable (set in `.env` file or shell environment)\n\n**Decision tree:**\n```\nNeed data? → Apify skill available?\n  YES → Try Apify first → Failed? → Web Search fallback\n  NO  → Web Search (primary)\n```\n\nSee `references/failure-recovery.md` for detailed fallback strategies.\n\n## 5-Layer Query Strategy (Summary)\n\n| Layer | Purpose | Example |\n|-------|---------|---------|\n| 1. Direct | Brand/product mentions | `\"<product_name>\" review` |\n| 2. Problem-space | Pain points (no brand) | `\"smart device not connecting\"` |\n| 3. Competitor | Competitive products | `\"<product_name> vs <competitor>\"` |\n| 4. Use-case | Target users/scenarios | `r/<community> smart device` |\n| 5. 
Long-tail | Seasonal, niche, integrations | `\"<product_name> winter setup\"` |\n\nFull details in `references/query-strategy.md`.\n\n## Analysis Approach (Summary)\n\n### What LLM Does (ALL analysis):\n- Reads raw JSON files directly (`reference/*/dataset/*.json`)\n- Processes EVERY item — no sampling\n- Identifies patterns, extracts quotes with URLs\n- Performs sentiment analysis by reading content\n- Tags items semantically (pain_points, feature_requests, sentiment, user_type)\n- Tags **feature-level** data: features_mentioned, feature_sentiment, is_noise\n- Saves tagged data to `reference/tagged/`\n\n### What Python Does (ONLY counting):\n- Counts tag frequencies from LLM-tagged data\n- Calculates percentages and distributions\n- Generates cross-tabulations (e.g., pain_points by user_type)\n- **Calculates per-feature metrics**: mention rate, positive rate, negative rate, avg rating\n- **Excludes noise items** from feature quality metrics (e.g., pure subscription complaints)\n- Creates visualizations (charts, tables)\n\n**Python does NOT**: read raw data, find patterns, extract quotes, or do sentiment analysis.\n\n### Feature-Level Analysis (for product comparison):\n- Per-feature metrics: mention rate, positive rate, negative rate, avg rating\n- Noise filtering: exclude pricing/subscription complaints from feature quality metrics\n- Formulas: `positive_rate = positive_count / (valid_mentions - noise_count)`, `negative_rate = quality_negative_count / (valid_mentions - noise_count)`\n- Competitive benchmarking: per-feature head-to-head tables with pp differences\n\nFull details in `references/analysis-methodology.md`.\n\n## Report Format (Summary)\n\n- **Language**: Chinese analysis, English quotes preserved verbatim\n- **Location**: Save as `.md` in `docs/`\n- **Structure**: Executive Summary → Research Strategy → Methodology → Findings → Competitive Intelligence → Sentiment Analysis → User Personas → Recommendations\n- **Every claim** must have inline 
references with clickable URLs\n- **Every quote** in original English with source link\n\nFull template in `references/report-template.md`.\n\n## Quality Checklist\n\n### Data Collection\n- [ ] Checked cache and existing runs; evaluated relevance\n- [ ] **Gap analysis completed** (MANDATORY)\n  - [ ] Mapped to 5-layer strategy\n  - [ ] Identified platform gaps (target: 3+ of 8 major platforms)\n  - [ ] Identified query angle gaps by layer\n  - [ ] Coverage percentage calculated\n- [ ] Built 5-10+ NEW queries to fill gaps\n- [ ] Failed runs analyzed and recovered (see `references/failure-recovery.md`)\n- [ ] Final dataset: existing + new ≥ 100 data points\n\n### Analysis\n- [ ] LLM read ALL raw JSON files (every item, no sampling)\n- [ ] Semantic tagging completed (not keyword matching)\n- [ ] Python tag counting completed (frequencies, percentages)\n- [ ] Findings quantified (%, counts, averages, sample sizes)\n- [ ] All quotes have clickable source URLs\n- [ ] Cross-platform validation performed\n- [ ] Iterative data collection documented if performed\n- [ ] **Feature-level analysis** (if comparing products):\n  - [ ] Feature keyword sets defined\n  - [ ] Feature-level sentiment tagged (per-feature positive/negative)\n  - [ ] Noise items identified and filtered (subscription/pricing complaints)\n  - [ ] Per-feature metrics calculated (mention rate, positive rate, negative rate, avg rating)\n  - [ ] Competitive benchmarking table with pp differences\n\n### Report\n- [ ] Written in Chinese, English quotes preserved\n- [ ] Research strategy section explains query logic\n- [ ] Each dataset mapped to strategy layer\n- [ ] Gap analysis documented\n- [ ] Python scripts in appendix (if used)\n- [ ] Saved as `.md` in `docs/`\n\n## Examples\n\n**Good Example — Semantic tagging by LLM:**\nUser review: \"I gave up trying to connect it after 2 hours\"\n→ LLM tags: `connectivity_issue`, `setup_difficult`, sentiment: `negative`\n→ Python counts: connectivity_issue appears 47 
times (23% of total)\n\n**Bad Example — Python keyword matching:**\nUser review: \"I gave up trying to connect it after 2 hours\"\n→ Python: `if \"connect\" in text: tag = \"connectivity\"` ← misses context, no sentiment, no semantic understanding\n→ Misses nuance: \"gave up\" signals frustration; \"2 hours\" signals severity\n\n**Good Example — Multi-platform gap analysis:**\nAfter initial Reddit scrape (Layer 1), analyst identifies: no Amazon reviews (Layer 1), no competitor data (Layer 3), no problem-space queries (Layer 2). Builds 8 new queries to fill gaps before proceeding to analysis.\n\n**Bad Example — Single-source analysis:**\nAnalyst scrapes one Reddit thread, finds 15 comments, writes a report claiming \"users love the product\" — no gap analysis, no cross-platform validation, insufficient sample size.\n\n## Golden Rules\n\n1. **VOC = LLM reads ALL raw JSON** — never copy-paste or pre-filter\n2. **Gap analysis is MANDATORY** — don't skip to analysis without checking coverage\n3. **Existing data is never enough** — always build new queries\n4. **Research is iterative** — collect more data when questions arise mid-analysis\n5. **LLM semantic tagging → Python tag counting** — never use Python keyword matching\n6. **Process ENTIRE datasets** — thousands of items, not samples\n7. **Every claim needs a number and a source link**\n8. 
**Feature-level analysis for comparisons** — compute per-feature mention rate, positive rate, negative rate; filter noise (subscription complaints) from quality metrics; always show sample size (N=X) with percentages","tags":["voc","analysis","enterprise","harness","engineering","addxai","agent-skills","ai-agent","ai-engineering","claude-code","code-review","cursor"],"capabilities":["skill","source-addxai","skill-voc-analysis","topic-agent-skills","topic-ai-agent","topic-ai-engineering","topic-claude-code","topic-code-review","topic-cursor","topic-devops","topic-enterprise","topic-sre","topic-windsurf"],"categories":["enterprise-harness-engineering"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/addxai/enterprise-harness-engineering/voc-analysis","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add addxai/enterprise-harness-engineering","source_repo":"https://github.com/addxai/enterprise-harness-engineering","install_from":"skills.sh"}},"qualityScore":"0.458","qualityRationale":"deterministic score 0.46 from registry signals: · indexed on github topic:agent-skills · 16 github stars · SKILL.md body (13,118 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T01:02:13.410Z","embedding":null,"createdAt":"2026-04-21T19:04:02.986Z","updatedAt":"2026-04-22T01:02:13.410Z","lastSeenAt":"2026-04-22T01:02:13.410Z","prices":[{"id":"9a866458-9c90-4917-99c2-9761aed194ba","listingId":"f79d9429-a90f-4d8d-964b-1d5f4b538209","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"addxai","category":"enterprise-harness-engineering","install_from":"skills.sh"},"createdAt":"2026-04-21T19:04:02.986Z"}],"sources":[{"listingId":"f79d9429-a90f-4d8d-964b-1d5f4b538209","source":"github","sourceId":"addxai/enterprise-harness-engineering/voc-analysis","sourceUrl":"https://github.com/addxai/enterprise-harness-engineering/tree/main/skills/voc-analysis","isPrimary":false,"firstSeenAt":"2026-04-21T19:04:02.986Z","lastSeenAt":"2026-04-22T01:02:13.410Z"}],"details":{"listingId":"f79d9429-a90f-4d8d-964b-1d5f4b538209","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"addxai","slug":"voc-analysis","github":{"repo":"addxai/enterprise-harness-engineering","stars":16,"topics":["agent-skills","ai-agent","ai-engineering","claude-code","code-review","cursor","devops","enterprise","sre","windsurf"],"license":"apache
-2.0","html_url":"https://github.com/addxai/enterprise-harness-engineering","pushed_at":"2026-04-17T08:57:37Z","description":"Enterprise-grade AI Agent Skills for software development, DevOps, SRE, security, and product teams. Compatible with Claude Code, Cursor, Windsurf, Gemini CLI, GitHub Copilot, and 30+ AI coding agents.","skill_md_sha":"ed4db6fcc6e0b0b2e04b6e5839795ea6544e0995","skill_md_path":"skills/voc-analysis/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/addxai/enterprise-harness-engineering/tree/main/skills/voc-analysis"},"layout":"multi","source":"github","category":"enterprise-harness-engineering","frontmatter":{"name":"voc-analysis","description":"VOC (Voice of Customer) analysis and competitive intelligence research. Collects user feedback from Reddit, Twitter, Amazon, App Store, YouTube, Facebook Groups, Discord, and other platforms via Apify Agent Skills and Web Search, performs semantic tagging, pain point mining, sentiment analysis, and competitive comparison, and generates data-driven market insight reports. Triggers when the user mentions VOC analysis, user feedback analysis, market research, competitive analysis, product review analysis, pain point analysis, customer feedback, competitive intelligence, customer sentiment, review analysis, or needs to collect and analyze user voices from multiple platforms. Should also trigger even if the user just says \"check how users rate this product\" or \"analyze competitors\"."},"skills_sh_url":"https://skills.sh/addxai/enterprise-harness-engineering/voc-analysis"},"updatedAt":"2026-04-22T01:02:13.410Z"}}