{"id":"38b79471-de7f-4427-a9c1-649fe34149dd","shortId":"pFg9xS","kind":"skill","title":"gh-issue-to-demand-signal","tagline":"Takes a competitor's public GitHub repo URL, fetches their open issues via the GitHub REST API, filters noise locally, clusters issues into 6 demand categories, computes a demand score per issue and per cluster, and outputs a ranked demand gap report with a GTM messaging brief. U","description":"# GitHub Issue Demand Signal\n\nTake a competitor's public GitHub repo. Fetch their open issues. Filter noise locally. Cluster into 6 demand categories. Score by real engagement. Output a ranked demand gap report and GTM messaging brief.\n\n---\n\n**Critical rule:** Every issue title in the output must be verbatim from the GitHub API response. Every cluster theme name must be derived from actual issue titles in that cluster. If fewer than 10 issues remain after noise filtering, stop and tell the user -- the repo is too small for reliable clustering. No invented issue content anywhere.\n\n---\n\n## Common Mistakes\n\n| The agent will want to... | Why that's wrong |\n|---|---|\n| Send all 200 raw issues to the AI without filtering | Bot issues, PRs, and zero-engagement noise inflate cluster counts and waste context. Filter locally first. |\n| Use comment count as the primary demand signal | Comments include maintainer responses, off-topic discussion, and spam. reactions[\"+1\"] is the cleanest buyer signal. |\n| Paraphrase issue titles when summarizing clusters | Paraphrasing loses the buyer's exact language, which is the entire point. Use verbatim issue titles. |\n| Continue past Step 4 if fewer than 10 issues remain after filtering | Under 10 issues means the repo is too small or the wrong URL was given. Clustering on sparse data produces meaningless categories. |\n| Include pull requests in the analysis | The GitHub Issues endpoint returns PRs too. Filter by checking that the pull_request key is absent on the issue object. 
|\n| Mark an issue as ignored demand without checking all 3 criteria | All three must be true: reactions >= 10, age >= 180 days, no planned/in-progress/roadmap label. Missing one criterion disqualifies the issue. |\n\n---\n\n## Step 1: Setup Check\n\n```bash\necho \"GITHUB_TOKEN: ${GITHUB_TOKEN:-not set, unauthenticated rate limit applies (60 req/hr)}\"\n```\n\n**If GITHUB_TOKEN is not set:** Continue. Tell the user: \"GITHUB_TOKEN is not set. Unauthenticated rate limit is 60 requests/hour -- each run makes 2 requests, so roughly 30 runs per hour. For repeated use, add a token at github.com/settings/tokens (no scopes needed for public repos).\"\n\n---\n\n## Step 2: Gather Input\n\nYou need:\n- GitHub repo URL (e.g. https://github.com/owner/repo) or owner/repo slug (e.g. facebook/react)\n\nParse owner and repo from input:\n\n```bash\npython3 << 'PYEOF'\nimport re, sys\n\nraw = \"REPO_INPUT_HERE\"\n\n# Normalize to owner/repo\nif raw.startswith(\"http\"):\n    m = re.search(r\"github\\.com/([^/]+)/([^/?\\s]+)\", raw)\n    if not m:\n        print(\"ERROR: Could not parse GitHub URL. Expected format: https://github.com/owner/repo\")\n        sys.exit(1)\n    # Drop trailing slash and optional .git suffix (needs Python 3.9+)\n    owner, repo = m.group(1), m.group(2).rstrip(\"/\").removesuffix(\".git\")\nelif \"/\" in raw:\n    parts = raw.strip().split(\"/\")\n    owner, repo = parts[0], parts[1]\nelse:\n    print(\"ERROR: Input must be a GitHub URL or owner/repo slug (e.g. vercel/next.js)\")\n    sys.exit(1)\n\nprint(f\"Owner: {owner}\")\nprint(f\"Repo: {repo}\")\n\nwith open(\"/tmp/ghd-target.txt\", \"w\") as f:\n    f.write(f\"{owner}/{repo}\")\nPYEOF\n```\n\n**If parsing fails:** Stop. Ask: \"Please provide the GitHub repo as a URL (https://github.com/owner/repo) or an owner/repo slug (e.g. vercel/next.js).\"\n\n---\n\n## Step 3: Fetch Issues from GitHub REST API\n\nFetch up to 200 issues (2 pages of 100). 
Check rate limit after the first fetch.\n\n```bash\npython3 << 'PYEOF'\nimport json, urllib.request, urllib.error, os, sys\nfrom datetime import datetime, timezone\n\ntarget = open(\"/tmp/ghd-target.txt\").read().strip()\nowner_repo = target\ntoken = os.environ.get(\"GITHUB_TOKEN\", \"\")\n\nheaders = {\"Accept\": \"application/vnd.github+json\", \"User-Agent\": \"gh-issue-demand-signal/1.0\"}\nif token:\n    headers[\"Authorization\"] = f\"Bearer {token}\"\n\nall_issues = []\n\nfor page in [1, 2]:\n    url = f\"https://api.github.com/repos/{owner_repo}/issues?state=open&per_page=100&page={page}\"\n    req = urllib.request.Request(url, headers=headers)\n\n    try:\n        with urllib.request.urlopen(req, timeout=30) as resp:\n            # Check rate limit after first page. 404/403 raise HTTPError and\n            # are handled in the except block below.\n            if page == 1:\n                remaining = int(resp.headers.get(\"X-RateLimit-Remaining\", 999))\n                reset_ts = resp.headers.get(\"X-RateLimit-Reset\", \"\")\n                if remaining == 0:\n                    reset_str = datetime.fromtimestamp(int(reset_ts), tz=timezone.utc).strftime(\"%H:%M UTC\") if reset_ts else \"unknown\"\n                    print(f\"ERROR: GitHub rate limit exhausted. Resets at {reset_str}.\")\n                    print(\"Add GITHUB_TOKEN to your .env file to get 5000 req/hr. See github.com/settings/tokens (no scopes needed).\")\n                    sys.exit(1)\n                print(f\"Rate limit remaining: {remaining}\")\n\n            page_data = json.loads(resp.read())\n            if not page_data:\n                print(f\"Page {page}: empty, stopping.\")\n                break\n            all_issues.extend(page_data)\n            print(f\"Page {page}: {len(page_data)} issues fetched\")\n    except urllib.error.HTTPError as e:\n        if e.code == 404:\n            print(f\"ERROR: Repo '{owner_repo}' not found (404). Check the URL or slug.\")\n        elif e.code == 403:\n            print(f\"ERROR: Access denied (403). Repo may be private or rate limit hit.\")\n        else:\n            print(f\"ERROR: GitHub API returned HTTP {e.code}\")\n        sys.exit(1)\n    except Exception as e:\n        print(f\"ERROR: Failed to fetch page {page}: {e}\")\n        sys.exit(1)\n\nprint(f\"Total raw issues fetched: {len(all_issues)}\")\njson.dump(all_issues, open(\"/tmp/ghd-raw-issues.json\", \"w\"), indent=2)\nPYEOF\n```\n\n**If GitHub returns 404:** Stop. Tell the user: \"Repo not found. Check the URL or slug and try again. Private repos are not accessible without authentication and explicit repo scope.\"\n\n**If GitHub returns 403 with rate limit header:** Stop. Show the reset time and tell the user to add GITHUB_TOKEN.\n\n---\n\n## Step 4: Pre-Process Locally -- Filter, Score, Detect Ignored Demand\n\nNo API call. Pure Python. 
Run before anything goes to the AI.\n\n```bash\npython3 << 'PYEOF'\nimport json, re\nfrom datetime import datetime, timezone\n\nraw = json.load(open(\"/tmp/ghd-raw-issues.json\"))\ntarget = open(\"/tmp/ghd-target.txt\").read().strip()\nnow = datetime.now(tz=timezone.utc)\n\nnoise_patterns = re.compile(\n    r\"^(chore|deps|bump|renovate|dependabot|release|ci|build|revert)[\\s:\\[]\",\n    re.IGNORECASE\n)\npr_title_patterns = re.compile(\n    r\"^(feat|fix|refactor|docs|test|style|perf|chore)(\\(.+\\))?:\",\n    re.IGNORECASE\n)\n\nfiltered = []\nnoise_count = 0\nnoise_reasons = {}\n\nfor issue in raw:\n    # Skip pull requests (GitHub Issues endpoint returns PRs too)\n    if \"pull_request\" in issue:\n        noise_count += 1\n        noise_reasons[\"pull_request\"] = noise_reasons.get(\"pull_request\", 0) + 1\n        continue\n\n    title = issue.get(\"title\", \"\")\n    reactions = issue.get(\"reactions\", {}).get(\"+1\", 0)\n    comments = issue.get(\"comments\", 0)\n    user_type = (issue.get(\"user\") or {}).get(\"type\", \"User\")\n\n    # Skip bot-authored issues\n    if user_type == \"Bot\":\n        noise_count += 1\n        noise_reasons[\"bot_author\"] = noise_reasons.get(\"bot_author\", 0) + 1\n        continue\n\n    # Skip bot-pattern titles\n    if noise_patterns.match(title):\n        noise_count += 1\n        noise_reasons[\"bot_title\"] = noise_reasons.get(\"bot_title\", 0) + 1\n        continue\n\n    # Skip PR-as-issue titles\n    if pr_title_patterns.match(title):\n        noise_count += 1\n        noise_reasons[\"pr_as_issue\"] = noise_reasons.get(\"pr_as_issue\", 0) + 1\n        continue\n\n    # Skip zero-signal issues\n    if reactions == 0 and comments == 0:\n        noise_count += 1\n        noise_reasons[\"zero_signal\"] = noise_reasons.get(\"zero_signal\", 0) + 1\n        continue\n\n    # Compute demand score\n    demand_score = (reactions * 2) + (comments * 0.5)\n\n    # Detect ignored demand\n    created_at = 
issue.get(\"created_at\", \"\")\n    if created_at:\n        created = datetime.fromisoformat(created_at.replace(\"Z\", \"+00:00\"))\n        age_days = (now - created).days\n    else:\n        age_days = 0\n\n    labels = [l.get(\"name\", \"\").lower() for l in issue.get(\"labels\", [])]\n    has_planned_label = any(\n        kw in label for label in labels\n        for kw in [\"in-progress\", \"planned\", \"roadmap\", \"wip\", \"in progress\"]\n    )\n\n    ignored_demand = (\n        reactions >= 10 and\n        age_days >= 180 and\n        not has_planned_label\n    )\n\n    filtered.append({\n        \"number\": issue[\"number\"],\n        \"title\": title,\n        \"url\": issue.get(\"html_url\", f\"https://github.com/{target}/issues/{issue['number']}\"),\n        \"reactions_plus1\": reactions,\n        \"comments\": comments,\n        \"demand_score\": demand_score,\n        \"age_days\": age_days,\n        \"labels\": labels,\n        \"ignored_demand\": ignored_demand,\n        \"body_snippet\": (issue.get(\"body\") or \"\")[:300]\n    })\n\n# Sort by demand score descending\nfiltered.sort(key=lambda x: x[\"demand_score\"], reverse=True)\n\nprint(f\"Raw issues: {len(raw)}\")\nprint(f\"Noise filtered: {noise_count} ({', '.join(f'{k}: {v}' for k, v in noise_reasons.items())})\")\nprint(f\"Issues for analysis: {len(filtered)}\")\n\nif len(filtered) < 10:\n    print(f\"ERROR: Only {len(filtered)} issues remain after filtering. This repo has too few engaged issues for reliable clustering.\")\n    print(\"Try a larger repo or a repo with more community engagement.\")\n    import sys; sys.exit(1)\n\n# Ignored demand summary\nignored = [i for i in filtered if i[\"ignored_demand\"]]\nprint(f\"Ignored demand issues: {len(ignored)}\")\n\njson.dump(filtered, open(\"/tmp/ghd-filtered-issues.json\", \"w\"), indent=2)\nprint(\"Pre-processing complete.\")\nPYEOF\n```\n\n**If fewer than 10 issues remain after filtering:** Stop. 
Tell the user exactly how many issues were found and filtered, and why the repo is too small for reliable demand clustering.\n\n---\n\n## Step 5: Cluster Issues\n\nPrint the filtered issues for analysis:\n\n```bash\npython3 << 'PYEOF'\nimport json\nfiltered = json.load(open(\"/tmp/ghd-filtered-issues.json\"))\ntarget = open(\"/tmp/ghd-target.txt\").read().strip()\n\nissue_list = filtered[:150]\nprint(f\"Repo: {target}\")\nprint(f\"Issues to cluster: {len(issue_list)}\")\nprint()\nfor i in issue_list:\n    labels_str = f\" [{', '.join(i['labels'][:3])}]\" if i['labels'] else \"\"\n    print(f\"#{i['number']} [score:{round(i['demand_score'],1)} reactions:{i['reactions_plus1']}] {i['title']}{labels_str}\")\nPYEOF\n```\n\nClassify each issue printed above into one of these 6 categories:\n`feature_gap`, `bug_pattern`, `ux_complaint`, `performance`, `integration_missing`, `docs_missing`\n\nRules:\n- Classify each issue into exactly one category\n- Extract a 1-sentence pain statement using the user's exact language from the title -- do not paraphrase\n- Identify 5-8 cluster themes: short phrases (3-6 words) capturing dominant complaint patterns across all issues\n- No em dashes. 
No marketing language.\n\nWrite your analysis to `/tmp/ghd-clusters.json` with this exact structure:\n\n```json\n{\n  \"classified_issues\": [\n    {\"number\": 123, \"category\": \"feature_gap\", \"pain_statement\": \"Users need X which does not exist yet\"}\n  ],\n  \"cluster_themes\": [\n    {\"theme_name\": \"Missing export options\", \"category\": \"feature_gap\", \"issue_numbers\": [123, 456, 789]}\n  ],\n  \"category_counts\": {\"feature_gap\": 5, \"bug_pattern\": 3, \"ux_complaint\": 4, \"performance\": 2, \"integration_missing\": 6, \"docs_missing\": 1}\n}\n```\n\nAfter writing the file, confirm with:\n\n```bash\npython3 -c \"\nimport json\nd = json.load(open('/tmp/ghd-clusters.json'))\nprint(f'Classified: {len(d[\\\"classified_issues\\\"])} issues, {len(d[\\\"cluster_themes\\\"])} themes')\nprint('Categories:', d['category_counts'])\n\"\n```\n\n---\n\n## Step 6: Messaging Brief\n\nCompute total demand score per cluster and print the top 3:\n\n```bash\npython3 << 'PYEOF'\nimport json\n\nfiltered = json.load(open(\"/tmp/ghd-filtered-issues.json\"))\nclusters = json.load(open(\"/tmp/ghd-clusters.json\"))\ntarget = open(\"/tmp/ghd-target.txt\").read().strip()\n\ndemand_by_issue = {i[\"number\"]: i[\"demand_score\"] for i in filtered}\nissue_titles = {i[\"number\"]: i[\"title\"] for i in filtered}\nissue_reactions = {i[\"number\"]: i[\"reactions_plus1\"] for i in filtered}\n\nenriched_themes = []\nfor theme in clusters.get(\"cluster_themes\", []):\n    issue_nums = theme.get(\"issue_numbers\", [])\n    total_demand = sum(demand_by_issue.get(n, 0) for n in issue_nums)\n    top_issues = sorted(issue_nums, key=lambda n: demand_by_issue.get(n, 0), reverse=True)[:3]\n    enriched_themes.append({\n        \"theme_name\": theme[\"theme_name\"],\n        \"category\": theme[\"category\"],\n        \"issue_count\": len(issue_nums),\n        \"total_demand_score\": round(total_demand, 1),\n        \"top_issues\": [\n            {\"number\": n, \"title\": 
issue_titles.get(n, \"\"), \"reactions\": issue_reactions.get(n, 0)}\n            for n in top_issues\n        ]\n    })\n\nenriched_themes.sort(key=lambda x: x[\"total_demand_score\"], reverse=True)\njson.dump(enriched_themes, open(\"/tmp/ghd-enriched-themes.json\", \"w\"), indent=2)\n\nprint(f\"Top 3 clusters for messaging brief (repo: {target}):\")\nfor t in enriched_themes[:3]:\n    print(f\"\\n  {t['theme_name']} ({t['category']}) -- total demand: {t['total_demand_score']}\")\n    for ti in t[\"top_issues\"]:\n        print(f\"    #{ti['number']}: \\\"{ti['title']}\\\" ({ti['reactions']} reactions)\")\nPYEOF\n```\n\nGenerate a GTM messaging brief from the top 3 clusters printed above.\n\nRules:\n- Each positioning angle must cite the specific cluster it comes from\n- Each outreach hook must quote a verbatim issue title in quotation marks\n- Headlines must include a number or specific named pain -- no generic statements\n- No em dashes. No forbidden words: powerful, robust, seamless, innovative, game-changing, streamline, leverage, transform\n\nWrite your brief to `/tmp/ghd-brief.json` with this exact structure:\n\n```json\n{\n  \"positioning_angles\": [\n    {\n      \"angle_name\": \"3-5 word label\",\n      \"cluster_source\": \"theme_name from cluster\",\n      \"positioning_statement\": \"2-3 sentences on what your product does that this competitor does not\",\n      \"evidence\": \"verbatim issue title that best illustrates this gap\"\n    }\n  ],\n  \"outreach_hooks\": [\n    {\n      \"hook_type\": \"pain quote hook\",\n      \"hook_text\": \"2-3 sentences quoting a verbatim issue title in quotes\",\n      \"best_for\": \"audience this hook works for\"\n    }\n  ],\n  \"cluster_headlines\": [\n    {\n      \"theme_name\": \"from the cluster\",\n      \"headline\": \"specific headline with a number or named pain\",\n      \"sub_copy\": \"1 sentence expanding the headline\"\n    }\n  ]\n}\n```\n\nAfter writing the file, confirm with:\n\n```bash\npython3 
-c \"\nimport json\nd = json.load(open('/tmp/ghd-brief.json'))\nprint('Positioning angles:', len(d.get('positioning_angles', [])))\nprint('Outreach hooks:', len(d.get('outreach_hooks', [])))\nprint('Cluster headlines:', len(d.get('cluster_headlines', [])))\n\"\n```\n\n---\n\n## Step 7: Self-QA\n\nRun before presenting. Verify evidence. Remove violations. Check output integrity.\n\n```bash\npython3 << 'PYEOF'\nimport json\n\nfiltered = json.load(open(\"/tmp/ghd-filtered-issues.json\"))\nclusters = json.load(open(\"/tmp/ghd-clusters.json\"))\nthemes = json.load(open(\"/tmp/ghd-enriched-themes.json\"))\nbrief = json.load(open(\"/tmp/ghd-brief.json\"))\ntarget = open(\"/tmp/ghd-target.txt\").read().strip()\n\nfailures = []\nreal_titles = {i[\"number\"]: i[\"title\"] for i in filtered}\n\n# Verify: classified_issues only reference real issue numbers\nreal_numbers = set(real_titles.keys())\nhallucinated = [\n    c[\"number\"] for c in clusters.get(\"classified_issues\", [])\n    if c[\"number\"] not in real_numbers\n]\nif hallucinated:\n    failures.append(f\"Removed {len(hallucinated)} hallucinated issue numbers from classified_issues: {hallucinated[:5]}\")\n    clusters[\"classified_issues\"] = [\n        c for c in clusters.get(\"classified_issues\", [])\n        if c[\"number\"] in real_numbers\n    ]\n\n# Verify: top-10 list is sorted by demand_score descending\ntop10 = filtered[:10]\nfor i, issue in enumerate(top10):\n    if i > 0 and issue[\"demand_score\"] > top10[i-1][\"demand_score\"]:\n        failures.append(\"Top-10 list was not sorted by demand_score -- re-sorted.\")\n        filtered.sort(key=lambda x: x[\"demand_score\"], reverse=True)\n        top10 = filtered[:10]\n        break\n\n# Verify: ignored demand issues meet all 3 criteria\nignored = [i for i in filtered if i[\"ignored_demand\"]]\nfor issue in ignored:\n    if issue[\"reactions_plus1\"] < 10 or issue[\"age_days\"] < 180:\n        issue[\"ignored_demand\"] = False\n        
failures.append(f\"Removed issue #{issue['number']} from ignored demand -- did not meet all 3 criteria\")\n\n# Check messaging brief counts\nif len(brief.get(\"positioning_angles\", [])) != 3:\n    failures.append(f\"Expected 3 positioning angles, got {len(brief.get('positioning_angles', []))}\")\nif len(brief.get(\"outreach_hooks\", [])) != 3:\n    failures.append(f\"Expected 3 outreach hooks, got {len(brief.get('outreach_hooks', []))}\")\nif len(brief.get(\"cluster_headlines\", [])) != 3:\n    failures.append(f\"Expected 3 cluster headlines, got {len(brief.get('cluster_headlines', []))}\")\n\n# Check for em dashes in brief\nbrief_str = json.dumps(brief)\nif \"\\u2014\" in brief_str:\n    brief_str = brief_str.replace(\"\\u2014\", \" - \")\n    brief = json.loads(brief_str)\n    failures.append(\"Fixed: em dash characters removed from messaging brief\")\n\n# Check for forbidden words\nforbidden = [\"powerful\", \"robust\", \"seamless\", \"innovative\", \"game-changing\", \"streamline\", \"leverage\", \"transform\"]\nfull_text = (json.dumps(clusters) + json.dumps(brief)).lower()\nfor word in forbidden:\n    if word in full_text:\n        failures.append(f\"Warning: forbidden word '{word}' found in output -- review before presenting\")\n\n# Build final output bundle\noutput = {\n    \"repo\": target,\n    \"issues_analyzed\": len(filtered),\n    \"clusters\": clusters,\n    \"enriched_themes\": themes,\n    \"filtered_issues\": filtered,\n    \"messaging_brief\": brief,\n    \"data_quality_flags\": failures\n}\n\njson.dump(output, open(\"/tmp/ghd-output.json\", \"w\"), indent=2)\nprint(f\"QA complete. 
Issues addressed: {len(failures)}\")\nfor f in failures:\n    print(f\"  - {f}\")\nif not failures:\n    print(\"All QA checks passed.\")\nPYEOF\n```\n\n---\n\n## Step 8: Save and Present Output\n\n```bash\npython3 << 'PYEOF'\nimport json\nfrom datetime import datetime, timezone\n\noutput = json.load(open(\"/tmp/ghd-output.json\"))\ntarget = output[\"repo\"]\nrepo_slug = target.replace(\"/\", \"-\")\ndate_str = datetime.now(tz=timezone.utc).strftime(\"%Y-%m-%d\")\n\nfiltered = output[\"filtered_issues\"]\nthemes = output[\"enriched_themes\"]\nclusters = output[\"clusters\"]\nbrief = output[\"messaging_brief\"]\nflags = output[\"data_quality_flags\"]\n\nignored = [i for i in filtered if i.get(\"ignored_demand\")]\ntop10 = filtered[:10]\n\n# Build category summary from cluster data\ncategory_counts = clusters.get(\"category_counts\", {})\n\nlines = [\n    f\"## Demand Gap Report: {target}\",\n    f\"Issues analyzed: {output['issues_analyzed']} | Date: {date_str}\",\n    \"\",\n    \"---\",\n    \"\",\n    \"### Demand Gap Leaderboard\",\n    \"\",\n    \"| Rank | Theme | Category | Issues | Total Demand Score | Top Issue Reactions |\",\n    \"|---|---|---|---|---|---|\",\n]\n\nfor i, theme in enumerate(themes[:8], 1):\n    top_reactions = theme[\"top_issues\"][0][\"reactions\"] if theme[\"top_issues\"] else 0\n    lines.append(\n        f\"| {i} | {theme['theme_name']} | {theme['category']} | \"\n        f\"{theme['issue_count']} | {theme['total_demand_score']} | {top_reactions} |\"\n    )\n\nlines += [\"\", \"---\", \"\"]\n\nif ignored:\n    lines += [\n        \"### Ignored Demand (High Reactions, No Maintainer Response)\",\n        \"\",\n        \"These issues have 10+ reactions, are 6+ months old, and have no planned/in-progress label.\",\n        \"This is your opportunity window.\",\n        \"\",\n    ]\n    for issue in ignored[:10]:\n        lines.append(\n            f\"- [{issue['title']}]({issue['url']}) -- \"\n            f\"{issue['reactions_plus1']} 
reactions, {issue['age_days']} days old\"\n        )\n    lines += [\"\", \"---\", \"\"]\n\nlines += [\n    \"### Top 10 Highest-Demand Issues\",\n    \"\",\n    \"| Rank | Issue | Reactions | Comments | Demand Score | Link |\",\n    \"|---|---|---|---|---|---|\",\n]\nfor i, issue in enumerate(top10, 1):\n    short_title = issue[\"title\"][:70] + (\"...\" if len(issue[\"title\"]) > 70 else \"\")\n    lines.append(\n        f\"| {i} | {short_title} | {issue['reactions_plus1']} | \"\n        f\"{issue['comments']} | {round(issue['demand_score'], 1)} | \"\n        f\"[#{issue['number']}]({issue['url']}) |\"\n    )\n\nlines += [\"\", \"---\", \"\", \"### Cluster Deep Dives\", \"\"]\n\nfor theme in themes[:3]:\n    lines.append(f\"#### {theme['theme_name']}\")\n    lines.append(f\"Category: {theme['category']} | Issues: {theme['issue_count']} | Total demand score: {theme['total_demand_score']}\")\n    lines.append(\"\")\n    lines.append(\"Top issues in this cluster:\")\n    for ti in theme[\"top_issues\"]:\n        lines.append(f\"- \\\"{ti['title']}\\\" -- {ti['reactions']} reactions\")\n    lines.append(\"\")\n\nlines += [\"---\", \"\", \"### Messaging Brief\", \"\"]\n\nfor angle in brief.get(\"positioning_angles\", []):\n    lines.append(f\"**{angle.get('angle_name', 'Angle')}**\")\n    lines.append(angle.get(\"positioning_statement\", \"\"))\n    lines.append(f\"Evidence: \\\"{angle.get('evidence', '')}\\\"\")\n    lines.append(\"\")\n\nlines += [\"---\", \"\", \"### GTM Angles\", \"\"]\n\nfor hook in brief.get(\"outreach_hooks\", []):\n    lines.append(f\"**{hook.get('hook_type', 'Hook')}**\")\n    lines.append(hook.get(\"hook_text\", \"\"))\n    lines.append(f\"Best for: {hook.get('best_for', '')}\")\n    lines.append(\"\")\n\nlines += [\"---\", \"\"]\nif flags:\n    lines.append(f\"Data quality notes: {'; '.join(flags)}\")\nelse:\n    lines.append(\"Data quality notes: None\")\n\noutput_path = f\"docs/demand-signals/{repo_slug}-{date_str}.md\"\nimport 
os\nos.makedirs(\"docs/demand-signals\", exist_ok=True)\nopen(output_path, \"w\").write(\"\\n\".join(lines))\nprint(f\"Saved to: {output_path}\")\n\n# Print to console\nprint(\"\\n\" + \"\\n\".join(lines))\nPYEOF\n```\n\nClean up temp files:\n\n```bash\nrm -f /tmp/ghd-target.txt /tmp/ghd-raw-issues.json /tmp/ghd-filtered-issues.json \\\n      /tmp/ghd-cluster-request.json /tmp/ghd-clusters.json /tmp/ghd-enriched-themes.json \\\n      /tmp/ghd-brief-request.json /tmp/ghd-brief.json /tmp/ghd-output.json\n```","tags":["issue","demand","signal","opendirectory","varnan-tech","agent-skills","gtm","hermes-agent","openclaw-skills","skills","technical-seo"],"capabilities":["skill","source-varnan-tech","skill-gh-issue-to-demand-signal","topic-agent-skills","topic-gtm","topic-hermes-agent","topic-openclaw-skills","topic-skills","topic-technical-seo"],"categories":["opendirectory"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/Varnan-Tech/opendirectory/gh-issue-to-demand-signal","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add Varnan-Tech/opendirectory","source_repo":"https://github.com/Varnan-Tech/opendirectory","install_from":"skills.sh"}},"qualityScore":"0.489","qualityRationale":"deterministic score 0.49 from registry signals: · indexed on github topic:agent-skills · 79 github stars · SKILL.md body (22,321 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T00:55:38.407Z","embedding":null,"createdAt":"2026-04-21T18:55:54.313Z","updatedAt":"2026-04-22T00:55:38.407Z","lastSeenAt":"2026-04-22T00:55:38.407Z","tsv":"'+00':1151 '+1':206,1022 '-1':2175 '-10':2149,2180 '-3':1930,1961 '-5':1918 '-6':1525 '-8':1519 '/1.0':590 '/issues':616,1219 
're.compile':951,967 're.ignorecase':963,977 're.search':430 'reaction':205,311,1018,1020,1109,1132,1195,1222,1224,1460,1462,1690,1694,1766,1836,1837,2228,2548,2558,2563,2587,2595,2603,2631,2633,2649,2678,2741,2742 'read':569,943,1415,1665,2075 'real':80,2078,2093,2096,2114,2145 'real_titles.keys':2099 'reason':983,1006,1049,1070,1092,1118 'refactor':971 'refer':2092 'releas':958 'reliabl':142,1311,1390 'remain':127,243,646,652,662,717,718,1300,1367 'remov':2046,2120,2242,2338 'renov':956 'repeat':373 'repo':13,65,137,251,387,395,409,420,454,467,494,495,505,516,572,615,730,732,779,781,799,859,871,879,1304,1317,1320,1385,1423,1801,2390,2464,2465,2816 'report':48,87,2525 'req':624,632 'req/hr':342,703 'request':270,287,990,999,1008,1011 'requests/hour':363 'reset':654,660,664,668,677,688,690,892 'resp':636 'resp.headers.get':648,656 'resp.read':745 'resp.status':723 'respons':107,198,2598 'rest':22,535 'return':278,813,853,883,994 'revers':1259,1735,1783,2198 'revert':961 'review':2382 'rm':2856 'roadmap':1189 'robust':1894,2348 'round':1455,1755,2683 'rstrip':459 'rule':93,1491,1851 'run':918,2041 'save':2444,2838 'scope':383,709,880 'score':36,78,909,1129,1131,1228,1230,1250,1258,1454,1458,1641,1674,1754,1782,1822,2155,2172,2177,2187,2197,2545,2585,2652,2686,2718,2722 'seamless':1895,2349 'see':704 'self':2039 'self-qa':2038 'send':160 'sentenc':1502,1931,1962,1996 'set':336,348,357,2098 'setup':327 'short':1522,2661,2675 'show':890 'signal':6,58,194,211,589,1106,1120,1123 'skill' 'skill-gh-issue-to-demand-signal' 'skip':988,1036,1058,1079,1103 'slug':403,483,526,739,789,866,2466,2817 'small':140,254,1388 'snippet':1242 'sort':1247,1726,2152,2184,2190 'sourc':1922 'source-varnan-tech' 'spam':204 'spars':263 'specif':1858,1881,1985 'split':465 'state':617 'statement':1504,1558,1886,1928,2762 'status':722,725 'step':236,325,388,529,902,1393,1634,2036,2442 'stop':131,510,755,855,889,1370 'str':665,691,1440,1467,2317,2324,2326,2332,2469,2535,2819 'streamlin':1900,2354 
'strftime':672,2473 'strip':570,944,1416,1666,2076 'structur':1548,1911 'style':974 'sub':1993 'sum':1715 'summar':216 'summari':1331,2512 'sys':417,560,1326 'sys.exit':451,486,711,740,816,831,1327 'take':7,59 'target':566,573,940,1218,1412,1424,1662,1802,2072,2391,2462,2526 'target.replace':2467 'tell':133,350,856,895,1371 'temp':2853 'test':973 'text':1959,2358,2372,2787 'theme':110,1521,1568,1569,1627,1628,1701,1703,1707,1739,1741,1742,1745,1787,1807,1813,1923,1979,2064,2399,2400,2481,2484,2540,2551,2554,2559,2565,2573,2574,2576,2579,2582,2698,2700,2704,2705,2710,2713,2719,2733 'theme.get':1710 'three':307 'ti':1824,1831,1833,1835,2731,2738,2740 'time':893 'timeout':633 'timezon':565,935,2457 'timezone.utc':671,948,2472 'titl':96,118,214,233,965,1015,1017,1062,1065,1072,1075,1084,1087,1210,1211,1465,1513,1680,1684,1763,1834,1871,1945,1967,2079,2083,2626,2662,2664,2669,2676,2739 'token':332,334,345,354,377,574,577,592,597,695,901 'top':1647,1724,1759,1773,1795,1827,1846,2148,2179,2546,2557,2560,2566,2586,2641,2725,2734 'top10':2157,2165,2173,2200,2507,2659 'topic':201 'topic-agent-skills' 'topic-gtm' 'topic-hermes-agent' 'topic-openclaw-skills' 'topic-skills' 'topic-technical-seo' 'total':835,1639,1713,1752,1756,1780,1817,1820,2543,2583,2716,2720 'transform':1902,2356 'tri':629,868,1314 'true':310,1260,1736,1784,2199,2827 'ts':655,669,678 'type':1029,1034,1043,1954,2782 'tz':670,947,2471 'u':54 'u2014':2321,2328 'unauthent':337,358 'unknown':680 'url':14,258,396,445,480,519,609,626,737,787,864,1212,1215,2628,2692 'urllib.error.httperror':770 'urllib.request':558 'urllib.request.request':625 'urllib.request.urlopen':631 'use':187,230,374,1505 'user':135,352,583,858,897,1028,1031,1035,1042,1373,1507,1559 'user-ag':582 'utc':675 'ux':1484,1590 'v':1276,1279 'verbatim':102,231,1869,1943,1965 'vercel/next.js':485,528 'verifi':2044,2088,2147,2204 'via':19 'violat':2047 'w':499,847,1353,1790,2415,2831 'want':154 'warn':2375 'wast':182 'window':2617 'wip':1190 
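Step 3 fetches up to 200 open issues (2 pages of 100) from the GitHub REST API. A minimal sketch of that fetch, in the same script style as the earlier steps, assuming the `/tmp/ghd-target.txt` file written in Step 2; the `/tmp/ghd-raw.json` output path and the short-page stop condition are illustrative assumptions, not the skill's exact script:

```python
import json
import os
import urllib.request

API = "https://api.github.com/repos"


def drop_pull_requests(items):
    # The issues endpoint returns PRs too; a true issue lacks the
    # pull_request key (see Common Mistakes above).
    return [i for i in items if "pull_request" not in i]


def fetch_open_issues(owner_repo, token="", pages=2, per_page=100):
    """Fetch up to pages * per_page open issues for owner_repo."""
    issues = []
    for page in range(1, pages + 1):
        url = f"{API}/{owner_repo}/issues?state=open&per_page={per_page}&page={page}"
        headers = {
            "Accept": "application/vnd.github+json",
            "User-Agent": "gh-issue-demand-signal",  # GitHub rejects requests without one
        }
        if token:
            headers["Authorization"] = f"Bearer {token}"
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req, timeout=30) as resp:
            batch = json.loads(resp.read())
        issues.extend(batch)
        if len(batch) < per_page:  # short page: no further results
            break
    return drop_pull_requests(issues)


if __name__ == "__main__":
    target = open("/tmp/ghd-target.txt").read().strip()  # written in Step 2
    issues = fetch_open_issues(target, token=os.environ.get("GITHUB_TOKEN", ""))
    with open("/tmp/ghd-raw.json", "w") as f:  # output path is an assumption
        json.dump(issues, f, indent=2)
    print(f"Fetched {len(issues)} open issues (PRs excluded)")
```

Unauthenticated, the two page requests fit inside the 60 req/hr limit noted in Step 1; with `GITHUB_TOKEN` set, the same code sends a Bearer header and gets the authenticated limit.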
---

**Source:** https://github.com/Varnan-Tech/opendirectory/tree/main/skills/gh-issue-to-demand-signal (repo `Varnan-Tech/opendirectory`, branch `main`, path `skills/gh-issue-to-demand-signal/SKILL.md`)

**Listing:** https://skills.sh/Varnan-Tech/opendirectory/gh-issue-to-demand-signal (free)

**Compatibility:** claude-code, gemini-cli, github-copilot

**Use when:** asked to scan a competitor's GitHub issues, find what their users are begging for, turn GitHub complaints into product positioning, identify competitor feature gaps, or generate messaging from real user demand. Trigger phrases: "scan competitor issues", "what are users asking for on X repo", "find demand gaps in Y", "turn GitHub issues into messaging", "what should I build based on competitor complaints".