{"id":"f0435390-7941-4c5c-aa69-0ef8dcf59f1c","shortId":"9Jq4eE","kind":"skill","title":"sdk-adoption-tracker","tagline":"Given your SDK or library name, searches GitHub code search for public repos that import or require it, classifies each repo as company org, affiliated developer, solo developer, or tutorial noise, scores by adoption signal strength, detects new adopters by date, and outputs a ra","description":"# SDK Adoption Tracker\n\nTake an SDK name. Search GitHub for public repos that import it. Score each repo by company signal, activity, and noise indicators. Enrich high-signal repos with owner and contributor data. Output a ranked adoption report with outreach context for company adopters.\n\n---\n\n**Critical rule:** Every repo in the output must exist in the GitHub code search API response. Every company name must come from the GitHub user or org API `company` or `name` field. Every contributor handle must come from the GitHub contributors API response. If any field is empty in the API, write \"not listed\" -- do not infer, guess, or extrapolate.\n\n---\n\n## Common Mistakes\n\n| The agent will want to... | Why that's wrong |\n|---|---|\n| Run code search without GITHUB_TOKEN | Unauthenticated code search hits a 3 req/min secondary rate limit and fails on any meaningful scan. GITHUB_TOKEN is required. Stop at Step 1 with a clear error if it is missing. |\n| Include forks of the SDK itself | Repos that fork the SDK are contributors or mirrors, not adopters. Filter out repos where `fork == true` AND the repo name matches the SDK name. |\n| Send all 500 raw search results to the AI | Code search can return up to 500 results, most of which are noise. Filter and score locally first. Send only the top 20 high-signal repos to the AI analysis step. |\n| Report tutorial and example repos as adopters | Repos with \"example\", \"tutorial\", \"demo\", \"learn\", \"sample\", \"playground\", \"starter\" in the name or description are not production users. 
Mark as tutorial_noise and exclude from lead briefs. |\n| Invent company names or contact handles | Every company name must come from the GitHub `company` or org `name` field. Every contributor handle must come from the contributors API response. If a field is empty, write \"not listed\". |\n| Use one import pattern for all ecosystems | `require('sdk')` will not find Python users. Auto-detect ecosystem from the SDK name and build ecosystem-specific patterns. Ask the user if auto-detection is ambiguous. |\n\n---\n\n## Step 1: Setup Check\n\n```bash\nif [ -z \"$GITHUB_TOKEN\" ]; then\n  echo \"ERROR: GITHUB_TOKEN is required for code search.\"\n  echo \"Add a token at github.com/settings/tokens (no scopes needed for public repos).\"\n  echo \"Without it, GitHub code search hits a 3 req/min secondary rate limit and fails.\"\n  exit 1\nfi\necho \"GITHUB_TOKEN: set\"\ncurl -s -H \"Authorization: Bearer $GITHUB_TOKEN\" \\\n     -H \"Accept: application/vnd.github+json\" \\\n     \"https://api.github.com/rate_limit\" | python3 -c \"\nimport json, sys\nd = json.load(sys.stdin)\nsearch = d['resources']['search']\ncore = d['resources']['core']\nprint(f'Search rate: {search[\\\"remaining\\\"]}/{search[\\\"limit\\\"]} remaining')\nprint(f'Core rate:   {core[\\\"remaining\\\"]}/{core[\\\"limit\\\"]} remaining')\n\"\n```\n\nIf search remaining is 0: stop. Tell the user the reset time from `X-RateLimit-Reset`.\n\n---\n\n## Step 2: Gather Input\n\nCollect from the conversation:\n- SDK name (e.g. 
`@company/my-sdk`, `requests`, `github.com/org/go-sdk`)\n- Optional: ecosystem override (`npm`, `python`, `go`, `gem`) -- auto-detected if not provided\n- Optional: org/user to exclude from results (the SDK owner's own repos)\n- Optional: product context string (used to personalize outreach messages)\n\n**Auto-detect ecosystem** (check in this order -- Go module paths often contain `-` and must be matched before the npm rule):\n- Contains `github.com/`: go\n- Starts with `@` or contains `-`: npm\n- snake_case with no `/` or `-`: python\n- Otherwise: ask the user\n\n**If no SDK name is provided:** Ask: \"Which SDK or library would you like to track? Provide the package name as it appears in import statements (e.g. `stripe`, `@clerk/nextjs`, `requests`).\"\n\n```bash\npython3 << 'PYEOF'\nimport json, re\n\nsdk_name = \"SDK_NAME_HERE\"\necosystem_override = \"\"  # leave empty for auto-detect\nexclude_owner = \"\"       # optional: owner name to exclude (usually the SDK publisher)\nproduct_context = \"\"     # optional: what your product does\n\n# Auto-detect ecosystem. Order matters: a Go module path like\n# github.com/org/go-sdk contains \"-\" and would otherwise match the npm rule.\nif ecosystem_override:\n    ecosystem = ecosystem_override\nelif \"github.com/\" in sdk_name:\n    ecosystem = \"go\"\nelif sdk_name.startswith(\"@\") or \"-\" in sdk_name:\n    ecosystem = \"npm\"\nelif re.match(r'^[a-z][a-z0-9_]*$', sdk_name):\n    ecosystem = \"python\"\nelse:\n    ecosystem = \"generic\"  # ambiguous -- confirm with the user\n\nprint(f\"SDK: {sdk_name}\")\nprint(f\"Ecosystem: {ecosystem}\")\nprint(f\"Exclude owner: {exclude_owner or '(none)'}\")\n\nwith open(\"/tmp/sat-input.json\", \"w\") as f:\n    json.dump({\n        \"sdk_name\": sdk_name,\n        \"ecosystem\": ecosystem,\n        \"exclude_owner\": exclude_owner,\n        \"product_context\": product_context\n    }, f)\nPYEOF\n```\n\n---\n\n## Step 3: Search GitHub Code\n\nCheck for standalone script first -- it handles Steps 3-5 in one call.\n\n```bash\nls scripts/fetch.py 2>/dev/null && echo \"script available\" || echo \"script not found\"\n```\n\n**If the script is available**, run it and skip to Step 6:\n\n```bash\npython3 scripts/fetch.py \\\n
\"$(python3 -c \"import json; d=json.load(open('/tmp/sat-input.json')); print(d['sdk_name'])\")\" \\\n  --ecosystem \"$(python3 -c \"import json; d=json.load(open('/tmp/sat-input.json')); print(d['ecosystem'])\")\" \\\n  --exclude \"$(python3 -c \"import json; d=json.load(open('/tmp/sat-input.json')); print(d.get('exclude_owner',''))\")\" \\\n  --output /tmp/sat-script-out.json\n```\n\nThen load into the temp file format Steps 6-8 expect:\n\n```bash\npython3 << 'PYEOF'\nimport json\n\nout = json.load(open(\"/tmp/sat-script-out.json\"))\njson.dump(out[\"raw_results\"], open(\"/tmp/sat-raw-results.json\", \"w\"), indent=2)\njson.dump(out[\"scored\"], open(\"/tmp/sat-scored.json\", \"w\"), indent=2)\njson.dump(out[\"enriched\"], open(\"/tmp/sat-enriched.json\", \"w\"), indent=2)\nprint(f\"Loaded: {len(out['raw_results'])} raw | {len(out['scored'])} scored | {len(out['enriched'])} enriched\")\nPYEOF\n```\n\n**If the script is not available**, run the inline code below.\n\nBuild import patterns and search GitHub code for each:\n\n```bash\npython3 << 'PYEOF'\nimport json, urllib.request, urllib.parse, ssl, time, os\n\nctx = ssl._create_unverified_context()\ntoken = os.environ[\"GITHUB_TOKEN\"]\nheaders = {\n    \"Accept\": \"application/vnd.github+json\",\n    \"Authorization\": f\"Bearer {token}\",\n    \"User-Agent\": \"sdk-adoption-tracker/1.0\"\n}\n\ndata = json.load(open(\"/tmp/sat-input.json\"))\nsdk_name = data[\"sdk_name\"]\necosystem = data[\"ecosystem\"]\n\n# Build query patterns per ecosystem\nif ecosystem == \"npm\":\n    queries = [\n        f'require(\"{sdk_name}\")',\n        f\"require('{sdk_name}')\",\n        f'from \"{sdk_name}\"',\n        f\"from '{sdk_name}'\",\n    ]\nelif ecosystem == \"python\":\n    queries = [\n        f\"import {sdk_name}\",\n        f\"from {sdk_name} import\",\n        f\"from \"\n        f\"
{sdk_name}.\",\n    ]\nelif ecosystem == \"go\":\n    queries = [f'\"{sdk_name}\"']\nelse:\n    queries = [sdk_name]\n\nprint(f\"Building queries for {sdk_name} ({ecosystem}):\")\nfor q in queries:\n    print(f\"  {q}\")\n\nseen_repos = {}  # full_name -> first match\nsearch_rate_remaining = 30\nflags = []\n\nfor i, query in enumerate(queries):\n    if search_rate_remaining <= 2:\n        flags.append(f\"Code search rate limit low ({search_rate_remaining}) -- skipped remaining patterns\")\n        print(f\"  Rate limit low ({search_rate_remaining}), stopping early\")\n        break\n\n    url = f\"https://api.github.com/search/code?q={urllib.parse.quote(query)}&per_page=100\"\n    req = urllib.request.Request(url, headers=headers)\n\n    try:\n        with urllib.request.urlopen(req, timeout=20, context=ctx) as resp:\n            search_rate_remaining = int(resp.headers.get(\"X-RateLimit-Remaining\", 10))\n            raw = json.loads(resp.read())\n\n        items = raw.get(\"items\", [])\n        total = raw.get(\"total_count\", 0)\n        print(f\"  Pattern '{query}': {total} total, {len(items)} fetched | rate_remaining={search_rate_remaining}\")\n\n        for item in items:\n            repo = item.get(\"repository\", {})\n            full_name = repo.get(\"full_name\", \"\")\n            if full_name and full_name not in seen_repos:\n                seen_repos[full_name] = {\n                    \"full_name\": full_name,\n                    \"name\": repo.get(\"name\", \"\"),\n                    \"owner_login\": repo.get(\"owner\", {}).get(\"login\", \"\"),\n                    \"owner_type\": repo.get(\"owner\", {}).get(\"type\", \"\"),\n                    \"file_path\": item.get(\"path\", \"\"),\n                    \"matched_pattern\": query,\n                    \"html_url\": repo.get(\"html_url\", \"\"),\n                    \"description\": repo.get(\"description\") or \"\",\n                }\n\n    except urllib.error.HTTPError as e:\n        if 
e.code == 403:\n            flags.append(f\"Code search rate limit hit on pattern '{query}'\")\n            print(f\"  Rate limit hit (403) on '{query}'\")\n            break\n        else:\n            print(f\"  HTTP {e.code} on '{query}'\")\n    except Exception as e:\n        print(f\"  Error on '{query}': {e}\")\n\n    # Respect the 10 req/min authenticated code search limit\n    if i < len(queries) - 1:\n        time.sleep(6)\n\nresults = list(seen_repos.values())\njson.dump(results, open(\"/tmp/sat-raw-results.json\", \"w\"), indent=2)\nprint(f\"\\nTotal unique repos found: {len(results)}\")\nif flags:\n    print(\"Flags:\", flags)\nPYEOF\n```\n\n**If 0 results:** Tell the user: \"No repos found importing `{sdk_name}`. GitHub code search indexing takes 1-4 weeks for new packages. If the SDK is established, check the import patterns in `references/import-patterns.md`.\"\n\n---\n\n## Step 4: Score and Classify Repos\n\nNo API call. Pure Python. Filter noise, compute adoption score, classify each repo.\n\n```bash\npython3 << 'PYEOF'\nimport json\n\ndata = json.load(open(\"/tmp/sat-input.json\"))\nexclude_owner = data.get(\"exclude_owner\", \"\").lower()\nsdk_name = data[\"sdk_name\"].lower().split(\"/\")[-1].replace(\"@\", \"\")\n\nresults = json.load(open(\"/tmp/sat-raw-results.json\"))\n\nTUTORIAL_WORDS = {\"example\", \"tutorial\", \"demo\", \"learn\", \"sample\", \"starter\",\n                  \"boilerplate\", \"template\", \"playground\", \"test\", \"course\", \"workshop\"}\n\nscored = []\n\nfor repo in results:\n    full_name = repo[\"full_name\"]\n    owner_login = repo.get(\"owner_login\", \"\").lower()\n    repo_name = repo.get(\"name\", \"\").lower()\n    description = (repo.get(\"description\") or \"\").lower()\n    owner_type = repo.get(\"owner_type\", \"User\")\n\n    # Exclude the SDK owner's own repos\n    if exclude_owner and owner_login == 
exclude_owner:\n        continue\n\n    # Detect tutorial noise\n    name_words = set(repo_name.replace(\"-\", \" \").replace(\"_\", \" \").split())\n    desc_words = set(description.split())\n    is_tutorial = bool((name_words | desc_words) & TUTORIAL_WORDS)\n    # Also exclude if repo name IS the SDK name (likely a fork)\n    if repo_name == sdk_name or repo_name.startswith(sdk_name + \"-\"):\n        is_tutorial = True  # treat as noise\n\n    # Classification tier\n    if is_tutorial:\n        tier = \"tutorial_noise\"\n    elif owner_type == \"Organization\":\n        tier = \"company_org\"\n    else:\n        tier = \"solo_dev\"  # will be upgraded to affiliated_dev in Step 5 if company field populated\n\n    # Adoption score (filled with partial data now, enriched in Step 5)\n    score = 0\n    if owner_type == \"Organization\": score += 50\n    if not is_tutorial:              score += 20\n    # stars, days_since_push, is_fork, is_archived added in Step 5\n\n    scored.append({\n        **repo,\n        \"tier\": tier,\n        \"is_tutorial\": is_tutorial,\n        \"adoption_score\": score,\n        \"enriched\": False\n    })\n\n# Sort by tier (company_org first), then adoption score descending\ntier_order = {\"company_org\": 0, \"affiliated_dev\": 1, \"solo_dev\": 2, \"tutorial_noise\": 3}\nscored.sort(key=lambda x: (tier_order.get(x[\"tier\"], 9), -x[\"adoption_score\"]))\n\njson.dump(scored, open(\"/tmp/sat-scored.json\", \"w\"), indent=2)\n\ntiers = {}\nfor r in scored:\n    tiers[r[\"tier\"]] = tiers.get(r[\"tier\"], 0) + 1\n\nprint(\"Classification:\")\nfor tier, count in sorted(tiers.items(), key=lambda x: tier_order.get(x[0], 9)):\n    print(f\"  {tier}: {count}\")\nprint(f\"Total: {len(scored)} repos\")\n\nnon_noise = [r for r in scored if r[\"tier\"] != \"tutorial_noise\"]\nprint(f\"\\nTop repos for enrichment (non-noise): {len(non_noise)}\")\nfor r in non_noise[:5]:\n    print(f\"  {r['full_name']} ({r['tier']}) -- 
{r.get('description','')[:60]}\")\nPYEOF\n```\n\n**If all repos are tutorial_noise:** Stop. Tell the user: \"All repos found appear to be tutorials or examples. No production adopters detected in public GitHub. The SDK may be too new, or the package name is generic enough that search results are dominated by examples.\"\n\n---\n\n## Step 5: Enrich High-Signal Repos\n\nFetch full repo metadata, owner profile, and top contributors for non-noise repos. Skip tutorial_noise repos entirely.\n\n```bash\npython3 << 'PYEOF'\nimport json, urllib.request, ssl, os, time\nfrom datetime import datetime, timezone\n\nctx = ssl._create_unverified_context()\ntoken = os.environ[\"GITHUB_TOKEN\"]\nheaders = {\n    \"Accept\": \"application/vnd.github+json\",\n    \"Authorization\": f\"Bearer {token}\",\n    \"User-Agent\": \"sdk-adoption-tracker/1.0\"\n}\n\nscored = json.load(open(\"/tmp/sat-scored.json\"))\ncore_remaining = 5000\nflags = []\n\ndef gh_get(path):\n    global core_remaining\n    req = urllib.request.Request(f\"https://api.github.com{path}\", headers=headers)\n    try:\n        with urllib.request.urlopen(req, timeout=15, context=ctx) as resp:\n            remaining = resp.headers.get(\"X-RateLimit-Remaining\")\n            if remaining:\n                core_remaining = int(remaining)\n            return json.loads(resp.read())\n    except urllib.error.HTTPError as e:\n        # Skip this repo on any HTTP error instead of crashing the whole run\n        if e.code != 404:\n            print(f\"  HTTP {e.code} on {path}\")\n        return None\n    except Exception:\n        return None\n\ntarget = [r for r in scored if r[\"tier\"] != \"tutorial_noise\"]\nprint(f\"Enriching {len(target)} repos (skipping tutorial_noise)...\")\n\nenriched = []\n\nfor item in target:\n    full_name = item[\"full_name\"]\n    owner_login = item[\"owner_login\"]\n    owner_type = item[\"owner_type\"]\n\n    if core_remaining <= 10:\n        flags.append(f\"Core rate limit low ({core_remaining}) -- skipped enrichment for {full_name} and remaining repos\")\n        enriched.append({**item, \"enriched\": 
False})\n        continue\n\n    # Fetch full repo metadata\n    repo_data = gh_get(f\"/repos/{full_name}\")\n    if not repo_data:\n        print(f\"  {full_name}: repo not found\")\n        continue\n\n    stars = repo_data.get(\"stargazers_count\", 0)\n    is_fork = repo_data.get(\"fork\", False)\n    is_archived = repo_data.get(\"archived\", False)\n    language = repo_data.get(\"language\") or \"\"\n    description = repo_data.get(\"description\") or item.get(\"description\", \"\")\n    pushed_at = repo_data.get(\"pushed_at\") or \"\"\n    created_at = repo_data.get(\"created_at\") or \"\"\n    repo_url = repo_data.get(\"html_url\", f\"https://github.com/{full_name}\")\n\n    # Compute days since last push\n    days_since_push = 999\n    if pushed_at:\n        pushed_dt = datetime.fromisoformat(pushed_at.replace(\"Z\", \"+00:00\"))\n        days_since_push = (datetime.now(tz=timezone.utc) - pushed_dt).days\n\n    days_since_created = 999\n    if created_at:\n        created_dt = datetime.fromisoformat(created_at.replace(\"Z\", \"+00:00\"))\n        days_since_created = (datetime.now(tz=timezone.utc) - created_dt).days\n\n    # Fetch owner profile (user or org)\n    owner_profile = {}\n    company = \"\"\n    org_website = \"\"\n\n    if owner_type == \"Organization\":\n        org_data = gh_get(f\"/orgs/{owner_login}\")\n        if org_data:\n            company = org_data.get(\"name\") or owner_login\n            org_website = org_data.get(\"blog\") or \"\"\n            owner_profile = {\n                \"type\": \"org\",\n                \"name\": org_data.get(\"name\") or owner_login,\n                \"description\": org_data.get(\"description\") or \"\",\n                \"website\": org_website,\n                \"email\": org_data.get(\"email\") or \"\",\n                \"public_repos\": org_data.get(\"public_repos\", 0),\n                \"followers\": org_data.get(\"followers\", 0),\n            }\n    else:\n        user_data = 
gh_get(f\"/users/{owner_login}\")\n        if user_data:\n            company = user_data.get(\"company\") or \"\"\n            owner_profile = {\n                \"type\": \"user\",\n                \"name\": user_data.get(\"name\") or owner_login,\n                \"company\": company,\n                \"bio\": user_data.get(\"bio\") or \"\",\n                \"blog\": user_data.get(\"blog\") or \"\",\n                \"followers\": user_data.get(\"followers\", 0),\n                \"twitter_username\": user_data.get(\"twitter_username\") or \"not listed\",\n            }\n            # Upgrade tier if company field is populated\n            if company and item[\"tier\"] == \"solo_dev\":\n                item[\"tier\"] = \"affiliated_dev\"\n\n    # Fetch top contributors (skip if rate limit low)\n    top_contributors = []\n    if core_remaining > 20:\n        contributors = gh_get(f\"/repos/{full_name}/contributors?per_page=3\")\n        if contributors:\n            top_contributors = [\n                {\"login\": c.get(\"login\", \"\"), \"contributions\": c.get(\"contributions\", 0)}\n                for c in contributors[:3]\n            ]\n\n    # Compute final adoption score\n    score = 0\n    if owner_type == \"Organization\":      score += 50\n    if company and company.strip():        score += 20\n    score += min(stars, 500) / 10\n    if days_since_push < 30:              score += 30\n    if days_since_push < 7:               score += 20\n    if not is_fork:                       score += 10\n    if not is_archived:                   score += 10\n    if not item.get(\"is_tutorial\", False): score += 20\n\n    # Forks and archived repos keep their tier; they already lose the\n    # is_fork / is_archived score bonuses above.\n    tier = item[\"tier\"]\n\n    enriched_item = {\n        **item,\n        \"description\": description,\n        \"stars\": stars,\n        \"language\": language,\n        \"is_fork\": is_fork,\n        \"is_archived\": is_archived,\n        
\"days_since_push\": days_since_push,\n        \"days_since_created\": days_since_created,\n        \"pushed_at\": pushed_at,\n        \"created_at\": created_at,\n        \"repo_url\": repo_url,\n        \"tier\": tier,\n        \"adoption_score\": round(score, 1),\n        \"company\": company,\n        \"org_website\": org_website,\n        \"owner_profile\": owner_profile,\n        \"top_contributors\": top_contributors,\n        \"enriched\": True,\n    }\n    enriched.append(enriched_item)\n\n    print(f\"  {full_name} | tier={tier} | score={round(score,1)} | \"\n          f\"stars={stars} | pushed={days_since_push}d ago | \"\n          f\"company={company or 'not listed'} | rate={core_remaining}\")\n    time.sleep(0.1)\n\nenriched.sort(key=lambda x: -x[\"adoption_score\"])\n\njson.dump(enriched, open(\"/tmp/sat-enriched.json\", \"w\"), indent=2)\nprint(f\"\\nEnrichment complete: {len(enriched)} repos | rate_remaining={core_remaining}\")\nif flags:\n    for f in flags:\n        print(f\"  FLAG: {f}\")\nPYEOF\n```\n\n---\n\n## Step 6: Generate Adoption Briefs\n\nPrint top adopters, then generate outreach briefs for high-signal company repos.\n\n```bash\npython3 << 'PYEOF'\nimport json\nfrom datetime import datetime, timezone\n\nenriched = json.load(open(\"/tmp/sat-enriched.json\"))\ndata = json.load(open(\"/tmp/sat-input.json\"))\nproduct_context = data.get(\"product_context\", \"\")\nsdk_name = data[\"sdk_name\"]\n\nhigh_signal = [r for r in enriched if r[\"adoption_score\"] >= 80]\nmedium = [r for r in enriched if 40 <= r[\"adoption_score\"] < 80]\nnoise = [r for r in enriched if r[\"adoption_score\"] < 40 or r[\"tier\"] == \"tutorial_noise\"]\n\nprint(\"=== DATA FOR ADOPTION BRIEF GENERATION ===\")\nprint(f\"SDK: {sdk_name}\")\nprint(f\"Product context: {product_context or '(none provided)'}\")\nprint()\n\nfor item in (high_signal + medium)[:20]:\n    prof = item.get(\"owner_profile\", {})\n    contribs = item.get(\"top_contributors\", [])\n    primary = 
contribs[0] if contribs else {}\n\n    print(f\"REPO: {item['full_name']} (tier={item['tier']}, score={item['adoption_score']})\")\n    print(f\"  Stars: {item.get('stars', 0)} | Language: {item.get('language','?')} | \"\n          f\"Pushed: {item.get('days_since_push', '?')} days ago\")\n    print(f\"  Description: {item.get('description','none')}\")\n    print(f\"  SDK found in: {item.get('file_path','?')}\")\n    print(f\"  Owner type: {item.get('owner_type','?')} | Company: {item.get('company','not listed')}\")\n    if prof.get(\"type\") == \"org\":\n        print(f\"  Org: {prof.get('name')} | Website: {prof.get('website','none')} | \"\n              f\"Repos: {prof.get('public_repos',0)}\")\n    elif prof.get(\"type\") == \"user\":\n        print(f\"  User: {prof.get('name')} | Company: {prof.get('company','not listed')} | \"\n              f\"Twitter: {prof.get('twitter_username','not listed')} | \"\n              f\"Followers: {prof.get('followers',0)}\")\n    if primary:\n        print(f\"  Top contributor: @{primary.get('login')} ({primary.get('contributions',0)} commits)\")\n    print()\nPYEOF\n```\n\nUsing the repo data printed above, generate an adoption brief for each HIGH-SIGNAL repo (score >= 80).\n\nRules:\n- Every repo name, star count, and file path must come from the printed data -- do not modify\n- Every contributor handle must come from the printed \"Top contributor\" line -- if none listed, write \"not listed\"\n- Every company name must come from the printed \"Company\" or \"Org\" line -- if \"not listed\", write \"not listed\"\n- \"Why reach out\" must reference specific signals from the data (score, stars, days since push, company)\n- \"Suggested message\" must name the repo, the specific file where the SDK was found, and connect to product_context if provided\n- No em dashes. 
No forbidden words: powerful, robust, seamless, innovative, game-changing, streamline, leverage, transform\n\nWrite your briefs to `/tmp/sat-briefs.json` with this structure:\n\n```json\n{\n  \"adoption_briefs\": [\n    {\n      \"repo\": \"owner/repo-name\",\n      \"tier\": \"company_org\",\n      \"adoption_score\": 124.0,\n      \"company\": \"Company Name or not listed\",\n      \"top_contributor\": \"@handle or not listed\",\n      \"twitter\": \"@handle or not listed\",\n      \"stars\": 234,\n      \"language\": \"TypeScript\",\n      \"sdk_file\": \"src/api/client.ts\",\n      \"why_reach_out\": \"2-3 sentences specific to this repo's signals\",\n      \"suggested_message\": \"2-4 sentences naming the repo, SDK file, and product_context connection\"\n    }\n  ]\n}\n```\n\nAfter writing, confirm:\n\n```bash\npython3 -c \"\nimport json\nd = json.load(open('/tmp/sat-briefs.json'))\nprint(f'Briefs generated: {len(d.get(\\\"adoption_briefs\\\", []))}')\nfor b in d['adoption_briefs']:\n    print(f'  {b[\\\"repo\\\"]} ({b[\\\"tier\\\"]}): score={b[\\\"adoption_score\\\"]} company={b[\\\"company\\\"]}')\n\"\n```\n\n---\n\n## Step 7: Self-QA\n\n```bash\npython3 << 'PYEOF'\nimport json\n\nraw = json.load(open(\"/tmp/sat-raw-results.json\"))\nenriched = json.load(open(\"/tmp/sat-enriched.json\"))\nbriefs = json.load(open(\"/tmp/sat-briefs.json\"))\n\nfailures = []\n\n# Verify every repo in briefs exists in raw search results\nraw_full_names = {r[\"full_name\"] for r in raw}\nfor brief in briefs.get(\"adoption_briefs\", []):\n    if brief.get(\"repo\") not in raw_full_names:\n        failures.append(f\"Brief for unknown repo '{brief.get('repo')}' not in code search results -- removed\")\n\nbriefs[\"adoption_briefs\"] = [\n    b for b in briefs.get(\"adoption_briefs\", []) if b.get(\"repo\") in raw_full_names\n]\n\n# Verify briefs are sorted by adoption_score descending\nscores = [b[\"adoption_score\"] for b in briefs.get(\"adoption_briefs\", [])]\nif scores != 
sorted(scores, reverse=True):\n    briefs[\"adoption_briefs\"].sort(key=lambda x: -x[\"adoption_score\"])\n    failures.append(\"Re-sorted briefs by adoption_score descending\")\n\n# Check for em dashes\nbriefs_str = json.dumps(briefs)\nif \"—\" in briefs_str:\n    briefs_str = briefs_str.replace(\"—\", \" - \")\n    briefs = json.loads(briefs_str)\n    failures.append(\"Fixed: em dash characters removed from briefs\")\n\n# Check for forbidden words\nforbidden = [\"powerful\", \"robust\", \"seamless\", \"innovative\", \"game-changing\",\n             \"streamline\", \"leverage\", \"transform\", \"revolutionize\"]\nfull_text = json.dumps(briefs).lower()\nfor word in forbidden:\n    if word in full_text:\n        failures.append(f\"Warning: forbidden word '{word}' found in briefs -- review before presenting\")\n\n# Check required fields\nfor brief in briefs.get(\"adoption_briefs\", []):\n    for field in [\"repo\", \"tier\", \"adoption_score\", \"company\", \"top_contributor\",\n                  \"why_reach_out\", \"suggested_message\"]:\n        if brief.get(field) is None:\n            failures.append(f\"Missing field '{field}' in brief for {brief.get('repo', '?')}\")\n\noutput = {\n    \"enriched\": enriched,\n    \"briefs\": briefs,\n    \"data_quality_flags\": failures\n}\njson.dump(output, open(\"/tmp/sat-output.json\", \"w\"), indent=2)\nprint(f\"QA complete. 
Issues found: {len(failures)}\")\nfor f in failures:\n    print(f\"  - {f}\")\nif not failures:\n    print(\"All QA checks passed.\")\nPYEOF\n```\n\n---\n\n## Step 8: Save and Present Output\n\n```bash\npython3 << 'PYEOF'\nimport json, os\nfrom datetime import datetime, timezone\n\noutput = json.load(open(\"/tmp/sat-output.json\"))\nenriched = output[\"enriched\"]\nbriefs_map = {b[\"repo\"]: b for b in output[\"briefs\"].get(\"adoption_briefs\", [])}\nflags = output[\"data_quality_flags\"]\ndata = json.load(open(\"/tmp/sat-input.json\"))\nsdk_name = data[\"sdk_name\"]\ndate_str = datetime.now(tz=timezone.utc).strftime(\"%Y-%m-%d\")\n\nhigh_signal = [r for r in enriched if r[\"adoption_score\"] >= 80]\nmedium = [r for r in enriched if 40 <= r[\"adoption_score\"] < 80]\nnoise = [r for r in enriched if r[\"adoption_score\"] < 40 or r[\"tier\"] == \"tutorial_noise\"]\n\n# Compute velocity buckets\nnew_7d = sum(1 for r in enriched if r.get(\"days_since_created\", 999) <= 7)\nnew_30d = sum(1 for r in enriched if r.get(\"days_since_created\", 999) <= 30)\n\n# Load previous snapshot for comparison\nslug = sdk_name.replace(\"@\", \"\").replace(\"/\", \"-\")\nprev_repos = set()\nprev_path = f\"docs/sdk-adopters/\"\nif os.path.isdir(prev_path):\n    import glob\n    prev_files = sorted(glob.glob(f\"{prev_path}{slug}-*.json\"))\n    if prev_files:\n        try:\n            prev_data = json.load(open(prev_files[-1]))\n            prev_repos = {r[\"full_name\"] for r in prev_data.get(\"enriched\", [])}\n        except Exception:\n            pass\n\nnew_since_last = len({r[\"full_name\"] for r in enriched} - prev_repos) if prev_repos else None\n\ntier_counts = {}\nfor r in enriched:\n    tier_counts[r[\"tier\"]] = tier_counts.get(r[\"tier\"], 0) + 1\n\nlines = [\n    f\"## SDK Adoption Report: {sdk_name}\",\n    f\"Repos found: {len(enriched)} | Company repos: {tier_counts.get('company_org', 0)} | \"\n    f\"Active (30 days): {sum(1 for r in enriched if r.get('days_since_push', 
999) <= 30)} | Date: {date_str}\",\n    \"\",\n    \"---\",\n    \"\",\n    \"### Adoption Velocity\",\n    f\"New repos last 7 days: {new_7d}\",\n    f\"New repos last 30 days: {new_30d}\",\n]\nif new_since_last is not None:\n    lines.append(f\"New since last run: {new_since_last}\")\nlines += [\"\", \"---\", \"\"]\n\nif high_signal or medium:\n    lines += [\"### Top Adopters\", \"\"]\n    lines += [\n        \"| Rank | Repo | Stars | Tier | Score | Pushed | Language |\",\n        \"|---|---|---|---|---|---|---|\",\n    ]\n    for i, r in enumerate((high_signal + medium)[:15], 1):\n        pushed_label = f\"{r.get('days_since_push','?')}d ago\"\n        lines.append(\n            f\"| {i} | [{r['full_name']}]({r.get('repo_url', '')}) | \"\n            f\"{r.get('stars',0):,} | {r['tier']} | {r['adoption_score']} | \"\n            f\"{pushed_label} | {r.get('language','?')} |\"\n        )\n    lines += [\"\", \"---\", \"\"]\n\nif high_signal:\n    lines += [\"### Adoption Briefs (score >= 80)\", \"\"]\n    for r in high_signal:\n        brief = briefs_map.get(r[\"full_name\"], {})\n        prof = r.get(\"owner_profile\", {})\n        lines.append(f\"#### {r['full_name']}  [score: {r['adoption_score']}]\")\n        lines.append(f\"Owner: {r['owner_login']} ({r['owner_type']})\")\n        lines.append(f\"Stars: {r.get('stars',0):,} | Language: {r.get('language','?')} | \"\n                     f\"Last pushed: {r.get('days_since_push','?')} days ago\")\n        if r.get(\"description\"):\n            lines.append(f\"What they're building: {r['description']}\")\n        lines.append(f\"SDK found in: {r.get('file_path','?')}\")\n        if r.get(\"company\") and r[\"company\"] != \"not listed\":\n            lines.append(f\"Company: {r['company']}\")\n        if r.get(\"org_website\"):\n            lines.append(f\"Website: {r['org_website']}\")\n        contribs = r.get(\"top_contributors\", [])\n        if contribs:\n            lines.append(f\"Top contributor: 
@{contribs[0]['login']} ({contribs[0]['contributions']} commits)\")\n        lines.append(\"\")\n        if brief.get(\"why_reach_out\"):\n            lines.append(f\"**Why reach out:** {brief['why_reach_out']}\")\n        if brief.get(\"suggested_message\"):\n            lines.append(f\"\\n**Suggested message:**\")\n            lines.append(f\"> {brief['suggested_message']}\")\n        lines += [\"\", \"---\", \"\"]\n\nlines += [\n    \"### Adoption Breakdown\",\n    \"\",\n    \"| Tier | Count |\",\n    \"|---|---|\",\n]\nfor tier in [\"company_org\", \"affiliated_dev\", \"solo_dev\", \"tutorial_noise\"]:\n    count = tier_counts.get(tier, 0)\n    lines.append(f\"| {tier} | {count} |\")\n\nlines += [\"\", \"---\", \"\"]\nlines.append(f\"Data quality notes: {'; '.join(flags) if flags else 'None'}\")\n\noutput_dir = f\"docs/sdk-adopters\"\nos.makedirs(output_dir, exist_ok=True)\nmd_path = f\"{output_dir}/{slug}-{date_str}.md\"\njson_path = f\"{output_dir}/{slug}-{date_str}.json\"\n\nopen(md_path, \"w\").write(\"\\n\".join(lines))\njson.dump({\"enriched\": enriched, \"briefs\": output[\"briefs\"]}, open(json_path, \"w\"), indent=2)\n\nprint(\"\\n\".join(lines))\nprint(f\"\\nSaved to: {md_path}\")\nprint(f\"JSON snapshot: {json_path} (used for velocity tracking on next run)\")\nPYEOF\n```\n\nClean up temp files:\n\n```bash\nrm -f /tmp/sat-input.json /tmp/sat-raw-results.json /tmp/sat-scored.json \\\n      /tmp/sat-enriched.json /tmp/sat-briefs.json 
/tmp/sat-output.json\n```","tags":["sdk","adoption","tracker","opendirectory","varnan-tech","agent-skills","gtm","hermes-agent","openclaw-skills","skills","technical-seo"],"capabilities":["skill","source-varnan-tech","skill-sdk-adoption-tracker","topic-agent-skills","topic-gtm","topic-hermes-agent","topic-openclaw-skills","topic-skills","topic-technical-seo"],"categories":["opendirectory"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/Varnan-Tech/opendirectory/sdk-adoption-tracker","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add Varnan-Tech/opendirectory","source_repo":"https://github.com/Varnan-Tech/opendirectory","install_from":"skills.sh"}},"qualityScore":"0.494","qualityRationale":"deterministic score 0.49 from registry signals: · indexed on github topic:agent-skills · 88 github stars · SKILL.md body (29,798 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T18:56:44.276Z","embedding":null,"createdAt":"2026-04-23T06:55:38.656Z","updatedAt":"2026-04-23T18:56:44.276Z","lastSeenAt":"2026-04-23T18:56:44.276Z","tsv":"'+00':1993,2016 '-1':1351,3337 '-3':2824 '-4':1289,2835 '-5':741 '-8':820 '/1.0':928,1779 '/contributors':2182 '/dev/null':749 '/org/go-sdk':520 '/orgs':2047 '/rate_limit':453 '/repos':1915,2179 '/search/code?q=':1070 '/settings/tokens':411 '/tmp/sat-briefs.json':2781,2857,2906,3782 '/tmp/sat-enriched.json':852,2384,2441,2902,3781 '/tmp/sat-input.json':706,779,792,804,932,1337,2445,3209,3778 '/tmp/sat-output.json':3136,3184,3783 '/tmp/sat-raw-results.json':836,1249,1356,2898,3779 '/tmp/sat-scored.json':844,1588,1783,3780 '/tmp/sat-script-out.json':810,830 '/users':2101 
'workshop':1370 'would':588 'write':147,345,2703,2721,2777,2847,3731 'wrong':166 'x':502,1097,1577,1579,1582,1616,1618,1815,2377,2378,3003,3004 'x-ratelimit-remain':1096,1814 'x-ratelimit-reset':501 'y':3221 'z':391,668,1992,2015 'z0':671","prices":[{"id":"b2203499-5a78-4c3b-844d-180c1bb133d8","listingId":"f0435390-7941-4c5c-aa69-0ef8dcf59f1c","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"Varnan-Tech","category":"opendirectory","install_from":"skills.sh"},"createdAt":"2026-04-23T06:55:38.656Z"}],"sources":[{"listingId":"f0435390-7941-4c5c-aa69-0ef8dcf59f1c","source":"github","sourceId":"Varnan-Tech/opendirectory/sdk-adoption-tracker","sourceUrl":"https://github.com/Varnan-Tech/opendirectory/tree/main/skills/sdk-adoption-tracker","isPrimary":false,"firstSeenAt":"2026-04-23T06:55:38.656Z","lastSeenAt":"2026-04-23T18:56:44.276Z"}],"details":{"listingId":"f0435390-7941-4c5c-aa69-0ef8dcf59f1c","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"Varnan-Tech","slug":"sdk-adoption-tracker","github":{"repo":"Varnan-Tech/opendirectory","stars":88,"topics":["agent-skills","gtm","hermes-agent","openclaw-skills","skills","technical-seo"],"license":null,"html_url":"https://github.com/Varnan-Tech/opendirectory","pushed_at":"2026-04-23T16:53:57Z","description":" AI Agent Skills built for GTM, Technical Marketing, and growth automation.","skill_md_sha":"e6c547529f0a07ae2abd922eaa89db1de45dd6d2","skill_md_path":"skills/sdk-adoption-tracker/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/Varnan-Tech/opendirectory/tree/main/skills/sdk-adoption-tracker"},"layout":"multi","source":"github","category":"opendirectory","frontmatter":{"name":"sdk-adoption-tracker","description":"Given your SDK or library name, 
searches GitHub code search for public repos that import or require it, classifies each repo as company org, affiliated developer, solo developer, or tutorial noise, scores by adoption signal strength, detects new adopters by date, and outputs a ranked list of who is building on you with outreach context per high-signal company. Use when asked to find who uses your SDK, track SDK adoption, find companies building on your library, identify warm leads from existing SDK users, or see which orgs import your package. Trigger when a user says \"who is using my SDK\", \"find repos that import my library\", \"track adoption of my package\", \"which companies are building on my SDK\", \"find my SDK users on GitHub\", or \"show me who imports my package\".","compatibility":"[claude-code, gemini-cli, github-copilot]"},"skills_sh_url":"https://skills.sh/Varnan-Tech/opendirectory/sdk-adoption-tracker"},"updatedAt":"2026-04-23T18:56:44.276Z"}}