{"id":"62781a85-f10c-4c3d-95b4-19d06e7833b9","shortId":"2UmfLY","kind":"skill","title":"competitor-pr-finder","tagline":"Give it your product URL or description. It finds your top 5 competitors, runs three-track PR research across all of them (editorial, podcasts, communities), identifies which channels appear most frequently, looks up the journalist or host for each, and returns a tiered outreach ","description":"# Competitor PR Finder\n\nGive it your product URL. It finds your competitors, researches every PR channel they used (news, podcasts, communities), surfaces the channels that appear across multiple competitors (your proven targets), finds the journalist or host for each, and drafts a personalized cold pitch for your product at every tier-1 channel.\n\n---\n\n**Zero-hallucination policy:** Every channel, journalist name, story angle, and pitch detail in the output must trace to a specific Tavily search result or the fetched product page. This applies to:\n- Competitor names: must appear in Tavily search results, not AI training knowledge\n- Channel names: must have a URL in the search results\n- Journalist/host names: must appear verbatim in a Tavily snippet\n- Story angles: extracted from article/episode titles in search results only\n- Pitch drafts: reference specific evidence from search data + product analysis\n\n---\n\n## Common Mistakes\n\n| The agent will want to... | Why that's wrong |\n|---|---|\n| Name a journalist from training knowledge | Every journalist name must trace to a search result snippet. Writing \"Sarah Perez covers startups at TechCrunch\" from memory is hallucination. |\n| List channels without evidence URLs | Every channel in the output must have at least one URL from the PR search results proving a competitor was featured there. |\n| Skip the competitor confirmation step | Always show discovered competitors and wait for the user to confirm. Wrong competitors = wasted searches and a useless output. 
|\n| Generate generic pitches (\"We'd love to be featured\") | Every pitch must reference a specific angle from the evidence AND a specific differentiator from the product analysis. |\n| Mark a channel as Tier 1 with only 1 competitor occurrence | Tier 1 = 3+ competitors. Tier 2 = exactly 2. Tier 3 = 1. Do not promote channels that haven't proven themselves. |\n| Use em dashes in output | Replace all em dashes with plain hyphens. |\n\n---\n\n## Read Reference Files Before Each Run\n\n```bash\ncat references/pr-channel-types.md\ncat references/pitch-guide.md\ncat references/tier-scoring.md\n```\n\n---\n\n## Step 1: Setup Check\n\n```bash\necho \"TAVILY_API_KEY:    $([ -n \"$TAVILY_API_KEY\" ] && echo set || echo 'NOT SET -- required')\"\necho \"FIRECRAWL_API_KEY: $([ -n \"$FIRECRAWL_API_KEY\" ] && echo set || echo 'not set, Tavily extract will be used as fallback')\"\n```\n\n**If TAVILY_API_KEY is missing:** Stop immediately. Tell the user: \"TAVILY_API_KEY is required to research competitors and find PR coverage. There is no fallback. Get it at app.tavily.com -- free tier: 1000 credits/month (about 43 full runs at ~23 searches/run). Add it to your .env file.\"\n\n**If only FIRECRAWL_API_KEY is missing:** Continue. Tavily extract will be used for the URL fetch.\n\n---\n\n## Step 2: Parse Input\n\nCollect from the conversation:\n- `product_url`: the URL to fetch (required, unless user pastes a description directly)\n- `product_name`: optional, derived from page if not provided\n- `geography`: optional -- US / Europe / global. Default: US\n\n**If the user provides only a pasted description (no URL):** Skip Step 3 and go directly to Step 4 (product analysis) using the pasted text as `product_content`. Set `page_source` to `user_description` and note in `data_quality_flags`.\n\n**If neither URL nor description:** Ask: \"What is the URL of your product or startup? 
Or paste a short description: what it does, who it is for, and what makes it different from competitors.\"\n\nDerive product slug:\n\n```bash\nPRODUCT_SLUG=$(python3 -c \"\nfrom urllib.parse import urlparse\nimport sys\nurl = 'URL_HERE'\nif url.startswith('http'):\n    host = urlparse(url).netloc.replace('www.', '')\n    print(host.split('.')[0])\nelse:\n    import re\n    print(re.sub(r'[^a-z0-9]', '-', url[:30].lower()).strip('-'))\n\")\necho \"Product slug: $PRODUCT_SLUG\"\n```\n\n---\n\n## Step 3: Fetch Product Page\n\n**Primary: Firecrawl (if FIRECRAWL_API_KEY is set)**\n\n```bash\ncurl -s -X POST https://api.firecrawl.dev/v1/scrape \\\n  -H \"Authorization: Bearer $FIRECRAWL_API_KEY\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"url\": \"URL_HERE\", \"formats\": [\"markdown\"], \"onlyMainContent\": true}' \\\n  | python3 -c \"\nimport sys, json\nd = json.load(sys.stdin)\ncontent = d.get('data', {}).get('markdown', '') or d.get('markdown', '')\nprint(f'Fetched via Firecrawl: {len(content)} characters')\nopen('/tmp/cprf-product-raw.md', 'w').write(content)\n\"\n```\n\n**Fallback: Tavily extract (if FIRECRAWL_API_KEY is not set)**\n\n```bash\ncurl -s -X POST https://api.tavily.com/extract \\\n  -H \"Content-Type: application/json\" \\\n  -d \"{\\\"api_key\\\": \\\"$TAVILY_API_KEY\\\", \\\"urls\\\": [\\\"URL_HERE\\\"]}\" \\\n  | python3 -c \"\nimport sys, json\nd = json.load(sys.stdin)\ncontent = d.get('results', [{}])[0].get('raw_content', '')\nprint(f'Fetched via Tavily extract: {len(content)} characters')\nopen('/tmp/cprf-product-raw.md', 'w').write(content)\n\"\n```\n\n**Checkpoint:**\n\n```bash\npython3 -c \"\ncontent = open('/tmp/cprf-product-raw.md').read()\nif len(content) < 200:\n    print('ERROR: fewer than 200 characters fetched')\nelse:\n    print(f'Content OK: {len(content)} characters')\n\"\n```\n\n**If content < 200 characters:** Stop fetching. 
Tell the user: \"The product page returned no readable content -- the site is likely JavaScript-rendered and blocked the fetch. Please paste a short description directly: what it does, who it is for, and what makes it different.\"\n\n---\n\n## Step 4: Product Analysis (AI)\n\nPrint page content:\n\n```bash\npython3 -c \"\ncontent = open('/tmp/cprf-product-raw.md').read()[:5000]\nprint('=== PRODUCT PAGE (first 5000 chars) ===')\nprint(content)\n\"\n```\n\n**AI instructions:** Analyze the product page above and extract:\n\n- `product_name`: the product or company name\n- `one_line_description`: what it does, for whom, core value prop. Under 20 words. No marketing language. Example: \"CI/CD automation for developer teams that self-host their pipelines.\"\n- `industry_taxonomy`: `l1` (top-level: e.g. developer tools / fintech / healthtech / consumer), `l2` (sector: e.g. devops / payments / telemedicine), `l3` (specific niche: e.g. CI/CD automation / embedded payments / async video consultation). Vague labels like \"technology\" alone are not acceptable.\n- `differentiators`: exactly 2-3 specific things that distinguish this product from generic competitors. These feed directly into the pitch drafts -- be specific. 
Example: [\"Self-hosted pipeline runner -- no data leaves your infra\", \"Native support for monorepos with dynamic step generation\"]\n- `icp`: `buyer_persona` (job title), `company_type`, `company_size`\n- `geography_bias`: US / Europe / global / unclear\n- `page_source`: \"live_page\" or \"user_description\"\n\nWrite to `/tmp/cprf-product-analysis.json`:\n\n```bash\npython3 << 'PYEOF'\nimport json\n\nanalysis = {\n    # FILL from your analysis above\n    \"product_name\": \"\",\n    \"one_line_description\": \"\",\n    \"industry_taxonomy\": {\"l1\": \"\", \"l2\": \"\", \"l3\": \"\"},\n    \"differentiators\": [],\n    \"icp\": {\"buyer_persona\": \"\", \"company_type\": \"\", \"company_size\": \"\"},\n    \"geography_bias\": \"US\",\n    \"page_source\": \"live_page\"\n}\n\njson.dump(analysis, open('/tmp/cprf-product-analysis.json', 'w'), indent=2)\nprint('Product analysis written.')\nPYEOF\n```\n\nVerify:\n\n```bash\npython3 -c \"\nimport json\na = json.load(open('/tmp/cprf-product-analysis.json'))\nprint('Product:', a['product_name'])\nprint('Industry:', a['industry_taxonomy']['l1'], '>', a['industry_taxonomy']['l2'], '>', a['industry_taxonomy']['l3'])\nprint('Differentiators:')\nfor d in a['differentiators']:\n    print(f'  - {d}')\n\"\n```\n\n---\n\n## Step 4b: Phase 1 -- Competitor Discovery\n\n```bash\nls scripts/research.py 2>/dev/null && echo \"script found\" || echo \"ERROR: scripts/research.py not found -- cannot continue\"\n```\n\n```bash\npython3 scripts/research.py \\\n  --phase discover \\\n  --product-analysis /tmp/cprf-product-analysis.json \\\n  --tavily-key \"$TAVILY_API_KEY\" \\\n  --output /tmp/cprf-competitors-raw.json\n```\n\nPrint results for AI review:\n\n```bash\npython3 -c \"\nimport json\ndata = json.load(open('/tmp/cprf-competitors-raw.json'))\nprint(f'Searches run: {len(data[\\\"competitor_searches\\\"])}')\nfor s in data['competitor_searches']:\n    print(f'\\nQuery: {s[\\\"query\\\"]}')\n    print(f'Answer: 
{s.get(\\\"answer\\\",\\\"\\\")[:400]}')\n    for r in s.get('results', [])[:5]:\n        print(f'  - {r[\\\"title\\\"]} | {r[\\\"url\\\"]}')\n        print(f'    {r.get(\\\"content\\\",\\\"\\\")[:200]}')\n\"\n```\n\n**AI instructions:** Read the search results above. Pick exactly 5 competitor companies that:\n1. Are named in the search result titles, answers, or snippets\n2. Are in the same L3 niche as the product being analyzed\n3. Are actual competing products (not agencies, consultancies, or list articles)\n4. Are distinct from each other (not the same company under different names)\n\nFor each competitor write: `name`, `url` (from the search result where they appeared), `description` (one sentence from snippet), `source_url` (the search result URL where they were found).\n\n---\n\n## Step 5: Competitor Confirmation\n\n**Show the discovered competitors to the user:**\n\n```bash\npython3 << 'PYEOF'\nimport json\n\nanalysis = json.load(open('/tmp/cprf-product-analysis.json'))\n\n# FILL: 5 competitors from the search results above\ncandidates = [\n    # {\"name\": str, \"url\": str, \"description\": str, \"source_url\": str}\n]\n\nprint(f\"\\nFound 5 competitors for {analysis['product_name']} in {analysis['industry_taxonomy']['l3']}:\\n\")\nfor i, c in enumerate(candidates, 1):\n    print(f\"  {i}. {c['name']} -- {c['description']}\")\n    print(f\"     {c['url']}\")\n\ndata = json.load(open('/tmp/cprf-competitors-raw.json'))\ndata['competitor_candidates'] = candidates\njson.dump(data, open('/tmp/cprf-competitors-raw.json', 'w'), indent=2)\nPYEOF\n```\n\nTell the user: \"These are the 5 competitors I'll research for PR coverage. Add, remove, or swap any -- or say 'looks good' to continue.\"\n\n**Wait for confirmation.** If the user edits the list (adds/removes/swaps), update the candidates accordingly. 
Then write the confirmed list:\n\n```bash\npython3 << 'PYEOF'\nimport json\n\n# FILL: confirmed competitor list (after user review)\nconfirmed = [\n    # {\"name\": str, \"url\": str}\n]\n\njson.dump({\"confirmed_competitors\": confirmed}, open('/tmp/cprf-competitors-confirmed.json', 'w'), indent=2)\nprint(f\"Confirmed {len(confirmed)} competitors for PR research.\")\nfor c in confirmed:\n    print(f\"  - {c['name']} ({c['url']})\")\nPYEOF\n```\n\n---\n\n## Step 6: Three-Track PR Research (Phase 2)\n\n```bash\npython3 scripts/research.py \\\n  --phase pr-research \\\n  --competitors /tmp/cprf-competitors-confirmed.json \\\n  --product-analysis /tmp/cprf-product-analysis.json \\\n  --tavily-key \"$TAVILY_API_KEY\" \\\n  --output /tmp/cprf-pr-raw.json\n```\n\nThis runs 3 searches per competitor (15 total):\n- **Track A (Editorial):** `\"[competitor]\" featured press coverage TechCrunch Forbes Wired article interview`\n- **Track B (Podcasts):** `\"[competitor]\" founder CEO podcast interview appeared on episode`\n- **Track C (Communities):** `\"[competitor]\" site:reddit.com OR site:news.ycombinator.com OR site:producthunt.com`\n\nPrint coverage summary:\n\n```bash\npython3 -c \"\nimport json\ndata = json.load(open('/tmp/cprf-pr-raw.json'))\nprint(f'Competitors researched: {data[\\\"competitors_researched\\\"]}')\nprint()\nfor r in data['results']:\n    print(f'{r[\\\"competitor\\\"]}:')\n    for track, tdata in r['tracks'].items():\n        n = len(tdata.get('results', []))\n        print(f'  {track:12}: {n} results')\n\"\n```\n\n**If all 3 tracks for a competitor return 0 results:** This competitor has very low press coverage. 
Note in `data_quality_flags` and proceed -- the cross-competitor pattern will still work with the remaining 4.\n\n---\n\n## Step 7: Pattern Analysis (AI)\n\nPrint all raw PR results:\n\n```bash\npython3 -c \"\nimport json\ndata = json.load(open('/tmp/cprf-pr-raw.json'))\nfor r in data['results']:\n    print(f'\\n=== {r[\\\"competitor\\\"]} ===')\n    for track, tdata in r['tracks'].items():\n        print(f'\\n--- Track {track.upper()} ---')\n        print(f'Query: {tdata[\\\"query\\\"]}')\n        print(f'Answer: {tdata.get(\\\"answer\\\",\\\"\\\")[:400]}')\n        for item in tdata.get('results', [])[:5]:\n            print(f'  Title: {item[\\\"title\\\"]}')\n            print(f'  URL:   {item[\\\"url\\\"]}')\n            print(f'  Snippet: {item.get(\\\"content\\\",\\\"\\\")[:200]}')\n\"\n```\n\n**AI instructions:** Read ALL search results above. Build a channel frequency map.\n\n**Step 1 -- Normalize URLs to root domain:** `https://techcrunch.com/2023/06/article-title` → `techcrunch.com`. `https://open.spotify.com/episode/...` → identify as podcast (spotify episode). `https://www.reddit.com/r/devops/` → `reddit.com/r/devops`.\n\n**Step 2 -- Count occurrences:** How many different competitors appeared in results from each channel root? A channel that shows up in Competitor A's Track A AND Competitor B's Track A counts as frequency 2.\n\n**Step 3 -- Tier channels** (follow `references/tier-scoring.md`):\n- Tier 1: appeared in 3+ competitors\n- Tier 2: appeared in exactly 2 competitors\n- Tier 3: appeared in 1 competitor\n\n**Step 4 -- Extract story angles** from article/episode titles in the results. Classify each as: funding-announcement / product-launch / founder-story / trend-piece / category-creation / how-to / comparison / award. 
Do not infer -- only classify angles visible in the titles.\n\n**Step 5 -- Classify channel type** for each: editorial / podcast / community / newsletter.\n\nWrite to `/tmp/cprf-pr-patterns.json`:\n\n```bash\npython3 << 'PYEOF'\nimport json\n\npatterns = {\n    \"tier_1_channels\": [\n        # FILL -- channels appearing in 3+ competitors\n        # Each: {\"channel_name\": str, \"channel_url\": str, \"channel_type\": str,\n        #        \"frequency\": int, \"found_in_competitors\": [str],\n        #        \"evidence_urls\": [str], \"story_angles_used\": [str],\n        #        \"journalist_name\": \"\", \"journalist_beat\": \"\"}\n    ],\n    \"tier_2_channels\": [\n        # FILL -- channels appearing in exactly 2 competitors\n        # Each: {\"channel_name\": str, \"channel_url\": str, \"channel_type\": str,\n        #        \"frequency\": 2, \"found_in_competitors\": [str], \"evidence_urls\": [str],\n        #        \"story_angles_used\": [str]}\n    ],\n    \"tier_3_channels\": [\n        # FILL -- channels appearing in only 1 competitor (name + URL only)\n        # Each: {\"channel_name\": str, \"channel_url\": str, \"found_in_competitor\": str}\n    ],\n    \"data_quality_flags\": []\n}\n\njson.dump(patterns, open('/tmp/cprf-pr-patterns.json', 'w'), indent=2)\nPYEOF\n```\n\nVerify:\n\n```bash\npython3 -c \"\nimport json\np = json.load(open('/tmp/cprf-pr-patterns.json'))\nprint(f'Tier 1 channels: {len(p[\\\"tier_1_channels\\\"])}')\nfor ch in p['tier_1_channels']:\n    print(f'  {ch[\\\"frequency\\\"]}x {ch[\\\"channel_name\\\"]} ({ch[\\\"channel_type\\\"]}) -- {ch[\\\"found_in_competitors\\\"]}')\nprint(f'Tier 2 channels: {len(p[\\\"tier_2_channels\\\"])}')\nprint(f'Tier 3 channels: {len(p[\\\"tier_3_channels\\\"])}')\n\"\n```\n\n**If fewer than 3 Tier 1 channels:** This is normal for niche markets. Promote the top Tier 2 channels (highest frequency) to get to at least 3 total channels with deep dives. 
Note the promotion in `data_quality_flags`.\n\n---\n\n## Step 8: Journalist / Host Lookup\n\nFor each Tier 1 channel (up to 7), run one targeted Tavily search:\n\n```bash\npython3 << 'PYEOF'\nimport json, os, urllib.request\n\npatterns = json.load(open('/tmp/cprf-pr-patterns.json'))\nanalysis = json.load(open('/tmp/cprf-product-analysis.json'))\nl2 = analysis['industry_taxonomy']['l2']\nl3 = analysis['industry_taxonomy']['l3']\ntavily_key = os.environ.get('TAVILY_API_KEY', '')\n\nlookup_results = []\n\nfor channel in patterns.get('tier_1_channels', [])[:7]:\n    name = channel['channel_name']\n    ctype = channel['channel_type']\n\n    if ctype == 'editorial':\n        query = f'\"{name}\" journalist reporter writer covers {l2} {l3} startups technology'\n    elif ctype == 'podcast':\n        query = f'\"{name}\" podcast host interviewer {l2} {l3} founders'\n    else:\n        query = f'\"{name}\" moderator community manager {l2} {l3}'\n\n    payload = json.dumps({\n        \"api_key\": tavily_key,\n        \"query\": query,\n        \"search_depth\": \"basic\",\n        \"max_results\": 5\n    }).encode()\n\n    req = urllib.request.Request(\n        'https://api.tavily.com/search',\n        data=payload,\n        headers={'Content-Type': 'application/json'},\n        method='POST'\n    )\n    try:\n        with urllib.request.urlopen(req, timeout=20) as resp:\n            data = json.loads(resp.read())\n            lookup_results.append({\n                'channel': name,\n                'channel_type': ctype,\n                'query': query,\n                'answer': data.get('answer', ''),\n                'results': [\n                    {'title': r['title'], 'url': r['url'], 'content': r.get('content', '')[:400]}\n                    for r in data.get('results', [])[:3]\n                ]\n            })\n            print(f'Journalist lookup -- {name}: {len(data.get(\"results\", []))} results')\n    except Exception as e:\n        lookup_results.append({\n 
           'channel': name, 'channel_type': ctype,\n            'query': query, 'answer': '', 'results': [], 'error': str(e)\n        })\n        print(f'Journalist lookup -- {name}: FAILED ({e})')\n\njson.dump(lookup_results, open('/tmp/cprf-journalist-results.json', 'w'), indent=2)\nprint(f'Journalist lookups complete: {len(lookup_results)} channels')\nPYEOF\n```\n\nPrint results for AI extraction:\n\n```bash\npython3 -c \"\nimport json\nresults = json.load(open('/tmp/cprf-journalist-results.json'))\nfor r in results:\n    print(f'\\n=== {r[\\\"channel\\\"]} ({r[\\\"channel_type\\\"]}) ===')\n    print(f'Answer: {r.get(\\\"answer\\\",\\\"\\\")[:400]}')\n    for item in r.get('results', []):\n        print(f'  {item[\\\"title\\\"]}')\n        print(f'  {item.get(\\\"content\\\",\\\"\\\")[:300]}')\n\"\n```\n\n**AI instructions:** For each Tier 1 channel, extract from the search results above:\n- `journalist_name`: the person's name verbatim from a snippet. Write \"not found in search data\" if absent -- do NOT fill from training knowledge.\n- `journalist_beat`: what topics they cover, extracted from snippet text. 
Write \"not found in search data\" if absent.\n\nUpdate `/tmp/cprf-pr-patterns.json` with `journalist_name` and `journalist_beat` populated for each Tier 1 channel:\n\n```bash\npython3 << 'PYEOF'\nimport json\n\npatterns = json.load(open('/tmp/cprf-pr-patterns.json'))\n\n# FILL: update journalist_name and journalist_beat for each tier_1 channel\n# journalist_name and journalist_beat come from search snippet text only\n# Write \"not found in search data\" if the snippets don't name a person\n\n# Example:\n# patterns['tier_1_channels'][0]['journalist_name'] = 'Ingrid Lunden'\n# patterns['tier_1_channels'][0]['journalist_beat'] = 'enterprise software and developer tools'\n\njson.dump(patterns, open('/tmp/cprf-pr-patterns.json', 'w'), indent=2)\nprint('Journalist data updated.')\nfor ch in patterns['tier_1_channels']:\n    print(f\"  {ch['channel_name']}: {ch.get('journalist_name','--')} | {ch.get('journalist_beat','--')}\")\nPYEOF\n```\n\n---\n\n## Step 9: Synthesis -- Generate Outreach Packages (AI)\n\nPrint consolidated data:\n\n```bash\npython3 -c \"\nimport json\n\nanalysis = json.load(open('/tmp/cprf-product-analysis.json'))\npatterns = json.load(open('/tmp/cprf-pr-patterns.json'))\n\nprint('=== PRODUCT ===')\nprint(f'Name: {analysis[\\\"product_name\\\"]}')\nprint(f'What it does: {analysis[\\\"one_line_description\\\"]}')\nprint(f'Differentiators:')\nfor d in analysis['differentiators']:\n    print(f'  - {d}')\nprint(f'ICP: {analysis[\\\"icp\\\"]}')\nprint(f'Geography: {analysis[\\\"geography_bias\\\"]}')\nprint()\nprint('=== TIER 1 CHANNELS ===')\nfor ch in patterns['tier_1_channels']:\n    print(f'\\n{ch[\\\"channel_name\\\"]} ({ch[\\\"channel_type\\\"]}, freq={ch[\\\"frequency\\\"]})')\n    print(f'  Found in: {ch[\\\"found_in_competitors\\\"]}')\n    print(f'  Evidence URLs: {ch[\\\"evidence_urls\\\"][:3]}')\n    print(f'  Story angles: {ch[\\\"story_angles_used\\\"]}')\n    print(f'  Journalist: {ch.get(\\\"journalist_name\\\",\\\"not found\\\")} | 
{ch.get(\\\"journalist_beat\\\",\\\"\\\")}')\nprint()\nprint('=== TIER 2 CHANNELS ===')\nfor ch in patterns['tier_2_channels']:\n    print(f'  {ch[\\\"channel_name\\\"]} ({ch[\\\"channel_type\\\"]}) -- found in {ch[\\\"found_in_competitors\\\"]}')\n\"\n```\n\n**AI instructions -- zero-hallucination rules:**\n\n1. **Channel names:** Only include channels from `/tmp/cprf-pr-patterns.json`. No invented channels.\n2. **Journalist/host names:** Use only what was populated in Step 8. Write \"not found in search data\" if blank. Do NOT substitute from training knowledge.\n3. **Story angles:** Use only angles extracted from article/episode titles in the search results. Do not infer from training knowledge.\n4. **Cold pitch drafts:** Must reference (a) a specific story angle from the evidence, (b) at least one specific differentiator from the product analysis, (c) the journalist's beat if found. No generic \"we'd love to be featured\" or \"our product is revolutionary\" language.\n5. **Channel overview:** 1-2 sentences from search snippets only. Write \"not found in search data\" if the snippets don't describe the channel's coverage focus.\n6. **Bonus hooks:** 3 angles that your competitors did NOT use in their coverage. These must be grounded in the product's actual differentiators from Step 4 -- not generic advice.\n7. No em dashes. No banned words (powerful, seamless, game-changing, revolutionary, cutting-edge, leverage, transform).\n\n**Per Tier 1 channel generate:**\n- `channel_overview`: 1-2 sentences about coverage focus (from snippets)\n- `why_they_covered_competitors`: specific angle extracted from evidence titles\n- `journalist_name` + `journalist_beat`\n- `approach_method`: cold email / podcast pitch form / community post / LinkedIn DM (based on channel type)\n- `cold_pitch_draft`:\n  - `subject`: \"[Journalist name]: [their beat] + [your specific angle]\"\n  - `body`: 3-4 sentences. 
Structure: hook (reference their past coverage of a competitor) + what you do (one sentence) + why it fits their beat (tie to a specific differentiator) + ask (clear, low-friction CTA)\n\n**Also generate `bonus_hooks`**: 3 pitch angles not used by any competitor in the search results. Base each on a specific product differentiator.\n\nWrite to `/tmp/cprf-final.json`:\n\n```bash\npython3 << 'PYEOF'\nimport json\n\nresult = {\n    \"product_summary\": {\n        # FILL from analysis\n    },\n    \"competitors_researched\": [],  # FILL: names of confirmed competitors\n    \"tier_1_deep_dives\": [\n        # FILL per tier 1 channel:\n        # {\n        #   \"channel_name\": str,\n        #   \"channel_type\": str,  # editorial / podcast / community\n        #   \"frequency\": int,\n        #   \"found_in_competitors\": [str],\n        #   \"evidence_urls\": [str],\n        #   \"channel_overview\": str,\n        #   \"why_they_covered_competitors\": str,\n        #   \"story_angles_used\": [str],\n        #   \"journalist_name\": str,\n        #   \"journalist_beat\": str,\n        #   \"approach_method\": str,\n        #   \"cold_pitch_draft\": {\"subject\": str, \"body\": str}\n        # }\n    ],\n    \"tier_2_channels\": [\n        # FILL: {channel_name, channel_type, frequency, found_in_competitors, evidence_urls}\n    ],\n    \"tier_3_channels\": [\n        # FILL: {channel_name, found_in_competitor}\n    ],\n    \"bonus_hooks\": [\n        # FILL: 3 strings -- pitch angles not used by competitors\n    ],\n    \"data_quality_flags\": []\n}\n\njson.dump(result, open('/tmp/cprf-final.json', 'w'), indent=2)\nprint(f'Synthesis written.')\nprint(f'Tier 1 deep dives: {len(result.get(\"tier_1_deep_dives\", []))}')\nprint(f'Bonus hooks: {len(result.get(\"bonus_hooks\", []))}')\nPYEOF\n```\n\n---\n\n## Step 10: Self-QA, Present, and Save\n\n**Self-QA:**\n\n```bash\npython3 << 'PYEOF'\nimport json\n\nresult = json.load(open('/tmp/cprf-final.json'))\nfailures = []\n\n# 
Check 1: em dashes\nfull_text = json.dumps(result)\nif '—' in full_text:\n    result = json.loads(full_text.replace('—', '-'))\n    failures.append('Fixed: em dashes replaced with hyphens')\n\n# Check 2: banned words\nbanned = ['powerful', 'seamless', 'innovative', 'game-changing', 'revolutionary', 'revolutionize',\n          'excited to announce', 'cutting-edge', 'best-in-class', 'world-class',\n          'leverage', 'transform', 'disrupt']\nfor word in banned:\n    if word.lower() in json.dumps(result).lower():\n        failures.append(f'Warning: banned word \"{word}\" found in output -- review before presenting')\n\n# Check 3: cold pitch subjects exist\nfor dd in result.get('tier_1_deep_dives', []):\n    pitch = dd.get('cold_pitch_draft', {})\n    if not pitch.get('subject') or len(pitch.get('subject', '')) < 10:\n        # setdefault avoids a KeyError when cold_pitch_draft is missing entirely\n        dd.setdefault('cold_pitch_draft', {})['subject'] = 'not generated'\n        failures.append(f'Fixed: missing subject line for {dd.get(\"channel_name\")}')\n    if not pitch.get('body') or len(pitch.get('body', '')) < 50:\n        failures.append(f'Warning: very short pitch body for {dd.get(\"channel_name\")}')\n\n# Check 4: bonus hooks count\nif len(result.get('bonus_hooks', [])) != 3:\n    failures.append(f'Expected 3 bonus hooks, got {len(result.get(\"bonus_hooks\", []))}')\n\n# Check 5: \"not found in search data\" count\nnf_count = json.dumps(result).count('not found in search data')\nif nf_count > 0:\n    failures.append(f'INFO: {nf_count} field(s) marked \"not found in search data\" -- verify before outreach')\n\n# Check 6: tier 1 channels have evidence URLs\nfor ch in result.get('tier_1_deep_dives', []):\n    if not ch.get('evidence_urls'):\n        failures.append(f'Warning: {ch[\"channel_name\"]} has no evidence_urls')\n\nif 'data_quality_flags' not in result:\n    result['data_quality_flags'] = []\nresult['data_quality_flags'].extend(failures)\n\njson.dump(result, open('/tmp/cprf-final.json', 'w'), indent=2)\nprint(f'QA complete. 
{len(failures)} issues addressed.')\nfor f in failures:\n    print(f'  - {f}')\nif not failures:\n    print('All QA checks passed.')\nPYEOF\n```\n\n**Present the output:**\n\n```\n## PR Intel: [product_name]\nDate: [today] | Competitors researched: [N] | Tier 1 channels: [N] | Tier 2 channels: [N]\n\n---\n\n### Your Product\n[one_line_description]\nDifferentiators: [list]\nCompetitors researched: [names]\n\n---\n\n### Tier 1 Channels (Proven Beats -- Found in 3+ Competitors)\n\n*These channels have already covered multiple companies in your space.*\n\n| Channel | Type | Found in | Journalist/Host | Approach |\n|---|---|---|---|---|\n[one row per tier 1 channel]\n\n---\n\n### Deep Dives + Cold Pitches\n\n#### 1. [Channel Name] (Tier 1 -- [Type], found in [N] competitors)\n\nCovers: [channel_overview]\nCovered competitors: [found_in_competitors with evidence URLs]\nStory angle they used: [why_they_covered_competitors]\nJournalist/Host: [journalist_name] | Beat: [journalist_beat]\nHow to reach: [approach_method]\n\n**Cold pitch:**\nSubject: [subject]\n\n[body -- 3-4 sentences]\n\n---\n\n[repeat for each tier 1 channel]\n\n---\n\n### Tier 2 Channels (Warm -- Found in 2 Competitors)\n\n| Channel | Type | Found in | URL |\n|---|---|---|---|\n[one row per tier 2 channel]\n\n---\n\n### Tier 3 Channels (Discovery -- Found in 1 Competitor)\n\n[comma-separated list of channel names with URLs]\n\n---\n\n### 3 Bonus Hooks (Angles Your Competitors Didn't Use)\n\n1. [hook_text]\n2. [hook_text]\n3. 
[hook_text]\n\n---\nData notes: [data_quality_flags, or \"None\"]\nSaved to: docs/pr-intel/[PRODUCT_SLUG]-[DATE].md\n```\n\n**Save to file and clean up:**\n\n```bash\nDATE=$(date +%Y-%m-%d)\nOUTPUT_FILE=\"docs/pr-intel/${PRODUCT_SLUG}-${DATE}.md\"\nmkdir -p docs/pr-intel\n# FILL: write the rendered report above to $OUTPUT_FILE before echoing\necho \"Saved to: $OUTPUT_FILE\"\n```\n\n```bash\nrm -f /tmp/cprf-product-raw.md /tmp/cprf-product-analysis.json \\\n      /tmp/cprf-competitors-raw.json /tmp/cprf-competitors-confirmed.json \\\n      /tmp/cprf-pr-raw.json /tmp/cprf-pr-patterns.json \\\n      /tmp/cprf-journalist-results.json /tmp/cprf-final.json\necho \"Temp files cleaned up.\"\n```","tags":["competitor","finder","opendirectory","varnan-tech","agent-skills","gtm","hermes-agent","openclaw-skills","skills","technical-seo"],"capabilities":["skill","source-varnan-tech","skill-competitor-pr-finder","topic-agent-skills","topic-gtm","topic-hermes-agent","topic-openclaw-skills","topic-skills","topic-technical-seo"],"categories":["opendirectory"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/Varnan-Tech/opendirectory/competitor-pr-finder","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add Varnan-Tech/opendirectory","source_repo":"https://github.com/Varnan-Tech/opendirectory","install_from":"skills.sh"}},"qualityScore":"0.504","qualityRationale":"deterministic score 0.50 from registry signals: · indexed on github topic:agent-skills · 108 github stars · SKILL.md body (27,569 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-25T18:55:03.528Z","embedding":null,"createdAt":"2026-04-24T06:56:03.766Z","updatedAt":"2026-04-25T18:55:03.528Z","lastSeenAt":"2026-04-25T18:55:03.528Z"}
'cut':2795,3115 'cutting-edg':2794,3114 'd':279,652,665,712,726,1074,1080,2514,2520,2713,3551 'd.get':669,674,730 'dash':335,341,2784,3080,3095 'data':183,536,670,957,1129,1138,1144,1334,1338,1343,1521,1529,1536,1578,1610,1617,1924,2033,2158,2175,2331,2355,2409,2449,2479,2650,2739,3021,3242,3253,3270,3306,3313,3317,3526,3528 'data.get':2187,2203,2212 'date':3360,3538,3547,3548,3557 'dd':3156,3177 'dd.get':3164,3191,3211 'deep':2027,2934,3039,3045,3161,3288,3414 'default':496 'depth':2147 'deriv':485,573 'describ':2745 'descript':11,480,505,532,543,558,808,864,990,1009,1248,1296,1329,2509,3377 'detail':115 'develop':883,898,2438 'devop':906 'didn':3514 'differ':570,821,1233,1712 'differenti':297,928,1015,1072,1077,2512,2517,2698,2774,2881,2910,3378 'direct':481,514,809,943 'discov':258,1106,1269 'discoveri':1086,3494 'disrupt':3126 'distinct':1224 'distinguish':935 'dive':2028,2935,3040,3046,3162,3289,3415 'dm':2838 'docs/pr-intel':3535,3554,3561 'domain':1687 'draft':90,177,947,2682,2845,2982,3167,3180 'dynam':966 'e':2218,2231,2238 'e.g':897,905,912 'echo':362,376,615,1092,1095,3562,3578 'edg':2796,3116 'edit':1381 'editori':28,1480,1818,2105,2947 'elif':2117 'els':601,769,2129 'em':334,340,2783,3079,3094 'email':2831 'embed':915 'encod':2152 'enterpris':2435 'enumer':1320 'env':442 'episod':1500,1699 'error':763,1096,2229 'europ':494,981 'everi':63,99,107,203,229,284 'evid':180,227,293,1856,1893,2566,2569,2692,2822,2956,2999,3280,3293,3303,3437 'exact':319,929,1183,1758,1874 'exampl':879,950,2418 'except':2215,2216 'excit':3111 'exist':3154 'expect':3227 'extend':3320 'extract':168,390,453,691,741,854,1769,2261,2310,2346,2665,2820 'f':677,737,771,1079,1134,1148,1153,1165,1171,1302,1324,1331,1421,1434,1526,1539,1554,1620,1632,1637,1642,1654,1659,1664,1946,1963,1978,1988,2107,2121,2131,2207,2233,2248,2276,2284,2295,2299,2459,2496,2502,2511,2519,2522,2527,2545,2557,2565,2573,2581,2604,3032,3036,3048,3138,3185,3204,3226,3259,3296,3330,3338,3342,3343,3569 'fail':2237 
'failur':3076,3321,3334,3340,3346 'failures.append':3092,3137,3184,3203,3225,3258,3295 'fallback':395,422,689 'featur':249,283,1482,2717 'feed':942 'fetch':129,460,474,622,678,738,768,782,803 'fewer':764,1998 'field':3263 'file':346,443,3542,3553,3566,3580 'fill':1000,1283,1399,1834,1870,1903,2336,2381,2922,2927,2936,2990,3004,3012 'find':13,59,82,416 'finder':4,52 'fintech':900 'firecrawl':377,380,384,446,626,628,644,680,693 'first':841 'fit':2874 'fix':3093,3186 'flag':538,1580,1926,2035,3023,3308,3315,3319,3530 'focus':2750,2811 'follow':1746 'forb':1486 'form':2834 'format':656 'found':1094,1099,1262,1852,1889,1920,1974,2328,2352,2406,2558,2561,2587,2611,2614,2647,2709,2736,2952,2996,3007,3143,3239,3250,3267,3388,3404,3424,3433,3476,3482,3495 'founder':1494,1788,2128 'founder-stori':1787 'free':427 'freq':2553 'frequenc':1679,1740,1850,1887,1965,2017,2555,2950,2995 'frequent':36 'friction':2886 'full':433,3081,3087 'full_text.replace':3091 'fund':1782 'funding-announc':1781 'game':2791,3108 'game-chang':2790,3107 'generat':275,968,2473,2803,2889,3183 'generic':276,939,2711,2779 'geographi':491,978,1023,2528,2530 'get':423,671,733,2019 'give':5,53 'global':495,982 'go':513 'good':1372 'got':3231 'ground':2768 'h':641,647,707 'hallucin':105,223,2621 'haven':329 'header':2160 'healthtech':901 'highest':2016 'hook':2753,2859,2891,3011,3050,3054,3217,3223,3230,3235,3510,3518,3521,3524 'host':42,86,593,888,953,2039,2124 'host.split':599 'how-to':1796 'http':592 'hyphen':343,3098 'icp':969,1016,2523,2525 'identifi':31,1695 'immedi':403 'import':583,585,602,662,723,997,1046,1127,1277,1397,1519,1608,1828,1939,2057,2265,2375,2483,2917,3070 'includ':2627 'indent':1035,1347,1418,1932,2245,2445,3029,3327 'industri':891,1010,1058,1060,1064,1068,1312,2071,2076 'infer':1803,2675 'info':3260 'infra':960 'ingrid':2426 'innov':3106 'input':464 'instruct':847,1176,1670,2304,2618 'int':1851,2951 'intel':3357 'interview':1489,1497,2125 'invent':2632 'issu':3335 
'item':1548,1630,1648,1656,1661,2290,2296 'item.get':1666,2300 'javascript':798 'javascript-rend':797 'job':972 'journalist':40,84,109,199,204,1863,1865,2038,2109,2208,2234,2249,2316,2340,2361,2364,2383,2386,2393,2396,2424,2433,2448,2464,2467,2582,2584,2589,2705,2824,2826,2847,2971,2974,3448,3451 'journalist/host':157,2635,3406,3447 'json':664,725,998,1047,1128,1278,1398,1520,1609,1829,1940,2058,2266,2376,2484,2918,3071 'json.dump':1030,1342,1411,1927,2239,2440,3024,3322 'json.dumps':2139,3083,3134,3246 'json.load':666,727,1049,1130,1280,1335,1522,1611,1942,2062,2066,2268,2378,2486,2490,3073 'json.loads':2176,3090 'key':365,368,372,379,382,386,399,409,448,630,646,695,714,717,1113,1116,1464,1467,2080,2084,2141,2143 'knowledg':146,202,2339,2658,2678 'l1':893,1012,1062 'l2':903,1013,1066,2069,2073,2113,2126,2136 'l3':909,1014,1070,1204,1314,2074,2078,2114,2127,2137 'label':921 'languag':878,2723 'launch':1786 'least':237,2022,2695 'leav':958 'len':681,742,759,774,1137,1423,1550,1950,1982,1992,2211,2252,3041,3051,3173,3199,3220,3232,3333 'level':896 'leverag':2797,3124 'like':796,922 'line':863,1008,2508,3189,3376 'linkedin':2837 'list':224,1220,1383,1393,1402,3379,3502 'live':986,1028 'll':1359 'look':37,1371 'lookup':2040,2085,2209,2235,2240,2250,2253 'lookup_results.append':2178,2219 'love':280,2714 'low':1573,2885 'low-frict':2884 'lower':613,3136 'ls':1088 'lunden':2427 'm':3550 'make':568,819 'manag':2135 'mani':1711 'map':1680 'mark':302,3265 'markdown':657,672,675 'market':877,2009 'max':2149 'md':3539,3558 'memori':221 'method':2165,2829,2978,3457 'miss':401,450,3187 'mistak':187 'mkdir':3559 'moder':2133 'monorepo':964 'multipl':77,3397 'must':119,137,149,159,206,234,286,2683,2766 'n':1315,1549,1557,1621,1633,2277,2546,3364,3368,3372,3426 
'name':110,136,148,158,197,205,483,856,861,1006,1056,1190,1234,1239,1292,1309,1327,1407,1436,1842,1864,1879,1910,1915,1969,2095,2098,2108,2122,2132,2180,2210,2221,2236,2317,2321,2362,2384,2394,2415,2425,2462,2465,2497,2500,2549,2585,2607,2625,2636,2825,2848,2928,2942,2972,2992,3006,3193,3213,3300,3359,3382,3420,3449,3505 'nativ':961 'neither':540 'netloc.replace':596 'news':68 'news.ycombinator.com':1509 'newslett':1821 'nf':3244,3255,3261 'nfound':1303 'nich':911,1205,2008 'none':3532 'normal':1683,2006 'note':534,1576,2029,3527 'nqueri':1149 'occurr':312,1709 'ok':773 'one':238,862,1007,1249,2050,2507,2696,2870,3375,3408,3485 'onlymaincont':658 'open':684,745,755,834,1032,1050,1131,1281,1336,1344,1415,1523,1612,1929,1943,2063,2067,2242,2269,2379,2442,2487,2491,3026,3074,3324 'open.spotify.com':1693 'open.spotify.com/episode/...':1692 'option':484,492 'os':2059 'os.environ.get':2081 'output':118,233,274,337,1117,1468,3145,3355,3552,3565 'outreach':49,2474,3273 'overview':2726,2805,2960,3430 'p':1941,1951,1958,1983,1993,3560 'packag':2475 'page':131,487,528,624,788,828,840,851,984,987,1026,1029 'pars':463 'pass':3351 'past':478,504,522,555,805,2862 'pattern':1587,1597,1830,1928,2061,2377,2419,2428,2441,2454,2489,2540,2599 'patterns.get':2090 'payload':2138,2159 'payment':907,916 'per':1474,2799,2937,3410,3487 'perez':215 'person':92,2319,2417 'persona':971,1018 'phase':1083,1105,1447,1452 'pick':1182 'piec':1792 'pipelin':890,954 'pitch':94,114,176,277,285,946,2681,2833,2844,2893,2981,3015,3152,3163,3166,3179,3208,3417,3459 'pitch.get':3170,3174,3196,3200 'pleas':804 'podcast':29,69,1492,1496,1697,1819,2119,2123,2832,2948 'polici':106 'popul':2366,2641 'post':637,703,2166,2836 'power':2788,3104 'pr':3,22,51,64,242,417,1362,1427,1445,1454,1603,3356 'pr-research':1453 'present':3061,3148,3353 'press':1483,1574 'primari':625 
'print':598,604,676,736,762,770,827,838,844,1037,1052,1057,1071,1078,1119,1133,1147,1152,1164,1170,1301,1323,1330,1420,1433,1513,1525,1532,1538,1553,1600,1619,1631,1636,1641,1653,1658,1663,1945,1962,1977,1987,2206,2232,2247,2257,2275,2283,2294,2298,2447,2458,2477,2493,2495,2501,2510,2518,2521,2526,2532,2533,2544,2556,2564,2572,2580,2591,2592,2603,3031,3035,3047,3329,3341,3347 'proceed':1582 'product':8,56,97,130,184,300,469,482,518,525,551,574,577,616,618,623,787,824,839,850,855,858,937,1005,1038,1053,1055,1108,1208,1215,1308,1459,1785,2494,2499,2701,2720,2771,2909,2920,3358,3374,3536,3555 'product-analysi':1107,1458 'product-launch':1784 'producthunt.com':1512 'promot':326,2010,2031 'prop':872 'prove':245 'proven':80,331,3386 'provid':490,501 'pyeof':996,1041,1276,1349,1396,1439,1827,1934,2056,2256,2374,2469,2916,3055,3069,3352 'python3':579,660,721,752,831,995,1044,1103,1125,1275,1395,1450,1517,1606,1826,1937,2055,2263,2373,2481,2915,3068 'qa':3060,3066,3331,3349 'qualiti':537,1579,1925,2034,3022,3307,3314,3318,3529 'queri':1151,1638,1640,2106,2120,2130,2144,2145,2184,2185,2225,2226 'r':606,1159,1166,1168,1534,1540,1546,1615,1622,1628,2191,2194,2201,2272,2278,2280 'r.get':1172,2197,2286,2292 'raw':734,1602 're':603 're.sub':605 'reach':3455 'read':344,757,836,1177,1671 'readabl':791 'reddit.com':1506,1704 'reddit.com/r/devops':1703 'refer':178,287,345,2684,2860 'references/pitch-guide.md':354 'references/pr-channel-types.md':352 'references/tier-scoring.md':356,1747 'remain':1593 'remov':1365 'render':799 'repeat':3466 'replac':338,3096 'report':2110 'req':2153,2170 'requir':375,411,475 'research':23,62,413,1360,1428,1446,1455,1528,1531,2926,3363,3381 'resp':2174 'resp.read':2177 'result':126,142,156,174,211,244,731,1120,1162,1180,1194,1244,1257,1289,1537,1552,1558,1568,1604,1618,1651,1674,1716,1777,2086,2150,2189,2204,2213,2214,2228,2241,2254,2258,2267,2274,2293,2314,2672,2903,2919,3025,3072,3084,3089,3135,3247,3311,3312,3316,3323 
'result.get':3042,3052,3158,3221,3233,3285 'return':46,789,1566 'review':1123,1405,3146 'revolution':3110 'revolutionari':2722,2793 'rm':3568 'root':1686,1720 'row':3409,3486 'rule':2622 'run':18,349,434,1136,1471,2049 'runner':955 's.get':1155,1161 'sarah':214 'save':3063,3533,3540,3563 'say':1370 'script':1093 'scripts/research.py':1089,1097,1104,1451 'seamless':2789,3105 'search':125,141,155,173,182,210,243,270,1135,1140,1146,1179,1193,1243,1256,1288,1473,1673,2053,2146,2313,2330,2354,2400,2408,2649,2671,2731,2738,2902,3241,3252,3269 'searches/run':437 'sector':904 'self':887,952,3059,3065 'self-host':886,951 'self-qa':3058,3064 'sentenc':1250,2729,2808,2857,2871,3465 'separ':3501 'set':369,374,383,388,527,632,698 'setup':359 'short':557,807,3207 'show':257,1267,1724 'site':794,1505,1508,1511 'size':977,1022 'skill' 'skill-competitor-pr-finder' 'skip':251,508 'slug':575,578,617,619,3537,3556 'snippet':165,212,1198,1252,1665,2325,2348,2401,2412,2732,2742,2813 'softwar':2436 'sourc':529,985,1027,1253,1298 'source-varnan-tech' 'space':3401 'specif':123,179,289,296,910,932,949,2687,2697,2818,2852,2880,2908 'spotifi':1698 'startup':217,553,2115 'step':255,357,461,509,516,620,822,967,1081,1263,1440,1595,1681,1706,1742,1767,1811,2036,2470,2643,2776,3056 'still':1589 'stop':402,781 'stori':111,166,1770,1789,1859,1896,2574,2577,2660,2688,2967,3439 'str':1293,1295,1297,1300,1408,1410,1843,1846,1849,1855,1858,1862,1880,1883,1886,1892,1895,1899,1916,1919,1923,2230,2943,2946,2955,2958,2961,2966,2970,2973,2976,2979,2984,2986 'string':3014 'strip':614 'structur':2858 'subject':2846,2983,3153,3171,3175,3181,3188,3460,3461 'substitut':2655 'summari':1515,2921 'support':962 'surfac':71 'swap':1367 'synthesi':2472,3033 'sys':586,663,724 'sys.stdin':667,728 'target':81,2051 'tavili':124,140,164,363,366,370,389,397,407,452,690,715,740,1112,1114,1463,1465,2052,2079,2082,2142 'tavily-key':1111,1462 'taxonomi':892,1011,1061,1065,1069,1313,2072,2077 'tdata':1544,1626,1639 
'tdata.get':1551,1644,1650 'team':884 'techcrunch':219,1485 'techcrunch.com':1689,1691 'techcrunch.com/2023/06/article-title':1688 'technolog':923,2116 'telemedicin':908 'tell':404,783,1350 'temp':3579 'text':523,2349,2402,3082,3088,3519,3522,3525 'thing':933 'three':20,1443 'three-track':19,1442 'tie':2877 'tier':48,100,306,313,317,321,428,1744,1748,1754,1761,1831,1867,1900,1947,1952,1959,1979,1984,1989,1994,2001,2013,2043,2091,2307,2369,2390,2420,2429,2455,2534,2541,2593,2600,2800,2932,2938,2987,3001,3037,3043,3159,3276,3286,3365,3369,3383,3411,3421,3469,3472,3488,3491 'timeout':2171 'titl':171,973,1167,1195,1655,1657,1774,1810,2190,2192,2297,2668,2823 'today':3361 'tool':899,2439 'top':15,895,2012 'top-level':894 'topic':2343 'topic-agent-skills' 'topic-gtm' 'topic-hermes-agent' 'topic-openclaw-skills' 'topic-skills' 'topic-technical-seo' 'total':1477,2024 'trace':120,207 'track':21,1444,1478,1490,1501,1543,1547,1555,1562,1625,1629,1634,1730,1736 'track.upper':1635 'train':145,201,2338,2657,2677 'transform':2798,3125 'trend':1791 'trend-piec':1790 'tri':2167 'true':659 'type':650,710,975,1020,1815,1848,1885,1972,2102,2163,2182,2223,2282,2552,2610,2842,2945,2994,3403,3423,3481 'unclear':983 'unless':476 'updat':1385,2358,2382,2450 'url':9,57,152,228,239,459,470,472,507,541,548,587,588,595,611,653,654,718,719,1169,1240,1254,1258,1294,1299,1333,1409,1438,1660,1662,1684,1845,1857,1882,1894,1911,1918,2193,2195,2567,2570,2957,3000,3281,3294,3304,3438,3484,3507 'url.startswith':591 'urllib.parse':582 'urllib.request':2060 'urllib.request.request':2154 'urllib.request.urlopen':2169 'urlpars':584,594 'us':493,497,980,1025 'use':67,333,393,456,520,1861,1898,2579,2637,2662,2761,2896,2969,3018,3442,3516 'useless':273 'user':264,406,477,500,531,785,989,1273,1352,1380,1404 'vagu':920 'valu':871 'verbatim':161,2322 'verifi':1042,1935,3271 'via':679,739 'video':918 'visibl':1807 'w':686,747,1034,1346,1417,1931,2244,2444,3028,3326 'wait':261,1375 'want':191 'warm':3475 
'warn':3139,3205,3297 'wast':269 'wire':1487 'without':226 'word':875,2787,3102,3128,3141,3142 'word.lower':3132 'work':1590 'world':3122 'world-class':3121 'write':213,687,748,991,1238,1390,1822,2326,2350,2404,2645,2734,2911 'writer':2111 'written':1040,3034 'wrong':196,267 'www':597 'www.reddit.com':1701 'www.reddit.com/r/devops/':1700 'x':636,702,1966 'y':3549 'z0':609 'zero':104,2620 'zero-hallucin':103,2619","prices":[{"id":"db6d351c-a605-4ea4-9e6f-82b15dbf79c9","listingId":"62781a85-f10c-4c3d-95b4-19d06e7833b9","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"Varnan-Tech","category":"opendirectory","install_from":"skills.sh"},"createdAt":"2026-04-24T06:56:03.766Z"}],"sources":[{"listingId":"62781a85-f10c-4c3d-95b4-19d06e7833b9","source":"github","sourceId":"Varnan-Tech/opendirectory/competitor-pr-finder","sourceUrl":"https://github.com/Varnan-Tech/opendirectory/tree/main/skills/competitor-pr-finder","isPrimary":false,"firstSeenAt":"2026-04-24T06:56:03.766Z","lastSeenAt":"2026-04-25T18:55:03.528Z"}],"details":{"listingId":"62781a85-f10c-4c3d-95b4-19d06e7833b9","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"Varnan-Tech","slug":"competitor-pr-finder","github":{"repo":"Varnan-Tech/opendirectory","stars":108,"topics":["agent-skills","gtm","hermes-agent","openclaw-skills","skills","technical-seo"],"license":null,"html_url":"https://github.com/Varnan-Tech/opendirectory","pushed_at":"2026-04-25T16:06:48Z","description":" AI Agent Skills built for GTM, Technical Marketing, and growth 
automation.","skill_md_sha":"8d1f3a7887352fa150417345b445309105423bc7","skill_md_path":"skills/competitor-pr-finder/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/Varnan-Tech/opendirectory/tree/main/skills/competitor-pr-finder"},"layout":"multi","source":"github","category":"opendirectory","frontmatter":{"name":"competitor-pr-finder","description":"Give it your product URL or description. It finds your top 5 competitors, runs three-track PR research across all of them (editorial, podcasts, communities), identifies which channels appear most frequently, looks up the journalist or host for each, and returns a tiered outreach list with story angles and ready-to-send cold pitch drafts tailored to your product. Use when asked to find PR opportunities, discover where competitors got featured, build a media outreach list, find which journalists cover my space, or get pitch templates for press coverage.","compatibility":"[claude-code, gemini-cli, github-copilot]"},"skills_sh_url":"https://skills.sh/Varnan-Tech/opendirectory/competitor-pr-finder"},"updatedAt":"2026-04-25T18:55:03.528Z"}}