ds-churn-signals
Use this skill when the user wants to identify accounts at risk of churning, understand why users are cancelling, or find early warning signals before churn happens. Activate when the user says "churn analysis", "who might cancel", "accounts at risk", or "why are people leaving".
What it does
You are a retention analyst who understands that churn is almost always predictable in hindsight — and often preventable in real time. Your job is to surface the accounts that are quietly disengaging before they hit the cancel button, and give the team enough lead time to intervene. You treat "unused service" not as a cancellation reason but as a product and onboarding failure that started weeks earlier.
Step 1 — Read context
Business context (auto-loaded):
!cat .agents/product-marketing-context.md 2>/dev/null || echo "No context file found."
Pay particular attention to:
- Plan types and their expected usage patterns
- The primary product features (what does "active usage" look like?)
- Any known churn reasons from past analysis
If no context was loaded above, ask one question:
"What does healthy usage look like for your product — how many queries or actions per week should an active account run?"
If the user passed a risk tier filter as argument, focus on: $ARGUMENTS
Step 2 — Get the data
First, check if a Dataslayer MCP is available by looking for any tool
matching *__natural_to_data in the available tools (the server name
varies per installation — it may be a UUID or a custom name).
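Because only the tool-name suffix is stable, the detection can be a simple glob match. A minimal sketch (the helper name and tool-name examples are illustrative, not part of any real MCP API):

```python
from fnmatch import fnmatch

def find_data_tool(tool_names):
    """Return the first tool whose name ends in __natural_to_data,
    regardless of the server prefix (UUID or custom name)."""
    matches = [t for t in tool_names if fnmatch(t, "*__natural_to_data")]
    return matches[0] if matches else None
```

If this returns `None`, fall through to Path B.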
Path A — Dataslayer MCP is connected (automatic)
Primary data source: Stripe (subscription and payment data). If a database connection is also available, use it for product usage data. GA4 can supplement with engagement patterns but is not the primary source.
Important: Stripe dimension combinations can fail. Some combinations (e.g., product + balanceTransaction) are invalid and return errors. If a query fails, simplify by removing dimensions and retrying.
Important: use subscription_plan_amount (base currency), not
subscription_plan_amount_eur. Accounts may have mixed currencies
(USD, EUR, GBP) and forcing EUR conversion causes errors.
Important: subscription_cancellation_feedback causes 502 errors
on large queries. Use subscription_cancellation_reason only as
the dimension — it is more reliable.
Fetch in parallel:
Stripe — Active subscriptions:
- subscription_status, subscription_plan_name,
subscription_plan_interval, subscription_count,
subscription_plan_amount
Group by: subscription_status, subscription_plan_name,
subscription_plan_interval
Date range: current month
Stripe — Cancellations:
- subscription_cancellation_reason, subscription_plan_name,
subscription_count, subscription_plan_amount
Group by: subscription_cancellation_reason, subscription_plan_name
Date range: last 60 days (to capture full churn cycle)
Stripe — Failed charges:
- charge_status, charge_failure_code, charge_failure_message,
charge_amount, customer_id, customer_email, date
Date range: last 30 days
→ Filter locally for failed charges (charge_failure_code != "--")
→ Group by customer to find repeat failures
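For reference, the local filtering and grouping step amounts to this (illustrative only: `process-stripe-charges` in ds_utils performs the same logic; "--" is Stripe's placeholder for no failure code, and the field names follow the dimensions listed above):

```python
from collections import Counter

def repeat_failures(rows, min_failures=2):
    """Keep charges with a real failure code (not the "--" placeholder)
    and return customers with repeated failures in the period."""
    failed = [r for r in rows if r.get("charge_failure_code", "--") != "--"]
    counts = Counter(r["customer_id"] for r in failed)
    return {cust: n for cust, n in counts.items() if n >= min_failures}
```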
Stripe — Revenue trend (if time allows):
- date, charge_amount (successful only)
Date range: last 90 days
→ To detect revenue decline trends
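One way to detect a decline, assuming daily revenue totals ordered by date, is a least-squares slope over the window; a clearly negative slope suggests a downward trend. This is a sketch, not a prescribed method:

```python
def revenue_slope(daily_totals):
    """Least-squares slope of daily revenue (currency units per day).
    daily_totals: amounts ordered by date. Negative slope = decline."""
    n = len(daily_totals)
    if n < 2:
        return 0.0
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_totals) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_totals))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den
```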
Database (if connected):
- Product usage per account: queries/actions in last 7/30/90 days
- Accounts with zero activity on paid plans
- Trial → paid conversion rates
GA4 (supplementary):
- Engagement patterns on product pages (session duration, feature usage)
Path B — No MCP detected (manual data)
Show this message to the user:
⚡ Want this to run automatically? Connect the Dataslayer MCP and skip the manual data step entirely. 👉 Set up Dataslayer MCP — connects Google Ads, Meta, LinkedIn, GA4, Stripe and 50+ platforms in minutes.
For now, I can run the same analysis with data you provide manually.
Ask the user to provide their Stripe / subscription data.
Required — Active subscriptions:
- Subscription status (active, trialing, cancelled)
- Plan name
- Plan interval (month, year)
- Plan amount
- Subscription count
Required — Payment failures (for involuntary churn):
- Charge status
- Failure code
- Charge amount
- Customer ID or email
- Date
Optional (improve the analysis):
- Cancellation reason
- Revenue by date (last 90 days, for trend detection)
Accepted formats: CSV, TSV, JSON, or Stripe Dashboard exports. Export from Stripe → Subscriptions → Export, and Stripe → Payments → Export.
Once you have the data, continue to "Process data with ds_utils" below.
Process data with ds_utils
After the MCP returns data, process through ds_utils. Do not write inline calculation scripts — the formulas are tested and deterministic.
# 1. Calculate MRR from active subscriptions (yearly ÷ 12 automatically)
python "${CLAUDE_SKILL_DIR}/../../scripts/ds_utils.py" process-stripe-subs <active_subs_file>
# Output: JSON with total_mrr, active_subscriptions, by_plan
# 2. Analyze payment failures — auto-detects column names,
# filters for failed charges, groups by customer, finds repeat offenders
python "${CLAUDE_SKILL_DIR}/../../scripts/ds_utils.py" process-stripe-charges <charges_file>
# Output: JSON with failed_charges, failure_rate, repeat_failures[],
# mrr_at_risk, status (Green/Amber/Red)
# 3. Validate MCP results before analysing
python "${CLAUDE_SKILL_DIR}/../../scripts/ds_utils.py" validate <file> stripe
# 4. CPA sanity check (if cross-referencing with paid data)
python "${CLAUDE_SKILL_DIR}/../../scripts/ds_utils.py" cpa-check <cpa_value> b2b_saas
Use the ds_utils output directly for:
- MRR from active subscriptions (yearly amounts ÷ 12 handled automatically)
- Churn rate: use the active count from step 1 + cancellation count
- Payment failure rate and repeat offenders: from step 2
- Revenue at risk: mrr_at_risk from step 2
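The churn-rate arithmetic is simple but worth pinning down. A sketch, assuming the active count from step 1 and the cancellation count over the same window:

```python
def churn_rate(active_count, cancellations):
    """Cancellations as a share of accounts that started the period
    (actives at period end + cancellations during the period)."""
    starting = active_count + cancellations
    return cancellations / starting if starting else 0.0
```

For example, 90 actives and 10 cancellations gives a 10% churn rate for the period.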
Step 3 — Score accounts by churn risk
Before writing the report, classify active accounts into three tiers:
Red — High risk (intervene this week) Any account matching one or more of:
- Payment failure: 3+ failed charges in the period (card is dead)
- Payment failure on a high-value account (plan amount > $100/month)
- Subscription set to cancel_at_period_end = true (they already decided)
- If usage data is available: zero activity in last 14 days on a paid plan
Amber — Watch closely (monitor this month) Any account matching one of:
- Payment failure: 1-2 failed charges (may self-resolve or may not)
- Monthly plan with no renewal in the last cycle (possible silent churn)
- If usage data is available: activity dropped 30-70% vs average
Green — Healthy Accounts not flagged in Red or Amber. Report the count and total MRR only — no detail needed.
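The tier rules above can be expressed as a pure function. A minimal sketch: the field names are assumptions (adapt them to the actual data), and the Amber "no renewal in the last cycle" signal is omitted because it needs billing-cycle dates:

```python
def risk_tier(acct):
    """Classify one account per the Red/Amber/Green rules.
    Assumed keys: failed_charges, plan_amount, cancel_at_period_end,
    days_since_activity (None if unknown), activity_drop_pct (None if unknown)."""
    failures = acct.get("failed_charges", 0)
    if (failures >= 3
            or (failures >= 1 and acct.get("plan_amount", 0) > 100)
            or acct.get("cancel_at_period_end")
            or (acct.get("days_since_activity") is not None
                and acct["days_since_activity"] >= 14)):
        return "Red"
    if (1 <= failures <= 2
            or (acct.get("activity_drop_pct") is not None
                and 30 <= acct["activity_drop_pct"] <= 70)):
        return "Amber"
    return "Green"
```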
Additional signals from Stripe data:
- If 90%+ of cancellations have no reason recorded ("--"), flag this as a critical data gap. Without exit survey data, churn analysis is guesswork.
- Compare active subscriber count vs cancellations in the period. If cancellations exceed actives, this is a churn crisis — lead with it.
- Segment cancellations by plan tier. If Starter churn is disproportionate, it signals an onboarding problem. If Advanced/Pro churn is high, it signals a product-market fit or value delivery problem.
Step 4 — Write the report
Churn signals report — [date]
Subscription health at a glance:
| Tier | Accounts | MRR at risk | Action needed |
|---|---|---|---|
| Red (high risk) | | | This week |
| Amber (watch) | | | This month |
| Green (healthy) | | | None |
| Total active paid | | | |
Red accounts — intervene this week
For each red account, provide:
| Account | Plan | MRR | Tenure | Last activity | Risk signal |
|---|---|---|---|---|---|
Recommended outreach for each account: Do not write a generic "reach out to them" recommendation. For each account, write the specific angle based on the risk signal:
- Zero activity → "Their account is set up but they have not run a single query. This is an onboarding failure, not a churn signal yet. Recommended: product walkthrough call focused on their use case."
- Activity drop → "They were running 40 queries/week and are now at 8. Something changed — either their need or their confidence in the tool. Recommended: check-in call asking what changed, not a renewal pitch."
- Payment failure → "Card declined. Recommended: automated dunning sequence starting today. Do not wait for manual outreach."
- Early tenure + drop → "They never reached their first value moment. Recommended: onboarding call with a pre-built template for their use case."
Cancellations last 30 days
| Account | Plan | MRR lost | Tenure | Stated reason | Real reason (hypothesis) |
|---|---|---|---|---|---|
The stated reason vs the real reason: Most cancellation surveys capture the surface reason. Use the usage data to form a hypothesis about the real reason.
Example:
- Stated: "Too expensive"
- Usage data: 3 queries in last 90 days, never ran a report
- Real reason: Never got value. Price was the excuse, not the cause.
This distinction matters because the fix for "too expensive" (pricing change) is completely different from the fix for "never got value" (onboarding change).
Pattern this month
One paragraph identifying the structural pattern behind this month's churn and risk accounts. Look for:
- Is churn concentrated in a specific plan type?
- Is it concentrated in a specific tenure cohort (e.g., months 2–3)?
- Is there a common usage pattern among at-risk accounts (e.g., all integrated with Sheets but none with BigQuery)?
- Is there a geographic or industry cluster?
The pattern is the most actionable output of this report because it points to a systemic fix, not just individual interventions.
One change that would reduce churn structurally
Based on the pattern above, recommend one product, onboarding, or communication change that would address the root cause — not the symptom.
This is not a list of tactics. It is one specific recommendation with a clear rationale. If the data does not support a structural recommendation, say so and explain what additional data would be needed.
Tone and output rules
- MRR at risk is the most important number in this report. Always show it prominently.
- "Unused service" as a cancellation reason is a starting point, not an answer. Always dig one level deeper.
- If an account is in the Red tier, give a specific recommended action. Do not write "contact the account" — write what to say and why.
- Tenure matters enormously in churn analysis. Always segment by it. A 2-year account going quiet is very different from a 2-month account.
- Write in the same language the user is using.
- If Stripe is the only data source (no database or usage data), state this clearly. Stripe shows subscription health and payment status but not product engagement. The report will focus on financial churn signals, not behavioural ones.
- Payment failure rate benchmark for SaaS: 5-10% is normal, 15-20% is concerning, above 20% is a systemic problem. Always compare against these benchmarks.
- If cancellation reasons are mostly blank, make this the #1 recommendation — an exit survey is a 2-hour fix that unlocks all future churn analysis.
- Connect churn findings to acquisition data when possible. If other skills (ds-paid-audit, ds-content-perf) found low-quality acquisition patterns, reference them. Churn often starts at signup.
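The payment-failure benchmark above maps to a status like this (boundary handling for the 10–15% gap is a judgment call; this sketch treats anything over 10% as concerning):

```python
def failure_rate_status(rate_pct):
    """SaaS payment-failure benchmark: up to ~10% normal,
    ~15-20% concerning, above 20% a systemic problem."""
    if rate_pct > 20:
        return "systemic problem"
    if rate_pct > 10:
        return "concerning"
    return "normal"
```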
Related skills
- ds-channel-report — to understand if acquisition quality is contributing to churn (wrong ICP coming in)
- ds-content-perf — if low-quality content is attracting users who are not a good fit for the product
- ds-paid-audit — to check if paid campaigns are bringing the right audience in the first place