
retention-engagement

Improve retention and engagement: diagnosis, aha moment, lever hypotheses, experiment backlog. See also: user-onboarding (first-time UX).

Price: free
Protocol: skill
Verified: no

What it does

Retention & Engagement

Scope

Covers

  • Diagnosing retention + engagement (cohorts/curves, frequency, segments, drop-offs)
  • Identifying the activation / “aha moment” and reducing time-to-value
  • Designing habit + re-engagement interventions (daily return, reminders, content loops)
  • Creating accruing value and ethical switching costs (“mounting loss”)
  • Turning insights into a prioritized experiment + measurement plan

When to use

  • “Improve retention / reduce churn”
  • “Increase engagement / DAU/WAU”
  • “Define our activation / aha moment”
  • “D1/D7 retention is low—fix onboarding and time-to-value”
  • “Create a retention experiment backlog and a 30/60/90 plan”

When NOT to use

  • You don’t have (or can’t assume) a stable value proposition / ICP (use problem-definition).
  • You’re primarily deciding pricing/packaging/paywalls (this skill can add retention context but won’t replace pricing work).
  • You need acquisition loop design (use designing-growth-loops).
  • You need to synthesize qualitative churn feedback before proposing experiments (use analyzing-user-feedback or interviews).
  • The problem is specifically first-time onboarding UX (signup flow, empty states, guided setup) rather than full-lifecycle retention (use user-onboarding).
  • You want to apply behavioral science frameworks (habit loops, nudge theory, loss aversion mechanics) as the primary lens rather than a retention metrics lens (use behavioral-product-design).
  • You need to determine whether you have product-market fit before optimizing retention (use measuring-product-market-fit).

Inputs

Minimum required

  • Product + target user/ICP and 1–2 key segments
  • Current stage (pre-PMF / early PMF / growth / mature)
  • Best-available baseline metrics (even rough):
    • retention (D1/D7/D30 or weekly cohorts), churn, engagement (DAU/WAU/MAU), activation rate, time-to-value (a minimal cohort computation is sketched after this list)
  • Onboarding flow summary (steps/screens + where users drop)
  • Constraints: timebox, engineering/design capacity, allowed channels (email/push/in-app), privacy/legal/brand limits
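
For the baseline retention metrics above, here is a minimal sketch of classic N-day cohort retention, assuming a pandas events table with a `user_id` and a datetime `event_time` column (column names are placeholders to adapt to your schema, not prescribed by this skill):

```python
# Minimal D1/D7/D30 cohort-retention sketch.
# Assumed schema: one row per user event, columns "user_id" and
# "event_time" (datetime64). Names are illustrative.
import pandas as pd

def cohort_retention(events: pd.DataFrame, days=(1, 7, 30)) -> pd.DataFrame:
    """Per first-seen-date cohort: share of users active exactly N days later."""
    events = events.copy()
    events["event_date"] = events["event_time"].dt.normalize()
    first_seen = events.groupby("user_id")["event_date"].min().rename("cohort_date")
    events = events.join(first_seen, on="user_id")
    events["day_offset"] = (events["event_date"] - events["cohort_date"]).dt.days

    cohort_sizes = first_seen.value_counts().rename("users")
    out = pd.DataFrame({"users": cohort_sizes})
    for d in days:
        retained = (
            events[events["day_offset"] == d]
            .groupby("cohort_date")["user_id"]
            .nunique()
        )
        out[f"d{d}"] = (retained / cohort_sizes).fillna(0.0).round(3)
    return out.sort_index()
```

This computes strict N-day retention (active on exactly day N); swap the equality for a `>=` comparison if you prefer unbounded "day N or later" retention.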

Missing-info strategy

  • Ask up to 5 questions from references/INTAKE.md, then proceed.
  • If metrics are missing, proceed with explicit assumptions and label confidence.
  • Do not request secrets or PII; prefer aggregated metrics and redacted funnels.

Outputs (deliverables)

Produce a Retention & Engagement Improvement Pack (Markdown in-chat; or as files if requested) containing:

  1. Context snapshot (goal, segments, constraints, timebox)
  2. Metric definitions + guardrails (how “retention” and “engagement” are measured)
  3. Retention + engagement diagnosis (cohorts/curves, segments, drop-offs, churn drivers)
  4. Activation / aha moment definition (candidate behaviors + threshold + validation plan)
  5. Lever hypotheses map (onboarding → habit → accruing value → re-engagement)
  6. Experiment backlog (prioritized; experiment cards with success metrics + guardrails)
  7. Measurement + instrumentation plan (events, dashboards, owners if known)
  8. 30/60/90 execution plan
  9. Risks / Open questions / Next steps (always included)

Templates and checklists:

  • references/INTAKE.md (intake questions)
  • references/SOURCE_SUMMARY.md (hypothesis → rule + check conversions)
  • references/CHECKLISTS.md (quality checklist)
  • references/RUBRIC.md (scoring rubric)

Workflow (7 steps)

1) Intake + goal framing

  • Inputs: User prompt; references/INTAKE.md.
  • Actions: Define the retention problem (segment, time horizon, metric) and the decision this work will drive (what will change). Confirm constraints (timebox, capacity, channels, privacy/brand).
  • Outputs: Context snapshot + metric definitions draft.
  • Checks: Goal is a sentence with a number and a date (e.g., “Improve paid D30 retention from 18%→24% by end of Q2”).

2) Data + instrumentation sanity check

  • Inputs: Current tracking/events (or best guess), funnel steps, dashboards (if any).
  • Actions: List what you can/can’t measure today. Define the minimum event schema needed to learn (activation, engagement, churn); a schema sketch follows this step. Identify 1–3 highest-impact instrumentation gaps.
  • Outputs: Instrumentation gap list + “minimum viable measurement” plan.
  • Checks: Every key metric in the goal has a data source or an explicit assumption.
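
A hedged illustration of what a "minimum viable measurement" schema could look like. The event names and properties are placeholders to adapt, not a taxonomy this skill prescribes:

```python
# Illustrative minimum event schema covering activation, engagement, and churn.
# All event and property names are hypothetical placeholders.
MIN_EVENT_SCHEMA = {
    "signup_completed":     {"props": ["user_id", "ts", "acquisition_channel"]},
    "aha_action":           {"props": ["user_id", "ts", "feature"]},
    "core_session":         {"props": ["user_id", "ts", "duration_s"]},
    "subscription_churned": {"props": ["user_id", "ts", "reason_code"]},
}

def missing_events(tracked: set[str]) -> list[str]:
    """Instrumentation gaps: required events not yet tracked."""
    return [name for name in MIN_EVENT_SCHEMA if name not in tracked]

# e.g. missing_events({"signup_completed", "core_session"})
# -> ["aha_action", "subscription_churned"]
```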

3) Diagnose: where retention fails (and why)

  • Inputs: Baseline metrics, cohorts/curves, funnel drop-offs, segments, any churn feedback.
  • Actions: Build a diagnosis across three failure modes:
    • Activation failure (users never reach value)
    • Engagement decay (users get value once, don’t build a habit)
    • Monetization churn (value exists, but price/packaging/friction drives churn)
    Segment results (at least 2 segments) and identify the largest “leak”; a triage sketch follows this step.
  • Outputs: Retention + engagement diagnosis table + primary failure mode(s).
  • Checks: Diagnosis points to one primary lever to test first (onboarding vs habit vs value vs comms).
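
As a rough illustration of this triage, the sketch below maps a few aggregate rates onto a primary failure mode. The inputs and thresholds are assumptions to tune per product and segment, not canonical cutoffs:

```python
# Rough failure-mode triage from aggregate rates.
# Thresholds (0.4, 0.3, 0.5) are illustrative assumptions, not benchmarks.
def primary_failure_mode(activation_rate: float,
                         d7_given_activated: float,
                         price_driven_churn_share: float) -> str:
    if activation_rate < 0.4:
        return "activation failure (users never reach value)"
    if d7_given_activated < 0.3:
        return "engagement decay (value once, but no habit)"
    if price_driven_churn_share > 0.5:
        return "monetization churn (price/packaging/friction)"
    return "no dominant leak; segment further"
```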

4) Define the activation / “aha moment” (data-backed)

  • Inputs: Candidate value behaviors + journey; usage events; retention outcome definition.
  • Actions: Propose 3–5 candidate “aha” behaviors, then define an activation threshold (e.g., “uses X feature twice within 7 days” or “invites 2 teammates + uses 2 key features within 14 days”). Document how you’ll validate (correlation with D30/D60 retention; holdout if possible); a ranking sketch follows this step.
  • Outputs: Activation/aha moment spec + validation plan + tracking requirements.
  • Checks: The activation definition is behavioral and measurable (not a survey response or opinion).
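
One way to run that validation is to rank candidates by retention lift. A minimal sketch, assuming a per-user DataFrame with one boolean column per candidate behavior and a boolean `retained_d30` outcome (all hypothetical names); correlation here is directional evidence only, so a holdout is still needed before claiming causality:

```python
# Rank candidate activation behaviors by how well they separate retained users.
# Assumed: one row per user; boolean column per candidate; boolean "retained_d30".
import pandas as pd

def rank_aha_candidates(users: pd.DataFrame, candidates: list[str]) -> pd.DataFrame:
    rows = []
    for c in candidates:
        did, did_not = users[users[c]], users[~users[c]]
        rows.append({
            "candidate": c,
            "coverage": users[c].mean(),  # share of users who do the behavior
            "retention_if_done": did["retained_d30"].mean(),
            "retention_if_not": did_not["retained_d30"].mean(),
            "lift": did["retained_d30"].mean() - did_not["retained_d30"].mean(),
        })
    return pd.DataFrame(rows).sort_values("lift", ascending=False)
```

A good activation definition usually balances lift against coverage: a behavior with huge lift but 2% coverage is a poor onboarding target.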

5) Generate lever hypotheses (convert insights → rules)

  • Inputs: Diagnosis + activation spec; constraints.
  • Actions: Create a lever map with hypotheses tied to failure modes:
    • Onboarding/time-to-value: get users to aha faster and more reliably
    • Habit/daily return: design cues, routines, rewards; reduce friction to “come back tomorrow”
    • Accruing value + mounting loss (ethical): personalization, progress/history, saved work, identity/data repository
    • Re-engagement: lifecycle messaging, winback, content reminders, in-product nudges
    Convert each hypothesis into a rule + check (see references/SOURCE_SUMMARY.md).
  • Outputs: Lever hypotheses map + candidate interventions.
  • Checks: Every hypothesis ties to (a) a failure mode, and (b) a measurable leading indicator.

6) Design + prioritize experiments (with measurement)

  • Inputs: Hypotheses; measurement plan; capacity.
  • Actions: Turn top hypotheses into experiment cards (1–2 weeks each). Prioritize using a simple score (Impact × Confidence ÷ Effort); a scoring sketch follows this step. Define success metrics and guardrails; note required instrumentation and rollout/rollback.
  • Outputs: Prioritized experiment backlog + experiment cards + metric/guardrail spec.
  • Checks: Top 3 experiments are runnable with current constraints and have unambiguous “win/lose/learn” criteria.
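
A minimal sketch of the Impact × Confidence ÷ Effort score; the 1–5 scales and sample cards are illustrative:

```python
# ICE-style prioritization over experiment cards (1-5 scales, illustrative).
from dataclasses import dataclass

@dataclass
class ExperimentCard:
    name: str
    impact: int      # expected effect on the goal metric
    confidence: int  # strength of evidence behind the hypothesis
    effort: int      # build + analysis cost

    @property
    def score(self) -> float:
        return self.impact * self.confidence / self.effort

backlog = [
    ExperimentCard("Checklist onboarding to aha", impact=4, confidence=4, effort=2),
    ExperimentCard("Week-2 winback email series", impact=3, confidence=3, effort=1),
    ExperimentCard("Progress/streak surface",     impact=3, confidence=2, effort=3),
]
for card in sorted(backlog, key=lambda c: c.score, reverse=True):
    print(f"{card.score:4.1f}  {card.name}")
```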

7) Build the 30/60/90 plan + quality gate

  • Inputs: Draft pack; references/CHECKLISTS.md and references/RUBRIC.md.
  • Actions: Sequence work into a 30/60/90 plan (instrumentation, experiments, analysis cadence). Run the checklist and score the rubric. Always include Risks / Open questions / Next steps.
  • Outputs: Final Retention & Engagement Improvement Pack.
  • Checks: Next 2 weeks of work are unblocked; measurement is in place to learn.

Anti-patterns (common failure modes)

  1. Vanity-metric retention — Reporting DAU/MAU ratios without segmenting by cohort or user type; masks churn behind new-user influx and leads to false confidence.
  2. Notification spam as "re-engagement" — Defaulting to push/email frequency increases instead of addressing the underlying value gap; temporarily lifts open rates but accelerates unsubscribes and erodes trust.
  3. Activation theater — Defining the "aha moment" based on internal opinion ("they saw the dashboard") rather than correlating specific behaviors with downstream retention; produces interventions that move a proxy but not real retention.
  4. One-size-fits-all diagnosis — Running the same retention playbook for all segments instead of diagnosing distinct failure modes (activation failure vs. engagement decay vs. monetization churn) per segment; wastes experiment capacity on wrong levers.
  5. Dark-pattern switching costs — Engineering "mounting loss" that traps users (hidden data lock-in, punitive cancellation flows) rather than building genuinely accruing value; creates regulatory risk and brand damage.

Quality gate (required)

Before delivering the pack, run references/CHECKLISTS.md and score the result against references/RUBRIC.md (Workflow step 7). The pack must always include Risks / Open questions / Next steps.

Examples

Example 1 (B2C subscription, churn reduction):
“Use retention-engagement. Product: meditation app. Segment: paid subscribers. Baseline: D30 paid retention 22%, churn spikes after week 2. Constraint: 4-week sprint, no major redesign. Output: a Retention & Engagement Improvement Pack with an activation/aha definition, a diagnosis, and a prioritized experiment backlog + 30/60/90 plan.”

Example 2 (B2B SaaS, activation + habit):
“New users activate but don’t return weekly. Define our aha moment, identify the biggest engagement decay point, and propose 5 experiments (in-product + email) with success metrics and guardrails.”

Boundary example (upstream problem): “Write a brand new value prop and pick an ICP for our product.” Response: that’s upstream strategy/problem definition; use problem-definition (and optionally PMF measurement) before retention optimization.

Boundary example (onboarding-specific): “Redesign our signup flow and first-time empty states to reduce drop-off before activation.” Response: this is first-time onboarding UX, not full-lifecycle retention; use user-onboarding for signup-to-activation flow design. Come back to retention-engagement once users are activated and you need to improve post-activation retention.

Boundary example (growth loop design): “Design a referral loop so our existing users bring in new users.” Response: referral/viral loop design is acquisition, not retention; use designing-growth-loops. This skill focuses on keeping existing users engaged and retained, not on building loops that acquire new users.

Capabilities

skill · source-liqiongyu · skill-retention-engagement · topic-agent-skills · topic-ai-agents · topic-automation · topic-claude · topic-codex · topic-prompt-engineering · topic-refoundai · topic-skillpack

Quality

0.47 / 1.00

Deterministic score 0.47 from registry signals: indexed on GitHub topic:agent-skills · 49 GitHub stars · SKILL.md body (10,561 chars).

Provenance

Indexed from: github
Enriched: 2026-04-22 00:56:24Z · deterministic:skill-github:v1 · v1
First seen: 2026-04-18
Last seen: 2026-04-22
