Writing North Star Metrics
writing-north-star-metrics
What it does
Define or refresh a product North Star metric + driver tree.
Scope
Covers
- Defining or refreshing a product/company North Star and North Star Metric
- Translating a qualitative value model into measurable, decision-useful metrics
- Creating a simple driver tree: leading input/proxy metrics + guardrails
- Producing a “North Star Metric Pack” teams can use as a decision tie-breaker
When to use
- “We need one metric that defines success.”
- “Teams are optimizing different KPIs.”
- “We’re setting quarterly OKRs and need leading indicators.”
- “We’re launching a new strategy and need a metric that aligns decisions.”
When NOT to use
- You only need OKRs for an already-agreed North Star -> use setting-okrs-goals
- You need a full analytics taxonomy/event tracking plan from scratch
- Stakeholders haven’t aligned on the customer value model / mission at all -> use defining-product-vision first
- You’re choosing a single experiment metric for a one-off test
- You need to diagnose retention or engagement patterns, not define the top-level metric -> use retention-engagement
- You need to assess whether you have product-market fit -> use measuring-product-market-fit
Inputs
Minimum required
- Product/company + primary customer segment
- The “value moment” (what the customer gets when things go well)
- Business model + strategic goal (growth, activation, retention, margin, trust, etc.)
- Time horizon (next quarter vs next year)
- Measurement constraints (what you can measure today; data latency; known gaps)
Missing-info strategy
- Ask up to 5 questions from references/INTAKE.md.
- If still missing, proceed with clearly labeled assumptions and provide 2–3 options.
Outputs (deliverables)
Produce a North Star Metric Pack in Markdown (in-chat; or as files if the user requests):
- North Star Narrative (value model, tie-breaker, scope)
- Candidate metrics (3–5) + selection rationale (evaluation table)
- Chosen North Star Metric spec (definition, formula, window, segmentation, owner, data source)
- Driver tree (leading input/proxy metrics + guardrails)
- Validation & rollout plan (instrumentation checks, dashboard cadence, decision rules)
- Risks / Open questions / Next steps (always included)
Templates: references/TEMPLATES.md
Workflow (8 steps)
1) Intake + constraints
- Inputs: User context; use references/INTAKE.md.
- Actions: Confirm product, customer, value moment, horizon, constraints, stakeholders.
- Outputs: 5–10 bullet “Context snapshot”.
- Checks: You can explain the customer value in one sentence.
2) Define the qualitative North Star (before numbers)
- Inputs: Context snapshot.
- Actions: Write a North Star statement and value model from the customer’s perspective.
- Outputs: Draft North Star Narrative (template in references/TEMPLATES.md).
- Checks: Narrative can act as a decision tie-breaker (“if we do X, does it move the North Star?”).
3) Generate 3–5 candidate North Star metrics (customer POV)
- Inputs: North Star Narrative + value moment.
- Actions: Propose metrics that measure delivered customer value (not internal activity). Include at least one “friction/absence of pain” option when relevant.
- Outputs: Candidate list with definitions.
- Checks: Each candidate is measurable, understandable, and not trivially gameable.
4) Stress-test and pick the North Star metric
- Inputs: Candidate metrics.
- Actions: Evaluate with references/CHECKLISTS.md and references/RUBRIC.md. Explicitly test:
- Leading vs lagging (avoid “retention as the only goal”; pair lagging outcomes with controllable inputs)
- Controllability within a quarter (proxy/input metrics you can move)
- Ecosystem impact (what breaks if you optimize this?)
- Outputs: Selection table + chosen metric + why others lost.
- Checks: A cross-functional leader could agree/disagree based on definitions and evidence.
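To make step 4’s selection table concrete, here is a minimal scoring sketch. The dimension names, weights, and ratings are illustrative assumptions for a collaboration-tool example, not the contents of references/RUBRIC.md; substitute the real rubric when running this step.

```python
# Hypothetical rubric scoring for step 4. Dimensions, weights, and ratings
# are illustrative assumptions -- replace them with references/RUBRIC.md.
CANDIDATES = {
    "total_messages_sent": {
        "leading": 3, "controllable": 5, "value_aligned": 2, "hard_to_game": 1,
    },
    "weekly_active_teams": {
        "leading": 4, "controllable": 4, "value_aligned": 4, "hard_to_game": 3,
    },
    "teams_completing_value_moment_weekly": {
        "leading": 4, "controllable": 4, "value_aligned": 5, "hard_to_game": 4,
    },
}
WEIGHTS = {"leading": 0.30, "controllable": 0.20, "value_aligned": 0.35, "hard_to_game": 0.15}

def score(ratings):
    """Weighted average of 1-5 rubric ratings."""
    return sum(WEIGHTS[dim] * r for dim, r in ratings.items())

# Print candidates best-first as a starting point for the selection table.
for name, ratings in sorted(CANDIDATES.items(), key=lambda kv: -score(kv[1])):
    print(f"{score(ratings):.2f}  {name}")
```

The printed ranking is a starting point for the “why others lost” rationale, not a substitute for it.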
5) Write the metric spec (make it unambiguous)
- Inputs: Chosen metric.
- Actions: Define formula, unit, window, inclusion rules, segmentation, owner, source, latency, and example calculation.
- Outputs: North Star Metric Spec.
- Checks: Two analysts would compute the same number.
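A spec is least ambiguous when it can be executed. The sketch below is a hypothetical spec-as-code for the collaboration-tool example; the event schema (team_id, event, timestamp) and the “value_moment” inclusion rule are assumptions for illustration, not a required design.

```python
# Hypothetical executable metric spec: weekly active teams completing the
# value moment. Event schema and inclusion rule are illustrative assumptions.
from datetime import date, datetime, timedelta

def weekly_active_teams(events: list, week_start: date) -> int:
    """Distinct teams with >= 1 qualifying value-moment event in the
    7-day window starting at week_start (UTC)."""
    week_end = week_start + timedelta(days=7)
    qualifying = {
        e["team_id"]
        for e in events
        if e["event"] == "value_moment"                      # inclusion rule
        and week_start <= e["timestamp"].date() < week_end   # window
    }
    return len(qualifying)

# Example calculation -- two teams qualify; the login-only team does not:
events = [
    {"team_id": "t1", "event": "value_moment", "timestamp": datetime(2024, 3, 4, 10)},
    {"team_id": "t1", "event": "value_moment", "timestamp": datetime(2024, 3, 5, 11)},
    {"team_id": "t2", "event": "value_moment", "timestamp": datetime(2024, 3, 6, 9)},
    {"team_id": "t3", "event": "login", "timestamp": datetime(2024, 3, 6, 9)},
]
print(weekly_active_teams(events, date(2024, 3, 4)))  # -> 2
```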
6) Build the driver tree (inputs + guardrails)
- Inputs: Metric spec + product levers.
- Actions: Decompose into 3–7 drivers; identify leading input/proxy metrics you can move in weeks/months; add guardrails to prevent gaming/harm.
- Outputs: Driver tree table + guardrails list.
- Checks: Every driver has at least 1 realistic lever (initiative/experiment) and 1 measurement.
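Keeping the driver tree as plain data keeps each driver, its measurement, and its lever attached to one another. A minimal sketch, assuming the same collaboration-tool example (all names illustrative):

```python
# Illustrative driver tree; driver names, levers, and guardrails are
# assumptions for a collaboration-tool example, not a prescription.
DRIVER_TREE = {
    "north_star": "teams completing the value moment weekly",
    "drivers": [
        {
            "name": "activation_rate",
            "metric": "% of new teams reaching first value moment within 7 days",
            "lever": "guided onboarding experiment",
        },
        {
            "name": "collaboration_depth",
            "metric": "median weekly multi-member sessions per team",
            "lever": "shared-workspace templates",
        },
    ],
    "guardrails": [
        "support tickets per active team do not rise",
        "no growth in single-member 'teams' created to inflate the count",
    ],
}

# This step's check, as code: every driver needs a lever and a measurement.
for d in DRIVER_TREE["drivers"]:
    assert d["lever"] and d["metric"], f"driver {d['name']} lacks a lever or metric"
```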
7) Define validation + rollout
- Inputs: Driver tree + constraints.
- Actions: Plan validation (sanity checks, correlation to outcomes) and operationalization (dashboards, cadence, owners, decision rules).
- Outputs: Validation & Rollout Plan.
- Checks: Plan includes “who does what, when” and works with current instrumentation.
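One inexpensive sanity check from this step: confirm the leading input has historically moved with the North Star. A minimal sketch with made-up weekly series (statistics.correlation requires Python 3.10+):

```python
# Hedged validation sketch: correlate a leading input with the North Star.
# Both series are fabricated for illustration.
from statistics import correlation  # Python 3.10+

activation_rate = [0.31, 0.35, 0.33, 0.40, 0.42, 0.45]  # weekly leading input
north_star      = [118, 124, 121, 137, 141, 149]         # weekly North Star

r = correlation(activation_rate, north_star)
print(f"Pearson r = {r:.2f}")  # weak or negative r argues against this driver
```

Correlation is only a screen; a driver that fails it should be revisited in step 6 before the dashboard ships.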
8) Quality gate + finalize pack
- Inputs: All drafts.
- Actions: Run references/CHECKLISTS.md and score with references/RUBRIC.md. Add Risks/Open questions/Next steps.
- Outputs: Final North Star Metric Pack.
- Checks: Pack is shareable as-is; key decisions and caveats are explicit.
Quality gate (required)
- Use references/CHECKLISTS.md and references/RUBRIC.md.
- Always include: Risks, Open questions, Next steps.
Examples
Example 1 (B2B SaaS): “Define a North Star metric for a team collaboration tool.”
Expected: a pack that chooses a customer-value metric (e.g., weekly active teams completing the core value moment), plus a driver tree (activation → collaboration depth) and guardrails.
Example 2 (Marketplace): “Refresh North Star metric for a local services marketplace.”
Expected: a pack that measures delivered value (e.g., successful jobs completed with quality), plus input metrics for supply/demand balance and quality guardrails.
Boundary example (redirect): “We already have our North Star metric. Now set quarterly OKRs and key results for the team.”
Response: redirect to setting-okrs-goals -- this request needs OKR design from an existing North Star, not metric definition work.
Boundary example (reframe): “Our North Star should be retention.”
Response: keep retention as an outcome/validation metric, and propose controllable input/proxy metrics (time-to-first-value, weekly value moments, repeat value delivery) as the operating focus; a sketch of time-to-first-value follows below.
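As a concrete instance of one proposed input, time-to-first-value is simple to compute once signup and first-value timestamps exist; the field names and timestamps below are assumptions for illustration.

```python
# Hypothetical time-to-first-value computation; inputs are illustrative.
from datetime import datetime

def time_to_first_value(signup: datetime, first_value_moment: datetime) -> float:
    """Hours from signup to the first value moment; lower is better."""
    return (first_value_moment - signup).total_seconds() / 3600

print(time_to_first_value(datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 16, 30)))  # -> 7.5
```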
Anti-patterns
Avoid these common failure modes when defining North Star metrics:
- Revenue-as-North-Star -- Choosing revenue or profit as the North Star metric. Revenue is a trailing indicator of value delivery; it cannot be a decision tie-breaker for product teams. Use a customer-value metric that leads to revenue.
- Vanity volume metric -- Picking a metric that only goes up (total users, total messages sent) without a quality or frequency dimension. Always pair volume with a quality or engagement signal.
- Uncontrollable lagging metric -- Selecting a metric no team can move within a quarter. The North Star must decompose into leading input metrics with realistic levers.
- Driver tree without levers -- Building a decomposition tree where drivers are described as metrics but no team has a concrete initiative or experiment to move them. Every driver needs at least one actionable lever.
- Metric without a spec -- Agreeing on a metric name (“weekly active teams”) without defining formula, window, inclusion rules, and segmentation. Two analysts should compute the same number.