Skill quality 0.47

planning-under-uncertainty

Plan under uncertainty: uncertainty map, hypotheses + experiments, buffers + triggers, cadence.

Price: free
Protocol: skill
Verified: no

What it does

Planning Under Uncertainty

Scope

Covers

  • Turning ambiguity into an executable plan via hypotheses, experiments, and decision triggers
  • Diagnosing “what’s actually happening” before acting (especially in crisis / wartime situations)
  • Using data as a compass (directional checks) rather than a GPS (false precision)
  • Building buffers and contingencies so the plan survives chaos
  • Setting a cadence for learning, decision-making, and stakeholder communication

When to use

  • “We need a plan, but the requirements are unclear and the outcome is uncertain.”
  • “Create a hypothesis-driven plan (experiments + decision rules) for this initiative.”
  • “We’re in a crisis (drop in retention/revenue/reliability) and need a wartime diagnosis + action plan.”
  • “Help us build contingencies, buffers, and pivot triggers before we commit.”

When NOT to use

  • You don’t agree on the underlying problem/opportunity (use problem-definition).
  • You need to choose what to do among many options (use prioritizing-roadmap).
  • You already have a clear plan and only need dates/milestones and stakeholder cadence (use managing-timelines).
  • You need a decision-ready PRD/spec for build execution (use writing-prds / writing-specs-designs).
  • You’re weighing a specific binary or multi-option decision with known trade-offs (use evaluating-trade-offs).
  • You need to map systemic interdependencies and feedback loops, not plan under ambiguity (use systems-thinking).
  • You need to cut scope to hit a fixed timebox, not explore unknowns (use scoping-cutting).

Inputs

Minimum required

  • The initiative context and desired outcome (“what are we trying to change?”)
  • Time horizon and urgency (wartime vs peacetime)
  • Constraints/guardrails (quality, compliance, brand, budget, “must not worsen” metrics)
  • Stakeholders and decision rights (who decides pivot/stop/scale?)
  • Top unknowns/assumptions (what would change the plan?)
  • Current signals (what data exists; what feels true but unproven?)

Missing-info strategy

  • Ask up to 5 questions from references/INTAKE.md.
  • If answers aren’t available, proceed with explicit assumptions and list Open questions that could change the plan.

Outputs (deliverables)

Produce an Uncertainty Planning Pack in Markdown (in-chat; or as files if the user requests), containing:

  1. Decision frame (objective, “why now”, success + guardrails, time horizon, decision owner)
  2. Uncertainty map (assumptions/unknowns, confidence, impact, validation plan)
  3. Hypotheses + experiment portfolio (what we’ll learn, how, and what decision it enables)
  4. Plan v0 with buffers + contingencies (phases/options, triggers, fallbacks, pivot criteria)
  5. Cadence + comms (learning review ritual, update template, decision log)
  6. Risks / Open questions / Next steps (always included)

Templates: references/TEMPLATES.md
Expanded guidance: references/WORKFLOW.md

Workflow (7 steps)

1) Intake + mode setting (wartime vs peacetime)

  • Inputs: User request; references/INTAKE.md.
  • Actions: Clarify urgency, stakes, and what decision is needed. Decide whether you’re in diagnosis-first wartime mode or exploration peacetime mode.
  • Outputs: Short decision frame draft + mode declaration.
  • Checks: You can state: “We’re optimizing for <fast stabilization / learning / growth>. The decision we need by <date> is <pivot/stop/scale/commit>.”

2) Diagnose reality (humility first)

  • Inputs: Current signals, anecdotes, dashboards, incident reports, qualitative inputs.
  • Actions: Separate symptoms from hypotheses. Write 3–7 plausible explanations, and identify what evidence would falsify each. Avoid prematurely picking a favorite story.
  • Outputs: “What we know / don’t know” + initial hypothesis set.
  • Checks: At least one hypothesis contradicts the team’s initial intuition (to reduce confirmation bias).

3) Build the uncertainty map (assumptions → validation plan)

  • Inputs: Hypotheses; constraints; stakeholders; time horizon.
  • Actions: Create an uncertainty map of assumptions/unknowns with confidence and impact; prioritize the top items that would change the plan.
  • Outputs: Uncertainty map table + prioritized “top 5 unknowns”.
  • Checks: Every top unknown has a clear validation method and an owner.
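
The prioritization in this step can be sketched in Python. This is a minimal illustration, not part of the skill's templates: the field names and the `impact × (1 − confidence)` scoring are assumptions chosen to show the idea that low-confidence, high-impact assumptions rise to the top.

```python
from dataclasses import dataclass

@dataclass
class Unknown:
    assumption: str    # what we believe but have not validated
    confidence: float  # 0.0 (pure guess) to 1.0 (near-certain)
    impact: float      # 0.0 (cosmetic) to 1.0 (plan-changing)
    validation: str    # how we will test it
    owner: str         # who runs the validation

def top_unknowns(items, n=5):
    """Rank by how much a wrong assumption would hurt: high impact
    combined with low confidence sorts first."""
    return sorted(items, key=lambda u: u.impact * (1 - u.confidence),
                  reverse=True)[:n]

unknowns = [
    Unknown("Users churn because onboarding is confusing",
            0.3, 0.9, "5 moderated usability sessions", "PM"),
    Unknown("The pricing page is fine as-is",
            0.8, 0.4, "funnel analytics review", "Analyst"),
]
print([u.assumption for u in top_unknowns(unknowns)])
```

Any comparable weighting works; the point is that the map is sorted by plan-changing potential, not by how interesting an unknown feels.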

4) Define hypotheses + decision rules (learning over “wins”)

  • Inputs: Top unknowns; success/guardrails; risk tolerance.
  • Actions: Turn unknowns into testable hypotheses. For each hypothesis, define: expected learning, success signal(s), guardrails, and the decision the result enables (stop/pivot/scale).
  • Outputs: Hypothesis statements + decision rules.
  • Checks: Each hypothesis ties to a decision; “winning” is defined as learning, not just positive results.
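
A decision rule from this step can be sketched as a small function. The thresholds below are placeholders the team must agree on before the experiment runs; the function names and values are illustrative assumptions, not part of the skill.

```python
def decide(success_metric, guardrail_ok,
           success_threshold=0.05, pivot_floor=0.0):
    """Map an experiment result to the decision it enables."""
    if not guardrail_ok:
        return "stop"    # guardrail breached: halt regardless of upside
    if success_metric >= success_threshold:
        return "scale"   # signal cleared the pre-agreed bar
    if success_metric > pivot_floor:
        return "pivot"   # weak positive: change approach, keep the goal
    return "stop"        # no signal: stop and record the learning

print(decide(0.07, guardrail_ok=True))   # -> scale
print(decide(0.01, guardrail_ok=True))   # -> pivot
```

Writing the rule down before running the test is what makes "winning" mean learning: every outcome, including "stop", maps to a pre-committed action.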

5) Design a reproducible testing process (many at-bats)

  • Inputs: Hypothesis set; available tools; team capacity.
  • Actions: Create an experiment portfolio that balances speed vs confidence (smoke tests, prototypes, A/Bs, customer calls, operational drills). Set a cadence to run and review tests continuously.
  • Outputs: Experiment portfolio table + review cadence.
  • Checks: At least 1 fast test can run within the next 1–2 weeks (or faster in wartime).
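
One way to sequence the portfolio is by learning velocity, so a fast test always starts first. The experiment names, durations, and confidence figures below are hypothetical, and "confidence gained per day" is just one possible heuristic:

```python
# Hypothetical records: (name, days_to_result, confidence_gained 0..1)
experiments = [
    ("full A/B test",      21, 0.9),
    ("smoke-test landing",  3, 0.4),
    ("5 customer calls",    5, 0.5),
]

def schedule(portfolio):
    """Order by confidence gained per day, so at least one fast test
    runs within the first 1-2 weeks."""
    return sorted(portfolio, key=lambda e: e[2] / e[1], reverse=True)

for name, days, conf in schedule(experiments):
    print(f"{name}: {conf / days:.2f} confidence/day")
```

Here the 3-day smoke test outranks the 21-day A/B test even though the A/B test yields more confidence overall, which is the speed-vs-confidence balance this step asks for.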

6) Turn learning into a plan with buffers, contingencies, and triggers

  • Inputs: Experiment portfolio; constraints; dependencies; timeline needs.
  • Actions: Draft Plan v0 with phases/options; add buffers; define contingencies and explicit triggers for pivot/rollback/escalation. Use data as a compass: focus on directional signals and early warnings, not false certainty.
  • Outputs: Plan v0 + buffer/contingency section + trigger list.
  • Checks: There is a clear “if X happens, we will do Y” for the top risks/unknowns.
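
The "if X happens, we will do Y" check can be made concrete as a trigger table. The metric names, thresholds, and actions below are invented for illustration; the point is that every trigger is explicit and machine-checkable, not a vibe.

```python
# Hypothetical trigger table: each row is "if X happens, we will do Y".
TRIGGERS = [
    # (metric, condition, threshold, action)
    ("error_rate",   lambda v, t: v > t, 0.02, "rollback release"),
    ("weekly_churn", lambda v, t: v > t, 0.05, "escalate to decision owner"),
]

def fired_actions(observed):
    """Return the actions whose trigger condition the observed metrics meet."""
    return [action for metric, cond, threshold, action in TRIGGERS
            if metric in observed and cond(observed[metric], threshold)]

print(fired_actions({"error_rate": 0.03, "weekly_churn": 0.01}))
# -> ['rollback release']
```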

7) Quality gate + finalize

  • Inputs: Full draft pack.
  • Actions: Run references/CHECKLISTS.md and score with references/RUBRIC.md. Ensure Risks / Open questions / Next steps exist with owners and time bounds.
  • Outputs: Final Uncertainty Planning Pack.
  • Checks: A stakeholder can approve the plan async and the team can execute without re-litigating the ambiguity.

Anti-patterns (common failure modes)

  1. Analysis paralysis. Mapping every possible unknown without prioritizing. The team spends weeks building an exhaustive uncertainty map but never runs an experiment to resolve the top unknowns.
  2. Premature commitment. Skipping the hypothesis phase and committing to a delivery timeline before the core assumptions are validated. The plan looks precise but is built on unproven foundations.
  3. Experiment theater. Defining experiments that cannot actually falsify the hypothesis (e.g., “talk to 2 users and see if they like it”). The team goes through the motions but learns nothing actionable.
  4. Buffer hoarding. Adding excessive buffers to every phase without tying them to specific risks. Buffers become hidden slack instead of targeted contingency for named unknowns.
  5. Compass-to-GPS drift. Starting with directional signals (good) but gradually treating early data as definitive proof. The team locks into a path before the data warrants it.

Quality gate (required)

Before delivering, run references/CHECKLISTS.md and score the pack with references/RUBRIC.md; Risks / Open questions / Next steps must be present with owners and time bounds.

Examples

Example 1 (ambiguous initiative): “We think onboarding is hurting conversion, but we’re not sure why. Create an uncertainty plan with hypotheses, experiments, and pivot triggers.” Expected: an uncertainty map + experiment portfolio (qual + quant) + a Plan v0 that commits to learning milestones, not premature delivery dates.

Example 2 (wartime): “Retention dropped 15% this week after a release. We need a wartime plan: diagnose root causes, run rapid tests, and decide whether to roll back or patch.” Expected: diagnosis-first workflow with falsifiable hypotheses, tight guardrails, and explicit rollback/escalation triggers.

Boundary example (timeline management): “We know what we’re building; just need a milestone plan with dates and stakeholder cadence.” Response: if the scope and approach are clear, use managing-timelines directly; this skill is for when the what or how is still uncertain.

Boundary example (trade-off decision): “Should we build vs buy this component? Help us evaluate the trade-offs.” Response: use evaluating-trade-offs for a structured decision between known options; this skill is for when you don’t yet know what the options are.

Boundary example (PRD): “Write a full PRD for Feature X.” Response: clarify uncertainty first (this skill), then use writing-prds once the hypotheses, constraints, and decision gates are clear.

Capabilities

skill · source-liqiongyu · skill-planning-under-uncertainty · topic-agent-skills · topic-ai-agents · topic-automation · topic-claude · topic-codex · topic-prompt-engineering · topic-refoundai · topic-skillpack


Quality

0.47 / 1.00

Deterministic score 0.47 from registry signals: indexed on GitHub topic agent-skills · 49 GitHub stars · SKILL.md body (9,058 chars)

Provenance

Indexed from: github
Enriched: 2026-04-22 00:56:23Z · deterministic:skill-github:v1 · v1
First seen: 2026-04-18
Last seen: 2026-04-22

Agent access