
writing-job-descriptions

Write outcome-based job descriptions and role scorecards.

Price: free
Protocol: skill
Verified: no

What it does

Writing Job Descriptions (Outcome-Based)

Scope

Covers

  • Turning a vague “we should hire X” into a clear role outcome + success scorecard
  • Defining competency spikes (major/minor) instead of a generic laundry list
  • Writing a high-signal job description that is honest about context (pace, constraints, trade-offs)
  • Building a lightweight iteration loop to improve the JD after real candidate conversations

When to use

  • “Write a job description / job posting for …”
  • “Create a role scorecard / success profile for a new hire.”
  • “Make this JD more high-signal (it’s generic and attracting everyone).”
  • “Rewrite our JD around outcomes instead of responsibilities.”

When NOT to use

  • You haven’t decided whether to hire vs restructure, contract, or automate (do org planning first)
  • You need a full interview loop, question maps, or structured interview design (use conducting-interviews)
  • You need to evaluate candidates, design work samples, or make a hiring decision (use evaluating-candidates)
  • You need to build a sales team hiring pipeline or GTM recruiting strategy (use building-sales-team)
  • You need to design the new hire’s onboarding experience (use onboarding-new-hires)
  • You need legal/HR review for compliance wording (this skill is not legal advice)

Inputs

Minimum required

  • Role title + level + function (e.g., “Senior Product Designer”, “Staff Backend Engineer”)
  • Team/context (what you build; who the role reports to; key partners)
  • Why hire now + the “progress” this role must create
  • Success definition: 3–6 outcomes for 12 months after start
  • Working model + constraints (remote/hybrid, time zones, travel, on-call, pace)

Missing-info strategy

  • Ask up to 5 questions from references/INTAKE.md.
  • If answers aren’t available, proceed with explicit assumptions and, if appropriate, offer two versions: conservative/inclusive and high-intensity/polarizing.

Outputs (deliverables)

Produce a Job Description Pack in Markdown (in-chat; or as files if requested):

  1. Context snapshot
  2. Role scorecard: success at 12 months (+ optional 30/60/90)
  3. Competency spike map: majors/minors + “evidence of strength”
  4. Job description draft (public): outcome-based, high-signal
  5. Filters: who will thrive / who should not apply (honest, non-discriminatory)
  6. Iteration plan + version log: what to test and how to update after candidate conversations
  7. Risks / Open questions / Next steps (always included)

Templates: references/TEMPLATES.md
Expanded guidance: references/WORKFLOW.md

Workflow (7 steps)

1) Intake + constraints (don’t start writing yet)

  • Inputs: user request; references/INTAKE.md.
  • Actions: Clarify role, level, “why now”, constraints (location, pace, comp bands if available), and what “good” looks like. Identify what you can/can’t say publicly.
  • Outputs: Context snapshot + assumptions/unknowns list.
  • Checks: You can state in one sentence: “We are hiring X to achieve Y by Z.”
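As an illustration of the step's output (the role, goal, and unknowns below are hypothetical), a context snapshot might look like:

```markdown
## Context snapshot
- Role: Senior Product Designer (IC, reports to Head of Product)
- Why now: design debt is blocking the Q3 self-serve launch
- One-liner: "We are hiring a Senior Product Designer to ship a
  self-serve onboarding flow by end of Q3."
- Unknowns: comp band; hybrid policy (assumed remote-friendly)
```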

2) Define “success 12 months later” (scorecard)

  • Inputs: business goals, current pains, manager expectations.
  • Actions: Write 3–6 outcomes that would make you pop champagne 12 months in. Add measurable indicators where possible.
  • Outputs: Role scorecard (12-month success).
  • Checks: Outcomes describe business impact and shipped/owned artifacts, not just activities.
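For example, a 12-month scorecard (all outcomes and numbers below are invented for illustration) could read:

```markdown
## Role scorecard — success at 12 months
1. Activation rate for new workspaces up from 22% to 35%.
2. A shipped, documented design system adopted by 3 product teams.
3. Usability-driven onboarding support tickets down 50%.
```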

3) Decide the competency spikes (major/minor)

  • Inputs: role scorecard.
  • Actions: Choose 1 major spike and 1–2 minor spikes. Define what “strong” looks like and how to recognize it (work samples, narratives, portfolio, shipped systems).
  • Outputs: Competency spike map.
  • Checks: Spikes explain why a generalist won’t work; each spike ties to at least one 12-month outcome.
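A competency spike map from this step might be sketched as follows (the role and evidence are illustrative, not prescriptive):

```markdown
## Competency spikes
- Major: systems-level product design — evidence: a shipped design
  system or multi-surface flow they owned end-to-end.
- Minor: prototyping in code — evidence: interactive prototypes used
  in real user tests.
- Minor: research fluency — evidence: studies that changed a roadmap.
```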

4) Translate outcomes into responsibilities (progress over laundry lists)

  • Inputs: scorecard + spikes.
  • Actions: Convert outcomes into 6–10 responsibilities phrased as progress (“Own X end-to-end”, “Reduce Y from A→B”) rather than “attend meetings”. Remove arbitrary requirements.
  • Outputs: Responsibilities section draft.
  • Checks: Every responsibility maps to at least one outcome; anything that doesn’t map is cut or re-justified.
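The "progress over laundry lists" rewrite can be shown with a before/after pair (hypothetical role, hypothetical numbers):

```markdown
<!-- Before: activity -->
- Attend design reviews and manage stakeholders.

<!-- After: progress, tied to a scorecard outcome -->
- Own the self-serve onboarding flow end-to-end and lift activation
  from 22% to 35%.
```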

5) Add the “truth” section (high-signal + filtering)

  • Inputs: team reality: pace, constraints, trade-offs.
  • Actions: Write a candid “How we work / What’s hard here” section and a “Who will thrive / Who won’t” filter. Use polarizing clarity without illegal/discriminatory language.
  • Outputs: Context truth + filters.
  • Checks: A candidate can self-select in/out; claims are honest and specific (not hype).
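An honest "what's hard here" section (details invented for illustration) might look like:

```markdown
## What's hard here
- We ship weekly; specs are often one page, not ten.
- You'll be the only designer for your first two quarters.
- Legacy UI debt is real — some wins are unglamorous cleanup.

## Who will thrive / who shouldn't apply
- Thrive: designers who prototype to decide, not to decorate.
- Shouldn't apply: anyone who needs a finished brief before starting.
```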

6) Draft the public job description (clean, inclusive, skimmable)

  • Inputs: templates; company/role basics.
  • Actions: Assemble a complete JD using references/TEMPLATES.md. Keep requirements minimal; separate must-haves vs nice-to-haves; avoid jargon and bias.
  • Outputs: JD draft (public).
  • Checks: In 90 seconds, a qualified candidate can answer: “What will I accomplish? Why here? What do I need to be great at?”
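Assembled from the earlier steps, the public JD skeleton (section names follow this skill's outputs; the role and bracketed items are placeholders) is roughly:

```markdown
# Senior Product Designer — [Company]
## What you'll accomplish (first 12 months)
[3–6 outcomes from the scorecard]
## How we work / what's hard here
[candid context from step 5]
## What you need to be great at
- Must-have: [major spike]
- Nice-to-have: [minor spikes]
## Who will thrive / who shouldn't apply
[filters from step 5]
```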

7) Iterate + quality gate + finalize pack

  • Inputs: JD draft; any candidate feedback; hiring manager review.
  • Actions: Propose what to test (which section is failing: attract vs filter). Create an iteration log. Run references/CHECKLISTS.md and score with references/RUBRIC.md. Add Risks/Open questions/Next steps.
  • Outputs: Final Job Description Pack.
  • Checks: The pack is internally aligned and externally high-signal; unknowns are explicit; iteration triggers are defined.

Quality gate (required)

Examples

Example 1 (Startup, high-pace): “Write a job description for a founding Product Designer for a seed-stage B2B AI tool. We need someone who can ship end-to-end in ambiguity. Include success at 12 months and a candid ‘what’s hard here’ section.”
Expected: clear 12-month outcomes, a design-major spike, honest pace/constraints, and filters that self-select the wrong candidates out.

Example 2 (Scale-up, specialized spike): “Create a role scorecard + job posting for a Staff Backend Engineer owning reliability for a high-traffic API. Emphasize systems thinking and incident ownership.”
Expected: outcome-based responsibilities tied to reliability outcomes, plus a clear major spike (operational excellence) and measurable success criteria.

Boundary example (no outcomes): “Write a JD for a ‘rockstar generalist’ to ‘do whatever is needed’ (no outcomes).” Response: refuse to invent a laundry list; run intake, define 12-month outcomes and spikes first, then draft.

Boundary example (redirect to interviews): “I have the JD. Now help me design the interview loop and behavioral questions.” Response: redirect to conducting-interviews — this skill produces the job description and role scorecard, not the interview process.

Boundary example (redirect to evaluation): “We posted the JD and have 5 applicants. Help me decide who to interview and how to score them.” Response: redirect to evaluating-candidates — this skill defines the role, not the evaluation process.

Anti-patterns (common failure modes)

  1. Laundry-list responsibilities — Writing 15+ bullet-point responsibilities that describe activities (“attend meetings”, “manage stakeholders”) instead of outcomes. Every responsibility should map to a 12-month outcome.
  2. Unicorn requirements — Requiring 10+ years of experience AND a specific degree AND 5 tools AND 3 industries. This filters out strong candidates and signals org confusion about what matters. Identify 1 major spike and 1–2 minor spikes.
  3. Copy-paste from competitors — Reusing another company’s JD with your logo. This attracts generic applicants and fails to differentiate your opportunity. The “why here / why now” must be specific to your context.
  4. Hiding the hard parts — Omitting pace, constraints, or dysfunction to maximize applicant volume. This wastes everyone’s time. Candid “what’s hard here” sections improve conversion of the right candidates.
  5. One-and-done publishing — Treating the JD as final after one draft. JDs should iterate based on candidate conversations and pipeline signal (who’s applying, who’s dropping off, and why).

Capabilities

skill · source: liqiongyu/skill-writing-job-descriptions · topics: agent-skills, ai-agents, automation, claude, codex, prompt-engineering, refoundai, skillpack

Quality

0.47 / 1.00

Deterministic score 0.47 from registry signals: indexed on GitHub topic:agent-skills · 49 GitHub stars · SKILL.md body (8,698 chars)

Provenance

Indexed from: github
Enriched: 2026-04-22 00:56:26Z · deterministic:skill-github:v1 · v1
First seen: 2026-04-18
Last seen: 2026-04-22
