{"id":"de713f1b-8a03-4548-bc40-8c11d4c439a7","shortId":"5QyLaB","kind":"skill","title":"llm-prompt-optimizer","tagline":"Use when improving prompts for any LLM. Applies proven prompt engineering techniques to boost output quality, reduce hallucinations, and cut token usage.","description":"# LLM Prompt Optimizer\n\n## Overview\n\nThis skill transforms weak, vague, or inconsistent prompts into precision-engineered instructions that reliably produce high-quality outputs from any LLM (Claude, Gemini, GPT-4, Llama, etc.). It applies systematic prompt engineering frameworks — from zero-shot to few-shot, chain-of-thought, and structured output patterns.\n\n## When to Use This Skill\n\n- Use when a prompt returns inconsistent, vague, or hallucinated results\n- Use when you need reliable structured/JSON output from an LLM\n- Use when designing system prompts for AI agents or chatbots\n- Use when you want to reduce token usage without sacrificing quality\n- Use when implementing chain-of-thought reasoning for complex tasks\n- Use when prompts work on one model but fail on another\n\n## Step-by-Step Guide\n\n### 1. Diagnose the Weak Prompt\n\nBefore optimizing, identify which problem pattern applies:\n\n| Problem | Symptom | Fix |\n|---------|---------|-----|\n| Too vague | Generic, unhelpful answers | Add role + context + constraints |\n| No structure | Unformatted, hard-to-parse output | Specify output format explicitly |\n| Hallucination | Confident wrong answers | Add \"say 'I don't know' if unsure\" |\n| Inconsistent | Different answers each run | Add few-shot examples |\n| Too long | Verbose, padded responses | Add length constraints |\n\n### 2. 
Apply the RSCIT Framework\n\nEvery optimized prompt should have:\n\n- **R** — **Role**: Who is the AI in this interaction?\n- **S** — **Situation**: What context does it need?\n- **C** — **Constraints**: What are the rules and limits?\n- **I** — **Instructions**: What exactly should it do?\n- **T** — **Template**: What should the output look like?\n\n**Before (weak prompt):**\n```\nExplain machine learning.\n```\n\n**After (optimized prompt):**\n```\nYou are a senior ML engineer explaining concepts to a junior developer.\n\nContext: The developer has 1 year of Python experience but no ML background.\n\nTask: Explain supervised machine learning in simple terms.\n\nConstraints:\n- Use an analogy from everyday life\n- Maximum 200 words\n- No mathematical formulas\n- End with one actionable next step\n\nFormat: Plain prose, no bullet points.\n```\n\n### 3. Chain-of-Thought (CoT) Pattern\n\nFor reasoning tasks, instruct the model to think step-by-step:\n\n```\nSolve this problem step by step, showing your work at each stage.\nOnly provide the final answer after completing all reasoning steps.\n\nProblem: [your problem here]\n\nThinking process:\nStep 1: [identify what's given]\nStep 2: [identify what's needed]\nStep 3: [apply logic or formula]\nStep 4: [verify the answer]\n\nFinal Answer:\n```\n\n### 4. Few-Shot Examples Pattern\n\nProvide 2-3 examples to establish the pattern:\n\n```\nClassify the sentiment of customer reviews as POSITIVE, NEGATIVE, or NEUTRAL.\n\nExamples:\nReview: \"This product exceeded my expectations!\" -> POSITIVE\nReview: \"It arrived broken and support was useless.\" -> NEGATIVE  \nReview: \"Product works as described, nothing special.\" -> NEUTRAL\n\nNow classify:\nReview: \"[your review here]\" ->\n```\n\n### 5. 
Structured JSON Output Pattern\n\n```\nExtract the following information from the text below and return it as valid JSON only.\nDo not include any explanation or markdown — just the raw JSON object.\n\nSchema:\n{\n  \"name\": string,\n  \"email\": string | null,\n  \"company\": string | null,\n  \"role\": string | null\n}\n\nText: [input text here]\n```\n\n### 6. Reduce Hallucination Pattern\n\n```\nAnswer the following question based ONLY on the provided context.\nIf the answer is not contained in the context, respond with exactly: \"I don't have enough information to answer this.\"\nDo not make up or infer information not present in the context.\n\nContext:\n[your context here]\n\nQuestion: [your question here]\n```\n\n### 7. Prompt Compression Techniques\n\nReduce token count without losing effectiveness:\n\n```\n# Verbose (expensive)\n\"Please carefully analyze the following code and provide a detailed explanation of \nwhat it does, how it works, and any potential issues you might find.\"\n\n# Compressed (efficient, same quality)\n\"Analyze this code: explain what it does, how it works, and flag any issues.\"\n```\n\n## Best Practices\n\n- ✅ **Do:** Always specify the output format (JSON, markdown, plain text, bullet list)\n- ✅ **Do:** Use delimiters (```, ---) to separate instructions from content\n- ✅ **Do:** Test prompts with edge cases (empty input, unusual data)\n- ✅ **Do:** Version your system prompts in source control\n- ✅ **Do:** Add \"think step by step\" for math, logic, or multi-step tasks\n- ❌ **Don't:** Use negative-only instructions (\"don't be verbose\") — add positive alternatives\n- ❌ **Don't:** Assume the model knows your codebase context — always include it\n- ❌ **Don't:** Use the same prompt across different models without testing — they behave differently\n\n## Prompt Audit Checklist\n\nBefore using a prompt in production:\n\n- [ ] Does it have a clear role/persona?\n- [ ] Is the output format explicitly defined?\n- [ ] Are edge cases 
handled (empty input, ambiguous data)?\n- [ ] Is the length appropriate (not too long/short)?\n- [ ] Has it been tested on 5+ varied inputs?\n- [ ] Is hallucination risk addressed for factual tasks?\n\n## Troubleshooting\n\n**Problem:** Model ignores format instructions\n**Solution:** Move format instructions to the END of the prompt, after examples. Use strong language: \"You MUST return only valid JSON.\"\n\n**Problem:** Inconsistent results between runs\n**Solution:** Lower the temperature setting (0.0-0.3 for factual tasks). Add more few-shot examples.\n\n**Problem:** Prompt works in playground but fails in production\n**Solution:** Check if system prompt is being sent correctly. Verify token limits aren't being exceeded (use a token counter).\n\n**Problem:** Output is too long\n**Solution:** Add explicit word/sentence limits: \"Respond in exactly 3 bullet points, each under 20 words.\"\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["llm","prompt","optimizer","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding"],"capabilities":["skill","source-sickn33","skill-llm-prompt-optimizer","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/llm-prompt-optimizer","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add 
sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34726 github stars · SKILL.md body (6,153 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T12:51:11.069Z","embedding":null,"createdAt":"2026-04-18T21:40:08.117Z","updatedAt":"2026-04-23T12:51:11.069Z","lastSeenAt":"2026-04-23T12:51:11.069Z","prices":[{"id":"a18fa325-6836-4fb4-9b06-80a8b77c8daa","listingId":"de713f1b-8a03-4548-bc40-8c11d4c439a7","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:40:08.117Z"}],"sources":[{"listingId":"de713f1b-8a03-4548-bc40-8c11d4c439a7","source":"github","sourceId":"sickn33/antigravity-awesome-skills/llm-prompt-optimizer","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/llm-prompt-optimizer","isPrimary":false,"firstSeenAt":"2026-04-18T21:40:08.117Z","lastSeenAt":"2026-04-23T12:51:11.069Z"}],"details":{"listingId":"de713f1b-8a03-4548-bc40-8c11d4c439a7","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"llm-prompt-optimizer","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34726,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-23T06:41:03Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"9e88fc05ce3223f3da5d43f0abad62fa4ed45d2a","skill_md_path":"skills/llm-prompt-optimizer/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/llm-prompt-optimizer"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"llm-prompt-optimizer","description":"Use when improving prompts for any LLM. Applies proven prompt engineering techniques to boost output quality, reduce hallucinations, and cut token usage."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/llm-prompt-optimizer"},"updatedAt":"2026-04-23T12:51:11.069Z"}}