{"id":"aa3fdee3-12eb-41bc-813a-b773d3af217c","shortId":"Vgnhkq","kind":"skill","title":"prompt-engineering-patterns","tagline":"Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.","description":"# Prompt Engineering Patterns\n\nMaster advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.\n\n## Do not use this skill when\n\n- The task is unrelated to prompt engineering patterns\n- You need a different domain or tool outside this scope\n\n## Instructions\n\n- Clarify goals, constraints, and required inputs.\n- Apply relevant best practices and validate outcomes.\n- Provide actionable steps and verification.\n- If detailed examples are required, open `resources/implementation-playbook.md`.\n\n## Use this skill when\n\n- Designing complex prompts for production LLM applications\n- Optimizing prompt performance and consistency\n- Implementing structured reasoning patterns (chain-of-thought, tree-of-thought)\n- Building few-shot learning systems with dynamic example selection\n- Creating reusable prompt templates with variable interpolation\n- Debugging and refining prompts that produce inconsistent outputs\n- Implementing system prompts for specialized AI assistants\n\n## Core Capabilities\n\n### 1. Few-Shot Learning\n- Example selection strategies (semantic similarity, diversity sampling)\n- Balancing example count with context window constraints\n- Constructing effective demonstrations with input-output pairs\n- Dynamic example retrieval from knowledge bases\n- Handling edge cases through strategic example selection\n\n### 2. Chain-of-Thought Prompting\n- Step-by-step reasoning elicitation\n- Zero-shot CoT with \"Let's think step by step\"\n- Few-shot CoT with reasoning traces\n- Self-consistency techniques (sampling multiple reasoning paths)\n- Verification and validation steps\n\n### 3. 
Prompt Optimization\n- Iterative refinement workflows\n- A/B testing prompt variations\n- Measuring prompt performance metrics (accuracy, consistency, latency)\n- Reducing token usage while maintaining quality\n- Handling edge cases and failure modes\n\n### 4. Template Systems\n- Variable interpolation and formatting\n- Conditional prompt sections\n- Multi-turn conversation templates\n- Role-based prompt composition\n- Modular prompt components\n\n### 5. System Prompt Design\n- Setting model behavior and constraints\n- Defining output formats and structure\n- Establishing role and expertise\n- Safety guidelines and content policies\n- Context setting and background information\n\n## Quick Start\n\n```python\nfrom prompt_optimizer import PromptTemplate, FewShotSelector\n\n# Define a structured prompt template\ntemplate = PromptTemplate(\n    system=\"You are an expert SQL developer. Generate efficient, secure SQL queries.\",\n    instruction=\"Convert the following natural language query to SQL:\\n{query}\",\n    few_shot_examples=True,\n    output_format=\"SQL code block with explanatory comments\"\n)\n\n# Configure few-shot learning\nselector = FewShotSelector(\n    examples_db=\"sql_examples.jsonl\",\n    selection_strategy=\"semantic_similarity\",\n    max_examples=3\n)\n\n# Generate optimized prompt\nprompt = template.render(\n    query=\"Find all users who registered in the last 30 days\",\n    examples=selector.select(query=\"user registration date filter\")\n)\n```\n\n## Key Patterns\n\n### Progressive Disclosure\nStart with simple prompts, add complexity only when needed:\n\n1. **Level 1**: Direct instruction\n   - \"Summarize this article\"\n\n2. **Level 2**: Add constraints\n   - \"Summarize this article in 3 bullet points, focusing on key findings\"\n\n3. **Level 3**: Add reasoning\n   - \"Read this article, identify the main findings, then summarize in 3 bullet points\"\n\n4. 
**Level 4**: Add examples\n   - Include 2-3 example summaries with input-output pairs\n\n### Instruction Hierarchy\n```\n[System Context] → [Task Instruction] → [Examples] → [Input Data] → [Output Format]\n```\n\n### Error Recovery\nBuild prompts that gracefully handle failures:\n- Include fallback instructions\n- Request confidence scores\n- Ask for alternative interpretations when uncertain\n- Specify how to indicate missing information\n\n## Best Practices\n\n1. **Be Specific**: Vague prompts produce inconsistent results\n2. **Show, Don't Tell**: Examples are more effective than descriptions\n3. **Test Extensively**: Evaluate on diverse, representative inputs\n4. **Iterate Rapidly**: Small changes can have large impacts\n5. **Monitor Performance**: Track metrics in production\n6. **Version Control**: Treat prompts as code with proper versioning\n7. **Document Intent**: Explain why prompts are structured as they are\n\n## Common Pitfalls\n\n- **Over-engineering**: Starting with complex prompts before trying simple ones\n- **Example pollution**: Using examples that don't match the target task\n- **Context overflow**: Exceeding token limits with excessive examples\n- **Ambiguous instructions**: Leaving room for multiple interpretations\n- **Ignoring edge cases**: Not testing on unusual or boundary inputs\n\n## Integration Patterns\n\n### With RAG Systems\n```python\n# Combine retrieved context with prompt engineering\nprompt = f\"\"\"Given the following context:\n{retrieved_context}\n\n{few_shot_examples}\n\nQuestion: {user_question}\n\nProvide a detailed answer based solely on the context above. If the context doesn't contain enough information, explicitly state what's missing.\"\"\"\n```\n\n### With Validation\n```python\n# Add self-verification step\nprompt = f\"\"\"{main_task_prompt}\n\nAfter generating your response, verify it meets these criteria:\n1. Answers the question directly\n2. Uses only information from provided context\n3. Cites specific sources\n4. 
Acknowledges any uncertainty\n\nIf verification fails, revise your response.\"\"\"\n```\n\n## Performance Optimization\n\n### Token Efficiency\n- Remove redundant words and phrases\n- Use abbreviations consistently after first definition\n- Consolidate similar instructions\n- Move stable content to system prompts\n\n### Latency Reduction\n- Minimize prompt length without sacrificing quality\n- Use streaming for long-form outputs\n- Cache common prompt prefixes\n- Batch similar requests when possible\n\n## Resources\n\n- **references/few-shot-learning.md**: Deep dive on example selection and construction\n- **references/chain-of-thought.md**: Advanced reasoning elicitation techniques\n- **references/prompt-optimization.md**: Systematic refinement workflows\n- **references/prompt-templates.md**: Reusable template patterns\n- **references/system-prompts.md**: System-level prompt design\n- **assets/prompt-template-library.md**: Battle-tested prompt templates\n- **assets/few-shot-examples.json**: Curated example datasets\n- **scripts/optimize-prompt.py**: Automated prompt optimization tool\n\n## Success Metrics\n\nTrack these KPIs for your prompts:\n- **Accuracy**: Correctness of outputs\n- **Consistency**: Reproducibility across similar inputs\n- **Latency**: Response time (P50, P95, P99)\n- **Token Usage**: Average tokens per request\n- **Success Rate**: Percentage of valid outputs\n- **User Satisfaction**: Ratings and feedback\n\n## Next Steps\n\n1. Review the prompt template library for common patterns\n2. Experiment with few-shot learning for your specific use case\n3. Implement prompt versioning and A/B testing\n4. Set up automated evaluation pipelines\n5. 
Document your prompt engineering decisions and learnings\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["prompt","engineering","patterns","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding"],"capabilities":["skill","source-sickn33","skill-prompt-engineering-patterns","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/prompt-engineering-patterns","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: indexed on github topic:agent-skills · 34616 github stars · SKILL.md body (7,409 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T00:51:26.353Z","embedding":null,"createdAt":"2026-04-18T21:42:57.165Z","updatedAt":"2026-04-23T00:51:26.353Z","lastSeenAt":"2026-04-23T00:51:26.353Z","tsv":null,"prices":[{"id":"d94c6b14-2825-493f-972e-04fd9fd87e8c","listingId":"aa3fdee3-12eb-41bc-813a-b773d3af217c","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:42:57.165Z"}],"sources":[{"listingId":"aa3fdee3-12eb-41bc-813a-b773d3af217c","source":"github","sourceId":"sickn33/antigravity-awesome-skills/prompt-engineering-patterns","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/prompt-engineering-patterns","isPrimary":false,"firstSeenAt":"2026-04-18T21:42:57.165Z","lastSeenAt":"2026-04-23T00:51:26.353Z"}],"details":{"listingId":"aa3fdee3-12eb-41bc-813a-b773d3af217c","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"prompt-engineering-patterns","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34616,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding",
"ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-22T06:40:00Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"e4033018a2e6796a6c5ed0067832b5bc8ccecd23","skill_md_path":"skills/prompt-engineering-patterns/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/prompt-engineering-patterns"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"prompt-engineering-patterns","description":"Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/prompt-engineering-patterns"},"updatedAt":"2026-04-23T00:51:26.353Z"}}