{"id":"c5a8b231-c580-46d9-a29f-858b81a95d1c","shortId":"Pgkdbw","kind":"skill","title":"llm-application-dev","tagline":"Building applications with Large Language Models - prompt engineering, RAG patterns, and LLM integration. Use for AI-powered features, chatbots, or LLM-based automation.","description":"# LLM Application Development\n\n## Prompt Engineering\n\n### Structured Prompts\n```typescript\nconst systemPrompt = `You are a helpful assistant that answers questions about our product.\n\nRULES:\n- Only answer questions about our product\n- If you don't know, say \"I don't know\"\n- Keep responses concise (under 100 words)\n- Never make up information\n\nCONTEXT:\n{context}`;\n\nconst userPrompt = `Question: {question}`;\n```\n\n### Few-Shot Examples\n```typescript\nconst prompt = `Classify the sentiment of customer feedback.\n\nExamples:\nInput: \"Love this product!\"\nOutput: positive\n\nInput: \"Worst purchase ever\"\nOutput: negative\n\nInput: \"It works fine\"\nOutput: neutral\n\nInput: \"${customerFeedback}\"\nOutput:`;\n```\n\n### Chain of Thought\n```typescript\nconst prompt = `Solve this step by step:\n\nQuestion: ${question}\n\nLet's think through this:\n1. First, identify the key information\n2. Then, determine the approach\n3. Finally, calculate the answer\n\nStep-by-step solution:`;\n```\n\n## API Integration\n\n### OpenAI Pattern\n```typescript\nimport OpenAI from 'openai';\n\nconst openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });\n\n// Alias the SDK's message type so the snippet is self-contained\ntype Message = OpenAI.Chat.Completions.ChatCompletionMessageParam;\n\nasync function chat(messages: Message[]): Promise<string> {\n  const response = await openai.chat.completions.create({\n    model: 'gpt-4',\n    messages,\n    temperature: 0.7,\n    max_tokens: 500,\n  });\n\n  return response.choices[0].message.content ?? 
'';\n}\n```\n\n### Anthropic Pattern\n```typescript\nimport Anthropic from '@anthropic-ai/sdk';\n\nconst anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });\n\nasync function chat(prompt: string): Promise<string> {\n  const response = await anthropic.messages.create({\n    model: 'claude-3-opus-20240229',\n    max_tokens: 1024,\n    messages: [{ role: 'user', content: prompt }],\n  });\n\n  return response.content[0].type === 'text'\n    ? response.content[0].text\n    : '';\n}\n```\n\n### Streaming Responses\n```typescript\nasync function* streamChat(prompt: string) {\n  const stream = await openai.chat.completions.create({\n    model: 'gpt-4',\n    messages: [{ role: 'user', content: prompt }],\n    stream: true,\n  });\n\n  for await (const chunk of stream) {\n    const content = chunk.choices[0]?.delta?.content;\n    if (content) yield content;\n  }\n}\n```\n\n## RAG (Retrieval-Augmented Generation)\n\n### Basic RAG Pipeline\n```typescript\nasync function ragQuery(question: string): Promise<string> {\n  // 1. Embed the question\n  const questionEmbedding = await embedText(question);\n\n  // 2. Search vector database\n  const relevantDocs = await vectorDb.search(questionEmbedding, { limit: 5 });\n\n  // 3. Build context\n  const context = relevantDocs.map(d => d.content).join('\\n\\n');\n\n  // 4. 
Generate answer\n  const prompt = `Answer based on this context:\\n${context}\\n\\nQuestion: ${question}`;\n  return await chat(prompt);\n}\n```\n\n### Document Chunking\n```typescript\nfunction chunkDocument(text: string, options: ChunkOptions): string[] {\n  const { chunkSize = 1000, overlap = 200 } = options;\n  const chunks: string[] = [];\n\n  let start = 0;\n  while (start < text.length) {\n    const end = Math.min(start + chunkSize, text.length);\n    chunks.push(text.slice(start, end));\n    if (end === text.length) break;  // avoid emitting a redundant trailing chunk\n    start += chunkSize - overlap;\n  }\n\n  return chunks;\n}\n```\n\n### Embedding Storage\n```typescript\n// Using Supabase with pgvector\nasync function storeEmbeddings(docs: Document[]) {\n  for (const doc of docs) {\n    const embedding = await embedText(doc.content);\n\n    await supabase.from('documents').insert({\n      content: doc.content,\n      metadata: doc.metadata,\n      embedding: embedding,  // vector column\n    });\n  }\n}\n\nasync function searchSimilar(query: string, limit = 5) {\n  const embedding = await embedText(query);\n\n  const { data } = await supabase.rpc('match_documents', {\n    query_embedding: embedding,\n    match_count: limit,\n  });\n\n  return data;\n}\n```\n\n## Error Handling\n\n```typescript\nasync function safeLLMCall<T>(\n  fn: () => Promise<T>,\n  options: { retries?: number; fallback?: T } = {}\n): Promise<T> {\n  const { retries = 3, fallback } = options;\n\n  for (let i = 0; i < retries; i++) {\n    try {\n      return await fn();\n    } catch (error) {\n      // catch variables are `unknown` in modern TypeScript; narrow before reading .status\n      const status = (error as { status?: number }).status;\n      if (status === 429 && i < retries - 1) {\n        // Rate limit - exponential backoff before the next attempt\n        await sleep(Math.pow(2, i) * 1000);\n        continue;\n      }\n      if (i === retries - 1) {\n        if (fallback !== undefined) return fallback;\n        throw error;\n      }\n    }\n  }\n  throw new Error('Max retries exceeded');\n}\n```\n\n## Best Practices\n\n- **Token Management**: Track usage and set limits\n- **Caching**: Cache embeddings and common queries\n- **Evaluation**: Test 
prompts with diverse inputs\n- **Guardrails**: Validate outputs before using\n- **Logging**: Log prompts and responses for debugging\n- **Cost Control**: Use cheaper models for simple tasks\n- **Latency**: Stream responses for better UX\n- **Privacy**: Don't send PII to external APIs","tags":["llm","application","dev","agent","skills","moizibnyousaf","agent-skills","claude-code","cli","codex","cursor","developer-tools"],"capabilities":["skill","source-moizibnyousaf","skill-llm-application-dev","topic-agent-skills","topic-claude-code","topic-cli","topic-codex","topic-cursor","topic-developer-tools","topic-productivity"],"categories":["Ai-Agent-Skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/MoizIbnYousaf/Ai-Agent-Skills/llm-application-dev","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add MoizIbnYousaf/Ai-Agent-Skills","source_repo":"https://github.com/MoizIbnYousaf/Ai-Agent-Skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 1044 github stars · SKILL.md body (4,838 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-05-02T18:52:55.004Z","embedding":null,"createdAt":"2026-04-18T21:56:16.325Z","updatedAt":"2026-05-02T18:52:55.004Z","lastSeenAt":"2026-05-02T18:52:55.004Z","tsv":"'-20240229':230 '-3':228 '-4':187,261 '/sdk':207 '0':196,241,245,278,371,472 '0.7':190 '1':137,300,499 '100':72 '1000':362,494 '1024':233 '2':143,309,492 '200':364 '3':148,320,466 '4':331 '429':484 '5':319,430 '500':193 'ai':21,206 'ai-pow':20 'answer':46,53,152,333,336 'anthrop':198,202,205,209,211 'anthropic-ai':204 
'anthropic.messages.create':225 'api':158,173,214,567 'apikey':171,212 'applic':3,6,31 'approach':147 'assist':44 'async':175,216,250,294,397,424,453 'augment':288 'autom':29 'await':183,224,257,270,306,315,347,409,412,433,438,478,489 'backoff':488 'base':28,337 'basic':290 'best':513 'better':558 'build':5,321 'cach':522,523 'calcul':150 'catch':480 'chain':119 'chat':177,218,348 'chatbot':24 'cheaper':549 'chunk':272,351,367,389 'chunk.choices':277 'chunkdocu':354 'chunkopt':358 'chunks.push':381 'chunksiz':361,379,386 'classifi':91 'claud':227 'column':423 'common':526 'concis':70 'const':38,80,89,123,167,181,208,222,255,271,275,304,313,323,334,360,366,375,403,407,431,436,464 'content':237,265,276,280,282,284,416 'context':78,79,322,324,340,342 'continu':495 'control':547 'cost':546 'count':446 'custom':95 'customerfeedback':117 'd':326 'd.content':327 'data':437,449 'databas':312 'debug':545 'delta':279 'determin':145 'dev':4 'develop':32 'divers':532 'doc':400,404,406 'doc.content':411,417 'doc.metadata':419 'document':350,401,414,441 'emb':301 'embed':390,408,420,421,432,443,444,524 'embedtext':307,410,434 'end':376,384 'engin':12,34 'error':450,481,506,509 'error.status':483 'evalu':528 'ever':107 'exampl':87,97 'exceed':512 'exponenti':487 'extern':566 'fallback':461,467,501,504 'featur':23 'feedback':96 'few-shot':84 'final':149 'fine':113 'first':138 'fn':456,479 'function':176,217,251,295,353,398,425,454 'generat':289,332 'gpt':186,260 'guardrail':534 'handl':451 'help':43 'identifi':139 'import':163,201 'inform':77,142 'input':98,104,110,116,533 'insert':415 'integr':17,159 'join':328 'keep':68 'key':141,174,215 'know':62,67 'languag':9 'larg':8 'latenc':554 'let':132,369,470 'limit':318,429,447,486,521 'llm':2,16,27,30 'llm-application-dev':1 'llm-base':26 'log':539,540 'love':99 'make':75 'manag':516 'match':440,445 'math.min':377 'math.pow':491 'max':191,231,510 'messag':178,179,188,234,262 'message.content':197 'metadata':418 
'model':10,185,226,259,550 'n':329,330,341,343 'negat':109 'neutral':115 'never':74 'new':169,210,508 'nquestion':344 'number':460 'openai':160,164,166,168,170 'openai.chat.completions.create':184,258 'option':357,365,458,468 'opus':229 'output':102,108,114,118,536 'overlap':363,387 'pattern':14,161,199 'pgvector':396 'pii':564 'pipelin':292 'posit':103 'power':22 'practic':514 'privaci':560 'process.env.anthropic':213 'process.env.openai':172 'product':50,57,101 'promis':180,221,299,457,463 'prompt':11,33,36,90,124,219,238,253,266,335,349,530,541 'purchas':106 'queri':427,435,442,527 'question':47,54,82,83,130,131,297,303,308,345 'questionembed':305,317 'rag':13,285,291 'ragqueri':296 'rate':485 'relevantdoc':314 'relevantdocs.map':325 'respons':69,182,223,248,543,556 'response.choices':195 'response.content':240,244 'retri':459,465,474,498,511 'retriev':287 'retrieval-aug':286 'return':194,239,346,388,448,477,503 'role':235,263 'rule':51 'safellmcal':455 'say':63 'search':310 'searchsimilar':426 'send':563 'sentiment':93 'set':520 'shot':86 'simpl':552 'skill' 'skill-llm-application-dev' 'sleep':490 'solut':157 'solv':125 'source-moizibnyousaf' 'start':370,373,378,383,385 'step':127,129,154,156 'step-by-step':153 'storag':391 'storeembed':399 'stream':247,256,267,274,555 'streamchat':252 'string':220,254,298,356,359,368,428 'structur':35 'supabas':394 'supabase.from':413 'supabase.rpc':439 'systemprompt':39 'task':553 'temperatur':189 'test':529 'text':243,246,355 'text.length':374,380 'text.slice':382 'think':134 'thought':121 'throw':505,507 'token':192,232,515 'topic-agent-skills' 'topic-claude-code' 'topic-cli' 'topic-codex' 'topic-cursor' 'topic-developer-tools' 'topic-productivity' 'track':517 'tri':476 'true':268 'type':242 'typescript':37,88,122,162,200,249,293,352,392,452 'undefin':502 'usag':518 'use':18,393,538,548 'user':236,264 'userprompt':81 'ux':559 'valid':535 'vector':311,422 'vectordb.search':316 'word':73 'work':112 'worst':105 
'yield':283","prices":[{"id":"ca65b2ac-c9a0-4a4b-96d3-1a9f1692c321","listingId":"c5a8b231-c580-46d9-a29f-858b81a95d1c","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"MoizIbnYousaf","category":"Ai-Agent-Skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:56:16.325Z"}],"sources":[{"listingId":"c5a8b231-c580-46d9-a29f-858b81a95d1c","source":"github","sourceId":"MoizIbnYousaf/Ai-Agent-Skills/llm-application-dev","sourceUrl":"https://github.com/MoizIbnYousaf/Ai-Agent-Skills/tree/main/skills/llm-application-dev","isPrimary":false,"firstSeenAt":"2026-04-18T21:56:16.325Z","lastSeenAt":"2026-05-02T18:52:55.004Z"}],"details":{"listingId":"c5a8b231-c580-46d9-a29f-858b81a95d1c","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"MoizIbnYousaf","slug":"llm-application-dev","github":{"repo":"MoizIbnYousaf/Ai-Agent-Skills","stars":1044,"topics":["agent-skills","claude-code","cli","codex","cursor","developer-tools","productivity"],"license":"mit","html_url":"https://github.com/MoizIbnYousaf/Ai-Agent-Skills","pushed_at":"2026-04-13T19:04:12Z","description":"my curated agent skills library ","skill_md_sha":"9ccadbe20cbf3863b5ee1e63b7d4e1b30da3d14d","skill_md_path":"skills/llm-application-dev/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/MoizIbnYousaf/Ai-Agent-Skills/tree/main/skills/llm-application-dev"},"layout":"multi","source":"github","category":"Ai-Agent-Skills","frontmatter":{"name":"llm-application-dev","license":"MIT","description":"Building applications with Large Language Models - prompt engineering, RAG patterns, and LLM integration. 
Use for AI-powered features, chatbots, or LLM-based automation."},"skills_sh_url":"https://skills.sh/MoizIbnYousaf/Ai-Agent-Skills/llm-application-dev"},"updatedAt":"2026-05-02T18:52:55.004Z"}}