{"id":"305eee37-13b1-4583-bfdd-5985b6b57409","shortId":"Xjt64z","kind":"skill","title":"llm-ops","tagline":"LLM Operations -- RAG, embeddings, vector databases, fine-tuning, prompt engineering avancado, custos de LLM, evals de qualidade e arquiteturas de IA para producao.","description":"# LLM-OPS -- IA de Producao\n\n## Overview\n\nLLM Operations -- RAG, embeddings, vector databases, fine-tuning, prompt engineering avancado, custos de LLM, evals de qualidade e arquiteturas de IA para producao. Ativar para: implementar RAG, criar pipeline de embeddings, Pinecone/Chroma/pgvector, fine-tuning, prompt engineering, reducao de custos de LLM, evals, cache semantico, streaming, agents.\n\n## When to Use This Skill\n\n- When you need specialized assistance with this domain\n\n## Do Not Use This Skill When\n\n- The task is unrelated to llm ops\n- A simpler, more specific tool can handle the request\n- The user needs general-purpose assistance without domain expertise\n\n## How It Works\n\n> A diferenca entre um prototipo de IA e um produto de IA e operabilidade.\n> LLM-Ops e a engenharia que torna IA confiavel, escalavel e economica.\n\n---\n\n## Arquitetura Rag Completa\n\n[Documentos] -> [Chunking] -> [Embeddings] -> [Vector DB]\n                                                      |\n    [Query] -> [Embed query] -> [Semantic Search] -> [Top K chunks]\n                                                          |\n                                           [LLM + Context] -> [Resposta]\n\n## Pipeline De Indexacao\n\nfrom anthropic import Anthropic\n    import chromadb\n\n    client = Anthropic()\n    chroma = chromadb.PersistentClient(path=\"./chroma_db\")\n\n    def chunk_text(text, chunk_size=500, overlap=50):\n        words = text.split()\n        chunks = []\n        for i in range(0, len(words), chunk_size - overlap):\n            chunk = \" \".join(words[i:i + chunk_size])\n            if chunk: chunks.append(chunk)\n        return chunks\n\n    def 
index_document(doc_id, content_text, metadata=None):\n        chunks = chunk_text(content_text)\n        ids = [f\"{doc_id}_chunk_{i}\" for i in range(len(chunks))]\n        # Store per-chunk metadata so queries can cite a source\n        # (falling back to the doc id is an assumption of this sketch).\n        metas = [metadata or {\"source\": doc_id}] * len(chunks)\n        collection.upsert(ids=ids, documents=chunks, metadatas=metas)\n        return len(chunks)\n\n## RAG Query Pipeline\n\n    def rag_query(query, top_k=5, system=None):\n        results = collection.query(\n            query_texts=[query], n_results=top_k,\n            include=[\"documents\", \"metadatas\", \"distances\"])\n        context_parts = []\n        for doc, meta, dist in zip(results[\"documents\"][0],\n                                   results[\"metadatas\"][0],\n                                   results[\"distances\"][0]):\n            if dist < 1.5:  # drop chunks too distant from the query\n                src = meta.get(\"source\", \"doc\")\n                context_parts.append(f\"[Source: {src}]\\n{doc}\")\n        context = \"\\n\\n---\\n\\n\".join(context_parts)\n        response = client.messages.create(\n            model=\"claude-opus-4-20250805\", max_tokens=1024,\n            system=system or \"Answer based on the context.\",\n            messages=[{\"role\": \"user\", \"content\": f\"Context:\\n{context}\\n\\n{query}\"}])\n        return response.content[0].text\n\n---\n\n## Choosing a Vector DB\n\n| DB | Best For | Hosting | Cost |\n|----|---------|---------|------|\n| Chroma | Development, local | Self-hosted | Free |\n| pgvector | Already on PostgreSQL | Self/Cloud | Free |\n| Pinecone | Managed production | Cloud | USD 70+/mo |\n| Weaviate | Multi-modal | Self/Cloud | Free+ |\n| Qdrant | High performance | Self/Cloud | Free+ |\n\n## pgvector\n\n    CREATE EXTENSION IF NOT EXISTS vector;\n    CREATE TABLE knowledge_embeddings (\n        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),\n        content TEXT NOT NULL,\n        embedding vector(1536),\n        metadata JSONB,\n        created_at TIMESTAMPTZ DEFAULT NOW()\n    );\n    CREATE INDEX ON knowledge_embeddings\n    USING ivfflat (embedding vector_cosine_ops) 
WITH (lists = 100);\n    -- Order by the distance operator (not the aliased similarity)\n    -- so the ivfflat index can be used.\n    SELECT content, 1 - (embedding <=> QUERY_VECTOR) AS similarity\n    FROM knowledge_embeddings\n    ORDER BY embedding <=> QUERY_VECTOR\n    LIMIT 5;\n\n---\n\n## Elite Prompt Structure\n\nComponents of the Auri system prompt:\n\n- Identity: name (Auri), tone (natural, warm, direct), platform (Amazon Alexa)\n- Rules: at most 3 short paragraphs, no markdown, conversational language\n- Capabilities: business analysis, data-driven advice, creativity\n- Limitations: no real-time internet access, no financial transactions\n- Personalization: {user_name}, {user_preferences}, {relevant_history}\n\n## Chain-of-Thought\n\n    def cot_analysis(problem: str) -> str:\n        steps = [\n            \"1. What exactly is being asked?\",\n            \"2. What information is critical to solving it?\",\n            \"3. What approaches are possible?\",\n            \"4. Which approach is best, and why?\",\n            \"5. What risks or limitations exist?\",\n        ]\n        prompt = f\"Analyze step by step:\\n\\nPROBLEM: {problem}\\n\\n\"\n        prompt += \"\\n\".join(steps) + \"\\n\\nFinal answer (concise, for voice):\"\n        # call_claude: assumed thin wrapper around client.messages.create (not shown).\n        return call_claude(prompt)\n\n---\n\n## Semantic Cache\n\n    class SemanticCache:\n        def __init__(self, similarity_threshold=0.95):\n            self.threshold = similarity_threshold\n            self.cache = {}  # maps embedding tuple -> (response, query)\n\n        def get_cached(self, query, embedding):\n            # Linear scan; fine for small caches, use a vector index at scale.\n            # cosine_similarity: assumed helper (e.g. numpy dot product / norms).\n            for cached_emb, (response, _) in self.cache.items():\n                if cosine_similarity(embedding, cached_emb) >= self.threshold:\n                    return response\n            return None\n\n        def set_cache(self, query, embedding, response):\n            self.cache[tuple(embedding)] = (response, query)\n\n## Claude Cost Estimation\n\n    # USD per million tokens\n    PRICING = {\n        \"claude-opus-4-20250805\": {\"input\": 15.00, \"output\": 75.00},\n        \"claude-sonnet-4-5\": {\"input\": 3.00, \"output\": 15.00},\n        \"claude-haiku-3-5\": 
{\"input\": 0.80, \"output\": 4.00},\n    }\n\n    def estimate_monthly_cost(model, avg_input, avg_output, req_per_day):\n        p = PRICING[model]\n        daily = (avg_input + avg_output) * req_per_day / 1e6\n        monthly = daily * p[\"input\"] * 30\n        return {\"model\": model, \"monthly_cost\": \"USD %.2f\" % monthly}\n\n---\n\n## Framework De Avaliacao\n\nfrom anthropic import Anthropic\n    client = Anthropic()\n\n    def evaluate_response(question, expected, actual, criteria):\n        criteria_text = \"\n\".join(f\"- {c}\" for c in criteria)\n        eval_prompt = (\n            f\"Avalie a resposta do assistente de IA.\n\n\"\n            f\"PERGUNTA: {question}\nRESPOSTA ESPERADA: {expected}\n\"\n            f\"RESPOSTA ATUAL: {actual}\n\nCriterios:\n{criteria_text}\n\n\"\n            \"Nota 0-10 e justificativa para cada criterio. Formato JSON.\"\n        )\n        response = client.messages.create(\n            model=\"claude-haiku-3-5\", max_tokens=1024,\n            messages=[{\"role\": \"user\", \"content\": eval_prompt}]\n        )\n        import json\n        return json.loads(response.content[0].text)\n\n    AURI_EVALS = [\n        {\n            \"question\": \"Quais sao os principais riscos de abrir startup agora?\",\n            \"criteria\": [\"precisao_factual\", \"relevancia\", \"clareza_para_voz\"]\n        },\n    ]\n\n---\n\n## 6. 
Commands\n\n| Command | Action |\n|---------|--------|\n| /rag-setup | Sets up a complete RAG pipeline |\n| /embed-docs | Indexes documents into the vector DB |\n| /prompt-optimize | Optimizes a prompt for quality and cost |\n| /cost-estimate | Estimates monthly LLM cost |\n| /eval-run | Runs the quality eval suite |\n| /cache-setup | Sets up semantic caching |\n| /model-select | Picks the best model for the use case |\n\n## Best Practices\n\n- Provide clear, specific context about your project and requirements\n- Review all suggestions before applying them to production code\n- Combine with other complementary skills for comprehensive analysis\n\n## Common Pitfalls\n\n- Using this skill for tasks outside its domain expertise\n- Applying recommendations without understanding your specific context\n- Not providing enough project context for accurate analysis\n\n## Limitations\n\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["llm","ops","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows"],"capabilities":["skill","source-sickn33","skill-llm-ops","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/llm-ops","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add 
sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34726 github stars · SKILL.md body (7,908 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T12:51:11.006Z","embedding":null,"createdAt":"2026-04-18T21:40:07.356Z","updatedAt":"2026-04-23T12:51:11.006Z","lastSeenAt":"2026-04-23T12:51:11.006Z","prices":[{"id":"7a6202a1-969b-49b7-b005-33ef2180bad7","listingId":"305eee37-13b1-4583-bfdd-5985b6b57409","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:40:07.356Z"}],"sources":[{"listingId":"305eee37-13b1-4583-bfdd-5985b6b57409","source":"github","sourceId":"sickn33/antigravity-awesome-skills/llm-ops","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/llm-ops","isPrimary":false,"firstSeenAt":"2026-04-18T21:40:07.356Z","lastSeenAt":"2026-04-23T12:51:11.006Z"}],"details":{"listingId":"305eee37-13b1-4583-bfdd-5985b6b57409","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"llm-ops","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34726,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-23T06:41:03Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"a18fcd796c6758bb0b530170d08cdf63b2606fcc","skill_md_path":"skills/llm-ops/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/llm-ops"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"llm-ops","description":"LLM Operations -- RAG, embeddings, vector databases, fine-tuning, advanced prompt engineering, LLM costs, quality evals, and production AI architectures."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/llm-ops"},"updatedAt":"2026-04-23T12:51:11.006Z"}}