{"id":"a6be7d90-192b-46c9-a005-44582a3cdc32","shortId":"ujBfXb","kind":"skill","title":"langfuse","tagline":"Expert in Langfuse - the open-source LLM observability platform.","description":"# Langfuse\n\nExpert in Langfuse - the open-source LLM observability platform. Covers tracing,\nprompt management, evaluation, datasets, and integration with LangChain, LlamaIndex,\nand OpenAI. Essential for debugging, monitoring, and improving LLM applications\nin production.\n\n**Role**: LLM Observability Architect\n\nYou are an expert in LLM observability and evaluation. You think in terms of\ntraces, spans, and metrics. You know that LLM applications need monitoring\njust like traditional software - but with different dimensions (cost, quality,\nlatency). You use data to drive prompt improvements and catch regressions.\n\n### Expertise\n\n- Tracing architecture\n- Prompt versioning\n- Evaluation strategies\n- Cost optimization\n- Quality monitoring\n\n## Capabilities\n\n- LLM tracing and observability\n- Prompt management and versioning\n- Evaluation and scoring\n- Dataset management\n- Cost tracking\n- Performance monitoring\n- A/B testing prompts\n\n## Prerequisites\n\n- 0: LLM application basics\n- 1: API integration experience\n- 2: Understanding of tracing concepts\n- Required skills: Python or TypeScript/JavaScript, Langfuse account (cloud or self-hosted), LLM API keys\n\n## Scope\n\n- 0: Self-hosted requires infrastructure\n- 1: High-volume may need optimization\n- 2: Real-time dashboard has latency\n- 3: Evaluation requires setup\n\n## Ecosystem\n\n### Primary\n\n- Langfuse Cloud\n- Langfuse Self-hosted\n- Python SDK\n- JS/TS SDK\n\n### Common_integrations\n\n- LangChain\n- LlamaIndex\n- OpenAI SDK\n- Anthropic SDK\n- Vercel AI SDK\n\n### Platforms\n\n- Any Python/JS backend\n- Serverless functions\n- Jupyter notebooks\n\n## Patterns\n\n### Basic Tracing Setup\n\nInstrument LLM calls with Langfuse\n\n**When to use**: Any LLM application\n\nfrom langfuse import Langfuse\n\n# Initialize client\nlangfuse = Langfuse(\n    public_key=\"pk-...\",\n    secret_key=\"sk-...\",\n    host=\"https://cloud.langfuse.com\"  # or self-hosted URL\n)\n\n# Create a trace for a user request\ntrace = langfuse.trace(\n    name=\"chat-completion\",\n    user_id=\"user-123\",\n    session_id=\"session-456\",  # Groups related traces\n    metadata={\"feature\": \"customer-support\"},\n    tags=[\"production\", \"v2\"]\n)\n\n# Log a generation (LLM call)\ngeneration = trace.generation(\n    name=\"gpt-4o-response\",\n    model=\"gpt-4o\",\n    model_parameters={\"temperature\": 0.7},\n    input={\"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]},\n    metadata={\"attempt\": 1}\n)\n\n# Make actual LLM call\nresponse = openai.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"Hello\"}]\n)\n\n# Complete the generation with output\ngeneration.end(\n    output=response.choices[0].message.content,\n    usage={\n        \"input\": response.usage.prompt_tokens,\n        \"output\": response.usage.completion_tokens\n    }\n)\n\n# Score the trace\ntrace.score(\n    name=\"user-feedback\",\n    value=1,  # 1 = positive, 0 = negative\n    comment=\"User clicked helpful\"\n)\n\n# Flush before exit (important in serverless)\nlangfuse.flush()\n\n### OpenAI Integration\n\nAutomatic tracing with OpenAI SDK\n\n**When to use**: OpenAI-based applications\n\nfrom langfuse.openai import openai\n\n# Drop-in replacement for OpenAI client\n# All calls automatically traced\n\nresponse 
### OpenAI Integration\n\nAutomatic tracing with the OpenAI SDK\n\n**When to use**: OpenAI-based applications\n\nfrom langfuse.openai import openai\n\n# Drop-in replacement for the OpenAI client\n# All calls are traced automatically\n\nresponse = openai.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"Hello\"}],\n    # Langfuse-specific parameters\n    name=\"greeting\",  # Trace name\n    session_id=\"session-123\",\n    user_id=\"user-456\",\n    tags=[\"test\"],\n    metadata={\"feature\": \"chat\"}\n)\n\n# Works with streaming\nstream = openai.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"Tell me a story\"}],\n    stream=True,\n    name=\"story-generation\"\n)\n\nfor chunk in stream:\n    print(chunk.choices[0].delta.content or \"\", end=\"\")\n\n# Works with async\nimport asyncio\nfrom langfuse.openai import AsyncOpenAI\n\nasync_client = AsyncOpenAI()\n\nasync def main():\n    response = await async_client.chat.completions.create(\n        model=\"gpt-4o\",\n        messages=[{\"role\": \"user\", \"content\": \"Hello\"}],\n        name=\"async-greeting\"\n    )\n    return response\n\nasyncio.run(main())\n\n### LangChain Integration\n\nTrace LangChain applications\n\n**When to use**: LangChain-based applications\n\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\nfrom langfuse.callback import CallbackHandler\n\n# Create the Langfuse callback handler\nlangfuse_handler = CallbackHandler(\n    public_key=\"pk-...\",\n    secret_key=\"sk-...\",\n    host=\"https://cloud.langfuse.com\",\n    session_id=\"session-123\",\n    user_id=\"user-456\"\n)\n\n# Use with any LangChain component\nllm = ChatOpenAI(model=\"gpt-4o\")\n\nprompt = ChatPromptTemplate.from_messages([\n    (\"system\", \"You are a helpful assistant.\"),\n    (\"user\", \"{input}\")\n])\n\nchain = prompt | llm\n\n# Pass the handler on invoke\nresponse = chain.invoke(\n    {\"input\": \"Hello\"},\n    config={\"callbacks\": [langfuse_handler]}\n)\n\n# Or bind the handler once so every call on the runnable is traced\ntraced_chain = chain.with_config({\"callbacks\": [langfuse_handler]})\nresponse = traced_chain.invoke({\"input\": \"Hello\"})\n\n# Works with agents, retrievers, etc.\n# (assumes \`tools\` is a list of LangChain tools defined elsewhere)\nfrom langchain.agents import AgentExecutor, create_openai_tools_agent\n\n# Tool-calling agents need an agent_scratchpad placeholder\nagent_prompt = ChatPromptTemplate.from_messages([\n    (\"system\", \"You are a helpful assistant.\"),\n    (\"user\", \"{input}\"),\n    MessagesPlaceholder(\"agent_scratchpad\")\n])\n\nagent = create_openai_tools_agent(llm, tools, agent_prompt)\nagent_executor = AgentExecutor(agent=agent, tools=tools)\n\nresult = agent_executor.invoke(\n    {\"input\": \"What's the weather?\"},\n    config={\"callbacks\": [langfuse_handler]}\n)\n\n### Prompt Management\n\nVersion and deploy prompts\n\n**When to use**: Managing prompts across environments\n\nfrom langfuse import Langfuse\nfrom langfuse.openai import openai  # traced OpenAI client\n\nlangfuse = Langfuse()\n\n# Fetch a prompt from Langfuse\n# (create it in the UI or via the API first)\nprompt = langfuse.get_prompt(\"customer-support-v2\")\n\n# Compile the prompt with variables\ncompiled = prompt.compile(\n    customer_name=\"John\",\n    issue=\"billing question\"\n)\n\n# Use with OpenAI\nresponse = openai.chat.completions.create(\n    model=prompt.config.get(\"model\", \"gpt-4o\"),\n    messages=compiled,\n    temperature=prompt.config.get(\"temperature\", 0.7)\n)\n\n# Link a generation to the prompt version\ntrace = langfuse.trace(name=\"support-chat\")\ngeneration = trace.generation(\n    name=\"response\",\n    model=\"gpt-4o\",\n    prompt=prompt  # Links to the specific version\n)\n\n# Create/update prompts via the API\nlangfuse.create_prompt(\n    name=\"customer-support-v3\",\n    type=\"chat\",  # list-of-messages prompt\n    prompt=[\n        {\"role\": \"system\", \"content\": \"You are a support agent...\"},\n        {\"role\": \"user\", \"content\": \"{{user_message}}\"}\n    ],\n    config={\n        \"model\": \"gpt-4o\",\n        \"temperature\": 0.7\n    },\n    labels=[\"production\"]  # or [\"staging\", \"development\"]\n)\n\n
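Fetched prompts are cached client-side, so \`get_prompt\` stays off the request hot path; a sketch, assuming the v2 Python SDK's \`cache_ttl_seconds\` parameter:\n\n# Re-fetch the prompt at most every 5 minutes\nprompt = langfuse.get_prompt(\"customer-support-v3\", cache_ttl_seconds=300)\n\n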
\"development\"]\n)\n\n# Fetch specific label\nprompt = langfuse.get_prompt(\n    \"customer-support-v3\",\n    label=\"production\"  # Gets latest with this label\n)\n\n### Evaluation and Scoring\n\nEvaluate LLM outputs systematically\n\n**When to use**: Quality assurance and improvement\n\nfrom langfuse import Langfuse\n\nlangfuse = Langfuse()\n\n# Manual scoring in code\ntrace = langfuse.trace(name=\"qa-flow\")\n\n# After getting response\ntrace.score(\n    name=\"relevance\",\n    value=0.85,  # 0-1 scale\n    comment=\"Response addressed the question\"\n)\n\ntrace.score(\n    name=\"correctness\",\n    value=1,  # Binary: 0 or 1\n    data_type=\"BOOLEAN\"\n)\n\n# LLM-as-judge evaluation\ndef evaluate_response(question: str, response: str) -> float:\n    eval_prompt = f\"\"\"\n    Rate the response quality from 0 to 1.\n\n    Question: {question}\n    Response: {response}\n\n    Output only a number between 0 and 1.\n    \"\"\"\n\n    result = openai.chat.completions.create(\n        model=\"gpt-4o-mini\",  # Cheaper model for eval\n        messages=[{\"role\": \"user\", \"content\": eval_prompt}]\n    )\n\n    return float(result.choices[0].message.content.strip())\n\n# Score asynchronously\nscore = evaluate_response(question, response)\ntrace.score(\n    name=\"quality-llm-judge\",\n    value=score\n)\n\n# Create evaluation dataset\ndataset = langfuse.create_dataset(name=\"support-qa-v1\")\n\n# Add items to dataset\nlangfuse.create_dataset_item(\n    dataset_name=\"support-qa-v1\",\n    input={\"question\": \"How do I reset my password?\"},\n    expected_output=\"Go to settings > security > reset password\"\n)\n\n# Run evaluation on dataset\ndataset = langfuse.get_dataset(\"support-qa-v1\")\n\nfor item in dataset.items:\n    # Generate response\n    response = generate_response(item.input[\"question\"])\n\n    # Link to dataset item\n    trace = langfuse.trace(name=\"eval-run\")\n    trace.generation(\n        name=\"response\",\n        input=item.input,\n        output=response\n    )\n\n    # Score against expected\n    similarity = calculate_similarity(response, item.expected_output)\n    trace.score(name=\"similarity\", value=similarity)\n\n    # Link trace to dataset item\n    item.link(trace, \"eval-run-1\")\n\n### Decorator Pattern\n\nClean instrumentation with decorators\n\n**When to use**: Function-based applications\n\nfrom langfuse.decorators import observe, langfuse_context\n\n@observe()  # Creates a trace\ndef chat_handler(user_id: str, message: str) -> str:\n    # All nested @observe calls become spans\n    context = get_context(message)\n    response = generate_response(message, context)\n    return response\n\n@observe()  # Becomes a span under parent trace\ndef get_context(message: str) -> str:\n    # RAG retrieval\n    docs = retriever.get_relevant_documents(message)\n    return \"\\n\".join([d.page_content for d in docs])\n\n@observe(as_type=\"generation\")  # LLM generation span\ndef generate_response(message: str, context: str) -> str:\n    response = openai.chat.completions.create(\n        model=\"gpt-4o\",\n        messages=[\n            {\"role\": \"system\", \"content\": f\"Context: {context}\"},\n            {\"role\": \"user\", \"content\": message}\n        ]\n    )\n    return response.choices[0].message.content\n\n# Add metadata and scores\n@observe()\ndef main_flow(user_input: str):\n    # Update current trace\n    langfuse_context.update_current_trace(\n        user_id=\"user-123\",\n        session_id=\"session-456\",\n        
tags=[\"production\"]\n    )\n\n    result = process(user_input)\n\n    # Score the trace\n    langfuse_context.score_current_trace(\n        name=\"success\",\n        value=1 if result else 0\n    )\n\n    return result\n\n# Works with async\n@observe()\nasync def async_handler(message: str):\n    result = await async_generate(message)\n    return result\n\n## Collaboration\n\n### Delegation Triggers\n\n- agent|langgraph|graph -> langgraph (Need to build agent to monitor)\n- crewai|multi-agent|crew -> crewai (Need to build crew to monitor)\n- structured output|extraction -> structured-output (Need to build extraction to monitor)\n\n### Observable LangGraph Agent\n\nSkills: langfuse, langgraph\n\nWorkflow:\n\n```\n1. Build agent with LangGraph\n2. Add Langfuse callback handler\n3. Trace all LLM calls and tool uses\n4. Score outputs for quality\n5. Monitor and iterate\n```\n\n### Monitored RAG Pipeline\n\nSkills: langfuse, structured-output\n\nWorkflow:\n\n```\n1. Build RAG with retrieval and generation\n2. Trace retrieval and LLM calls\n3. Score relevance and accuracy\n4. Track costs and latency\n5. Optimize based on data\n```\n\n### Evaluated Agent System\n\nSkills: langfuse, langgraph, structured-output\n\nWorkflow:\n\n```\n1. Build agent with structured outputs\n2. Create evaluation dataset\n3. Run evaluations with traces\n4. Compare prompt versions\n5. Deploy best performers\n```\n\n## Related Skills\n\nWorks well with: `langgraph`, `crewai`, `structured-output`, `autonomous-agents`\n\n## When to Use\n- User mentions or implies: langfuse\n- User mentions or implies: llm observability\n- User mentions or implies: llm tracing\n- User mentions or implies: prompt management\n- User mentions or implies: llm evaluation\n- User mentions or implies: monitor llm\n- User mentions or implies: debug llm\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["langfuse","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity-skills"],"capabilities":["skill","source-sickn33","skill-langfuse","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/langfuse","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34726 github stars · SKILL.md body (11,834 
chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T12:51:08.536Z","embedding":null,"createdAt":"2026-04-18T21:39:40.447Z","updatedAt":"2026-04-23T12:51:08.536Z","lastSeenAt":"2026-04-23T12:51:08.536Z","tsv":"'-1':818 '-123':265,418,534,1148 '-456':269,422,538,1152 '0':129,158,333,354,457,817,831,858,870,893,1126,1172 '0.7':300,699,756 '0.85':816 '1':133,164,309,351,352,829,833,860,872,1013,1168,1236,1272,1310 '2':137,171,1241,1279,1316 '3':178,1246,1285,1320 '4':1254,1290,1325 '4o':291,296,319,401,436,481,549,693,718,754,878,1112 '5':1259,1295,1329 'a/b':125 'account':148 'accuraci':1289 'across':644 'actual':311 'add':921,1128,1242 'address':822 'agent':597,606,607,611,615,618,619,744,1195,1202,1208,1231,1238,1301,1312,1345 'agent_executor.invoke':623 'agentexecutor':617 'ai':203 'anthrop':200 'api':134,155,661,728 'applic':43,72,131,227,380,495,502,1026 'architect':49 'architectur':98 'ask':1423 'assist':558 'assur':790 'async':462,469,472,489,1177,1179,1181,1187 'async-greet':488 'async_client.chat.completions.create':477 'asynchron':896 'asyncio':464 'asyncopenai':468,471 'attempt':308 'automat':369,394 'autonom':1344 'autonomous-ag':1343 'await':476,1186 'backend':208 'base':379,501,1025,1297 'basic':132,214 'becom':1050,1064 'best':1331 'bill':681 'binari':830 'boolean':836 'boundari':1431 'build':1201,1213,1225,1237,1273,1311 'calcul':993 'call':219,285,313,393,588,1049,1250,1284 'callback':518,573,630,1244 'callbackhandl':515,522 'capabl':107 'catch':94 'chain':561 'chain.invoke':569,592 'chat':260,427,710,1038 'chat-complet':259 'chatopenai':507,545 'chatprompttempl':511 'chatprompttemplate.from':551 'cheaper':880 'chunk':452 'chunk.choices':456 'clarif':1425 'clean':1016 'clear':1398 'click':358 'client':233,391,470 'cloud':149,185 'cloud.langfuse.com':243,530 'code':802 'collabor':1192 'comment':356,820 'common':194 'compar':1326 'compil':671,675,695 'complet':261,325 'compon':543 'concept':141 'config':572,629,750 'content':305,323,405,440,485,739,747,887,1087,1116,1122 'context':1032,1052,1054,1060,1072,1104,1118,1119 'correct':827 'cost':83,103,121,1292 'cover':23 'creat':249,516,603,608,656,910,1034,1317 'create/update':725 'crew':1209,1214 'crewai':1205,1210,1339 'criteria':1434 'current':1140,1143,1163 'custom':276,667,677,733,769 'customer-support':275 'customer-support-v2':666 'customer-support-v3':732,768 'd':1089 'd.page':1086 'dashboard':175 'data':88,834,1299 'dataset':28,119,912,913,915,924,926,928,953,954,956,974,1006,1319 'dataset.items':964 'debug':38,1388 'decor':1014,1019 'def':473,842,1037,1070,1099,1133,1180 'default':579 'deleg':1193 'delta.content':458 'deploy':637,1330 'describ':1402 'develop':761 'differ':81 'dimens':82 'doc':1078,1091 'document':1081 'drive':90 'drop':386 'drop-in':385 'ecosystem':182 'els':1171 'end':459 'environ':645,1414 'environment-specif':1413 'essenti':36 'etc':599 'eval':850,883,888,980,1011 'eval-run':979,1010 'evalu':27,58,101,116,179,779,782,841,843,898,911,951,1300,1318,1322,1377 'executor':616 'exit':362 'expect':942,991 'experi':136 'expert':2,13,53,1419 'expertis':96 'extract':1219,1226 'f':852,1117 'featur':274,426 'feedback':349 'fetch':652,762 'first':662 'float':849,891 'flow':808,1135 'flush':360 'function':210,1024 'function-bas':1023 
'generat':283,286,327,450,701,711,965,968,1057,1095,1097,1100,1188,1278 'generation.end':330 'get':670,774,810,1053,1071 'go':944 'gpt':290,295,318,400,435,480,548,692,717,753,877,1111 'gpt-4o':294,317,399,434,479,547,691,716,752,1110 'gpt-4o-mini':876 'gpt-4o-response':289 'graph':1197 'greet':412,490 'group':270 'handler':519,521,565,575,583,585,632,1039,1182,1245 'hello':306,324,406,486,571,594 'help':359,557 'high':166 'high-volum':165 'host':153,161,189,242,247,529 'id':263,267,416,420,532,536,1041,1146,1150 'impli':1352,1357,1363,1369,1375,1381,1387 'import':230,363,383,463,467,506,510,514,580,602,648,795,1029 'improv':41,92,792 'infrastructur':163 'initi':232 'input':301,336,560,570,593,624,934,985,1137,1158,1428 'instrument':217,1017 'integr':30,135,195,368,492 'invok':567 'issu':680 'item':922,927,962,975,1007 'item.expected':996 'item.input':970,986 'item.link':1008 'iter':1262 'john':679 'join':1085 'js/ts':192 'judg':840,907 'jupyt':211 'key':156,237,240,524,527 'know':69 'label':757,764,772,778 'langchain':32,196,491,494,500,504,542,581 'langchain-bas':499 'langchain.agents':601 'langchain.callbacks.manager.set':582 'langchain_core.prompts':509 'langfus':1,4,12,15,147,184,186,221,229,231,234,235,408,517,520,574,584,631,647,649,650,651,655,794,796,797,798,1031,1233,1243,1267,1304,1353 'langfuse-specif':407 'langfuse.callback':513 'langfuse.create':729,914,925 'langfuse.decorators':1028 'langfuse.flush':366 'langfuse.get':664,766,955 'langfuse.openai':382,466 'langfuse.trace':257,706,804,977 'langfuse_context.score':1162 'langfuse_context.update':1142 'langgraph':1196,1198,1230,1234,1240,1305,1338 'latenc':85,177,1294 'latest':775 'like':76 'limit':1390 'link':700,721,972,1003 'llamaindex':33,197 'llm':9,20,42,47,55,71,108,130,154,218,226,284,312,544,563,612,783,838,906,1096,1249,1283,1358,1364,1376,1383,1389 'llm-as-judg':837 'log':281 'main':474,1134 'make':310 'manag':26,113,120,634,642,1371 'manual':799 'match':1399 'may':168 'mention':1350,1355,1361,1367,1373,1379,1385 'messag':302,320,402,437,482,552,694,749,884,1043,1055,1059,1073,1082,1102,1113,1123,1183,1189 'message.content':334,1127 'message.content.strip':894 'metadata':273,307,425,1129 'metric':67 'mini':879 'miss':1436 'model':293,297,316,398,433,478,546,688,690,715,751,875,881,1109 'monitor':39,74,106,124,1204,1216,1228,1260,1263,1382 'multi':1207 'multi-ag':1206 'n':1084 'name':258,288,346,411,414,447,487,678,707,713,731,805,813,826,903,916,929,978,983,999,1165 'need':73,169,1199,1211,1223 'negat':355 'nest':1047 'notebook':212 'number':868 'observ':10,21,48,56,111,1030,1033,1048,1063,1092,1132,1178,1229,1359 'open':7,18 'open-sourc':6,17 'openai':35,198,367,372,378,384,390,505,604,609,685 'openai-bas':377 'openai.chat.completions.create':315,397,432,687,874,1108 'optim':104,170,1296 'output':329,331,339,784,865,943,987,997,1218,1222,1256,1270,1308,1315,1342,1408 'paramet':298,410 'parent':1068 'pass':564 'password':941,949 'pattern':213,1015 'perform':123,1332 'permiss':1429 'pipelin':1265 'pk':238,525 'platform':11,22,205 'posit':353 'prerequisit':128 'primari':183 'print':455 'process':1156 'product':45,279,758,773,1154 'prompt':25,91,99,112,127,550,562,614,633,638,643,653,663,665,672,703,719,720,726,730,736,765,767,851,889,1327,1370 'prompt.compile':676 'prompt.config.get':689,697 'public':236,523 'python':144,190 'python/js':207 'qa':807,919,932,959 'qa-flow':806 'qualiti':84,105,789,856,905,1258 'quality-llm-judg':904 'question':682,824,845,861,862,900,935,971 'rag':1076,1264,1274 'rate':853 'real':173 
'real-tim':172 'regress':95 'relat':271,1333 'relev':814,1080,1287 'replac':388 'request':255 'requir':142,162,180,1427 'reset':939,948 'respons':292,314,396,475,568,591,686,714,811,821,844,847,855,863,864,899,901,966,967,969,984,988,995,1056,1058,1062,1101,1107 'response.choices':332,1125 'response.usage.completion':340 'response.usage.prompt':337 'result':622,873,1155,1170,1174,1185,1191 'result.choices':892 'retriev':598,1077,1276,1281 'retriever.get':1079 'return':890,1061,1083,1124,1173,1190 'review':1420 'role':46,303,321,403,438,483,737,745,885,1114,1120 'run':950,981,1012,1321 'safeti':1430 'scale':819 'scope':157,1401 'score':118,342,781,800,895,897,909,989,1131,1159,1255,1286 'sdk':191,193,199,201,204,373 'secret':239,526 'secur':947 'self':152,160,188,246 'self-host':151,159,187,245 'serverless':209,365 'session':266,268,415,417,531,533,1149,1151 'set':577,946 'setup':181,216 'similar':992,994,1000,1002 'sk':241,528 'skill':143,1232,1266,1303,1334,1393 'skill-langfuse' 'softwar':78 'sourc':8,19 'source-sickn33' 'span':65,1051,1066,1098 'specif':409,723,763,1415 'stage':760 'stop':1421 'stori':444,449 'story-gener':448 'str':846,848,1042,1044,1045,1074,1075,1103,1105,1106,1138,1184 'strategi':102 'stream':430,431,445,454 'structur':1217,1221,1269,1307,1314,1341 'structured-output':1220,1268,1306,1340 'substitut':1411 'success':1166,1433 'support':277,668,709,734,743,770,918,931,958 'support-chat':708 'support-qa-v1':917,930,957 'system':553,738,1115,1302 'systemat':785 'tag':278,423,1153 'task':1397 'tell':441 'temperatur':299,696,698,755 'term':62 'test':126,424,1417 'think':60 'time':174 'token':338,341 'tool':605,610,613,620,621,1252 'topic-agent-skills' 'topic-agentic-skills' 'topic-ai-agent-skills' 'topic-ai-agents' 'topic-ai-coding' 'topic-ai-workflows' 'topic-antigravity' 'topic-antigravity-skills' 'topic-claude-code' 'topic-claude-code-skills' 'topic-codex-cli' 'topic-codex-skills' 'trace':24,64,97,109,140,215,251,256,272,344,370,395,413,493,590,705,803,976,1004,1009,1036,1069,1141,1144,1161,1164,1247,1280,1324,1365 'trace.generation':287,712,982 'trace.score':345,812,825,902,998 'track':122,1291 'tradit':77 'treat':1406 'trigger':1194 'true':446 'type':835,1094 'typescript/javascript':146 'ui':658 'understand':138 'updat':1139 'url':248 'usag':335 'use':87,224,376,498,539,641,683,788,1022,1253,1348,1391 'user':254,262,264,304,322,348,357,404,419,421,439,484,535,537,559,746,748,886,1040,1121,1136,1145,1147,1157,1349,1354,1360,1366,1372,1378,1384 'user-feedback':347 'v1':920,933,960 'v2':280,669 'v3':735,771 'valid':1416 'valu':350,815,828,908,1001,1167 'variabl':674 'vercel':202 'version':100,115,635,704,724,1328 'via':660,727 'volum':167 'weather':628 'well':1336 'work':428,460,595,1175,1335 
'workflow':1235,1271,1309","prices":[{"id":"5d9bc78e-a86e-4c21-8d1b-2d1926a1bb23","listingId":"a6be7d90-192b-46c9-a005-44582a3cdc32","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:39:40.447Z"}],"sources":[{"listingId":"a6be7d90-192b-46c9-a005-44582a3cdc32","source":"github","sourceId":"sickn33/antigravity-awesome-skills/langfuse","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/langfuse","isPrimary":false,"firstSeenAt":"2026-04-18T21:39:40.447Z","lastSeenAt":"2026-04-23T12:51:08.536Z"}],"details":{"listingId":"a6be7d90-192b-46c9-a005-44582a3cdc32","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"langfuse","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34726,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-23T06:41:03Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"a3ae543b81047476b050c473359e0d1b1c5baf81","skill_md_path":"skills/langfuse/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/langfuse"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"langfuse","description":"Expert in Langfuse - the open-source LLM observability platform."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/langfuse"},"updatedAt":"2026-04-23T12:51:08.536Z"}}