{"id":"16f8c8ed-cd94-4204-b375-044c3cd45f15","shortId":"dFbhL6","kind":"skill","title":"langfuse","tagline":"Debug AI traces, find exceptions, analyze sessions, and manage prompts via Langfuse MCP. Use when debugging AI pipelines, investigating errors, analyzing latency, managing prompt versions, or setting up Langfuse. Triggers on \"langfuse\", \"traces\", \"debug AI\", \"find exceptions\", \"w","description":"# Langfuse Skill\n\nDebug your AI systems through Langfuse observability.\n\n**Triggers:** langfuse, traces, debug AI, find exceptions, set up langfuse, what went wrong, why is it slow, datasets, evaluation sets\n\n## Setup\n\n**Step 1:** Get credentials from https://cloud.langfuse.com → Settings → API Keys\n\nIf self-hosted, use your instance URL for `LANGFUSE_HOST` and create keys there.\n\n**Step 2:** Install MCP (pick one):\n\n```bash\n# Claude Code (project-scoped, shared via .mcp.json)\nclaude mcp add \\\n  --scope project \\\n  --env LANGFUSE_PUBLIC_KEY=pk-... \\\n  --env LANGFUSE_SECRET_KEY=sk-... \\\n  --env LANGFUSE_HOST=https://cloud.langfuse.com \\\n  langfuse -- uvx --python 3.11 langfuse-mcp\n\n# Codex CLI (user-scoped, stored in ~/.codex/config.toml)\ncodex mcp add langfuse \\\n  --env LANGFUSE_PUBLIC_KEY=pk-... \\\n  --env LANGFUSE_SECRET_KEY=sk-... 
\\\n  --env LANGFUSE_HOST=https://cloud.langfuse.com \\\n  -- uvx --python 3.11 langfuse-mcp\n```\n\n**Step 3:** Restart CLI, verify with `/mcp` (Claude) or `codex mcp list` (Codex)\n\n**Step 4:** Test: `fetch_traces(age=60)`\n\n### Read-Only Mode\n\nFor safer observability without risk of modifying prompts or datasets, enable read-only mode:\n\n```bash\n# CLI flag\nlangfuse-mcp --read-only\n\n# Or environment variable\nLANGFUSE_MCP_READ_ONLY=true\n```\n\nThis disables write tools: `create_text_prompt`, `create_chat_prompt`, `update_prompt_labels`, `create_dataset`, `create_dataset_item`, `delete_dataset_item`.\n\n### Default Output Mode\n\nIf you want MCP clients to default to writing full payloads to files when they omit `output_mode`, configure:\n\n```bash\nlangfuse-mcp --default-output-mode full_json_file\n\n# Or via environment variable\nLANGFUSE_MCP_DEFAULT_OUTPUT_MODE=full_json_file\n```\n\nFor manual `.mcp.json` setup or troubleshooting, see `references/setup.md`.\n\n---\n\n## Playbooks\n\n### \"Where are the errors?\"\n\n```\nfind_exceptions(age=1440, group_by=\"file\")\n```\n→ Shows error counts by file. Pick the worst offender.\n\n```\nfind_exceptions_in_file(filepath=\"src/ai/chat.py\", age=1440)\n```\n→ Lists specific exceptions. Grab a trace_id.\n\n```\nget_exception_details(trace_id=\"...\")\n```\n→ Full stacktrace and context.\n\n---\n\n### \"What happened in this interaction?\"\n\n```\nfetch_traces(age=60, user_id=\"...\")\n```\n→ Find the trace. Note the trace_id.\n\nIf you don't know the user_id, start with:\n```\nfetch_traces(age=60)\n```\n\n```\nfetch_trace(trace_id=\"...\", include_observations=true)\n```\n→ See all LLM calls in the trace.\n\n```\nfetch_observation(observation_id=\"...\")\n```\n→ Inspect a specific generation's input/output.\n\n---\n\n### \"Why is it slow?\"\n\n```\nfetch_observations(age=60, type=\"GENERATION\")\n```\n→ Find recent LLM calls. 
Look for high latency.\n\n```\nfetch_observation(observation_id=\"...\")\n```\n→ Check token counts, model, timing.\n\n---\n\n### \"What's this user experiencing?\"\n\n```\nget_user_sessions(user_id=\"...\", age=1440)\n```\n→ List their sessions.\n\n```\nget_session_details(session_id=\"...\")\n```\n→ See all traces in the session.\n\n---\n\n### \"Manage datasets\"\n\n```\nlist_datasets()\n```\n→ See all datasets.\n\n```\nget_dataset(name=\"evaluation-set-v1\")\n```\n→ Get dataset details.\n\n```\nlist_dataset_items(dataset_name=\"evaluation-set-v1\", page=1, limit=10)\n```\n→ Browse items in the dataset.\n\n```\ncreate_dataset(name=\"qa-test-cases\", description=\"QA evaluation set\")\n```\n→ Create a new dataset.\n\n```\ncreate_dataset_item(\n  dataset_name=\"qa-test-cases\",\n  input={\"question\": \"What is 2+2?\"},\n  expected_output={\"answer\": \"4\"}\n)\n```\n→ Add test cases.\n\n```\ncreate_dataset_item(\n  dataset_name=\"qa-test-cases\",\n  item_id=\"item_123\",\n  input={\"question\": \"What is 3+3?\"},\n  expected_output={\"answer\": \"6\"}\n)\n```\n→ Upsert: updates existing item by id or creates if missing.\n\n---\n\n### \"Manage prompts\"\n\n```\nlist_prompts()\n```\n→ See all prompts with labels.\n\n```\nget_prompt(name=\"...\", label=\"production\")\n```\n→ Fetch current production version.\n\n```\ncreate_text_prompt(name=\"...\", prompt=\"...\", labels=[\"staging\"])\n```\n→ Create new version in staging.\n\n```\nupdate_prompt_labels(name=\"...\", version=N, labels=[\"production\"])\n```\n→ Promote to production. 
(To roll back, re-apply the label to an older version.)\n\n---\n\n## Quick Reference\n\n| Task | Tool |\n|------|------|\n| List traces | `fetch_traces(age=N)` |\n| Get trace details | `fetch_trace(trace_id=\"...\", include_observations=true)` |\n| List LLM calls | `fetch_observations(age=N, type=\"GENERATION\")` |\n| Get observation | `fetch_observation(observation_id=\"...\")` |\n| Error count | `get_error_count(age=N)` |\n| Find exceptions | `find_exceptions(age=N, group_by=\"file\")` |\n| List sessions | `fetch_sessions(age=N)` |\n| User sessions | `get_user_sessions(user_id=\"...\", age=N)` |\n| List prompts | `list_prompts()` |\n| Get prompt | `get_prompt(name=\"...\", label=\"production\")` |\n| List datasets | `list_datasets()` |\n| Get dataset | `get_dataset(name=\"...\")` |\n| List dataset items | `list_dataset_items(dataset_name=\"...\", limit=N)` |\n| Create/update dataset item | `create_dataset_item(dataset_name=\"...\", item_id=\"...\")` |\n\n`age` = minutes to look back (max 10080 = 7 days)\n\n---\n\n## Troubleshooting\n\n### MCP connection fails\n- Verify credentials: check `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, `LANGFUSE_HOST`\n- Restart the CLI after adding or updating MCP config\n- Test MCP independently: `fetch_traces(age=60)` — if this fails, the issue is MCP, not the skill\n- See `references/setup.md` for detailed troubleshooting\n\n### No traces found\n- Increase the `age` parameter (default lookback may be too short)\n- Verify your application is sending traces to the correct Langfuse project\n- Check `LANGFUSE_HOST` points to the right instance (cloud vs self-hosted)\n\n### Permission denied\n- Regenerate API keys from the Langfuse dashboard\n- Ensure keys have the required scopes for the operation\n- Write operations require keys with write access and fail when the server runs in read-only mode\n\n---\n\n## References\n\n- `references/tool-reference.md` — Full parameter docs, filter semantics, response schemas\n- `references/setup.md` — Manual setup, troubleshooting, advanced 
configuration","tags":["langfuse","mcp","avivsinai","agent-skills","ai-agents","claude-code","codex","developer-tools","genai","llm","mcp-server","observability"],"capabilities":["skill","source-avivsinai","skill-langfuse","topic-agent-skills","topic-ai-agents","topic-claude-code","topic-codex","topic-developer-tools","topic-genai","topic-langfuse","topic-llm","topic-mcp","topic-mcp-server","topic-observability","topic-tracing"],"categories":["langfuse-mcp"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/avivsinai/langfuse-mcp/langfuse","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add avivsinai/langfuse-mcp","source_repo":"https://github.com/avivsinai/langfuse-mcp","install_from":"skills.sh"}},"qualityScore":"0.492","qualityRationale":"deterministic score 0.49 from registry signals: · indexed on github topic:agent-skills · 84 github stars · SKILL.md body (5,994 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-05-02T12:55:16.659Z","embedding":null,"createdAt":"2026-04-18T22:12:32.006Z","updatedAt":"2026-05-02T12:55:16.659Z","lastSeenAt":"2026-05-02T12:55:16.659Z","tsv":"'+2':515 '+3':541 '/.codex/config.toml':142 '/mcp':173 '1':71,478 '10':480 '10080':717 '123':535 '1440':305,325,436 '2':95,514 '3':168,540 '3.11':131,163 '4':181,519 '6':545 '60':186,350,373,405,747 '7':718 'add':111,145,520 'adding/updating':738 'advanc':842 'age':185,304,324,349,372,404,435,613,630,645,651,660,669,711,746,768 'ai':3,18,36,44,53 'analyz':7,22 'answer':518,544 'api':77,803 'appli':600 'applic':778 'back':715 'bash':100,206,266 'brows':481 'call':384,411,627 'case':492,509,522,531 'chat':231 'check':420,726,787 'claud':101,109,174 'cli':136,170,207,736 
'client':251 'cloud':795 'cloud.langfuse.com':75,127,160 'code':102 'codex':135,143,176,179 'config':740 'configur':265,843 'connect':722 'context':341 'correct':784 'count':311,422,641,644 'creat':91,227,230,236,238,486,497,501,523,553,574,581,704 'create/update':701 'credenti':73,725 'current':571 'dashboard':807 'dataset':66,200,237,239,242,452,454,457,459,466,469,471,485,487,500,502,504,524,526,683,685,687,689,692,695,697,702,705,707 'day':719 'debug':2,17,35,42,52 'default':244,253,271,283,770 'default-output-mod':270 'delet':241 'deni':801 'descript':493 'detail':335,442,467,617,761 'disabl':224 'doc':833 'enabl':201 'ensur':808 'env':114,119,124,147,152,157 'environ':216,279 'error':21,301,310,640,643 'evalu':67,462,474,495 'evaluation-set-v1':461,473 'except':6,38,55,303,319,328,334,648,650 'exist':548 'expect':516,542 'experienc':429 'fail':723,750 'fetch':183,347,370,374,388,402,416,570,611,618,628,636,658,744 'file':259,276,288,308,313,321,655 'filepath':322 'filter':834 'find':5,37,54,302,318,353,408,647,649 'flag':208 'found':765 'full':256,274,286,338,831 'generat':395,407,633 'get':72,333,430,440,458,465,565,615,634,642,664,675,677,686,688 'grab':329 'group':306,653 'happen':343 'high':414 'host':82,89,126,159,734,789,799 'id':332,337,352,359,367,377,391,419,434,444,533,551,621,639,668,710 'includ':378,622 'increas':766 'independ':743 'input':510,536 'input/output':397 'inspect':392 'instal':96 'instanc':85,794 'interact':346 'investig':20 'issu':752 'item':240,243,470,482,503,525,532,534,549,693,696,703,706,709 'json':275,287 'key':78,92,117,122,150,155,729,732,804,809,823 'know':364 'label':235,564,568,579,588,592,601,680 'langfus':1,13,30,33,40,47,50,58,88,115,120,125,128,133,146,148,153,158,165,210,218,268,281,727,730,733,785,788,806 'langfuse-mcp':132,164,209,267 'latenc':23,415 'limit':479,699 'list':178,326,437,453,468,558,609,625,656,671,673,682,684,691,694 'llm':383,410,626 'look':412,714 'lookback':771 'manag':10,24,451,556 'manual':290,839 
'max':716 'may':772 'mcp':14,97,110,134,144,166,177,211,219,250,269,282,721,739,742,754 'mcp.json':108,291 'minut':712 'miss':555 'mode':190,205,246,264,273,285,828 'model':423 'modifi':197 'n':591,614,631,646,652,661,670,700 'name':460,472,488,505,527,567,577,589,679,690,698,708 'new':499,582 'note':356 'observ':48,193,379,389,390,403,417,418,623,629,635,637,638 'offend':317 'older':603 'omit':262 'one':99 'oper':816,818 'output':245,263,272,284,517,543 'page':477 'paramet':769,832 'payload':257 'permiss':800 'pick':98,314 'pipelin':19 'pk':118,151 'playbook':297 'point':790 'product':569,572,593,596,681 'project':104,113,786 'project-scop':103 'promot':594 'prompt':11,25,198,229,232,234,557,559,562,566,576,578,587,672,674,676,678 'public':116,149,728 'python':130,162 'qa':490,494,507,529 'qa-test-cas':489,506,528 'question':511,537 'quick':605 're':599 're-appli':598 'read':188,203,213,220,821,826 'read-on':187,202,212,825 'read-writ':820 'recent':409 'refer':606,829 'references/setup.md':296,759,838 'references/tool-reference.md':830 'regener':802 'requir':812,819 'respons':836 'restart':169,735 'right':793 'risk':195 'rollback':597 'safer':192 'schema':837 'scope':105,112,139,813 'secret':121,154,731 'see':295,381,445,455,560,758 'self':81,798 'self-host':80,797 'semant':835 'send':780 'session':8,432,439,441,443,450,657,659,663,666 'set':28,56,68,76,463,475,496 'setup':69,292,840 'share':106 'short':775 'show':309 'sk':123,156 'skill':41,757 'skill-langfuse' 'slow':65,401 'source-avivsinai' 'specif':327,394 'src/ai/chat.py':323 'stacktrac':339 'stage':580,585 'start':368 'step':70,94,167,180 'store':140 'system':45 'task':607 'test':182,491,508,521,530,741 'text':228,575 'time':424 'token':421 'tool':226,608 'topic-agent-skills' 'topic-ai-agents' 'topic-claude-code' 'topic-codex' 'topic-developer-tools' 'topic-genai' 'topic-langfuse' 'topic-llm' 'topic-mcp' 'topic-mcp-server' 'topic-observability' 'topic-tracing' 
'trace':4,34,51,184,331,336,348,355,358,371,375,376,387,447,610,612,616,619,620,745,764,781 'trigger':31,49 'troubleshoot':294,720,762,841 'true':222,380,624 'type':406,632 'updat':233,547,586 'upsert':546 'url':86 'use':15,83 'user':138,351,366,428,431,433,662,665,667 'user-scop':137 'uvx':129,161 'v1':464,476 'variabl':217,280 'verifi':171,724,776 'version':26,573,583,590,604 'via':12,107,278 'vs':796 'w':39 'want':249 'went':60 'without':194 'worst':316 'write':225,255,817,822 'wrong':61","prices":[{"id":"9dc3951a-f8d2-4f6b-9aa4-5855c030e8b8","listingId":"16f8c8ed-cd94-4204-b375-044c3cd45f15","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"avivsinai","category":"langfuse-mcp","install_from":"skills.sh"},"createdAt":"2026-04-18T22:12:32.006Z"}],"sources":[{"listingId":"16f8c8ed-cd94-4204-b375-044c3cd45f15","source":"github","sourceId":"avivsinai/langfuse-mcp/langfuse","sourceUrl":"https://github.com/avivsinai/langfuse-mcp/tree/main/skills/langfuse","isPrimary":false,"firstSeenAt":"2026-04-18T22:12:32.006Z","lastSeenAt":"2026-05-02T12:55:16.659Z"}],"details":{"listingId":"16f8c8ed-cd94-4204-b375-044c3cd45f15","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"avivsinai","slug":"langfuse","github":{"repo":"avivsinai/langfuse-mcp","stars":84,"topics":["agent-skills","ai-agents","claude-code","codex","developer-tools","genai","langfuse","llm","mcp","mcp-server","observability","tracing"],"license":"mit","html_url":"https://github.com/avivsinai/langfuse-mcp","pushed_at":"2026-04-29T13:21:27Z","description":"A Model Context Protocol (MCP) server for Langfuse, enabling AI agents to query Langfuse trace data for enhanced debugging and 
observability","skill_md_sha":"1ead0d7839cc510b2a22c99dde95bb147d35d4f6","skill_md_path":"skills/langfuse/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/avivsinai/langfuse-mcp/tree/main/skills/langfuse"},"layout":"multi","source":"github","category":"langfuse-mcp","frontmatter":{"name":"langfuse","description":"Debug AI traces, find exceptions, analyze sessions, and manage prompts via Langfuse MCP. Use when debugging AI pipelines, investigating errors, analyzing latency, managing prompt versions, or setting up Langfuse. Triggers on \"langfuse\", \"traces\", \"debug AI\", \"find exceptions\", \"what went wrong\", \"why is it slow\", \"datasets\", \"evaluation sets\"."},"skills_sh_url":"https://skills.sh/avivsinai/langfuse-mcp/langfuse"},"updatedAt":"2026-05-02T12:55:16.659Z"}}