{"id":"2618fe22-74d6-4abe-929d-81aaf6cd8b51","shortId":"K5Kwr4","kind":"skill","title":"openjudge","tagline":"Build custom LLM evaluation pipelines using the OpenJudge framework. Covers selecting and configuring graders (LLM-based, function-based, agentic), running batch evaluations with GradingRunner, combining scores with aggregators, applying evaluation strategies (voting, average), a","description":"# OpenJudge Skill\n\nBuild evaluation pipelines for LLM applications using the `openjudge` library.\n\n## When to Use This Skill\n\n- User wants to evaluate LLM output quality (correctness, relevance, hallucination, etc.)\n- User wants to compare two or more models and rank them\n- User wants to design a scoring rubric and automate evaluation\n- User wants to analyze evaluation results statistically\n- User wants to build a reward model or quality filter\n\n## Sub-documents — Read When Relevant\n\n| Topic | File | Read when… |\n|-------|------|------------|\n| Grader selection & configuration | `graders.md` | User needs to pick or configure an evaluator |\n| Batch evaluation pipeline | `pipeline.md` | User needs to run evaluation over a dataset |\n| Auto-generate graders from data | `generator.md` | No rubric yet; generate from labeled examples |\n| Analyze & compare results | `analyzer.md` | User wants win rates, statistics, or metrics |\n\nRead the relevant sub-document **before** writing any code.\n\n## Install\n\n```bash\npip install py-openjudge\n```\n\n## Architecture Overview\n\n```\nDataset (List[dict])\n    │\n    ▼\nGradingRunner                    ← orchestrates everything\n    │\n    ├─► Grader A ──► EvaluationStrategy ──► _aevaluate() ──► GraderScore / GraderRank\n    ├─► Grader B ──► EvaluationStrategy ──► _aevaluate() ──► GraderScore / GraderRank\n    └─► Grader C ...\n    │\n    ├─► Aggregator (optional)    ← combine multiple grader scores into one\n    │\n    └─► RunnerResult             ← {grader_name: [GraderScore, ...]}\n            │\n            ▼\n        Analyzer                 ← statistics, win rates, validation metrics\n```\n\n## 5-Minute Quick Start\n\nEvaluate responses for correctness using a built-in grader:\n\n```python\nimport asyncio\nfrom openjudge.models.openai_chat_model import OpenAIChatModel\nfrom openjudge.graders.common.correctness import CorrectnessGrader\nfrom openjudge.runner.grading_runner import GradingRunner\n\n# 1. Configure the judge model (OpenAI-compatible endpoint)\nmodel = OpenAIChatModel(\n    model=\"qwen-plus\",\n    api_key=\"sk-xxx\",\n    base_url=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n)\n\n# 2. Instantiate a grader\ngrader = CorrectnessGrader(model=model)\n\n# 3. Prepare dataset\ndataset = [\n    {\n        \"query\": \"What is the capital of France?\",\n        \"response\": \"Paris is the capital of France.\",\n        \"reference_response\": \"Paris.\",\n    },\n    {\n        \"query\": \"What is 2 + 2?\",\n        \"response\": \"The answer is five.\",\n        \"reference_response\": \"4.\",\n    },\n]\n\n# 4. 
## Key Data Types

| Type | Description |
|------|-------------|
| `GraderScore` | Pointwise result: `.score` (float), `.reason` (str), `.metadata` (dict) |
| `GraderRank` | Listwise result: `.rank` (List[int]), `.reason` (str), `.metadata` (dict) |
| `GraderError` | Error during evaluation: `.error` (str), `.reason` (str) |
| `RunnerResult` | `Dict[str, List[GraderResult]]` — keyed by grader name |

## Result Handling Pattern

```python
from openjudge.graders.schema import GraderScore, GraderRank, GraderError

# `results` is the RunnerResult returned by `runner.arun(dataset)`.
for grader_name, grader_results in results.items():
    for i, result in enumerate(grader_results):
        if isinstance(result, GraderScore):
            print(f"{grader_name}[{i}]: score={result.score}")
        elif isinstance(result, GraderRank):
            print(f"{grader_name}[{i}]: rank={result.rank}")
        elif isinstance(result, GraderError):
            print(f"{grader_name}[{i}]: ERROR — {result.error}")
```
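Before reaching for the full `Analyzer` (see `analyzer.md`), a plain-Python reduction over these result types is often enough. The `summarize` helper below is a hypothetical hand-rolled stand-in, not part of `openjudge`; it collapses a `RunnerResult` into a mean score and an error rate per grader:

```python
from statistics import mean

from openjudge.graders.schema import GraderScore, GraderError

def summarize(results):
    """Reduce a RunnerResult to {grader_name: (mean_score, error_rate)}.

    `results` is the dict returned by GradingRunner.arun; this helper is
    a rough stand-in for the library's Analyzer (see analyzer.md).
    """
    summary = {}
    for grader_name, grader_results in results.items():
        scores = [r.score for r in grader_results if isinstance(r, GraderScore)]
        errors = sum(isinstance(r, GraderError) for r in grader_results)
        summary[grader_name] = (
            mean(scores) if scores else None,
            errors / len(grader_results) if grader_results else 0.0,
        )
    return summary

# e.g. {"correctness": (3.0, 0.0)} for the quick-start run above
```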
chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-05-02T18:53:08.298Z","embedding":null,"createdAt":"2026-04-18T21:57:29.052Z","updatedAt":"2026-05-02T18:53:08.298Z","lastSeenAt":"2026-05-02T18:53:08.298Z","tsv":"'/compatible-mode/v1':276,556 '0':356 '1':252,367,369 '2':277,309,310 '3':285 '4':318,319,379 '4o':509,525 '5':220,358 '8':333 'accept':485 'accur':362 'aevalu':191,197 'agent':22 'aggreg':31,202 'analyz':90,152,214 'analyzer.md':155 'answer':313,376 'api':267,510,526,547 'appli':32 'applic':45 'architectur':180 'async':322 'asyncio':236 'asyncio.run':352 'auto':139,517 'auto-cr':516 'auto-gener':138 'autom':85 'averag':36 'await':335 'b':195,514 'base':18,21,272,483,552 'basechatmodel':488 'bash':174 'batch':24,126 'build':2,40,97 'built':231 'built-in':230 'c':201 'capit':293,300,366 'cfg':521,533 'chat':239,500 'code':172 'combin':28,204 'compar':69,153 'compat':259,536 'concurr':332 'config':328,493 'configur':14,116,123,253,479 'correct':62,227,329,344 'correctnessgrad':246,282,530 'cover':11 'creat':518 'custom':3 'dashscop':538 'dashscope.aliyuncs.com':275,555 'dashscope.aliyuncs.com/compatible-mode/v1':274,554 'data':143,381 'dataset':137,182,287,288,337 'def':323 'descript':384 'design':80 'dict':184,393,403,413,492,515 'document':106,168 'either':486 'elif':456,467 'endpoint':260,537 'enumer':342,442 'error':405,408,476 'etc':65,540 'evalu':5,25,33,41,58,86,91,125,127,134,224,321,407 'evaluationstrategi':190,196 'everyth':187 'exampl':151 'expect':354 'f':346,450,461,472 'file':111 'filter':103 'five':315,377 'float':389 'framework':10 'franc':295,302 'function':20 'function-bas':19 'generat':140,148 'generator.md':144 'give':373 'gpt':508,524 'gpt-4o':507,523 'grader':15,114,141,188,194,200,206,211,233,280,281,327,330,419,432,434,443,451,462,473,484,529 'gradererror':404,430,470 'graderrank':193,199,394,429,459 'graderresult':416 'graders.md':117 'graderscor':192,198,213,385,428,448 'gradingrunn':27,185,251,326 'hallucin':64 'handl':422 'import':235,241,245,250,427,502 'instal':173,176 'instanc':489,497 'instanti':278 'int':399 'isinst':446,457,468 'judg':255 'key':268,380,417,511,527,548 'label':150 'librari':49 'list':183,398,415 'listwis':395 'llm':4,17,44,59,482 'llm-base':16,481 'local':539 'main':324,353 'max':331 'metadata':392,402 'metric':162,219 'minut':221 'model':73,100,240,256,261,263,283,284,478,501,504,506,520,522,531,532,541,543 'multipl':205 'name':212,420,433,452,463,474 'need':119,131 'one':209 'openai':258,535 'openai-compat':257,534 'openaichatmodel':242,262,503,505,519,542 'openjudg':1,9,38,48,179 'openjudge.graders.common.correctness':244 'openjudge.graders.schema':426 'openjudge.models.openai':238,499 'openjudge.runner.grading':248 'option':203,495,513 'orchestr':186 'output':60,355 'overview':181 'pari':297,305,364 'pattern':423 'pick':121 'pip':175 'pipelin':6,42,128 'pipeline.md':129 'plus':266,546 'pointwis':386 'prepar':286 'print':345,449,460,471 'py':178 'py-openjudg':177 'python':234,424,494 'qualiti':61,102 'queri':289,306 'quick':222 'qwen':265,545 'qwen-plus':264,544 'rank':75,397,465 'rate':159,217 'read':107,112,163 'reason':350,359,370,390,400,410 'refer':303,316 'relev':63,109,165 'respons':225,296,304,311,317,361,372 