MCP · quality 0.56

CC-Meta (Prompt Evaluator)


Price
free
Protocol
mcp
Verified
no

What it does

Evaluates prompt quality and effectiveness using OpenAI or Anthropic models, providing numerical scores, strengths analysis, improvement suggestions, and rewrite recommendations with customizable evaluation criteria.

CC-Meta (Claude Code Metaprompter) provides AI-powered prompt evaluation through an MCP server that analyzes prompts for clarity, specificity, and effectiveness using OpenAI or Anthropic models. Built with TypeScript and the Vercel AI SDK, it offers two core tools: a ping function for connection testing and an evaluate function that returns detailed feedback, including numerical scores, strengths analysis, improvement suggestions, and rewrite recommendations. The implementation supports multiple AI models (OpenAI's o3, Anthropic's Claude Opus 4 and Sonnet 4) with flexible API key configuration, includes a convenient slash-command interface (/meta), and stores customizable evaluation criteria in a separate prompt template file, making it well suited to developers who want to iterate on their Claude Code prompts without leaving the terminal.
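To make the evaluate tool's feedback concrete, here is a minimal TypeScript sketch of what its result might look like and how a /meta-style command could render it in the terminal. The interface fields and function names are illustrative assumptions, not CC-Meta's actual schema.

```typescript
// Hypothetical shape of an evaluate-tool result; field names are
// assumptions for illustration, not CC-Meta's documented schema.
interface PromptEvaluation {
  score: number;          // numerical quality score, e.g. 0-10
  strengths: string[];    // what the prompt does well
  improvements: string[]; // suggested changes
  rewrite?: string;       // optional full rewrite recommendation
}

// Render an evaluation as the kind of terminal feedback a /meta
// slash command might print.
function formatEvaluation(e: PromptEvaluation): string {
  const lines = [`Score: ${e.score}/10`];
  if (e.strengths.length) lines.push(`Strengths: ${e.strengths.join("; ")}`);
  if (e.improvements.length) lines.push(`Improve: ${e.improvements.join("; ")}`);
  if (e.rewrite) lines.push(`Rewrite: ${e.rewrite}`);
  return lines.join("\n");
}

const sample: PromptEvaluation = {
  score: 7,
  strengths: ["clear goal"],
  improvements: ["specify output format"],
};
console.log(formatEvaluation(sample));
```

Keeping the result a plain structured object like this is what lets the same feedback be scored numerically and displayed as readable text in one pass.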

Capabilities

mcp · transport-stdio · open-source

Server

Transport: stdio
Protocol: mcp
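Since the server speaks MCP over stdio, a client launches it as a subprocess. A sketch of the common `mcpServers` config-file format, with the server name, launch command, and key values as placeholders rather than the project's documented install path:

```json
{
  "mcpServers": {
    "cc-meta": {
      "command": "node",
      "args": ["dist/index.js"],
      "env": { "OPENAI_API_KEY": "<your-key>" }
    }
  }
}
```

The `env` block is where the flexible API key configuration mentioned above would plug in (an Anthropic key could be supplied instead).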

Quality

0.56 / 1.00

Deterministic score 0.56 from registry signals: indexed on pulsemcp · has source repo · 5 GitHub stars · registry-generated description present
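A deterministic score like this reads as a sum of fixed per-signal weights. A minimal TypeScript sketch; the weights below are invented for illustration (chosen only so the example sums to 0.56) and are not the registry's actual weighting:

```typescript
// Hypothetical signal weights in basis points (1/10000); integer
// arithmetic avoids floating-point drift when summing.
const WEIGHTS: Record<string, number> = {
  indexedOnPulsemcp: 2000,
  hasSourceRepo: 1500,
  hasGithubStars: 1100,
  hasGeneratedDescription: 1000,
};

// Sum the weights of the signals a listing exhibits, ignoring unknowns.
function qualityScore(signals: string[]): number {
  const bps = signals.reduce((sum, s) => sum + (WEIGHTS[s] ?? 0), 0);
  return bps / 10000;
}

const score = qualityScore([
  "indexedOnPulsemcp",
  "hasSourceRepo",
  "hasGithubStars",
  "hasGeneratedDescription",
]);
console.log(score); // 0.56
```

Because the score depends only on observable registry signals, re-running the enrichment over the same listing always reproduces the same value.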

Provenance

Indexed from: pulsemcp
Enriched: 2026-04-30 14:21:54Z · deterministic:mcp:v1 · v1
First seen: 2026-04-21
Last seen: 2026-04-30

Agent access

CC-Meta (Prompt Evaluator) · Clawmart