Conkurrence
Statistical consensus metrics measuring AI self-agreement across repeated responses.
What it does
Conkurrence measures whether an AI model agrees with itself by running repeated queries and applying statistical consensus metrics. It helps detect response variance and assess confidence in model outputs by quantifying how consistently the model answers the same question.
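The approach described above can be sketched as follows. This is a minimal illustration of repeated-query consensus scoring, not Conkurrence's actual implementation; the function names and the exact-match normalization are assumptions for the example.

```python
from collections import Counter
from itertools import combinations

def consensus_rate(responses: list[str]) -> float:
    """Pairwise self-agreement: fraction of response pairs that match exactly
    after trivial normalization (whitespace and case). Hypothetical metric."""
    if len(responses) < 2:
        return 1.0
    pairs = list(combinations(responses, 2))
    agree = sum(1 for a, b in pairs if a.strip().lower() == b.strip().lower())
    return agree / len(pairs)

def majority_answer(responses: list[str]) -> tuple[str, float]:
    """Most common normalized response and its share of all responses."""
    counts = Counter(r.strip().lower() for r in responses)
    answer, n = counts.most_common(1)[0]
    return answer, n / len(responses)

# Four repeated answers to the same question: three agree, one diverges.
samples = ["Paris", "paris", "Lyon", "Paris"]
print(consensus_rate(samples))    # 0.5 -- 3 of 6 pairs agree
print(majority_answer(samples))   # ('paris', 0.75)
```

A low consensus rate or a weak majority share signals high response variance, i.e. low confidence in the model's answer to that question.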
Capabilities
mcp · transport-stdio · open-source
Server
Quality
0.55 / 1.00
Deterministic score 0.55 from registry signals: indexed on pulsemcp · has source repo · registry-generated description present
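A deterministic score built from boolean registry signals can be sketched as below. The weights here are purely illustrative assumptions chosen so the three listed signals sum to the displayed 0.55; the registry's real formula is not published in this listing.

```python
# Hypothetical per-signal weights -- NOT the registry's actual values.
SIGNAL_WEIGHTS = {
    "indexed_on_pulsemcp": 0.25,
    "has_source_repo": 0.20,
    "registry_generated_description": 0.10,
}

def quality_score(signals: dict[str, bool]) -> float:
    """Deterministic score: sum the weights of the signals that are present."""
    return round(sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name)), 2)

print(quality_score({
    "indexed_on_pulsemcp": True,
    "has_source_repo": True,
    "registry_generated_description": True,
}))  # 0.55
```

Because the score is a pure function of the signals, re-running enrichment over the same registry data always reproduces the same value.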
Provenance
Indexed from pulsemcp
Enriched 2026-05-02 16:21:37Z · deterministic:mcp:v1 · v1
First seen 2026-04-18
Last seen 2026-05-02