MPP · tempo · quality 0.30

Unified LLM chat completions via OpenRouter, settled per-call on Tempo L2.

Price: per_session
Protocol: mpp
Verified: no

What it does

This MPP (Micropayment Protocol) endpoint proxies OpenRouter's /v1/chat/completions API, giving agents access to 100+ large language models — including GPT-4, Claude, and Llama variants — with per-call payment settled on Tempo L2. The endpoint follows the standard OpenAI-compatible chat completions interface, accepting a model identifier and a messages array, and returning streamed or non-streamed completions.
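As a hedged illustration of the request shape described above, a small helper can assemble the OpenAI-compatible payload (the field names follow the standard chat completions convention; they have not been verified against this endpoint):

```python
def build_chat_request(model, messages, max_tokens=None, temperature=None, stream=False):
    """Assemble an OpenAI-compatible chat completions payload.

    `model` is an OpenRouter-style identifier such as "openai/gpt-4";
    `messages` is a list of {"role": ..., "content": ...} dicts.
    Optional fields are included only when the caller sets them.
    """
    payload = {"model": model, "messages": messages}
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if temperature is not None:
        payload["temperature"] = temperature
    if stream:
        payload["stream"] = True
    return payload
```

The payload would then be POSTed to /v1/chat/completions with the payment header attached, as in the quick start below.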

Pricing varies by model, as each underlying LLM carries its own token-based cost. Because the endpoint is an MPP wrapper around OpenRouter, callers are expected to issue a POST request to /v1/chat/completions with a valid payment header. The 402 challenge returned on unauthenticated probes would normally advertise the payment method, intent, and per-call amount, but during this probe the endpoint returned 404 on HEAD and GET, which is consistent with a POST-only route rather than a definitively dead service. However, no 402 challenge was captured, so live status and exact pricing cannot be confirmed from the probe data alone.

Because no OpenAPI schema, documentation pages, or pricing details were reachable at the origin, nearly all information here is inferred from the Bazaar listing metadata. Users should verify liveness by issuing a POST request before relying on this endpoint in production.
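The liveness check suggested above can be sketched as a small status-interpretation helper. The mapping below assumes standard MPP semantics (an unpaid POST returning 402 with a payment challenge means the route is live); these interpretations are inferred, not confirmed for this endpoint:

```python
def interpret_probe(method: str, status: int) -> str:
    """Map an HTTP probe result to a rough liveness verdict
    for a suspected POST-only MPP route."""
    if method in ("HEAD", "GET") and status == 404:
        return "inconclusive: route may be POST-only"
    if method == "POST" and status == 402:
        return "live: payment challenge returned"
    if method == "POST" and status == 404:
        return "likely dead: POST also unrouted"
    if 200 <= status < 300:
        return "live: request accepted"
    return f"unexpected status {status}"
```

Under this reading, the captured HEAD/GET 404s are inconclusive on their own; only a POST probe settles the question.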

Capabilities

chat-completions · multi-model · openai-compatible · llm-proxy · streaming · tempo-l2-settlement · micropayment-protocol

Use cases

  • Agents selecting the best LLM per task via a single endpoint
  • Pay-per-call LLM access without API key management
  • Building multi-model pipelines with automatic micropayment settlement
  • Prototyping with various frontier models through one interface

Fit

Best for

  • AI agents needing on-demand access to many LLMs via one endpoint
  • Developers who want crypto-settled, per-call LLM usage without subscriptions
  • Applications that dynamically choose models based on cost or capability

Not for

  • High-volume batch inference where direct provider contracts are cheaper
  • Use cases requiring guaranteed sub-100ms latency with no payment overhead
  • Users who need a traditional API-key billing model with no crypto settlement

Quick start

curl -X POST https://openrouter.mpp.tempo.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <MPP_TOKEN>" \
  -d '{
    "model": "openai/gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
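Since the listing advertises streaming, responses may also arrive as OpenAI-style server-sent events. A hedged sketch of accumulating the streamed deltas follows; the `data: {...}` / `data: [DONE]` framing is the OpenAI convention and is assumed, not verified, for this proxy:

```python
import json

def collect_stream(lines):
    """Accumulate assistant text from OpenAI-style SSE lines.

    Each event line looks like 'data: {...}'; the stream
    conventionally ends with 'data: [DONE]'.
    """
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip comments and keep-alive blanks
        body = line[len("data: "):]
        if body == "[DONE]":
            break
        chunk = json.loads(body)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)
```

To request streaming, set `"stream": true` in the JSON body of the same POST shown above.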

Example

Request

{
  "model": "openai/gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Explain quantum entanglement in two sentences."
    }
  ],
  "max_tokens": 256,
  "temperature": 0.7
}

Response

{
  "id": "chatcmpl-abc123",
  "model": "openai/gpt-4",
  "usage": {
    "total_tokens": 70,
    "prompt_tokens": 28,
    "completion_tokens": 42
  },
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum entanglement is a phenomenon where two particles become correlated such that measuring one instantly determines the state of the other, regardless of distance. This non-local connection has been experimentally verified and underpins emerging quantum computing and communication technologies."
      },
      "finish_reason": "stop"
    }
  ],
  "created": 1717000000
}
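The `usage` block in the response above can drive per-call cost accounting. A sketch, assuming the usage field names shown in the example; the per-1k-token prices are placeholders supplied by the caller, since no concrete pricing could be verified for this endpoint:

```python
def estimate_cost(response, prompt_price_per_1k, completion_price_per_1k):
    """Estimate call cost from a chat completion response's usage block.

    Prices are per 1,000 tokens and must be supplied by the caller;
    this listing exposes no verified per-model pricing.
    """
    usage = response["usage"]
    return (usage["prompt_tokens"] / 1000 * prompt_price_per_1k
            + usage["completion_tokens"] / 1000 * completion_price_per_1k)
```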

Endpoint

Transport: http
Protocol: mpp
Currency: pathUSD

Quality

0.30 / 1.00

No 402 challenge was captured (HEAD/GET returned 404, likely because the route only accepts POST), no documentation or schema was reachable, and pricing details are absent. All information is inferred from the Bazaar listing title and description. Liveness is unconfirmed.

Warnings

  • Endpoint returned 404 on HEAD and GET probes; likely POST-only but liveness via MPP 402 challenge is unconfirmed.
  • No OpenAPI schema, documentation, or pricing pages were reachable at the origin.
  • Exact per-model pricing could not be verified — the listing states 'price varies by model' but no concrete amounts are available.
  • All technical details are inferred from the listing metadata, not from direct endpoint interaction.

Citations

Provenance

Indexed from: mpp_dev
Enriched: 2026-04-19 16:08:15Z · anthropic/claude-opus-4.6 · v2
First seen: 2026-04-18
Last seen: 2026-04-22

Agent access