MPP · tempo · quality 0.70

DeepSeek V3/R1 chat completions via MPP micropayments on Tempo L2

Price: per_call
Protocol: mpp
Verified: no

What it does

This endpoint provides access to DeepSeek's frontier language models through the Locus MPP (Micropayment Protocol), settling payments in pathUSD on Tempo L2. It exposes an OpenAI-compatible chat completions interface supporting two models: deepseek-chat (DeepSeek-V3, optimized for fast chat and code generation) and deepseek-reasoner (DeepSeek-R1, optimized for deep chain-of-thought reasoning). The endpoint accepts standard chat parameters including messages, max_tokens, temperature, top_p, stop sequences, and SSE streaming.
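A minimal sketch of assembling a request for this interface, using the documented parameters (model, messages, max_tokens, stream). The URL matches the quick start; the MPP payment handshake is assumed to be handled by the calling agent's wallet layer and is not shown here.

```python
import json

# Endpoint URL from the quick start; payment negotiation (MPP "charge"
# intent) is assumed to happen out of band via the agent's wallet.
MPP_CHAT_URL = "https://deepseek.mpp.paywithlocus.com/deepseek/chat"

def build_chat_request(prompt: str, model: str = "deepseek-chat",
                       max_tokens: int = 256, stream: bool = False) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "stream": stream,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Explain quantum entanglement in one paragraph.")
# POST this body to MPP_CHAT_URL with Content-Type: application/json
body = json.dumps(payload)
```

Swapping `model` to `deepseek-reasoner` selects DeepSeek-R1 with the same payload shape.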

Pricing is dynamic and depends on the model selected and token count, estimated at roughly $0.004–$0.025 per request according to the OpenAPI spec. Payment uses the MPP "charge" intent (one-shot per call) via the Tempo settlement method. The currency is identified by contract address 0x20c0…8b50 (pathUSD on Tempo L2). The same Locus MPP gateway also hosts a Fill-In-the-Middle (FIM) endpoint for code infilling and a list-models endpoint (fixed price of 3000 base units, i.e. $0.003 assuming 6 decimals for pathUSD).
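Since prices are quoted in token base units, converting them to dollars is a simple power-of-ten division. The sketch below assumes pathUSD uses 6 decimals, as stated in the listing but not independently verified.

```python
# Assumed from the listing: pathUSD on Tempo L2 uses 6 decimals.
PATHUSD_DECIMALS = 6

def base_units_to_usd(base_units: int, decimals: int = PATHUSD_DECIMALS) -> float:
    """Convert an integer amount of token base units to a USD value."""
    return base_units / 10 ** decimals

# The fixed list-models price of 3000 base units:
print(base_units_to_usd(3000))  # → 0.003
```

The same conversion puts the estimated per-request range of 4,000–25,000 base units at $0.004–$0.025.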

Note: The probe only tested HEAD and GET, which returned 404. This is expected because the endpoint is a POST-only route. The OpenAPI schema clearly documents POST as the method, and the 404 on other methods does not indicate the endpoint is down. Documentation references include the DeepSeek API docs at api-docs.deepseek.com and Locus-specific docs at beta.paywithlocus.com/mpp/deepseek.md.

Capabilities

openai-compatible-chat · deepseek-v3 · deepseek-r1 · chain-of-thought-reasoning · code-generation · sse-streaming · mpp-charge · tempo-settlement · pathusd-payment · fill-in-the-middle

Use cases

  • Agent-driven chat completions with per-call micropayments and no API key
  • Code generation and assistance using DeepSeek-V3 via programmatic payment
  • Complex multi-step reasoning tasks using DeepSeek-R1 chain-of-thought
  • Streaming AI responses over SSE for interactive applications
  • Code infilling (FIM) for IDE-style autocomplete workflows
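For the streaming use case, responses arrive as Server-Sent Events. A hedged sketch of consuming them, assuming the conventional OpenAI-compatible framing (`data: {json}` lines terminated by `data: [DONE]`); the exact frame shape is inferred from convention, not the OpenAPI spec.

```python
import json

def iter_sse_chunks(lines):
    """Yield decoded JSON chunks from an iterable of SSE text lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comment lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # conventional end-of-stream sentinel
        yield json.loads(data)

# Illustrative frames only; real deltas come from the streaming response body.
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
text = "".join(c["choices"][0]["delta"]["content"] for c in iter_sse_chunks(sample))
# text == "Hello"
```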

Fit

Best for

  • AI agents that need pay-per-call LLM access without subscription keys
  • Developers wanting OpenAI-compatible format with DeepSeek models
  • Cost-sensitive workloads needing frontier-quality reasoning at low per-call prices

Not for

  • Users who need free or API-key-based access without crypto settlement
  • Applications requiring models outside the DeepSeek family (e.g., GPT-4, Claude)
  • High-volume batch jobs where per-call micropayment overhead is impractical

Quick start

curl -X POST https://deepseek.mpp.paywithlocus.com/deepseek/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Explain quantum entanglement in one paragraph."}],
    "max_tokens": 256
  }'

Example

Request

{
  "model": "deepseek-chat",
  "stream": false,
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Write a Python function to compute Fibonacci numbers."
    }
  ],
  "max_tokens": 512,
  "temperature": 0.7
}

Response

{
  "id": "chatcmpl-abc123",
  "model": "deepseek-chat",
  "usage": {
    "total_tokens": 73,
    "prompt_tokens": 25,
    "completion_tokens": 48
  },
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "def fibonacci(n):\n    if n <= 1:\n        return n\n    a, b = 0, 1\n    for _ in range(2, n + 1):\n        a, b = b, a + b\n    return b"
      },
      "finish_reason": "stop"
    }
  ],
  "created": 1719000000
}
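Extracting the reply and usage from a response shaped like the example above is straightforward. Note the response schema is inferred from OpenAI-compatible convention (see warnings below), so defensive access is prudent in real code.

```python
# A response dict shaped like the documented example (schema inferred
# from OpenAI-compatible convention, not explicitly documented).
response = {
    "choices": [{"index": 0,
                 "message": {"role": "assistant",
                             "content": "def fibonacci(n): ..."},
                 "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 25, "completion_tokens": 48,
              "total_tokens": 73},
}

reply = response["choices"][0]["message"]["content"]
total = response["usage"]["total_tokens"]  # useful for cost accounting
```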

Endpoint

Transport: http
Protocol: mpp
Currency: pathUSD

Quality

0.70 / 1.00

A full OpenAPI schema with request body, parameters, and pricing hints is available. The probe returned 404 because it used HEAD/GET on a POST-only endpoint, so liveness is not directly confirmed but is strongly implied by the well-structured OpenAPI spec. No crawled documentation pages returned useful content; external docs are referenced but were not crawled. The response schema is inferred from OpenAI-compatible convention, not directly documented.

Warnings

  • Probe used HEAD/GET which returned 404; endpoint is POST-only so liveness was not directly confirmed
  • Dynamic pricing (model + token dependent) means exact cost per call cannot be predetermined
  • Response schema is inferred from OpenAI-compatible convention and not explicitly documented in the OpenAPI spec
  • Currency contract 0x20c0…8b50 assumed to be pathUSD with 6 decimals based on Tempo L2 context; not independently verified

Citations

Provenance

Indexed from: mpp_dev
Enriched: 2026-04-19 17:31:08Z · anthropic/claude-opus-4.6 · v2
First seen: 2026-04-18
Last seen: 2026-04-22

Agent access