MPP-proxied OpenAI Chat Completions (GPT-4o, GPT-4, o1) settled via Tempo L2.
What it does
This endpoint wraps OpenAI's Chat Completions API behind the Micropayment Protocol (MPP), allowing AI agents to pay per-call using pathUSD on Tempo L2. The listed models include GPT-4o, GPT-4, o1, and others from the OpenAI family, with pricing that varies by model.
The endpoint is hosted at openai.mpp.tempo.xyz/v1/chat/completions and is designed to accept POST requests following the standard OpenAI chat completions request format. Callers send a JSON body containing a model identifier and a messages array; the MPP proxy handles payment negotiation via the 402 challenge/response flow before forwarding the request to OpenAI.
Important caveat: during probing, the endpoint returned 404 on both HEAD and GET requests, and no 402 MPP challenge was captured. The endpoint likely only responds to POST requests with the correct Content-Type header, which is consistent with the OpenAI API's design. However, without a captured 402 challenge, pricing details, supported models, and settlement parameters cannot be independently verified. All information below is inferred from the Bazaar listing metadata and the known MPP/Tempo conventions.
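The 402 challenge/response flow described above can be sketched as a simple client loop. Note this is a hypothetical sketch: the header names (`X-MPP-Price`, `X-MPP-Challenge`, `X-MPP-Payment`) and the payment step are assumptions, since no real 402 challenge was captured for this endpoint.

```python
# Hypothetical sketch of the MPP 402 challenge/response loop.
# The header names ("X-MPP-Price", "X-MPP-Challenge", "X-MPP-Payment")
# and the payment step are assumptions: no real 402 challenge was
# captured for this endpoint.

def handle_mpp_response(status, headers, pay_fn):
    """If the proxy answers 402, extract the challenge, pay via pay_fn,
    and return extra headers for the retried request; None means no retry."""
    if status != 402:
        return None
    price = headers.get("X-MPP-Price")          # e.g. "0.005 pathUSD" (assumed)
    challenge = headers.get("X-MPP-Challenge")  # opaque nonce (assumed)
    proof = pay_fn(price, challenge)            # settle on Tempo L2, get a receipt
    return {"X-MPP-Payment": proof}

# Stubbed usage: a real client would POST, inspect the status, then retry
# the same request with the returned headers attached.
retry_headers = handle_mpp_response(
    402,
    {"X-MPP-Price": "0.005 pathUSD", "X-MPP-Challenge": "abc"},
    pay_fn=lambda price, challenge: f"paid:{challenge}",
)
```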
Capabilities
Use cases
- AI agents that need on-demand access to OpenAI models without managing API keys or subscriptions
- Pay-per-call LLM access for autonomous workflows settled in pathUSD on Tempo L2
- Building multi-model pipelines where each call is individually metered and paid
Fit
Best for
- Agents needing OpenAI chat completions with crypto-native per-call payment
- Developers who want to resell or proxy OpenAI access via micropayments
- Workflows requiring GPT-4o, GPT-4, or o1 without a direct OpenAI account
Not for
- High-volume batch processing where a direct OpenAI subscription would be cheaper
- Use cases requiring streaming responses, if the MPP proxy does not support SSE
- Applications needing non-chat OpenAI endpoints (embeddings, images, audio); those may be separate endpoints
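The first caveat above can be made concrete with a rough break-even check between per-call MPP payments and a flat-rate plan. All three prices here are placeholders, since no actual pricing was captured for this proxy:

```python
# Rough break-even estimate: per-call MPP surcharge vs. a flat monthly plan.
# All prices are placeholders; real MPP pricing for this proxy is unknown.

def breakeven_calls(flat_monthly_cost, mpp_price_per_call, direct_price_per_call):
    """Calls per month above which the flat plan becomes cheaper."""
    surcharge = mpp_price_per_call - direct_price_per_call
    if surcharge <= 0:
        return float("inf")  # MPP is never more expensive per call
    return flat_monthly_cost / surcharge

# Example: $20/month plan, $0.012/call via MPP, $0.010/call direct
calls = breakeven_calls(20.0, 0.012, 0.010)  # ~10,000 calls/month
```

Below the break-even volume, per-call settlement avoids paying for unused capacity; above it, a direct subscription wins.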
Quick start
curl -X POST https://openai.mpp.tempo.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <MPP_TOKEN>" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
Example
Request
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Explain micropayment protocols in one paragraph."
    }
  ],
  "max_tokens": 256,
  "temperature": 0.7
}
Response
{
  "id": "chatcmpl-abc123",
  "model": "gpt-4o",
  "usage": {
    "total_tokens": 73,
    "prompt_tokens": 28,
    "completion_tokens": 45
  },
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Micropayment protocols enable small, per-call payments between agents and service providers..."
      },
      "finish_reason": "stop"
    }
  ],
  "created": 1717000000
}
Endpoint
POST https://openai.mpp.tempo.xyz/v1/chat/completions
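If the proxy meters by tokens, the usage object in the example response can drive a per-call cost estimate. The per-token rates below are placeholders, since the listing only says pricing varies by model and no amounts were captured:

```python
# Estimate per-call cost from the response's "usage" object.
# The per-1K-token rates are placeholders, not verified pricing.

PLACEHOLDER_RATES = {  # pathUSD per 1K tokens (assumed)
    "gpt-4o": {"prompt": 0.005, "completion": 0.015},
}

def estimate_cost(response):
    """Compute an estimated pathUSD cost for one chat completion call."""
    usage = response["usage"]
    rates = PLACEHOLDER_RATES[response["model"]]
    return (usage["prompt_tokens"] / 1000 * rates["prompt"]
            + usage["completion_tokens"] / 1000 * rates["completion"])

# Using the token counts from the example response above:
cost = estimate_cost({
    "model": "gpt-4o",
    "usage": {"prompt_tokens": 28, "completion_tokens": 45},
})
```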
Quality
The endpoint returned 404 on HEAD and GET probes, and no 402 MPP challenge was captured. All crawled pages returned 'Not Found'. No OpenAPI schema, pricing details, or documentation are available. The listing is based entirely on the Bazaar metadata (title and description). The endpoint may be live for POST requests only, but this cannot be confirmed from the probe data.
Warnings
- No 402 MPP challenge was captured; endpoint liveness for POST requests is unverified
- All crawled pages (root, /docs, /api, /pricing, /README) returned 404
- Pricing details are unknown; the listing says 'price varies by model' but no amounts were captured
- No OpenAPI schema or documentation available
- Request/response examples are inferred from the standard OpenAI API format, not verified against this proxy
Citations
- Endpoint is listed as 'Chat completions (GPT-4o, GPT-4, o1, etc.) - price varies by model' and is part of an OpenAI proxy group on Tempo: https://openai.mpp.tempo.xyz/v1/chat/completions