Pay-per-call Mistral AI chat completions via MPP on Tempo L2, $0.005–$0.10 per request.
What it does
This endpoint provides access to Mistral AI's chat completion API through the Locus MPP (Micropayment Protocol) gateway. It supports multiple Mistral models including Mistral Large, Codestral, Magistral (reasoning), and Pixtral (vision). Each call is settled as a one-shot charge on Tempo L2 using pathUSD, with pricing that varies by model and token count — advertised as $0.005 to $0.10 per request.
The chat endpoint accepts standard OpenAI-compatible parameters: a messages array, model selector, temperature, top_p, max_tokens, tool/function calling definitions, structured output via response_format (json_object, json_schema, or text), stop sequences, and a deterministic seed. The same Locus MPP gateway also exposes sibling endpoints for embeddings (/mistral/embed), content moderation (/mistral/moderate), and model listing (/mistral/models), each priced at $0.008 per call (8000 base units in 6-decimal pathUSD).
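The "8000 base units in 6-decimal pathUSD" pricing is a straightforward fixed-point conversion. A minimal sketch of the arithmetic (the function name is illustrative, not part of any SDK):

```python
# Convert an MPP base-unit price to USD, assuming a 6-decimal
# stablecoin (pathUSD) as advertised by the gateway.
def base_units_to_usd(base_units: int, decimals: int = 6) -> float:
    return base_units / 10 ** decimals

# The sibling endpoints are advertised at 8000 base units per call:
print(base_units_to_usd(8000))  # 0.008, i.e. $0.008 per call
```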
Note that the endpoint is a POST-only route; the probe returned 404 on HEAD and GET, which is expected for a POST-based chat API. The OpenAPI spec is well-structured with request schemas and payment metadata. Documentation is referenced at docs.mistral.ai for the upstream API and beta.paywithlocus.com/mpp/mistral.md for the MPP-specific integration guide, though the crawl did not retrieve those pages directly.
Capabilities
Use cases
- Agent-driven conversational AI with per-call crypto micropayments
- Code generation and assistance using Codestral models
- Vision tasks using Pixtral through a single chat endpoint
- Content moderation checks before publishing user-generated text
- Embedding text for semantic search or RAG pipelines
Fit
Best for
- AI agents that need pay-as-you-go LLM access without API keys or subscriptions
- Developers wanting crypto-settled Mistral inference on Tempo L2
- Multi-model workflows that switch between reasoning, vision, and code models
Not for
- High-volume batch inference where per-call overhead matters
- Users who need streaming (SSE) responses, which are not confirmed in the spec
- Applications requiring fine-tuned or custom Mistral models
Quick start
curl -X POST https://mistral.mpp.paywithlocus.com/mistral/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: <MPP-payment-header>" \
  -d '{
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "Explain micropayments in one sentence."}],
    "max_tokens": 100
  }'
Example
Request
{
  "model": "mistral-large-latest",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is the capital of France?"
    }
  ],
  "max_tokens": 50,
  "temperature": 0.7
}
Response
{
  "id": "chatcmpl-abc123",
  "model": "mistral-large-latest",
  "usage": {
    "total_tokens": 26,
    "prompt_tokens": 18,
    "completion_tokens": 8
  },
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "created": 1700000000
}
Endpoint
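The request and response shapes above can be exercised with a short standard-library sketch. The Authorization value here is a stand-in for whatever the MPP payment flow produces (an assumption, since no 402 challenge was captured for this endpoint), and the response parser assumes the OpenAI-style chat.completion shape shown in the example:

```python
# Minimal sketch of calling the chat endpoint and extracting the reply.
# The payment-header scheme is an assumption; consult the MPP integration
# guide for the actual challenge/settlement flow.
import json
import urllib.request

CHAT_URL = "https://mistral.mpp.paywithlocus.com/mistral/chat"

def build_request(payment_header: str, prompt: str,
                  model: str = "mistral-large-latest") -> urllib.request.Request:
    """Builds the POST request; the endpoint rejects HEAD/GET."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 100,
    }
    return urllib.request.Request(
        CHAT_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": payment_header},
        method="POST",
    )

def first_completion(response_body: dict) -> str:
    """Extracts the assistant text from an OpenAI-style chat.completion object."""
    return response_body["choices"][0]["message"]["content"]
```

Applied to the example response above, `first_completion` would return "The capital of France is Paris.".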
Quality
The OpenAPI spec is well-structured with request schemas, model examples, and payment metadata. However, the probe returned 404 on HEAD/GET (expected for POST-only), no actual 402 challenge was captured for this specific endpoint, response schemas are absent, and the crawl yielded no additional documentation. The response example is inferred from standard Mistral API conventions, not directly observed.
Warnings
- Probe did not capture a 402 MPP challenge on HEAD/GET (the endpoint is POST-only), so liveness is not fully confirmed via probe
- Response schema is not documented in the OpenAPI spec; the example response is inferred from upstream Mistral API conventions
- Pricing is variable ($0.005–$0.10) with no per-model breakdown available in the spec
- Streaming support is not mentioned in the spec; it is unclear whether SSE is available
Citations
- Endpoint supports models including Mistral Large, Codestral, Magistral (reasoning), and Pixtral (vision) (https://mistral.mpp.paywithlocus.com)
- Chat completion pricing varies from $0.005 to $0.10 by model and token count, settled via the Tempo method (https://mistral.mpp.paywithlocus.com)
- Embeddings and moderation endpoints priced at 8000 base units (pathUSD) (https://mistral.mpp.paywithlocus.com)
- API reference available at docs.mistral.ai; MPP-specific docs at beta.paywithlocus.com/mpp/mistral.md (https://mistral.mpp.paywithlocus.com)