Grok chat with server-side code execution via xAI models, paid per call over MPP/Tempo
What it does
This endpoint provides access to xAI's Grok models with built-in code execution capability, served through the Locus MPP (Micropayment Protocol) gateway. It accepts an OpenAI-compatible chat completions request body with a Grok-4 family model (e.g. grok-4-0709, grok-4-1-fast-reasoning) and automatically enables server-side code execution alongside the conversation. You can also pass additional tools to combine code execution with other tool-use capabilities. Payment is settled per-call via the Tempo method on pathUSD, with dynamic pricing estimated at roughly $0.01–$0.50 per request depending on model and token usage.
The code-execution endpoint is one of several Grok capabilities exposed under the same MPP service, which also includes plain chat completions, web search, X (Twitter) search, image generation/editing, and text-to-speech. All endpoints share the same payment rail and OpenAPI schema. The code-execution path specifically requires a Grok-4 family model.
Note: The probe did not capture a live 402 challenge from this specific endpoint because it was tested with HEAD/GET, while the endpoint only accepts POST. The OpenAPI spec is well-defined and the service infrastructure responds (returning structured JSON errors for unknown routes), so the endpoint is likely live but could not be fully confirmed during probing.
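Since the request body is OpenAI-compatible, it can be built with a few lines of code. The sketch below constructs and validates a minimal request for this endpoint; the URL and model names come from the listing above, while the helper itself is illustrative. MPP payment/settlement headers, which an MPP client library would add, are omitted.

```python
import json

# Endpoint path from the quick start below; requires POST.
BASE_URL = "https://grok.mpp.paywithlocus.com/grok/code-execution"

# Grok-4 family models named in the listing; the full set may be larger.
GROK4_MODELS = {"grok-4-0709", "grok-4-1-fast-reasoning"}

def build_request(model: str, prompt: str, max_tokens: int = 1024) -> str:
    """Return a JSON request body in the OpenAI chat-completions shape."""
    if model not in GROK4_MODELS:
        raise ValueError(
            f"code-execution requires a Grok-4 family model, got {model!r}"
        )
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(body)
```

The model check mirrors the listing's constraint that only Grok-4 family models are accepted on this path; server-side code execution is enabled automatically, so no extra field is needed for it.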
Capabilities
Use cases
- Running data analysis or computation as part of an LLM conversation
- Generating and executing code snippets to answer math or programming questions
- Combining code execution with custom tools for agentic workflows
- Automating calculations, chart generation, or data transformations via chat
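The third use case, combining code execution with custom tools, can be sketched as a request body that adds an OpenAI-style `tools` array alongside the built-in code execution. The tool name and parameters below are hypothetical, assuming the endpoint passes the `tools` field through in the standard OpenAI schema:

```python
import json

# Illustrative request combining built-in code execution with a
# user-defined tool. "record_result" is a hypothetical tool, not
# part of the service.
request_body = {
    "model": "grok-4-0709",
    "messages": [
        {"role": "user", "content": "Compute the monthly payment, then record it."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "record_result",  # hypothetical user-defined tool
                "description": "Persist a computed value",
                "parameters": {
                    "type": "object",
                    "properties": {"value": {"type": "number"}},
                    "required": ["value"],
                },
            },
        }
    ],
    "max_tokens": 1024,
}
payload = json.dumps(request_body)
```

Code execution stays server-side; tool calls for `record_result` would come back in the response for the caller to execute, as in a standard OpenAI-compatible tool-use loop.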
Fit
Best for
- Agents that need verified computation results alongside LLM reasoning
- Pay-per-call access to Grok-4 code execution without an xAI API key
- Workflows combining natural language understanding with executable code
Not for
- Long-running or resource-intensive compute jobs (this is chat-scoped execution)
- Use cases requiring deterministic, auditable code execution environments outside an LLM context
- Free-tier or high-volume batch processing where per-call pricing is prohibitive
Quick start
curl -X POST https://grok.mpp.paywithlocus.com/grok/code-execution \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok-4-0709",
    "messages": [
      {"role": "user", "content": "Calculate the first 20 Fibonacci numbers using Python."}
    ],
    "max_tokens": 1024
  }'
Example
Request
{
  "model": "grok-4-0709",
  "messages": [
    {
      "role": "user",
      "content": "Write and run Python code to compute the factorial of 50."
    }
  ],
  "max_tokens": 512,
  "temperature": 0.7
}
Endpoint
POST https://grok.mpp.paywithlocus.com/grok/code-execution
Quality
The OpenAPI spec is detailed, with clear request schemas, model names, and pricing guidance. However, the probe did not capture a live 402 challenge for this specific endpoint (HEAD/GET returned 404; POST was not attempted), no example response is available, and all crawled pages returned generic 404 JSON. The endpoint is likely live but not fully confirmed.
Warnings
- Probe used HEAD/GET, which returned 404; the endpoint requires POST, so live status is not fully confirmed
- No example response captured; the response schema is not documented beyond '200: Successful response'
- Pricing is described as 'Dynamic (~$0.01–$0.50)' with no exact per-token breakdown available
- Grok-4 family models are required; older Grok-3 models may not work on this endpoint
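Since the probe's HEAD/GET 404s say nothing about POST, a follow-up check could POST a minimal body and interpret the status code: a 402 would carry the MPP payment challenge and confirm the endpoint is live. The sketch below shows this interpretation; the network call is included for completeness but the classification logic is the point. Status-code meanings follow standard HTTP semantics, not anything the service documents.

```python
import json
import urllib.error
import urllib.request

ENDPOINT = "https://grok.mpp.paywithlocus.com/grok/code-execution"

def classify_status(code: int) -> str:
    """Interpret a probe's HTTP status code for this endpoint."""
    if code == 402:
        return "live: payment challenge returned"
    if code in (404, 405):
        return "inconclusive: wrong method or route"
    if 200 <= code < 300:
        return "live: accepted without payment (unexpected)"
    return f"unclear: HTTP {code}"

def probe() -> str:
    """POST a minimal body and classify the result (requires network)."""
    body = json.dumps({"model": "grok-4-0709", "messages": []}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)
```

A 404 on POST, unlike the HEAD/GET 404s already observed, would be stronger evidence that the route is not deployed.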
Citations
- Code execution endpoint requires Grok-4 family models (e.g. grok-4-0709, grok-4-1-fast-reasoning): https://grok.mpp.paywithlocus.com
- Dynamic pricing approximately $0.01–$0.50 per request: https://grok.mpp.paywithlocus.com
- Payment settled via Tempo method on pathUSD: https://grok.mpp.paywithlocus.com
- xAI API reference available at docs.x.ai: https://docs.x.ai
- Additional documentation at beta.paywithlocus.com/mpp/grok.md: https://beta.paywithlocus.com/mpp/grok.md