Retrieve the status and output of a Replicate prediction via MPP micropayment.
What it does
The Replicate — Get Prediction endpoint is part of a Locus MPP proxy that wraps the Replicate API, letting agents poll for the result of a previously submitted model run. After calling the companion `/replicate/run` endpoint to kick off a prediction (image generation, language model inference, speech recognition, etc.), you use this endpoint to fetch the prediction's current status and output by supplying the `prediction_id`.
The endpoint is a POST at `https://replicate.mpp.paywithlocus.com/replicate/get-prediction`. Payment is handled via the MPP protocol using the Tempo method (pathUSD on Tempo L2). Each call costs 3,000 base units of pathUSD (6 decimals), which works out to $0.003 per request. The request body is a JSON object with a single required field, `prediction_id`, the ID returned by the run endpoint.
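The base-unit arithmetic above can be sketched in a few lines; the 6-decimal assumption for pathUSD is the listing's own inference from the companion endpoint, not a confirmed fact.

```python
# Convert the quoted MPP price from token base units to USD.
# Assumption: pathUSD uses 6 decimals (inferred, per this listing's warnings).
PRICE_BASE_UNITS = 3_000
PATHUSD_DECIMALS = 6

price_usd = PRICE_BASE_UNITS / 10 ** PATHUSD_DECIMALS
print(f"${price_usd:.3f} per call")  # $0.003 per call
```

If the decimals assumption is wrong (say, 18 instead of 6), the same base-unit amount would imply a very different per-call price, which is why the warning below flags it.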
This endpoint is part of a broader Replicate proxy that also exposes `/replicate/run` (submit a prediction), `/replicate/get-model` (fetch model metadata), and `/replicate/list-models` (browse available models). Together they provide a pay-per-call, API-key-free interface to Replicate's catalog of thousands of open-source AI models. Documentation is referenced at the Replicate HTTP API reference and the Locus MPP skill file.
Capabilities
Use cases
- Polling for the output of an image generation job submitted via the run endpoint
- Checking whether a language model inference has completed before consuming the result
- Building an agent workflow that submits a Replicate model run and waits for completion
Fit
Best for
- AI agents that need pay-per-call access to Replicate without managing API keys
- Workflows that submit async model runs and need to poll for results
- Developers integrating Replicate into MPP-compatible payment pipelines
Not for
- Streaming or real-time model inference (this is a polling endpoint)
- Users who already have a Replicate API key and prefer direct access
Quick start
curl -X POST https://replicate.mpp.paywithlocus.com/replicate/get-prediction \
-H "Content-Type: application/json" \
-d '{"prediction_id": "abc123xyz"}'
Example
Request
{
"prediction_id": "r8abc123xyz456"
}
Endpoint
Quality
An OpenAPI schema is present with a clear request body, pricing, and payment method. However, the probe returned 404 on HEAD/GET (expected, since the endpoint is POST-only), no 402 challenge was captured via the probe's attempted methods, no response schema or example response is documented, and all crawled pages returned generic "not found" messages. Pricing and intent are inferred from the OpenAPI x-payment-info block.
Warnings
- Probe did not capture a 402 MPP challenge because it only tried HEAD and GET; the endpoint is POST-only, so liveness via 402 was not confirmed.
- No response schema or example response is provided in the OpenAPI spec.
- All crawled pages (root, /docs, /api, /pricing, /README) returned 404 JSON errors; supplementary documentation was not reachable.
- Currency address 0x20c0…b9537d11c60e8b50 is assumed to be pathUSD with 6 decimals based on the companion endpoint's human-readable description; if this assumption is wrong, the price calculation would differ.
Citations
- Endpoint path is /replicate/get-prediction, POST method, with prediction_id as the sole required field (https://replicate.mpp.paywithlocus.com)
- Payment amount is 3000 base units via Tempo method with currency 0x20c000000000000000000000b9537d11c60e8b50 (https://replicate.mpp.paywithlocus.com)
- Companion /replicate/run endpoint describes pricing as $0.005–$0.05 (model-dependent), implying pathUSD with 6 decimals (https://replicate.mpp.paywithlocus.com)
- Replicate API reference and Locus MPP docs are referenced as external documentation (https://replicate.com/docs/reference/http)