DeepSeek Fill-In-the-Middle code completion via pay-per-call MPP on Tempo L2
What it does
This endpoint provides access to DeepSeek's Fill-In-the-Middle (FIM) capability through the Locus MPP (Micropayment Protocol), settling payments in pathUSD on Tempo L2. FIM is a code-completion technique where you supply a code prefix (prompt) and an optional suffix, and the model generates the code that belongs between them — ideal for inline code completion in editors and AI-assisted programming workflows.
The endpoint accepts POST requests with an OpenAI-compatible request body. The only supported model for FIM is `deepseek-chat` (DeepSeek-V3). You provide a `prompt` (the code before the cursor) and optionally a `suffix` (the code after the cursor), along with parameters like `max_tokens` (default 256) and `temperature` (default 1). Pricing is token-dependent, estimated at roughly $0.003–$0.005 per request according to the OpenAPI spec.
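The parameters above can be assembled programmatically. A minimal sketch in Python follows; the helper name is illustrative, and only `model`, `prompt`, `suffix`, `max_tokens`, and `temperature` come from the documented request schema:

```python
def build_fim_request(prompt, suffix=None, max_tokens=256, temperature=1.0):
    """Build an OpenAI-compatible FIM request body.

    Defaults mirror the documented endpoint defaults (max_tokens=256,
    temperature=1). Only `prompt` is required; `suffix` is optional.
    """
    body = {
        "model": "deepseek-chat",  # the only model the FIM endpoint accepts
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    if suffix is not None:
        body["suffix"] = suffix
    return body


# Example: ask the model to fill in the body of a recursive function
req = build_fim_request(
    prompt="def fibonacci(n):\n    if n <= 1:\n        return n\n",
    suffix="\nprint(fibonacci(10))",
)
```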
The endpoint is part of a broader DeepSeek service on Locus MPP that also exposes chat completions and a model-listing endpoint. Payment uses the MPP charge intent settled via the Tempo method. Note that the probe returned 404 on HEAD/GET — this is expected because the endpoint only accepts POST requests. The OpenAPI schema is well-defined and the service appears operational, though no live 402 challenge was captured on the probed methods.
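Since no example response was captured, the response shape below is an assumption: an OpenAI-completions-style body with the generated infill under `choices[0].text`. A hedged client sketch (note that a production client would also need to answer the MPP 402 charge-intent challenge, which this sketch omits):

```python
import json
import urllib.request

FIM_URL = "https://deepseek.mpp.paywithlocus.com/deepseek/fim"


def extract_infill(response_body):
    """Pull the generated middle section out of a completion response.

    Assumes an OpenAI-completions-shaped payload
    ({"choices": [{"text": ...}]}); the live schema was not captured.
    """
    return response_body["choices"][0]["text"]


def complete_fim(body, timeout=30):
    """POST a FIM request; the endpoint returns 404 for GET/HEAD."""
    req = urllib.request.Request(
        FIM_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # An unpaid request would be expected to hit an MPP 402 challenge here.
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return extract_infill(json.load(resp))
```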
Capabilities
Use cases
- Inline code completion in IDE extensions and editor plugins
- Automated code infilling for refactoring or template expansion
- AI-assisted programming where surrounding context is known
- Generating function bodies given signatures and call sites
- Filling in missing code blocks in partially written programs
Fit
Best for
- Agent-driven code completion workflows that need pay-per-call pricing
- Developers wanting DeepSeek-V3 FIM without managing API keys or subscriptions
- Applications that need to insert code between known prefix and suffix contexts
Not for
- General-purpose chat or reasoning tasks (use the /deepseek/chat endpoint instead)
- Use cases requiring models other than deepseek-chat for FIM
- High-volume batch processing where subscription pricing would be cheaper
Quick start
curl -X POST https://deepseek.mpp.paywithlocus.com/deepseek/fim \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat",
"prompt": "def fibonacci(n):\n if n <= 1:\n return n\n",
"suffix": "\nprint(fibonacci(10))",
"max_tokens": 256
}'
Example
Request
{
"model": "deepseek-chat",
"prompt": "def fibonacci(n):\n if n <= 1:\n return n\n",
"suffix": "\nprint(fibonacci(10))",
"max_tokens": 256,
"temperature": 0.2
}
Endpoint
Quality
The OpenAPI schema is well-defined with clear request parameters and payment info, but no live 402 challenge was captured (the endpoint is POST-only and the probe used HEAD/GET). No example responses are available, and all crawled pages returned 404 (expected for non-endpoint paths). Pricing is approximate, and the amount field is null in the spec.
Warnings
- No 402 challenge captured; the endpoint only accepts POST and the probe used HEAD/GET, so liveness is not fully confirmed
- Pricing amount is null in the OpenAPI spec; the ~$0.003–$0.005 estimate comes from the description field only
- No example response schema or sample output is available in the provided material
- Currency address 0x20c000000000000000000000b9537d11c60e8b50 is assumed to be pathUSD (6 decimals) based on the Tempo method context
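If the 6-decimal pathUSD assumption holds, dollar prices map to on-chain base units as sketched below; the actual settlement amount is null in the spec, so the figures are the description's estimate only:

```python
from decimal import Decimal

PATHUSD_DECIMALS = 6  # assumed from the Tempo method context, not confirmed


def usd_to_base_units(amount, decimals=PATHUSD_DECIMALS):
    """Convert a dollar-denominated price to integer token base units."""
    return int(Decimal(str(amount)) * (10 ** decimals))


# The estimated per-request price range from the endpoint description:
low = usd_to_base_units("0.003")   # 3000 base units
high = usd_to_base_units("0.005")  # 5000 base units
```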
Citations
- FIM endpoint supports only the deepseek-chat model: https://deepseek.mpp.paywithlocus.com
- Pricing is token-dependent, approximately $0.003–$0.005 per request: https://deepseek.mpp.paywithlocus.com
- Payment uses a charge intent with the Tempo settlement method: https://deepseek.mpp.paywithlocus.com
- DeepSeek-V3 is described as a fast chat and code model: https://deepseek.mpp.paywithlocus.com
- API reference available at https://api-docs.deepseek.com