Pay-per-use GPT endpoint on Base via x402 — no API keys, just USDC micropayments.
What it does
x402Factory's GPT endpoint exposes OpenAI GPT models (gpt-5.1, gpt-5-mini, gpt-5-nano, gpt-5-pro) behind an x402 payment wall on the Base network. Callers pay in USDC per request with no accounts or API keys required — the payment itself serves as authentication. The default call uses gpt-5-mini with up to 1000 input tokens and 2000 max output tokens for a fixed price of 0.01 USDC. For non-default configurations, pricing is computed as (input_tokens × input_price_per_million + max_output_tokens × output_price_per_million) / 1,000,000, with a minimum of 0.001 USDC, always rounded up to 6 decimals.
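The pricing rule above can be sketched as a small helper. The per-million-token rates below are hypothetical placeholders; the actual per-model rates come from the endpoint's x402 challenge payload and are not reproduced here.

```python
from decimal import Decimal, ROUND_UP

# Hypothetical per-million-token rates in USDC; the real rates are
# defined per model in the endpoint's x402 challenge payload.
INPUT_PRICE_PER_MILLION = Decimal("0.25")
OUTPUT_PRICE_PER_MILLION = Decimal("2.00")

MIN_PRICE = Decimal("0.001")   # minimum price stated by the endpoint
MICRO = Decimal("0.000001")    # prices are rounded up to 6 decimals

def price_usdc(input_tokens: int, max_output_tokens: int) -> Decimal:
    """(input x in_rate + output x out_rate) / 1,000,000, min 0.001, rounded up."""
    raw = (input_tokens * INPUT_PRICE_PER_MILLION
           + max_output_tokens * OUTPUT_PRICE_PER_MILLION) / Decimal(1_000_000)
    return max(raw, MIN_PRICE).quantize(MICRO, rounding=ROUND_UP)
```

With these placeholder rates, a 1000-input / 2000-output request prices at 0.00425 USDC, while tiny requests are floored at 0.001 USDC.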
Beyond simple message calls, the endpoint supports three actions: `message` (or omit action) for a direct GPT completion, `create` to store a custom agent with a preprompt for 0.001 USDC, and `list` to retrieve up to 100 recent custom agents for the paying wallet for 0.001 USDC. Custom agents are reusable — once created, they can be invoked via `custom_id` in the request body or as a URL path parameter (`/base/llm/gpt/{custom_id}`), each call priced based on the agent's stored preprompt, model, max_output_tokens, and max_input_tokens.
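The three request shapes can be sketched as JSON bodies. Field names follow the description above; the `preprompt` key for `create` and the `agent_123` id are assumptions for illustration, since the exact create-body format is not documented.

```python
import json

# Direct completion: action "message", or omit "action" entirely.
call_body = {
    "action": "message",
    "model": "gpt-5-mini",
    "message": "Summarize the x402 flow.",
    "max_output_tokens": 2000,
}

# Store a reusable custom agent (0.001 USDC). The "preprompt" key is an
# assumption about how the stored system prompt is supplied.
create_body = {
    "action": "create",
    "model": "gpt-5-mini",
    "preprompt": "You are a terse protocol explainer.",
    "max_output_tokens": 1000,
}

# Invoke a stored agent by id; alternatively, put the id in the URL path
# as /base/llm/gpt/{custom_id}.
custom_call_body = {
    "custom_id": "agent_123",  # hypothetical id returned by "create"
    "message": "Explain HTTP 402.",
}

payload = json.dumps(call_body)
```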
The endpoint returns a well-defined JSON schema including fields like `ok`, `action`, `model`, `price_usdc`, `reply`, and `custom_id`. An optional `max_input_tokens` parameter lets callers cap input token pricing; if the actual message exceeds this cap, the server returns an error rather than under-pricing. The x402 challenge is live on Base (chain ID 8453) and payments settle to address 0x402FaCcC3fAeb72351CC2b68C7966faF5f22B0d4 using the USDC contract at 0x833589fcd6edb6e08f4c7c32d4f71b54bda02913.
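The `max_input_tokens` guard can be sketched as follows. The token estimator here is a crude stand-in (roughly 4 characters per token); the server's actual tokenizer is not documented, so real estimates will differ.

```python
def check_input_cap(message: str, max_input_tokens: int) -> dict:
    # Crude estimate: about 4 characters per token. The server's real
    # tokenizer, and therefore its estimate, is not documented.
    estimated = max(1, len(message) // 4)
    if estimated > max_input_tokens:
        # The server errors out rather than under-pricing the request.
        return {"ok": False, "error": "input exceeds max_input_tokens"}
    return {"ok": True, "estimated_input_tokens": estimated}
```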
Capabilities
Use cases
- AI agents that need on-demand GPT completions without managing API keys
- Creating reusable custom agents with stored system prompts for repeated tasks
- Prototyping LLM-powered workflows with instant USDC micropayments on Base
- Autonomous agents discovering and consuming LLM endpoints via the x402 protocol
- Building chatbot backends where each query is individually metered and paid
Fit
Best for
- Autonomous AI agents needing keyless, pay-per-call GPT access
- Developers who want to avoid OpenAI account management and billing complexity
- Micropayment-based LLM consumption on Base/USDC
- Creating persistent custom agents with fixed preprompts for repeated use
Not for
- High-volume production workloads where OpenAI direct billing is cheaper
- Use cases requiring streaming responses (no streaming mentioned in schema)
- Applications needing non-GPT models (Claude, Gemini, open-source LLMs)
Quick start
curl -X POST https://x402factory.ai/base/llm/gpt \
  -H "Content-Type: application/json" \
  -H "X-PAYMENT: <x402-payment-header>" \
  -d '{"message": "What is the x402 protocol?", "model": "gpt-5-mini", "max_output_tokens": 2000}'
Example
Request
{
  "model": "gpt-5-mini",
  "action": "message",
  "message": "Explain the x402 payment protocol in two sentences.",
  "max_output_tokens": 2000
}
Response
{
  "ok": true,
  "model": "gpt-5-mini",
  "reply": "The x402 protocol enables pay-per-use API access by embedding USDC micropayments directly into HTTP requests. It eliminates the need for API keys or accounts — your wallet is your identity.",
  "action": "call",
  "price_usdc": 0.01,
  "max_output_tokens": 2000,
  "estimated_input_tokens": 12
}
Endpoint
Quality
The x402 challenge is live and returns a detailed outputSchema with full input/output field descriptions and pricing logic. However, there are no separate docs, no OpenAPI spec, and the crawled /docs, /api, /pricing, and /README pages all return 404, so documentation beyond the challenge payload is absent.
Warnings
- No dedicated documentation pages found: /docs, /api, /pricing, and /README all return 404
- No OpenAPI specification available
- Response example is inferred from the outputSchema, not from an actual successful call
- Model names reference 'gpt-5' variants which may be internal aliases; actual OpenAI model mappings are not documented
Citations
- Endpoint is live and returns HTTP 402 with an x402 challenge on POST (https://x402factory.ai/base/llm/gpt)
- Supported models: gpt-5.1, gpt-5-mini, gpt-5-nano, gpt-5-pro (https://x402factory.ai/base/llm/gpt)
- Default price is 0.01 USDC for gpt-5-mini with up to 1000 input tokens and 2000 max output tokens (https://x402factory.ai/base/llm/gpt)
- Payments settle on Base (chain ID 8453) using USDC at 0x833589fcd6edb6e08f4c7c32d4f71b54bda02913 (https://x402factory.ai/base/llm/gpt)
- x402Factory describes itself as pay-per-use API endpoints for the agent economy on Solana and Base (https://x402factory.ai)