MPP · tempo · quality 0.62

Brave Search LLM Context endpoint — retrieve search-grounded text chunks optimized for LLM consumption via MPP.

Price: $0.035 / call
Protocol: mpp
Verified: no

What it does

The Brave Search LLM Context endpoint (`/brave/llm-context`) returns search-grounded textual context designed for direct injection into large language model prompts. It is part of a broader Brave Search suite served through the Locus MPP (Micropayment Protocol) gateway, which also offers web search, news search, image search, video search, and AI answers endpoints.

This specific endpoint accepts a search query and returns content from up to 50 URLs, with configurable token limits (1,024–32,768 tokens, default 8,192). You can filter by freshness using presets like past day, week, month, or year, or specify a custom date range. Payment is per-call (intent: charge) settled via Tempo L2 in pathUSD. The listed amount is 35,000 base units; assuming pathUSD uses 6 decimals, this equates to $0.035 per request.
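The base-unit-to-dollar conversion above can be sketched as follows. Note the 6-decimal precision for pathUSD is an assumption (standard for USD stablecoins, but not confirmed in the probe data):

```python
# Convert the listed on-chain amount to a human-readable USD price.
# PATHUSD_DECIMALS = 6 is an assumption, not confirmed by the probe.
PATHUSD_DECIMALS = 6

def base_units_to_usd(amount: int, decimals: int = PATHUSD_DECIMALS) -> float:
    """Scale a raw stablecoin amount down by its decimal precision."""
    return amount / 10**decimals

price = base_units_to_usd(35_000)  # the listed amount for this endpoint
print(f"${price:.3f} per call")    # $0.035 per call
```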

The endpoint is accessed via HTTP POST with a JSON body. The probe did not capture a 402 challenge on HEAD/GET (returning 404 instead), which is consistent with a POST-only route — the OpenAPI spec confirms POST as the only method. The OpenAPI schema is well-defined with clear parameter descriptions and constraints. Documentation references point to Brave's official API docs and a Locus-hosted markdown file for integration guidance.

Capabilities

web-search · llm-context-retrieval · freshness-filtering · token-limit-control · url-count-control · per-call-micropayment · tempo-l2-settlement · privacy-first-search

Use cases

  • Retrieval-augmented generation (RAG) pipelines that need fresh web context for LLM prompts
  • AI agents that need to ground responses in current web information
  • Automated research tools that gather and summarize web content
  • Chatbots requiring up-to-date factual context from the open web
  • Content verification workflows that cross-reference claims against live search results

Fit

Best for

  • LLM applications needing search-grounded context in a single API call
  • Privacy-conscious search integration without API key management (pay-per-call via MPP)
  • Agents that need configurable token budgets for retrieved context
  • RAG systems requiring freshness-filtered web content

Not for

  • High-volume bulk scraping or crawling (per-call payment model)
  • Image or video content retrieval (use sibling endpoints instead)
  • Applications that need traditional paginated search result links rather than LLM-optimized text

Quick start

curl -X POST https://brave.mpp.paywithlocus.com/brave/llm-context \
  -H "Content-Type: application/json" \
  -H "Authorization: <MPP-payment-header>" \
  -d '{"q": "latest developments in quantum computing", "maximum_number_of_tokens": 4096}'

Example

Request

{
  "q": "latest developments in quantum computing",
  "freshness": "pw",
  "maximum_number_of_urls": 10,
  "maximum_number_of_tokens": 4096
}
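The example request can be sent with a plain HTTP POST. This is a minimal sketch using only the standard library; the `Authorization` value is a placeholder (the actual MPP payment header is produced by an MPP client during the 402 settlement flow, which this sketch does not implement):

```python
import json
import urllib.request

URL = "https://brave.mpp.paywithlocus.com/brave/llm-context"

body = {
    "q": "latest developments in quantum computing",
    "freshness": "pw",                 # preset: past week
    "maximum_number_of_urls": 10,      # 1-50 per the OpenAPI spec
    "maximum_number_of_tokens": 4096,  # 1,024-32,768; default 8,192
}

req = urllib.request.Request(
    URL,
    data=json.dumps(body).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "<MPP-payment-header>",  # placeholder
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # returns the LLM-ready context on success
```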

Endpoint

Transport: http
Protocol: mpp
Currency: pathUSD

Quality

0.62 / 1.00

The OpenAPI spec is well-structured with clear parameter schemas and payment info for all endpoints. However, the probe did not capture a live 402 challenge (POST-only route probed with HEAD/GET), no response schema is documented, no example responses are available, and crawled pages all returned generic 404 JSON. Price is inferred assuming 6-decimal pathUSD.

Warnings

  • Probe returned 404 on HEAD and GET; endpoint is POST-only so liveness could not be confirmed via the probe methods used
  • No response schema documented in the OpenAPI spec — output format is unknown
  • Price assumes pathUSD uses 6 decimals (standard for USD stablecoins) but this is not explicitly confirmed in the probe data
  • No example responses available; actual output structure must be discovered by calling the endpoint
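Since the output structure is undocumented, a caller's first request is exploratory. A defensive sketch that reports whatever shape comes back, rather than assuming a schema (the function name and its example inputs are illustrative, not taken from the endpoint):

```python
import json

def summarize_response(raw: bytes, content_type: str) -> str:
    """Report the top-level structure of an undocumented response body."""
    if "application/json" in content_type:
        payload = json.loads(raw)
        if isinstance(payload, dict):
            return f"JSON object with keys: {sorted(payload)}"
        return f"JSON {type(payload).__name__} with {len(payload)} items"
    return f"non-JSON body ({len(raw)} bytes)"
```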

Citations

Provenance

Indexed from: mpp_dev
Enriched: 2026-04-19 17:23:50Z · anthropic/claude-opus-4.6 · v2
First seen: 2026-04-18
Last seen: 2026-04-22
