The DEX Aggregator API enforces a per-key sliding-window rate limit. The default is 120 requests per minute per API key.

How it works

Per-key

The window is keyed off the x-api-key header, not your IP. Each key gets its own bucket.

Sliding 60-second window

Hits are tracked with timestamps. The window covers the interval from now − 60s to now. There is no hard reset on the minute boundary; the oldest hits simply drop off as time passes.
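The mechanics above can be sketched as a timestamp list that is pruned on each check. This is a minimal model of the behavior, assuming the documented defaults of 120 requests per 60 seconds; the class and method names are illustrative, not part of the API.

```typescript
// Sketch of sliding-window accounting: keep hit timestamps, prune
// anything older than the window, reject once the window is full.
class SlidingWindowLimiter {
  private hits: number[] = []; // timestamps in ms, oldest first

  constructor(
    private limit = 120,       // documented default: 120 req
    private windowMs = 60_000, // per 60-second sliding window
  ) {}

  // Returns true if a request at `now` is allowed, recording it if so.
  allow(now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Old hits drop off as time passes; no reset on minute boundaries.
    while (this.hits.length > 0 && this.hits[0] <= cutoff) {
      this.hits.shift();
    }
    if (this.hits.length >= this.limit) return false;
    this.hits.push(now);
    return true;
  }
}
```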

What counts as a request

Every authenticated call counts:
  • POST /quote
  • POST /submit
  • POST /execute
GET /health is public and free.

What you get on overflow

HTTP/1.1 429 Too Many Requests
Content-Type: application/json

{
  "error": "rate limit exceeded",
  "limit": 120,
  "windowSec": 60
}
On a 429, back off before retrying. A small helper:

async function fetchWithBackoff(req: () => Promise<Response>, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await req();
    if (res.status !== 429) return res;

    // Out of attempts: surface the 429 to the caller.
    if (attempt === maxAttempts) return res;

    // Linear backoff with jitter is fine here. Don't go below 1 second.
    const delay = attempt * 1000 + Math.random() * 500;
    await new Promise((r) => setTimeout(r, delay));
  }
  throw new Error("unreachable");
}
Don’t retry immediately on 429. The window is sliding, so an instant retry lands inside the same 60-second window and fails again. Wait at least 1 second.

Capacity planning

A few guidelines for sizing your usage:
| Workload | Approximate rate | Headroom |
| --- | --- | --- |
| Single-user swap UI with live quote polling (one user, 8s polling) | ~7 req/min | Plenty |
| 10 concurrent users polling | ~75 req/min | Comfortable |
| 50 concurrent users polling | ~375 req/min | Exceeds default; request a higher tier |
| Server-side bot quoting once per second per pair | 60 req/min × pairs | Scales linearly |
If you expect sustained load above the default 120 req/min, contact the o1 team to discuss a higher tier.
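The per-row numbers reduce to a one-line estimate (the function name is illustrative, not part of the API):

```typescript
// Rough requests-per-minute estimate for `users` clients each polling
// every `intervalSec` seconds. Math.floor matches the conservative
// "~7 req/min" reading for one user polling every 8s (60 / 8 = 7.5).
function estimateReqPerMin(users: number, intervalSec: number): number {
  return Math.floor(users * (60 / intervalSec));
}
```

Compare the result against the 120 req/min default before shipping; anything above it belongs in a higher tier.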

Per-pair caching tips

You can dramatically reduce request count on the client by:
  1. Debouncing user input. Don’t quote on every keystroke; quote 300ms after the last input.
  2. Coalescing identical quotes. If two components need the same (tokenIn, tokenOut, amountIn) quote, share one fetch.
  3. Pausing polling when the tab is hidden. Use document.visibilityState === "hidden" to skip ticks.
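Tips 1 and 2 can be sketched as two small helpers (names are illustrative, not part of any library):

```typescript
// Tip 1 — debounce: run `fn` only after `waitMs` of input inactivity,
// so typing in an amount field produces one quote, not one per keystroke.
function debounce<T extends unknown[]>(fn: (...args: T) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Tip 2 — coalesce: share one in-flight promise per identical quote key,
// e.g. `${tokenIn}:${tokenOut}:${amountIn}`, instead of fetching twice.
const inflight = new Map<string, Promise<unknown>>();

function coalesce<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inflight.get(key);
  if (existing) return existing as Promise<T>;
  const p = fetcher().finally(() => inflight.delete(key));
  inflight.set(key, p);
  return p;
}
```

A debounced quote call wired to an input handler, with its key fed through `coalesce`, typically cuts a polling UI's request count well below the defaults in the table above.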

What does NOT scale your quota

  • Adding more API keys for the same product. The o1 team can detect related keys and apply combined limits if abuse is suspected.
  • Spreading requests across IPs while reusing one key. The limit is keyed on the key, not the IP.
If you legitimately need more headroom, ask. The defaults exist to protect upstream RPC and pool-data providers, and higher tiers are available.