Core

Pagination

Offset-based pagination on every list endpoint. Same shape across the entire API surface.

Every paginated list endpoint — GET /api/v1/payments, GET /api/v1/webhooks, etc. — uses the SAME offset-based envelope. Pass limit + offset on the query string; we return the page of rows plus a pagination block with the total count so you can compute how many pages remain.

Exception — GET /api/v1/payment-methods: not paginated. It's a small per-country catalog (≤30 entries for the largest market). The response shape is { shop, environment, filters, count, totalAvailable, routableCount, methods } and you receive every row in one call. Treat it as a config endpoint to cache and refresh sporadically, not a stream to paginate.
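Since the catalog is small and changes rarely, a thin TTL cache in front of it is usually enough. A minimal sketch — the one-hour TTL and the helper names are illustrative, not part of the API:

```javascript
// Minimal TTL cache for the payment-methods catalog.
// fetchCatalog is injected so the cache itself stays network-agnostic;
// in practice it would do GET /api/v1/payment-methods.
function makeCatalogCache(fetchCatalog, ttlMs = 60 * 60 * 1000) {
  let cached = null;   // last response body
  let fetchedAt = 0;   // timestamp of the last successful fetch
  return async function getCatalog() {
    const now = Date.now();
    if (cached && now - fetchedAt < ttlMs) return cached;   // still fresh
    cached = await fetchCatalog();                          // refresh sporadically
    fetchedAt = now;
    return cached;
  };
}
```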

Request parameters

Param    Type     Default  Notes
limit    integer  50       Rows per page. Range 1–100; values outside the range are clamped silently.
offset   integer  0        How many rows to skip. offset = page * limit.

bash
# First page (default limit=50, offset=0)
curl "https://sandbox.key2pays.com/api/v1/payments" \
  -H "Authorization: Bearer sk_test_51N8mP...exampleK3Y"

# Page 3 with 20 rows per page (skip 40 rows)
curl "https://sandbox.key2pays.com/api/v1/payments?limit=20&offset=40" \
  -H "Authorization: Bearer sk_test_51N8mP...exampleK3Y"

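Because out-of-range limits are clamped silently on the server, it helps to clamp client-side as well, so your offset arithmetic matches what the server actually returns. A small sketch — these helper names are ours, not part of any SDK:

```javascript
// Clamp the page size to the documented 1–100 range, mirroring the server.
function clampLimit(limit) {
  return Math.min(100, Math.max(1, limit));
}

// offset = page * limit, with pages numbered from 0.
function offsetFor(page, limit) {
  return page * clampLimit(limit);
}
```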
Response envelope

Every list endpoint wraps results in { data: [...], pagination: {...} }. The pagination block is identical everywhere — you can write ONE pagination helper for the whole API.

json
{
  "data": [
    { "id": "TXN-MP2WEMT1-KAPL", "amount": 50, "status": "completed", "…": "…" },
    { "id": "TXN-MP2VSE8S-UME4", "amount": 25, "status": "pending",   "…": "…" }
  ],
  "pagination": {
    "total":  127,   // total rows matching the filter, across all pages
    "limit":  20,    // echoes the requested page size
    "offset": 40,    // echoes the requested offset
    "pages":  7      // ceil(total / limit) — total page count
  }
}
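Since the pagination block is identical everywhere, one small helper can answer the usual questions for any endpoint. A sketch (the function names are illustrative):

```javascript
// Derive paging facts from the shared pagination block.
function pageCount({ total, limit }) {
  return Math.ceil(total / limit);    // matches the "pages" field
}

function hasNextPage({ total, limit, offset }) {
  return offset + limit < total;      // true while more rows remain
}
```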

Iterating through all pages

javascript
async function* iterate(path, params = {}) {
  const limit = 100;          // max page size
  let offset = 0;
  while (true) {
    // Note: new URL(path, base) would drop the /api/v1 prefix for an
    // absolute path like "/payments", so build the URL by concatenation.
    const url = new URL("https://sandbox.key2pays.com/api/v1" + path);
    url.searchParams.set("limit", String(limit));
    url.searchParams.set("offset", String(offset));
    for (const [k, v] of Object.entries(params)) url.searchParams.set(k, v);
    const res = await fetch(url, { headers: { Authorization: `Bearer ${process.env.K2P_KEY}` } });
    if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
    const { data, pagination } = await res.json();
    for (const row of data) yield row;
    offset += data.length;
    // Stop on the last page; the empty-page check also guards against an
    // infinite loop if rows are deleted mid-scan.
    if (data.length === 0 || offset >= pagination.total) return;
  }
}

// Usage:
for await (const tx of iterate("/payments", { status: "completed" })) {
  console.log(tx.id);
}

Concurrent-inserts caveat: if rows are being created while you paginate, offset-based pagination can skip or duplicate a row at a page boundary (new rows shift every offset behind them). For a one-time export this is usually fine; for a long-running sync against production data, freeze a window with the ?from/?to filters or consume events via webhooks instead.
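One way to freeze a window: capture a timestamp before the scan starts and pass it as the upper bound, so rows created mid-scan fall outside the result set. A sketch, assuming the ?from/?to filters accept ISO-8601 timestamps (check the filter documentation for the exact format):

```javascript
// Build filter params that pin the scan to rows created before "now".
// The "from"/"to" param names come from the caveat above; the ISO-8601
// format is an assumption.
function frozenWindowParams(fromIso, now = new Date()) {
  return { from: fromIso, to: now.toISOString() };
}

// Usage with the iterator above:
// for await (const tx of iterate("/payments", frozenWindowParams("2024-01-01T00:00:00Z"))) { ... }
```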

Future cursor-based migration

We may introduce cursor-based pagination (starting_after / next_cursor) on these same endpoints in a future major version, for high-volume scans. To make the migration smooth on your side:

  • Abstract pagination into a single helper (like the one above) — when we add cursor params, you swap one function.
  • Treat the data[].id as opaque (don't parse the timestamp out of it). Cursor mode will accept those same ids as cursors.
  • Read pagination.total only when you need it. Cursor mode won't expose it (computing total over millions of rows is expensive).

The offset-based envelope above is the contract for now. We'll publish a clear migration window in the Changelog before any breaking change, and offset-based queries will keep working on the legacy code path for at least one major version.
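One shape for that single helper: hide the paging mode behind an async generator that takes a page-fetching function, so a later switch from offset to cursor only changes how the next request is built. A sketch with an injected fetchPage — illustrative, not an SDK API:

```javascript
// fetchPage({ limit, offset }) -> { data, pagination }  (today's offset mode).
// When cursor mode ships, only this generator's next-request logic changes;
// callers keep consuming rows exactly the same way.
async function* allRows(fetchPage, limit = 100) {
  let offset = 0;
  while (true) {
    const { data, pagination } = await fetchPage({ limit, offset });
    for (const row of data) yield row;
    offset += data.length;
    if (data.length === 0 || offset >= pagination.total) return;
  }
}
```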