What is API Pagination? Cursor vs Offset Strategies
API pagination splits large result sets across pages — offset/limit, cursor, keyset, page-based. Each strategy has different perf/UX tradeoffs.
What is API pagination?
API pagination is the practice of splitting a large result set into smaller chunks ("pages") returned over multiple requests. Without pagination, an endpoint that returns 1M records would crash the database, exhaust memory, or time out. With pagination, clients fetch 50 or 100 records at a time and request more as needed.
The right pagination strategy depends on data shape, traffic patterns, and UX needs. Offset/limit is simple but breaks at scale. Cursor-based is efficient but harder to use. Keyset is the modern best-of-both. Choosing the wrong strategy is a common cause of slow APIs.
The four main pagination strategies
| Strategy | Request shape | Best for |
|---|---|---|
| Offset/Limit | ?offset=200&limit=50 | Small datasets, jump-to-page UX |
| Page-based | ?page=5&per_page=50 | Same as offset, friendlier URL |
| Cursor-based | ?cursor=abc123&limit=50 | Live feeds, infinite scroll |
| Keyset (seek) | ?after_id=12345&limit=50 | Large datasets, stable order |
Offset/Limit pagination
GET /api/users?offset=200&limit=50
Pros: Simple to implement; supports jump-to-page UX (page 1, 2, ... 100); total count is easy.
Cons: Slow at large offsets (DB still reads + discards rows up to offset); inconsistent if data changes during pagination (rows may appear twice or be skipped); not great for infinite scroll.
SQL behind it:
SELECT * FROM users
ORDER BY created_at
LIMIT 50 OFFSET 200;
Cursor-based pagination
GET /api/users?cursor=eyJpZCI6MTIzNDV9&limit=50
{
"data": [...],
"next_cursor": "eyJpZCI6MTIzOTV9",
"prev_cursor": "eyJpZCI6MTIyOTV9"
}
Pros: Stable across data changes; fast (uses indexed ORDER BY); ideal for infinite scroll/live feeds.
Cons: No jump-to-page; cursor is opaque; total count harder.
Cursors are typically base64-encoded JSON containing the sort key of the last item: {"id": 12345} or {"created_at": "2026-01-15", "id": 12345}.
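A minimal sketch of that encode/decode round trip (function names are illustrative, not part of any specific framework):

```python
import base64
import json

def encode_cursor(last_item: dict) -> str:
    """Encode the sort key of the last returned item as an opaque token."""
    payload = json.dumps(last_item, separators=(",", ":"))
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_cursor(cursor: str) -> dict:
    """Decode a client-supplied cursor back into sort-key values."""
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))

cursor = encode_cursor({"id": 12345})
print(cursor)                 # eyJpZCI6MTIzNDV9
print(decode_cursor(cursor))  # {'id': 12345}
```

In production you would also validate the decoded payload (and often sign or encrypt the cursor) so clients cannot forge arbitrary WHERE-clause values.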
Keyset (seek) pagination
GET /api/users?after_id=12345&limit=50
Pros: Same speed as cursor (uses an indexed seek); simpler API (cursor not encoded); stable.
Cons: Same limits as cursor (no jump-to-page).
SQL:
SELECT * FROM users
WHERE id > 12345
ORDER BY id
LIMIT 50;
Performance comparison
| Page | Offset/Limit (rows scanned) | Keyset (rows scanned) |
|---|---|---|
| 1 | 50 | 50 |
| 10 | 500 | 50 |
| 100 | 5,000 | 50 |
| 1,000 | 50,000 (slow!) | 50 |
| 10,000 | 500,000 (very slow) | 50 |
Keyset does constant work per page. Offset/limit scans more rows, and gets slower, with each page.
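The keyset pattern can be sketched end to end with an in-memory SQLite table (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1000)])

def keyset_page(after_id, limit=50):
    # Seek past the last-seen id via the primary-key index;
    # no earlier rows are scanned and discarded, unlike OFFSET.
    rows = conn.execute(
        "SELECT id, name FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, limit),
    ).fetchall()
    next_after = rows[-1][0] if rows else None
    return rows, next_after

# Walk the whole table page by page, using the last id as the key.
after, total, pages = 0, 0, 0
while after is not None:
    rows, after = keyset_page(after)
    total += len(rows)
    pages += 1 if rows else 0
print(total, pages)  # 1000 20
```

Each iteration touches only the 50 rows it returns, regardless of how deep into the table the client has paged.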
Common API pagination patterns
Link header (RFC 8288, formerly RFC 5988)
Link: <https://api.example.com/users?offset=50&limit=50>; rel="next",
<https://api.example.com/users?offset=0&limit=50>; rel="first",
<https://api.example.com/users?offset=950&limit=50>; rel="last"
Used by GitHub and (sometimes) Stripe. Standard but harder to parse.
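Client-side parsing is a little fiddly; a minimal sketch that handles the common case (not a full RFC 8288 parser, and it assumes URLs contain no commas):

```python
def parse_link_header(value: str) -> dict:
    """Map rel values ('next', 'prev', ...) to their URLs."""
    links = {}
    for part in value.split(","):
        url_part, _, rel_part = part.partition(";")
        url = url_part.strip().strip("<>")
        rel = rel_part.strip().removeprefix('rel="').rstrip('"')
        links[rel] = url
    return links

header = ('<https://api.example.com/users?offset=50&limit=50>; rel="next", '
          '<https://api.example.com/users?offset=0&limit=50>; rel="first"')
print(parse_link_header(header)["next"])
```

This fiddliness is exactly why many APIs duplicate the same links in the response body.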
Pagination metadata in response body
{
"data": [...],
"pagination": {
"total": 1000,
"page": 5,
"per_page": 50,
"total_pages": 20
}
}
Easier for clients to use; most modern APIs do this.
Cursor in response
{
"data": [...],
"next_cursor": "abc123",
"has_more": true
}
Pagination best practices
- Default + max limit. Set a sensible default (20-50) and a max (100-1000). Reject or clamp overly large requests.
- Keyset over offset for large datasets. If users can paginate past page 100, offset is the wrong choice.
- Stable sort. Always include a unique tiebreaker (id) in ORDER BY to avoid duplicates.
- Index the sort key. ORDER BY without index = full table scan.
- Document the strategy. Explicit in OpenAPI spec — users shouldn't have to guess.
- Skip total counts at scale. COUNT(*) on huge tables is slow. Use estimates or skip the count.
- Cache page 1. Most-requested page; can be CDN-cached.
- Consider streaming. For huge datasets, server-sent events or streaming JSON beats pagination.
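The first practice (a default plus a hard max) is only a few lines of validation; the constants and function name below are illustrative:

```python
DEFAULT_LIMIT = 50
MAX_LIMIT = 100

def resolve_limit(raw):
    """Apply the default, reject bad input, and clamp to the max."""
    if raw is None:
        return DEFAULT_LIMIT
    try:
        limit = int(raw)
    except ValueError:
        raise ValueError("limit must be an integer")
    if limit < 1:
        raise ValueError("limit must be >= 1")
    return min(limit, MAX_LIMIT)

print(resolve_limit(None))       # 50
print(resolve_limit("25"))       # 25
print(resolve_limit("1000000"))  # 100, not a self-inflicted DoS
```

Clamping is friendlier than returning a 400; rejecting is equally valid. The point is never to honor limit=1000000.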
Common pagination pitfalls
- Slow deep pages. Offset/limit at offset=10000 is brutal. Switch to keyset.
- Inconsistent results. Without a unique tiebreaker, ORDER BY on a non-unique field can duplicate or skip rows across pages.
- Missing items during pagination. If rows are inserted or deleted between page requests, items shift; cursor-based pagination mitigates this.
- Total count at scale. COUNT(*) on 100M-row table is multi-second. Skip or estimate.
- No max limit. A client requests limit=1000000 and you DoS yourself. Enforce a cap.
- Page numbers in URLs forever. A bookmarked URL with ?page=5 is meaningless after data changes; cursor URLs are similarly fragile.
FAQ: API pagination
Should I use offset or cursor pagination?
Cursor (or keyset) for anything with >1000 records or live feeds. Offset/page-based for small admin lists where jump-to-page matters.
How do I implement cursor pagination?
Encode the sort key of the last item as base64 JSON. On next request, decode and use as WHERE sort_key > ? in SQL.
How do I get total count with cursor pagination?
Issue a separate COUNT(*) query, or skip total entirely. Many cursor-based APIs intentionally don't expose total.
What's the maximum page size I should support?
Depends on payload. Typical: 100-1000. Beyond that, response time + payload size become problematic.
Should pagination params be in query string or body?
Query string for GET (idiomatic, cacheable). Body acceptable for POST (e.g., search with complex filters + pagination).
Do I need both next and prev cursors?
Forward-only is usually enough. prev requires double-buffering or reverse query — adds complexity.
How does GraphQL handle pagination?
Relay Cursor Connections spec — standardized cursor pagination with edges, pageInfo.hasNextPage, endCursor.
Load test paginated APIs with LoadFocus
Pagination performance bugs only show up under real traffic. LoadFocus runs JMeter and k6 scripts that hit deep pages with realistic concurrency from 25+ regions. Sign up free at loadfocus.com/signup.
Related LoadFocus Tools
Put this concept into practice with LoadFocus — the same platform that powers everything you just read about.