While LIMIT/TOP/FETCH controls how many rows to return, OFFSET controls where to start returning rows. This combination is fundamental to pagination—displaying data in manageable chunks with 'Previous' and 'Next' navigation.
Consider an e-commerce product listing with 10,000 items. Showing all 10,000 at once is impractical: the page would load slowly, users would be overwhelmed, and bandwidth would be wasted. Instead, we show 20 products per page, letting users navigate through pages. Page 1 shows products 1-20, page 2 shows 21-40, and so on.
OFFSET enables this by specifying how many rows to skip before returning results. However, OFFSET has significant performance implications at scale that every database developer must understand.
By the end of this page, you will understand: (1) OFFSET syntax across database systems, (2) How OFFSET works internally, (3) The OFFSET performance problem at scale, (4) Why deep pagination becomes progressively slower, (5) Cross-database OFFSET syntax variations, and (6) When OFFSET is appropriate vs. when alternatives should be used.
OFFSET syntax varies slightly across database systems, though the semantics are consistent: skip the first N rows, then return the rest (or a limited number).
```sql
-- Standard syntax: LIMIT followed by OFFSET
SELECT column1, column2
FROM table_name
ORDER BY column
LIMIT row_count OFFSET skip_count;

-- Example: Page 3 of 20 items per page (rows 41-60)
SELECT product_name, price
FROM products
ORDER BY product_id
LIMIT 20 OFFSET 40;  -- Skip 40 rows, return next 20

-- Alternative MySQL syntax: LIMIT offset, count
-- (Note: offset comes FIRST in this form!)
SELECT product_name, price
FROM products
ORDER BY product_id
LIMIT 40, 20;  -- Skip 40 rows, return 20 rows
-- Equivalent to: LIMIT 20 OFFSET 40

-- PostgreSQL also accepts: OFFSET without LIMIT
SELECT product_name, price
FROM products
ORDER BY price DESC
OFFSET 100;  -- Skip 100 rows, return ALL remaining rows
```

MySQL's LIMIT offset, count syntax puts the offset FIRST, opposite to the logical order. Many developers see LIMIT 40, 20 and assume 40 is the row count (as in LIMIT 40), when it actually means skip 40 rows and return 20. Prefer the clearer LIMIT 20 OFFSET 40 form to avoid mistakes.
```sql
-- SQL Standard syntax (PostgreSQL, Oracle 12c+, SQL Server 2012+)
SELECT column1, column2
FROM table_name
ORDER BY column
OFFSET skip_count ROWS
FETCH FIRST row_count ROWS ONLY;

-- Example: Page 3 of 20 items per page
SELECT product_name, price
FROM products
ORDER BY product_id
OFFSET 40 ROWS
FETCH FIRST 20 ROWS ONLY;

-- Singular ROW is also valid (for readability with 1)
OFFSET 1 ROW FETCH FIRST 1 ROW ONLY;

-- OFFSET with FETCH NEXT (equivalent to FETCH FIRST)
OFFSET 40 ROWS FETCH NEXT 20 ROWS ONLY;

-- OFFSET alone (without FETCH)
-- PostgreSQL is more permissive: it needs neither FETCH nor ORDER BY
OFFSET 100 ROWS;  -- Fine in PostgreSQL; SQL Server also allows OFFSET
                  -- without FETCH, but only after a mandatory ORDER BY
```
```sql
-- SQL Server requires ORDER BY when using OFFSET
SELECT product_name, price
FROM products
ORDER BY product_id  -- Required!
OFFSET 40 ROWS
FETCH NEXT 20 ROWS ONLY;

-- Error without ORDER BY:
SELECT product_name, price
FROM products
OFFSET 40 ROWS FETCH NEXT 20 ROWS ONLY;
-- Error: Invalid usage of OFFSET/FETCH. ORDER BY is mandatory.

-- OFFSET without FETCH is allowed (returns all remaining rows)
SELECT product_name, price
FROM products
ORDER BY product_id
OFFSET 40 ROWS;  -- Valid: skips 40 rows, returns the rest

-- FETCH, however, cannot be used without OFFSET
SELECT product_name, price
FROM products
ORDER BY product_id
FETCH NEXT 20 ROWS ONLY;
-- Error: use OFFSET 0 ROWS when no rows should be skipped

-- TOP does not support OFFSET (separate mechanism)
SELECT TOP (20) product_name, price
FROM products
ORDER BY product_id
OFFSET 40;  -- Syntax error: TOP cannot be combined with OFFSET/FETCH
```

| Database | Syntax | ORDER BY Required? | Notes |
|---|---|---|---|
| MySQL | LIMIT n OFFSET m | No (but recommended) | Also: LIMIT m, n |
| PostgreSQL | LIMIT n OFFSET m | No (but recommended) | Also: OFFSET FETCH standard |
| SQLite | LIMIT n OFFSET m | No (but recommended) | LIMIT required with OFFSET |
| SQL Server | OFFSET m FETCH n | Yes (mandatory) | TOP cannot use OFFSET |
| Oracle 12c+ | OFFSET m FETCH n | Recommended | ROWNUM legacy alternative |
| DB2 | OFFSET m FETCH n | Yes | Standard compliant |
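These dialect differences often end up encoded in application code that builds the pagination clause. Below is a minimal illustrative sketch; the function name, dialect strings, and their grouping are assumptions of this example, not part of any driver or ORM.

```python
def pagination_clause(dialect: str, limit: int, offset: int) -> str:
    """Return a pagination clause for the given SQL dialect.

    Assumes limit/offset are already-validated non-negative integers; real
    code should bind them as query parameters where the driver allows it.
    """
    if limit < 0 or offset < 0:
        raise ValueError("limit and offset must be non-negative")
    if dialect in ("mysql", "postgresql", "sqlite"):
        return f"LIMIT {limit} OFFSET {offset}"
    if dialect in ("sqlserver", "oracle", "db2"):
        # SQL Server and DB2 also require an ORDER BY in the query itself
        return f"OFFSET {offset} ROWS FETCH NEXT {limit} ROWS ONLY"
    raise ValueError(f"unknown dialect: {dialect}")

# Example: page 3 with 20 items per page
print(pagination_clause("mysql", 20, 40))      # LIMIT 20 OFFSET 40
print(pagination_clause("sqlserver", 20, 40))  # OFFSET 40 ROWS FETCH NEXT 20 ROWS ONLY
```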
Understanding OFFSET's internal mechanics reveals why it has performance limitations. The key insight: OFFSET doesn't magically jump to the requested starting row; it reads and discards every row before it.
When executing a query with OFFSET, the database follows these steps:

1. Identify all rows that match the WHERE clause (or scan the entire table).
2. Sort the matching rows according to ORDER BY.
3. Read and discard the first OFFSET rows.
4. Return the next LIMIT/FETCH rows to the client, then stop.

The critical issue is step 3: skipping rows still requires reading them.
```sql
-- Consider this pagination query:
SELECT product_name, price
FROM products
ORDER BY price DESC
OFFSET 10000 ROWS
FETCH FIRST 20 ROWS ONLY;

-- Database execution:
-- 1. Identify all products matching criteria (full table scan if no WHERE)
-- 2. Sort ALL products by price DESC
-- 3. Read row 1, discard
-- 4. Read row 2, discard
-- 5. ... (repeat 10,000 times)
-- 6. Read row 10,001, return to client
-- 7. Read row 10,002, return to client
-- 8. ... (return 20 rows total)
-- 9. Stop at row 10,020

-- Result: Database did 10,020 row reads to return 20 rows!
-- The 10,000 discarded reads are wasted work.
```

OFFSET work scales linearly with the offset value. OFFSET 10 reads 10 rows. OFFSET 10,000 reads 10,000 rows. OFFSET 1,000,000 reads 1,000,000 rows. Deep pagination becomes progressively slower regardless of how few rows you request.
| Page | OFFSET Value | Rows Read | Rows Returned | Efficiency |
|---|---|---|---|---|
| 1 | 0 | 20 | 20 | 100% |
| 10 | 180 | 200 | 20 | 10% |
| 100 | 1,980 | 2,000 | 20 | 1% |
| 1,000 | 19,980 | 20,000 | 20 | 0.1% |
| 10,000 | 199,980 | 200,000 | 20 | 0.01% |
| 100,000 | 1,999,980 | 2,000,000 | 20 | 0.001% |
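The arithmetic behind this table is worth making explicit: the database reads OFFSET + page_size rows in order to return page_size rows. A small sketch (a page size of 20 is assumed to match the table, and the function name is illustrative):

```python
def offset_cost(page: int, page_size: int = 20):
    """Rows read, rows returned, and efficiency for a given page number."""
    offset = (page - 1) * page_size
    rows_read = offset + page_size   # skipped rows + returned rows
    efficiency = page_size / rows_read
    return rows_read, page_size, efficiency

for page in (1, 100, 10_000):
    rows_read, returned, eff = offset_cost(page)
    print(f"page {page}: read {rows_read} rows to return {returned} ({eff:.4%})")
# page 1: read 20 rows to return 20 (100.0000%)
# page 100: read 2000 rows to return 20 (1.0000%)
# page 10000: read 200000 rows to return 20 (0.0100%)
```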
Beyond row reads, OFFSET causes additional overhead: the sort must process all OFFSET + LIMIT rows, large sorts can spill to disk when working memory is insufficient, and the I/O spent on rows that are ultimately discarded is wasted. EXPLAIN ANALYZE makes this visible:
```sql
-- PostgreSQL: Observe OFFSET impact with EXPLAIN ANALYZE
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
ORDER BY order_date DESC
OFFSET 100000 ROWS
FETCH FIRST 20 ROWS ONLY;

-- Output might show:
-- Limit (actual rows=20, loops=1)
--   -> Sort (actual rows=100020, loops=1)
--        Sort Method: external merge  Disk: 45000kB   <-- Disk sort!
--        -> Seq Scan on orders (actual rows=500000, loops=1)

-- Note: Sort processed 100,020 rows (OFFSET + LIMIT)
-- If table has 500K rows, all were scanned for sorting
-- Disk sort indicates memory was insufficient

-- Compare page 1 vs page 5000 (20 items per page):
-- Page 1:    OFFSET 0     - returns in 2ms
-- Page 5000: OFFSET 99980 - returns in 850ms
-- 425x slower for the same number of returned rows!
```

Despite its scaling issues, OFFSET is perfectly acceptable in many scenarios. Understanding when OFFSET is fine versus when it's problematic helps you make the right choice.
```sql
-- Acceptable: Small filtered result set
SELECT order_id, order_date, total
FROM orders
WHERE customer_id = 12345  -- Filters to ~50 orders for this customer
ORDER BY order_date DESC
OFFSET 20 FETCH FIRST 10 ROWS ONLY;
-- With only 50 orders total, OFFSET 20 is trivial

-- Acceptable: Known-small table
SELECT * FROM product_categories
ORDER BY name
OFFSET 40 FETCH FIRST 20 ROWS ONLY;
-- If categories table has ~100 rows, any OFFSET is fine

-- Acceptable: Admin query (low traffic)
SELECT u.*, COUNT(o.order_id) AS order_count
FROM users u
LEFT JOIN orders o ON u.user_id = o.user_id
GROUP BY u.user_id
ORDER BY order_count DESC
OFFSET 1000 FETCH FIRST 50 ROWS ONLY;
-- Complex query but acceptable for admin-only use
```

OFFSET has several non-obvious behaviors that can affect query results. Understanding these edge cases prevents surprises.
```sql
-- What happens when OFFSET exceeds available rows?
-- Table has 100 products

SELECT product_name, price
FROM products
ORDER BY price DESC
OFFSET 150 FETCH FIRST 20 ROWS ONLY;

-- Result: Empty result set (0 rows)
-- No error, no padding - just empty
-- This is standard behavior across all databases

-- Check if you've gone past the end:
-- If result count < requested limit, you might be at/past the end
-- But OFFSET 95 FETCH 20 returns 5 rows (rows 96-100)
```
```sql
-- OFFSET 0 is equivalent to no OFFSET
SELECT * FROM products ORDER BY price LIMIT 20 OFFSET 0;
SELECT * FROM products ORDER BY price LIMIT 20;
-- Both return the same first 20 rows

-- Some applications always include OFFSET for consistency:
-- Page calculation: OFFSET = (page_number - 1) * page_size
-- Page 1: OFFSET = (1-1) * 20 = 0
-- Page 2: OFFSET = (2-1) * 20 = 20
-- Page 3: OFFSET = (3-1) * 20 = 40
```
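That page calculation translates directly into application code. Here is a minimal sketch; the function name and the default page size of 20 are illustrative choices, not part of any standard API.

```python
def page_to_limit_offset(page_number: int, page_size: int = 20):
    """Convert a 1-based page number into (limit, offset) values."""
    offset = (page_number - 1) * page_size
    return page_size, offset

print(page_to_limit_offset(1))  # (20, 0)   -> LIMIT 20 OFFSET 0
print(page_to_limit_offset(3))  # (20, 40)  -> LIMIT 20 OFFSET 40
```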
```sql
-- Negative OFFSET values are errors in all databases
SELECT * FROM products ORDER BY price LIMIT 20 OFFSET -10;
-- Error: OFFSET must be a non-negative integer

-- Applications should validate page numbers:
-- If page_number <= 0: treat as page 1
-- If page_number > max_page: return empty or last page
```
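A minimal sketch of that validation, assuming a clamp-to-last-page policy and a hypothetical max_page of 50 (both are application choices, not database rules):

```python
def normalize_page(page_number: int, max_page: int = 50) -> int:
    """Clamp a user-supplied page number to a safe range.

    Pages <= 0 are treated as page 1; the max_page cap keeps very deep
    (and therefore expensive) OFFSET values out of reach.
    """
    if page_number < 1:
        return 1
    return min(page_number, max_page)

print(normalize_page(-5))   # 1
print(normalize_page(999))  # 50
```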
```sql
-- DANGEROUS: OFFSET without ORDER BY
SELECT * FROM products LIMIT 20 OFFSET 40;
-- Which rows are skipped? Unknown and unstable!
-- May return different results on repeated execution

-- DANGEROUS: ORDER BY column with duplicates
SELECT * FROM products ORDER BY category LIMIT 20 OFFSET 40;
-- If multiple products have same category, their relative order
-- within that category is undefined. Products might appear on
-- multiple pages or skip pages as data changes.

-- SAFE: Add unique tiebreaker to ORDER BY
SELECT * FROM products
ORDER BY category, product_id  -- product_id breaks ties
LIMIT 20 OFFSET 40;
-- Now every product has a unique position in the ordering
```

If you ORDER BY a non-unique column, rows with the same value have arbitrary positions. Between page loads, data changes or parallel queries may shift these rows, causing duplicates or gaps in pagination. Always include a unique column (like the primary key) in ORDER BY for stable pagination.
```sql
-- Scenario: User views page 1, then page 2
-- Time T1: User requests page 1
SELECT * FROM products ORDER BY created_at DESC LIMIT 20 OFFSET 0;
-- Returns newest 20 products (IDs 100-81)

-- Time T2: Someone adds a new product (ID 101)

-- Time T3: User requests page 2
SELECT * FROM products ORDER BY created_at DESC LIMIT 20 OFFSET 20;
-- Now returns products 81-62 (shifted down by the new product)
-- Product ID 81 appeared on page 1 AND page 2!

-- Scenario: Deletion causes gap
-- Time T1: User views page 1 (products 100-81)
-- Time T2: Product 95 is deleted
-- Time T3: User views page 2
-- Products 79-60 returned, but product 80 was never seen!

-- These issues are inherent to OFFSET pagination with live data
-- Cursor-based pagination (next page) handles this better
```

Given OFFSET's limitations, several alternative pagination approaches exist. This section introduces when to consider them; the next page covers implementation details.
Instead of 'skip N rows,' keyset pagination says 'get rows after this value.' It uses indexed columns to jump directly to the starting point.
```sql
-- OFFSET approach (slow for deep pages)
SELECT * FROM products
ORDER BY product_id
OFFSET 10000 LIMIT 20;
-- Must read 10,000 rows to skip them

-- Keyset approach (fast for any page)
-- Client remembers last seen product_id (e.g., 10523)
SELECT * FROM products
WHERE product_id > 10523
ORDER BY product_id
LIMIT 20;
-- Index seek to 10524, read only 20 rows!
-- Same speed whether it's "page 2" or "page 5000"
```

The seek method is a generalization of keyset pagination that handles composite ordering and descending sorts:
```sql
-- Paginating by multiple columns with DESC order
-- Last seen: created_at='2024-01-15 10:30:00', id=5000

SELECT * FROM events
WHERE (created_at, id) < ('2024-01-15 10:30:00', 5000)
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Row comparison uses tuple ordering
-- Efficiently uses index on (created_at DESC, id DESC)
```

For queries requiring expensive column fetches, fetch only the keys first, then join:
```sql
-- Slow: OFFSET reads and discards full rows
SELECT * FROM products
ORDER BY created_at DESC
OFFSET 10000 LIMIT 20;
-- Reads 10,020 full rows (including large description TEXT)

-- Fast: OFFSET on minimal columns, then fetch full rows
SELECT p.* FROM products p
INNER JOIN (
    SELECT product_id FROM products
    ORDER BY created_at DESC
    OFFSET 10000 LIMIT 20
) keys ON p.product_id = keys.product_id
ORDER BY p.created_at DESC;
-- Inner query offsets only (id, created_at) - much smaller
-- Outer query fetches only 20 full rows
```

Real applications often combine approaches: OFFSET for small result sets or early pages, keyset for deep pagination or infinite scroll. The next page on Pagination covers practical implementation patterns combining these techniques.
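As a preview of how such a hybrid might look, here is an illustrative application-side sketch. The table and column names match the examples above, but run_query is an assumed callable supplied by the caller (not a real library API), the OFFSET cutoff is an arbitrary policy, and the %s placeholders follow common Python DB driver style.

```python
from typing import Optional

OFFSET_CUTOFF = 1000  # beyond ~50 pages of 20, require a cursor instead


def fetch_products_page(run_query, page: int, page_size: int = 20,
                        last_seen_id: Optional[int] = None):
    """Shallow pages use OFFSET; when a cursor is available, use keyset.

    run_query(sql, params) is an assumed helper returning a list of row dicts.
    """
    fetch = page_size + 1  # ask for one extra row to detect "has more"
    if last_seen_id is not None:
        # Keyset: index seek past the cursor, cost independent of depth
        rows = run_query(
            "SELECT * FROM products WHERE product_id > %s "
            "ORDER BY product_id LIMIT %s",
            (last_seen_id, fetch),
        )
    else:
        offset = (page - 1) * page_size
        if offset > OFFSET_CUTOFF:
            raise ValueError("Deep pages require a cursor (keyset pagination)")
        rows = run_query(
            "SELECT * FROM products ORDER BY product_id LIMIT %s OFFSET %s",
            (fetch, offset),
        )
    has_more = len(rows) > page_size
    page_rows = rows[:page_size]
    next_cursor = page_rows[-1]["product_id"] if page_rows else None
    return page_rows, has_more, next_cursor
```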
When you do use OFFSET, follow these practices to minimize issues and prepare for future optimization.
```sql
-- Best Practice: Deterministic ORDER BY with index
CREATE INDEX idx_products_created ON products(created_at DESC, product_id DESC);

SELECT * FROM products
ORDER BY created_at DESC, product_id DESC  -- Deterministic
OFFSET 40 LIMIT 20;

-- Best Practice: Limit maximum offset in application
-- Application code pseudo:
-- max_offset = 1000  // ~50 pages of 20
-- if requested_offset > max_offset:
--     return error("Use search for results beyond page 50")

-- Best Practice: Compute count separately (or skip it)
-- Instead of fetching the total with every page:
SELECT COUNT(*) OVER () AS total, p.*
FROM products p
ORDER BY created_at DESC
OFFSET 40 LIMIT 20;  -- The window COUNT scans all rows on every request!

-- Do:
SELECT * FROM products
ORDER BY created_at DESC, product_id DESC
OFFSET 40 LIMIT 21;  -- Request 21 to detect "has more"
-- has_more = (rows_returned == 21)
-- Return first 20 rows only

-- Or cache count separately:
-- total_products = cache.get('product_count')  // Updated periodically
```

OFFSET interacts with other query features in specific ways. Understanding these interactions prevents unexpected results.
```sql
-- DISTINCT is applied before OFFSET
SELECT DISTINCT category
FROM products
ORDER BY category
OFFSET 5 LIMIT 10;

-- Execution:
-- 1. Select all categories (with duplicates)
-- 2. Eliminate duplicates (DISTINCT)
-- 3. Sort unique categories
-- 4. Skip 5
-- 5. Return next 10

-- If 50 unique categories exist, returns categories 6-15
```
```sql
-- OFFSET applies to grouped results
SELECT department, COUNT(*) AS employee_count
FROM employees
GROUP BY department
ORDER BY employee_count DESC
OFFSET 5 LIMIT 10;

-- Execution:
-- 1. Group all employees by department
-- 2. Count each group
-- 3. Sort by count descending
-- 4. Skip 5 departments
-- 5. Return next 10 departments

-- Paginates the aggregated results, not individual rows
```
```sql
-- OFFSET applies to the entire UNION result
(SELECT product_name, 'on_sale' AS type FROM products WHERE on_sale = true)
UNION ALL
(SELECT product_name, 'new' AS type FROM products WHERE is_new = true)
ORDER BY product_name
OFFSET 20 LIMIT 20;

-- The combined result is sorted and paginated as one set

-- To paginate each part separately, use subqueries:
SELECT * FROM (
    SELECT product_name FROM products WHERE on_sale = true
    ORDER BY product_name LIMIT 10 OFFSET 10
) sale_products
UNION ALL
SELECT * FROM (
    SELECT product_name FROM products WHERE is_new = true
    ORDER BY product_name LIMIT 10 OFFSET 10
) new_products;
```
```sql
-- OFFSET in outer query paginates subquery results
SELECT *, RANK() OVER (ORDER BY price DESC) AS price_rank
FROM (
    SELECT * FROM products WHERE category = 'Electronics'
) electronics
ORDER BY price DESC
OFFSET 10 LIMIT 10;

-- Subquery filters to electronics
-- Window function adds ranking BEFORE offset
-- OFFSET skips first 10 ranked products
-- Returns products ranked 11-20

-- OFFSET in correlated subquery (rare, usually inefficient)
SELECT p.product_name,
       (SELECT COUNT(*) FROM reviews r
        WHERE r.product_id = p.product_id
        LIMIT 1 OFFSET 0) AS review_count  -- Pointless OFFSET 0
FROM products p;
```

OFFSET is a fundamental but misunderstood SQL feature. Let's consolidate the key insights:
- Syntax varies across databases: MySQL, PostgreSQL, and SQLite use LIMIT n OFFSET m, SQL Server uses OFFSET m FETCH NEXT n, and ordering requirements differ.
- OFFSET does not jump to a row; it reads and discards every skipped row, so its cost grows linearly with the offset value.
- Deep pagination becomes progressively slower even though the number of returned rows stays the same.
- Always pair OFFSET with a deterministic ORDER BY that includes a unique tiebreaker, or pages can overlap or skip rows as data changes.
- OFFSET is fine for small result sets, shallow pages, and low-traffic queries; for deep pagination, prefer keyset (seek) pagination or a deferred join.

What's Next:
With OFFSET understood, we're ready to integrate everything into practical pagination implementation. The next page covers pagination patterns—calculation formulas, UI considerations, keyset implementation, and strategies for building scalable paginated interfaces.
You now understand OFFSET deeply—its mechanics, performance characteristics, edge cases, and limitations. You can make informed decisions about when OFFSET is appropriate and when to consider alternatives for scalable pagination.