Once the query optimizer produces an execution plan and the execution engine begins processing, the final step is result retrieval—getting the data from the database server to your application. This seemingly simple step involves sophisticated mechanisms for buffering, streaming, network transfer, and memory management.
Result retrieval encompasses several cooperating mechanisms, each covered below:
- The execution engine producing rows from the execution plan
- Buffering and batching between server and client
- Cursors that control row-by-row access to results
- Wire protocols that carry data across the network
- Streaming versus materialization of result sets
- Result metadata describing the returned columns
Understanding result retrieval helps you write applications that handle database interactions efficiently. The difference between fetching a million rows properly versus improperly can mean seconds versus minutes—or success versus out-of-memory crashes.
By the end of this page, you'll understand how results flow from the execution engine through buffers and networks to your application, how cursor types affect memory and performance, and how to optimize result handling for different scenarios—from small queries to massive data exports.
The execution engine (also called the query executor or runtime) is the component that actually runs the execution plan. It coordinates operators, manages memory, handles I/O, and produces the result stream.
How Execution Proceeds:
The execution engine processes the plan tree, typically using one of these models:
| Model | How It Works | Advantages | Used By |
|---|---|---|---|
| Volcano/Iterator | Each operator implements next() returning one row at a time | Simple, pipelining, low memory for streaming | PostgreSQL, many traditional systems |
| Push-Based | Operators push data to downstream consumers | Better parallelism, cache efficiency | HyPer, some modern systems |
| Vectorized | Operators process batches of rows (vectors) | SIMD optimizations, reduced interpretation overhead | ClickHouse, DuckDB, Snowflake |
| Compiled | Query compiled to native code | Maximum performance, no interpreter overhead | HyPer, MemSQL, some JIT systems |
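To make the Volcano/iterator row concrete, here is a minimal sketch in Java, not taken from any real engine: the `Operator` interface and `Filter` class are illustrative names. Each operator exposes `next()` and pulls one row at a time from its child.

```java
// Minimal sketch of the Volcano/iterator model (illustrative names only).
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

interface Operator {
    Object[] next();  // returns the next row, or null when exhausted
}

// Leaf operator: scans an in-memory list of rows
class Scan implements Operator {
    private final Iterator<Object[]> it;
    Scan(List<Object[]> rows) { this.it = rows.iterator(); }
    public Object[] next() { return it.hasNext() ? it.next() : null; }
}

// Pipelined operator: emits each matching row immediately, no buffering
class Filter implements Operator {
    private final Operator child;
    private final Predicate<Object[]> pred;
    Filter(Operator child, Predicate<Object[]> pred) {
        this.child = child;
        this.pred = pred;
    }
    public Object[] next() {
        Object[] row;
        while ((row = child.next()) != null) {
            if (pred.test(row)) return row;  // first match flows out at once
        }
        return null;
    }
}
```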
Pipelining vs. Blocking:
A key distinction in execution is whether operators can pipeline (produce output incrementally) or must block (consume all input before producing any output).
Pipelined operators (filters, projections, nested-loop joins, streaming aggregates) emit each output row as soon as it is produced, without waiting for the rest of their input.

Blocking operators (sorts, hash-based aggregation, the build phase of hash joins) must consume all of their input before emitting the first output row.
Pipelining allows the first result row to reach the application before all input is processed—crucial for interactive queries and streaming results.
Pipelined queries can return the first row very quickly, even if total execution is slow. A LIMIT 10 query on a pipelined plan might return instantly, while a blocking Sort must process all rows before returning any. Consider whether your application can benefit from streaming results.
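A blocking operator in the same illustrative interface makes the contrast visible. This `Sort` sketch (assuming the hypothetical `Operator` interface from the earlier example) cannot return its first row until it has drained and sorted its entire input:

```java
// Blocking operator sketch, building on the illustrative Operator interface.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class Sort implements Operator {
    private final Operator child;
    private final Comparator<Object[]> cmp;
    private List<Object[]> buffer;  // all input rows, materialized
    private int pos = 0;

    Sort(Operator child, Comparator<Object[]> cmp) {
        this.child = child;
        this.cmp = cmp;
    }

    public Object[] next() {
        if (buffer == null) {
            // Blocking phase: consume ALL input before returning anything.
            // A pipelined Filter would have emitted its first row long ago.
            buffer = new ArrayList<>();
            Object[] row;
            while ((row = child.next()) != null) buffer.add(row);
            buffer.sort(cmp);
        }
        return pos < buffer.size() ? buffer.get(pos++) : null;
    }
}
```

This is why `LIMIT 10` over a pipelined plan can return instantly while `ORDER BY ... LIMIT 10` without a supporting index must first process every row.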
Results don't flow directly from disk to application—they pass through multiple buffer layers that optimize throughput and manage memory.
Buffer Layers in Result Flow: rows typically pass through the engine's result buffer, the server's network send buffer, the operating system's TCP buffers on both ends, the driver's receive buffer, and finally the application's own data structures.
Batching and Network Efficiency:
Sending one row at a time over the network is extremely inefficient—each packet has overhead (headers, round-trip latency). Instead, databases batch multiple rows:
| Parameter | Database | Description |
|---|---|---|
| `default_fetch_size` | Various | Rows fetched per network round-trip |
| `pq_row_messages` | PostgreSQL | Rows per protocol message |
| `net_buffer_length` | MySQL | Size of network buffer (bytes) |
| `fetch size` | JDBC | Rows fetched at a time from server |
Example impact: Fetching 10,000 rows one at a time requires 10,000 round trips. With fetch size of 100, only 100 round trips are needed—potentially 100x faster over high-latency networks.
For bulk data retrieval, increase fetch size to reduce network round trips. For interactive applications processing one row at a time, smaller fetch sizes reduce memory usage. There's no universal right answer—tune based on your use case.
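A back-of-the-envelope model shows why fetch size dominates over high-latency links. This sketch assumes a 5 ms round-trip time, an invented figure for illustration:

```java
// Rough estimate of round-trip latency cost vs. fetch size.
// The 5 ms round-trip time is an assumed figure for illustration.
public class FetchSizeEstimate {
    public static void main(String[] args) {
        long totalRows = 10_000;
        double rttMillis = 5.0;  // assumed network round-trip time

        for (int fetchSize : new int[] {1, 100, 1000}) {
            long roundTrips = (totalRows + fetchSize - 1) / fetchSize;
            double latencyMs = roundTrips * rttMillis;
            System.out.printf(
                "fetch size %5d -> %6d round trips, ~%7.0f ms of latency alone%n",
                fetchSize, roundTrips, latencyMs);
        }
        // fetch size 1   -> 10000 round trips, ~50000 ms
        // fetch size 100 ->   100 round trips, ~  500 ms (the 100x above)
    }
}
```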
A cursor is a database mechanism that provides row-by-row access to query results. Cursors are essential for handling large result sets without loading everything into memory.
Cursor Types:
| Cursor Type | Characteristics | Use Case | Memory Impact |
|---|---|---|---|
| Forward-Only | Can only move forward through results | Sequential processing, data export | Low—only current batch in memory |
| Scrollable | Can move forward, backward, to any position | Interactive browsing, random access | Higher—may cache rows for repositioning |
| Sensitive | Sees changes made by other transactions | Real-time data views | Variable—depends on implementation |
| Insensitive | Snapshot of data at cursor open time | Consistent processing, reports | Higher—may materialize entire result |
| Server-Side | Results held on database server | Large results, limited client memory | Server memory used; client memory saved |
| Client-Side | All results loaded to client immediately | Small results, disconnected processing | High client memory; frees server resources |
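Several of these cursor types map directly onto JDBC `ResultSet` options. A sketch of a scrollable, insensitive cursor in application code (assuming an open `Connection conn`; the table and column names are illustrative):

```java
// Scrollable, insensitive cursor via JDBC: random access at the cost
// of extra buffering. Assumes an open Connection `conn`.
import java.sql.*;

Statement stmt = conn.createStatement(
    ResultSet.TYPE_SCROLL_INSENSITIVE,  // scrollable, snapshot semantics
    ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("SELECT product_name, price FROM products");

rs.absolute(10);                 // jump to the 10th row
String tenth = rs.getString("product_name");
rs.previous();                   // step back one row
rs.last();                       // jump to the final row
int rowCount = rs.getRow();      // current position = total row count
rs.beforeFirst();                // rewind for a full forward pass
while (rs.next()) {
    // sequential processing, as with a forward-only cursor
}
```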
DECLARE CURSOR Syntax:
```sql
-- Basic cursor (forward-only by default)
DECLARE order_cursor CURSOR FOR
SELECT order_id, customer_id, total
FROM orders
WHERE order_date > '2024-01-01';

-- Variables to receive each row (T-SQL)
DECLARE @id INT, @customer INT, @total DECIMAL(10, 2);

-- Open cursor and fetch rows
OPEN order_cursor;

FETCH NEXT FROM order_cursor INTO @id, @customer, @total;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Process each row
    PRINT 'Order: ' + CAST(@id AS VARCHAR);
    FETCH NEXT FROM order_cursor INTO @id, @customer, @total;
END

CLOSE order_cursor;
DEALLOCATE order_cursor;

-- Scrollable cursor (SQL Server example)
DECLARE scroll_cursor CURSOR SCROLL FOR
SELECT product_name, price FROM products;

OPEN scroll_cursor;

FETCH FIRST FROM scroll_cursor;        -- First row
FETCH LAST FROM scroll_cursor;         -- Last row
FETCH ABSOLUTE 10 FROM scroll_cursor;  -- 10th row
FETCH RELATIVE -2 FROM scroll_cursor;  -- 2 rows back

CLOSE scroll_cursor;
DEALLOCATE scroll_cursor;
```

Cursors in Application Code:
Most application languages access cursors through database drivers without explicit DECLARE/FETCH. The driver manages cursor behavior based on configuration:
```java
// Java JDBC cursor example
import java.sql.*;

// Default behavior: client-side cursor, loads all rows
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM large_table");
// Danger: All rows loaded to memory!

// Server-side cursor with fetch size
stmt = conn.createStatement(
    ResultSet.TYPE_FORWARD_ONLY,
    ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(100);  // Fetch 100 rows at a time
rs = stmt.executeQuery("SELECT * FROM large_table");

while (rs.next()) {
    // Process one row at a time
    // Only ~100 rows in memory at once
    String name = rs.getString("name");
    processRow(name);
}

// PostgreSQL server-side cursor (requires transaction)
conn.setAutoCommit(false);  // REQUIRED for PostgreSQL
stmt = conn.createStatement();
stmt.setFetchSize(100);
rs = stmt.executeQuery("SELECT * FROM large_table");
// Now uses server-side cursor, streaming results
```

PostgreSQL only uses server-side cursors when autocommit is off (inside a transaction) and fetch size is set. Otherwise, it loads entire results client-side. Other databases have different defaults—check your driver's documentation.
Results travel from database server to client via database-specific wire protocols. Understanding these protocols helps diagnose performance issues and optimize data transfer.
Common Database Protocols:
| Database | Protocol | Port | Key Features |
|---|---|---|---|
| PostgreSQL | PostgreSQL Protocol v3 | 5432 | Text and binary formats, COPY for bulk transfer |
| MySQL | MySQL Client/Server Protocol | 3306 | Prepared statements, compression option |
| SQL Server | TDS (Tabular Data Stream) | 1433 | Rows as tabular data, column metadata |
| Oracle | Oracle Net (SQL*Net) | 1521 | Array fetch, Advanced Queuing integration |
Protocol Efficiency Considerations:
Text vs. Binary Transfer: Text formats are human-readable and portable, but numeric and timestamp values must be parsed on receipt; binary formats skip that conversion and are more compact for many types. Some drivers let you opt into binary transfer for supported types.

Compression: Compressing result data trades CPU time for bandwidth: valuable over slow links, often wasteful on a fast local network (MySQL COMPRESS, PostgreSQL SSL compression).

Connection Pooling: Reusing established connections avoids repeating the TCP and authentication handshake for every query, which matters most for workloads of many small queries (see the sketch after the code block below).
```sql
-- PostgreSQL: Binary transfer for specific types
-- (Driver-level configuration, not SQL)
-- Some drivers: preferBinary=true parameter

-- MySQL: Enable compression (shell command)
-- mysql --compress -h hostname -u user -p database
-- In JDBC: useCompression=true

-- PostgreSQL: COPY for bulk export (very fast)
COPY orders TO '/tmp/orders.csv' WITH CSV HEADER;

-- Or to application via STDOUT:
COPY orders TO STDOUT WITH CSV HEADER;
-- Client: psql -c "COPY orders TO STDOUT CSV" > orders.csv

-- Bulk transfer is often 10-100x faster than row-by-row SELECT
```

When exporting large amounts of data, database-specific bulk commands like PostgreSQL COPY or SQL Server BULK INSERT are dramatically faster than row-by-row SELECT. They use optimized serialization and minimize protocol overhead.
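The connection pooling mentioned above looks like this in application code. A minimal sketch assuming the HikariCP library (`com.zaxxer:HikariCP`) as a dependency; any JDBC pool follows the same pattern:

```java
// Connection pooling sketch using HikariCP (library choice is an assumption).
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");
config.setUsername("app");
config.setPassword("secret");
config.setMaximumPoolSize(10);  // upper bound on open connections

HikariDataSource pool = new HikariDataSource(config);

// Each getConnection() reuses an already-established connection,
// skipping the TCP and authentication handshake.
try (Connection conn = pool.getConnection();
     PreparedStatement ps = conn.prepareStatement(
         "SELECT total FROM orders WHERE order_id = ?")) {
    ps.setInt(1, 42);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getBigDecimal("total"));
        }
    }
}  // close() returns the connection to the pool, not the server
```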
Results can be delivered to the application in two fundamental ways:
Streaming: Results are produced and consumed incrementally. Only a small buffer of rows exists at any time.
Materialization: All results are computed and stored (in memory or on disk) before any are delivered to the application.
When Each Approach Is Used: Streaming is possible when the plan is fully pipelined and the client uses a forward-only cursor with a bounded fetch size. Materialization happens when a blocking operator (such as a sort or hash aggregate) sits at the top of the plan, when a scrollable or insensitive cursor requires repositioning, or when the client driver simply loads the whole result by default.
Memory Implications:
| Approach | Server Memory | Client Memory | Total I/O |
|---|---|---|---|
| Streaming (fetch size 100) | Buffer for 100 rows | Buffer for 100 rows | Minimal—rows flow through |
| Client materialization | Minimal | All rows in memory | All rows transferred |
| Server materialization | All rows in memory/disk | Fetch size buffer | All rows computed, transferred on demand |
```python
# Python example: Memory implications
import psycopg2

# BAD: Materialize all rows in memory
conn = psycopg2.connect(database="mydb")
cursor = conn.cursor()
cursor.execute("SELECT * FROM billion_row_table")
rows = cursor.fetchall()  # 💥 Out of memory!

# GOOD: Stream with server-side cursor
conn = psycopg2.connect(database="mydb")
# Named cursor creates server-side cursor
with conn.cursor(name='stream_cursor') as cursor:
    cursor.itersize = 1000  # Fetch 1000 at a time
    cursor.execute("SELECT * FROM billion_row_table")
    for row in cursor:  # Iterate streaming
        process_row(row)  # Only ~1000 rows in memory

# ALTERNATIVE: Use COPY for export
with conn.cursor() as cursor:
    with open('output.csv', 'w') as f:
        cursor.copy_expert(
            "COPY (SELECT * FROM billion_row_table) TO STDOUT CSV", f
        )  # Streams directly to file, minimal memory
```

`fetchall()` in most database drivers loads ALL results into application memory. For large results, this causes out-of-memory errors. Use streaming (iterate over cursor) or explicit fetch size limits for large result sets.
Along with result data, the database sends metadata describing the result structure. This metadata tells the client how to interpret the data.
Metadata Components:
- Column names and aliases
- Data types, with precision and scale
- Nullability of each column
- Source table for each column
- Column count for the result set
```java
// Java JDBC metadata example
import java.sql.*;

Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(
    "SELECT order_id, customer_name, total FROM orders");

ResultSetMetaData meta = rs.getMetaData();

System.out.println("Column count: " + meta.getColumnCount());

for (int i = 1; i <= meta.getColumnCount(); i++) {
    System.out.println("\nColumn " + i + ":");
    System.out.println("  Name: " + meta.getColumnName(i));
    System.out.println("  Type: " + meta.getColumnTypeName(i));
    System.out.println("  Nullable: " +
        (meta.isNullable(i) == ResultSetMetaData.columnNullable));
    System.out.println("  Table: " + meta.getTableName(i));
    System.out.println("  Precision: " + meta.getPrecision(i));
    System.out.println("  Scale: " + meta.getScale(i));
}

/* Output:
Column count: 3

Column 1:
  Name: order_id
  Type: integer
  Nullable: false
  Table: orders
  Precision: 10
  Scale: 0

Column 2:
  Name: customer_name
  Type: varchar
  Nullable: true
  Table: orders
  Precision: 100
  Scale: 0
...
*/
```

Metadata and Dynamic Queries:
Metadata is essential for tools that handle arbitrary SQL: query consoles and BI dashboards that must render columns they have never seen, ORMs mapping results to objects, ETL pipelines inferring target schemas, and report generators formatting values by type.
Metadata is sent once per query, not per row. For queries returning millions of rows, metadata overhead is negligible. But for many small queries (microservice patterns), metadata can be significant. Prepared statements avoid re-sending metadata on each execution.
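A sketch of the prepared-statement pattern referenced above: the statement is prepared once and executed many times, so metadata and planning work are not repeated per call. Assumes an open `Connection conn`; the table and column names are illustrative.

```java
// Prepared statement reuse: prepare once, execute many times.
// Assumes an open Connection `conn`; names are illustrative.
import java.sql.*;

PreparedStatement ps = conn.prepareStatement(
    "SELECT customer_name, email FROM customers WHERE customer_id = ?");

// Many executions, one prepare: metadata is not re-sent for each call
// the way it would be for repeated ad-hoc SQL.
for (int id : new int[] {1, 2, 3}) {
    ps.setInt(1, id);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString("customer_name"));
        }
    }
}
ps.close();
```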
Efficient result retrieval requires matching your approach to your use case. Here are optimization strategies for different scenarios:
```sql
-- OLTP pattern: single row by key
SELECT customer_name, email, phone
FROM customers
WHERE customer_id = ?;  -- Prepared statement parameter

-- Use covering index if possible
CREATE INDEX idx_customers_lookup
ON customers(customer_id)
INCLUDE (customer_name, email, phone);
```

Result retrieval is the final step in SQL execution—delivering data from the database to your application. Let's consolidate the key concepts:

- Execution engines produce rows via iterator, push-based, vectorized, or compiled models; pipelined plans deliver first rows early, while blocking operators delay them.
- Batching rows per network round trip (fetch size) dominates retrieval time on high-latency links.
- Cursor types trade memory for flexibility: forward-only cursors stream cheaply; scrollable and insensitive cursors buffer or materialize results.
- Streaming keeps only a small window of rows in memory; fetchall()-style materialization loads everything at once.
- Bulk commands such as PostgreSQL COPY far outperform row-by-row SELECT for exports.
- Result metadata, sent once per query, tells clients how to interpret the returned columns.
What's Next:
We've now covered the complete SQL execution flow: parsing, optimization, execution plan generation, and result retrieval. The final page brings everything together with a comprehensive look at understanding execution as a whole—how all these phases interact, how to troubleshoot performance across the entire pipeline, and how to develop holistic SQL performance intuition.
You now understand how SQL results flow from the execution engine to your application. This knowledge helps you design applications that handle database interactions efficiently, avoiding memory issues and optimizing throughput. Next, we'll tie everything together with a holistic view of SQL execution.