Profiler tools generate streams of metrics, but metrics without interpretation are just numbers. The difference between a developer who sees "logical reads: 45,892" and one who immediately recognizes that this represents excessive data access requiring optimization is the difference between collecting data and understanding performance.
Execution statistics are the vocabulary of query performance analysis. Each metric tells a specific story about what happened during query execution—time spent, resources consumed, data processed, waiting endured. Fluency in this vocabulary enables rapid diagnosis of performance issues that would otherwise require extensive experimentation.
This page establishes deep understanding of the key execution statistics, their relationships, and the performance insights they reveal.
By the end of this page, you will understand the core categories of execution statistics, interpret specific metrics across database platforms, recognize the relationships between metrics that reveal bottleneck types, and develop intuition for which statistics matter most in different diagnostic scenarios.
Timing statistics measure where time is spent during query execution. They represent the most immediately actionable metrics—if a query takes too long, timing statistics reveal which phases consume that time.
Wall-Clock Time (Elapsed Time)
This is the total real-world time from query submission to result completion—what a user experiences. It includes all phases: parsing, optimization, execution, waiting, and result transmission.
Elapsed Time = CPU Time + Wait Time + Idle Time
Elapsed time alone tells you whether a query is slow but not why. A 10-second query might spend 9 seconds computing (CPU-bound), 9 seconds waiting for disk (I/O-bound), or 9 seconds waiting for locks (contention-bound). The optimization approach differs completely for each case.
```sql
-- ================================================================
-- Timing Statistics Analysis
-- ================================================================

-- =========================
-- PostgreSQL Timing Analysis
-- =========================

EXPLAIN (ANALYZE, TIMING)
SELECT c.customer_id, SUM(o.total) AS lifetime_value
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
WHERE c.segment = 'Enterprise'
GROUP BY c.customer_id
ORDER BY lifetime_value DESC
LIMIT 100;

/*
Understanding PostgreSQL timing output:

Sort (cost=1250.00..1250.25 rows=100 width=40)
     (actual time=245.892..245.923 rows=100 loops=1)
                  ^^^^^^^^^^^^^^^^
                  |       |
                  |       +-- Time when LAST row returned: 245.923ms
                  +---------- Time when FIRST row returned: 245.892ms
                  This is "startup time..total time"

Key insight:
- If startup time ≈ total time: Operator returns all rows at once (blocking)
- If startup time << total time: Operator streams rows incrementally

Blocking operators (high startup cost):
- Sort (must see all input before returning any output)
- HashAggregate (must build complete hash table)
- Hash Join (must build hash table before probing)

Streaming operators (low startup cost):
- Seq Scan (returns rows as it finds them)
- Index Scan (returns rows as index entries are read)
- Nested Loop (returns each matched pair immediately)
*/

-- =========================
-- MySQL Timing Analysis
-- =========================

-- Enable timing in Performance Schema
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE 'statement/%';

-- View detailed timing breakdown
SELECT
    EVENT_NAME,
    TIMER_WAIT / 1000000000000 AS elapsed_seconds,
    TIMER_START / 1000000000000 AS start_seconds,
    TIMER_END / 1000000000000 AS end_seconds,
    LOCK_TIME / 1000000000000 AS lock_seconds,
    ROWS_EXAMINED,
    ROWS_SENT
FROM performance_schema.events_statements_current
WHERE THREAD_ID = (
    SELECT THREAD_ID
    FROM performance_schema.threads
    WHERE PROCESSLIST_ID = CONNECTION_ID()
);

/*
MySQL timing breakdown:
- TIMER_WAIT: Total statement duration
- LOCK_TIME: Time spent waiting for table locks
- Analysis: (TIMER_WAIT - LOCK_TIME) ≈ actual execution time

If LOCK_TIME is a significant portion of TIMER_WAIT:
→ Contention issue, not query efficiency issue
*/

-- =========================
-- SQL Server Timing Analysis
-- =========================

SET STATISTICS TIME ON;

SELECT ProductID, SUM(OrderQty) AS TotalQty
FROM Sales.SalesOrderDetail
GROUP BY ProductID
ORDER BY TotalQty DESC;

SET STATISTICS TIME OFF;

/*
SQL Server timing output:

SQL Server parse and compile time:
   CPU time = 15 ms, elapsed time = 18 ms.
   ^^ Compilation overhead (cached plans show 0 ms)

SQL Server Execution Times:
   CPU time = 234 ms, elapsed time = 891 ms.
   ^^^^               ^^^^
   |                  +-- Wall-clock time (includes waits)
   +-- Pure compute time

CRITICAL INSIGHT:
If elapsed >> CPU: Query is WAIT-BOUND (I/O, locks, memory)
If elapsed ≈ CPU: Query is CPU-BOUND (efficient but compute-heavy)

Example analysis:
- CPU: 234ms, Elapsed: 891ms
- Wait time: 891 - 234 = 657ms (74% of execution)
- Diagnosis: This query is wait-bound, not compute-bound
- Action: Investigate I/O or lock waits, not query logic
*/
```

The ratio of CPU time to elapsed time is instantly diagnostic. CPU ≈ Elapsed means the query is actively computing; optimize the algorithm. CPU << Elapsed means the query is waiting; investigate I/O, locks, or memory pressure. This single ratio directs your entire optimization strategy.
Phase-Level Timing
Advanced profiling breaks execution into phases, each with its own timing (a sampling query follows the table below):
| Phase | What It Measures | High Time Indicates |
|---|---|---|
| Parse/Compile | SQL text → execution plan | Complex query, not using prepared statements |
| Optimize | Plan enumeration and costing | Many joins, complex predicates |
| Execute | Actual data retrieval and processing | The meat of query work |
| Fetch/Send | Result transmission to client | Large result sets, network latency |
| Lock Acquisition | Waiting for table/row locks | Contention with other transactions |
| Buffer I/O | Waiting for disk pages | Cold cache, insufficient memory |
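To observe phase-level timing directly, PostgreSQL's EXPLAIN ANALYZE reports Planning Time and Execution Time separately, and MySQL's Performance Schema records per-stage events. A minimal MySQL sketch, assuming you have privileges to toggle instrumentation and that the events_stages consumers are available (TIMER_WAIT is in picoseconds):

```sql
-- Enable stage-level instrumentation and consumers (assumed privileges)
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE 'stage/%';

UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME LIKE 'events_stages%';

-- Slowest recent execution phases; divide picoseconds by 1e9 for milliseconds
SELECT
    EVENT_NAME AS phase,                  -- e.g. 'stage/sql/statistics'
    TIMER_WAIT / 1000000000 AS phase_ms
FROM performance_schema.events_stages_history_long
ORDER BY TIMER_WAIT DESC
LIMIT 20;
```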
Row statistics track data volume at each stage of query execution. They reveal how efficiently the query filters data and expose mismatches between what the optimizer expected and what actually occurred.
Key Row Metrics:
```sql
-- ================================================================
-- Row Statistics Analysis Patterns
-- ================================================================

-- =========================
-- Filtering Efficiency Analysis
-- =========================

-- PostgreSQL: Analyze filter effectiveness
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE status = 'shipped'
  AND created_at > '2024-01-01'
  AND customer_segment = 'enterprise';

/*
Output analysis:

Seq Scan on orders (actual time=0.015..234.567 rows=50 loops=1)
  Filter: ((status = 'shipped'::text) AND
           (created_at > '2024-01-01'::date) AND
           (customer_segment = 'enterprise'::text))
  Rows Removed by Filter: 249950
                          ^^^^^^

DIAGNOSIS:
- Rows returned: 50
- Rows removed: 249,950
- Filtering ratio: 50 / 250,000 = 0.02% pass rate

This is HIGHLY INEFFICIENT:
- Query scanned 250,000 rows to return 50
- 99.98% of I/O was wasted

SOLUTION: Add composite index on filter columns:
CREATE INDEX idx_orders_filtered
ON orders(status, created_at, customer_segment);
*/

-- =========================
-- Loops and Row Amplification
-- =========================

-- PostgreSQL: Understanding loops in nested operations
EXPLAIN ANALYZE
SELECT c.customer_name, o.order_id, o.total
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
WHERE c.country = 'USA';

/*
Nested Loop (actual time=0.045..456.789 rows=15000 loops=1)
  -> Index Scan using idx_customers_country on customers c
       (actual time=0.030..12.340 rows=1000 loops=1)
       Index Cond: (country = 'USA'::text)
  -> Index Scan using idx_orders_customer on orders o
       (actual time=0.005..0.425 rows=15 loops=1000)
                                         ^^^^^^^^^^
       Index Cond: (customer_id = c.customer_id)

CRITICAL UNDERSTANDING:
- Inner scan executes 1000 times (loops=1000)
- Each loop returns ~15 rows on average
- Total inner rows: 15 × 1000 = 15,000
- Total inner time: 0.425ms × 1000 = 425ms

The "actual time" is PER LOOP, not total!
To calculate total contribution: time × loops
*/

-- =========================
-- Estimate vs. Actual Mismatch
-- =========================

-- PostgreSQL: Detect bad estimates
EXPLAIN ANALYZE
SELECT * FROM products p
JOIN order_items oi ON p.product_id = oi.product_id
WHERE p.category = 'Electronics'
  AND oi.quantity > 10;

/*
Hash Join (cost=125.00..2450.00 rows=500 width=120)
          (actual time=12.345..789.012 rows=45000 loops=1)
                                        ^^^^^

Estimated: 500 rows
Actual: 45,000 rows

DIAGNOSIS: 90x underestimate!

Impact of bad estimates:
- Wrong join order selection
- Insufficient memory allocation for hash tables
- Spilling to disk that shouldn't occur
- Incorrect parallelism decisions

SOLUTION: Update statistics
ANALYZE products;
ANALYZE order_items;

If problem persists after ANALYZE:
- Correlation between columns not captured
- Consider extended statistics (PostgreSQL) or histogram tuning
*/

-- =========================
-- MySQL Row Statistics
-- =========================

SELECT
    SQL_TEXT,
    ROWS_EXAMINED,
    ROWS_SENT,
    ROWS_EXAMINED / NULLIF(ROWS_SENT, 0) AS examine_to_send_ratio,
    CASE
        WHEN ROWS_EXAMINED > ROWS_SENT * 1000 THEN 'CRITICAL: Massive over-scan'
        WHEN ROWS_EXAMINED > ROWS_SENT * 100 THEN 'WARNING: Significant over-scan'
        WHEN ROWS_EXAMINED > ROWS_SENT * 10 THEN 'INFO: Moderate filtering'
        ELSE 'GOOD: Efficient access'
    END AS efficiency_rating
FROM performance_schema.events_statements_history
WHERE SQL_TEXT IS NOT NULL
  AND ROWS_SENT > 0
ORDER BY ROWS_EXAMINED DESC
LIMIT 20;

/*
Interpretation guide for examine:send ratio:

Ratio 1:1     → Perfect: Every row examined is returned (ideal)
Ratio 10:1    → Acceptable: Some filtering, typical for WHERE clauses
Ratio 100:1   → Warning: 99% of rows filtered, likely missing index
Ratio 1000:1+ → Critical: Massive table scan, urgent optimization needed
*/
```

A query returning 10 rows might examine 10 million. The result set size tells you nothing about the work performed. Always check rows examined, not just rows returned. The ratio between them is often the single most important efficiency metric.
I/O statistics measure data movement between storage, memory, and the query executor. Since I/O is typically the slowest operation in query execution (disk access is 10,000-100,000× slower than memory access), I/O metrics are critical for performance analysis.
Logical vs. Physical I/O
This distinction is fundamental:
Logical I/O (Buffer Gets) — Page requests made through the database buffer cache (memory). Served from RAM when the page is resident; fast, but still has CPU cost for buffer management.
Physical I/O (Disk Reads) — Pages read from storage into the buffer cache because they were not already resident. Slow, involves disk seek and transfer time.
Buffer Hit Ratio = (Logical I/O − Physical I/O) / Logical I/O
A high buffer hit ratio (>95%) indicates good cache utilization. A low ratio suggests the working set exceeds available memory or access patterns defeat caching.
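For a quick database-wide read on this ratio in PostgreSQL, here is one sketch against the cumulative counters in pg_stat_database. Note that blks_read counts pages fetched from outside shared_buffers, some of which may still be served by the OS page cache rather than physical disk:

```sql
-- Database-wide buffer hit ratio from cumulative statistics (PostgreSQL)
SELECT
    datname,
    blks_hit,        -- page requests satisfied from shared_buffers
    blks_read,       -- page requests that went outside the buffer cache
    ROUND(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS hit_ratio_pct
FROM pg_stat_database
WHERE datname NOT LIKE 'template%'
ORDER BY blks_read DESC;
```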
```sql
-- ================================================================
-- I/O Statistics Deep Dive
-- ================================================================

-- =========================
-- PostgreSQL Buffer Analysis
-- =========================

EXPLAIN (ANALYZE, BUFFERS)
SELECT order_id, customer_id, total, status
FROM orders
WHERE created_at BETWEEN '2024-01-01' AND '2024-06-30'
ORDER BY created_at DESC;

/*
Understanding BUFFERS output:

Sort (actual time=125.456..145.678 rows=50000 loops=1)
  Sort Key: created_at DESC
  Sort Method: external merge  Disk: 4520kB
  Buffers: shared hit=1234 read=5678, temp read=567 written=567
           ^^^^^^^^^^^^^^^ ^^^^^^^^^  ^^^^^^^^^^^^^^^^^^^^^^^^
           |               |          |
           |               |          +-- Temporary file I/O (BAD)
           |               +-- Physical disk reads (slow)
           +-- Buffer cache hits (fast)

METRICS EXPLAINED:

shared hit=1234:
  Pages found in PostgreSQL shared buffer cache
  Fast memory access, but still costs CPU cycles

shared read=5678:
  Pages read from operating system or disk
  Much slower than hits; may involve physical disk I/O

temp read/written=567:
  Pages written to and read from temporary files
  Indicates work_mem exceeded for sort/hash operations
  VERY SLOW: Involves disk I/O for intermediate results

DIAGNOSIS for this query:
- Cache hit ratio: 1234 / (1234 + 5678) = 17.8% (POOR)
- Temporary file usage: Sort spilled to disk
- Action: Increase shared_buffers, work_mem, or add index to avoid sort
*/

-- Calculate buffer efficiency for recent queries
SELECT
    query,
    calls,
    shared_blks_hit,
    shared_blks_read,
    ROUND(100.0 * shared_blks_hit /
          NULLIF(shared_blks_hit + shared_blks_read, 0), 2) AS cache_hit_pct,
    temp_blks_read,
    temp_blks_written,
    CASE
        WHEN temp_blks_written > 0 THEN 'SPILLING TO DISK'
        ELSE 'IN-MEMORY'
    END AS sort_hash_location
FROM pg_stat_statements
WHERE calls > 100
ORDER BY shared_blks_read DESC
LIMIT 20;

-- =========================
-- SQL Server I/O Statistics
-- =========================

SET STATISTICS IO ON;

SELECT p.ProductName, SUM(od.Quantity) AS TotalSold
FROM Products p
JOIN OrderDetails od ON p.ProductID = od.ProductID
GROUP BY p.ProductID, p.ProductName
ORDER BY TotalSold DESC;

SET STATISTICS IO OFF;

/*
SQL Server STATISTICS IO output:

Table 'OrderDetails'. Scan count 1, logical reads 15234, physical reads 4521, read-ahead reads 5000
                                    ^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^^
                                    |                    +-- Pages brought from disk
                                    +-- Total pages accessed

Table 'Products'. Scan count 1, logical reads 52, physical reads 0, read-ahead reads 0

METRIC MEANINGS:

Logical reads:
  Buffer pool page accesses (hit or miss)
  Primary measure of data access volume

Physical reads:
  Actual disk I/O operations
  High count = cold cache or large dataset

Read-ahead reads:
  Speculative I/O (prefetching)
  Good for sequential scans, wasted for random access

EFFICIENCY ANALYSIS:
- OrderDetails: 15234 logical, 4521 physical
- Cache hit ratio: (15234 - 4521) / 15234 = 70.3%
- 30% of accesses required disk I/O

If physical reads are consistently high:
1. Table too large for buffer pool
2. Query accessing infrequently-used data
3. Buffer pool under memory pressure
*/

-- =========================
-- MySQL I/O Analysis
-- =========================

-- Check InnoDB buffer pool efficiency
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

/*
Key metrics:
- Innodb_buffer_pool_read_requests: Total logical reads
- Innodb_buffer_pool_reads: Physical disk reads
- Hit ratio: 1 - (reads / read_requests)

Target: > 99% hit ratio for OLTP workloads
*/

-- Per-query I/O from Performance Schema
SELECT
    DIGEST_TEXT,
    COUNT_STAR AS executions,
    SUM_ROWS_EXAMINED AS total_rows,
    SUM_CREATED_TMP_TABLES AS temp_tables,
    SUM_CREATED_TMP_DISK_TABLES AS disk_temp_tables,
    CASE
        WHEN SUM_CREATED_TMP_DISK_TABLES > 0 THEN 'Uses disk temp tables'
        ELSE 'Memory only'
    END AS temp_table_status
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME = 'your_database'
ORDER BY SUM_CREATED_TMP_DISK_TABLES DESC
LIMIT 20;

/*
CREATED_TMP_DISK_TABLES > 0 is a performance concern:
- Intermediate results don't fit in memory
- Query using filesort or temp tables that spill
- Increase tmp_table_size / max_heap_table_size
- Or optimize query to reduce intermediate result size
*/
```

| Metric Pattern | Diagnosis | Typical Action |
|---|---|---|
| High logical, low physical | Data well-cached | Query is I/O-efficient; optimize elsewhere |
| High physical, moderate logical | Cold cache or working set > memory | Increase buffer pool or reduce data accessed |
| Logical >> Rows returned | Over-scanning; reading unnecessary data | Add indexes on filter columns |
| Temp I/O present | Sort/hash exceeding work memory | Increase work_mem or restructure the query (e.g., index to avoid the sort) so the operation stays in memory |
| High read-ahead, low logical | Prefetch wasted on random access | Pattern mismatch; consider index reorganization |
A single physical disk read (especially on spinning disk) can cost 5-15ms—equivalent to millions of CPU cycles. Even with SSDs (0.1-0.5ms), physical I/O dominates execution time for most queries. When optimizing, reducing physical reads typically yields the highest performance gains.
Memory statistics track workspace memory allocated for query operations like sorting, hashing, and joining. When operations exceed available memory, they spill to disk—dramatically degrading performance.
Key Memory Consumers:
```sql
-- ================================================================
-- Memory Statistics and Sort/Hash Behavior
-- ================================================================

-- =========================
-- PostgreSQL: Sort Method Detection
-- =========================

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM large_table
ORDER BY created_at DESC
LIMIT 1000;

/*
Possible sort methods in output:

1. Sort Method: quicksort  Memory: 1024kB
   → All data fit in work_mem
   → Fast in-memory sort
   → GOOD performance

2. Sort Method: top-N heapsort  Memory: 512kB
   → LIMIT optimization applied
   → Only keeping top N rows in memory
   → EXCELLENT for large table + small LIMIT

3. Sort Method: external merge  Disk: 45000kB
   → Data exceeded work_mem
   → Spilling to temporary files
   → SLOW: Involves disk I/O

DIAGNOSIS:
- If you see "external merge" frequently:
  → Increase work_mem (per-query setting)
  → But be careful: work_mem × concurrent queries = total memory
- If you see "top-N heapsort":
  → LIMIT optimization is working well
  → Efficient even for large datasets
*/

-- Check and adjust work_mem
SHOW work_mem;  -- Default often 4MB

SET work_mem = '256MB';  -- Increase for current session

-- Re-run query and check if sort method changes
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM large_table
ORDER BY created_at DESC
LIMIT 1000;

-- =========================
-- PostgreSQL: Hash Table Analysis
-- =========================

EXPLAIN (ANALYZE, BUFFERS)
SELECT c.customer_name, COUNT(*) AS order_count
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
GROUP BY c.customer_id, c.customer_name;

/*
Hash Join (actual time=45.123..678.456 rows=50000 loops=1)
  Hash Cond: (o.customer_id = c.customer_id)
  -> Seq Scan on orders o (actual time=0.015..234.567 rows=500000 loops=1)
  -> Hash (actual time=25.678..25.678 rows=50000 loops=1)
       Buckets: 65536  Batches: 1  Memory Usage: 3500kB
                       ^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^^^
                       |           +-- Hash table memory footprint
                       +-- Number of hash table passes

BATCHES interpretation:
- Batches: 1 = Entire hash table fits in memory (GOOD)
- Batches: 4 = Hash table split into 4 disk-based batches (BAD)
- Batches: 16+ = Severe memory pressure, many disk passes (VERY BAD)

When Batches > 1:
- Hash table too large for hash_mem_multiplier × work_mem
- Inner relation processed multiple times from disk
- Performance degrades significantly

SOLUTION for Batches > 1:
- Increase work_mem
- Ensure smaller relation is used for hash build
- Consider different join strategy if memory-constrained
*/

-- =========================
-- SQL Server: Memory Grant Analysis
-- =========================

-- View memory grants for running queries
SELECT
    session_id,
    request_id,
    scheduler_id,
    dop,
    request_time,
    grant_time,
    requested_memory_kb,
    granted_memory_kb,
    required_memory_kb,
    used_memory_kb,
    max_used_memory_kb,
    query_cost,
    ideal_memory_kb,
    CASE
        WHEN granted_memory_kb < requested_memory_kb THEN 'MEMORY CONSTRAINED'
        ELSE 'FULL GRANT'
    END AS grant_status
FROM sys.dm_exec_query_memory_grants
WHERE session_id > 50;

/*
Memory grant interpretation:

requested_memory_kb: What query optimizer requested
granted_memory_kb: What was actually available
required_memory_kb: Minimum to run without spilling
ideal_memory_kb: Optimal amount for best performance

WARNING CONDITIONS:
1. granted < requested: Memory pressure, may spill
2. max_used >> granted: Query needs more memory than granted
3. granted >> max_used: Over-estimation, wasting memory

Query waiting for memory grant:
- grant_time IS NULL but request_time IS NOT NULL
- Queries queued waiting for memory
- Consider Resource Governor or reducing concurrent load
*/

-- Check for memory spilling in Query Store
SELECT
    qt.query_sql_text,
    AVG(rs.avg_query_max_used_memory) AS avg_memory_kb,
    MAX(rs.max_query_max_used_memory) AS max_memory_kb,
    AVG(rs.avg_tempdb_space_used) AS avg_tempdb_kb,
    CASE
        WHEN AVG(rs.avg_tempdb_space_used) > 0 THEN 'SPILLING TO TEMPDB'
        ELSE 'IN-MEMORY'
    END AS spill_status
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
GROUP BY qt.query_sql_text
HAVING AVG(rs.avg_tempdb_space_used) > 0
ORDER BY avg_tempdb_kb DESC;
```

When SQL Server queries must wait for memory grants, they queue in resource semaphore waits. These don't show as query execution time—the query appears fast once it runs, but users experience significant delays waiting to start. Monitor for RESOURCE_SEMAPHORE waits in addition to execution statistics.
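To spot that queueing as it happens, a minimal sketch against sys.dm_exec_query_resource_semaphores; a sustained nonzero waiter_count means queries are stalled before execution even begins:

```sql
-- Memory grant queueing at the resource semaphore level (SQL Server)
SELECT
    resource_semaphore_id,
    pool_id,
    target_memory_kb,       -- memory the semaphore tries to maintain
    available_memory_kb,    -- memory still grantable right now
    granted_memory_kb,
    grantee_count,          -- queries currently running with a grant
    waiter_count            -- queries queued waiting for a grant
FROM sys.dm_exec_query_resource_semaphores;
```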
Wait statistics capture time spent waiting for resources—the portion of elapsed time when the query wasn't actively computing. High wait time is often the primary cause of query latency, yet it's invisible without explicit wait tracking.
Common Wait Categories:
| Wait Category | Examples | Indicates | Typical Resolution |
|---|---|---|---|
| I/O Waits | PAGEIOLATCH_*, ASYNC_IO_COMPLETION | Waiting for disk pages | Add memory, faster storage, reduce data volume |
| Lock Waits | LCK_M_S, LCK_M_X, LCK_M_IX | Blocked by other transactions | Reduce transaction scope, optimize blocking queries |
| Latch Waits | PAGELATCH_*, LATCH_EX | Contention on memory structures | Optimize hot pages, reduce contention |
| Memory Waits | RESOURCE_SEMAPHORE | Waiting for memory grant | Reduce concurrent memory-heavy queries, add RAM |
| CPU/Scheduler | SOS_SCHEDULER_YIELD, CXPACKET | CPU saturation or parallelism issues | Reduce CPU-intensive work, tune MAXDOP |
| Network Waits | ASYNC_NETWORK_IO | Client consuming results slowly | Client-side optimization, batch smaller results |
```sql
-- ================================================================
-- Wait Statistics Analysis
-- ================================================================

-- =========================
-- SQL Server: Session Wait Analysis
-- =========================

-- View waits for currently executing queries
SELECT
    r.session_id,
    r.status,
    r.wait_type,
    r.wait_time AS current_wait_ms,
    r.wait_resource,
    r.blocking_session_id,
    r.cpu_time,
    r.total_elapsed_time,
    r.total_elapsed_time - r.cpu_time AS total_wait_time_approx,
    ROUND(100.0 * (r.total_elapsed_time - r.cpu_time) /
          NULLIF(r.total_elapsed_time, 0), 1) AS wait_pct,
    t.text AS query_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id > 50
  AND r.status != 'running'  -- Currently waiting
ORDER BY r.wait_time DESC;

/*
Key wait types to recognize:

LCK_M_S, LCK_M_X, LCK_M_U:
  Lock waits (Shared, eXclusive, Update)
  Blocked by another transaction
  Check blocking_session_id for blocker

PAGEIOLATCH_SH, PAGEIOLATCH_EX:
  I/O waits for data pages
  Disk can't keep up with requests
  Add memory or faster storage

CXPACKET, CXCONSUMER:
  Parallelism synchronization waits
  Normal for parallel queries, but excessive = MAXDOP issue

ASYNC_NETWORK_IO:
  Client consuming results slowly
  Application issue, not database issue

RESOURCE_SEMAPHORE:
  Waiting for memory grant
  Too many memory-intensive concurrent queries
*/

-- Aggregate wait statistics since server start
SELECT
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    max_wait_time_ms,
    signal_wait_time_ms,
    wait_time_ms - signal_wait_time_ms AS resource_wait_ms,
    ROUND(100.0 * wait_time_ms / SUM(wait_time_ms) OVER(), 2) AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE 'SLEEP_%'
  AND wait_type NOT LIKE 'BROKER_%'
  AND wait_type NOT LIKE 'XE_%'
  AND wait_type NOT IN ('WAITFOR', 'CLR_AUTO_EVENT', 'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;

-- =========================
-- PostgreSQL: Wait Event Analysis
-- =========================

-- View current wait events for active sessions
SELECT
    pid,
    usename,
    application_name,
    client_addr,
    state,
    wait_event_type,
    wait_event,
    query_start,
    NOW() - query_start AS query_duration,
    LEFT(query, 100) AS query_preview
FROM pg_stat_activity
WHERE state != 'idle'
  AND pid != pg_backend_pid()
ORDER BY query_start;

/*
PostgreSQL wait event interpretation:

wait_event_type | Meaning
----------------|---------------------------
LWLock          | Lightweight lock (internal)
Lock            | Heavyweight lock (row/table)
BufferPin       | Buffer pinned by another process
Activity        | Background process waiting
Client          | Waiting for client input
Extension       | Extension-related wait
IPC             | Inter-process communication
Timeout         | Timeout-based wait
IO              | I/O operation in progress

wait_event examples:
- 'DataFileRead': Reading data pages from disk
- 'WALSync': Waiting for WAL to sync
- 'Lock:tuple': Row-level lock wait
- 'ClientRead': Waiting for data from client
*/

-- Historical wait analysis via pg_stat_statements + extensions
-- (Requires pg_wait_sampling extension for detailed historical waits)

-- =========================
-- Oracle: Wait Event Analysis
-- =========================

-- Current session waits
SELECT
    s.sid,
    s.serial#,
    s.username,
    s.status,
    sw.event,
    sw.wait_class,
    sw.wait_time,
    sw.seconds_in_wait,
    sw.state,
    s.sql_id,
    SUBSTR(sq.sql_text, 1, 100) AS sql_preview
FROM v$session s
LEFT JOIN v$session_wait sw ON s.sid = sw.sid
LEFT JOIN v$sql sq ON s.sql_id = sq.sql_id
    AND s.sql_child_number = sq.child_number
WHERE s.type = 'USER'
  AND s.status = 'ACTIVE'
  AND sw.event NOT LIKE '%idle%'
ORDER BY sw.seconds_in_wait DESC;

-- Top wait events from ASH
SELECT
    event,
    wait_class,
    COUNT(*) AS sample_count,
    COUNT(*) AS approx_wait_seconds,  -- ASH samples active sessions ~once per second
    ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER(), 2) AS pct_db_time
FROM v$active_session_history
WHERE sample_time > SYSDATE - INTERVAL '1' HOUR
  AND wait_class IS NOT NULL
GROUP BY event, wait_class
ORDER BY sample_count DESC
FETCH FIRST 20 ROWS ONLY;

/*
Critical Oracle wait events:

'db file sequential read': Single-block read (index access)
'db file scattered read': Multi-block read (full scan)
'log file sync': Commit waiting for redo log
'buffer busy waits': Buffer contention
'enq: TX - row lock contention': Row-level blocking
'latch: xxx': Internal latch contention
*/
```

When analyzing waits, trace the chain: Session A waits for Session B which waits for Session C. The root cause is Session C's behavior, not Session A's query. Use blocking_session_id (SQL Server) or pg_blocking_pids() (PostgreSQL) to trace these chains to their source.
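As one way to walk those chains in PostgreSQL, a sketch built on pg_blocking_pids(); it expands each waiter's blockers recursively and reports the sessions at the root of each chain:

```sql
-- Trace blocking chains to their roots (PostgreSQL)
WITH RECURSIVE chain AS (
    -- Waiting sessions and their direct blockers
    SELECT w.pid AS waiter, b.blocker, 1 AS depth
    FROM pg_stat_activity w
    CROSS JOIN LATERAL unnest(pg_blocking_pids(w.pid)) AS b(blocker)
    UNION ALL
    -- Follow blockers that are themselves blocked
    SELECT c.waiter, b.blocker, c.depth + 1
    FROM chain c
    CROSS JOIN LATERAL unnest(pg_blocking_pids(c.blocker)) AS b(blocker)
    WHERE c.depth < 10  -- guard against pathological chains
)
SELECT
    c.waiter,
    c.blocker AS root_blocker,
    c.depth,
    a.state,
    LEFT(a.query, 80) AS root_blocker_query
FROM chain c
JOIN pg_stat_activity a ON a.pid = c.blocker
WHERE cardinality(pg_blocking_pids(c.blocker)) = 0  -- root: not itself blocked
ORDER BY c.waiter, c.depth;
```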
Individual metrics tell partial stories. The relationships between metrics reveal complete performance pictures. Expert analysts recognize patterns across metric combinations that diagnose issues faster than examining metrics in isolation.
```sql
-- ================================================================
-- Metric Pattern Analysis Queries
-- ================================================================

-- =========================
-- Pattern: I/O Efficiency Analysis
-- =========================

-- SQL Server: Find queries with poor I/O efficiency
SELECT TOP 20
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2)+1) AS query,
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_rows,
    qs.total_logical_reads / NULLIF(qs.total_rows, 0) AS reads_per_row,
    CASE
        WHEN qs.total_logical_reads / NULLIF(qs.total_rows, 0) > 1000
            THEN 'CRITICAL: Severe over-reading'
        WHEN qs.total_logical_reads / NULLIF(qs.total_rows, 0) > 100
            THEN 'WARNING: Significant over-reading'
        WHEN qs.total_logical_reads / NULLIF(qs.total_rows, 0) > 10
            THEN 'MODERATE: Some filtering overhead'
        ELSE 'GOOD: Efficient I/O'
    END AS io_efficiency
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.total_rows > 0
ORDER BY reads_per_row DESC;

-- =========================
-- Pattern: CPU vs Wait Time
-- =========================

-- SQL Server: Identify wait-bound vs CPU-bound queries
SELECT TOP 20
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1, 200) AS query_preview,
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    (qs.total_elapsed_time - qs.total_worker_time) / 1000 AS total_wait_ms,
    ROUND(100.0 * qs.total_worker_time /
          NULLIF(qs.total_elapsed_time, 0), 1) AS cpu_pct,
    CASE
        WHEN 100.0 * qs.total_worker_time / NULLIF(qs.total_elapsed_time, 0) < 30
            THEN 'WAIT-BOUND: Focus on waits/blocking'
        WHEN 100.0 * qs.total_worker_time / NULLIF(qs.total_elapsed_time, 0) > 80
            THEN 'CPU-BOUND: Optimize algorithm/indexes'
        ELSE 'MIXED: Check both aspects'
    END AS optimization_focus
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.total_elapsed_time > 1000000  -- > 1 second total
ORDER BY total_elapsed_ms DESC;

-- =========================
-- Pattern: Estimate Accuracy Analysis
-- =========================

-- PostgreSQL: Find queries with estimate mismatches
-- Run this after EXPLAIN ANALYZE on target queries

/*
From EXPLAIN ANALYZE output, look for:

Hash Join (cost=... rows=1000 width=...)
          (actual time=... rows=150000 loops=1)
                           ^^^^^^^

Estimate: 1000 rows
Actual: 150000 rows
Ratio: 150x underestimate

Systematic approach:
1. Extract estimated vs actual from plan nodes
2. Calculate ratio for each node
3. Flag nodes with ratio > 10 or < 0.1

Causes of bad estimates:
- Stale statistics (ANALYZE not run recently)
- Correlated predicates (statistics assume independence)
- Complex expressions (optimizer can't estimate)
- Skewed data (outliers not captured in histograms)
*/

-- PostgreSQL: Check table statistics freshness
SELECT
    schemaname,
    relname,
    n_live_tup AS estimated_rows,
    n_dead_tup AS dead_rows,
    last_vacuum,
    last_autovacuum,
    last_analyze,
    last_autoanalyze,
    CASE
        WHEN last_analyze IS NULL AND last_autoanalyze IS NULL
            THEN 'NEVER ANALYZED'
        WHEN GREATEST(last_analyze, last_autoanalyze) < NOW() - INTERVAL '7 days'
            THEN 'STALE STATISTICS'
        ELSE 'RECENT STATISTICS'
    END AS stats_status
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;

-- =========================
-- Pattern: Memory Pressure Detection
-- =========================

-- SQL Server: Queries affected by memory pressure
SELECT
    qt.query_sql_text,
    COUNT(DISTINCT p.plan_id) AS plan_count,
    AVG(rs.avg_query_max_used_memory) AS avg_memory_kb,
    MAX(rs.max_query_max_used_memory) AS max_memory_kb,
    AVG(rs.avg_tempdb_space_used) AS avg_tempdb_kb,
    MAX(rs.max_tempdb_space_used) AS max_tempdb_kb,
    CASE
        WHEN MAX(rs.max_tempdb_space_used) > AVG(rs.avg_query_max_used_memory)
            THEN 'MEMORY CONSTRAINED: Spilling exceeds grants'
        WHEN MAX(rs.max_tempdb_space_used) > 0
            THEN 'SOME SPILLING: Occasional memory pressure'
        ELSE 'GOOD: Operating in-memory'
    END AS memory_status
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
GROUP BY qt.query_sql_text
HAVING MAX(rs.max_tempdb_space_used) > 0
ORDER BY max_tempdb_kb DESC;
```

Effective statistics interpretation requires both technical knowledge and analytical discipline. These best practices ensure you draw accurate conclusions from execution statistics.
Statistics are meaningless without context. Before optimizing, establish baseline metrics under normal conditions. After changes, compare against this baseline. A 50ms query time is neither good nor bad until you know it was previously 500ms (improvement) or 5ms (regression).
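One lightweight way to keep such a baseline, sketched here with pg_stat_statements (column names follow PostgreSQL 13+; the perf_baseline table name is hypothetical):

```sql
-- Snapshot per-query statistics as a baseline (PostgreSQL, pg_stat_statements)
CREATE TABLE perf_baseline AS  -- hypothetical scratch table
SELECT queryid, query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements;

-- Later, after changes have soaked, compare current behavior to the baseline
SELECT
    cur.query,
    b.mean_exec_time AS baseline_mean_ms,
    cur.mean_exec_time AS current_mean_ms,
    ROUND((cur.mean_exec_time / NULLIF(b.mean_exec_time, 0))::numeric, 2) AS ratio
FROM pg_stat_statements cur
JOIN perf_baseline b USING (queryid)
ORDER BY ratio DESC;  -- ratios well above 1.0 flag regressions
```

Calling pg_stat_statements_reset() right after taking the snapshot keeps the comparison window clean, since the view's counters are cumulative.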
Execution statistics form the vocabulary of query performance analysis. Fluency in interpreting these metrics enables rapid diagnosis of performance issues and confident optimization decisions.
What's next:
Now that you understand execution statistics interpretation, the next page examines techniques for identifying slow queries—how to systematically discover which queries in your workload require optimization attention, using both aggregate analysis and threshold-based detection.
You now possess comprehensive knowledge of SQL execution statistics—their categories, interpretation, and diagnostic patterns. This foundation enables you to read profiler output with expert fluency, transforming raw numbers into actionable performance insights.