Consider two approaches to securing valuables in a building. You could install a single lock on the front door—simple, but everyone must wait for the key holder. Or you could install individual locks on each drawer, cabinet, and safe—more complex, but multiple people can access different items simultaneously.
This is the essence of lock granularity in databases: the size of the data unit that a single lock protects. Lock an entire table, and management is simple but concurrency suffers. Lock individual rows, and concurrency is maximized but overhead accumulates.
Lock granularity is one of the most impactful decisions in database concurrency control. It directly affects transaction throughput, response time, memory usage, and system complexity. In this page, we explore the granularity spectrum, the tradeoffs at each level, and the sophisticated mechanisms databases use to dynamically balance these concerns.
By the end of this page, you will understand:

- The hierarchy of lockable resources (row, page, table, database)
- The fundamental tradeoffs between fine and coarse granularity
- How intention locks enable efficient multi-granularity locking
- Lock escalation strategies and when they trigger
- Practical guidance for choosing appropriate granularity
Lock granularity exists on a spectrum from the entire database down to individual field values. Each level represents a different balance point between overhead and concurrency.
| Level | What's Locked | Concurrency | Overhead | Use Case |
|---|---|---|---|---|
| Database | Entire database | Minimal (1 tx at a time) | Minimal | Backup, major maintenance |
| Table | All rows in a table | Low (1 tx per table) | Very Low | DDL operations, bulk loads |
| Page/Block | All rows on a page (e.g., 8KB) | Medium | Low | Systems without row locking |
| Row/Record | Single row | High | Medium | Standard transactional operations |
| Column/Field | Single field in a row | Highest | Very High | Rare; specific optimization |
The Fundamental Tradeoff:
As we move down the hierarchy (coarse → fine):

- Concurrency increases: transactions working on different parts of the data no longer block one another.
- Overhead increases: more locks must be acquired, tracked, and released, consuming memory and CPU.
This tradeoff is inescapable. The goal is to find the sweet spot for a given workload.
If row-level maximizes concurrency, why not always use it? Because locking 100,000 rows individually requires 100,000 lock table entries, consuming significant memory and CPU. For bulk operations, a single table lock is far more efficient. This is why lock escalation exists.
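To make that overhead concrete, here is a rough back-of-the-envelope sketch in TypeScript. The 100-bytes-per-lock figure is an assumption for illustration only; actual lock structure sizes vary by engine.

```typescript
// Rough cost model for fine-grained locking of a bulk update.
// BYTES_PER_LOCK is an assumed illustrative value; real engines differ.
const BYTES_PER_LOCK = 100;

function lockTableMemoryBytes(rowsLocked: number): number {
  // One lock-table entry per locked row
  return rowsLocked * BYTES_PER_LOCK;
}

const rows = 100_000;
const perTxBytes = lockTableMemoryBytes(rows);
console.log(`${rows} row locks ≈ ${(perTxBytes / 1_048_576).toFixed(1)} MB of lock-table memory`);
// => "100000 row locks ≈ 9.5 MB of lock-table memory"

// With 50 such bulk updates running concurrently:
console.log(`50 concurrent bulk updates ≈ ${((perTxBytes * 50) / 1_048_576).toFixed(0)} MB`);
// => roughly 477 MB for lock entries alone, versus 50 entries total with table locks
```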
Coarse-grained locks cover large units of data. While they limit concurrency, they shine in specific scenarios where simplicity and low overhead matter more than parallelism.
Database-Level Locks lock the entire database, effectively serializing all access.
```sql
-- SQL Server: Set database to single-user mode
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- All other connections terminated; exclusive access

-- Perform maintenance...

-- Return to multi-user
ALTER DATABASE MyDatabase SET MULTI_USER;

-- SQLite: Database-level locking modes
PRAGMA locking_mode=EXCLUSIVE;
-- Database locked exclusively until connection closes
```

Database-level locks effectively take the application offline. Use only during maintenance windows or for non-production systems.
Fine-grained locking maximizes concurrency by locking the smallest practical unit of data. Most modern transactional databases use row-level locking as the default.
Row-Level Locks are the gold standard for OLTP systems. Each row can be accessed independently.
```sql
-- Transaction 1: Update row 101
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 101;
-- X-lock on row 101 only

-- Transaction 2 (concurrent): Update row 102
BEGIN;
UPDATE accounts SET balance = balance + 50 WHERE id = 102;
-- X-lock on row 102
-- SUCCEEDS - no conflict with T1!

-- Transaction 3 (concurrent): Update row 101
BEGIN;
UPDATE accounts SET balance = balance + 25 WHERE id = 101;
-- BLOCKED - waiting for T1's X-lock on row 101

-- Key insight: T2 proceeds because it touches a different row;
-- T3 waits because it targets the same row as T1
```

The industry has largely converged on row-level locking for OLTP workloads. It offers the best balance of concurrency and overhead. Page-level locking survives in some legacy systems; field-level locking is largely academic. When in doubt, trust the default.
Choosing lock granularity involves balancing multiple competing concerns. Let's analyze these tradeoffs quantitatively.
Lock Overhead Model:
Consider a transaction that touches N rows in a table:
| Granularity | Locks Acquired | Lock Table Memory | Acquisition Time |
|---|---|---|---|
| Table | 1 | O(1) | O(1) |
| Page | N/R (R = rows per page) | O(N/R) | O(N/R) |
| Row | N | O(N) | O(N) |
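As a concrete instance of the model above, the short sketch below counts the locks a transaction acquires at each granularity. The value R = 100 rows per page is an assumed figure for illustration.

```typescript
// Locks acquired at each granularity for a transaction touching N rows,
// following the model above. rowsPerPage (R) is an assumed value.
function locksAcquired(n: number, rowsPerPage: number) {
  return {
    table: 1,                         // O(1)
    page: Math.ceil(n / rowsPerPage), // O(N/R)
    row: n,                           // O(N)
  };
}

console.log(locksAcquired(50_000, 100));
// => { table: 1, page: 500, row: 50000 }
```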
| Factor | Favors Coarse | Favors Fine |
|---|---|---|
| Workload Type | Bulk/batch operations | OLTP, many small transactions |
| Access Pattern | Full table scans | Point queries, small ranges |
| Concurrency Requirement | Low | High |
| Transaction Duration | Long-running | Short, frequent |
| Contention Level | Low (different transactions touch same data rarely) | High (frequent conflicts on same rows) |
| Memory Constraints | Limited lock table memory | Ample memory available |
| Deadlock Tolerance | Must be avoided | Can be handled with retries |
```typescript
// Decision Support: Choosing Lock Granularity

interface WorkloadCharacteristics {
  transactionsPerSecond: number;
  avgRowsPerTransaction: number;
  concurrentTransactions: number;
  readToWriteRatio: number;  // e.g., 0.9 means 90% reads
  hotSpotFrequency: number;  // how often transactions touch the same rows
}

function recommendGranularity(workload: WorkloadCharacteristics): string {
  // Calculate expected lock count
  const rowLocksPerSecond =
    workload.transactionsPerSecond * workload.avgRowsPerTransaction;

  // Calculate contention factor
  const contentionFactor =
    workload.concurrentTransactions * workload.hotSpotFrequency;

  // Decision logic
  if (workload.avgRowsPerTransaction > 10000) {
    return "TABLE_LEVEL - Bulk operation pattern detected";
  }
  if (rowLocksPerSecond > 100000 && workload.concurrentTransactions < 10) {
    return "TABLE_LEVEL - High volume, low concurrency; reduce overhead";
  }
  if (contentionFactor > 0.5 && workload.concurrentTransactions > 50) {
    return "ROW_LEVEL - High concurrency with contention; maximize parallelism";
  }
  if (workload.readToWriteRatio > 0.95) {
    return "ROW_LEVEL with MVCC - Read-heavy; leverage snapshots";
  }
  return "ROW_LEVEL - Default recommendation for balanced OLTP";
}

// Example usage:
const eCommerceWorkload: WorkloadCharacteristics = {
  transactionsPerSecond: 1000,
  avgRowsPerTransaction: 5,
  concurrentTransactions: 200,
  readToWriteRatio: 0.8,
  hotSpotFrequency: 0.1  // 10% of transactions touch the same popular products
};

console.log(recommendGranularity(eCommerceWorkload));
// Output: "ROW_LEVEL - High concurrency with contention; maximize parallelism"
// (200 concurrent transactions with a 10% hot-spot rate trips the contention branch)
```

There is no universally optimal granularity. The best choice depends on your specific workload. Most databases default to row-level for good reason, but bulk operations benefit tremendously from table-level. Design your schema and transactions with lock behavior in mind.
Modern databases don't force a choice between row-level and table-level locking. They support multiple granularities simultaneously using intention locks.
The Problem Without Intention Locks:
Suppose Transaction T1 wants an X-lock on the entire table. How does it know if any row is already locked? Without intention locks, it would need to scan every row's lock—O(n) for n rows.
The Solution:
Before acquiring a fine-grained lock, the transaction first acquires intention locks on all ancestor levels. This provides a quick check: if the table has no IX locks, no rows in it can have X locks.
```sql
-- Multi-Granularity Locking Example

-- Transaction T1: Update single row in employees
BEGIN;
-- System automatically acquires:
-- 1. IX on database (if tracked)
-- 2. IX on employees table
-- 3. IX on page containing row
-- 4. X on the specific row
UPDATE employees SET salary = 75000 WHERE employee_id = 12345;
COMMIT;
-- All locks released

-- Transaction T2 (concurrent): Read different row
BEGIN;
-- System acquires:
-- 1. IS on employees table (compatible with T1's IX)
-- 2. IS on different page (or same page - IS compatible with IX)
-- 3. S on row 67890
SELECT * FROM employees WHERE employee_id = 67890;
-- SUCCEEDS - IS and IX are compatible at table level
COMMIT;

-- Transaction T3 (concurrent): Full table lock for maintenance
BEGIN;
-- System requests S on employees table
-- BLOCKED if T1 is still running (S conflicts with IX)
LOCK TABLE employees IN SHARE MODE;
-- Will proceed after T1 commits
COMMIT;
```

In most databases, you don't manually manage intention locks. The system automatically acquires the appropriate intention locks based on your SQL operations. Understanding the protocol helps diagnose blocking behavior.
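The blocking behavior in the example above falls directly out of the standard multi-granularity compatibility matrix. The TypeScript sketch below encodes the textbook rules for IS, IX, S, SIX, and X; it illustrates the protocol, not any particular engine's implementation.

```typescript
// Standard multi-granularity lock compatibility matrix (textbook rules).
type LockMode = "IS" | "IX" | "S" | "SIX" | "X";

const compatible: Record<LockMode, Record<LockMode, boolean>> = {
  IS:  { IS: true,  IX: true,  S: true,  SIX: true,  X: false },
  IX:  { IS: true,  IX: true,  S: false, SIX: false, X: false },
  S:   { IS: true,  IX: false, S: true,  SIX: false, X: false },
  SIX: { IS: true,  IX: false, S: false, SIX: false, X: false },
  X:   { IS: false, IX: false, S: false, SIX: false, X: false },
};

function canGrant(requested: LockMode, held: LockMode[]): boolean {
  // A lock is granted only if it is compatible with every lock already held.
  return held.every((h) => compatible[requested][h]);
}

// T1 holds IX on the table (it updated one row); T2 requests IS to read a row:
console.log(canGrant("IS", ["IX"])); // true  -> T2 proceeds
// T3 requests a table-level S lock while T1's IX is still held:
console.log(canGrant("S", ["IX"]));  // false -> T3 blocks until T1 commits
```

Note how the matrix lets T1's table-level intention lock answer T3's request in constant time: no scan of individual row locks is needed.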
Lock escalation is the automatic promotion of many fine-grained locks to a single coarse-grained lock. It's a pragmatic optimization to prevent runaway lock overhead.
Why Escalation Happens:
Consider a transaction that updates 50,000 rows. With row-level locking:

- 50,000 separate lock entries must be acquired and tracked in the lock table
- Lock memory consumption and acquisition time grow with every row touched

After escalation to table-level:

- A single table lock replaces all 50,000 row locks, reclaiming lock-table memory and CPU
- The cost: no other transaction can touch any row in the table until the lock is released
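A minimal sketch of how an escalation policy might work is shown below. The threshold mirrors the roughly 5,000-lock default listed for SQL Server in the table that follows, but the data structures and trigger logic here are purely illustrative, not any engine's actual code.

```typescript
// Illustrative lock manager that escalates row locks to a table lock
// once a transaction crosses a threshold (~5,000 is SQL Server's documented default).
const ESCALATION_THRESHOLD = 5000;

class TransactionLocks {
  private rowLocks = new Set<number>();
  private tableLocked = false;

  lockRow(table: string, rowId: number): void {
    if (this.tableLocked) return; // the table lock already covers every row
    this.rowLocks.add(rowId);
    if (this.rowLocks.size >= ESCALATION_THRESHOLD) {
      // Escalate: replace thousands of row locks with one table lock.
      console.log(`Escalating ${this.rowLocks.size} row locks on ${table} to a table lock`);
      this.rowLocks.clear();
      this.tableLocked = true;
    }
  }

  lockCount(): number {
    return this.tableLocked ? 1 : this.rowLocks.size;
  }
}

// A bulk update touching 50,000 rows:
const tx = new TransactionLocks();
for (let id = 1; id <= 50_000; id++) tx.lockRow("orders", id);
console.log(tx.lockCount()); // 1 -- a single table lock after escalation
```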
| Database | Default Threshold | Configurable? | Notes |
|---|---|---|---|
| SQL Server | ~5,000 locks OR 40% of lock memory | Yes (ALTER TABLE) | Can disable per-table |
| PostgreSQL | N/A | N/A | No escalation; relies on MVCC |
| MySQL InnoDB | N/A | N/A | No automatic escalation |
| Oracle | None | N/A | Never escalates; row lock information is stored in the data block itself |
| DB2 | Exceeds MAXLOCKS % of LOCKLIST memory | Yes | Governed by the LOCKLIST and MAXLOCKS parameters |
```sql
-- SQL Server: Lock Escalation Examples

-- View lock escalation activity per table
-- (sys.dm_db_index_operational_stats tracks lock promotion attempts and counts)
SELECT OBJECT_NAME(ios.object_id) AS TableName,
       SUM(ios.index_lock_promotion_attempt_count) AS EscalationAttempts,
       SUM(ios.index_lock_promotion_count)         AS EscalationCount
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
GROUP BY ios.object_id;

-- Disable lock escalation on a table
ALTER TABLE orders SET (LOCK_ESCALATION = DISABLE);
-- Warning: may exhaust lock memory if many rows are locked

-- Allow escalation only to table level (never to partition)
ALTER TABLE orders SET (LOCK_ESCALATION = TABLE);

-- Allow escalation to partition level first (for partitioned tables)
ALTER TABLE orders SET (LOCK_ESCALATION = AUTO);

-- Trigger escalation intentionally for testing
UPDATE large_table SET status = 'processed';
-- If touching > 5,000 rows, likely to escalate
```

After escalation, even transactions that touch only rows the escalated transaction never modified are blocked, which can cause unexpected latency spikes in applications. Monitor escalation events and consider restructuring transactions that frequently trigger escalation.
Armed with theoretical understanding, let's distill practical guidance for real-world systems.
| Scenario | Recommended Granularity | Rationale |
|---|---|---|
| Standard web app | Row (default) | Many concurrent users, small transactions |
| Nightly ETL job | Table (explicit) | Processing millions of rows; block concurrent access |
| Analytics dashboard | Row + MVCC | Read-heavy; snapshots avoid blocking writers |
| High-frequency trading | Row + custom tuning | Microsecond latency requires careful optimization |
| Bulk data migration | Table per batch | Minimize lock overhead; accept blocking |
```sql
-- Pattern 1: Batch processing to avoid escalation
DECLARE @batch_size INT = 4000;  -- stays below the ~5,000-lock escalation threshold
DECLARE @rows_affected INT = 1;

WHILE @rows_affected > 0
BEGIN
    BEGIN TRANSACTION;

    UPDATE TOP (@batch_size) orders
    SET processed = 1
    WHERE processed = 0;

    SET @rows_affected = @@ROWCOUNT;

    COMMIT;
    -- Each batch: row-level locks, released at commit
    -- Never exceeds the escalation threshold
END

-- Pattern 2: Explicit table lock for maintenance
BEGIN TRANSACTION;

-- Perform bulk deletion with an exclusive table lock taken up front
-- (T-SQL table hint) - signals intent, prevents escalation surprises mid-operation
DELETE FROM audit_log WITH (TABLOCKX)
WHERE log_date < DATEADD(year, -1, GETDATE());

-- Single table lock throughout; clear blocking behavior
COMMIT;

-- Pattern 3: Skip-locked-style parallel queue processing
-- (READPAST in T-SQL, SKIP LOCKED in other dialects)
-- Each worker gets different rows; no blocking
BEGIN TRANSACTION;

SELECT TOP 1 *
FROM work_queue WITH (ROWLOCK, XLOCK, READPAST)
WHERE status = 'pending'
ORDER BY created_at;

-- Process the item...
UPDATE work_queue SET status = 'complete' WHERE id = @selected_id;

COMMIT;
```

For 80% of applications, the database's default row-level locking is perfect. Spend your optimization effort on the 20% of operations that actually cause contention. Don't over-engineer lock granularity unless you have measured problems.
Lock granularity is a critical decision point in database concurrency control, balancing overhead against concurrency. Let's consolidate the key insights:

- Granularity spans a spectrum from the whole database down to individual rows; finer locks raise concurrency but multiply lock-management overhead.
- Row-level locking is the default and the right choice for most OLTP workloads; table-level locks win for bulk and maintenance operations.
- Intention locks (IS, IX) make multi-granularity locking efficient, letting a table-level request be checked without scanning every row lock.
- Lock escalation automatically trades concurrency for reduced overhead when a transaction accumulates too many fine-grained locks; monitor it, and batch large updates to avoid surprises.
- Choose granularity based on measured workload characteristics, not intuition.
Module Complete: Lock Concepts
Congratulations! You've completed the Lock Concepts module. You now understand:

- What locks are and the role they play in concurrency control
- Shared (S) and exclusive (X) lock types and their compatibility rules
- The lock granularity hierarchy, from database down to row
- Intention locks, multi-granularity locking, and lock escalation
With this foundation, you're ready to see how these lock primitives are organized into protocols that guarantee serializability. The next module explores Two-Phase Locking (2PL) and its variants, the protocols that use these locks to produce serializable transaction schedules.