Throughout this module, we've defined unrepeatable reads, explored how they manifest in real scenarios, examined read inconsistency patterns, and quantified the business impact. The natural question is: How do we prevent them?
The good news is that database systems provide robust mechanisms for preventing unrepeatable reads. The challenge lies in understanding when to apply them, what trade-offs they introduce, and how to choose the right approach for your specific use case.
This page presents a comprehensive toolkit for preventing unrepeatable reads—from SQL isolation levels to explicit locking, from MVCC internals to application-level design patterns. By the end, you'll know exactly how to protect your transactions from reading stale data.
By the end of this page, you will understand:
- How SQL isolation levels prevent unrepeatable reads
- When to use explicit locks vs. isolation levels
- How MVCC provides read consistency
- Database-specific implementation details
- Application design patterns that reduce risk
- A decision framework for choosing the right approach
The SQL standard defines four isolation levels that progressively prevent more concurrency anomalies. For unrepeatable reads specifically, you need at least REPEATABLE READ.
The Four Standard Isolation Levels:
| Level | Dirty Read | Unrepeatable Read | Phantom Read | Use Case |
|---|---|---|---|---|
| READ UNCOMMITTED | Possible | Possible | Possible | Rare—approximate counts only |
| READ COMMITTED | Prevented | Possible | Possible | Default in many DBs; single-statement consistency |
| REPEATABLE READ | Prevented | Prevented | Possible* | Multi-statement consistency needed |
| SERIALIZABLE | Prevented | Prevented | Prevented | Full isolation; critical operations |
*Note: Many implementations (PostgreSQL, MySQL InnoDB) also prevent phantoms at REPEATABLE READ through MVCC or next-key locking.
Setting Isolation Level:
-- Transaction level
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- All reads in this transaction see the same snapshot
COMMIT;
-- Session level (applies to all subsequent transactions)
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- Statement in BEGIN (PostgreSQL)
BEGIN ISOLATION LEVEL REPEATABLE READ;
How REPEATABLE READ Prevents Unrepeatable Reads:
When a transaction runs at REPEATABLE READ, the database takes a snapshot at its first read and serves every subsequent read from that same snapshot. This guarantees that reading the same row twice always returns the same value—the unrepeatable read is impossible by construction.
-- ==========================================
-- Demonstration: REPEATABLE READ prevents unrepeatable reads
-- Run in two sessions to observe the difference
-- ==========================================

-- Session 1: Reader transaction (REPEATABLE READ)
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- First read
SELECT balance FROM accounts WHERE id = 1;
-- Returns: 1000

-- [Session 2 now runs: UPDATE accounts SET balance = 500 WHERE id = 1; COMMIT;]

-- Second read (after Session 2 commits)
SELECT balance FROM accounts WHERE id = 1;
-- Still returns: 1000 (snapshot preserved!)

-- Verify we're seeing old data
-- (Session 2's change won't be visible until we commit)
SELECT NOW(), txid_current();  -- Transaction info for debugging

COMMIT;

-- After commit, new transaction sees updated value
BEGIN;
SELECT balance FROM accounts WHERE id = 1;
-- Now returns: 500
COMMIT;

-- ==========================================
-- Comparison: READ COMMITTED allows unrepeatable reads
-- ==========================================

-- Session 1: Reader transaction (READ COMMITTED)
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

SELECT balance FROM accounts WHERE id = 1;
-- Returns: 500

-- [Session 2: UPDATE accounts SET balance = 300 WHERE id = 1; COMMIT;]

SELECT balance FROM accounts WHERE id = 1;
-- Returns: 300 (different! unrepeatable read occurred)

COMMIT;

PostgreSQL and Oracle default to READ COMMITTED. SQL Server defaults to READ COMMITTED but can be configured for SNAPSHOT. MySQL InnoDB defaults to REPEATABLE READ. Always verify your database's default and explicitly set isolation level for critical transactions.
While isolation levels provide automatic protection, explicit locking gives you fine-grained control over what data is protected and how. This is especially useful when you need both read consistency AND write coordination.
Types of Explicit Locks:
| Clause | Lock Type | Blocks | Use Case |
|---|---|---|---|
| FOR UPDATE | Exclusive (X) | Other FOR UPDATE, FOR SHARE | I will modify this row |
| FOR NO KEY UPDATE | Exclusive (weaker) | Other FOR UPDATE, FOR SHARE | Update but not key columns |
| FOR SHARE | Shared (S) | FOR UPDATE only | Protect from modification |
| FOR KEY SHARE | Shared (weaker) | FOR UPDATE only | Protect from key modification |
FOR UPDATE: When You'll Modify What You Read
The most common locking pattern is SELECT ... FOR UPDATE, which locks the selected rows until the transaction ends so that no other transaction can modify or lock them in the meantime:
BEGIN;
-- Lock the row while reading
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
-- Returns: 1000
-- Lock prevents other transactions from updating this row
-- Safe to do calculations based on the locked value
-- No other transaction can change it underneath us
UPDATE accounts SET balance = balance - 500 WHERE id = 1;
COMMIT; -- Lock released
FOR SHARE: When You Need Stability But Won't Modify
If you only need to ensure the data doesn't change but won't update it yourself:
BEGIN;
-- Prevent modifications during calculation
SELECT base_rate, multiplier
FROM pricing
WHERE product_id = 42
FOR SHARE;
-- Calculate price knowing values won't change
-- Other readers can still read; writers must wait
INSERT INTO quotes (product_id, price, created_at)
VALUES (42, base_rate * multiplier, NOW());
COMMIT;
-- ==========================================
-- Pattern 1: Pessimistic Locking for Updates
-- ==========================================

-- Classic "check and update" with locking
-- (Illustrative: the @variables and IF/END IF use MySQL stored-program syntax;
--  in other databases the balance check typically lives in application code)
BEGIN;

-- Lock source account
SELECT balance INTO @source_balance
FROM accounts
WHERE id = 1
FOR UPDATE;

-- Lock destination account
SELECT balance INTO @dest_balance
FROM accounts
WHERE id = 2
FOR UPDATE;

-- Both accounts now locked - safe to verify and transfer
IF @source_balance >= 500 THEN
    UPDATE accounts SET balance = balance - 500 WHERE id = 1;
    UPDATE accounts SET balance = balance + 500 WHERE id = 2;
END IF;

COMMIT;

-- ==========================================
-- Pattern 2: Lock Ordering to Prevent Deadlock
-- ==========================================

-- BAD: Can deadlock if another transaction locks in opposite order
-- Session 1: SELECT ... FROM accounts WHERE id = 1 FOR UPDATE;
--            SELECT ... FROM accounts WHERE id = 2 FOR UPDATE;
-- Session 2: SELECT ... FROM accounts WHERE id = 2 FOR UPDATE;
--            SELECT ... FROM accounts WHERE id = 1 FOR UPDATE;

-- GOOD: Always lock in consistent order (e.g., by ID)
BEGIN;

SELECT * FROM accounts
WHERE id IN (1, 2)
ORDER BY id
FOR UPDATE;

-- Both rows locked in deterministic order

COMMIT;

-- ==========================================
-- Pattern 3: SKIP LOCKED for Queue Processing
-- ==========================================

-- Process jobs without waiting for locked rows
BEGIN;

SELECT id, payload
FROM job_queue
WHERE status = 'pending'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- If a row is available and not locked by another worker:
UPDATE job_queue SET status = 'processing' WHERE id = @job_id;

COMMIT;
-- Process the job...

-- ==========================================
-- Pattern 4: NOWAIT for Fast Failure
-- ==========================================

BEGIN;

SELECT * FROM accounts WHERE id = 1 FOR UPDATE NOWAIT;
-- If row is already locked: ERROR instead of waiting
-- Use in time-sensitive operations where waiting is unacceptable

COMMIT;

Explicit locking is powerful but introduces deadlock risk. Always acquire locks in a consistent order and consider using NOWAIT or lock timeouts in high-contention scenarios. Monitor for deadlock occurrences in production.
Multi-Version Concurrency Control (MVCC) is the mechanism that enables REPEATABLE READ to work without blocking readers. Understanding MVCC helps you appreciate what the database does to provide read consistency.
How MVCC Works:
Instead of overwriting data when updated, MVCC maintains multiple versions of each row:
Row ID: 42
┌─────────────────────────────────────────────────────────────┐
│ Version 1: value = 100, created by TXN 100, valid until TXN 200 │
│ Version 2: value = 150, created by TXN 200, valid until TXN 300 │
│ Version 3: value = 200, created by TXN 300, current │
└─────────────────────────────────────────────────────────────┘
When TXN 250 reads Row 42:
- TXN 250 started after TXN 200 committed but before TXN 300
- TXN 250 sees Version 2 (value = 150)
- TXN 300's write is invisible to TXN 250
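To make the visibility rule concrete, here is a minimal Python sketch of the idea. It is a simplification that treats transaction IDs as commit order, and the RowVersion and visible names are illustrative, not any engine's actual implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RowVersion:
    value: int
    created_by: int               # transaction that wrote this version
    superseded_by: Optional[int]  # transaction that replaced it, or None if current

def visible(version: RowVersion, snapshot_txn: int, committed: set) -> bool:
    """Simplified visibility: a version is visible if it was written by a transaction
    that committed before our snapshot and has not been replaced by one."""
    created_ok = version.created_by in committed and version.created_by < snapshot_txn
    superseded = (
        version.superseded_by is not None
        and version.superseded_by in committed
        and version.superseded_by < snapshot_txn
    )
    return created_ok and not superseded

# The three versions of Row 42 from the diagram above
versions = [
    RowVersion(value=100, created_by=100, superseded_by=200),
    RowVersion(value=150, created_by=200, superseded_by=300),
    RowVersion(value=200, created_by=300, superseded_by=None),
]

committed = {100, 200, 300}
# TXN 250 started after TXN 200 committed but before TXN 300
print([v.value for v in versions if visible(v, snapshot_txn=250, committed=committed)])
# -> [150]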
Key Components:
- Transaction IDs: every transaction receives an ID used to stamp the versions it creates
- Row versions: each update creates a new version rather than overwriting the old one
- Snapshots: each transaction (or statement) records which transactions it is allowed to see
- Visibility rules: a read returns the newest version visible to its snapshot
- Garbage collection: old versions are removed once no active transaction needs them
PostgreSQL MVCC Implementation:
-- Viewing MVCC metadata (PostgreSQL)
SELECT
xmin, -- Transaction that created this version
xmax, -- Transaction that deleted/updated this version (0 if current)
cmin, -- Command within xmin that created this
cmax, -- Command within xmax that deleted this
ctid, -- Physical location (page, tuple)
*
FROM accounts
WHERE id = 1;
-- Understanding snapshot
SELECT
txid_current() AS my_txn_id,
txid_current_snapshot() AS snapshot_info;
-- Snapshot format: xmin:xmax:xip_list
-- Example: 100:105:102,103
-- Means: transactions with XID < 100 are visible if they committed,
--        XIDs 102 and 103 were still in progress and are not visible,
--        XIDs >= 105 had not started yet and are not visible
MySQL InnoDB MVCC:
InnoDB uses a different approach with undo logs:
Current row data → Latest version
Undo log entry → Previous version
Undo log entry → Even older version
Readers follow undo log chains to find the version valid for their snapshot.
MVCC Trade-offs:
| Advantage | Disadvantage |
|---|---|
| Readers don't block writers | Storage overhead for versions |
| Writers don't block readers | Garbage collection required |
| High read concurrency | Long transactions hold old versions |
| No read locks needed | Write skew still possible |
Old row versions accumulate until no active transaction needs them. Long-running transactions can cause "bloat" by preventing garbage collection. Monitor for table bloat in PostgreSQL (pg_stat_user_tables.n_dead_tup) and run VACUUM regularly.
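As a concrete companion to that advice, the following minimal sketch (assuming psycopg2; the DSN handling and the 20% dead-tuple ratio are illustrative choices) finds bloated tables via pg_stat_user_tables and vacuums them:

import psycopg2

def vacuum_bloated_tables(dsn: str, dead_ratio_threshold: float = 0.2) -> None:
    """Find tables whose dead tuples exceed a fraction of live tuples and VACUUM them."""
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    with conn.cursor() as cur:
        cur.execute("""
            SELECT schemaname, relname, n_live_tup, n_dead_tup
            FROM pg_stat_user_tables
            WHERE n_dead_tup > n_live_tup * %s
            ORDER BY n_dead_tup DESC
        """, (dead_ratio_threshold,))
        for schema, table, live, dead in cur.fetchall():
            print(f"{schema}.{table}: {dead} dead vs {live} live tuples -> VACUUM")
            cur.execute(f'VACUUM ANALYZE "{schema}"."{table}"')
    conn.close()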
Each database system implements isolation slightly differently. Knowing these details helps you write portable code and understand vendor-specific behaviors.
PostgreSQL:
| Level | Snapshot Timing | Notes |
|---|---|---|
| READ COMMITTED | Start of each statement | Default; each statement sees committed changes |
| REPEATABLE READ | First query in transaction | True snapshot; serialization error if conflict |
| SERIALIZABLE | Same as REPEATABLE READ | Plus predicate locking for full isolation |
-- ==========================================
-- PostgreSQL Specifics
-- ==========================================

-- REPEATABLE READ can throw serialization error
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM accounts WHERE id = 1;
-- Another transaction modifies and commits
UPDATE accounts SET balance = 0 WHERE id = 1;
-- ERROR: could not serialize access due to concurrent update
-- Application must retry the transaction
ROLLBACK;

-- Check current isolation level
SHOW transaction_isolation;

-- ==========================================
-- MySQL (InnoDB) Specifics
-- ==========================================

-- MySQL REPEATABLE READ is the default
-- Transaction sees snapshot from first consistent read

START TRANSACTION;
SELECT * FROM accounts WHERE id = 1;  -- Establishes snapshot
-- Concurrent update commits
SELECT * FROM accounts WHERE id = 1;  -- Same value as first read

-- MySQL also has locking reads
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;          -- X lock
SELECT * FROM accounts WHERE id = 1 FOR SHARE;           -- S lock
SELECT * FROM accounts WHERE id = 1 LOCK IN SHARE MODE;  -- Legacy syntax

COMMIT;

-- ==========================================
-- SQL Server Specifics
-- ==========================================

-- SQL Server defaults to READ COMMITTED with locking (not MVCC)
-- For MVCC-style behavior, enable SNAPSHOT

-- Database level (one-time setup)
ALTER DATABASE mydb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE mydb SET READ_COMMITTED_SNAPSHOT ON;  -- Changes RC behavior

-- Transaction level
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM accounts WHERE id = 1;
-- Uses row versioning like PostgreSQL REPEATABLE READ
COMMIT;

-- ==========================================
-- Oracle Specifics
-- ==========================================

-- Oracle uses MVCC by default, even for READ COMMITTED
-- READ ONLY transaction gives strong snapshot

SET TRANSACTION READ ONLY;
-- All queries see data as of transaction start
-- Cannot modify data

-- Or use SERIALIZABLE
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

Important Implementation Differences:
| Aspect | PostgreSQL | MySQL InnoDB | SQL Server | Oracle |
|---|---|---|---|---|
| Default level | READ COMMITTED | REPEATABLE READ | READ COMMITTED | READ COMMITTED |
| MVCC default | Yes | Yes | No (configurable) | Yes |
| RR phantom prevention | Yes (MVCC) | Yes (gap locks) | No | N/A (no RR level; use SERIALIZABLE) |
| Serialization errors | Possible at RR | Rare | Possible | Possible |
| Locking reads | FOR UPDATE/SHARE | FOR UPDATE/SHARE | WITH (UPDLOCK) | FOR UPDATE |
Portability Considerations:
-- Portable pattern: set the isolation level explicitly
-- PostgreSQL: SET TRANSACTION goes immediately after BEGIN
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- MySQL: issue SET TRANSACTION ISOLATION LEVEL ... *before* START TRANSACTION
-- Oracle: SET TRANSACTION must be the first statement of the transaction
-- For SQL Server, also consider:
-- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
When writing cross-database code, test on each target database. Behavior can differ even with the same isolation level name.
REPEATABLE READ in PostgreSQL uses MVCC snapshots and can abort your transaction with a serialization error when a write conflicts. REPEATABLE READ in MySQL uses MVCC snapshots for plain reads plus gap/next-key locking for locking reads, and rarely aborts. Same name, different behavior. Always test your isolation requirements on your specific database version.
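Because PostgreSQL reports these conflicts as serialization errors (SQLSTATE 40001), application code should be prepared to retry. Below is a minimal retry sketch using psycopg2; the retry count, backoff values, and function names are illustrative assumptions:

import time
import psycopg2
from psycopg2 import errors  # errors.SerializationFailure requires psycopg2 >= 2.8

def run_with_retry(conn, txn_body, max_retries: int = 3):
    """Run txn_body(cursor) at REPEATABLE READ, retrying on serialization failures."""
    conn.set_session(isolation_level='REPEATABLE READ')
    for attempt in range(max_retries):
        try:
            with conn.cursor() as cur:
                result = txn_body(cur)  # psycopg2 opens the transaction implicitly
            conn.commit()
            return result
        except errors.SerializationFailure:
            conn.rollback()
            if attempt == max_retries - 1:
                raise
            time.sleep(0.05 * (2 ** attempt))  # simple exponential backoff

# Example body: read, compute, and write within one consistent snapshot
def apply_interest(cur):
    cur.execute("SELECT balance FROM accounts WHERE id = 1")
    balance = cur.fetchone()[0]
    cur.execute("UPDATE accounts SET balance = %s WHERE id = 1", (balance * 1.01,))
    return balance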
Beyond database configuration, application-level design can reduce the risk and impact of unrepeatable reads.
Pattern 1: Single Query When Possible
The simplest prevention is to read all related data in a single query:
-- BAD: Multiple queries can see different states
SELECT base_price FROM products WHERE id = 1;
-- Time passes, discount might change
SELECT discount_pct FROM promotions WHERE product_id = 1;
-- GOOD: Single query, atomic read
SELECT
p.base_price,
COALESCE(pr.discount_pct, 0) as discount_pct,
p.base_price * (1 - COALESCE(pr.discount_pct, 0)/100) as final_price
FROM products p
LEFT JOIN promotions pr ON p.id = pr.product_id
WHERE p.id = 1;
Pattern 2: Optimistic Concurrency Control (OCC)
Instead of locking, detect conflicts at write time:
123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147148149150151152153154155156157158159160
from dataclasses import dataclass
from typing import Any, Optional
import psycopg2

# ==========================================
# Pattern 2: Optimistic Concurrency Control
# ==========================================

@dataclass
class Account:
    id: int
    balance: float
    version: int  # For optimistic locking

def transfer_optimistic(conn, from_id: int, to_id: int, amount: float) -> bool:
    """
    Transfer using optimistic concurrency control.
    Uses version numbers to detect concurrent modifications.
    """
    max_retries = 3

    for attempt in range(max_retries):
        with conn.cursor() as cur:
            try:
                # READ COMMITTED is fine - we check version at update

                # Read current state with versions
                cur.execute("""
                    SELECT id, balance, version
                    FROM accounts
                    WHERE id IN (%s, %s)
                    ORDER BY id
                """, (from_id, to_id))

                accounts = {row[0]: Account(*row) for row in cur.fetchall()}

                if accounts[from_id].balance < amount:
                    return False  # Insufficient funds

                # Attempt updates with version check
                cur.execute("""
                    UPDATE accounts
                    SET balance = balance - %s, version = version + 1
                    WHERE id = %s AND version = %s
                """, (amount, from_id, accounts[from_id].version))

                if cur.rowcount == 0:
                    # Version changed - concurrent modification detected
                    conn.rollback()
                    continue  # Retry

                cur.execute("""
                    UPDATE accounts
                    SET balance = balance + %s, version = version + 1
                    WHERE id = %s AND version = %s
                """, (amount, to_id, accounts[to_id].version))

                if cur.rowcount == 0:
                    conn.rollback()
                    continue

                conn.commit()
                return True

            except Exception:
                conn.rollback()
                if attempt == max_retries - 1:
                    raise

    return False  # All retries failed

# ==========================================
# Pattern 3: Read-Your-Writes in Caching
# ==========================================

class ConsistentCache:
    """
    Cache that ensures read-your-writes consistency.
    After a write, subsequent reads see the written value.
    """
    def __init__(self, db_connection, cache_client):
        self.db = db_connection
        self.cache = cache_client
        self._local_writes = {}  # Track writes in current request

    def get(self, key: str) -> Optional[Any]:
        # First check local writes (read-your-writes)
        if key in self._local_writes:
            return self._local_writes[key]

        # Then check cache
        cached = self.cache.get(key)
        if cached is not None:
            return cached

        # Finally hit database
        value = self._fetch_from_db(key)
        self.cache.set(key, value, ttl=300)
        return value

    def set(self, key: str, value: Any) -> None:
        # Write to database
        self._write_to_db(key, value)
        # Track for read-your-writes
        self._local_writes[key] = value
        # Invalidate cache (or update it)
        self.cache.delete(key)

    def clear_local_writes(self):
        """Call at end of request to clear tracking."""
        self._local_writes = {}

# ==========================================
# Pattern 4: Idempotent Operations
# ==========================================

def process_payment_idempotent(conn, payment_id: str, amount: float) -> str:
    """
    Idempotent payment processing.
    Can be safely retried if unrepeatable read causes initial failure.
    """
    with conn.cursor() as cur:
        # Check if already processed
        cur.execute("""
            SELECT status, result FROM payments WHERE id = %s
        """, (payment_id,))
        existing = cur.fetchone()
        if existing:
            return existing[1]  # Return existing result

        # Insert with conflict handling
        cur.execute("""
            INSERT INTO payments (id, amount, status, result)
            VALUES (%s, %s, 'processing', NULL)
            ON CONFLICT (id) DO NOTHING
            RETURNING id
        """, (payment_id, amount))

        if cur.fetchone() is None:
            # Another process inserted first
            cur.execute("SELECT result FROM payments WHERE id = %s", (payment_id,))
            return cur.fetchone()[0]

        # We own this payment - process it
        result = f"processed_{payment_id}"
        cur.execute("""
            UPDATE payments SET status = 'complete', result = %s WHERE id = %s
        """, (result, payment_id))
        conn.commit()
        return result

The best approach combines multiple strategies: appropriate isolation levels + good query design + application-level checks. Each layer catches issues the others might miss.
Given multiple prevention strategies, how do you choose the right one? Here's a decision framework based on your specific requirements.
Step 1: Assess the Requirement
| Question | If Yes | If No |
|---|---|---|
| Do you read the same data twice in one transaction? | Need isolation | Lower risk |
| Do you read related data that must be consistent? | Need isolation | Lower risk |
| Is the time between reads significant (>100ms)? | Higher risk | Lower risk |
| Are there concurrent writes to this data? | Higher risk | Lower risk |
| Would inconsistency cause business harm? | Must prevent | Can accept risk |
Step 2: Choose Strategy Based on Profile
| Scenario | Recommended Strategy | Rationale |
|---|---|---|
| Low risk, high performance needed | READ COMMITTED + single queries | Minimize overhead, design around risk |
| Moderate risk, consistency matters | REPEATABLE READ | Good balance of consistency and performance |
| High risk, critical operations | SERIALIZABLE or explicit locks | Maximum protection |
| Will modify what is read | SELECT ... FOR UPDATE | Prevent lost updates too |
| Read-only reporting | REPEATABLE READ or READ ONLY | Consistent snapshot for reports |
| Long-running batch | Explicit snapshot + bounded transactions | Avoid holding resources too long |
Step 3: Consider Trade-offs
Isolation Strength ──────────────────────────────────────────────►
READ UNCOMMITTED → READ COMMITTED → REPEATABLE READ → SERIALIZABLE
◄───────────────────────────────────── Concurrency / Performance
Stronger isolation reduces concurrency and may increase:
- Lock waits and blocking between transactions
- Serialization failures that the application must retry
- Transaction latency under contention

Weaker isolation increases:
- The risk of unrepeatable reads and other anomalies
- The need for application-level checks and defensive query design
Step 4: Implementation Checklist
□ Identify all transactions needing protection
□ Choose appropriate isolation level per transaction type
□ Set isolation level explicitly (don't rely on defaults)
□ For locks: establish consistent lock ordering
□ Implement retry logic for serialization errors
□ Add monitoring for lock waits and retries
□ Test under concurrent load
□ Document isolation requirements in code
Different transactions in the same application may need different isolation levels. A report might need REPEATABLE READ while a simple status check can use READ COMMITTED. Choose per use case, not globally.
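One way to keep that per-use-case choice explicit in code is a small helper that applies the chosen level before each transaction. The sketch below uses psycopg2's set_session; the use-case names and the level mapping are illustrative assumptions, not a standard:

from contextlib import contextmanager
import psycopg2

# Illustrative mapping from use case to isolation level
ISOLATION_BY_USE_CASE = {
    "status_check": "READ COMMITTED",
    "monthly_report": "REPEATABLE READ",
    "funds_transfer": "SERIALIZABLE",
}

@contextmanager
def transaction(conn, use_case: str):
    """Open a transaction at the isolation level chosen for this use case."""
    # set_session must be called between transactions (no transaction in progress)
    conn.set_session(isolation_level=ISOLATION_BY_USE_CASE[use_case])
    cur = conn.cursor()
    try:
        yield cur
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        cur.close()

# Usage: a report runs on one consistent snapshot; a status check stays lightweight
# with psycopg2.connect("dbname=test") as conn:
#     with transaction(conn, "monthly_report") as cur:
#         cur.execute("SELECT SUM(balance) FROM accounts")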
Prevention works best when verified through monitoring and testing. Here's how to confirm your isolation protections are working.
PostgreSQL Monitoring:
-- Check active isolation levels
SELECT pid, state, wait_event, query,
(SELECT setting FROM pg_settings WHERE name = 'default_transaction_isolation')
FROM pg_stat_activity
WHERE state = 'active';
-- Monitor lock waits
SELECT blocked_locks.pid         AS blocked_pid,
       blocked_activity.query    AS blocked_query,
       blocking_locks.pid        AS blocking_pid,
       blocking_activity.query   AS blocking_query
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
  ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
  ON blocking_locks.locktype = blocked_locks.locktype
 AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
 AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
 AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity
  ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted
  AND blocking_locks.granted;
-- Snapshot conflicts (queries cancelled on standby servers)
SELECT datname, confl_snapshot AS snapshot_conflicts
FROM pg_stat_database_conflicts
WHERE confl_snapshot > 0;
-- Note: on a primary, serialization failures surface to the application as
-- SQLSTATE 40001 errors; track retry counts in application metrics or logs
import threading
import time
from concurrent.futures import ThreadPoolExecutor
import psycopg2

def test_unrepeatable_read_prevented():
    """
    Test that REPEATABLE READ prevents unrepeatable reads.
    Creates a race condition and verifies protection works.
    """
    conn1 = psycopg2.connect(database="test")
    conn2 = psycopg2.connect(database="test")

    # Setup
    with conn1.cursor() as cur:
        cur.execute("DROP TABLE IF EXISTS test_isolation")
        cur.execute("CREATE TABLE test_isolation (id INT PRIMARY KEY, value INT)")
        cur.execute("INSERT INTO test_isolation VALUES (1, 100)")
    conn1.commit()

    # psycopg2 begins transactions implicitly, so set the isolation level via the
    # driver; an explicit "BEGIN ISOLATION LEVEL ..." issued inside the implicit
    # transaction would be ignored with a warning
    conn1.set_session(isolation_level='REPEATABLE READ')

    read_values = []
    barrier = threading.Barrier(2)

    def reader():
        """Transaction that reads twice."""
        with conn1.cursor() as cur:
            # First read (establishes the REPEATABLE READ snapshot)
            cur.execute("SELECT value FROM test_isolation WHERE id = 1")
            first_value = cur.fetchone()[0]
            read_values.append(('first', first_value))

            # Signal writer to proceed
            barrier.wait()
            # Wait for writer
            time.sleep(0.1)
            barrier.wait()

            # Second read
            cur.execute("SELECT value FROM test_isolation WHERE id = 1")
            second_value = cur.fetchone()[0]
            read_values.append(('second', second_value))

        conn1.commit()

    def writer():
        """Transaction that modifies between reads."""
        barrier.wait()  # Wait for reader's first read
        with conn2.cursor() as cur:
            cur.execute("UPDATE test_isolation SET value = 999 WHERE id = 1")
        conn2.commit()
        barrier.wait()  # Signal reader to continue

    # Run concurrent transactions
    with ThreadPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(reader), executor.submit(writer)]
        for f in futures:
            f.result()

    # Verify: both reads should return the same value (100)
    first = next(v for t, v in read_values if t == 'first')
    second = next(v for t, v in read_values if t == 'second')
    assert first == second == 100, \
        f"Unrepeatable read detected! First: {first}, Second: {second}"
    print("✓ REPEATABLE READ correctly prevents unrepeatable reads")

    # Verify the update did happen (visible to new transaction)
    with conn1.cursor() as cur:
        cur.execute("SELECT value FROM test_isolation WHERE id = 1")
        final_value = cur.fetchone()[0]
        assert final_value == 999, "Update should be visible to new transactions"
    conn1.commit()
    print("✓ Update is visible to subsequent transactions")

    conn1.close()
    conn2.close()


def test_read_committed_allows_unrepeatable():
    """
    Verify that READ COMMITTED does allow unrepeatable reads.
    This confirms our understanding of the isolation level behavior.
    """
    conn1 = psycopg2.connect(database="test")
    conn2 = psycopg2.connect(database="test")

    # Setup
    with conn1.cursor() as cur:
        cur.execute("DELETE FROM test_isolation")
        cur.execute("INSERT INTO test_isolation VALUES (1, 100)")
    conn1.commit()

    # Read with READ COMMITTED
    conn1.set_session(isolation_level='READ COMMITTED')
    with conn1.cursor() as cur:
        cur.execute("SELECT value FROM test_isolation WHERE id = 1")
        first_value = cur.fetchone()[0]

        # Concurrent update
        with conn2.cursor() as cur2:
            cur2.execute("UPDATE test_isolation SET value = 888 WHERE id = 1")
        conn2.commit()

        # Second read
        cur.execute("SELECT value FROM test_isolation WHERE id = 1")
        second_value = cur.fetchone()[0]
    conn1.commit()

    # With READ COMMITTED, second read should see the update
    assert first_value == 100 and second_value == 888, \
        "READ COMMITTED should allow unrepeatable reads"
    print("✓ Confirmed: READ COMMITTED allows unrepeatable reads (as expected)")

    conn1.close()
    conn2.close()


if __name__ == "__main__":
    test_unrepeatable_read_prevented()
    test_read_committed_allows_unrepeatable()
    print("\nAll isolation tests passed!")

Metrics to Track:
| Metric | What It Indicates | Action Threshold |
|---|---|---|
| Lock wait time | Contention level | >100ms p95 investigate |
| Serialization failures/sec | Retry frequency | >10/sec tune isolation |
| Transaction retries | Application handling | >5% investigate |
| MVCC version count | Storage overhead | >3 versions/row vacuum |
| Long-running transactions | Potential holdups | >1 min alert |
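For the long-running transaction metric in particular, a small sketch like the following (assuming psycopg2 and an illustrative one-minute threshold) can feed an alerting system:

import psycopg2

def find_long_transactions(dsn: str, max_age_seconds: int = 60):
    """Return (pid, age, state, query snippet) for transactions open past the threshold."""
    conn = psycopg2.connect(dsn)
    with conn.cursor() as cur:
        cur.execute("""
            SELECT pid,
                   now() - xact_start AS xact_age,
                   state,
                   left(query, 80) AS query_snippet
            FROM pg_stat_activity
            WHERE xact_start IS NOT NULL
              AND now() - xact_start > make_interval(secs => %s)
            ORDER BY xact_start
        """, (max_age_seconds,))
        rows = cur.fetchall()
    conn.close()
    return rows

# Example: alert on anything holding a transaction open for more than a minute
# for pid, age, state, snippet in find_long_transactions("dbname=test"):
#     print(f"pid={pid} age={age} state={state} query={snippet}")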
Include isolation tests in your CI/CD pipeline. Run concurrent test scenarios with assertions that verify consistency. These catch regressions when someone changes isolation levels or query patterns.
We've covered a comprehensive toolkit for preventing unrepeatable reads. Let's consolidate the key strategies:
- Run multi-read transactions at REPEATABLE READ (or SERIALIZABLE for critical operations)
- Use SELECT ... FOR UPDATE / FOR SHARE when you need explicit control over what stays stable
- Rely on MVCC for non-blocking read consistency, and monitor the version bloat it creates
- Design applications defensively: single queries, optimistic concurrency, idempotent operations
- Choose the strategy per use case with the decision framework, then verify it with monitoring and concurrent tests
Module Complete: Unrepeatable Read Problem
You've now mastered the unrepeatable read problem:
- What it is: a transaction reads the same row twice and gets different values
- How it manifests in real scenarios and read inconsistency patterns
- Why it matters: the business impact of acting on inconsistent data
- How to prevent it: isolation levels, explicit locks, MVCC, and application design patterns
With this knowledge, you can design and implement database interactions that maintain data consistency even under heavy concurrent access.
Congratulations! You now have a complete understanding of the unrepeatable read problem—from its formal definition through to practical prevention strategies. You can identify at-risk scenarios, articulate business impact, and implement appropriate protections using isolation levels, locking, and application design patterns.