Understanding unrepeatable reads becomes concrete through detailed examples. In this page, we'll trace through multiple real-world scenarios step by step, observing exactly when and how the anomaly occurs, what values each transaction sees, and what problems result.
Each scenario represents a common pattern in production systems—situations that database engineers and application developers encounter regularly. By working through these examples, you'll develop the intuition to recognize unrepeatable read risks in your own code.
By the end of this page, you will have worked through five detailed scenarios: banking balance checks, inventory reservation, pricing calculations, report generation, and configuration changes. Each scenario includes timeline diagrams, SQL traces, and analysis of the consequences.
The Situation:
A banking application allows customers to transfer money between their own accounts. The transfer logic: read the source account's balance, verify it covers the transfer amount, then debit the source and credit the destination.
The Setup:
-- Initial state
accounts:
| id | owner | balance |
|----|--------|--------|
| 1 | Alice | 1000 |
| 2 | Alice | 500 |
Alice wants to transfer $800 from Account 1 to Account 2. Simultaneously, an automatic payment of $300 debits Account 1.
| Time | T₁ (Transfer) | T₂ (Auto-payment) | Account 1 | Account 2 |
|---|---|---|---|---|
| t₀ | BEGIN | | $1000 | $500 |
| t₁ | SELECT balance FROM accounts WHERE id=1 | | $1000 | $500 |
| t₂ | → Sees $1000, proceeds with transfer | BEGIN | $1000 | $500 |
| t₃ | | UPDATE accounts SET balance=700 WHERE id=1 | $700* | $500 |
| t₄ | | COMMIT | $700 | $500 |
| t₅ | SELECT balance FROM accounts WHERE id=2 | | $700 | $500 |
| t₆ | → Sees $500 | | $700 | $500 |
| t₇ | UPDATE accounts SET balance=200 WHERE id=1 | | $200 | $500 |
| t₈ | UPDATE accounts SET balance=1300 WHERE id=2 | | $200 | $1300 |
| t₉ | COMMIT | | $200 | $1300 |

(* = not yet committed)
Analysis:
At t₁: T₁ reads Account 1's balance as $1000. Based on this, it decides the $800 transfer is allowed.
At t₄: T₂ commits, reducing Account 1 to $700. This is now the actual balance.
At t₇: T₁ subtracts $800 from the $1000 it read earlier and writes the result, $200. Both the decision and the write are based on a balance that no longer exists; the real funds at that moment were only $700.
The Problem: T₁ validated the transfer against a stale $1000 balance. The actual balance at decision time was $700, which does not cover an $800 transfer.
Result: the transfer goes through even though funds were $100 short, and the blind write of $200 also wipes out the committed $300 auto-payment. The bank has effectively allowed a $100 overdraft.
```sql
-- Session 1: Transfer transaction (problems shown)
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Step 1: Check source balance
SELECT balance FROM accounts WHERE id = 1;  -- Returns 1000

-- At this point, Session 2 runs:
--   UPDATE accounts SET balance = 700 WHERE id = 1;
--   COMMIT;
-- Account 1 is now 700, but Session 1 doesn't know

-- Step 2: Verify funds (using stale value!)
-- Application thinks: 1000 >= 800? Yes, proceed

-- Step 3: Check destination (for logging purposes)
SELECT balance FROM accounts WHERE id = 2;  -- Returns 500

-- Step 4: Execute transfer
UPDATE accounts SET balance = balance - 800 WHERE id = 1;
-- This runs against the current value: 700 - 800 = -100 (overdraft)

UPDATE accounts SET balance = balance + 800 WHERE id = 2;
-- 500 + 800 = 1300

COMMIT;

-- Fix: Use REPEATABLE READ or SELECT ... FOR UPDATE
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

SELECT balance FROM accounts WHERE id = 1;  -- Returns 1000
-- Session 2's update will now be invisible to this transaction,
-- OR Session 2 will block if we used SELECT ... FOR UPDATE

SELECT balance FROM accounts WHERE id = 2;  -- Returns 500

-- Now we have a consistent view. The transfer can proceed safely.
UPDATE accounts SET balance = balance - 800 WHERE id = 1;
UPDATE accounts SET balance = balance + 800 WHERE id = 2;

COMMIT;
-- The transaction commits, BUT if Session 2 had committed a conflicting
-- update first, PostgreSQL detects the conflict and aborts this transaction
```

In real banking systems, this type of bug can lead to overdrafts, incorrect fee calculations, regulatory violations, and customer disputes. Banks use stronger isolation levels and explicit locking for balance operations precisely to prevent these scenarios.
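The safe pattern boils down to making the balance check and the debit a single atomic statement, so no stale read can authorize the transfer. Here is a minimal sketch using Python's standard-library sqlite3 module (SQLite stands in for the production database and the state reflects the moment after the auto-payment committed; the schema mirrors the scenario above):

```python
import sqlite3

# In-memory stand-in for the accounts table; Account 1 already holds $700
# because the $300 auto-payment has committed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                 [(1, "Alice", 700), (2, "Alice", 500)])

# Guarded debit: the funds check lives in the WHERE clause, so validation
# and the write happen atomically in one statement.
cur = conn.execute(
    "UPDATE accounts SET balance = balance - ? WHERE id = ? AND balance >= ?",
    (800, 1, 800),
)
if cur.rowcount == 0:
    # 700 < 800, so no row matched: the transfer is rejected, not overdrafted
    print("transfer rejected: insufficient funds")
else:
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (800, 2))
conn.commit()
```

Because the check is part of the UPDATE itself, it is evaluated against whatever the balance is at write time, never against an earlier read.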
The Situation:
An e-commerce platform allows customers to add items to their cart and check out. During checkout, the system: reads the current stock, shows availability to the user, re-validates the stock after the user confirms, and then decrements it to reserve the items.
The Setup:
-- Initial state
products:
| id | name | stock | price |
|----|---------------|-------|-------|
| 42 | Limited Sneakers | 5 | 299 |
Three users (Alice, Bob, Carol) are all trying to purchase these limited-edition sneakers.
| Time | Alice (T₁) | Bob (T₂) | Carol (T₃) | Actual Stock |
|---|---|---|---|---|
| t₀ | BEGIN | | | 5 |
| t₁ | Read stock → 5 | BEGIN | | 5 |
| t₂ | Shows: '5 available' | Read stock → 5 | BEGIN | 5 |
| t₃ | User confirms qty=3 | Shows: '5 available' | Read stock → 5 | 5 |
| t₄ | Read stock → 5 | User confirms qty=3 | Shows: '5 available' | 5 |
| t₅ | stock=5-3=2, COMMIT | | User confirms qty=3 | 2 |
| t₆ | Success! | Read stock → 2 | | 2 |
| t₇ | | stock=2-3=-1 ❌ | Read stock → 2 | 2 |
| t₈ | | Error: Insufficient stock | stock=2-3=-1 ❌ | 2 |
| t₉ | | ROLLBACK | Error: Insufficient stock | 2 |
Analysis:
From User Perspective:
Bob and Carol had a frustrating experience: they were shown 5 units available, confirmed a purchase of 3, and only at the final validation step were told there was insufficient stock.
This is a classic unrepeatable read: the stock value changed between their initial read (5) and their validation read (2).
```sql
-- Problematic pattern (READ COMMITTED)
BEGIN;

-- Initial stock check (shown to user)
SELECT stock FROM products WHERE id = 42;  -- Returns 5

-- User interaction delay (filling payment form, etc.)
-- During this time, other transactions can commit

-- Final validation before purchase
SELECT stock FROM products WHERE id = 42;  -- Might return 2, 1, or 0!

-- Attempt to reserve
UPDATE products SET stock = stock - 3 WHERE id = 42 AND stock >= 3;
-- If affected rows = 0, the reservation failed

COMMIT;

-- Better Pattern 1: Pessimistic locking
BEGIN;

-- Lock the row immediately
SELECT stock FROM products WHERE id = 42 FOR UPDATE;
-- Now holds an exclusive lock; other transactions must wait

-- Validate
-- (if stock >= requested, proceed; else roll back with a user-friendly message)

UPDATE products SET stock = stock - 3 WHERE id = 42;
COMMIT;

-- Better Pattern 2: Atomic check-and-update
BEGIN;

-- Single atomic operation
UPDATE products SET stock = stock - 3
WHERE id = 42 AND stock >= 3
RETURNING stock + 3 AS previous_stock, stock AS new_stock;

-- If no rows are returned, stock was insufficient
-- The application can provide accurate feedback

COMMIT;

-- Better Pattern 3: Reservation with timeout
CREATE TABLE reservations (
  id SERIAL PRIMARY KEY,
  product_id INT REFERENCES products(id),
  quantity INT,
  reserved_at TIMESTAMP DEFAULT NOW(),
  expires_at TIMESTAMP,
  status VARCHAR(20) DEFAULT 'pending'
);

-- Reserve first, commit later
BEGIN;
SELECT stock FROM products WHERE id = 42 FOR UPDATE;

INSERT INTO reservations (product_id, quantity, expires_at)
VALUES (42, 3, NOW() + INTERVAL '10 minutes')
RETURNING id;

UPDATE products SET stock = stock - 3 WHERE id = 42;
COMMIT;

-- If the user cancels or the reservation expires, a cleanup job releases it
```

Good e-commerce systems show 'estimated availability' or '~5 remaining' to set expectations that inventory is approximate. Critical operations like final checkout always use locked reads or atomic updates to ensure accuracy.
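The atomic check-and-update pattern can be sketched in Python with the standard-library sqlite3 module. SQLite here is only a stand-in (it doesn't reproduce PostgreSQL's isolation levels), but the guarded UPDATE behaves the same way: of three buyers each wanting 3 of the 5 sneakers, only the first succeeds.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, stock INTEGER)")
conn.execute("INSERT INTO products VALUES (42, 'Limited Sneakers', 5)")

def reserve(qty):
    # Validation and decrement are one statement: no window between a
    # stock check and the write for another buyer to slip into.
    cur = conn.execute(
        "UPDATE products SET stock = stock - ? WHERE id = 42 AND stock >= ?",
        (qty, qty))
    conn.commit()
    return cur.rowcount == 1  # True if the reservation took effect

results = [reserve(3) for _ in range(3)]  # Alice, Bob, Carol each want 3
print(results)  # [True, False, False]: only Alice's reservation fits
```

Bob and Carol fail cleanly at the database, and the application can report "only 2 left" instead of attempting an impossible decrement.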
The Situation:
A ride-sharing app calculates fares based on current demand (surge pricing). The fare calculation: read the route's base fare, read the current surge multiplier, and multiply the two to produce the quote.
The Setup:
routes:
| id | origin | dest | base_fare |
|----|---------|---------|----------|
| 1 | Airport | Downtown| 25.00 |
surge_pricing:
| route_id | multiplier | updated_at |
|----------|------------|--------------------|
| 1 | 1.5 | 2024-01-15 17:00:00 |
A customer requests a ride. Meanwhile, the pricing engine updates the surge multiplier based on new demand data.
| Time | T₁ (Fare Calc) | T₂ (Surge Update) | Base | Surge | Expected Fare |
|---|---|---|---|---|---|
| t₀ | BEGIN - Calculate fare | | $25 | 1.5× | $37.50 |
| t₁ | Read base_fare → $25 | | $25 | 1.5× | $37.50 |
| t₂ | | BEGIN | $25 | 1.5× | $37.50 |
| t₃ | | UPDATE multiplier = 2.0 | $25 | 2.0×* | $50.00 |
| t₄ | | COMMIT | $25 | 2.0× | $50.00 |
| t₅ | Read surge → 2.0 | | $25 | 2.0× | $50.00 |
| t₆ | Calculate: $25 × 2.0 = $50 | | $25 | 2.0× | $50.00 |
| t₇ | Return fare = $50 | | $25 | 2.0× | $50.00 |
| t₈ | COMMIT | | $25 | 2.0× | $50.00 |

(* = not yet committed)
Analysis:
In this case, the transaction read data from two different database states (the base fare from state S₁, the surge from state S₂), yet the resulting fare ($50) happens to be a valid fare: it matches the post-update pricing.
But consider the reverse scenario:
| Time | T₁ (Fare Calc) | Base | Surge | Customer Sees |
|---|---|---|---|---|
| t₁ | Read surge → 2.0 | $25 | 2.0× | |
| t₂ | (Surge engine: UPDATE → 1.0, COMMIT) | $25 | 1.0× | |
| t₃ | Read base → $25 | $25 | 1.0× | |
| t₄ | Calculate: $25 × 2.0 = $50 | $25 | 1.0× | $50 quoted |
Now the customer is quoted $50 based on a surge that no longer exists. When they check the app later, it shows normal pricing (1.0×). They feel overcharged.
```sql
-- Problematic: Multiple reads can be inconsistent
BEGIN;

SELECT base_fare FROM routes WHERE id = 1;
-- Returns 25.00

-- Time passes, surge may update

SELECT multiplier FROM surge_pricing WHERE route_id = 1;
-- Returns 2.0 (but might have changed!)

-- Calculate in application: 25 * 2.0 = 50

COMMIT;

-- Solution 1: Single query with JOIN
BEGIN;

SELECT r.base_fare,
       s.multiplier,
       r.base_fare * s.multiplier * :distance AS calculated_fare,
       s.updated_at AS surge_timestamp
FROM routes r
JOIN surge_pricing s ON r.id = s.route_id
WHERE r.id = 1;

-- All values come from the same snapshot: consistent

COMMIT;

-- Solution 2: REPEATABLE READ for multi-step calculation
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

SELECT base_fare FROM routes WHERE id = 1;
SELECT multiplier FROM surge_pricing WHERE route_id = 1;

-- Even if surge updates between these queries,
-- we still see the values from the transaction's snapshot

COMMIT;

-- Solution 3: Lock and record (for audit trail)
BEGIN;

SELECT r.base_fare, s.multiplier, s.updated_at AS surge_version
FROM routes r
JOIN surge_pricing s ON r.id = s.route_id
WHERE r.id = 1
FOR SHARE;  -- Shared lock prevents concurrent updates

-- Calculate and record with version info
INSERT INTO fare_quotes (
  route_id, base_fare, surge_multiplier,
  surge_version, quoted_fare, created_at
) VALUES (
  1, 25.00, 2.0,
  '2024-01-15 17:00:00', 50.00, NOW()
);

COMMIT;
```

Smart pricing systems record the 'version' of pricing data used for each quote. If customers dispute charges, the system can show exactly what pricing was in effect when the quote was generated—even if it changed milliseconds later.
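The single-query approach can be sketched with Python's built-in sqlite3 module (SQLite stands in for the production database; table names and values mirror the scenario). Because both pricing inputs come from one statement, they are evaluated against a single database state, leaving no window between reads for a surge update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE routes (id INTEGER PRIMARY KEY, origin TEXT, dest TEXT, base_fare REAL);
CREATE TABLE surge_pricing (route_id INTEGER, multiplier REAL);
INSERT INTO routes VALUES (1, 'Airport', 'Downtown', 25.0);
INSERT INTO surge_pricing VALUES (1, 1.5);
""")

# One statement reads both pricing inputs and computes the fare, so the
# base fare and multiplier are guaranteed to come from the same state.
row = conn.execute("""
    SELECT r.base_fare, s.multiplier, r.base_fare * s.multiplier AS fare
    FROM routes r
    JOIN surge_pricing s ON r.id = s.route_id
    WHERE r.id = 1
""").fetchone()
print(row)  # (25.0, 1.5, 37.5)
```

Moving the multiplication into the query (rather than the application) also means the quoted fare is always arithmetically consistent with the two inputs recorded alongside it.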
The Situation:
A business intelligence system generates an end-of-day financial report. The report includes: total completed orders, total refunds, net revenue, and a breakdown of orders by payment method.
The Setup:
orders:
| id | amount | payment_method | status |
|-----|---------|----------------|----------|
| 101 | 150.00 | credit_card | completed |
| 102 | 75.00 | paypal | completed |
| ... | ... | ... | ... |
Total: $10,000 in completed orders
refunds:
| id | order_id | amount |
|----|----------|--------|
| 1 | 101 | 50.00 |
| ...| ... | ... |
Total: $500 in refunds
Expected net revenue: $10,000 - $500 = $9,500
During report generation, live transactions continue processing.
| Time | Report Transaction | Live Transaction | Orders Total | Refunds Total |
|---|---|---|---|---|
| t₀ | BEGIN report generation | | $10,000 | $500 |
| t₁ | SELECT SUM(amount) FROM orders | | $10,000 | $500 |
| t₂ | → Returns $10,000 | New order: $200 | $10,200 | $500 |
| t₃ | | COMMIT | $10,200 | $500 |
| t₄ | | Refund: $100 for order 102 | $10,200 | $600 |
| t₅ | | COMMIT | $10,200 | $600 |
| t₆ | SELECT SUM(amount) FROM refunds | | $10,200 | $600 |
| t₇ | → Returns $600 | | $10,200 | $600 |
| t₈ | Net = $10,000 - $600 = $9,400 | | $10,200 | $600 |
| t₉ | COMMIT | | $10,200 | $600 |
Analysis:
The Report is Wrong:
| Metric | Report Value | Correct Value | Error |
|---|---|---|---|
| Orders | $10,000 | $10,200 | -$200 |
| Refunds | $600 | $600 | ✓ |
| Net Revenue | $9,400 | $9,600 | -$200 |
The report shows $9,400 net revenue, but the correct end-of-day value is $9,600. The $200 order committed after orders were summed but before refunds were summed, so the report misses it on the orders side while still counting refunds that happened later. The two sums reflect two different database states, and their difference matches neither state.
Even Worse Scenario:
The new order committed at t₃ could have been immediately refunded at t₄. The report would then show orders of $10,000 (missing the new order) against refunds that include the refund of that very order: refunds for revenue the report never counted.
This makes refunds appear disproportionately high relative to orders—a concerning signal that doesn't reflect reality.
```sql
-- Problematic: Sequential queries
BEGIN;

SELECT SUM(amount) AS total_orders
FROM orders WHERE status = 'completed';
-- Concurrent inserts/updates can commit here

SELECT SUM(amount) AS total_refunds FROM refunds;
-- More concurrent changes can commit here

SELECT SUM(amount) AS total_orders, payment_method
FROM orders WHERE status = 'completed'
GROUP BY payment_method;
-- Each query sees a different database state

COMMIT;

-- Solution 1: REPEATABLE READ for a consistent snapshot
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- All queries now see the same snapshot
SELECT SUM(amount) AS total_orders
FROM orders WHERE status = 'completed';

SELECT SUM(amount) AS total_refunds FROM refunds;

SELECT SUM(amount) AS total_orders, payment_method
FROM orders WHERE status = 'completed'
GROUP BY payment_method;

-- Report timestamp for reference
SELECT NOW() AS report_snapshot_time;

COMMIT;

-- Solution 2: Explicit snapshot in PostgreSQL
BEGIN;

-- Record the snapshot point
SELECT txid_current_snapshot() AS snapshot;

SELECT SUM(amount) AS total_orders
FROM orders WHERE status = 'completed';

SELECT SUM(amount) AS total_refunds FROM refunds;

COMMIT;

-- Solution 3: Point-in-time reporting with temporal tables
-- (If using system-versioned temporal tables)

-- Query as of a specific timestamp
SELECT SUM(amount) AS total_orders
FROM orders FOR SYSTEM_TIME AS OF TIMESTAMP '2024-01-15 23:59:59'
WHERE status = 'completed';

SELECT SUM(amount) AS total_refunds
FROM refunds FOR SYSTEM_TIME AS OF TIMESTAMP '2024-01-15 23:59:59';

-- Both queries return data as of the same moment

-- Solution 4: Materialized snapshot
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- Create a temporary snapshot of the data
CREATE TEMP TABLE report_orders AS
  SELECT * FROM orders WHERE status = 'completed';

CREATE TEMP TABLE report_refunds AS
  SELECT * FROM refunds;

COMMIT;

-- Now run all report queries against the temp tables
-- The data is frozen; no inconsistency is possible
SELECT SUM(amount) FROM report_orders;
SELECT SUM(amount) FROM report_refunds;
-- etc.
```

Financial reports used for regulatory compliance, tax filings, or audits must use consistent snapshots. An inconsistent report could fail audits, trigger regulatory penalties, or misrepresent the company's financial position. Always use REPEATABLE READ or higher for financial reporting.
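The materialized-snapshot idea is easy to demonstrate with Python's sqlite3 module (SQLite as a stand-in database; the figures mirror a shrunken version of the scenario). Both source tables are frozen into temp tables in one transaction, and a later write cannot skew the report:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, amount REAL, status TEXT);
CREATE TABLE refunds (id INTEGER, amount REAL);
INSERT INTO orders VALUES (101, 150.0, 'completed'), (102, 75.0, 'completed');
INSERT INTO refunds VALUES (1, 50.0);
""")

# Freeze both tables inside a single transaction, then report only from
# the frozen copies.
with conn:  # commits (or rolls back) everything below as one unit
    conn.execute("CREATE TEMP TABLE report_orders AS "
                 "SELECT * FROM orders WHERE status = 'completed'")
    conn.execute("CREATE TEMP TABLE report_refunds AS SELECT * FROM refunds")

# A late order arriving mid-report has no effect on the frozen snapshot.
conn.execute("INSERT INTO orders VALUES (103, 200.0, 'completed')")

total_orders = conn.execute("SELECT SUM(amount) FROM report_orders").fetchone()[0]
total_refunds = conn.execute("SELECT SUM(amount) FROM report_refunds").fetchone()[0]
print(total_orders - total_refunds)  # 175.0, consistent with the snapshot
```

Every figure in the report now derives from one moment in time; the late $200 order belongs to the next report, not half of this one.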
The Situation:
A multi-tenant SaaS application stores customer configuration in the database. When processing a request: the system reads the customer's subscription tier, then looks up the features enabled for that tier, then looks up the tier's rate and storage limits.
The Setup:
subscriptions:
| customer_id | tier | expires_at |
|-------------|----------|--------------------|
| 42 | premium | 2024-12-31 23:59:59 |
tier_features:
| tier | feature_name | enabled |
|---------|---------------|--------|
| premium | advanced_api | true |
| premium | bulk_export | true |
| basic | advanced_api | false |
| basic | bulk_export | false |
tier_limits:
| tier | rate_limit | storage_gb |
|---------|------------|------------|
| premium | 10000 | 100 |
| basic | 1000 | 10 |
The customer's subscription is about to be downgraded from premium to basic due to failed payment.
| Time | Request Processing (T₁) | Billing System (T₂) | Customer Tier |
|---|---|---|---|
| t₀ | BEGIN - Process API request | | premium |
| t₁ | SELECT tier FROM subscriptions | | premium |
| t₂ | → tier = 'premium' | BEGIN - Downgrade | premium |
| t₃ | | UPDATE subscriptions SET tier='basic' | basic* |
| t₄ | | COMMIT | basic |
| t₅ | SELECT * FROM tier_features WHERE tier='premium' | | basic |
| t₆ | → advanced_api=true, bulk_export=true | | basic |
| t₇ | SELECT rate_limit FROM tier_limits WHERE tier='premium' | | basic |
| t₈ | → rate_limit = 10000 | | basic |
| t₉ | Allow request with premium limits | | basic |
| t₁₀ | COMMIT | | basic |

(* = not yet committed)
Analysis:
The request was processed as if the customer had premium tier: premium features (advanced_api, bulk_export) were enabled and the premium rate limit of 10,000 was applied, even though the subscription was basic by the time the request was authorized.
Why This Happens:
At t₁, the transaction read tier='premium'. Even though the subscription changed at t₄, subsequent reads used the originally-read tier value to query related tables.
This isn't a classic unrepeatable read (the tier value wasn't read twice), but it's a form of read inconsistency: the tier was 'premium' when read, but 'basic' when the features and limits were applied.
Worse Still:
If the request was long-running (batch job, large export), thousands of operations could be processed with premium access after the downgrade.
```sql
-- Problematic: Reading configuration in pieces
BEGIN;

SELECT tier INTO @tier FROM subscriptions WHERE customer_id = 42;
-- @tier = 'premium'

-- Downgrade commits here

SELECT * FROM tier_features WHERE tier = @tier;
-- Still queries with 'premium' because @tier is cached

SELECT rate_limit FROM tier_limits WHERE tier = @tier;

COMMIT;

-- Solution 1: Single joined query
BEGIN;

SELECT s.tier, s.expires_at,
       tf.feature_name, tf.enabled,
       tl.rate_limit, tl.storage_gb
FROM subscriptions s
JOIN tier_features tf ON s.tier = tf.tier
JOIN tier_limits tl ON s.tier = tl.tier
WHERE s.customer_id = 42;

-- All configuration read in one atomic query

COMMIT;

-- Solution 2: REPEATABLE READ
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

SELECT tier INTO @tier FROM subscriptions WHERE customer_id = 42;

-- Even if the tier changes in another transaction,
-- we continue to see 'premium' from our snapshot

SELECT * FROM tier_features WHERE tier = @tier;
SELECT rate_limit FROM tier_limits WHERE tier = @tier;

-- Consistent configuration throughout the transaction

COMMIT;

-- Solution 3: Lock the subscription during processing
BEGIN;

SELECT tier INTO @tier FROM subscriptions
WHERE customer_id = 42
FOR SHARE;  -- Blocks concurrent updates

-- Any attempt to downgrade now waits for this transaction

SELECT * FROM tier_features WHERE tier = @tier;
SELECT rate_limit FROM tier_limits WHERE tier = @tier;

-- Process request...

COMMIT;  -- The downgrade can proceed now

-- Solution 4: Versioned configuration
ALTER TABLE subscriptions ADD COLUMN config_version INT DEFAULT 1;

BEGIN;

SELECT tier, config_version INTO @tier, @version
FROM subscriptions WHERE customer_id = 42;

-- At the end of processing, verify the version hasn't changed
SELECT config_version INTO @current_version
FROM subscriptions WHERE customer_id = 42;

IF @version != @current_version THEN
  -- Configuration changed mid-request: abort or reprocess
  ROLLBACK;
ELSE
  COMMIT;
END IF;
```

Many applications cache configuration in memory. If database configuration changes, the cache must be invalidated. Without proper cache invalidation, you can have inconsistency even without concurrent database reads—the in-memory cache diverges from database state.
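The version-verification idea (Solution 4 above) can be sketched in Python with the standard-library sqlite3 module. SQLite stands in for the production database, and the concurrent billing downgrade is simulated inline where the comment marks the processing delay:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subscriptions (
  customer_id INTEGER PRIMARY KEY,
  tier TEXT,
  config_version INTEGER DEFAULT 1
);
INSERT INTO subscriptions VALUES (42, 'premium', 1);
""")

def process_request(customer_id):
    # Read tier plus its version at the start of the request
    tier, version = conn.execute(
        "SELECT tier, config_version FROM subscriptions WHERE customer_id = ?",
        (customer_id,)).fetchone()

    # ... long request processing; meanwhile the billing system downgrades
    # the customer (simulated here on the same connection) ...
    conn.execute(
        "UPDATE subscriptions SET tier = 'basic', "
        "config_version = config_version + 1 WHERE customer_id = ?",
        (customer_id,))

    # Optimistic check: re-read the version and reject the work if the
    # configuration moved underneath us.
    (current,) = conn.execute(
        "SELECT config_version FROM subscriptions WHERE customer_id = ?",
        (customer_id,)).fetchone()
    return "reprocess" if current != version else "commit"

print(process_request(42))  # "reprocess": the mid-request downgrade was caught
```

The request that raced with the downgrade is detected and reprocessed against the new basic-tier configuration instead of silently shipping premium access.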
Now that we've seen several scenarios, let's identify the common patterns that indicate unrepeatable read risk.
High-Risk Patterns:
| Scenario Type | Risk Level | Recommended Solution |
|---|---|---|
| Balance verification | High | SELECT ... FOR UPDATE or REPEATABLE READ |
| Inventory check | High | Atomic UPDATE with WHERE clause |
| Pricing calculation | Medium | Single JOIN query or REPEATABLE READ |
| Report generation | High | REPEATABLE READ with snapshot timestamp |
| Configuration lookup | Medium | Joined query or version verification |
| Audit trail reading | Low | REPEATABLE READ if cross-table |
Questions to Ask When Reviewing Code:
- Does this transaction read the same data more than once?
- Does this transaction read multiple related items that should be consistent?
- Is there meaningful processing time between reads?
- What would happen if a read value changed mid-transaction?
- Are there concurrent transactions that modify this data?
Not all unrepeatable reads are problematic. A social media feed that shows slightly different content between refreshes is fine. A bank balance that changes during a withdrawal is not. Evaluate the consequences in your specific domain.
The following SQL scripts can be run in two separate database sessions to observe unrepeatable reads in action.
Setup: Create Test Environment
```sql
-- ==========================================
-- Setup (run once)
-- ==========================================
CREATE TABLE demo_accounts (
  id INT PRIMARY KEY,
  name VARCHAR(50),
  balance DECIMAL(10,2)
);

INSERT INTO demo_accounts VALUES (1, 'Alice', 1000.00);
INSERT INTO demo_accounts VALUES (2, 'Bob', 500.00);

-- ==========================================
-- Session 1: Observe the unrepeatable read
-- Run each step and wait for Session 2 instructions
-- ==========================================

-- Step 1.1: Start a transaction with READ COMMITTED
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Step 1.2: First read
SELECT balance FROM demo_accounts WHERE id = 1;
-- Expected: 1000.00
-- << Pause here, go to Session 2, Steps 2.1-2.2 >>

-- Step 1.3: Second read (after Session 2 commits)
SELECT balance FROM demo_accounts WHERE id = 1;
-- Expected: 700.00 (DIFFERENT from the first read!)
-- This is an unrepeatable read

-- Step 1.4: Cleanup
ROLLBACK;

-- ==========================================
-- Session 2: Modify data between Session 1's reads
-- ==========================================

-- Step 2.1: After Session 1 runs Step 1.2, run this:
BEGIN;
UPDATE demo_accounts SET balance = 700.00 WHERE id = 1;

-- Step 2.2: Commit the change
COMMIT;
-- Now go back to Session 1 and run Step 1.3

-- ==========================================
-- Now repeat with REPEATABLE READ
-- ==========================================

-- Session 1: REPEATABLE READ test
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- First read
SELECT balance FROM demo_accounts WHERE id = 1;
-- Returns: 700.00 (current value)

-- << Session 2: UPDATE to 500.00 and COMMIT >>

-- Second read (after Session 2 commits)
SELECT balance FROM demo_accounts WHERE id = 1;
-- Still returns: 700.00 (same as the first read!)
-- No unrepeatable read occurs

COMMIT;

-- ==========================================
-- Session 2: Try to modify during REPEATABLE READ
-- ==========================================
BEGIN;
UPDATE demo_accounts SET balance = 500.00 WHERE id = 1;
COMMIT;
-- The update succeeds and commits,
-- but Session 1 continues to see 700.00 from its snapshot
```

Expected Observations:
| Test | Isolation Level | First Read | After Concurrent Update | Second Read | Unrepeatable? |
|---|---|---|---|---|---|
| 1 | READ COMMITTED | 1000.00 | Committed | 700.00 | Yes ⚠️ |
| 2 | REPEATABLE READ | 700.00 | Committed | 700.00 | No ✓ |
The demonstration clearly shows how isolation level affects read consistency.
Running these scripts in real database sessions builds intuition that no amount of reading can provide. Set up a PostgreSQL or MySQL instance and experiment with different isolation levels to see how they behave under concurrent access.
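If you'd rather drive both sessions from one script, here is a minimal Python sketch using two sqlite3 connections to the same temporary database file. In Python's sqlite3, a bare SELECT runs outside any explicit transaction, so each read sees the latest committed state, which is enough to reproduce Test 1's unrepeatable read at the application level (SQLite does not expose READ COMMITTED vs. REPEATABLE READ knobs, so this mirrors only the anomaly, not the isolation-level comparison):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file stand in for two sessions
tmp = tempfile.TemporaryDirectory()
path = os.path.join(tmp.name, "demo.db")
s1 = sqlite3.connect(path)
s2 = sqlite3.connect(path)

s1.execute("CREATE TABLE demo_accounts (id INTEGER PRIMARY KEY, balance REAL)")
s1.execute("INSERT INTO demo_accounts VALUES (1, 1000.0)")
s1.commit()

# Session 1, first read (each bare SELECT is its own implicit transaction)
first = s1.execute("SELECT balance FROM demo_accounts WHERE id = 1").fetchone()[0]

# Session 2 commits a change between Session 1's two reads
s2.execute("UPDATE demo_accounts SET balance = 700.0 WHERE id = 1")
s2.commit()

# Session 1, second read: a different value, i.e. an unrepeatable read
second = s1.execute("SELECT balance FROM demo_accounts WHERE id = 1").fetchone()[0]
print(first, second)  # 1000.0 700.0
```

The same two-connection structure works against PostgreSQL or MySQL via their Python drivers, where you can additionally set the isolation level per session and reproduce Test 2.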
We've worked through five detailed scenarios demonstrating unrepeatable reads in real-world contexts. Let's consolidate the key lessons:
- A value read once can be stale by the time it's used for a decision.
- Read-validate-act sequences must be atomic: lock the row, or put the check inside the UPDATE itself.
- Related values that must be consistent should be read in one query or one snapshot.
- Reports and multi-step calculations need REPEATABLE READ or a materialized snapshot.
- Not every unrepeatable read matters; judge by the consequences in your domain.
What's next:
Understanding the scenarios helps recognize the problem, but understanding the business impact helps prioritize solutions. In the next page, we'll explore the consequences of unrepeatable reads on business operations, customer experience, compliance, and technical debt.
You've now seen unrepeatable reads in action across multiple domains. You can recognize the patterns that indicate risk and understand the mechanics of how concurrent transactions create inconsistency. Next, we'll examine why these problems matter from a business perspective.