Two bank customers access their shared family account at the exact same moment. One is depositing $500; the other is withdrawing $300. The account has $1,000. What should the final balance be?
Mathematically, the answer is obvious: $1,000 + $500 - $300 = $1,200. But in a database system, without proper controls, the following disaster could occur:
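One interleaving that produces the problem (reconstructed here for illustration; the exact timing is an assumption, but the pattern matches the outcome described below):

```
Initial: balance = 1000

Time   Transaction A (withdraw 300)        Transaction B (deposit 500)
─────  ─────────────────────────────       ─────────────────────────────
t1     SELECT balance  -- reads 1000
t2                                         SELECT balance  -- reads 1000
t3                                         UPDATE balance = 1000 + 500 = 1500
t4                                         COMMIT  -- balance is 1500
t5     UPDATE balance = 1000 - 300 = 700
       -- based on the stale read of 1000
t6     COMMIT  -- balance is 700
```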
Final balance: $700. The $500 deposit vanished because Transaction A overwrote it with stale data. This is the lost update problem—one of several concurrency anomalies that isolation prevents.
By the end of this page, you will understand how databases manage concurrent transactions, the spectrum of concurrency anomalies that can occur, and how isolation levels trade correctness for performance. You'll learn to choose the right isolation level for your use case and recognize when stronger guarantees are necessary.
Isolation is the ACID property that defines how and when the effects of one transaction become visible to other concurrent transactions. At its strongest, isolation creates the illusion that each transaction executes in complete solitude, as if no other transactions exist.
Formal Definition:
A database system exhibits isolation when the result of executing concurrent transactions is the same as if those transactions had been executed serially—one after another, in some order.
This property is also called serializability: concurrent execution produces results equivalent to some serial ordering. The database doesn't actually run transactions one at a time (that would be too slow), but it ensures that the observable outcomes match a possible serial execution.
Serializability doesn't mean transactions run sequentially—it means they APPEAR to run sequentially. In reality, operations interleave for performance. The database uses locking, timestamps, or validation to ensure that despite this interleaving, no anomalies occur that couldn't happen in a serial execution.
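To make "equivalent to some serial order" concrete, here is a small Python sketch (the helper names are ours, not from any database) that enumerates every serial ordering of the opening deposit/withdraw example and checks an observed outcome against them:

```python
from itertools import permutations

def run_serial(initial_balance, txns):
    """Apply transactions one at a time, in the given order."""
    balance = initial_balance
    for txn in txns:
        balance = txn(balance)
    return balance

deposit_500 = lambda b: b + 500
withdraw_300 = lambda b: b - 300

# Every serial ordering of the two transactions from the opening example:
serial_outcomes = {run_serial(1000, order)
                   for order in permutations([deposit_500, withdraw_300])}
print(serial_outcomes)  # {1200}: both serial orders agree

# The lost-update outcome ($700) matches no serial order,
# so that interleaving is not serializable.
print(700 in serial_outcomes)  # False
```

Any execution whose result appears in `serial_outcomes` is serializable for this workload; any other result proves an anomaly occurred.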
Why Isolation is Challenging:
Concurrency is essential for database performance. A system serving thousands of users cannot make each one wait for every other transaction to complete. But concurrency introduces the possibility of interference: one transaction observing another's half-finished work, or two transactions silently overwriting each other's updates.
Isolation mechanisms prevent these forms of interference—or at least define which ones are acceptable for a given workload.
Before understanding isolation levels, we must understand the problems they prevent. These are the concurrency anomalies—situations where concurrent execution produces results that could not occur in any serial execution.
Detailed Example: Dirty Read
```
Timeline showing a dirty read:

Time   Transaction A                       Transaction B
─────  ─────────────────────────────       ─────────────────────────────
t1     BEGIN
t2                                         BEGIN
t3                                         UPDATE accounts
                                           SET balance = balance + 1000
                                           WHERE account_id = 'A'
                                           -- Balance now 2000 (uncommitted)
t4     SELECT balance FROM accounts
       WHERE account_id = 'A'
       -- DIRTY READ: sees 2000
       -- (uncommitted data)
t5     -- Makes business decision
       -- based on 2000
t6                                         ROLLBACK
                                           -- Balance reverts to 1000
t7     -- Transaction A's decision was
       -- based on data that never existed!

Result: A acted on phantom data. Depending on the action, this could cause
real-world problems (approving a loan, shipping goods, etc.)
```
Detailed Example: Lost Update
```
Timeline showing a lost update:

Initial: account balance = 1000

Time   Transaction A                       Transaction B
─────  ─────────────────────────────       ─────────────────────────────
t1     BEGIN                               BEGIN
t2     SELECT balance FROM accounts
       WHERE account_id = 'A'
       -- Reads 1000
t3                                         SELECT balance FROM accounts
                                           WHERE account_id = 'A'
                                           -- Reads 1000
t4     UPDATE accounts
       SET balance = 1000 - 300 = 700
       WHERE account_id = 'A'
t5     COMMIT  -- Balance is 700
t6                                         UPDATE accounts
                                           SET balance = 1000 + 500 = 1500
                                           WHERE account_id = 'A'
t7                                         COMMIT  -- Balance is 1500

Expected: 1000 - 300 + 500 = 1200
Actual:   1500 (Transaction A's withdrawal was lost)

The $300 withdrawal simply vanished because B overwrote A's changes
using a stale read of the original balance.
```
Write skew is particularly insidious because each individual transaction appears correct. Example: a hospital has two on-call doctors, and the invariant is "at least one doctor must remain on call." Two transactions simultaneously check that two doctors are on call, then each removes one doctor. Each transaction is individually correct; together, they leave zero doctors on call. Write skew violates invariants that span multiple rows.
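The on-call doctors scenario can be simulated in a few lines of Python (a hypothetical sketch, not a real database: each "transaction" checks the invariant against its own private snapshot, then writes back without seeing the other's change):

```python
on_call = {"alice": True, "bob": True}

def begin_snapshot():
    return dict(on_call)  # private, consistent view at transaction start

def go_off_call(doctor, snapshot):
    # Invariant check: at least one other doctor must remain on call
    if sum(snapshot.values()) >= 2:
        on_call[doctor] = False  # write based on the (now stale) check

snap_a = begin_snapshot()  # both transactions start concurrently
snap_b = begin_snapshot()
go_off_call("alice", snap_a)  # each check passes in isolation
go_off_call("bob", snap_b)

print(sum(on_call.values()))  # 0: the invariant is violated
```

Neither write touched a row the other read-modified, so snapshot-based isolation levels below Serializable will not detect the conflict.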
Full serializability prevents all anomalies but comes with significant performance costs (more locking, more aborted transactions, less concurrency). Most databases offer multiple isolation levels, each allowing certain anomalies in exchange for better performance.
The SQL standard defines four isolation levels, from weakest to strongest:
| Isolation Level | Dirty Read | Non-Repeatable Read | Phantom Read | Lost Update | Write Skew |
|---|---|---|---|---|---|
| Read Uncommitted | Possible | Possible | Possible | Possible | Possible |
| Read Committed | Prevented | Possible | Possible | Possible | Possible |
| Repeatable Read | Prevented | Prevented | Possible* | Prevented | Possible |
| Serializable | Prevented | Prevented | Prevented | Prevented | Prevented |
The SQL standard defines these levels by which anomalies they prevent, but database implementations vary. PostgreSQL's Repeatable Read actually prevents phantom reads. MySQL's Repeatable Read uses gap locking that also prevents many phantoms. Oracle doesn't have a true Read Uncommitted. Always test your specific database's behavior.
Level 1: Read Uncommitted
The Wild West of isolation. A transaction can see uncommitted changes from other transactions. Almost never used in practice because dirty reads cause unpredictable behavior.
Use case: Some read-only analytics on transactional tables where approximate results are acceptable and stale/uncommitted data won't cause harm.
Level 2: Read Committed
The most common default. A transaction only sees data committed before each statement begins. Each statement gets a fresh view of committed data, but that view can change between statements in the same transaction.
Use case: General-purpose OLTP workloads where individual statements need current data but cross-statement consistency isn't critical.
```
-- Read Committed behavior example
-- Transaction A's view can change between statements

-- Transaction A                         -- Transaction B
BEGIN;
SET TRANSACTION ISOLATION LEVEL
  READ COMMITTED;

SELECT balance FROM accounts
WHERE account_id = 1;
-- Returns: 1000
                                         BEGIN;
                                         UPDATE accounts SET balance = 1500
                                         WHERE account_id = 1;
                                         COMMIT;

SELECT balance FROM accounts
WHERE account_id = 1;
-- Returns: 1500 (different!)
-- This is allowed at Read Committed

COMMIT;
```
Level 3: Repeatable Read
A transaction sees a consistent snapshot of committed data as of the transaction's start time (in snapshot-based implementations) or holds locks on all read data (in lock-based implementations). The same query returns the same results throughout the transaction.
Use case: Reporting that requires consistent totals across multiple queries; multi-statement operations where intermediate decisions depend on earlier reads.
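The snapshot-based variant can be sketched in Python (a hypothetical illustration with invented names, not a real storage engine): the transaction copies a consistent view at its start and serves all reads from that view.

```python
import copy

# Toy "database": one table, one account.
database = {"accounts": {1: 1000}}

class SnapshotTransaction:
    """Reads come from a snapshot taken at BEGIN, not from live data."""
    def __init__(self, db):
        self.snapshot = copy.deepcopy(db)  # consistent view at start

    def read_balance(self, account_id):
        return self.snapshot["accounts"][account_id]

txn = SnapshotTransaction(database)
first_read = txn.read_balance(1)   # 1000

database["accounts"][1] = 1500     # another transaction commits a change

second_read = txn.read_balance(1)  # still 1000: the read is repeatable
print(first_read, second_read)     # 1000 1000
```

Real MVCC engines achieve the same effect without copying the database, by keeping multiple row versions (covered below).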
Level 4: Serializable
The gold standard. Transactions behave as if executed serially. All anomalies are prevented. In practice, databases achieve this through strict two-phase locking, serializable snapshot isolation, or optimistic concurrency control with validation.
Use case: Any operation where correctness is paramount: financial transactions, inventory management, anything involving invariants across multiple rows or tables.
Selecting the appropriate isolation level requires weighing your application's correctness requirements against its performance constraints.
Many applications can achieve correctness at Read Committed by using SELECT ... FOR UPDATE. This locks the selected rows, preventing other transactions from modifying them until you commit. It's a surgical way to get strong isolation where needed without the overhead of Serializable everywhere.
```sql
-- Using SELECT FOR UPDATE to prevent lost updates at Read Committed

-- Without FOR UPDATE: vulnerable to lost update
BEGIN;
SELECT balance FROM accounts WHERE account_id = 1;  -- Gets 1000
-- Another transaction could modify this row right now!
UPDATE accounts SET balance = balance - 300 WHERE account_id = 1;
COMMIT;

-- With FOR UPDATE: protected from concurrent modification
BEGIN;
SELECT balance FROM accounts WHERE account_id = 1 FOR UPDATE;
-- Row is now locked; other transactions must wait
-- Safe to read-modify-write
UPDATE accounts SET balance = balance - 300 WHERE account_id = 1;
COMMIT;
-- Lock released; other transactions can proceed
```
Databases use various mechanisms to enforce isolation. Understanding these mechanisms helps explain performance characteristics and failure modes.
| Mechanism | Readers Block Writers? | Writers Block Readers? | Abort Rate | Best For |
|---|---|---|---|---|
| Two-Phase Locking | Yes (for writes) | Yes | Low (waits instead) | Write-heavy, low contention |
| MVCC (Read Committed) | No | No | N/A | Read-heavy, tolerates non-repeatable reads |
| MVCC (Snapshot) | No | No | Medium (on conflict) | Mixed workloads, consistent reads |
| SSI (Serializable) | No | No | Higher (on write conflicts) | Correctness-critical, moderate contention |
MVCC in Detail:
In MVCC systems like PostgreSQL:
- Every row version carries two hidden fields: xmin (the transaction that created this version) and xmax (the transaction that deleted/replaced this version)
- An UPDATE never modifies a row in place: the old version gets xmax = current transaction, and a new version is created
- A version is visible to a transaction if its xmin was committed before this transaction started (or is this transaction)
- ...and its xmax is null or was not committed before this transaction started

MVCC creates multiple versions of rows. Old versions that no transaction needs anymore must be cleaned up (vacuumed in PostgreSQL, purged in MySQL). If cleanup can't keep up with the update rate, table bloat occurs. This is a trade-off for the concurrency benefits MVCC provides.
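The visibility rule above can be expressed as a small Python function (a simplified sketch with invented names; real engines also track in-progress transactions, subtransactions, and hint bits):

```python
def is_visible(version, current_txn, committed_before_start):
    """Simplified MVCC visibility check for one row version."""
    # Created by a transaction we can see: ourselves, or one that
    # committed before our transaction started.
    created_visible = (
        version["xmin"] == current_txn
        or version["xmin"] in committed_before_start
    )
    # Not deleted from our point of view: no deleter, or the deleter
    # is neither us nor a transaction that committed before we started.
    not_deleted = (
        version["xmax"] is None
        or (version["xmax"] != current_txn
            and version["xmax"] not in committed_before_start)
    )
    return created_visible and not_deleted

committed = {5, 7}  # transactions committed before txn 9 started

live_row = {"xmin": 5, "xmax": None}  # created by txn 5, never deleted
old_row = {"xmin": 5, "xmax": 7}      # replaced by txn 7 before we began

print(is_visible(live_row, 9, committed))  # True
print(is_visible(old_row, 9, committed))   # False
```

Each transaction evaluates this check against every version of a row and sees exactly one version (or none), which is how writers avoid blocking readers.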
When using Serializable isolation (or Repeatable Read in some databases), the database may detect that concurrent transactions cannot be serialized without anomalies. Rather than allowing the anomaly, it aborts one of the transactions with a serialization failure.
This is not an error—it's the isolation mechanism working correctly. The application must be prepared to retry the transaction.
```python
import logging
import random
import time

import psycopg2
from psycopg2 import errors

MAX_RETRIES = 3
RETRY_DELAY_MS = 100

class InsufficientFundsError(Exception):
    pass

def execute_with_retry(operation_func):
    """
    Execute a database operation with automatic retry on serialization
    failure. Essential for the Serializable isolation level.
    """
    retries = 0
    while retries < MAX_RETRIES:
        conn = get_db_connection()  # connection factory, defined elsewhere
        try:
            conn.set_isolation_level(
                psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE
            )
            result = operation_func(conn)
            conn.commit()
            return result
        except errors.SerializationFailure:
            # Transaction was aborted due to a concurrent conflict.
            # This is expected behavior at the Serializable level.
            conn.rollback()
            retries += 1
            if retries < MAX_RETRIES:
                # Exponential backoff with jitter
                delay = RETRY_DELAY_MS * (2 ** retries) + random.uniform(0, 50)
                logging.info(
                    f"Serialization failure, retry {retries}/{MAX_RETRIES} "
                    f"after {delay:.0f}ms"
                )
                time.sleep(delay / 1000)
            else:
                logging.error("Max retries exceeded for serializable transaction")
                raise
        except Exception:
            conn.rollback()
            raise
        finally:
            conn.close()
    raise Exception("Transaction failed after max retries")

# Example usage
def transfer_funds(conn, from_account, to_account, amount):
    cursor = conn.cursor()
    # At Serializable, we don't need FOR UPDATE—the database
    # ensures serializable behavior automatically
    cursor.execute(
        "SELECT balance FROM accounts WHERE account_id = %s",
        (from_account,)
    )
    from_balance = cursor.fetchone()[0]
    if from_balance < amount:
        raise InsufficientFundsError()
    cursor.execute(
        "UPDATE accounts SET balance = balance - %s WHERE account_id = %s",
        (amount, from_account)
    )
    cursor.execute(
        "UPDATE accounts SET balance = balance + %s WHERE account_id = %s",
        (amount, to_account)
    )
    return True

# Run with automatic retry
execute_with_retry(
    lambda conn: transfer_funds(conn, 'savings', 'checking', 500)
)
```
Don't fear serialization failures—embrace them. They mean the database is correctly protecting you from anomalies. Design your application to expect them (idempotent operations, retry logic).
A few retries are far cheaper than debugging data corruption from concurrency bugs.
Different database systems implement isolation levels with varying behavior. Understanding these differences is essential for portable applications.
| Database | Default Level | Serializable Implementation | Notable Behavior |
|---|---|---|---|
| PostgreSQL | Read Committed | SSI (MVCC-based, may abort) | Repeatable Read prevents phantoms; Serializable is true serializable |
| MySQL (InnoDB) | Repeatable Read | Gap locking + traditional locking | Repeatable Read uses consistent snapshot; may have some phantom prevention |
| Oracle | Read Committed | Requires manual locking (SELECT FOR UPDATE) | No true Serializable; 'Serializable' is actually snapshot isolation |
| SQL Server | Read Committed | Lock-based or snapshot (configurable) | Has READ_COMMITTED_SNAPSHOT option for MVCC-like behavior |
| SQLite | Serializable | Database-level locking | Simple model; all transactions are effectively serializable |
Oracle's Serializable isolation level is actually Snapshot Isolation, which allows write skew anomalies. If you need true serializability in Oracle, you must use explicit locking (SELECT FOR UPDATE) on all related rows. This is a common source of bugs when porting applications.
```sql
-- PostgreSQL: Set isolation level for a transaction
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- All operations in this transaction use Serializable isolation
COMMIT;

-- Or set the per-session default
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- MySQL: Set isolation level
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Or set globally / per session
SET GLOBAL TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- SQL Server: Set isolation level
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Or use the SNAPSHOT option (requires database-level config)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- Oracle: "Serializable" (actually snapshot)
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```
We've explored isolation as the ACID property that manages concurrent transaction interference, from the anomalies it prevents to the isolation levels and implementation mechanisms that prevent them.
What's Next:
Isolation ensures correctness during execution. Durability ensures correctness after execution ends. The final ACID property guarantees that once a transaction commits, its effects persist even through system crashes, power failures, and hardware disasters. We'll explore how databases achieve this seemingly impossible guarantee.
You now understand isolation as the property that makes concurrent transactions safe. You know the anomalies that can occur, the isolation levels that prevent them, and how to choose appropriately for your workload. Next, we explore Durability—the guarantee that committed data survives failures.