Imagine a busy bank with thousands of customers making transactions simultaneously. If each transaction had to wait for all others to complete, the bank would grind to a halt. But if transactions were processed carelessly in parallel, money would materialize from nowhere or vanish into the void.
Isolation levels define how database transactions interact with each other when running concurrently. They represent a carefully engineered spectrum of trade-offs: on one end, maximum consistency where each transaction sees the world as if it were the only one running; on the other end, maximum performance where transactions overlap freely but may see inconsistent data.
Understanding isolation levels is critical because the default level your database uses might not be appropriate for your use case, and choosing the wrong level leads to either hard-to-reproduce data bugs or unnecessary performance degradation.
By the end of this page, you will deeply understand each SQL standard isolation level, the anomalies they permit or prevent, how major databases implement them differently, and how to choose the right level for different scenarios. You'll be able to debug concurrency-related data bugs and make informed trade-offs between consistency and performance.
The fundamental tension in database concurrency is between serializability (perfect isolation) and performance (maximum parallelism).
Serializability means transactions behave as if they executed one at a time in some serial order. This provides the strongest consistency guarantee—you never observe impossible states. However, enforcing this requires significant coordination overhead: locks, blocking, or abort/retry cycles.
Performance wants transactions to execute in parallel as much as possible. The more parallelism, the more throughput the system can achieve. But uncontrolled parallelism leads to anomalies—situations where the observed data states couldn't have occurred in any serial execution.
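To make "anomaly" concrete, here is a minimal sketch of one such interleaving (a lost update), assuming a hypothetical `accounts` table where account 'A' starts with a balance of 1000 and each transaction deposits 100. No serial order of the two deposits could produce this result:

```
T1: BEGIN TRANSACTION
T1: SELECT balance FROM accounts WHERE id = 'A'         // Reads 1000
T2: BEGIN TRANSACTION
T2: SELECT balance FROM accounts WHERE id = 'A'         // Also reads 1000
T1: UPDATE accounts SET balance = 1100 WHERE id = 'A'   // 1000 + 100
T1: COMMIT
T2: UPDATE accounts SET balance = 1100 WHERE id = 'A'   // 1000 + 100, overwrites T1's deposit
T2: COMMIT
// Final balance: 1100. Either serial order would end at 1200, so one deposit is lost.
```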
Isolation levels exist because different applications have different tolerance for these anomalies. A banking application processing financial transfers might need strict serializability. A social media feed aggregator might tolerate slight inconsistencies for dramatically better performance.
| Characteristic | Strong Isolation | Weak Isolation |
|---|---|---|
| Consistency | All transactions see consistent state | May observe temporary inconsistencies |
| Anomalies | None possible | Various anomalies permitted |
| Performance | Lower throughput due to blocking | Higher throughput via parallelism |
| Complexity | Deadlock potential higher | Data bugs harder to reproduce |
| Use Cases | Financial, critical systems | Analytics, feeds, non-critical |
Before examining isolation levels, we need a thorough understanding of the anomalies they prevent (or permit). Each anomaly represents a specific way concurrent transaction execution can lead to inconsistent or unexpected results.
A dirty read occurs when a transaction reads data that has been written by another transaction that has not yet committed.
The danger: If the other transaction rolls back, you've read data that never officially existed.
Timeline Example:
```
T1: BEGIN TRANSACTION
T1: UPDATE accounts SET balance = 500 WHERE id = 'A'   // Was 1000
T2: BEGIN TRANSACTION
T2: SELECT balance FROM accounts WHERE id = 'A'
T2: --> Returns 500 (DIRTY READ!)
T1: ROLLBACK                                            // Balance restored to 1000
T2: // Thinks balance is 500, makes decisions based on data that never officially existed
T2: COMMIT
```
Real-world impact: a report or balance check built on the dirty value includes money from a transfer that is later rolled back, and downstream business logic (approving an order, triggering an alert) ends up acting on a state that never officially existed.
These anomalies don't happen on every transaction—they require specific timing of concurrent operations. This makes them particularly dangerous because they may only manifest under load, and they're extremely difficult to reproduce in testing. A system might work perfectly for months before hitting the exact conditions that trigger an anomaly.
The SQL standard defines four isolation levels, each preventing progressively more anomalies. Understanding these levels—and their limitations—is essential for any database developer.
| Isolation Level | Dirty Read | Non-Repeatable Read | Phantom Read |
|---|---|---|---|
| READ UNCOMMITTED | Possible ❌ | Possible ❌ | Possible ❌ |
| READ COMMITTED | Prevented ✓ | Possible ❌ | Possible ❌ |
| REPEATABLE READ | Prevented ✓ | Prevented ✓ | Possible ❌ |
| SERIALIZABLE | Prevented ✓ | Prevented ✓ | Prevented ✓ |
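The dirty read was illustrated above. As a rough sketch of the other two anomalies in the table, assuming the same hypothetical `accounts` table plus an `orders` table, a non-repeatable read and a phantom read look like this:

```
// Non-repeatable read (possible at READ COMMITTED)
T1: BEGIN TRANSACTION
T1: SELECT balance FROM accounts WHERE id = 'A'            // Returns 1000
T2: UPDATE accounts SET balance = 500 WHERE id = 'A'
T2: COMMIT
T1: SELECT balance FROM accounts WHERE id = 'A'            // Returns 500: same query, different answer
T1: COMMIT

// Phantom read (possible below SERIALIZABLE in the SQL standard)
T1: BEGIN TRANSACTION
T1: SELECT COUNT(*) FROM orders WHERE status = 'PENDING'   // Returns 5
T2: INSERT INTO orders (status) VALUES ('PENDING')
T2: COMMIT
T1: SELECT COUNT(*) FROM orders WHERE status = 'PENDING'   // Returns 6: a new row "appeared" mid-transaction
T1: COMMIT
```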
READ UNCOMMITTED is the lowest isolation level. Transactions can see uncommitted changes from other transactions.
Behavior: queries may return rows written by transactions that have not yet committed (and may still roll back), so every anomaly described above is possible.
Use cases (rare): approximate monitoring queries or rough aggregate counts where speed matters far more than accuracy.
In practice: Many databases don't even support this level (PostgreSQL treats it as READ COMMITTED). It's rarely appropriate for business logic.
```sql
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- This might read uncommitted data
SELECT SUM(amount) FROM transactions WHERE date = CURRENT_DATE;
```
The SQL standard defines isolation levels in terms of which anomalies they prevent, but database vendors implement these levels using different mechanisms, leading to different actual behaviors. Never assume all databases are the same.
| Database | Default Level | READ COMMITTED | REPEATABLE READ | SERIALIZABLE |
|---|---|---|---|---|
| PostgreSQL | READ COMMITTED | MVCC snapshot per statement | MVCC snapshot per transaction (= Snapshot Isolation) | SSI (abort on conflict) |
| MySQL/InnoDB | REPEATABLE READ | MVCC snapshot per statement | MVCC snapshot per transaction | Gap locking + 2PL |
| SQL Server | READ COMMITTED | Locking (or MVCC with RCSI) | Locking | Locking (2PL) |
| Oracle | READ COMMITTED | MVCC snapshot per statement | Not supported (the next level up is SERIALIZABLE) | Snapshot isolation + write-conflict detection |
```sql
-- PostgreSQL: Check and set isolation level
SHOW default_transaction_isolation;  -- Usually 'read committed'

-- Set for current transaction
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... your operations ...
COMMIT;

-- PostgreSQL's REPEATABLE READ is actually Snapshot Isolation
-- It prevents phantoms (unlike SQL standard REPEATABLE READ)
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM orders WHERE status = 'PENDING';  -- Returns 5 rows

-- Another transaction inserts a PENDING order and commits

SELECT * FROM orders WHERE status = 'PENDING';  -- Still returns the same 5 rows
-- (No phantoms visible in PostgreSQL REPEATABLE READ)
COMMIT;

-- Check for serialization failures (need retry logic!)
DO $$
BEGIN
  FOR i IN 1..3 LOOP
    BEGIN
      -- Your serializable transaction
      PERFORM * FROM accounts WHERE id = 'A' FOR UPDATE;
      -- ...
      RETURN;
    EXCEPTION WHEN serialization_failure THEN
      -- Retry
      RAISE NOTICE 'Retry attempt %', i;
    END;
  END LOOP;
END $$;
```

MySQL's REPEATABLE READ maintains consistent reads through MVCC, but writes (UPDATE, DELETE) operate on the CURRENT committed data, not the snapshot. This means your UPDATE might modify different data than your SELECT showed. Always use SELECT ... FOR UPDATE if you need to read-then-write with guaranteed consistency.
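A minimal sketch of that pitfall, assuming the same hypothetical `accounts` table under MySQL/InnoDB REPEATABLE READ (the figures are illustrative):

```sql
-- MySQL/InnoDB, REPEATABLE READ
BEGIN;
SELECT balance FROM accounts WHERE id = 'A';   -- Snapshot says 1000

-- Meanwhile, another transaction commits and the balance drops to 400

UPDATE accounts SET balance = balance - 500 WHERE id = 'A';
-- The UPDATE operates on the current committed value (400), not the snapshot (1000),
-- so the balance can go negative even though the earlier SELECT looked safe
COMMIT;

-- Safer read-then-write: lock the latest committed version before deciding
BEGIN;
SELECT balance FROM accounts WHERE id = 'A' FOR UPDATE;   -- Reads the current value and locks the row
UPDATE accounts SET balance = balance - 500 WHERE id = 'A';
COMMIT;
```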
Selecting the appropriate isolation level requires understanding your data consistency requirements, performance needs, and the specific behavior of your database.
| Scenario | Recommended Level | Reasoning |
|---|---|---|
| Simple CRUD operations | READ COMMITTED | Default is usually sufficient; operations are short |
| Reports/analytics queries | REPEATABLE READ | Consistent point-in-time view across the report |
| Read-modify-write patterns | REPEATABLE READ + SELECT FOR UPDATE | Prevents lost updates via explicit locking |
| Financial transfers | SERIALIZABLE | Maximum consistency; cannot tolerate any anomalies |
| Constraints spanning multiple rows | SERIALIZABLE | Prevents write skew that could violate invariants |
| High-volume reads | READ COMMITTED | Minimizes lock contention; accepts some inconsistency |
| Audit logging | SERIALIZABLE in separate tx | Must capture accurate state at time of action |
```java
@Service
public class IsolationExamples {

    /**
     * Simple CRUD - READ COMMITTED is fine
     * Operations are atomic; no consistency requirements across reads
     */
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public User updateUserEmail(UUID userId, String newEmail) {
        User user = userRepository.findById(userId)
            .orElseThrow(() -> new UserNotFoundException(userId));
        user.setEmail(newEmail);
        return userRepository.save(user);
    }

    /**
     * Financial transfer - SERIALIZABLE required
     * Cannot allow any anomalies; must appear to execute serially
     */
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public TransferResult transfer(UUID fromAccount, UUID toAccount, BigDecimal amount) {
        Account source = accountRepository.findByIdForUpdate(fromAccount)
            .orElseThrow();
        Account destination = accountRepository.findByIdForUpdate(toAccount)
            .orElseThrow();

        if (source.getBalance().compareTo(amount) < 0) {
            throw new InsufficientFundsException();
        }

        source.debit(amount);
        destination.credit(amount);

        accountRepository.save(source);
        accountRepository.save(destination);

        return TransferResult.success(source, destination);
    }

    /**
     * Report generation - REPEATABLE READ for consistent snapshot
     * All queries see the same point-in-time state
     */
    @Transactional(isolation = Isolation.REPEATABLE_READ, readOnly = true)
    public MonthlyReport generateMonthlyReport(YearMonth month) {
        // All these queries see the same snapshot
        BigDecimal totalRevenue = orderRepository.sumRevenueForMonth(month);
        long orderCount = orderRepository.countOrdersForMonth(month);
        List<TopProduct> topProducts = productRepository.findTopSellingForMonth(month);

        // If new orders are committed during this method,
        // we won't see them - the report remains consistent
        return new MonthlyReport(month, totalRevenue, orderCount, topProducts);
    }

    /**
     * Inventory reservation with explicit locking
     * READ COMMITTED + SELECT FOR UPDATE prevents lost updates
     */
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void reserveInventory(UUID productId, int quantity) {
        // FOR UPDATE acquires an exclusive lock, preventing concurrent modifications
        Inventory inventory = inventoryRepository.findByIdForUpdate(productId)
            .orElseThrow();

        if (inventory.getAvailable() < quantity) {
            throw new InsufficientInventoryException();
        }

        inventory.reserve(quantity);
        inventoryRepository.save(inventory);
    }

    /**
     * On-call check with SERIALIZABLE to prevent write skew
     */
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void goOffCall(UUID doctorId) {
        long currentOnCall = doctorRepository.countOnCall();

        if (currentOnCall <= 1) {
            throw new BusinessRuleException(
                "Cannot go off-call: at least one doctor must remain on-call"
            );
        }

        Doctor doctor = doctorRepository.findById(doctorId).orElseThrow();
        doctor.setOnCall(false);
        doctorRepository.save(doctor);

        // Without SERIALIZABLE, two doctors could simultaneously pass the check
        // and both go off-call, violating the invariant
    }
}
```

If you're unsure, start with a stronger isolation level and measure performance. It's easier to safely weaken isolation after confirming no anomalies are possible than to debug subtle data corruption from too-weak isolation after it's in production.
Understanding how databases implement isolation helps you predict behavior and avoid surprises. There are two main approaches: lock-based and Multi-Version Concurrency Control (MVCC).
```sql
-- Explicit locking supplements isolation levels

-- SELECT FOR UPDATE: Exclusive lock, blocks other writers AND FOR UPDATE readers
BEGIN;
SELECT * FROM accounts WHERE id = 'A' FOR UPDATE;
-- Other transactions trying to UPDATE this row will BLOCK
-- Other transactions with FOR UPDATE on this row will BLOCK
-- Other transactions with plain SELECT may proceed (with MVCC)
UPDATE accounts SET balance = balance - 100 WHERE id = 'A';
COMMIT;

-- SELECT FOR SHARE: Shared lock (PostgreSQL also offers the weaker FOR KEY SHARE)
BEGIN;
SELECT * FROM orders WHERE id = '123' FOR SHARE;
-- Other transactions can also read FOR SHARE
-- Other transactions trying to UPDATE or DELETE will BLOCK
COMMIT;

-- NOWAIT: Fail immediately if lock cannot be acquired
BEGIN;
SELECT * FROM accounts WHERE id = 'A' FOR UPDATE NOWAIT;
-- Throws an error immediately if another transaction holds the lock
-- Useful for try-lock patterns
COMMIT;

-- SKIP LOCKED: Skip rows that are locked by other transactions
BEGIN;
SELECT * FROM job_queue
WHERE status = 'PENDING'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- Gets the next unlocked job, even if some are locked by other workers
-- Great for queue processing without blocking
COMMIT;
```

MVCC databases need to clean up old versions that are no longer needed. PostgreSQL uses VACUUM, MySQL/InnoDB uses a purge thread. Long-running transactions can cause version bloat because old versions can't be cleaned up while they might still be needed for some transaction's snapshot.
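For contrast with the lock examples above, here is a rough sketch of MVCC behavior (PostgreSQL-style, same hypothetical `accounts` table): the reader proceeds without blocking because it reads an older row version.

```sql
-- Session 1
BEGIN;
UPDATE accounts SET balance = 500 WHERE id = 'A';   -- Writes a new row version; the old version (1000) is retained
-- (not yet committed)

-- Session 2, running concurrently
BEGIN;
SELECT balance FROM accounts WHERE id = 'A';        -- Returns 1000 from the old version; does NOT block
COMMIT;

-- Session 1
COMMIT;
-- New snapshots now see 500; the old version lingers until no open snapshot
-- can still need it, after which VACUUM (or InnoDB's purge thread) reclaims it
```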
Beyond choosing isolation levels, specific patterns can prevent anomalies more surgically.
```java
/**
 * Pattern 1: Preventing Lost Updates with Optimistic Locking
 * Uses a version column to detect concurrent modifications
 */
@Entity
public class Account {
    @Id
    private UUID id;
    private BigDecimal balance;

    @Version  // JPA manages this automatically
    private Long version;
}

@Transactional
public void updateAccount(UUID accountId, BigDecimal newBalance) {
    Account account = repository.findById(accountId).orElseThrow();
    account.setBalance(newBalance);
    repository.save(account);  // Throws OptimisticLockException if version changed
}

// With retry logic:
public void updateAccountWithRetry(UUID accountId, BigDecimal delta) {
    int maxRetries = 3;
    for (int i = 0; i < maxRetries; i++) {
        try {
            doUpdateAccount(accountId, delta);
            return;
        } catch (OptimisticLockException e) {
            if (i == maxRetries - 1) throw e;
            // Retry with fresh data
        }
    }
}

// Note: for Spring's @Transactional to take effect, doUpdateAccount must be invoked
// through the proxy (e.g., from a separate bean); self-invocation of a private method
// bypasses transaction management
@Transactional
private void doUpdateAccount(UUID accountId, BigDecimal delta) {
    Account account = repository.findById(accountId).orElseThrow();
    account.setBalance(account.getBalance().add(delta));
    repository.save(account);
}

/**
 * Pattern 2: Preventing Lost Updates with Pessimistic Locking
 * SELECT FOR UPDATE acquires the lock immediately
 */
@Transactional
public void updateAccountPessimistic(UUID accountId, BigDecimal delta) {
    // Lock acquired at read time; other transactions wait
    Account account = repository.findByIdForUpdate(accountId).orElseThrow();
    account.setBalance(account.getBalance().add(delta));
    repository.save(account);
}

// Repository method:
@Query("SELECT a FROM Account a WHERE a.id = :id")
@Lock(LockModeType.PESSIMISTIC_WRITE)
Optional<Account> findByIdForUpdate(@Param("id") UUID id);

/**
 * Pattern 3: Atomic Update (best for simple increments)
 * Single statement = atomic, no read-modify-write race
 */
@Transactional
public void updateAccountAtomic(UUID accountId, BigDecimal delta) {
    // This is a SINGLE atomic operation - no lost update possible
    repository.incrementBalance(accountId, delta);
}

// Repository method:
@Modifying
@Query("UPDATE Account a SET a.balance = a.balance + :delta WHERE a.id = :id")
void incrementBalance(@Param("id") UUID id, @Param("delta") BigDecimal delta);

/**
 * Pattern 4: Preventing Write Skew with a Materialized Conflict
 * Add a row that must be locked, forcing serialization
 */
@Entity
public class OnCallSlot {
    @Id
    private String id = "SINGLETON";  // Only one row exists
    private int doctorsOnCall;
}

@Transactional(isolation = Isolation.REPEATABLE_READ)
public void goOffCallWithMaterializedLock(UUID doctorId) {
    // Lock the slot row - forces serialization of on-call changes
    OnCallSlot slot = slotRepository.findForUpdate("SINGLETON");

    if (slot.getDoctorsOnCall() <= 1) {
        throw new BusinessRuleException("Someone must be on call");
    }

    Doctor doctor = doctorRepository.findById(doctorId).orElseThrow();
    doctor.setOnCall(false);
    doctorRepository.save(doctor);

    slot.decrementDoctorsOnCall();
    slotRepository.save(slot);
}
```

Isolation levels are a critical aspect of transaction management that directly impacts both correctness and performance. Let's consolidate the key insights:

- The SQL standard's four levels are defined by the anomalies they permit: dirty reads, non-repeatable reads, and phantoms, with lost updates and write skew as further hazards at weaker levels.
- Defaults and implementations differ across databases (PostgreSQL, SQL Server, and Oracle default to READ COMMITTED; MySQL/InnoDB to REPEATABLE READ), so never assume identical behavior when switching engines.
- Stronger isolation buys consistency at the cost of throughput, and SERIALIZABLE typically requires retry logic for serialization failures.
- Targeted patterns such as SELECT FOR UPDATE, optimistic versioning, single-statement atomic updates, and materialized conflicts can prevent specific anomalies without raising the isolation level everywhere.
What's next:
Even with correct isolation and boundaries, transactions can fail. The next page covers handling transaction failures—how to detect failures, implement retry logic, deal with deadlocks, and ensure your application gracefully recovers from inevitable database errors.
You now have a deep understanding of isolation levels, the anomalies they address, how major databases implement them differently, and patterns to prevent specific concurrency issues. This knowledge is essential for building systems that maintain data integrity under concurrent load.