In 2012, Knight Capital Group lost $440 million in 45 minutes due to a software deployment issue that caused duplicate orders. In 2016, a MongoDB deployment at a startup lost user data because the replica lag wasn't accounted for—users were shown their data as saved, but it never actually persisted. In countless systems, 'eventual consistency' has led to double-spending, duplicate purchases, and conflicting state that required expensive reconciliation.
These aren't edge cases—they're the predictable consequences of applying weak consistency guarantees to problems that fundamentally require strong consistency. The question isn't whether you want linearizability—it's whether your problem demands it.
This page will help you identify when strong consistency is non-negotiable and understand the real-world consequences of choosing weaker alternatives.
By the end of this page, you will be able to identify use cases that fundamentally require linearizability, understand the specific failure modes that occur with weaker consistency, evaluate consistency requirements for your own systems, and make defensible decisions about where to pay the linearizability tax.
At its core, strong consistency is required whenever you need coordination—when multiple parties must agree on state before proceeding, and disagreement leads to incorrect or dangerous outcomes.
Not all operations require the same level of coordination:
| Operation Type | Coordination Need | Consistency Requirement | Example |
|---|---|---|---|
| Idempotent writes | None | Eventual sufficient | Logging events, updating 'last seen' timestamp |
| Commutative updates | Low | Eventual with merge | Incrementing counters (CRDTs), set unions |
| Conditional updates | Medium | Causal or stronger | Add to cart if item in stock |
| Mutual exclusion | High | Linearizable required | Acquiring a distributed lock |
| Unique constraints | High | Linearizable required | Creating unique username, reservation |
| Multi-party agreement | Highest | Strict serializability | Bank transfers, distributed transactions |
Linearizability is required whenever the correctness of your system depends on all participants seeing the same value at the same logical time.
If two nodes can safely see different values temporarily and eventually converge without breaking invariants, eventual consistency works. If disagreement—even momentary—leads to invariant violation, you need linearizability.
Question to ask: "If two servers process the same request simultaneously with stale data, what happens?"

If the answer is:

- "They both succeed, and we'll deduplicate later" → eventual consistency OK
- "They both succeed, but that violates a constraint" → linearizability REQUIRED
Many systems have hidden coordination requirements. 'Check then act' patterns, upserts with constraints, and compare-and-swap operations all require linearizability even though they may not look like obvious coordination problems at first glance.
Distributed locks are perhaps the clearest example of mandatory linearizability. When multiple processes compete for exclusive access to a resource, only one must succeed—and everyone must agree on who that is.
Consider a lock service without linearizability:
```
Process A (reads from Replica 1):
  lock_holder = NULL   // Stale read!
  → Acquires lock

Process B (reads from Replica 2):
  lock_holder = NULL   // Also stale!
  → Also acquires lock

Result: BOTH processes believe they hold the lock
        Protected resource is accessed concurrently
        Data corruption ensues
```
With linearizability:
```
Process A:
  compare_and_swap(lock_holder, NULL, A)
  → Linearization point: lock assigned to A
  → Returns SUCCESS

Process B:
  compare_and_swap(lock_holder, NULL, B)
  → Linearization point: AFTER A's operation
  → lock_holder = A (not NULL)
  → Returns FAILURE

Result: Only A holds the lock
        Mutual exclusion is guaranteed
```
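The compare-and-swap flow above can be sketched in runnable form. The `FakeLinearizableStore` below is an illustrative in-memory stand-in (its internal lock models the single linearization point); a production system would use a genuinely linearizable store such as etcd, ZooKeeper, or Consul.

```python
import threading

class FakeLinearizableStore:
    """In-memory stand-in for a linearizable KV store (illustrative only)."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()  # models the single linearization point

    def compare_and_swap(self, key, expected, new_value):
        # Atomically: check current value, and write only if it matches.
        with self._lock:
            if self._data.get(key) != expected:
                return False  # another writer linearized first
            self._data[key] = new_value
            return True

def acquire_lock(store, resource, owner):
    # Only one caller can transition the key from absent (None) to `owner`.
    return store.compare_and_swap(f"lock:{resource}", None, owner)

store = FakeLinearizableStore()
assert acquire_lock(store, "db", "A") is True   # A wins the race
assert acquire_lock(store, "db", "B") is False  # B observes A's write and fails
```

Because the check and the write happen inside one atomic step, the "both read NULL" interleaving from the broken trace is impossible by construction.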
1. Exactly-Once Job Processing
In a job processing system, multiple workers try to claim jobs:
```python
# This pattern REQUIRES linearizability
def claim_job(job_id, worker_id):
    # Must atomically check and set; `store` is a linearizable KV client
    result = store.compare_and_swap(
        key=f"job:{job_id}:claimed_by",
        expected=None,  # Unclaimed
        new_value=worker_id,
    )
    return result.success
```
Without linearizability, the same job can be processed multiple times—causing duplicate emails, double charges, or data inconsistencies.
2. Single-Writer Guarantee
Many distributed systems rely on a single-writer model:
- Database primary must be unique
- Kafka partition leader must be unique
- Shard owner in a distributed store must be unique
Linearizable consensus ensures exactly one winner, preventing split-brain scenarios that corrupt data.
3. Resource Quotas
Enforcing quotas requires atomic check-and-decrement:
```python
# `store`, QUOTA_EXCEEDED, and SUCCESS come from the surrounding system
def try_allocate(resource, amount):
    while True:
        current = store.linearizable_read(f"quota:{resource}")
        if current < amount:
            return QUOTA_EXCEEDED
        if store.compare_and_swap(f"quota:{resource}", current, current - amount):
            return SUCCESS
        # CAS failed due to concurrent modification; retry with a fresh read
```
Redis's Redlock algorithm attempts to provide distributed locking using multiple Redis instances. However, Martin Kleppmann's analysis showed it can fail under network delays, process pauses, and clock skew. True distributed locking requires a linearizable store like ZooKeeper, etcd, or Consul—not a quorum of eventually consistent instances.
Enforcing uniqueness—whether for usernames, email addresses, reservation IDs, or any other unique identifier—requires linearizability when concurrent inserts are possible.
Consider user registration with unique usernames:
```
// Without linearizability:
Replica A: "alice" available? → YES → Insert "alice"
Replica B: "alice" available? → YES → Insert "alice" (DUPLICATE!)

// Both checks see "alice" as available because neither has seen
// the other's write yet. Eventually they'll converge on... what?
// Two users named "alice"? One overwrites the other?
```
With eventual consistency, you have limited options for handling duplicates:

- Option 1: Last-Write-Wins — one registration silently overwrites the other; a user loses their account without warning
- Option 2: First-Write-Wins (if detectable) — requires reliable ordering metadata that eventually consistent systems rarely provide
- Option 3: Keep Both — pushes the conflict to manual resolution, and the uniqueness invariant is violated in the meantime
| Domain | Unique Constraint | Failure Mode Without Linearizability |
|---|---|---|
| User Management | Unique username/email | Duplicate accounts, login confusion |
| E-commerce | Unique order ID | Duplicate orders, double charges |
| Inventory | Unique reservation ID | Double-booking, overselling |
| Finance | Unique transaction ID | Duplicate transactions, reconciliation hell |
| Scheduling | Unique time slot booking | Double-booked appointments |
| File Systems | Unique file path | Conflicting files, data loss |
| DNS | Unique domain record | Conflicting DNS entries, misdirected traffic |
Many APIs use idempotency keys to prevent duplicate requests:
```
POST /payments
Idempotency-Key: abc123

{"amount": 100}
```
The server must ensure only one payment is created for abc123, even if the request is sent multiple times. This is precisely the unique constraint problem—and requires linearizable storage for the idempotency key mapping.
```python
# This MUST be linearizable
def create_payment(idempotency_key, payment_data):
    # Check-then-insert must be atomic
    claim = store.compare_and_swap(
        key=f"idempotency:{idempotency_key}",
        expected=None,
        new_value=PROCESSING,
    )
    if not claim.success:
        # Another request already claimed this key; return its result
        return get_cached_response(idempotency_key)
    # Safe: we hold the unique idempotency key
    result = process_payment(payment_data)
    store.write(f"idempotency:{idempotency_key}", result)
    return result
```
Even with linearizable locks, ensure downstream resources validate the lock holder. A 'fencing token' (monotonically increasing lock epoch) should be passed to resources, which reject requests from older epochs. This protects against delayed messages from previous lock holders.
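The fencing-token idea can be sketched as follows. `LockService` and `Resource` are hypothetical illustrations: the lock service hands out a monotonically increasing token with each grant, and the downstream resource rejects any request carrying a token older than the newest it has seen.

```python
import itertools

class LockService:
    """Grants locks with monotonically increasing fencing tokens (sketch)."""
    def __init__(self):
        self._counter = itertools.count(1)

    def acquire(self, client):
        # In a real system this grant would go through linearizable consensus.
        return next(self._counter)

class Resource:
    """Downstream resource that rejects writes from stale lock epochs."""
    def __init__(self):
        self._highest_seen = 0

    def write(self, token, value):
        if token < self._highest_seen:
            raise PermissionError("stale fencing token")
        self._highest_seen = token
        return value

svc = LockService()
res = Resource()
t1 = svc.acquire("A")     # A holds the lock with token 1, then pauses (GC)
t2 = svc.acquire("B")     # lease expires; B acquires with token 2
res.write(t2, "from B")   # accepted; resource now remembers token 2
try:
    res.write(t1, "from A")   # A wakes up, unaware it lost the lock
except PermissionError:
    pass                      # rejected: token 1 < 2, corruption avoided
```

The key design point is that the *resource*, not the old leader, enforces the epoch check, so even a process that never learns it was replaced cannot do damage.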
Financial systems are the canonical example of mandatory strong consistency. The cost of errors—double-spending, lost transactions, incorrect balances—is measured in lawsuits and regulatory penalties, not just unhappy users.
Consider a simple account with a non-negative balance constraint:
```
Account balance: $100
Transaction 1: Withdraw $80 (new balance: $20)
Transaction 2: Withdraw $80 (should fail: insufficient funds)
```

With eventual consistency:

```
Replica A sees balance = $100: Approve $80 withdrawal
Replica B sees balance = $100: Approve $80 withdrawal (stale!)

Result: Account balance = -$60
        Bank has given away $60 it doesn't have
```

With linearizability:

```
Transaction 1 linearizes at T1: balance 100 → 20
Transaction 2 linearizes at T2 > T1: sees balance = 20
Insufficient funds → Transaction 2 rejected

Result: Constraint maintained, bank solvent
```
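The linearizable version of this withdrawal can be sketched with a compare-and-swap retry loop. The `Account` class here is an illustrative single-process model; its `compare_and_swap` stands in for an atomic operation against a linearizable store.

```python
class Account:
    """Models a balance held in a linearizable store (illustrative sketch)."""
    def __init__(self, balance):
        self._balance = balance

    def read(self):
        return self._balance  # stands in for a linearizable read

    def compare_and_swap(self, expected, new):
        # Atomically replace the balance only if no one changed it meanwhile.
        if self._balance != expected:
            return False
        self._balance = new
        return True

def withdraw(account, amount):
    while True:
        current = account.read()
        if current < amount:
            return "INSUFFICIENT_FUNDS"   # constraint enforced before writing
        if account.compare_and_swap(current, current - amount):
            return "OK"                   # no concurrent withdrawal intervened
        # CAS failed: another withdrawal linearized first; retry with fresh value

acct = Account(100)
assert withdraw(acct, 80) == "OK"                   # balance 100 → 20
assert withdraw(acct, 80) == "INSUFFICIENT_FUNDS"   # sees 20, rejects
assert acct.read() == 20
```

Because each successful withdrawal linearizes against the exact balance it read, the negative-balance interleaving from the eventual-consistency trace cannot occur.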
Transfers between accounts require strict serializability (linearizability + serializability):
```
Transfer $100 from A to B:
1. Check A.balance >= 100
2. A.balance -= 100
3. B.balance += 100
```

These three operations must appear atomic.
Any consistency model weaker than strict serializability allows anomalies here: lost updates, intermediate states becoming visible (the $100 temporarily missing from both accounts, or present in both), and violations of the non-negative balance constraint.
Financial systems often have legal requirements for consistency:
| Operation | Consistency Need | Failure Consequence |
|---|---|---|
| Balance inquiry | Read-your-writes (minimum) | User sees stale balance, makes wrong decisions |
| Deposit | Linearizable | Deposits lost or duplicated |
| Withdrawal | Linearizable | Overdrafts, double-spending |
| Transfer | Strict serializable | Lost money, duplicate transactions, constraint violations |
| Interest calculation | Snapshot isolation | Incorrect interest, regulatory issues |
| End-of-day balancing | Linearizable | Failed reconciliation, audit failures |
Some systems use sagas (compensating transactions) instead of distributed transactions. However, sagas only provide eventual consistency for the business outcome—intermediate states are visible, and compensation can fail. For strict financial requirements, linearizable transactions are preferred despite the performance cost.
Leader election is a foundational building block for distributed systems. When multiple nodes compete to become leader, exactly one must win—and everyone must agree on who it is.
```
Before Partition:
  Node A = Follower
  Node B = Leader
  Node C = Follower

During Partition:
  [Node A] | [Node B] | [Node C]
         Network partition

  Node A: "I can't reach B, starting election..."
  Node A: "I am now leader!"     (minority partition)
  Node B: "I am still leader!"   (also minority)

Result: TWO leaders accepting conflicting writes
        Split-brain → data divergence → data loss
```
Linearizable leader election (via consensus) ensures that at most one leader exists per epoch, that a quorum agrees on who that leader is, and that nodes in a minority partition cannot elect themselves.
Proper leader election:

```
Node A: "RequestVote(epoch=2)"
Node B: "I'm leader in epoch 1, but can grant vote for epoch 2"
Node C: "Granting vote for epoch 2"

[Network Partition]

Node A has 2 votes (A, C) and meets the quorum of 2
Node A becomes leader in epoch 2

Node B has its own vote only (can't reach A, C)
Node B cannot form quorum
Node B steps down as leader

Result: Exactly one leader at a time
```
Most distributed systems use leader election internally:
- Databases with primary-replica: PostgreSQL and MySQL failover tooling electing a new primary
- Distributed storage: HDFS NameNode high availability, Ceph monitor quorum
- Coordination services: ZooKeeper (ZAB), etcd (Raft), Consul (Raft)
- Message queues: Kafka controller election, RabbitMQ quorum queues
All of these use linearizable consensus internally, even if they expose eventually consistent APIs to clients.
A common optimization is lease-based leadership:
```
Leader election with leases:
1. Node wins election, receives lease for T seconds
2. Node acts as leader for lease duration
3. Must renew lease before expiration
4. If renewal fails, step down

Linearizability requirement:
- Lease grant/renewal must be linearizable
- All nodes must agree on lease holder
- Clock skew must be bounded (NTP or better)
```
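The asymmetric lease timing mentioned above can be sketched as follows: the holder stops acting a safety margin *before* others consider the lease expired, absorbing bounded clock skew. The `Lease` class and the injected clock are illustrative assumptions, not a real API.

```python
import time

class Lease:
    """Leader lease with a safety margin for clock skew (illustrative sketch)."""
    def __init__(self, duration, skew_margin=1.0, clock=time.monotonic):
        self.duration = duration
        self.skew_margin = skew_margin
        self.clock = clock
        self.granted_at = None

    def grant(self):
        self.granted_at = self.clock()

    def holder_believes_valid(self):
        # Asymmetric timing: the holder subtracts the skew margin, so it
        # steps down before anyone else could elect a replacement.
        return self.clock() - self.granted_at < self.duration - self.skew_margin

    def others_consider_expired(self):
        return self.clock() - self.granted_at >= self.duration

# Simulated clock keeps the example deterministic
t = [0.0]
lease = Lease(duration=10, skew_margin=1.0, clock=lambda: t[0])
lease.grant()
assert lease.holder_believes_valid()        # t = 0: leader acts freely
t[0] = 9.5
assert not lease.holder_believes_valid()    # holder has already stepped down
assert not lease.others_consider_expired()  # but others still honor the lease
t[0] = 10.0
assert lease.others_consider_expired()      # only now can a new election start
```

The gap between 9.0 and 10.0 seconds is the safety window: no node considers itself leader during it, which is exactly what prevents two leaders from overlapping under bounded skew.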
Even with linearizable election, split-brain can occur if a leader experiences a long GC pause, process freeze, or hypervisor preemption. The leader may think it still holds the lease, but others have elected a new leader. Fencing tokens and asymmetric lease timing help mitigate this—always design for the case where old leaders don't know they've been replaced.
When evaluating whether your system requires linearizability, use this structured decision framework.
If disagreement is acceptable (eventually resolved), weaker consistency suffices. If disagreement violates an invariant, linearizability is required. A reliable tell is the 'check then act' pattern:
```
if (condition_is_true) {
    perform_action();  // Assumes condition still true
}
```
This pattern always requires linearizability: without it, the condition can change between the check and the act, and the action executes on a stale premise.
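The standard fix is to fold the check and the act into a single atomic compare-and-swap, so no window exists in which the condition can change. The `Flag` class below is a minimal illustrative model of a linearizable register.

```python
class Flag:
    """Minimal model of a linearizable register (illustrative sketch)."""
    def __init__(self):
        self.value = "READY"

    def compare_and_swap(self, expected, new):
        # Atomically: act only if the condition (value == expected) still holds.
        if self.value != expected:
            return False
        self.value = new
        return True

def start_job(flag):
    # Broken version (check-then-act, racy):
    #   if flag.value == "READY":
    #       flag.value = "RUNNING"
    # Fixed version: the check and the state change are one operation.
    return flag.compare_and_swap("READY", "RUNNING")

f = Flag()
assert start_job(f) is True   # first caller transitions READY → RUNNING
assert start_job(f) is False  # second caller's check fails atomically
```

Any store offering a linearizable conditional write (etcd transactions, ZooKeeper versioned setData, database `UPDATE ... WHERE` with the old value) supports the same transformation.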
If conflicts are resolvable (commutative counters, mergeable sets, acceptable last-write-wins), weaker models with conflict resolution work. If conflicts are not resolvable (unique constraints, mutual exclusion, non-negative balances), linearizability is required.
Watch for these patterns in your requirements: phrases like "exactly once," "at most one," "unique," "atomic," and "must never" almost always signal a need for linearizable coordination.
It's easier to relax consistency constraints after understanding your system than to strengthen them later. Start with linearizability for critical paths, measure the performance impact, and selectively relax to weaker models only where you can prove it's safe.
We've explored the critical use cases where linearizability isn't optional—where weaker consistency models lead to correctness violations, not just user inconvenience.
What's Next:
Now that we understand when linearizability is required, the final page explores systems that provide strong consistency—examining how production databases and coordination services implement these guarantees and their relative trade-offs.
You can now identify when strong consistency is required: distributed locking, unique constraints, financial operations, and leader election. You have a decision framework for evaluating consistency requirements and understand the consequences of choosing weaker guarantees where linearizability is needed.