Understanding CRDT types is foundational, but the real mastery comes from recognizing when and how to apply them. This page bridges theory and practice by examining production use cases where CRDTs excel—and some where they're deliberately avoided.
As you'll see, CRDTs aren't just academic curiosities. They power some of the world's most demanding distributed systems: real-time collaboration tools, global-scale databases, offline-first mobile apps, and multi-datacenter deployments serving billions of users.
By the end of this page, you will understand production CRDT applications including collaborative editing, shopping carts, activity feeds, geo-distributed counters, offline-first apps, and gaming. You'll learn the specific CRDT choices, implementation patterns, and trade-offs in each domain.
Collaborative editing—multiple users simultaneously editing the same document—is perhaps the most demanding CRDT application. Users expect sub-100ms latency, perfect convergence, and no data loss.
The Challenge:
When Alice and Bob both type in a shared document, their edits must be merged automatically. But text editing is a moving target: every insert or delete shifts the numeric positions of later characters, so concurrent edits at the "same" index can refer to different places on different replicas.
The CRDT Solution:
Sequence CRDTs (RGA, YATA, Logoot) assign each character a unique, stable identifier. Position is determined by these identifiers, not numerical indices—so concurrent inserts have deterministic relative ordering.
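To make the identifier idea concrete, here is a minimal sketch in the spirit of Logoot (the fractional-position scheme and all names are illustrative, not any real library's API):

```typescript
// Each character carries a stable identifier: a position plus the replica
// that created it. Position never changes, unlike an array index.
type CharId = { pos: number; replica: string };
type Char = { id: CharId; value: string };

// Deterministic total order: position first, replica id breaks ties.
function compareIds(a: CharId, b: CharId): number {
  if (a.pos !== b.pos) return a.pos - b.pos;
  return a.replica < b.replica ? -1 : a.replica > b.replica ? 1 : 0;
}

// Every replica renders by sorting on identifiers, so all converge.
function render(chars: Char[]): string {
  return [...chars]
    .sort((a, b) => compareIds(a.id, b.id))
    .map((c) => c.value)
    .join("");
}

// Alice and Bob concurrently insert between positions 1 and 2:
const doc: Char[] = [
  { id: { pos: 1, replica: "alice" }, value: "H" },
  { id: { pos: 2, replica: "alice" }, value: "i" },
  { id: { pos: 1.5, replica: "alice" }, value: "a" }, // Alice's insert
  { id: { pos: 1.5, replica: "bob" }, value: "b" },   // Bob's concurrent insert
];
// render(doc) === "Habi" on every replica, regardless of arrival order.
```

Real sequence CRDTs use more compact identifier encodings than floats (fractional positions exhaust precision), but the core invariant is the same: identifiers are stable and totally ordered.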
```
## Collaborative Editing Architecture

EDITOR APPLICATION
│
├── User Interface
│     Document View (rendered text)
│         ▲ (renders from CRDT state)
│
├── Local CRDT State
│     - Sequence of (id, char, deleted) tuples
│     - Cursor positions as CRDT elements
│     - Formatting as range annotations
│         │                  │
│     Local Ops          Remote Ops
│         ▼                  ▼
└── Sync Engine
      - Delta extraction
      - Compression/batching
      - Retry/reconnection
          │
          │ WebSocket / WebRTC
          ▼
SYNC SERVER (optional)
  - Relays operations between clients
  - Persists document state
  - Handles presence (who's online, cursor positions)
  - NO conflict resolution logic needed — CRDTs handle it

## Key Libraries

Yjs (JavaScript)
├── YATA algorithm for sequences
├── Awareness protocol for presence
├── Multiple network providers (WebSocket, WebRTC, y-indexeddb)
└── Bindings: ProseMirror, Quill, Monaco, CodeMirror

Automerge (Rust/JavaScript)
├── RGA-based sequences
├── Full CRDT types (maps, counters, text)
├── Binary sync protocol
└── Strong TypeScript support
```

Earlier collaboration tools (Google Docs v1, Etherpad) used Operational Transformation (OT), which requires a central server to order and transform operations. CRDTs enable true peer-to-peer collaboration and simpler server logic—the server just relays, never transforms. Most new collaboration projects choose CRDTs.
Shopping carts are the classic CRDT example from Amazon's Dynamo paper. Users expect to add items from any device, during any network condition, and never lose their selections.
The Problem:
A user adds "iPhone" to their cart on their phone while commuting (offline). Simultaneously, on their laptop at home, they add "AirPods" (online). When the phone reconnects, what should happen?
Traditional Approaches (Broken):
Last-write-wins keeps whichever device synced last and silently drops the other item; locking the cart for exclusive access requires exactly the connectivity the offline phone doesn't have.
CRDT Solution:
Model the cart as an OR-Set<{item, quantity}> or Map<itemId, PN-Counter>. Add from any replica, merge automatically, get the union of all additions.
```typescript
/**
 * Shopping Cart as CRDT
 *
 * Using OR-Map with PN-Counter values for quantities.
 * Supports: add item, remove item, update quantity.
 */
interface CartItem {
  productId: string;
  name: string;
  price: number;
}

interface CartQuantity {
  P: Map<string, number>; // Additions
  N: Map<string, number>; // Removals
}

interface ShoppingCart {
  // OR-Set semantics for item presence
  itemTags: Map<string, Set<string>>; // productId -> unique add tags
  // Item metadata
  items: Map<string, CartItem>;
  // PN-Counter for quantities
  quantities: Map<string, CartQuantity>;
  replicaId: string;
  counter: number;
}

const CartOps = {
  create: (replicaId: string): ShoppingCart => ({
    itemTags: new Map(),
    items: new Map(),
    quantities: new Map(),
    replicaId,
    counter: 0,
  }),

  addItem: (
    cart: ShoppingCart,
    item: CartItem,
    quantity: number = 1
  ): ShoppingCart => {
    const newCounter = cart.counter + 1;
    const tag = `${cart.replicaId}:${newCounter}`;

    // Add tag for OR-Set presence
    const newTags = new Map(cart.itemTags);
    const existingTags = newTags.get(item.productId) ?? new Set();
    newTags.set(item.productId, new Set([...existingTags, tag]));

    // Store item metadata
    const newItems = new Map(cart.items);
    newItems.set(item.productId, item);

    // Initialize or increment quantity
    const newQuantities = new Map(cart.quantities);
    const existingQty = newQuantities.get(item.productId) ?? {
      P: new Map([[cart.replicaId, 0]]),
      N: new Map([[cart.replicaId, 0]]),
    };
    const newP = new Map(existingQty.P);
    newP.set(cart.replicaId, (newP.get(cart.replicaId) ?? 0) + quantity);
    newQuantities.set(item.productId, { ...existingQty, P: newP });

    return {
      ...cart,
      itemTags: newTags,
      items: newItems,
      quantities: newQuantities,
      counter: newCounter,
    };
  },

  removeItem: (cart: ShoppingCart, productId: string): ShoppingCart => {
    // OR-Set remove: clear all tags
    const newTags = new Map(cart.itemTags);
    newTags.delete(productId);
    return { ...cart, itemTags: newTags };
  },

  updateQuantity: (
    cart: ShoppingCart,
    productId: string,
    delta: number
  ): ShoppingCart => {
    const newQuantities = new Map(cart.quantities);
    const qty = newQuantities.get(productId);
    if (!qty) return cart;

    if (delta > 0) {
      const newP = new Map(qty.P);
      newP.set(cart.replicaId, (newP.get(cart.replicaId) ?? 0) + delta);
      newQuantities.set(productId, { ...qty, P: newP });
    } else {
      const newN = new Map(qty.N);
      newN.set(cart.replicaId, (newN.get(cart.replicaId) ?? 0) + Math.abs(delta));
      newQuantities.set(productId, { ...qty, N: newN });
    }
    return { ...cart, quantities: newQuantities };
  },

  getQuantity: (cart: ShoppingCart, productId: string): number => {
    const qty = cart.quantities.get(productId);
    if (!qty) return 0;
    const pSum = Array.from(qty.P.values()).reduce((a, b) => a + b, 0);
    const nSum = Array.from(qty.N.values()).reduce((a, b) => a + b, 0);
    return Math.max(0, pSum - nSum); // Never negative
  },

  isPresent: (cart: ShoppingCart, productId: string): boolean => {
    const tags = cart.itemTags.get(productId);
    return tags !== undefined && tags.size > 0;
  },

  merge: (a: ShoppingCart, b: ShoppingCart): ShoppingCart => {
    // Merge OR-Set tags
    const mergedTags = new Map(a.itemTags);
    for (const [productId, tags] of b.itemTags) {
      const existing = mergedTags.get(productId) ?? new Set();
      mergedTags.set(productId, new Set([...existing, ...tags]));
    }

    // Merge item metadata (latest wins per item)
    const mergedItems = new Map([...a.items, ...b.items]);

    // Merge quantities (PN-Counter merge)
    const mergedQuantities = new Map(a.quantities);
    for (const [productId, bQty] of b.quantities) {
      const aQty = mergedQuantities.get(productId) ?? {
        P: new Map(),
        N: new Map(),
      };
      const mergedP = new Map(aQty.P);
      for (const [id, val] of bQty.P) {
        mergedP.set(id, Math.max(mergedP.get(id) ?? 0, val));
      }
      const mergedN = new Map(aQty.N);
      for (const [id, val] of bQty.N) {
        mergedN.set(id, Math.max(mergedN.get(id) ?? 0, val));
      }
      mergedQuantities.set(productId, { P: mergedP, N: mergedN });
    }

    return {
      itemTags: mergedTags,
      items: mergedItems,
      quantities: mergedQuantities,
      replicaId: a.replicaId,
      counter: Math.max(a.counter, b.counter),
    };
  },

  getCartContents: (
    cart: ShoppingCart
  ): Array<CartItem & { quantity: number }> => {
    const result: Array<CartItem & { quantity: number }> = [];
    for (const [productId, tags] of cart.itemTags) {
      if (tags.size === 0) continue;
      const item = cart.items.get(productId);
      if (!item) continue;
      const quantity = CartOps.getQuantity(cart, productId);
      if (quantity <= 0) continue;
      result.push({ ...item, quantity });
    }
    return result;
  },
};
```

CRDTs cannot enforce global invariants like 'quantity ≤ inventory'. If two replicas each believe 3 items remain, both might allow adding 3. Resolution happens at checkout time, potentially rejecting items. This is the fundamental trade-off: availability over constraints. For strict inventory, use consensus at write time instead.
Social media platforms handle massive write volumes from globally distributed users. Likes, shares, comments, and follows must be counted and displayed in near-real-time across millions of concurrent users.
The Challenge:
A viral post might receive thousands of likes per second from users worldwide. Traditional approaches, such as a single counter row updated with atomic increments or a central service that serializes every like, either create a write hotspot or add a cross-region round trip to every interaction.
CRDT Solution:
Distributed G-Counters / PN-Counters allow each datacenter to accept and count likes locally. Periodic merge gives globally consistent totals without coordination.
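A minimal PN-Counter sketch shows the mechanics (illustrative names, not a production API); each datacenter tracks only its own increments and decrements, and merge takes an entry-wise max:

```typescript
// PN-Counter for like/unlike: one P and N slot per datacenter.
type PNCounter = { P: Map<string, number>; N: Map<string, number> };

const newCounter = (): PNCounter => ({ P: new Map(), N: new Map() });

function like(c: PNCounter, dc: string): void {
  c.P.set(dc, (c.P.get(dc) ?? 0) + 1);
}

function unlike(c: PNCounter, dc: string): void {
  c.N.set(dc, (c.N.get(dc) ?? 0) + 1);
}

// Entry-wise max makes merge commutative, associative, and idempotent.
function mergeMax(
  a: Map<string, number>,
  b: Map<string, number>
): Map<string, number> {
  const out = new Map(a);
  for (const [k, v] of b) out.set(k, Math.max(out.get(k) ?? 0, v));
  return out;
}

function merge(a: PNCounter, b: PNCounter): PNCounter {
  return { P: mergeMax(a.P, b.P), N: mergeMax(a.N, b.N) };
}

function value(c: PNCounter): number {
  const sum = (m: Map<string, number>) =>
    [...m.values()].reduce((s, n) => s + n, 0);
  return sum(c.P) - sum(c.N);
}

// Each DC accepts likes locally; a periodic merge converges the total.
const usEast = newCounter();
const euWest = newCounter();
like(usEast, "us-east");
like(usEast, "us-east");
like(euWest, "eu-west");
unlike(euWest, "eu-west");
const globalCount = merge(usEast, euWest); // value(globalCount) === 2
```

Because merge is idempotent, gossiping the same state twice never double-counts, which is what makes fire-and-forget replication safe here.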
| Feature | CRDT Type | Semantics | Trade-offs |
|---|---|---|---|
| Like count | G-Counter | Only increment | Cannot unlike (or use PN-Counter) |
| Like/unlike | PN-Counter | Increment + Decrement | Can go negative briefly |
| Followers set | OR-Set | Follow/unfollow adds/removes | Metadata per follower |
| Comments list | RGA/Sequence | Ordered inserts | Tombstone accumulation |
| Reactions | Map<emoji, G-Counter> | Multiple reaction types | Linear in reaction variety |
| Post content | LWW-Register | Editable post text | Edit conflicts lost |
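The Reactions row above can be sketched as a map of per-emoji G-Counters (a toy illustration; names are made up, and real systems would add persistence and compaction):

```typescript
// Reactions as Map<emoji, G-Counter>: each replica increments only its slot.
type GCounter = Map<string, number>;    // replica id -> count
type Reactions = Map<string, GCounter>; // emoji name -> counter

function react(r: Reactions, emoji: string, replica: string): void {
  const c = r.get(emoji) ?? new Map<string, number>();
  c.set(replica, (c.get(replica) ?? 0) + 1);
  r.set(emoji, c);
}

// Merge is an entry-wise max over every emoji's counter.
function mergeReactions(a: Reactions, b: Reactions): Reactions {
  const out = new Map<string, GCounter>();
  for (const [emoji, c] of a) out.set(emoji, new Map(c));
  for (const [emoji, c] of b) {
    const target = out.get(emoji) ?? new Map<string, number>();
    for (const [rep, n] of c) target.set(rep, Math.max(target.get(rep) ?? 0, n));
    out.set(emoji, target);
  }
  return out;
}

function count(r: Reactions, emoji: string): number {
  return [...(r.get(emoji)?.values() ?? [])].reduce((s, n) => s + n, 0);
}

// Two datacenters accept reactions independently, then merge:
const dcA: Reactions = new Map();
const dcB: Reactions = new Map();
react(dcA, "fire", "dc-a");
react(dcA, "fire", "dc-a");
react(dcB, "fire", "dc-b");
react(dcB, "heart", "dc-b");
const mergedReactions = mergeReactions(dcA, dcB);
```

Note the table's trade-off in action: metadata grows linearly with the number of distinct reaction types, one counter map per emoji.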
```
## Case Study: SoundCloud's Roshi

SoundCloud built Roshi, a CRDT-based time-series store for activity feeds.

Problem: Display "activities" (likes, reposts, etc.) sorted by time,
with extremely high write throughput, geo-distributed.

Solution: LWW-Element-Set per user's feed, stored in Redis.

Architecture:

Application Layer
  (Like/repost events from user actions)
        │
        ▼
Roshi Farm
  - Multiple Roshi instances
  - Each accepts writes independently
  - No coordination between instances
  - Backed by Redis Cluster
        │
        ▼
Storage: Redis
  Key:   user:<id>:feed
  Value: Sorted Set (score = timestamp, member = activity_id)

  LWW Semantics:
  - Each activity has (id, timestamp, deleted_flag)
  - Higher timestamp wins for same activity_id
  - Periodic compaction removes old tombstones

Performance:
- Writes: Fire-and-forget to any Roshi instance
- Reads: Query any replica, get best-effort consistent view
- Consistency: Eventual, typically <1 second convergence
- Throughput: Millions of events per second

Key Insight:
For activity feeds, slightly stale data is acceptable.
CRDTs eliminate write coordination entirely.
```

Social platforms often display approximate counts ('2.3M likes' instead of '2,347,892'). This UX choice aligns perfectly with CRDT semantics—approximate display masks the inherent eventual consistency. Users never notice if the count is off by a few hundred during convergence.
Global businesses need data accessible from multiple regions. Cross-datacenter consensus is expensive (100ms+ latency per operation). CRDTs enable active-active multi-datacenter deployments where each datacenter accepts writes independently.
The Production Reality:
Companies like Apple, Riot Games, and Bet365 run active-active configurations across continents. Users connect to their nearest datacenter, which accepts writes locally. Datacenters sync asynchronously; CRDTs resolve any conflicts.
```
## Redis Enterprise Active-Active

Redis Enterprise offers first-class CRDT support for geo-distribution.

Supported CRDT Types:
- Strings (LWW-Register)
- Counters (PN-Counter)
- Sets (OR-Set)
- Sorted Sets (OR-Set with scores)
- Hashes (Map of LWW-Registers)
- Lists (RGA)
- Streams (append-only log with CRDT merge)

Architecture:

                 ┌─────────────┐
                 │   US-East   │
                 │    Redis    │◄── US users write here
                 │   Primary   │
                 └──────┬──────┘
                        │ Async CRDT Sync
         ┌──────────────┼──────────────┐
         ▼              ▼              ▼
  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
  │   EU-West   │ │    APAC     │ │   US-West   │
  │    Redis    │ │    Redis    │ │    Redis    │
  │   Primary   │ │   Primary   │ │   Primary   │
  └─────────────┘ └─────────────┘ └─────────────┘
         ▲              ▲              ▲
     EU users      APAC users    West US users
    write here     write here     write here

Conflict Resolution (Configurable):
- LWW (Last-Write-Wins) with vector clocks
- Add-Wins for sets
- PN-Counter semantics for counters

Example: Global Rate Limiter
  Key:  rate_limit:user:123
  Type: PN-Counter
  Each DC increments locally
  Merge gives global count
  Slightly over-limit during sync is acceptable
```

CRDTs are perfect for global counters: page views, API call counts, feature flags, rate limiting. Each DC counts locally, syncs periodically. The total is eventually accurate. For rate limiting, brief over-counts are usually acceptable—better than blocking writes on cross-DC consensus.
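The rate-limiter example above can be sketched as follows (a toy illustration assuming a G-Counter per rate-limit key; `LIMIT` and the datacenter names are made up):

```typescript
// Toy global rate limiter: each DC counts hits locally, syncs asynchronously.
type GCounter = Map<string, number>; // datacenter -> local hit count

function hit(c: GCounter, dc: string): void {
  c.set(dc, (c.get(dc) ?? 0) + 1);
}

function merge(a: GCounter, b: GCounter): GCounter {
  const out = new Map(a);
  for (const [dc, n] of b) out.set(dc, Math.max(out.get(dc) ?? 0, n));
  return out;
}

function total(c: GCounter): number {
  return [...c.values()].reduce((s, n) => s + n, 0);
}

const LIMIT = 100; // requests per window for key rate_limit:user:123

// Before the next sync, each DC only sees its own view:
const usEast: GCounter = new Map();
const euWest: GCounter = new Map();
for (let i = 0; i < 60; i++) hit(usEast, "us-east");
for (let i = 0; i < 55; i++) hit(euWest, "eu-west");

// Both views are under LIMIT, so both DCs keep accepting requests:
const allowedLocally = total(usEast) < LIMIT && total(euWest) < LIMIT;

// After merge, the global view reveals the brief over-limit window
// (115 > 100) that this pattern deliberately tolerates.
const globalView = merge(usEast, euWest);
```

This is the availability trade-off in miniature: each DC answers from local state with no cross-region round trip, at the cost of a bounded window where the true global count exceeds the limit.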
Mobile apps must work in elevators, subways, airplanes, and rural areas. The traditional approach—fail on network error—creates terrible UX. Offline-first means the app works fully offline, syncing when connectivity returns.
The CRDT Advantage:
CRDTs were designed exactly for this. Each device is a replica. Edits accumulate offline. On reconnection, merge. No special conflict handling code needed.
Architecture Pattern:
```
## Offline-First Architecture with CRDTs

MOBILE DEVICE
  Application UI
    (Works identically online or offline)
        │
        ▼
  Local CRDT Store (IndexedDB / SQLite / Realm)
    ┌──────────────┐ ┌──────────────┐ ┌───────────────┐
    │  User Data   │ │ Pending Ops  │ │  Sync State   │
    │ (CRDT state) │ │ (delta queue)│ │ (vector clock)│
    └──────────────┘ └──────────────┘ └───────────────┘
        │
        ▼
  Sync Engine
    - Monitors connectivity
    - Batches pending operations
    - Handles retry with exponential backoff
    - Computes deltas efficiently
        │
─ ─ ─ ─ │ ─ ─  NETWORK (may be offline for hours/days)  ─ ─ ─ ─ ─
        │
        ▼
SYNC SERVER
  Server CRDT Store
    (Source of truth, merges from all devices)

  Sync Protocol:
  1. Client sends: {device_vector_clock, delta}
  2. Server merges delta into global state
  3. Server computes delta since client's vector clock
  4. Server responds: {new_vector_clock, delta_for_client}
  5. Client merges server delta
  6. Both now have same state
```

There's a growing 'local-first' software movement emphasizing user data ownership, offline capability, and privacy. CRDTs are the enabling technology. Instead of cloud-first (data on server, device caches), local-first means data lives on device, cloud syncs. See 'Local-First Software' by Ink & Switch.
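The vector-clock handshake at the heart of such sync protocols can be sketched with a simple op log (illustrative types; real engines like Automerge use compressed binary encodings):

```typescript
// A vector clock records the highest op sequence number seen per device.
type VectorClock = Map<string, number>; // device id -> highest seq seen
type Op = { device: string; seq: number; payload: string };

// An op is new to a client if its seq exceeds the client's clock entry.
function newerThan(op: Op, clock: VectorClock): boolean {
  return op.seq > (clock.get(op.device) ?? 0);
}

// Server side: send only the ops the client hasn't seen yet.
function deltaSince(log: Op[], clientClock: VectorClock): Op[] {
  return log.filter((op) => newerThan(op, clientClock));
}

// Advance a clock to cover a batch of received ops.
function advance(clock: VectorClock, ops: Op[]): VectorClock {
  const out = new Map(clock);
  for (const op of ops) {
    out.set(op.device, Math.max(out.get(op.device) ?? 0, op.seq));
  }
  return out;
}

// Server log contains ops from two devices:
const log: Op[] = [
  { device: "phone", seq: 1, payload: "add milk" },
  { device: "laptop", seq: 1, payload: "add eggs" },
  { device: "phone", seq: 2, payload: "add bread" },
];

// A client that has only seen phone:1 reconnects and requests a delta:
const clientClock: VectorClock = new Map([["phone", 1]]);
const delta = deltaSince(log, clientClock);   // laptop:1 and phone:2
const newClock = advance(clientClock, delta); // now covers the whole log
```

A second sync with `newClock` yields an empty delta, which is why resending after a dropped connection is harmless: the clock makes the exchange idempotent.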
Online games have extreme requirements: sub-50ms latency, thousands of concurrent state updates, global player base, and zero tolerance for desync. While full game physics typically use deterministic lockstep or server-authoritative models, CRDTs find homes in specific gaming subsystems.
CRDT Use Cases in Gaming:
| Subsystem | CRDT Type | Why CRDTs Help |
|---|---|---|
| Player presence | G-Set / OR-Set | Who's online, no coordination for join/leave |
| Matchmaking queues | Priority queue CRDT | Distributed queue across regions |
| Chat messages | RGA / Sequence | Ordered messages, offline buffer |
| Stats/leaderboards | PN-Counter + LWW | Kill counts, scores from distributed servers |
| Inventory (non-critical) | OR-Set | Cosmetics, non-tradeable items |
| Guild/clan rosters | OR-Set | Members across shards |
```
## Case Study: Riot Games (League of Legends)

League of Legends has 100M+ monthly players across global regions.

Problem: Consistent player state across:
- Multiple game servers per region
- Cross-region features (shared friends, global events)
- Authentication/session systems

CRDT Usage:

PLAYER STATE SERVICE
  Player Profile (CRDT)
    - Summoner name: LWW-Register
    - Friends list: OR-Set<SummonerId>
    - Owned champions: G-Set (can only gain)
    - Match history: Append-only log
    - Settings: LWW-Map

  Sync Model:
  - Each region has local CRDT replica
  - Background mesh sync between regions
  - Game servers read local replica (low latency)
  - Writes accepted locally, propagate eventually

  NOT CRDT (requires strong consistency):
  - Ranked matchmaking (must not double-queue)
  - In-game state (deterministic lockstep)
  - Trading/purchasing (financial transactions)
  - Anti-cheat systems (authoritative server)

Key Insight:
CRDTs handle "soft state" that doesn't need transaction guarantees.
Hard state uses traditional consensus. Hybrid architecture.
```

Game physics, financial transactions, competitive ranking—these require strong consistency. CRDTs trade consistency for availability. For games, use CRDTs only for state where eventual consistency is acceptable. The authoritative server remains king for gameplay state.
Understanding when CRDTs are wrong is as important as knowing when they're right. Avoid CRDTs for financial transactions that need atomicity, for global invariants such as unique usernames or inventory limits, and for any state that requires a single authoritative ordering of operations.
Production systems often combine CRDTs for high-volume eventual-consistent data with consensus for critical operations. Example: Use CRDTs for product views/likes, but use Raft/Paxos for purchases. Let each data type have appropriate consistency.
We've explored how CRDTs power production systems across diverse domains. The recurring patterns: sequence CRDTs for collaborative text, OR-Sets and PN-Counters for carts and counters, LWW sets for activity feeds, and hybrid architectures that reserve consensus for hard state while CRDTs carry high-volume soft state.
What's Next:
The final page addresses CRDT Limitations and Challenges. While CRDTs are powerful, they come with real engineering challenges: tombstone accumulation, metadata overhead, garbage collection, and semantic constraints. Understanding these limitations is essential for successful production deployments.
You now understand how CRDTs are applied in production systems across multiple domains. From collaborative documents to global counters, from shopping carts to gaming infrastructure—you can recognize where CRDTs fit and where they don't. Next, we'll explore the engineering challenges and limitations.