Last-Write-Wins is simple but brutal—it discards one user's work entirely. For many applications, this is unacceptable. When two users simultaneously edit a document, add items to a cart, or update a counter, we don't want to lose either contribution. We want to merge them intelligently.
Custom conflict resolution moves beyond timestamp comparison to domain-aware merge logic that preserves user intent across conflicting writes. Instead of asking "which write is later?", we ask "how should these concurrent changes combine?"
This page explores the techniques, patterns, and implementations that make custom resolution work—from simple merge functions to sophisticated approaches like CRDTs and Operational Transformation that power real-time collaboration at scale.
By the end of this page, you will understand: (1) How to design and implement domain-specific merge functions, (2) Conflict-free Replicated Data Types (CRDTs) and their guarantees, (3) Operational Transformation for collaborative editing, (4) When to use each approach, and (5) Production patterns for combining multiple resolution strategies.
Before diving into specific techniques, we must understand what makes data mergeable. Not all data structures can be cleanly merged; understanding the mathematical properties required enables us to design data that supports conflict resolution.
The Merge Function Requirements:
For a merge function to produce consistent results across all replicas, it must satisfy three algebraic properties:

- Commutative: merge(a, b) = merge(b, a), so replicas can apply updates in any order.
- Associative: merge(merge(a, b), c) = merge(a, merge(b, c)), so the grouping of merges doesn't matter.
- Idempotent: merge(a, a) = a, so re-delivering the same state never changes the result.
Designing for Mergeability:
The key insight is that some data structures are inherently more mergeable than others. Consider these examples:
| Data Structure | Mergeability | Why |
|---|---|---|
| Single integer | Low | No natural merge; increment/set conflict |
| Set (add-only) | High | Union is commutative, associative, idempotent |
| Counter (per-node) | High | Sum of per-node counts always correct |
| Ordered list | Low | Concurrent inserts at same position conflict |
| Map (field-level LWW) | Medium | Field-independent updates merge cleanly |
| Text (raw string) | Low | Character-level conflicts with no context |
| Text (operations) | High | Operations transform and merge |
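The contrast in the table can be seen directly in code. Below is a minimal sketch (the names and values are illustrative) showing that set union satisfies all three merge properties, which is why add-only sets rank as highly mergeable:

```typescript
// Set union as a merge function: commutative, associative, idempotent.
type Tags = Set<string>;

const mergeTags = (a: Tags, b: Tags): Tags => new Set([...a, ...b]);

// Structural equality for small sets
const eq = (a: Tags, b: Tags): boolean =>
  a.size === b.size && [...a].every(x => b.has(x));

const x: Tags = new Set(["sale"]);
const y: Tags = new Set(["new"]);
const z: Tags = new Set(["sale", "featured"]);

// Commutative: order of arguments doesn't matter
console.log(eq(mergeTags(x, y), mergeTags(y, x))); // true
// Associative: grouping doesn't matter
console.log(eq(mergeTags(mergeTags(x, y), z), mergeTags(x, mergeTags(y, z)))); // true
// Idempotent: merging a state with itself changes nothing
console.log(eq(mergeTags(x, x), x)); // true
```

A single integer fails all three tests: there is no union-like operation that combines two concurrent assignments without losing one of them.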
The Accidental Design Principle:
Often, mergeability is an afterthought. Systems are designed with single-leader assumptions, then retrofitted for multi-leader. This creates friction—data structures that worked under serialization break under concurrency.
The Intentional Design Principle:
When building for multi-leader from the start, design data structures with merge semantics in mind. Ask: "If two users modify this simultaneously, what should happen?" Let that answer shape the data model.
The effort spent designing mergeable data structures pays dividends throughout the system's lifetime. Retrofitting merge logic onto unmergeable data is painful and error-prone. Invest in data modeling upfront.
The simplest form of custom resolution is writing domain-specific merge functions that encode business logic for combining concurrent writes. This approach doesn't require special data structures—just careful reasoning about what merge means in your domain.
Example 1: Shopping Cart Merge
```typescript
/**
 * Shopping Cart: Users add/remove items; merging should preserve all additions
 * and respect removals when causally related.
 */
interface CartItem {
  productId: string;
  quantity: number;
  addedAt: number;
  removedAt?: number; // Tombstone for removal
}

interface ShoppingCart {
  userId: string;
  items: Map<string, CartItem>; // productId -> CartItem
  version: number;
}

function mergeShoppingCarts(
  ancestor: ShoppingCart | null,
  local: ShoppingCart,
  remote: ShoppingCart
): ShoppingCart {
  const merged: ShoppingCart = {
    userId: local.userId,
    items: new Map(),
    version: Math.max(local.version, remote.version) + 1
  };

  // Collect all product IDs from all sources
  const allProductIds = new Set([
    ...local.items.keys(),
    ...remote.items.keys()
  ]);

  for (const productId of allProductIds) {
    const localItem = local.items.get(productId);
    const remoteItem = remote.items.get(productId);
    const ancestorItem = ancestor?.items.get(productId);

    const mergedItem = mergeCartItem(ancestorItem, localItem, remoteItem);

    // Only include if not deleted
    if (mergedItem && !mergedItem.removedAt) {
      merged.items.set(productId, mergedItem);
    }
  }

  return merged;
}

function mergeCartItem(
  ancestor: CartItem | undefined,
  local: CartItem | undefined,
  remote: CartItem | undefined
): CartItem | null {
  // Neither has this item
  if (!local && !remote) return null;

  // Only one side has it: take that side
  if (!local) return remote!;
  if (!remote) return local;

  // Both have it: merge
  return {
    productId: local.productId,
    // Quantity: sum the deltas from ancestor
    quantity: mergeQuantities(ancestor, local, remote),
    addedAt: Math.min(local.addedAt, remote.addedAt),
    // Removal wins if either removed (delete wins)
    removedAt: local.removedAt || remote.removedAt
  };
}

function mergeQuantities(
  ancestor: CartItem | undefined,
  local: CartItem,
  remote: CartItem
): number {
  const ancestorQty = ancestor?.quantity || 0;
  const localDelta = local.quantity - ancestorQty;
  const remoteDelta = remote.quantity - ancestorQty;

  // Apply both deltas to ancestor quantity
  return Math.max(0, ancestorQty + localDelta + remoteDelta);
}
```

Example 2: User Profile with Field-Level Merge
```typescript
/**
 * User Profile: Different fields can be updated independently.
 * Use field-level timestamps to merge without losing either user's changes.
 */
interface VersionedField<T> {
  value: T;
  updatedAt: number;
  updatedBy: string;
}

// Placeholder shape; the real type is domain-specific
interface UserPreferences {
  [key: string]: unknown;
}

interface UserProfile {
  userId: string;
  displayName: VersionedField<string>;
  bio: VersionedField<string>;
  avatarUrl: VersionedField<string | null>;
  preferences: VersionedField<UserPreferences>;
  // Lists use set semantics
  favoriteProducts: Map<string, VersionedField<boolean>>; // productId -> added?
}

function mergeProfiles(local: UserProfile, remote: UserProfile): UserProfile {
  return {
    userId: local.userId,
    displayName: mergeField(local.displayName, remote.displayName),
    bio: mergeField(local.bio, remote.bio),
    avatarUrl: mergeField(local.avatarUrl, remote.avatarUrl),
    preferences: mergeField(local.preferences, remote.preferences),
    favoriteProducts: mergeVersionedMap(local.favoriteProducts, remote.favoriteProducts)
  };
}

function mergeField<T>(
  local: VersionedField<T>,
  remote: VersionedField<T>
): VersionedField<T> {
  // LWW at field level
  if (remote.updatedAt > local.updatedAt) return remote;
  if (local.updatedAt > remote.updatedAt) return local;
  // Tie-breaker: alphabetic by updatedBy
  return local.updatedBy > remote.updatedBy ? local : remote;
}

function mergeVersionedMap<T>(
  local: Map<string, VersionedField<T>>,
  remote: Map<string, VersionedField<T>>
): Map<string, VersionedField<T>> {
  const merged = new Map<string, VersionedField<T>>();

  // Union of all keys
  const allKeys = new Set([...local.keys(), ...remote.keys()]);

  for (const key of allKeys) {
    const localValue = local.get(key);
    const remoteValue = remote.get(key);

    if (!localValue) {
      merged.set(key, remoteValue!);
    } else if (!remoteValue) {
      merged.set(key, localValue);
    } else {
      merged.set(key, mergeField(localValue, remoteValue));
    }
  }

  return merged;
}
```

The shopping cart example shows 'three-way merge': considering the common ancestor in addition to both diverged versions.
This enables smarter merge decisions (e.g., detecting if a field was changed or just inherited). Systems like Git use three-way merge for similar reasons.
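As a minimal illustration of that point (the function and values here are hypothetical, not part of the examples above), a three-way merge of a single field can tell "changed" apart from "merely inherited" by comparing each side against the ancestor:

```typescript
// Three-way merge of one field: if only one side diverged from the
// ancestor, take that side; if both diverged, apply a conflict policy
// (here: prefer local, but this is a per-domain decision).
function threeWayField<T>(ancestor: T, local: T, remote: T): T {
  if (local === ancestor) return remote; // only remote changed (or neither)
  if (remote === ancestor) return local; // only local changed
  return local; // true conflict: both changed
}

console.log(threeWayField("alice", "alice", "alicia")); // "alicia"
console.log(threeWayField("alice", "ally", "alice"));   // "ally"
```

Without the ancestor, a two-way merge cannot distinguish "remote changed this field" from "local changed it", which is why Git and the cart example both keep the common base around.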
CRDTs (Conflict-free Replicated Data Types) are mathematically designed data structures that guarantee convergence without coordination. Any sequence of operations, received in any order, produces the same final state at all replicas.
Why CRDTs Matter:
Unlike ad-hoc merge functions that might have subtle bugs, CRDTs are proven correct by their mathematical construction. If you use a well-designed CRDT, convergence is guaranteed—not just hoped for.
Two CRDT Families:

- State-based (CvRDTs): replicas exchange their full state and combine it with a merge function that is commutative, associative, and idempotent. Tolerant of lost, duplicated, or reordered messages.
- Operation-based (CmRDTs): replicas broadcast individual operations, which must commute with one another. Uses less bandwidth but requires reliable, exactly-once delivery.
Common CRDT Types:
| CRDT Type | Description | Use Case |
|---|---|---|
| G-Counter | Grow-only counter; each node tracks own increments | Like counts, view counts, any monotonic counter |
| PN-Counter | Positive-Negative counter; tracks increments and decrements separately | Inventory counts, balances (with caveats) |
| G-Set | Grow-only set; elements can only be added | Tags, categories, follower lists (add only) |
| 2P-Set (Two-Phase Set) | Elements can be added then removed (once) | Completed tasks, viewed items |
| OR-Set (Observed-Remove Set) | Elements can be added and removed freely | General-purpose set; shopping cart items |
| LWW-Register | Last-Write-Wins register formalized as CRDT | Single values; user preferences |
| LWW-Map | Map where each key uses LWW-Register semantics | User profiles; config settings |
| Sequence CRDT (RGA, YATA) | Ordered list supporting concurrent inserts | Collaborative text editing; ordered lists |
```typescript
/**
 * G-Counter (Grow-only Counter) - State-Based CRDT
 * Each replica tracks its own increments. Total = sum of all replica counts.
 */
interface GCounter {
  counts: Record<string, number>; // replicaId -> count
}

const GCounterOps = {
  create(): GCounter {
    return { counts: {} };
  },

  increment(counter: GCounter, replicaId: string): GCounter {
    return {
      counts: {
        ...counter.counts,
        [replicaId]: (counter.counts[replicaId] || 0) + 1
      }
    };
  },

  merge(a: GCounter, b: GCounter): GCounter {
    const merged: Record<string, number> = { ...a.counts };
    for (const [replicaId, count] of Object.entries(b.counts)) {
      merged[replicaId] = Math.max(merged[replicaId] || 0, count);
    }
    return { counts: merged };
  },

  value(counter: GCounter): number {
    return Object.values(counter.counts).reduce((sum, c) => sum + c, 0);
  }
};

/**
 * OR-Set (Observed-Remove Set) - State-Based CRDT
 * Handles add and remove operations with "add wins" semantics.
 * Each add gets a unique tag; remove tombstones the tags it has observed,
 * so merging with a stale replica cannot resurrect a removed element.
 */
interface ORSet<T> {
  elements: Map<T, Set<string>>; // element -> set of live add-tags
  tombstones: Set<string>;       // add-tags that have been removed
}

const ORSetOps = {
  create<T>(): ORSet<T> {
    return { elements: new Map(), tombstones: new Set() };
  },

  add<T>(set: ORSet<T>, element: T, uniqueTag: string): ORSet<T> {
    const newElements = new Map(set.elements);
    const existing = newElements.get(element) || new Set<string>();
    newElements.set(element, new Set([...existing, uniqueTag]));
    return { elements: newElements, tombstones: set.tombstones };
  },

  remove<T>(set: ORSet<T>, element: T): ORSet<T> {
    // Tombstone every observed tag; concurrent (unobserved) adds survive,
    // which is exactly the "add wins" behavior.
    const observed = set.elements.get(element) || new Set<string>();
    const newElements = new Map(set.elements);
    newElements.delete(element);
    return {
      elements: newElements,
      tombstones: new Set([...set.tombstones, ...observed])
    };
  },

  merge<T>(a: ORSet<T>, b: ORSet<T>): ORSet<T> {
    const tombstones = new Set([...a.tombstones, ...b.tombstones]);
    const merged = new Map<T, Set<string>>();
    const allElements = new Set([...a.elements.keys(), ...b.elements.keys()]);

    for (const element of allElements) {
      const tagsA = a.elements.get(element) || new Set<string>();
      const tagsB = b.elements.get(element) || new Set<string>();
      // Union the tags, then drop any tombstoned on either side
      const liveTags = new Set(
        [...tagsA, ...tagsB].filter(tag => !tombstones.has(tag))
      );
      if (liveTags.size > 0) {
        merged.set(element, liveTags);
      }
    }

    return { elements: merged, tombstones };
  },

  contains<T>(set: ORSet<T>, element: T): boolean {
    const tags = set.elements.get(element);
    return tags !== undefined && tags.size > 0;
  },

  values<T>(set: ORSet<T>): T[] {
    return Array.from(set.elements.keys());
  }
};
```

Don't implement CRDTs from scratch for production. Use battle-tested libraries: Automerge (JavaScript/Rust), Yjs (JavaScript), or Riak's built-in data types (Erlang). These handle edge cases and optimizations that are easy to miss.
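One type from the table above that the listing doesn't cover is the PN-Counter. A rough sketch, assuming the same per-replica bookkeeping as the G-Counter (all names here are illustrative): it is simply two grow-only maps, one for increments and one for decrements, merged independently.

```typescript
// PN-Counter sketch: value = sum(increments) - sum(decrements).
interface PNCounter {
  p: Record<string, number>; // per-replica increment counts
  n: Record<string, number>; // per-replica decrement counts
}

const pnCreate = (): PNCounter => ({ p: {}, n: {} });

const bump = (m: Record<string, number>, id: string): Record<string, number> =>
  ({ ...m, [id]: (m[id] || 0) + 1 });

const pnIncrement = (c: PNCounter, id: string): PNCounter => ({ ...c, p: bump(c.p, id) });
const pnDecrement = (c: PNCounter, id: string): PNCounter => ({ ...c, n: bump(c.n, id) });

// Merge takes the per-replica max on each side, exactly as a G-Counter does.
const maxMerge = (a: Record<string, number>, b: Record<string, number>) => {
  const out = { ...a };
  for (const [id, v] of Object.entries(b)) out[id] = Math.max(out[id] || 0, v);
  return out;
};

const pnMerge = (a: PNCounter, b: PNCounter): PNCounter => ({
  p: maxMerge(a.p, b.p),
  n: maxMerge(a.n, b.n),
});

const pnValue = (c: PNCounter): number =>
  Object.values(c.p).reduce((s, v) => s + v, 0) -
  Object.values(c.n).reduce((s, v) => s + v, 0);

// Two replicas diverge, then merge in either order: same value.
const a = pnIncrement(pnIncrement(pnCreate(), "A"), "A"); // replica A: +2
const b = pnDecrement(pnIncrement(pnCreate(), "B"), "B"); // replica B: +1, -1
console.log(pnValue(pnMerge(a, b))); // 2
console.log(pnValue(pnMerge(b, a))); // 2
```

Note the "with caveats" in the table: a PN-Counter converges, but it cannot enforce an invariant like "balance never goes negative" without coordination.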
Operational Transformation (OT) is the technique that powers real-time collaborative editing in Google Docs, Notion (partially), and many other collaborative tools. Unlike CRDTs which are data-structure focused, OT operates on operations.
The Core Idea:

Instead of merging states, OT exchanges operations (such as "insert 'X' at position 1"). When a replica receives an operation that was generated concurrently with operations it has already applied, it transforms the incoming operation's parameters so that it still expresses the original intent against the current document state.
The Transform Function:
Given two concurrent operations op1 and op2, the transform function produces op1' and op2' such that:
Applying op1 then op2' produces the same result as applying op2 then op1'.

```typescript
/**
 * Simple Operational Transformation for text editing.
 * Operations: Insert(position, char) and Delete(position)
 */
type Operation =
  | { type: 'insert'; position: number; char: string }
  | { type: 'delete'; position: number };

/**
 * Transform op2 against op1 (op1 was applied first).
 * Returns transformed op2' that can be applied after op1.
 */
function transform(op1: Operation, op2: Operation): Operation {
  if (op1.type === 'insert' && op2.type === 'insert') {
    // Both inserts: shift op2 right if it lands at or after op1
    if (op2.position >= op1.position) {
      return { ...op2, position: op2.position + 1 };
    }
    return op2;
  }

  if (op1.type === 'insert' && op2.type === 'delete') {
    // Insert then delete: shift the delete right if at or after the insert
    if (op2.position >= op1.position) {
      return { ...op2, position: op2.position + 1 };
    }
    return op2;
  }

  if (op1.type === 'delete' && op2.type === 'insert') {
    // Delete then insert: shift the insert left if after the deleted char
    if (op2.position > op1.position) {
      return { ...op2, position: op2.position - 1 };
    }
    return op2;
  }

  if (op1.type === 'delete' && op2.type === 'delete') {
    // Both deletes: adjust position, handle same-char case
    if (op2.position > op1.position) {
      return { ...op2, position: op2.position - 1 };
    }
    if (op2.position === op1.position) {
      // Both deleted the same character: op2 becomes a no-op
      return { type: 'delete', position: -1 }; // No-op marker
    }
    return op2;
  }

  return op2; // Unreachable
}

// Example usage:
// Initial text: "Hello"
// User A types 'X' at position 1: op1 = insert(1, 'X') → "HXello"
// User B deletes at position 2:   op2 = delete(2)      → "Helo" (concurrent, different state)
//
// To merge at A's state (already has op1):
// Transform op2 against op1: op2' = delete(3) → "HXelo"
//
// To merge at B's state (already has op2):
// Transform op1 against op2: op1' = insert(1, 'X') (position unchanged) → "HXelo"
//
// Both replicas converge to "HXelo"
```

OT Challenges:

- Correctness is hard: transform functions must satisfy subtle properties, and several published transforms were later shown to violate them in edge cases.
- Most deployments rely on a central server to impose a total order on operations, which complicates offline and peer-to-peer use.
- Complexity grows quickly as the operation vocabulary expands beyond plain insert/delete to formatting, tables, and embedded objects.
Modern collaborative editing increasingly uses CRDTs (like Yjs, Automerge, YATA) instead of OT. CRDTs are mathematically simpler, don't require central coordination, and handle offline operation naturally. OT remains in legacy systems and cases where fine-grained server control is needed.
Production systems rarely use a single resolution strategy. Instead, they combine multiple approaches, applying different strategies to different data types and conflict scenarios.
Strategy Selection by Data Type:
| Data Type | Conflict Likelihood | Resolution Strategy | Rationale |
|---|---|---|---|
| User display name | Low | LWW (field-level) | Rarely concurrent; latest is acceptable |
| Profile avatar | Low | LWW (field-level) | Binary; can't merge images |
| Favorite items | Medium | OR-Set CRDT | Set semantics; preserve all additions |
| Shopping cart | Medium | Domain merge function | Quantity deltas + item union |
| Wallet balance | High conflict impact | Single-leader only | Can't tolerate any inconsistency |
| Collaborative document | High | CRDT (Yjs/Automerge) | Specialized for text |
| Notification preferences | Low | Field-level LWW per toggle | Each preference independent |
Implementation Pattern: Resolution Registry
```typescript
/**
 * Resolution Registry: Different data types use different conflict resolution.
 * Allows hybrid strategies within a single system.
 */
interface ConflictResolver<T> {
  resolve(local: T, remote: T, ancestor?: T): T;
}

class LWWResolver<T extends { updatedAt: number }> implements ConflictResolver<T> {
  resolve(local: T, remote: T): T {
    return remote.updatedAt > local.updatedAt ? remote : local;
  }
}

class SetUnionResolver<T> implements ConflictResolver<Set<T>> {
  resolve(local: Set<T>, remote: Set<T>): Set<T> {
    return new Set([...local, ...remote]);
  }
}

class DomainMergeResolver<T> implements ConflictResolver<T> {
  constructor(private mergeFn: (local: T, remote: T, ancestor?: T) => T) {}

  resolve(local: T, remote: T, ancestor?: T): T {
    return this.mergeFn(local, remote, ancestor);
  }
}

// Registry mapping data types to their resolvers
class ResolutionRegistry {
  private resolvers = new Map<string, ConflictResolver<any>>();

  register<T>(dataType: string, resolver: ConflictResolver<T>): void {
    this.resolvers.set(dataType, resolver);
  }

  resolve<T>(dataType: string, local: T, remote: T, ancestor?: T): T {
    const resolver = this.resolvers.get(dataType);
    if (!resolver) {
      throw new Error(`No resolver registered for data type: ${dataType}`);
    }
    return resolver.resolve(local, remote, ancestor);
  }
}

// Setup
const registry = new ResolutionRegistry();
registry.register('user.displayName', new LWWResolver());
registry.register('user.favorites', new SetUnionResolver());
// Adapt mergeShoppingCarts' (ancestor, local, remote) signature to the
// resolver's (local, remote, ancestor?) parameter order
registry.register(
  'shopping.cart',
  new DomainMergeResolver((local, remote, ancestor) =>
    mergeShoppingCarts(ancestor ?? null, local, remote))
);

// Usage during conflict resolution
function handleConflict(dataType: string, local: any, remote: any): any {
  return registry.resolve(dataType, local, remote);
}
```

For truly critical data (financial transactions, unique constraints), don't fight multi-leader's complexity. Route those writes to a single leader, accepting higher latency for correctness. Multi-leader is a tool, not a mandate for all data.
Custom conflict resolution is notoriously difficult to get right. Edge cases hide in operation ordering, timing, and concurrent access patterns. Rigorous testing is essential.
Property-Based Testing:
Instead of testing specific scenarios, property-based testing generates random inputs and verifies that invariants hold. For conflict resolution, key properties include commutativity (merge order doesn't matter), associativity (merge grouping doesn't matter), idempotence (merging a state with itself changes nothing), and convergence (all replicas reach the same state regardless of delivery order).
```typescript
import * as fc from 'fast-check';

// Property: G-Counter merge is commutative.
// Compare key-by-key: merge(a, b) and merge(b, a) can differ in key
// insertion order, so JSON.stringify would report false mismatches.
const sameCounts = (x: GCounter, y: GCounter): boolean => {
  const keys = new Set([...Object.keys(x.counts), ...Object.keys(y.counts)]);
  return [...keys].every(k => (x.counts[k] || 0) === (y.counts[k] || 0));
};

fc.assert(
  fc.property(
    fc.record({ counts: fc.dictionary(fc.string(), fc.nat()) }),
    fc.record({ counts: fc.dictionary(fc.string(), fc.nat()) }),
    (a, b) => sameCounts(GCounterOps.merge(a, b), GCounterOps.merge(b, a))
  )
);

// Property: OR-Set add then remove leaves element gone
fc.assert(
  fc.property(
    fc.string(), // element
    fc.string(), // tag
    (element, tag) => {
      const set1 = ORSetOps.create<string>();
      const set2 = ORSetOps.add(set1, element, tag);
      const set3 = ORSetOps.remove(set2, element);
      return !ORSetOps.contains(set3, element);
    }
  )
);

// Property: convergence under arbitrary operation order
function testConvergence<State, Op>(
  initialState: () => State,
  applyOp: (state: State, op: Op) => State,
  merge: (a: State, b: State) => State,
  operations: Op[]
): boolean {
  // Generate all permutations (or sample for large op sets)
  const permutations = getPermutations(operations);

  const finalStates = permutations.map(perm => {
    let state = initialState();
    for (const op of perm) {
      state = applyOp(state, op);
    }
    return state;
  });

  // Merging each final state with the reference must agree in both directions
  const referenceState = finalStates[0];
  return finalStates.every(s =>
    JSON.stringify(merge(s, referenceState)) ===
    JSON.stringify(merge(referenceState, s))
  );
}

function getPermutations<T>(items: T[]): T[][] {
  if (items.length <= 1) return [items.slice()];
  return items.flatMap((item, i) =>
    getPermutations([...items.slice(0, i), ...items.slice(i + 1)])
      .map(perm => [item, ...perm])
  );
}
```

Chaos Testing:
Beyond unit tests, resolution logic must be tested under realistic distributed conditions: network partitions; message loss, reordering, and duplication; clock skew between replicas; and replicas crashing mid-merge.
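One of these conditions, out-of-order delivery, can be checked deterministically. The sketch below (illustrative names, a G-Counter-style max-merge) replays the same three replica states in every possible delivery order and verifies that the resulting totals agree:

```typescript
// Replay replica states in all delivery orders; a correct CRDT merge
// must yield the same total every time.
type Counts = Record<string, number>;

const merge = (a: Counts, b: Counts): Counts => {
  const out = { ...a };
  for (const [k, v] of Object.entries(b)) out[k] = Math.max(out[k] || 0, v);
  return out;
};

const total = (c: Counts) => Object.values(c).reduce((s, v) => s + v, 0);

const states: Counts[] = [{ A: 3 }, { B: 1, A: 1 }, { C: 2 }];
const orders: number[][] = [
  [0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0],
];

const results = orders.map(order =>
  total(order.map(i => states[i]).reduce(merge, {}))
);
console.log(results); // every order yields the same total: 6
```

The same harness can be extended with dropped and duplicated deliveries; idempotence guarantees that re-merging a state already seen changes nothing.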
Resolution bugs often manifest only under specific timing conditions that are hard to reproduce in development. Use production traffic shadowing, canary deployments, and comprehensive conflict logging to catch issues before they cause widespread data corruption.
We've explored custom conflict resolution from first principles through production implementation. The key insights: design data for mergeability up front rather than retrofitting it; encode business rules in domain-specific merge functions, using three-way merge where an ancestor is available; reach for CRDTs when convergence must be guaranteed without coordination; treat OT as a specialized tool for collaborative editing; combine strategies per data type instead of forcing one approach on everything; and validate the result with property-based and chaos testing.
Module Conclusion: Multi-Leader Replication
Across this module, we've journeyed through multi-leader replication comprehensively:
Multiple Leaders for Writes: Understanding why single-leader becomes insufficient and how multi-leader architectures address latency and availability
Multi-Datacenter Use Cases: Real-world scenarios from Netflix, Uber, and gaming that demand multi-leader capabilities
Conflict Resolution Strategies: The spectrum from avoidance through automatic resolution to human-involved processes
Last-Write-Wins Deep Dive: The mechanics, failure modes, and production implementations of the most common automatic strategy
Custom Conflict Resolution: Domain-specific merges, CRDTs, OT, and hybrid approaches that preserve data integrity
Multi-leader replication is a powerful tool for global-scale systems, but it demands deep understanding of its trade-offs, careful data modeling, and robust conflict handling. With this foundation, you're equipped to design, implement, and operate multi-leader systems that serve users worldwide with low latency and high availability.
You've completed the Multi-Leader Replication module. You now understand the architecture, use cases, and conflict resolution strategies that power globally distributed systems. Next, explore Quorum-Based Replication to learn how systems achieve tunable consistency through read/write quorums.