With a shared database, you enjoyed a comfortable illusion: all data was immediately consistent. A transaction committed, and every subsequent query saw the updated state. In the Database per Service architecture, this illusion shatters.

When User Service updates a user's email and Order Service has a denormalized copy, there's a window—milliseconds, seconds, perhaps minutes—where the two databases disagree. The Order Service shows the old email while User Service shows the new one.

This is eventual consistency, and it's not a bug to be fixed. It's a fundamental property of distributed systems that we must understand, design for, and communicate appropriately.
This page provides a comprehensive examination of eventual consistency. You'll understand the theoretical foundations (CAP theorem and its implications), learn practical patterns for handling consistency windows, explore user experience strategies for managing user expectations, and master techniques for detecting and resolving inconsistencies.
What is Eventual Consistency?

Eventual consistency is a consistency model used in distributed systems. It guarantees that, if no new updates are made to a given piece of data, eventually all accesses to that data will return the last updated value.

The key word is eventually. There is no guarantee about how long "eventually" takes—it could be milliseconds or hours, depending on the system design.
```text
Timeline of an email update:

T=0ms:  User updates email in User Service
        User Service DB:  email = "new@example.com" ✓
        Order Service DB: email = "old@example.com" (stale)

T=10ms: User Service publishes "email_changed" event
T=15ms: Event reaches message queue
T=50ms: Order Service receives event
T=55ms: Order Service updates its local copy
        User Service DB:  email = "new@example.com" ✓
        Order Service DB: email = "new@example.com" ✓

Consistency window: 55ms
```

During this window, the two services have different views of the truth. After the window closes, they're consistent again (until the next update).

Strong Consistency vs. Eventual Consistency

These are opposite ends of a spectrum:
| Aspect | Strong Consistency | Eventual Consistency |
|---|---|---|
| Guarantee | All nodes see the same data immediately | All nodes will eventually see the same data |
| After write | Read returns the write immediately | Read may return stale data temporarily |
| Typical latency | Higher (must coordinate nodes) | Lower (nodes act independently) |
| Availability | Lower (must wait for coordination) | Higher (can serve from any node) |
| Scalability | Harder (coordination overhead) | Easier (independent scaling) |
| Complexity | Simpler conceptually | Requires handling inconsistency cases |
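To make the consistency window concrete, here is a toy, self-contained simulation of the email-update timeline: a source store that is updated immediately, and a replica that converges via a delayed event. All names (`sourceDb`, `replicaDb`, `publish`) are illustrative, not a real API.

```typescript
// Toy simulation of the consistency window between two services' stores.
// All names here are illustrative; the delays stand in for queue latency.
const sourceDb = new Map<string, string>();
const replicaDb = new Map<string, string>();

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Simulated event bus: delivers the change to the replica after a delay.
async function publish(userId: string, email: string, propagationMs: number) {
  await sleep(propagationMs); // network + queue + consumer latency
  replicaDb.set(userId, email); // replica converges
}

async function updateEmail(userId: string, email: string) {
  sourceDb.set(userId, email); // T=0: source is updated immediately
  void publish(userId, email, 50); // event propagates in the background
}

async function demo() {
  sourceDb.set('u1', 'old@example.com');
  replicaDb.set('u1', 'old@example.com');

  await updateEmail('u1', 'new@example.com');

  const duringWindow = replicaDb.get('u1'); // read inside the window: stale
  await sleep(100); // wait past the window
  const afterWindow = replicaDb.get('u1'); // converged

  return { duringWindow, afterWindow };
}
```

Running `demo()` shows both phases: a stale read during the window, then convergence once the simulated event lands.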
The CAP theorem states that distributed systems can provide at most two of three guarantees: Consistency, Availability, and Partition tolerance. Since network partitions are unavoidable in distributed systems, we must choose between Consistency and Availability. Microservices with separate databases typically choose Availability, accepting eventual consistency as the tradeoff.
You might ask: can't we just use distributed transactions to maintain strong consistency across services? Technically, yes—but the costs are prohibitive.

The Two-Phase Commit Problem

Distributed transactions (like two-phase commit) provide strong consistency across databases but introduce severe limitations:
```text
Scenario: Place an order (update 3 services)

With Eventual Consistency:
├── User Service: Check user (50ms)
├── Order Service: Create order (60ms)        [parallel]
├── Inventory Service: Reserve stock (55ms)
└── Total: max(50, 60, 55) ≈ 60ms + overhead

With Two-Phase Commit (Strong Consistency):
├── Phase 1 (Prepare):
│   ├── Coordinator: Ask User Service to prepare (50ms)
│   │   └── Wait for response
│   ├── Coordinator: Ask Order Service to prepare (60ms)
│   │   └── Wait for response
│   └── Coordinator: Ask Inventory Service to prepare (55ms)
│       └── Wait for response
├── Phase 2 (Commit):
│   ├── Coordinator: Tell User Service to commit (30ms)
│   ├── Coordinator: Tell Order Service to commit (30ms)
│   └── Coordinator: Tell Inventory Service to commit (30ms)
└── Total: 50+60+55+30+30+30 = 255ms minimum
    (Often much higher due to lock waiting and retries)

Result: 4x+ latency for strong consistency
```

The Reality of Business Requirements

Here's the uncomfortable truth: most business processes don't actually require immediate consistency. Consider a few examples:

- E-commerce order confirmation — The order is placed, and the confirmation email arrives seconds later. Users don't expect the email to be instant.
- Banking transfer — Bank transfers take hours or days to settle. The "pending" state is well-understood by users.
- Social media timeline — If your post appears to your followers with a 500ms delay, no one notices.
- Analytics dashboard — Business metrics delayed by minutes are perfectly acceptable.

True strong consistency requirements are rarer than engineers often assume. Identify where you genuinely need it (usually financial transactions) and accept eventual consistency elsewhere.
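The latency arithmetic can be sketched directly: with simulated service delays, independent calls cost roughly the slowest call, while 2PC-style sequential coordination costs at least the sum. The delays and function names below are illustrative, not measurements of any real system.

```typescript
// Sketch: simulated service calls showing why independent (parallel) writes
// beat coordinated (sequential) ones on latency. Delays are illustrative.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function callService(name: string, ms: number): Promise<string> {
  await sleep(ms); // stand-in for a network round trip
  return name;
}

// Eventual consistency: services act independently, total ≈ max(delays)
async function placeOrderEventual(): Promise<number> {
  const start = Date.now();
  await Promise.all([
    callService('user', 50),
    callService('order', 60),
    callService('inventory', 55),
  ]);
  return Date.now() - start;
}

// Coordinated (2PC-style): each step waits for the previous, total ≈ sum(delays)
async function placeOrderCoordinated(): Promise<number> {
  const start = Date.now();
  await callService('user-prepare', 50);
  await callService('order-prepare', 60);
  await callService('inventory-prepare', 55);
  return Date.now() - start;
}
```

The parallel path finishes in roughly the time of the slowest call; the coordinated path pays for every call in sequence, and a real 2PC adds commit-phase round trips and lock waiting on top.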
When someone says "we need immediate consistency," ask: "What's the actual business impact of a 5-second delay?" Often, the perceived requirement is based on what was easy with a shared database, not what the business actually needs.
The consistency window is the time between when data is updated in the authoritative source and when all replicas converge to the same state. Designing for eventual consistency means understanding and minimizing this window while handling the cases where it matters.

Factors Affecting Consistency Window:

- How quickly events are published after the source write (inline vs. batched outbox polling)
- Message broker delivery latency and queue depth
- Whether consumers receive events via push or poll on an interval
- Consumer processing throughput and concurrency
- Retry and backoff behavior when event processing fails
Minimizing the Window:
```typescript
// 1. Synchronous event publishing (transactional outbox)
// Reduces delay between write and event publish to near-zero
async function updateUser(userId: string, updates: UserUpdate) {
  await db.transaction(async (tx) => {
    await tx.users.update(userId, updates);
    await tx.outbox.create({
      topic: 'user.updated',
      payload: { userId, ...updates },
    });
  });
  // Event is persisted atomically with the write
  // Outbox processor publishes within milliseconds
}

// 2. Push-based processing (vs. polling)
// Consumers react immediately to events
@Subscribe('user.updated', { mode: 'push' })
async function handleUserUpdated(event: UserUpdatedEvent) {
  await localUserCache.update(event.userId, event);
}

// 3. Parallel processing with high concurrency
// Multiple consumers process events simultaneously
const consumer = kafka.consumer({
  groupId: 'order-service',
  maxParallelMessages: 100, // Process 100 events concurrently
});

// 4. Read-your-writes for the same user session
// Route subsequent reads to the source of truth
async function getUserProfile(userId: string, sessionUserId: string) {
  if (userId === sessionUserId) {
    // User is viewing their own profile - read from source
    return userService.getUser(userId);
  }
  // Viewing another user - local cache is fine
  return localUserCache.get(userId);
}
```

Read-Your-Writes Consistency:

The most noticeable consistency issue is when users don't see their own changes. This can be addressed with read-your-writes patterns:
```typescript
class OrderHistoryService {
  async getOrderHistory(userId: string, context: RequestContext) {
    // Check if user recently modified their profile
    const lastWriteTime = await this.cache.get(`write_ts:${userId}`);

    if (lastWriteTime && (Date.now() - lastWriteTime) < 5000) {
      // User wrote within last 5 seconds
      // Fetch fresh data from source services
      return this.fetchFreshOrderHistory(userId);
    }

    // No recent writes - cached/denormalized data is safe to use
    return this.getCachedOrderHistory(userId);
  }

  async updateOrderAddress(orderId: string, address: Address, userId: string) {
    await this.orderService.updateAddress(orderId, address);

    // Mark that this user has recent writes
    await this.cache.set(`write_ts:${userId}`, Date.now(), { ttl: 10 });

    return { success: true };
  }
}
```

Another approach is session stickiness: for a brief period after a write, route all of that user's reads to the authoritative source. This guarantees read-your-writes at the cost of some additional load on the source service. The window can be short (5-10 seconds) since propagation is usually fast.
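The stickiness decision itself can be sketched in a few lines, assuming an in-memory map of last-write timestamps (a real system would use a shared cache keyed by user; names here are illustrative):

```typescript
// Sketch of session stickiness: after a write, reads for that user are routed
// to the authoritative source for a short window. Names are illustrative.
const STICKY_WINDOW_MS = 10_000;
const lastWriteAt = new Map<string, number>(); // userId -> timestamp of last write

function recordWrite(userId: string, now: number = Date.now()) {
  lastWriteAt.set(userId, now);
}

// Decide where to route a read for this user.
function routeRead(userId: string, now: number = Date.now()): 'source' | 'replica' {
  const ts = lastWriteAt.get(userId);
  if (ts !== undefined && now - ts < STICKY_WINDOW_MS) {
    return 'source'; // recent write: guarantee read-your-writes
  }
  return 'replica'; // no recent write: the replica is safe (and cheaper)
}
```

The tradeoff is visible in the routing function: only recent writers pay the cost of hitting the source, so the extra load scales with write volume, not read volume.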
Designing for eventual consistency requires a shift in thinking. Instead of assuming data is always current, design systems that are resilient to stale data.

Principle 1: Idempotent Operations

Since events may be delivered more than once, operations must be idempotent—applying them multiple times has the same effect as applying once:
```typescript
class OrderEventHandler {
  @Subscribe('inventory.reserved')
  async handleInventoryReserved(event: InventoryReservedEvent) {
    // Idempotency check - have we processed this event already?
    const processed = await this.eventLog.exists(event.eventId);
    if (processed) {
      this.logger.info(`Event ${event.eventId} already processed, skipping`);
      return;
    }

    // Process the event
    await this.orderDb.orders.update(event.orderId, {
      inventoryStatus: 'reserved',
      inventoryReservedAt: event.reservedAt,
    });

    // Record that we've processed this event
    await this.eventLog.record(event.eventId, { processedAt: new Date() });
  }
}

// Alternative: Use optimistic locking with version numbers
async function applyInventoryReservation(orderId: string, event: InventoryReservedEvent) {
  const result = await orderDb.orders.updateOne({
    _id: orderId,
    inventoryVersion: { $lt: event.version }, // Only apply if our version is older
  }, {
    $set: {
      inventoryStatus: 'reserved',
      inventoryVersion: event.version,
    }
  });

  if (result.modifiedCount === 0) {
    // Either order doesn't exist or we have a newer version
    // Both are expected cases in eventual consistency
  }
}
```

Principle 2: Compensating Actions

When a later event reveals that earlier state was incorrect, compensating actions correct the system state:
```typescript
// Scenario: Order placed, but later we learn inventory was insufficient

class OrderSaga {
  async handleInventoryInsufficient(event: InventoryInsufficientEvent) {
    const order = await this.orderDb.orders.findById(event.orderId);

    if (order.status === 'confirmed') {
      // We already confirmed the order - need to compensate

      // 1. Update order status
      await this.orderDb.orders.update(order.id, {
        status: 'cancelled',
        cancellationReason: 'inventory_unavailable',
      });

      // 2. Refund payment if charged
      if (order.paymentStatus === 'charged') {
        await this.paymentService.refund(order.paymentId);
      }

      // 3. Notify customer
      await this.notificationService.send(order.userId, {
        type: 'order_cancelled',
        orderId: order.id,
        reason: 'Unfortunately, some items are no longer available.',
      });

      // 4. Publish compensating event
      await this.events.publish('order.cancelled', {
        orderId: order.id,
        reason: 'inventory_insufficient',
        compensatedAt: new Date(),
      });
    }
  }
}
```

Principle 3: Embrace Partial Failures

In distributed systems, partial failures are the norm. Design operations to complete successfully even when some services are unavailable:
```typescript
async function enrichOrderForDisplay(order: Order): Promise<EnrichedOrder> {
  // Attempt to enrich with data from other services
  const [userResult, productsResult] = await Promise.allSettled([
    userService.getUser(order.userId),
    productService.getProducts(order.items.map(i => i.productId)),
  ]);

  return {
    ...order,
    customer: userResult.status === 'fulfilled'
      ? userResult.value
      : { name: 'Customer', email: null }, // Graceful fallback
    items: order.items.map(item => ({
      ...item,
      product: productsResult.status === 'fulfilled'
        ? productsResult.value.find(p => p.id === item.productId)
        : { name: `Product ${item.productId}`, imageUrl: '/placeholder.png' },
    })),
    _enrichmentStatus: {
      user: userResult.status,
      products: productsResult.status,
    },
  };
}
```

For each operation, explicitly design what happens when other services are slow, return stale data, or are completely unavailable. The unhappy path shouldn't be an afterthought—it's a first-class design concern in eventually consistent systems.
Eventual consistency originates in the backend, but its effects surface in the user experience. Thoughtful UX design can make it invisible or even beneficial.

Communicate State Appropriately:

Rather than showing potentially stale data as definitive truth, communicate that operations are in progress:
```tsx
// Pattern 1: Optimistic Updates with Confirmation
function OrderConfirmation({ order }) {
  return (
    <div className="order-confirmation">
      <h1>Order Placed Successfully!</h1>
      <p>Order #{order.id}</p>

      {/* Status indicates work in progress */}
      <div className="status-pending">
        <Spinner /> Processing your order...
      </div>

      {/* Set expectation for async confirmation */}
      <p className="muted">
        You'll receive a confirmation email shortly.
      </p>
    </div>
  );
}

// Pattern 2: Show "Last Updated" timestamps
function UserProfile({ user }) {
  return (
    <div className="profile">
      <h2>{user.name}</h2>
      <p>{user.email}</p>

      {/* Communicate data freshness */}
      <span className="updated-at">
        Last updated: {formatRelativeTime(user.updatedAt)}
      </span>
    </div>
  );
}

// Pattern 3: Explicit pending states
function PaymentStatus({ payment }) {
  const statusMessages = {
    pending: 'Payment is being processed...',
    processing: 'Verifying with your bank...',
    confirmed: 'Payment confirmed!',
    failed: 'Payment failed. Please try again.',
  };

  return (
    <div className={`payment-status status-${payment.status}`}>
      {payment.status === 'pending' && <Spinner />}
      {statusMessages[payment.status]}
    </div>
  );
}
```

Optimistic UI Updates:

Show the expected result immediately, then reconcile with the actual result when it arrives:
```tsx
function useLikeButton(postId: string) {
  const [isLiked, setIsLiked] = useState(false);
  const [isPending, setIsPending] = useState(false);

  const toggleLike = async () => {
    // Optimistically update UI immediately
    setIsLiked(!isLiked);
    setIsPending(true);

    try {
      // Make API call in background
      await api.toggleLike(postId);
      // Success - optimistic update was correct
    } catch (error) {
      // Failed - revert optimistic update
      setIsLiked(isLiked);
      toast.error('Could not update. Please try again.');
    } finally {
      setIsPending(false);
    }
  };

  return { isLiked, isPending, toggleLike };
}

// User sees instant feedback; system catches up in background
```

Gmail exemplifies eventual consistency done right. When you send an email, the UI shows it as sent immediately. If sending fails, Gmail notifies you and keeps the draft. The user perceives instant response; the complexity is hidden. Follow this model in your applications.
When multiple updates happen concurrently, conflicts can occur. Eventually consistent systems need strategies to resolve these conflicts deterministically.

Last Write Wins (LWW):

The simplest strategy: the update with the latest timestamp wins.
```typescript
async function applyUserUpdate(update: UserUpdateEvent) {
  const currentUser = await userDb.users.findById(update.userId);

  // Only apply if this update is newer
  if (!currentUser || update.updatedAt > currentUser.updatedAt) {
    await userDb.users.update(update.userId, {
      ...update.data,
      updatedAt: update.updatedAt,
    });
    return { applied: true };
  }

  return { applied: false, reason: 'newer_version_exists' };
}
```

Problems with LWW:

- Clock skew between servers can cause incorrect ordering
- Doesn't handle true concurrent updates (which write is actually 'last'?)
- May lose updates silently

Version Vectors / Vector Clocks:

A more sophisticated approach tracks causality using version vectors:
```typescript
interface VersionVector {
  [nodeId: string]: number;
}

function compareVersions(a: VersionVector, b: VersionVector): 'before' | 'after' | 'concurrent' {
  let aBeforeB = false;
  let bBeforeA = false;

  const allNodes = new Set([...Object.keys(a), ...Object.keys(b)]);
  for (const node of allNodes) {
    const aVersion = a[node] || 0;
    const bVersion = b[node] || 0;
    if (aVersion < bVersion) aBeforeB = true;
    if (bVersion < aVersion) bBeforeA = true;
  }

  if (aBeforeB && !bBeforeA) return 'before';
  if (bBeforeA && !aBeforeA) return 'after';
  return 'concurrent'; // True conflict! (identical vectors also land here)
}

async function applyUpdate(update: VersionedUpdate) {
  const current = await db.find(update.id);

  switch (compareVersions(update.version, current.version)) {
    case 'after':
      // Update is newer, apply it
      await db.update(update.id, update.data, update.version);
      break;
    case 'before':
      // We already have a newer version, ignore
      break;
    case 'concurrent':
      // True conflict - needs resolution
      await resolveConflict(current, update);
      break;
  }
}
```

Conflict Resolution Strategies for Concurrent Updates:
| Strategy | Description | Use Case |
|---|---|---|
| Last Write Wins | Most recent timestamp wins | Simple, when losing updates is acceptable |
| First Write Wins | First update wins, reject later concurrent writes | When initial value has primacy |
| Merge | Combine both updates intelligently | When updates are to different fields |
| Application-Specific | Business logic determines winner | Complex domain rules |
| User Resolution | Present conflict to user for manual resolution | Critical data, rare conflicts |
A merge strategy, for example, can detect field-level conflicts by comparing against the version the update was based on:

```typescript
async function mergeUserUpdates(current: User, update: UserUpdateEvent): Promise<User> {
  const merged: Partial<User> = {};
  const conflicts: FieldConflict[] = [];

  for (const field of ['name', 'email', 'phone', 'address'] as const) {
    if (update.changedFields.includes(field)) {
      if (update.baseVersion[field] === current[field]) {
        // Field unchanged since update was made - safe to apply
        merged[field] = update.data[field];
      } else {
        // Field was also changed elsewhere - conflict!
        conflicts.push({
          field,
          currentValue: current[field],
          updateValue: update.data[field],
          baseValue: update.baseVersion[field],
        });
      }
    }
  }

  if (conflicts.length > 0) {
    // Log conflicts for monitoring
    await conflictLog.record(current.id, conflicts);
    // Apply non-conflicting changes, keep current for conflicts
    // (or apply business-specific resolution logic)
  }

  return { ...current, ...merged };
}
```

How to resolve conflicts is fundamentally a business decision, not a technical one. Last Write Wins might silently lose important updates. Merge might combine data in unexpected ways. Involve product stakeholders in defining conflict resolution rules for critical data.
In an eventually consistent system, you need visibility into whether consistency is actually being achieved and how long it takes.

Consistency Metrics to Track:
```typescript
class ConsistencyMonitor {
  // Track propagation latency
  @Subscribe('*')
  async measurePropagationLatency(event: DomainEvent) {
    const publishTime = event.metadata.publishedAt;
    const processTime = Date.now();
    const latencyMs = processTime - publishTime;

    this.metrics.histogram('event_propagation_latency_ms', latencyMs, {
      eventType: event.type,
      sourceService: event.metadata.source,
    });

    // Alert if latency exceeds SLA
    if (latencyMs > 5000) {
      this.alerting.warn('High event propagation latency', {
        eventType: event.type,
        latencyMs,
        eventId: event.id,
      });
    }
  }

  // Periodic consistency check
  @Cron('*/5 * * * *')
  async checkConsistency() {
    const sampleSize = 1000;
    const sampleIds = await this.getSampleIds(sampleSize);

    let matches = 0;
    let mismatches = 0;
    const mismatchDetails: MismatchDetail[] = [];

    for (const id of sampleIds) {
      const [source, replica] = await Promise.all([
        this.sourceService.get(id),
        this.localReplica.get(id),
      ]);

      if (this.isConsistent(source, replica)) {
        matches++;
      } else {
        mismatches++;
        mismatchDetails.push({ id, source, replica });
      }
    }

    this.metrics.gauge('consistency_check_match_rate', matches / sampleSize);
    this.metrics.gauge('consistency_check_mismatch_count', mismatches);

    if (mismatches > sampleSize * 0.01) { // >1% mismatch rate
      this.alerting.error('High inconsistency rate detected', {
        matchRate: matches / sampleSize,
        mismatchCount: mismatches,
        sampleSize,
        samples: mismatchDetails.slice(0, 10), // Include examples
      });
    }
  }
}
```

Reconciliation Jobs:

Periodic reconciliation catches inconsistencies that event-based sync missed:
```typescript
class ReconciliationJob {
  @Cron('0 * * * *') // Every hour
  async reconcile() {
    // Cursor must advance locally each batch, or the loop refetches the same page
    let cursor = await this.getReconciliationCursor();
    const batchSize = 10000;
    let processed = 0;
    let fixed = 0;

    while (true) {
      const sourceRecords = await this.sourceService.list({
        after: cursor,
        limit: batchSize,
      });

      if (sourceRecords.length === 0) break;

      for (const source of sourceRecords) {
        const replica = await this.replicaDb.findById(source.id);

        if (!replica || !this.isConsistent(source, replica)) {
          // Fix the inconsistency
          await this.replicaDb.upsert(source.id, this.transform(source));
          fixed++;

          this.logger.info('Fixed inconsistency', {
            id: source.id,
            hadReplica: !!replica,
          });
        }

        processed++;
      }

      cursor = sourceRecords[sourceRecords.length - 1].id;
      await this.updateReconciliationCursor(cursor);
    }

    this.metrics.counter('reconciliation_processed', processed);
    this.metrics.counter('reconciliation_fixed', fixed);
    this.logger.info('Reconciliation complete', { processed, fixed });
  }
}
```

Event-based sync is your primary consistency mechanism. Periodic reconciliation is your safety net. Even well-designed event systems occasionally lose or corrupt events. Reconciliation jobs will catch and fix these issues before they become user-visible problems.
When an operation must update multiple services atomically (or at least with all-or-nothing semantics), the Saga Pattern provides eventual consistency with compensation for failures.

What is a Saga?

A saga is a sequence of local transactions across services. Each step has a compensating action that can undo its effect if a later step fails.

Saga Execution:
```text
Order Placement Saga:

  Step 1: Create Order (Order Service)
          Compensation: Cancel Order
  Step 2: Reserve Inventory (Inventory Service)
          Compensation: Release Inventory
  Step 3: Process Payment (Payment Service)
          Compensation: Refund Payment
  Step 4: Schedule Shipment (Shipping Service)
          Compensation: Cancel Shipment

Happy Path:
───────────────────────────────────────────────────────────────
Create Order ──► Reserve Inventory ──► Process Payment ──► Schedule Shipment
     ✓                  ✓                    ✓                    ✓
                                                                DONE!

Failure with Compensation:
───────────────────────────────────────────────────────────────
Create Order ──► Reserve Inventory ──► Process Payment ──► Schedule Shipment
     ✓                  ✓                    ✓                    ✗ FAIL!
                                             │                       │
                        ◄────────────────────┴───────────────────────┘
                                      Compensate!

Refund Payment ─────────────────────────┐
     ✓                                  │
Release Inventory ◄─────────────────────┘
     ✓
Cancel Order
     ✓
                                  COMPENSATED
```

Implementation: Orchestration-Based Saga

An orchestrator coordinates the saga steps:
```typescript
class OrderSagaOrchestrator {
  async execute(orderRequest: CreateOrderRequest): Promise<SagaResult> {
    const saga = new Saga('create-order', orderRequest.correlationId);

    try {
      // Step 1: Create Order
      const order = await this.orderService.create(orderRequest);
      saga.recordStep('create_order', order.id, {
        compensate: () => this.orderService.cancel(order.id),
      });

      // Step 2: Reserve Inventory
      const reservation = await this.inventoryService.reserve(order.items);
      saga.recordStep('reserve_inventory', reservation.id, {
        compensate: () => this.inventoryService.release(reservation.id),
      });

      // Step 3: Process Payment
      const payment = await this.paymentService.charge(order.userId, order.total);
      saga.recordStep('process_payment', payment.id, {
        compensate: () => this.paymentService.refund(payment.id),
      });

      // Step 4: Schedule Shipment
      const shipment = await this.shippingService.schedule(order);
      saga.recordStep('schedule_shipment', shipment.id, {
        compensate: () => this.shippingService.cancel(shipment.id),
      });

      // All steps succeeded
      return saga.complete();
    } catch (error) {
      // A step failed - compensate all completed steps
      this.logger.error('Saga failed, compensating', { error, saga: saga.id });
      await saga.compensate(); // Runs compensations in reverse order
      return saga.fail(error);
    }
  }
}

class Saga {
  private steps: SagaStep[] = [];

  recordStep(name: string, resourceId: string, handlers: { compensate: () => Promise<void> }) {
    this.steps.push({ name, resourceId, ...handlers, status: 'completed' });
  }

  async compensate() {
    // Compensate in reverse order (iterate over a copy so this.steps keeps its order)
    for (const step of [...this.steps].reverse()) {
      try {
        await step.compensate();
        step.status = 'compensated';
      } catch (error) {
        step.status = 'compensation_failed';
        // Log but continue - we want to try all compensations
        this.logger.error(`Compensation failed for step ${step.name}`, { error });
      }
    }
  }
}
```

Compensating actions can fail too.
Your saga implementation must handle compensation failures gracefully—log them, alert operators, and potentially queue for manual resolution. Never assume compensations will always succeed.
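One way to keep compensation failures visible is to route them to a dead-letter list for manual resolution while still attempting the remaining compensations. This is a standalone sketch, not tied to any particular saga library; all names are illustrative.

```typescript
// Sketch: run compensations in reverse order; failures go to a dead-letter
// queue for operator attention instead of being silently dropped.
interface FailedCompensation {
  step: string;
  error: string;
}

async function compensateAll(
  steps: Array<{ name: string; compensate: () => Promise<void> }>,
  deadLetter: FailedCompensation[],
): Promise<void> {
  for (const step of [...steps].reverse()) { // copy: don't mutate the caller's array
    try {
      await step.compensate();
    } catch (err) {
      // Record for manual resolution; keep compensating the remaining steps.
      deadLetter.push({ step: step.name, error: String(err) });
    }
  }
}
```

In production the dead-letter list would feed an alert and an operator runbook; the essential property is that one failed compensation never prevents the others from running.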
While eventual consistency is appropriate for most scenarios, some situations genuinely require stronger guarantees.

Scenarios Requiring Stronger Consistency:
| Scenario | Why Strong Consistency | Approach |
|---|---|---|
| Financial transfers | Double-spend prevention | Synchronous coordination or serialized queue |
| Inventory reservation (high-value) | Avoid overselling limited items | Lock inventory during checkout, synchronous |
| Unique constraint enforcement | Cannot have duplicates | Centralized uniqueness check |
| Security-critical operations | Prevent race-based bypasses | Synchronous authorization checks |
| Regulatory requirements | Audit compliance mandates | Design for specific regulatory needs |
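For the uniqueness row in particular, the usual approach is to let a single service own the namespace and perform an atomic check-and-claim. A toy sketch follows; in production this would be backed by a database unique index or a serialized queue, and all names are illustrative.

```typescript
// Sketch: centralized uniqueness enforcement. One service owns the namespace
// (e.g. usernames) and checks-and-claims atomically; other services call it
// synchronously instead of relying on eventually consistent replicas.
class UsernameRegistry {
  private claimed = new Set<string>();

  // Atomic check-and-claim: backed by a unique index or serialized queue in
  // a real service, so two concurrent claims cannot both succeed.
  claim(username: string): boolean {
    const key = username.toLowerCase(); // normalize to avoid near-duplicates
    if (this.claimed.has(key)) {
      return false; // already taken
    }
    this.claimed.add(key);
    return true;
  }
}
```

The point of centralizing the check is that there is exactly one place where the race can happen, and that place can serialize it; replicas elsewhere can stay eventually consistent.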
Strategies for Achieving Stronger Consistency:

1. Single Service Ownership

Keep strongly consistent data within a single service:
```typescript
// All wallet operations in a single Wallet Service with ACID transactions
class WalletService {
  async transfer(fromUserId: string, toUserId: string, amount: Money) {
    return this.db.transaction(async (tx) => {
      // Lock both wallets
      const fromWallet = await tx.wallets.findById(fromUserId, { lock: 'FOR UPDATE' });
      const toWallet = await tx.wallets.findById(toUserId, { lock: 'FOR UPDATE' });

      if (fromWallet.balance < amount) {
        throw new InsufficientFundsError();
      }

      // Atomic balance updates
      await tx.wallets.update(fromUserId, { balance: fromWallet.balance - amount });
      await tx.wallets.update(toUserId, { balance: toWallet.balance + amount });

      // Record transaction
      await tx.transactions.create({ from: fromUserId, to: toUserId, amount });
    });
    // Strong consistency within this service; eventual to others
  }
}
```

2. Synchronous Cross-Service Calls for Critical Paths

For specific critical operations, accept the latency cost of synchronous coordination:
```typescript
async function purchaseLimitedItem(userId: string, itemId: string) {
  // Synchronous check and reserve - cannot use eventual consistency
  // because limited items might sell out between async steps

  // Step 1: Synchronous inventory lock
  const lock = await inventoryService.lockItem(itemId, {
    duration: 30000, // 30 second lock
    holder: userId,
  });

  if (!lock.acquired) {
    throw new ItemNotAvailableError();
  }

  try {
    // Step 2: Process payment synchronously
    const payment = await paymentService.charge(userId, lock.price);

    // Step 3: Complete purchase
    await inventoryService.completePurchase(lock.lockId, payment.id);

    return { success: true, orderId: lock.lockId };
  } catch (error) {
    // Release lock on any failure
    await inventoryService.releaseLock(lock.lockId);
    throw error;
  }
}
```

Most operations can tolerate eventual consistency. Identify the specific paths that cannot (usually involving money or irreversible actions) and design those specifically for stronger consistency. Don't over-apply strong consistency—it has real costs.
Eventual consistency is not a compromise—it's a deliberate architectural choice that enables scalability, availability, and service independence. Understanding and designing for eventual consistency is essential for successful microservices systems.
Module Complete:

This concludes the Database Decomposition module. You now have a comprehensive understanding of how to move from shared databases to Database per Service architecture, including the challenges of shared databases, the Database per Service pattern, data migration strategies, handling cross-service queries, and embracing eventual consistency.

The techniques in this module are foundational for any organization moving from monolithic to microservices architecture. Database decomposition is often the most challenging aspect of this transition, but with the patterns and strategies covered here, you can approach it systematically and safely.