In traditional software systems, the database holds the current state of entities—this is what we consider "real" data. Everything else—logs, audit trails, change history—is derived, secondary, and often incomplete. But event sourcing inverts this relationship entirely.
In event sourcing, events are the source of truth. The immutable sequence of events is the authoritative record of what happened. Current state? That's just a convenient view—a cache, if you will—derived from replaying events.
This philosophical shift has profound practical implications. When events are primary, we gain capabilities that are impossible or expensive with state-based systems: perfect audit trails, time travel queries, multiple read models, and the ability to reinterpret history with new business logic.
This page explores why treating events as the source of truth fundamentally changes what's possible in your system design.
By the end of this page, you will understand: (1) why events constitute the authoritative truth in an event-sourced system, (2) the philosophical and practical implications of this inversion, (3) how immutability provides strong guarantees, and (4) how multiple interpretations of the same event stream enable powerful capabilities.
In any data system, we must decide what constitutes the authoritative source of truth. Everything else is derived from this source and can be reconstructed from it. This choice has cascading implications for system design, reliability, and capabilities.
In traditional systems:

```
Source of Truth: Current State (database rows)
        ↓ derives
History: Audit logs, CDC, temporal tables (often incomplete)
```

In event-sourced systems:

```
Source of Truth: Event Stream (immutable, append-only)
        ↓ derives
Current State: Computed by replaying events
Read Models:   Projections optimized for specific queries
Reports:       Aggregations computed from events
```
This inversion is not merely cosmetic—it fundamentally alters what information is preserved, what operations are possible, and what guarantees the system can provide.
```typescript
// ============================================================
// TRADITIONAL SYSTEM: State is primary, history is derived
// ============================================================

class TraditionalOrderSystem {
  private db: Database;

  async updateOrderStatus(orderId: string, newStatus: string): Promise<void> {
    // Get current state
    const order = await this.db.orders.findById(orderId);
    const oldStatus = order.status;

    // Overwrite with new state - old status is now GONE from primary store
    await this.db.orders.update(orderId, { status: newStatus });

    // Try to preserve history in secondary store - but this can fail!
    try {
      await this.db.auditLog.insert({
        orderId,
        field: 'status',
        oldValue: oldStatus,
        newValue: newStatus,
        changedAt: new Date(),
      });
    } catch (error) {
      // Audit log failed but order is already updated
      // History is now inconsistent with reality
      console.error('Audit log failed, history is incomplete');
    }
  }

  // Key problems:
  // 1. Primary operation (update) succeeds even if audit fails
  // 2. Audit log is second-class citizen - often incomplete
  // 3. Reconstructing full history requires correlating multiple sources
  // 4. No guarantee audit log is accurate without comparing to backups
}

// ============================================================
// EVENT-SOURCED SYSTEM: Events are primary, state is derived
// ============================================================

class EventSourcedOrderSystem {
  private eventStore: EventStore;

  async updateOrderStatus(orderId: string, newStatus: string): Promise<void> {
    // Load current state by replaying events
    const events = await this.eventStore.loadStream(`order-${orderId}`);
    const currentState = this.rehydrate(events);

    // Create event describing what happened
    const event: OrderStatusChanged = {
      type: 'OrderStatusChanged',
      aggregateId: orderId,
      timestamp: new Date(),
      version: events.length + 1,
      payload: {
        previousStatus: currentState.status,
        newStatus: newStatus,
        changedBy: getCurrentUser(),
      },
    };

    // Append event - this IS the primary write operation
    // If this succeeds, history is preserved
    // If this fails, nothing happened (no partial state)
    await this.eventStore.append(`order-${orderId}`, event);

    // State derivation can happen synchronously or asynchronously
    // But the event is the truth - state is just a view
  }

  private rehydrate(events: OrderEvent[]): OrderState {
    return events.reduce(
      (state, event) => this.apply(state, event),
      this.createInitialState()
    );
  }

  // Key advantages:
  // 1. Single write operation - events ARE the history AND the state source
  // 2. No inconsistency possible between history and current state
  // 3. History is guaranteed complete by construction
  // 4. Can reconstruct any past state by replaying to that point
}
```

In event sourcing, recording history and updating state collapse into a single operation: appending an event. This eliminates the class of bugs where state and history diverge—a common source of audit failures, compliance issues, and debugging nightmares in traditional systems.
The power of events as source of truth stems from a critical property: immutability. Once an event is recorded, it cannot be changed or deleted. This isn't a limitation—it's a feature that provides extraordinary guarantees.
Why immutability matters:
Reproducibility: Given the same events, you always get the same state. No hidden mutations, no side effects, no surprises.
Audit Integrity: Immutable history cannot be falsified after the fact. If an event says "$1000 was withdrawn at 3:00 PM," that's what happened.
Safe Concurrency: Immutable data can be shared freely between threads, processes, and services without locks or coordination.
Caching Friendliness: Immutable data has infinite cache lifetime. Event #47 will always be event #47—no invalidation needed.
Distributed Systems Compatibility: Immutable facts can be replicated across nodes without conflict resolution.
```typescript
// ============================================================
// DESIGNING IMMUTABLE EVENTS
// ============================================================

// Events should be true value objects - no setters, no mutations
interface ImmutableEvent {
  readonly eventId: string;
  readonly aggregateId: string;
  readonly timestamp: Date;
  readonly version: number;
  readonly type: string;
  readonly payload: Readonly<Record<string, unknown>>;
}

// Use frozen objects to enforce immutability at runtime
function createEvent<T extends ImmutableEvent>(event: T): Readonly<T> {
  return Object.freeze({
    ...event,
    payload: Object.freeze(event.payload),
  });
}

// Example: creating an immutable event
const paymentReceived = createEvent({
  eventId: crypto.randomUUID(),
  aggregateId: 'order-123',
  timestamp: new Date(),
  version: 5,
  type: 'PaymentReceived',
  payload: {
    amount: 99.99,
    currency: 'USD',
    paymentMethod: 'credit_card',
    transactionId: 'txn-abc-123',
  },
});

// Attempting mutation will fail (in strict mode) or be silently ignored
try {
  (paymentReceived as any).payload.amount = 0; // This should fail!
  console.log('Mutation failed silently - amount still:', paymentReceived.payload.amount);
} catch (error) {
  console.log('Mutation properly rejected');
}

// ============================================================
// THE IMMUTABILITY CONTRACT IN EVENT STORES
// ============================================================

interface EventStoreContract {
  // APPEND-ONLY: New events are added, never modified
  append(streamId: string, events: ImmutableEvent[]): Promise<void>;

  // READ: Get events, optionally from a version
  loadStream(streamId: string, fromVersion?: number): Promise<ImmutableEvent[]>;

  // SUBSCRIBE: Watch for new events (useful for projections)
  subscribe(streamId: string, handler: (event: ImmutableEvent) => void): Subscription;

  // ❌ NOTICE: No update() or delete() methods exist
  // This is intentional - events are facts that happened, not mutable records
}

// ============================================================
// HANDLING "CORRECTIONS" WITHOUT MUTATION
// ============================================================

// What if something was recorded incorrectly? You don't edit - you append!

interface PaymentCorrected extends ImmutableEvent {
  type: 'PaymentCorrected';
  payload: {
    originalEventId: string; // Reference to the event being corrected
    originalAmount: number;
    correctedAmount: number;
    reason: string;
    correctedBy: string;
  };
}

// Example correction flow:
// 1. Event #5: PaymentReceived { amount: 99.99 } - IMMUTABLE, stays in stream
// 2. Realize amount should have been $89.99
// 3. Event #6: PaymentCorrected { originalEventId: 5, correctedAmount: 89.99 }
//
// The state projection will apply both events:
// - See PaymentReceived with $99.99
// - See PaymentCorrected referencing it
// - Calculate correct current state as $89.99
//
// But the HISTORY preserves what actually happened:
// "Payment was recorded as $99.99, then corrected to $89.99"

function applyPaymentEvents(state: OrderState, event: PaymentEvent): OrderState {
  switch (event.type) {
    case 'PaymentReceived':
      return {
        ...state,
        amountPaid: state.amountPaid + event.payload.amount,
        paymentHistory: [...state.paymentHistory, {
          type: 'received',
          amount: event.payload.amount,
          eventId: event.eventId,
        }],
      };

    case 'PaymentCorrected': {
      // Find the original payment and apply the correction
      const originalPayment = state.paymentHistory.find(
        p => p.eventId === event.payload.originalEventId
      );
      if (!originalPayment) return state;

      const adjustment = event.payload.correctedAmount - event.payload.originalAmount;
      return {
        ...state,
        amountPaid: state.amountPaid + adjustment,
        corrections: [...state.corrections, {
          originalEventId: event.payload.originalEventId,
          adjustment,
          reason: event.payload.reason,
        }],
      };
    }

    default:
      return state;
  }
}
```

When mistakes are made, event sourcing doesn't erase history—it adds new events that record the correction. This is exactly how accounting works: you never erase a ledger entry; you add a correcting entry. The complete history, including the mistake and its correction, remains visible for audit and forensics.
Perhaps the most powerful implication of events-as-truth is that state becomes an interpretation. The same event stream can be interpreted differently depending on what you're trying to compute. This enables multiple views, each optimized for different purposes, all derived from the same authoritative source.
The separation of storage and interpretation:
Traditional databases conflate storage and interpretation. The way you store data (the schema) dictates how you can query it. Changing interpretations requires schema migrations.
Event sourcing separates these concerns: the event stream is the storage layer (immutable facts, written once), while projections are the interpretation layer (code that can be rewritten and re-run at any time).
This separation means you can create new interpretations without touching the stored data.
```typescript
// ============================================================
// SINGLE EVENT STREAM, MULTIPLE PROJECTIONS
// ============================================================

// The event stream for an e-commerce order
type OrderEvent =
  | OrderPlaced
  | PaymentReceived
  | ItemPicked
  | ItemPacked
  | OrderShipped
  | OrderDelivered
  | ReturnRequested
  | ReturnReceived
  | RefundIssued;

// ============================================================
// PROJECTION 1: Order Status for Customer-Facing API
// ============================================================

interface CustomerOrderView {
  orderId: string;
  status: 'pending' | 'paid' | 'processing' | 'shipped' | 'delivered' | 'returned';
  estimatedDelivery: Date | null;
  trackingNumber: string | null;
}

function projectToCustomerView(events: OrderEvent[]): CustomerOrderView {
  // Customer cares about high-level status, not internal details
  return events.reduce((view, event) => {
    switch (event.type) {
      case 'OrderPlaced':
        return { ...view, orderId: event.aggregateId, status: 'pending' };
      case 'PaymentReceived':
        return { ...view, status: 'paid' };
      case 'ItemPicked':
      case 'ItemPacked':
        return { ...view, status: 'processing' };
      case 'OrderShipped':
        return {
          ...view,
          status: 'shipped',
          trackingNumber: event.payload.trackingNumber,
          estimatedDelivery: event.payload.estimatedDelivery,
        };
      case 'OrderDelivered':
        return { ...view, status: 'delivered' };
      case 'RefundIssued':
        return { ...view, status: 'returned' };
      default:
        return view;
    }
  }, { orderId: '', status: 'pending' as const, estimatedDelivery: null, trackingNumber: null });
}

// ============================================================
// PROJECTION 2: Warehouse Operations View
// ============================================================

interface WarehouseOrderView {
  orderId: string;
  items: Array<{
    productId: string;
    quantity: number;
    location: string;
    pickStatus: 'pending' | 'picked' | 'packed';
  }>;
  assignedPicker: string | null;
  pickStartTime: Date | null;
  packCompletedTime: Date | null;
}

function projectToWarehouseView(events: OrderEvent[]): WarehouseOrderView {
  // Warehouse cares about pick/pack details, not payment status
  return events.reduce((view, event) => {
    switch (event.type) {
      case 'OrderPlaced':
        return {
          ...view,
          orderId: event.aggregateId,
          items: event.payload.items.map(item => ({
            productId: item.productId,
            quantity: item.quantity,
            location: item.warehouseLocation,
            pickStatus: 'pending' as const,
          })),
        };
      case 'ItemPicked':
        return {
          ...view,
          assignedPicker: event.payload.pickerId,
          pickStartTime: view.pickStartTime || event.timestamp,
          items: view.items.map(item =>
            item.productId === event.payload.productId
              ? { ...item, pickStatus: 'picked' as const }
              : item
          ),
        };
      case 'ItemPacked':
        return {
          ...view,
          items: view.items.map(item =>
            item.productId === event.payload.productId
              ? { ...item, pickStatus: 'packed' as const }
              : item
          ),
          packCompletedTime: view.items.every(i =>
            i.productId === event.payload.productId || i.pickStatus === 'packed'
          ) ? event.timestamp : null,
        };
      default:
        return view;
    }
  }, { orderId: '', items: [], assignedPicker: null, pickStartTime: null, packCompletedTime: null });
}

// ============================================================
// PROJECTION 3: Financial Analytics View
// ============================================================

interface FinancialOrderView {
  orderId: string;
  grossRevenue: number;
  paymentReceived: number;
  refundsIssued: number;
  netRevenue: number;
  paymentTimestamp: Date | null;
  refundTimestamp: Date | null;
  profitMargin: number;
}

function projectToFinancialView(events: OrderEvent[]): FinancialOrderView {
  // Finance cares about money flow, not fulfillment status
  return events.reduce((view, event) => {
    switch (event.type) {
      case 'OrderPlaced': {
        const gross = event.payload.items.reduce(
          (sum, item) => sum + (item.price * item.quantity), 0
        );
        const cost = event.payload.items.reduce(
          (sum, item) => sum + (item.cost * item.quantity), 0
        );
        return {
          ...view,
          orderId: event.aggregateId,
          grossRevenue: gross,
          profitMargin: (gross - cost) / gross,
        };
      }
      case 'PaymentReceived':
        return {
          ...view,
          paymentReceived: view.paymentReceived + event.payload.amount,
          paymentTimestamp: event.timestamp,
        };
      case 'RefundIssued':
        return {
          ...view,
          refundsIssued: view.refundsIssued + event.payload.amount,
          netRevenue: view.paymentReceived - view.refundsIssued - event.payload.amount,
          refundTimestamp: event.timestamp,
        };
      default:
        return view;
    }
  }, { orderId: '', grossRevenue: 0, paymentReceived: 0, refundsIssued: 0, netRevenue: 0, paymentTimestamp: null, refundTimestamp: null, profitMargin: 0 });
}

// ============================================================
// KEY INSIGHT: Same events, different interpretations
// ============================================================

async function demonstrateMultipleViews(orderId: string) {
  // Load events once from the source of truth
  const events = await eventStore.loadStream(`order-${orderId}`);

  // Create three completely different views from the same events
  const customerView = projectToCustomerView(events);
  const warehouseView = projectToWarehouseView(events);
  const financialView = projectToFinancialView(events);

  console.log('Customer sees:', customerView.status);              // 'shipped'
  console.log('Warehouse sees:', warehouseView.packCompletedTime); // '2024-01-15T14:30:00Z'
  console.log('Finance sees:', financialView.netRevenue);          // 89.99

  // Each stakeholder gets exactly the view they need
  // All derived from the same authoritative event stream
}
```

The power of multiple projections:
Optimization: Each projection is optimized for its query patterns. Customer queries are fast because they don't include warehouse details.
Isolation: Changes to the financial projection don't affect the warehouse view.
Evolution: New business questions can be answered by adding new projections, without schema migrations.
Retroactive Insights: A new projection can be built by replaying historical events—you can answer questions that were never anticipated.
Imagine your CEO asks: 'What was our average fulfillment time last quarter?' In a traditional system, if you didn't track this metric, you can't answer. In event sourcing, you create a new projection that computes fulfillment time from existing events—retroactively answering questions using historical data.
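To make the CEO scenario concrete, here is a minimal sketch of such a retroactive projection, assuming the OrderEvent union defined earlier. The loadStreamsByPrefix helper is hypothetical, not a real event store API; substitute however your store enumerates streams.

```typescript
// Hypothetical sketch: answering a question nobody planned for, by
// replaying history. Assumes the OrderEvent union from the projections
// above; eventStore.loadStreamsByPrefix is an illustrative helper.

function fulfillmentTimeMs(events: OrderEvent[]): number | null {
  // Fulfillment time: OrderPlaced timestamp -> OrderDelivered timestamp
  const placed = events.find(e => e.type === 'OrderPlaced');
  const delivered = events.find(e => e.type === 'OrderDelivered');
  if (!placed || !delivered) return null; // order not yet fulfilled
  return delivered.timestamp.getTime() - placed.timestamp.getTime();
}

async function averageFulfillmentDays(
  quarterStart: Date,
  quarterEnd: Date,
): Promise<number | null> {
  let measured = 0;
  let totalMs = 0;

  // Replay every historical order stream: the data was there all along,
  // even though nobody tracked this metric when the orders happened
  for (const stream of await eventStore.loadStreamsByPrefix('order-')) {
    const placed = stream.events.find(e => e.type === 'OrderPlaced');
    if (!placed || placed.timestamp < quarterStart || placed.timestamp > quarterEnd) continue;

    const ms = fulfillmentTimeMs(stream.events);
    if (ms !== null) {
      measured += 1;
      totalMs += ms;
    }
  }

  if (measured === 0) return null; // no fulfilled orders in the period
  return totalMs / measured / (1000 * 60 * 60 * 24); // ms -> days
}
```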
When events are the source of truth, temporal queries become first-class citizens. You can answer not just "what is the state now?" but "what was the state at any point in the past?" This time-travel capability is extraordinarily valuable in domains such as insurance, finance, and compliance.
Use cases for temporal queries include claim investigation (what coverage was in force on the date of the incident?), financial audits (every transaction between an opening and a closing balance), and debugging (what state did the system believe at the moment a bug occurred?). The code below works through the first two.
```typescript
// ============================================================
// TEMPORAL QUERIES: STATE AT ANY POINT IN TIME
// ============================================================

interface EventStream<E extends BaseEvent> {
  events: E[];
  aggregateId: string;
}

class TemporalQueryEngine<S, E extends BaseEvent> {
  constructor(
    private createInitialState: () => S,
    private applyEvent: (state: S, event: E) => S,
  ) {}

  // Get state as of a specific timestamp
  stateAt(stream: EventStream<E>, asOf: Date): S {
    const eventsUpToTimestamp = stream.events.filter(
      e => e.timestamp <= asOf
    );
    return this.rehydrate(eventsUpToTimestamp);
  }

  // Get state at a specific version (event count)
  stateAtVersion(stream: EventStream<E>, version: number): S {
    const eventsUpToVersion = stream.events.slice(0, version);
    return this.rehydrate(eventsUpToVersion);
  }

  // Get the full history of state changes
  stateHistory(stream: EventStream<E>): Array<{ version: number; timestamp: Date; state: S }> {
    let currentState = this.createInitialState();
    return stream.events.map((event, index) => {
      currentState = this.applyEvent(currentState, event);
      return {
        version: index + 1,
        timestamp: event.timestamp,
        state: { ...currentState }, // Clone to avoid mutation
      };
    });
  }

  // Find when a condition first became true
  findWhen(stream: EventStream<E>, predicate: (state: S) => boolean): Date | null {
    let currentState = this.createInitialState();
    for (const event of stream.events) {
      currentState = this.applyEvent(currentState, event);
      if (predicate(currentState)) {
        return event.timestamp;
      }
    }
    return null;
  }

  // Get state changes between two points in time
  stateChangesBetween(
    stream: EventStream<E>,
    from: Date,
    to: Date
  ): { before: S; after: S; eventsDuringPeriod: E[] } {
    const eventsBefore = stream.events.filter(e => e.timestamp < from);
    const eventsWithin = stream.events.filter(
      e => e.timestamp >= from && e.timestamp <= to
    );
    return {
      before: this.rehydrate(eventsBefore),
      after: this.rehydrate([...eventsBefore, ...eventsWithin]),
      eventsDuringPeriod: eventsWithin,
    };
  }

  private rehydrate(events: E[]): S {
    return events.reduce(
      (state, event) => this.applyEvent(state, event),
      this.createInitialState()
    );
  }
}

// ============================================================
// EXAMPLE: Insurance Claim Investigation
// ============================================================

interface PolicyState {
  policyId: string;
  holder: string;
  coverageType: 'basic' | 'standard' | 'premium';
  coverageAmount: number;
  isActive: boolean;
  riders: string[];
}

const policyQueryEngine = new TemporalQueryEngine<PolicyState, PolicyEvent>(
  () => ({ policyId: '', holder: '', coverageType: 'basic', coverageAmount: 0, isActive: false, riders: [] }),
  applyPolicyEvent
);

async function investigateClaim(policyId: string, incidentDate: Date) {
  const stream = await eventStore.loadStream(`policy-${policyId}`);

  // Key question: What coverage did the policyholder have AT THE TIME of the incident?
  const stateAtIncident = policyQueryEngine.stateAt(stream, incidentDate);

  console.log('Policy state at time of incident:');
  console.log('  Active:', stateAtIncident.isActive);       // Was the policy active?
  console.log('  Coverage:', stateAtIncident.coverageType); // What tier?
  console.log('  Amount:', stateAtIncident.coverageAmount); // What limit?
  console.log('  Riders:', stateAtIncident.riders);         // What add-ons?

  // When did coverage last change before the incident? Replay forward and
  // remember the timestamp of the most recent change to coverageAmount.
  const history = policyQueryEngine.stateHistory({
    ...stream,
    events: stream.events.filter(e => e.timestamp <= incidentDate),
  });
  let lastChange: Date | null = null;
  for (let i = 1; i < history.length; i++) {
    if (history[i].state.coverageAmount !== history[i - 1].state.coverageAmount) {
      lastChange = history[i].timestamp;
    }
  }
  console.log('Coverage last changed:', lastChange);

  // This is IMPOSSIBLE with traditional state-based systems unless you
  // explicitly maintained temporal history tables - which you probably didn't
}

// ============================================================
// EXAMPLE: Financial Audit
// ============================================================

async function generateAuditReport(accountId: string, quarterStart: Date, quarterEnd: Date) {
  const stream = await eventStore.loadStream(`account-${accountId}`);

  // accountQueryEngine: a TemporalQueryEngine for account aggregates,
  // constructed the same way as policyQueryEngine above
  const changes = accountQueryEngine.stateChangesBetween(stream, quarterStart, quarterEnd);

  console.log('Account audit for Q4 2024:');
  console.log('  Opening balance:', changes.before.balance);
  console.log('  Closing balance:', changes.after.balance);
  console.log('  Transactions:', changes.eventsDuringPeriod.length);

  // Complete transaction-by-transaction breakdown
  for (const event of changes.eventsDuringPeriod) {
    console.log(`  ${event.timestamp}: ${event.type} - ${JSON.stringify(event.payload)}`);
  }

  // Auditors can verify EVERY transaction, not just the final state
  // This is the power of events as source of truth
}
```

Traditional databases require specialized temporal features (such as SQL Server's system-versioned temporal tables or PostgreSQL's temporal_tables extension) to answer historical queries—and these features must be enabled before the data exists. Event sourcing provides temporal capability by construction: if you have the events, you have the complete history.
Because events are the source of truth and state is derived, we gain an extraordinary capability: reprocessing. We can replay events with new logic, fix bugs retroactively, create new projections for historical data, and even correct past mistakes in our event handlers.
```typescript
// ============================================================
// SCENARIO: Bug discovered in sales commission calculation
// ============================================================

// ORIGINAL (BUGGY) projection logic
function calculateCommission_v1(sale: SaleEvent): number {
  // Bug: forgot to apply commission cap for large sales
  return sale.payload.amount * 0.15; // 15% commission
}

// Sales rep's commission for a $100,000 sale = $15,000
// But company policy caps commission at $5,000!

// ============================================================
// THE FIX: Update projection logic and reprocess
// ============================================================

function calculateCommission_v2(sale: SaleEvent): number {
  const rawCommission = sale.payload.amount * 0.15;
  const COMMISSION_CAP = 5000;
  return Math.min(rawCommission, COMMISSION_CAP);
}

// Reprocessing infrastructure (sketched as an ambient declaration)
declare class ProjectionRebuilder<S, E extends BaseEvent> {
  rebuildFromScratch(
    streamPrefix: string,
    apply: (state: S, event: E) => S,
    persist: (id: string, state: S) => Promise<void>
  ): Promise<ReprocessingResult>;
}

async function fixCommissionCalculation() {
  const rebuilder = new ProjectionRebuilder<CommissionState, SaleEvent>();

  console.log('Starting commission projection rebuild...');

  console.log('Step 1: Create new projection table (commission_v2)');
  await createTable('commission_v2');

  console.log('Step 2: Replay all sales events with corrected logic');
  const result = await rebuilder.rebuildFromScratch(
    'sale-', // Stream prefix for all sale aggregates
    (state, event) => ({
      ...state,
      totalCommission: state.totalCommission + calculateCommission_v2(event),
      salesCount: state.salesCount + 1,
      // The corrected calculation is now applied to ALL historical events
    }),
    (salesRepId, state) => db.insert('commission_v2', { salesRepId, ...state })
  );

  console.log(`Step 3: Rebuilt ${result.eventsProcessed} events, ${result.aggregatesUpdated} reps`);

  // Compare old vs new to identify discrepancies
  console.log('Step 4: Identifying commission adjustments needed...');
  const discrepancies = await findDiscrepancies('commission_v1', 'commission_v2');
  for (const d of discrepancies) {
    console.log(`  Rep ${d.repId}: was $${d.old}, should be $${d.new}, owes company $${d.old - d.new}`);
  }

  console.log('Step 5: Switch production to commission_v2');
  await updateConfig('commission.table', 'commission_v2');

  console.log('Step 6: Archive and eventually drop commission_v1');

  // Done! All historical commissions are now correct.
}

// ============================================================
// KEY INSIGHT: In traditional systems, this bug would require:
// 1. Manually querying sales records
// 2. Recalculating each commission
// 3. Hoping you have all the data you need
// 4. Creating adjustment entries
// 5. Praying you didn't miss anything
//
// With event sourcing:
// 1. Fix the code
// 2. Reprocess
// 3. Correct state emerges automatically
// ============================================================
```

While powerful, reprocessing isn't free. Long event histories can take significant time to replay. Side effects (sending emails, calling external APIs) must be suppressed during reprocessing. Event schema changes over time must be handled. We'll cover these challenges in the trade-offs section.
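One common mitigation for the side-effect problem is to gate effects behind a replay flag. Here is an illustrative sketch under that assumption; the db, emailService, and ProjectionContext names are stand-ins, not APIs defined elsewhere on this page.

```typescript
// Illustrative sketch: projections stay pure and repeatable, while
// side effects run only for live events, never during a rebuild.

interface ProjectionContext {
  isReplaying: boolean; // true during a rebuild, false for live events
}

async function onOrderShipped(event: OrderShipped, ctx: ProjectionContext): Promise<void> {
  // Pure state derivation: always safe to repeat during a replay
  await db.update('customer_order_view', event.aggregateId, {
    status: 'shipped',
    trackingNumber: event.payload.trackingNumber,
  });

  // Side effect: must happen at most once, so it is skipped on replay
  if (!ctx.isReplaying) {
    await emailService.sendShippingNotification(
      event.aggregateId,
      event.payload.trackingNumber,
    );
  }
}
```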
When events are your source of truth internally, integrating with event-driven architectures externally becomes natural. The same events that drive your internal state can be published to message brokers, enabling loose coupling between services, eventual consistency patterns, and reactive architectures.
| Capability | Traditional Approach | Event-Sourced Approach |
|---|---|---|
| Notifying other services | Write to database, then publish event (two operations that can fail independently) | Append event to store, store handles publication (single atomic operation) |
| Building read replicas | Complex CDC setup, brittle coupling to schema | Subscribe to event stream, build projection from events |
| Sync to analytics | ETL jobs running on schedule, often stale | Stream events to data warehouse in real-time |
| Audit trail for compliance | Separate audit logging system, often incomplete | Event stream IS the audit trail |
| Disaster recovery | Restore from backup, lose recent changes | Replay events to rebuild any projection |
The natural alignment:
Event sourcing and event-driven architecture share a common worldview: business happens through events. An order is placed. A payment is received. An item is shipped. Both patterns model the world as a stream of immutable occurrences.
This alignment means that event-sourced systems integrate naturally with message brokers (Kafka, RabbitMQ), stream processors (Flink, Spark Streaming), and event-driven services. The events you're already storing become the events you publish for integration—no translation layer needed.
Some event stores (like EventStoreDB) include built-in subscription capabilities, blurring the line between database and message broker. This unification simplifies architectures: one system for both persistence and distribution of events.
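A minimal sketch of this integration pattern, under a few stated assumptions: the subscribe() method from the EventStoreContract shown earlier, an '$all' stream spanning every aggregate (an EventStoreDB convention), and a generic MessageBroker interface standing in for your Kafka or RabbitMQ client.

```typescript
// Sketch: events already persisted as the source of truth are forwarded
// to a broker for other services. No dual write is involved - each event
// was durably appended before the subscription ever sees it.

interface MessageBroker {
  publish(topic: string, key: string, message: string): Promise<void>;
}

function forwardEventsToBroker(
  store: EventStoreContract,
  broker: MessageBroker,
): Subscription {
  return store.subscribe('$all', (event) => {
    void broker.publish(
      `events.${event.type}`, // one topic per event type
      event.aggregateId,      // key by aggregate to preserve per-stream ordering
      JSON.stringify(event),
    );
  });
}
```

Because a crash between append and publish means redelivery rather than loss, the natural contract is at-least-once: downstream consumers should be idempotent.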
We've explored the profound implications of treating events as the source of truth. Let's consolidate the key insights:

Events are primary: the immutable, append-only stream is the authoritative record; current state is just a derived view that can always be recomputed.

Immutability pays: it guarantees reproducibility, audit integrity, safe concurrency, cache friendliness, and conflict-free replication.

State is an interpretation: the same stream supports many projections (customer, warehouse, financial), each optimized for its consumers, and new ones can be added retroactively.

Time travel comes by construction: replay to any timestamp or version to recover past state.

Mistakes are recoverable: corrections are appended as new events, and buggy derivation logic can be fixed by reprocessing history with corrected code.

Integration is natural: the events you already store double as the events you publish to event-driven architectures.
What's next:
Now that we understand why events are the source of truth, we'll examine the practical mechanics of reconstructing state from events. This includes efficient replay strategies, the role of snapshots in optimization, and handling event schema evolution over time.
You now understand the philosophical and practical implications of treating events as the source of truth. This inversion from traditional persistence enables temporal queries, multiple projections, reprocessing capabilities, and natural event-driven integration.