In event sourcing, current state doesn't exist as a stored record—it's computed. Every time you need to work with an aggregate, you load its complete event stream and replay those events to reconstruct where things stand right now. This process, called rehydration or state reconstruction, is both the superpower and the challenge of event sourcing.
The superpower: you can reconstruct state at any point in time, debug issues by replaying events step by step, and change how you interpret events after the fact. The challenge: what happens when an aggregate has 10,000 events? 100,000? How do you keep rehydration fast enough for real-time operations?
By the end of this page, you will understand:

- The mechanics of state rehydration
- Performance characteristics and bottlenecks
- Strategies for keeping rehydration fast
- How to project historical state at any point in time
- When lazy loading versus eager loading makes sense
Rehydration is fundamentally a fold operation: starting from an initial empty state, we apply each event in sequence to produce the final state. This is pure functional programming—given the same events in the same order, we always get the same result.
```typescript
// The fundamental rehydration algorithm.
// `EventStore` is an assumed event-store client interface used throughout
// this page (readStream, subscribe, append, ...).

interface AggregateState {
  readonly version: number;
  // Domain-specific state fields
}

type EventApplier<TState, TEvent> = (state: TState, event: TEvent) => TState;

// Pure rehydration function
function rehydrate<TState extends AggregateState, TEvent>(
  initialState: TState,
  events: TEvent[],
  apply: EventApplier<TState, TEvent>
): TState {
  return events.reduce(
    (currentState, event) => ({
      ...apply(currentState, event),
      // Increment relative to the current state rather than the array index,
      // so rehydrating from a snapshot keeps the version correct
      version: currentState.version + 1,
    }),
    initialState
  );
}

// Example: Bank Account aggregate
interface BankAccountState extends AggregateState {
  accountId: string;
  balance: number;
  status: 'active' | 'frozen' | 'closed';
  overdraftLimit: number;
  createdAt?: Date;
  lastTransactionAt?: Date;
}

type BankAccountEvent =
  | { type: 'AccountOpened'; accountId: string; timestamp: Date }
  | { type: 'MoneyDeposited'; amount: number; timestamp: Date }
  | { type: 'MoneyWithdrawn'; amount: number; timestamp: Date }
  | { type: 'OverdraftLimitSet'; limit: number }
  | { type: 'AccountFrozen'; reason: string }
  | { type: 'AccountUnfrozen' }
  | { type: 'AccountClosed'; timestamp: Date };

const initialBankAccountState: BankAccountState = {
  accountId: '',
  balance: 0,
  status: 'active',
  overdraftLimit: 0,
  version: 0,
};

function applyBankAccountEvent(
  state: BankAccountState,
  event: BankAccountEvent
): BankAccountState {
  switch (event.type) {
    case 'AccountOpened':
      return {
        ...state,
        accountId: event.accountId,
        status: 'active',
        createdAt: event.timestamp,
      };
    case 'MoneyDeposited':
      return {
        ...state,
        balance: state.balance + event.amount,
        lastTransactionAt: event.timestamp,
      };
    case 'MoneyWithdrawn':
      return {
        ...state,
        balance: state.balance - event.amount,
        lastTransactionAt: event.timestamp,
      };
    case 'OverdraftLimitSet':
      return { ...state, overdraftLimit: event.limit };
    case 'AccountFrozen':
      return { ...state, status: 'frozen' };
    case 'AccountUnfrozen':
      return { ...state, status: 'active' };
    case 'AccountClosed':
      return { ...state, status: 'closed' };
    default:
      // Unknown event types are ignored for forward compatibility
      return state;
  }
}

// Usage
async function loadBankAccount(
  eventStore: EventStore,
  accountId: string
): Promise<BankAccountState> {
  const events = await eventStore.readStream<BankAccountEvent>(
    `BankAccount-${accountId}`
  );
  return rehydrate(initialBankAccountState, events, applyBankAccountEvent);
}

// Example event stream and resulting state
const exampleEvents: BankAccountEvent[] = [
  { type: 'AccountOpened', accountId: 'acct_123', timestamp: new Date('2024-01-01') },
  { type: 'MoneyDeposited', amount: 1000, timestamp: new Date('2024-01-02') },
  { type: 'MoneyWithdrawn', amount: 200, timestamp: new Date('2024-01-03') },
  { type: 'MoneyDeposited', amount: 500, timestamp: new Date('2024-01-15') },
  { type: 'OverdraftLimitSet', limit: 100 },
];

const currentState = rehydrate(
  initialBankAccountState,
  exampleEvents,
  applyBankAccountEvent
);

// Result:
// {
//   accountId: 'acct_123',
//   balance: 1300, // 0 + 1000 - 200 + 500
//   status: 'active',
//   overdraftLimit: 100,
//   createdAt: Date('2024-01-01'),
//   lastTransactionAt: Date('2024-01-15'),
//   version: 5
// }
```

Notice how each event handler returns a new state object rather than mutating the existing one. This immutability ensures rehydration is deterministic and side-effect-free. It also enables powerful debugging: you can inspect state at any step of the replay.
Rehydration performance depends on several factors. Understanding these characteristics is essential for designing systems that remain fast as event counts grow.
| Factor | Impact | Mitigation |
|---|---|---|
| Event count (N) | O(N) time complexity per load | Snapshots, aggregate splitting |
| Event size | I/O bandwidth, deserialization cost | Compact event schemas, efficient serialization |
| Apply function complexity | CPU time per event | Keep apply functions simple and pure |
| State size | Memory pressure, GC overhead | Lazy computation, streaming |
| Network latency | Round-trip time to event store | Co-location, caching, batch reads |
| Deserialization | JSON parsing cost can dominate | Binary formats (Protobuf, Avro) |
```typescript
// Measuring rehydration performance

interface RehydrationMetrics {
  streamId: string;
  eventCount: number;
  totalBytes: number;
  readTimeMs: number;
  deserializationTimeMs: number;
  applyTimeMs: number;
  totalTimeMs: number;
}

// Returns the deserialized events alongside the metrics so callers
// don't have to read the stream a second time.
async function measureRehydration(
  eventStore: EventStore,
  streamId: string
): Promise<{ metrics: RehydrationMetrics; events: unknown[] }> {
  const metrics: Partial<RehydrationMetrics> = { streamId };
  const startTotal = performance.now();

  // Measure read time
  const startRead = performance.now();
  const rawEvents = await eventStore.readStreamRaw(streamId);
  metrics.readTimeMs = performance.now() - startRead;
  metrics.eventCount = rawEvents.length;
  metrics.totalBytes = rawEvents.reduce((sum, e) => sum + e.byteLength, 0);

  // Measure deserialization time (`deserialize` is your store's codec)
  const startDeser = performance.now();
  const events = rawEvents.map(raw => deserialize(raw));
  metrics.deserializationTimeMs = performance.now() - startDeser;

  // Measure apply time (`initialState` and `apply` are the
  // aggregate-specific pieces from the earlier example)
  const startApply = performance.now();
  let state = initialState;
  for (const event of events) {
    state = apply(state, event);
  }
  metrics.applyTimeMs = performance.now() - startApply;

  metrics.totalTimeMs = performance.now() - startTotal;
  return { metrics: metrics as RehydrationMetrics, events };
}

// Benchmark results for different event counts
// (Example data - actual results vary by hardware/implementation)
//
// Event Count | Read (ms) | Deser (ms) | Apply (ms) | Total (ms)
// ------------|-----------|------------|------------|-----------
// 100         | 2         | 1          | 0.1        | 3
// 1,000       | 8         | 5          | 1          | 14
// 10,000      | 45        | 35         | 8          | 88
// 100,000     | 320       | 280        | 65         | 665
// 1,000,000   | 2,800     | 2,400      | 580        | 5,780
//
// Key observations:
// 1. Read time dominates for small event counts
// 2. Deserialization becomes significant at scale
// 3. Apply time is usually the smallest component
// 4. Total time grows linearly with event count

// Performance logging middleware. Subclasses supply the actual rehydrate
// step; this class only adds measurement and alerting.
abstract class InstrumentedAggregate<TState, TEvent> {
  private metricsLog: RehydrationMetrics[] = [];

  protected abstract rehydrate(events: TEvent[]): TState;

  async load(eventStore: EventStore, id: string): Promise<TState> {
    const { metrics, events } = await measureRehydration(eventStore, id);
    this.metricsLog.push(metrics);

    // Log slow rehydrations
    if (metrics.totalTimeMs > 100) {
      console.warn(
        `Slow rehydration: ${id} took ${metrics.totalTimeMs}ms ` +
        `for ${metrics.eventCount} events`
      );
    }

    // Alert on very large streams
    if (metrics.eventCount > 10000) {
      console.warn(
        `Large aggregate: ${id} has ${metrics.eventCount} events. ` +
        `Consider implementing snapshots.`
      );
    }

    return this.rehydrate(events as TEvent[]);
  }
}
```

An aggregate that accumulates events over time (like a user's activity log) can grow unbounded. Set alerts for aggregates exceeding 1,000 events. At 10,000+ events, rehydration latency becomes noticeable. Plan mitigation strategies early.
One of event sourcing's most powerful capabilities is temporal queries: reconstructing state at any point in history. This enables debugging, auditing, and answering questions that would be impossible with CRUD systems.
```typescript
// Time-travel queries in event sourcing

// Reconstruct state at a specific point in time
async function stateAsOf<TState>(
  eventStore: EventStore,
  streamId: string,
  asOfTimestamp: Date,
  initialState: TState,
  apply: (state: TState, event: StoredEvent) => TState
): Promise<TState> {
  const events = await eventStore.readStream(streamId);

  // Filter to only events that occurred before the timestamp
  const relevantEvents = events.filter(
    event => event.occurredAt <= asOfTimestamp
  );

  return relevantEvents.reduce(apply, initialState);
}

// Reconstruct state at a specific version
async function stateAtVersion<TState>(
  eventStore: EventStore,
  streamId: string,
  version: number,
  initialState: TState,
  apply: (state: TState, event: StoredEvent) => TState
): Promise<TState> {
  const events = await eventStore.readStream(streamId, {
    maxCount: version, // Only read up to the specified version
  });

  return events.reduce(apply, initialState);
}

// Compare state between two points in time
interface StateDiff<TState> {
  before: TState;
  after: TState;
  eventsBetween: StoredEvent[];
  changes: Array<{
    field: keyof TState;
    oldValue: unknown;
    newValue: unknown;
  }>;
}

async function compareStateOverTime<TState extends object>(
  eventStore: EventStore,
  streamId: string,
  fromTime: Date,
  toTime: Date,
  initialState: TState,
  apply: (state: TState, event: StoredEvent) => TState
): Promise<StateDiff<TState>> {
  const allEvents = await eventStore.readStream(streamId);

  const beforeEvents = allEvents.filter(e => e.occurredAt <= fromTime);
  const betweenEvents = allEvents.filter(
    e => e.occurredAt > fromTime && e.occurredAt <= toTime
  );

  const before = beforeEvents.reduce(apply, initialState);
  const after = [...beforeEvents, ...betweenEvents].reduce(apply, initialState);

  // Calculate diff
  const changes: StateDiff<TState>['changes'] = [];
  for (const key of Object.keys(before) as Array<keyof TState>) {
    if (before[key] !== after[key]) {
      changes.push({
        field: key,
        oldValue: before[key],
        newValue: after[key],
      });
    }
  }

  return { before, after, eventsBetween: betweenEvents, changes };
}

// Practical use cases

// 1. Debugging: What was the order state when the bug occurred?
async function debugOrderIssue(orderId: string, bugReportTime: Date) {
  const stateAtBugTime = await stateAsOf(
    eventStore,
    `Order-${orderId}`,
    bugReportTime,
    initialOrderState,
    applyOrderEvent
  );

  console.log('Order state when bug was reported:', stateAtBugTime);

  // Show events around the bug time
  const events = await eventStore.readStream(`Order-${orderId}`);
  const nearbyEvents = events.filter(e =>
    Math.abs(e.occurredAt.getTime() - bugReportTime.getTime()) < 3600000 // ±1 hour
  );
  console.log('Events around bug time:', nearbyEvents);
}

// 2. Auditing: What changed on this account last month?
async function monthlyAuditReport(accountId: string, month: Date) {
  const startOfMonth = new Date(month.getFullYear(), month.getMonth(), 1);
  const endOfMonth = new Date(month.getFullYear(), month.getMonth() + 1, 0);

  const diff = await compareStateOverTime(
    eventStore,
    `Account-${accountId}`,
    startOfMonth,
    endOfMonth,
    initialAccountState,
    applyAccountEvent
  );

  return {
    stateAtMonthStart: diff.before,
    stateAtMonthEnd: diff.after,
    changesApplied: diff.eventsBetween.length,
    fieldChanges: diff.changes,
    allEvents: diff.eventsBetween,
  };
}

// 3. Compliance: Prove the exact sequence of events
async function generateComplianceReport(entityId: string) {
  const events = await eventStore.readStream(entityId);

  return events.map(event => ({
    timestamp: event.occurredAt,
    action: event.eventType,
    performedBy: event.metadata.userId,
    correlationId: event.metadata.correlationId,
    details: event.payload,
    // Cryptographic proof of ordering
    previousEventHash: event.metadata.previousHash,
    eventHash: computeHash(event),
  }));
}
```

When a customer says 'Something changed and I don't know why,' you can replay their account's events step by step, watching state evolve. This debugging capability is impossible with CRUD systems where history is lost on update.
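That step-by-step replay is easy to build on top of the same apply function. Here is a minimal sketch; `replayWithHistory` is a helper we introduce for illustration, not a library API:

```typescript
// Fold the events but keep every intermediate state, so you can step
// through "what did the aggregate look like after event N?"
function replayWithHistory<TState, TEvent>(
  initialState: TState,
  events: TEvent[],
  apply: (state: TState, event: TEvent) => TState
): Array<{ event: TEvent; stateAfter: TState }> {
  const steps: Array<{ event: TEvent; stateAfter: TState }> = [];
  let state = initialState;
  for (const event of events) {
    state = apply(state, event);
    steps.push({ event, stateAfter: state });
  }
  return steps;
}

// Usage with the bank account example from earlier:
// replayWithHistory(initialBankAccountState, exampleEvents, applyBankAccountEvent)
//   .forEach(({ event, stateAfter }) =>
//     console.log(event.type, '-> balance:', stateAfter.balance));
```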
Not every operation needs complete state. Sometimes you only need to check a single field before deciding to proceed. Lazy loading strategies can dramatically reduce rehydration costs for these cases.
```typescript
// Lazy loading strategies for event sourcing

// Strategy 1: Partial state rehydration
// Only compute specific fields needed for current operation
interface LazyAggregate<TFullState, TPartialState> {
  loadPartial(fields: Array<keyof TFullState>): Promise<TPartialState>;
  loadFull(): Promise<TFullState>;
}

class LazyOrderAggregate implements LazyAggregate<OrderState, Partial<OrderState>> {
  constructor(
    private eventStore: EventStore,
    private orderId: string
  ) {}

  async loadPartial(fields: Array<keyof OrderState>): Promise<Partial<OrderState>> {
    const events = await this.eventStore.readStream(`Order-${this.orderId}`);

    // Only compute requested fields
    const partialState: Partial<OrderState> = {};
    for (const event of events) {
      if (fields.includes('status') && event.type === 'OrderStatusChanged') {
        partialState.status = event.payload.newStatus;
      }
      if (fields.includes('totalAmount') && event.type === 'OrderPlaced') {
        partialState.totalAmount = event.payload.totalAmount;
      }
      // ... only compute fields we care about
    }
    return partialState;
  }

  async loadFull(): Promise<OrderState> {
    const events = await this.eventStore.readStream(`Order-${this.orderId}`);
    return events.reduce(applyOrderEvent, initialOrderState);
  }
}

// Strategy 2: Early termination
// Stop replaying once we have the answer
async function isOrderCancelled(
  eventStore: EventStore,
  orderId: string
): Promise<boolean> {
  // Read events in reverse order (most recent first)
  const events = await eventStore.readStreamBackward(`Order-${orderId}`);

  for (const event of events) {
    if (event.type === 'OrderCancelled') {
      return true; // Found cancellation, done early
    }
    if (event.type === 'OrderPlaced') {
      return false; // Reached creation without cancellation
    }
  }
  return false; // Stream empty or no relevant events
}

// Strategy 3: Cached projections for common queries
class OrderQueryService {
  private statusCache: Map<string, string> = new Map();

  constructor(
    private eventStore: EventStore,
    private projection: Projection
  ) {
    // Subscribe to events to keep cache warm
    this.eventStore.subscribe('$ce-Order', (event) => {
      if (event.type === 'OrderPlaced') {
        this.statusCache.set(event.aggregateId, 'pending');
      }
      if (event.type === 'OrderStatusChanged') {
        this.statusCache.set(event.aggregateId, event.payload.newStatus);
      }
    });
  }

  // O(1) lookup instead of O(n) rehydration
  async getStatus(orderId: string): Promise<string | undefined> {
    // Check cache first
    if (this.statusCache.has(orderId)) {
      return this.statusCache.get(orderId);
    }
    // Fall back to read model
    return this.projection.getOrderStatus(orderId);
  }
}

// Strategy 4: Batched loading for multiple aggregates
async function loadMultipleOrders(
  eventStore: EventStore,
  orderIds: string[]
): Promise<Map<string, OrderState>> {
  // Bad: Sequential loading
  // for (const id of orderIds) {
  //   const state = await loadOrder(id); // N round trips
  // }

  // Good: Parallel loading
  const results = await Promise.all(
    orderIds.map(async id => {
      const events = await eventStore.readStream(`Order-${id}`);
      const state = events.reduce(applyOrderEvent, initialOrderState);
      return [id, state] as const;
    })
  );
  return new Map(results);
}

// Even better: Single query with batch read
async function loadMultipleOrdersBatched(
  eventStore: EventStore,
  orderIds: string[]
): Promise<Map<string, OrderState>> {
  // Read multiple streams in one round trip
  const streamIds = orderIds.map(id => `Order-${id}`);
  const eventsByStream = await eventStore.readMultipleStreams(streamIds);

  const result = new Map<string, OrderState>();
  for (const [streamId, events] of eventsByStream) {
    const orderId = streamId.replace('Order-', '');
    const state = events.reduce(applyOrderEvent, initialOrderState);
    result.set(orderId, state);
  }
  return result;
}
```

For most read operations, you don't need real-time rehydration. Maintain read models (projections) that are updated asynchronously from the event stream. These provide O(1) queries for common cases, and full rehydration is only needed for command handling.
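Such a read model can be as simple as a key-value table maintained by a subscriber. A minimal sketch, assuming the same subscription API used in Strategy 3 above and an in-memory map standing in for a real database table:

```typescript
// Asynchronously-updated read model (sketch). The `subscribe` call and
// StoredEvent shape follow the assumed event-store API used on this page.
interface OrderSummaryRow {
  orderId: string;
  status: string;
  totalAmount: number;
}

class OrderSummaryProjection {
  // In-memory map standing in for a real database table
  private rows = new Map<string, OrderSummaryRow>();

  constructor(eventStore: EventStore) {
    eventStore.subscribe('$ce-Order', (event) => this.handle(event));
  }

  private handle(event: StoredEvent): void {
    const row = this.rows.get(event.aggregateId) ?? {
      orderId: event.aggregateId,
      status: 'unknown',
      totalAmount: 0,
    };
    if (event.type === 'OrderPlaced') {
      row.status = 'pending';
      row.totalAmount = event.payload.totalAmount;
    }
    if (event.type === 'OrderStatusChanged') {
      row.status = event.payload.newStatus;
    }
    this.rows.set(event.aggregateId, row);
  }

  // O(1) query: no event replay involved
  get(orderId: string): OrderSummaryRow | undefined {
    return this.rows.get(orderId);
  }
}
```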
Some aggregates naturally accumulate many events over their lifetime: a user account that's existed for years, an inventory item with thousands of adjustments, a shopping cart that's been modified hundreds of times. Several strategies help manage these long streams.
Snapshots: Periodic State Checkpoints
The most common solution: periodically save the computed state as a snapshot, then only replay events since the last snapshot.
```
Events:    [1] [2] [3] [4] [5] [6] [7] [8] [9] [10]
                    ↓                           ↓
Snapshots:        [S@3]                      [S@10]
```
To load current state:
1. Load snapshot S@10 (instant)
2. Replay events 11+ (if any)
Pros:

- Rehydration cost drops from O(all events) to O(events since the last snapshot)
- Snapshots are disposable: they can be deleted and rebuilt from the event stream at any time
- Transparent to domain logic; the apply functions don't change

Cons:

- Extra storage and write overhead for persisting snapshots
- Snapshot schema must be versioned when the state shape changes
- Added operational complexity (snapshot policy, invalidation, monitoring)
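To make the loading procedure concrete, here is a minimal sketch of snapshot-assisted rehydration, reusing the generic `rehydrate()` from earlier. The `SnapshotStore` interface is hypothetical; the next page covers real snapshot storage and versioning:

```typescript
// Sketch: snapshot-assisted loading under the assumed EventStore API.
interface Snapshot<TState> {
  state: TState;
  version: number; // version of the last event folded into this state
}

// Hypothetical snapshot persistence interface, for illustration only
interface SnapshotStore {
  load<TState>(streamId: string): Promise<Snapshot<TState> | null>;
}

async function loadWithSnapshot<TState extends AggregateState, TEvent>(
  eventStore: EventStore,
  snapshotStore: SnapshotStore,
  streamId: string,
  initialState: TState,
  apply: EventApplier<TState, TEvent>
): Promise<TState> {
  const snapshot = await snapshotStore.load<TState>(streamId);
  const startState = snapshot?.state ?? initialState;

  // Read only the events appended after the snapshot was taken
  const newEvents = await eventStore.readStream<TEvent>(streamId, {
    fromVersion: (snapshot?.version ?? 0) + 1,
  });

  return rehydrate(startState, newEvents, apply);
}
```

Because `rehydrate` increments the version relative to its starting state, the result is identical whether you fold all events from scratch or start from a snapshot.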
Different use cases call for different rehydration approaches. Here are common patterns you'll encounter in production systems.
```typescript
// Common rehydration patterns in production

// Pattern 1: Repository pattern with caching
class CachingAggregateRepository<TAggregate extends Aggregate> {
  private cache: LRUCache<string, TAggregate>;

  constructor(
    private eventStore: EventStore,
    private aggregateFactory: AggregateFactory<TAggregate>,
    cacheSize: number = 1000
  ) {
    this.cache = new LRUCache({ max: cacheSize });
  }

  async load(id: string): Promise<TAggregate> {
    // Check cache
    const cached = this.cache.get(id);
    if (cached) {
      // Verify cache is current
      const latestVersion = await this.eventStore.getStreamVersion(id);
      if (cached.version === latestVersion) {
        return cached;
      }
      // Cache stale, load incremental events
      return this.loadIncremental(cached, id, cached.version);
    }

    // Full load
    const aggregate = await this.loadFull(id);
    this.cache.set(id, aggregate);
    return aggregate;
  }

  private async loadFull(id: string): Promise<TAggregate> {
    const events = await this.eventStore.readStream(id);
    const aggregate = this.aggregateFactory.create(id);
    aggregate.loadFromHistory(events);
    return aggregate;
  }

  private async loadIncremental(
    cached: TAggregate,
    id: string,
    fromVersion: number
  ): Promise<TAggregate> {
    const newEvents = await this.eventStore.readStream(id, {
      fromVersion: fromVersion + 1,
    });

    // Apply only new events
    for (const event of newEvents) {
      cached.apply(event);
    }

    this.cache.set(id, cached);
    return cached;
  }

  async save(aggregate: TAggregate): Promise<void> {
    const events = aggregate.getUncommittedEvents();
    if (events.length === 0) return;

    await this.eventStore.append(
      aggregate.id,
      events,
      aggregate.version - events.length
    );

    // Update cache with committed state
    this.cache.set(aggregate.id, aggregate);
  }
}

// Pattern 2: Unit of Work with multiple aggregates
class UnitOfWork {
  private loaded = new Map<string, Aggregate>();
  private dirty = new Set<string>();

  constructor(private repository: AggregateRepository) {}

  async get<T extends Aggregate>(id: string): Promise<T> {
    if (this.loaded.has(id)) {
      return this.loaded.get(id) as T;
    }
    const aggregate = await this.repository.load<T>(id);
    this.loaded.set(id, aggregate);
    return aggregate;
  }

  markDirty(id: string): void {
    this.dirty.add(id);
  }

  async commit(): Promise<void> {
    const savePromises = [];
    for (const id of this.dirty) {
      const aggregate = this.loaded.get(id);
      if (aggregate && aggregate.hasUncommittedEvents()) {
        savePromises.push(this.repository.save(aggregate));
      }
    }
    await Promise.all(savePromises);
    this.dirty.clear();
  }

  rollback(): void {
    // Discard all loaded aggregates
    this.loaded.clear();
    this.dirty.clear();
  }
}

// Pattern 3: Async rehydration with background prefetch
class PrefetchingRepository<T extends Aggregate> {
  private prefetchQueue = new PriorityQueue<string>();
  private prefetchCache = new Map<string, Promise<T>>();

  constructor(private baseRepository: AggregateRepository<T>) {
    // Fire-and-forget background loop
    void this.startPrefetcher();
  }

  // Hint that we'll need these aggregates soon
  prefetch(ids: string[], priority: number = 0): void {
    for (const id of ids) {
      if (!this.prefetchCache.has(id)) {
        this.prefetchQueue.enqueue(id, priority);
      }
    }
  }

  async load(id: string): Promise<T> {
    // Check if prefetch is in progress
    const prefetching = this.prefetchCache.get(id);
    if (prefetching) {
      return prefetching;
    }
    // Load synchronously
    return this.baseRepository.load(id);
  }

  private async startPrefetcher(): Promise<void> {
    while (true) {
      const id = await this.prefetchQueue.dequeue();
      if (!this.prefetchCache.has(id)) {
        const promise = this.baseRepository.load(id);
        this.prefetchCache.set(id, promise);
        await promise; // Actually perform the load
      }
    }
  }
}

// Usage in command handler with prefetching
async function handleBatchOperation(
  repository: PrefetchingRepository<OrderAggregate>,
  orderIds: string[]
): Promise<void> {
  // Start prefetching all orders we'll need
  repository.prefetch(orderIds, 1);

  // Process orders (prefetch is happening in background)
  for (const orderId of orderIds) {
    const order = await repository.load(orderId); // Often instant due to prefetch
    await order.process();
    await repository.save(order);
  }
}
```

Rehydration logic must be thoroughly tested. Bugs here are insidious: they can cause silent data corruption that's only discovered much later. Adopt these testing strategies to ensure correctness.
```typescript
// Testing strategies for state reconstruction

// Strategy 1: Given-When-Then for individual events
describe('OrderAggregate rehydration', () => {
  it('should apply OrderPlaced correctly', () => {
    const events = [
      { type: 'OrderPlaced', payload: { items: [], totalAmount: 100 } },
    ];

    const state = rehydrate(initialOrderState, events, applyOrderEvent);

    expect(state.status).toBe('pending');
    expect(state.totalAmount).toBe(100);
    expect(state.version).toBe(1);
  });

  it('should accumulate state correctly across events', () => {
    const events = [
      { type: 'OrderPlaced', payload: { totalAmount: 100 } },
      { type: 'PaymentReceived', payload: { amount: 100 } },
      { type: 'OrderShipped', payload: { trackingNumber: 'TRK123' } },
    ];

    const state = rehydrate(initialOrderState, events, applyOrderEvent);

    expect(state.status).toBe('shipped');
    expect(state.version).toBe(3);
  });
});

// Strategy 2: Property-based testing for invariants
// (`fc` is fast-check; `orderEventArbitrary` generates random valid events)
describe('OrderAggregate invariants', () => {
  it('should always have non-negative item count', () => {
    fc.assert(
      fc.property(fc.array(orderEventArbitrary()), (events) => {
        const state = rehydrate(initialOrderState, events, applyOrderEvent);
        return state.items.length >= 0;
      })
    );
  });

  it('should be deterministic - same events produce same state', () => {
    fc.assert(
      fc.property(fc.array(orderEventArbitrary()), (events) => {
        const state1 = rehydrate(initialOrderState, events, applyOrderEvent);
        const state2 = rehydrate(initialOrderState, events, applyOrderEvent);
        return deepEqual(state1, state2);
      })
    );
  });

  it('should handle event order correctly', () => {
    fc.assert(
      fc.property(fc.array(orderEventArbitrary()), (events) => {
        // Interleaving item additions and removals
        const state = rehydrate(initialOrderState, events, applyOrderEvent);

        // Calculate expected total manually
        const expectedTotal = events
          .filter(e => e.type === 'ItemAdded' || e.type === 'ItemRemoved')
          .reduce((sum, e) => {
            if (e.type === 'ItemAdded') return sum + e.payload.amount;
            if (e.type === 'ItemRemoved') return sum - e.payload.amount;
            return sum;
          }, 0);

        return state.totalAmount === expectedTotal;
      })
    );
  });
});

// Strategy 3: Regression tests from production events
describe('OrderAggregate regression tests', () => {
  // Capture real event streams that caused bugs
  const productionCases = [
    {
      name: 'double payment bug from JIRA-1234',
      events: loadFixture('jira-1234-events.json'),
      expectedState: { status: 'paid', paymentCount: 1 },
    },
    {
      name: 'cancelled order still shipped bug',
      events: loadFixture('cancelled-shipped-bug.json'),
      expectedState: { status: 'cancelled', shippedAt: null },
    },
  ];

  for (const { name, events, expectedState } of productionCases) {
    it(`should handle: ${name}`, () => {
      const state = rehydrate(initialOrderState, events, applyOrderEvent);
      expect(state).toMatchObject(expectedState);
    });
  }
});

// Strategy 4: Snapshot correctness verification
describe('Snapshot testing', () => {
  it('should produce same state with and without snapshot', async () => {
    const events = generateManyEvents(500);

    // Method 1: Full rehydration
    const fullRehydration = rehydrate(initialState, events, apply);

    // Method 2: Snapshot + partial rehydration
    const snapshot = rehydrate(initialState, events.slice(0, 250), apply);
    const partialEvents = events.slice(250);
    const snapshotRehydration = rehydrate(snapshot, partialEvents, apply);

    // Must produce identical state
    expect(snapshotRehydration).toEqual(fullRehydration);
  });
});
```

When bugs are discovered in production, capture the actual event stream that triggered the bug. Add this as a regression test. Over time, you build a valuable test suite based on real-world scenarios that no handwritten test would have imagined.
We've covered the complete mechanics of state reconstruction in event sourcing. Let's consolidate the key insights:

- Rehydration is a pure fold: initial state plus ordered events deterministically produces current state.
- Cost grows linearly with event count; reads and deserialization usually dominate, so measure all three phases.
- Temporal queries reconstruct state at any timestamp or version, enabling debugging, auditing, and compliance reporting.
- Lazy strategies (partial rehydration, early termination, cached projections, batched reads) avoid full replays for simple reads.
- Snapshots checkpoint computed state so only events since the checkpoint need replaying.
- Test rehydration with example-based, property-based, regression, and snapshot-equivalence tests.
What's next:
While rehydration works for aggregates of any size, long event streams become increasingly expensive. The next page dives deep into snapshots for performance—how to create, store, version, and use snapshots to keep rehydration fast regardless of event count.
You now understand how to reconstruct state from events, the performance characteristics involved, and strategies for handling long event streams. The deterministic nature of rehydration enables time-travel, debugging, and reliable state recovery. Next, we'll optimize this process with snapshots.