When a user completes a purchase on an e-commerce platform, the ripple effect is enormous. The order needs to trigger inventory reservation, payment processing, shipping preparation, loyalty points calculation, recommendation engine updates, analytics event capture, email confirmation, push notification, fraud detection analysis, and potentially dozens more downstream actions.
This is fan-out in action—a single event spawning parallel processing across many independent systems. But not all fan-out is created equal. The pattern you choose affects system reliability, development velocity, operational complexity, and cost.
This page explores the major fan-out patterns, their trade-offs, and how to choose the right approach for different scenarios.
By the end of this page, you will understand simple fan-out, hierarchical fan-out, scatter-gather patterns, parallel processing strategies, and the trade-offs between synchronous and asynchronous fan-out approaches.
The most straightforward fan-out pattern: publish an event to a single topic, and multiple independent subscriptions each receive the event. This is the native behavior of pub-sub systems.
When to Use Simple Fan-Out:
Limitations:
```typescript
// Simple fan-out: One publish, many independent consumers

// Producer: Order Service (knows nothing about consumers)
async function publishOrderCompleted(order: Order): Promise<void> {
  const event: OrderCompletedEvent = {
    eventId: generateId(),
    eventType: 'ORDER_COMPLETED',
    timestamp: new Date().toISOString(),
    payload: {
      orderId: order.id,
      customerId: order.customerId,
      items: order.items,
      totalAmount: order.totalAmount,
      shippingAddress: order.shippingAddress,
    },
  };

  // Single publish - broker handles fan-out to all subscribers
  await pubsub.topic('orders.completed').publish(event);
}

// Consumer 1: Inventory Service
async function inventoryConsumer(message: Message): Promise<void> {
  const event = parse<OrderCompletedEvent>(message);
  for (const item of event.payload.items) {
    await inventoryService.reserveStock(item.sku, item.quantity);
  }
  await message.ack();
}

// Consumer 2: Email Service (completely independent)
async function emailConsumer(message: Message): Promise<void> {
  const event = parse<OrderCompletedEvent>(message);
  await emailService.sendOrderConfirmation({
    to: event.payload.customerId,
    orderId: event.payload.orderId,
    items: event.payload.items,
    total: event.payload.totalAmount,
  });
  await message.ack();
}

// Consumer 3: Analytics Pipeline (different processing characteristics)
async function analyticsConsumer(messages: Message[]): Promise<void> {
  // Batch processing - analytics can handle higher latency
  const events = messages.map(m => parse<OrderCompletedEvent>(m));
  await analyticsService.ingestOrderEvents(events);
  await Promise.all(messages.map(m => m.ack()));
}
```

When fan-out becomes too broad at a single level, or when different event types need different distribution paths, hierarchical fan-out introduces intermediate topics that re-broadcast events to downstream subscribers.
Why Hierarchical Fan-Out?
As systems grow, simple fan-out faces challenges:
Hierarchical fan-out introduces event routers (sometimes called event bridges) that:
```typescript
// Event Router: Translates origin events to domain events

// Origin event (from Order Service)
interface OrderCompletedEvent {
  orderId: string;
  customerId: string;
  items: OrderItem[];
  totalAmount: number;
  shippingAddress: Address;
}

// Domain event (for Fulfillment domain)
interface FulfillmentRequiredEvent {
  fulfillmentId: string;
  orderId: string;
  warehouse: string;          // Enriched: determined by address
  priority: 'standard' | 'express';
  items: FulfillmentItem[];   // Transformed: includes warehouse location
  shipBy: string;             // Derived: based on shipping method
}

// Router: Order Events → Fulfillment Domain
class FulfillmentEventRouter {
  async handleOrderCompleted(event: OrderCompletedEvent): Promise<void> {
    // 1. Filter: Only orders with physical goods need fulfillment
    if (!this.hasPhysicalItems(event.items)) {
      return; // Digital-only orders skip fulfillment
    }

    // 2. Enrich: Determine optimal warehouse
    const warehouse = await this.warehouseService.findOptimalWarehouse(
      event.items,
      event.shippingAddress
    );

    // 3. Transform: Convert to domain event
    const fulfillmentEvent: FulfillmentRequiredEvent = {
      fulfillmentId: generateId(),
      orderId: event.orderId,
      warehouse: warehouse.id,
      priority: this.determinePriority(event),
      items: await this.enrichItemsWithWarehouseLocations(event.items, warehouse),
      shipBy: this.calculateShipDeadline(event),
    };

    // 4. Republish to domain topic
    await this.broker.publish('fulfillment.required', fulfillmentEvent);
  }
}
```

Event routers introduce an extra hop and a potential failure point. Ensure routers are highly available, idempotent (can replay origin events safely), and monitored. If the router fails, downstream consumers see no events. Consider router replicas with consumer-group semantics for horizontal scaling.
When consumers only need a subset of events from a topic, filtered fan-out applies selection criteria at the broker level, delivering only matching messages to each subscription.
The Problem with Client-Side Filtering:
Without broker-side filtering, consumers receive all events and discard unwanted ones:
Topic: payment-events (10,000 events/sec)
├─ Subscription A (wants PAYMENT_COMPLETED events) → receives 10,000/sec, uses 2,000/sec, discards 8,000/sec
├─ Subscription B (wants PAYMENT_FAILED events) → receives 10,000/sec, uses 500/sec, discards 9,500/sec
└─ Subscription C (wants high-value events >$1000) → receives 10,000/sec, uses 100/sec, discards 9,900/sec
This wastes network bandwidth, consumer CPU, and acknowledgment overhead.
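The waste can be made concrete with a small sketch of client-side filtering, where the consumer receives every event and discards the rest only after delivery. The event shape and function name here are illustrative, not from any particular broker SDK:

```typescript
// Client-side filtering: every delivered event costs bandwidth, CPU,
// and an acknowledgment - even the ones the consumer throws away.
type PaymentEvent = { eventType: string; amount: number };

function filterClientSide(
  delivered: PaymentEvent[],
  wantedType: string
): { kept: PaymentEvent[]; discarded: number } {
  const kept = delivered.filter(e => e.eventType === wantedType);
  // Discarded events were still delivered, deserialized, and acked.
  return { kept, discarded: delivered.length - kept.length };
}
```

At Subscription B's ratio above, 95% of the work done by this function is pure discard overhead.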
With Broker-Side Filtering:
Topic: payment-events (10,000 events/sec)
├─ Subscription A (filter: eventType = 'PAYMENT_COMPLETED') → receives 2,000/sec
├─ Subscription B (filter: eventType = 'PAYMENT_FAILED') → receives 500/sec
└─ Subscription C (filter: amount > 1000) → receives 100/sec
The broker evaluates filter expressions before delivery, dramatically reducing unnecessary traffic.
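Conceptually, the broker applies a per-subscription predicate before delivery. A minimal sketch, assuming exact-match filters on string attributes (real brokers support much richer expression languages):

```typescript
// Broker-side view: a message is delivered to a subscription only if
// every attribute in the subscription's filter matches the message.
type Attributes = Record<string, string>;

function matchesFilter(messageAttrs: Attributes, filter: Attributes): boolean {
  return Object.entries(filter).every(([key, value]) => messageAttrs[key] === value);
}
```

With this predicate, non-matching messages are dropped before they ever cross the network to the consumer.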
| Platform | Filter Type | Example | Notes |
|---|---|---|---|
| Google Pub/Sub | Attribute-based | attributes.eventType = 'COMPLETED' | Filters on message attributes; no payload filtering |
| AWS SNS | Message filtering | { 'eventType': ['COMPLETED', 'FAILED'] } | JSON-based filter on message attributes |
| RabbitMQ | Routing keys + topic exchange | payments.completed.* vs payments.failed.* | Uses hierarchical routing key matching |
| Apache Kafka | No native filtering | N/A | Consumers filter; use Kafka Streams for pre-filtering to new topics |
| Azure Event Grid | Advanced filters | Subject begins with /payments/high-value | Rich filter expressions on properties |
```typescript
// Example: Google Pub/Sub with subscription filters

// Publisher: Add filterable attributes to messages
async function publishPaymentEvent(event: PaymentEvent): Promise<void> {
  await pubsub.topic('payment-events').publishMessage({
    data: Buffer.from(JSON.stringify(event)),
    attributes: {
      // Attributes are used for filtering (not the payload)
      eventType: event.type,          // 'COMPLETED', 'FAILED', 'REFUNDED'
      region: event.merchantRegion,   // 'US', 'EU', 'APAC'
      amountTier: event.amount > 1000 ? 'high' : 'standard',
      currency: event.currency,
    },
  });
}

// Subscription 1: Only high-value completed payments
const highValueCompletedSub = await pubsub
  .topic('payment-events')
  .createSubscription('high-value-completed-sub', {
    filter: 'attributes.eventType = "COMPLETED" AND attributes.amountTier = "high"',
  });

// Subscription 2: Only failed payments in EU region
const euFailedSub = await pubsub
  .topic('payment-events')
  .createSubscription('eu-failed-payments-sub', {
    filter: 'attributes.eventType = "FAILED" AND attributes.region = "EU"',
  });

// Subscription 3: All refunds (no additional filtering)
const refundsSub = await pubsub
  .topic('payment-events')
  .createSubscription('all-refunds-sub', {
    filter: 'attributes.eventType = "REFUNDED"',
  });
```

Design filterable attributes upfront—changing them requires publisher changes and subscriber migrations. Use coarse-grained categories (status, region, priority) rather than fine-grained values. Remember that filter evaluation has a cost; extremely complex filters may not outperform client-side filtering.
While pure pub-sub is fire-and-forget (producer doesn't wait for consumers), the scatter-gather pattern overlays request-response semantics: scatter a request to multiple services, then gather their responses to form a composite result.
How It Works:
Scatter-Gather in Messaging:
The pattern can be implemented with pub-sub by:
```typescript
// Scatter-Gather with async messaging

interface ScatterRequest {
  correlationId: string;
  requestType: string;
  payload: unknown;
  replyTo: string; // Topic for responses
  timeout: number;
}

interface GatherResponse {
  correlationId: string;
  source: string;
  payload: unknown;
  success: boolean;
  error?: string;
}

class ScatterGatherOrchestrator {
  private pendingRequests = new Map<string, {
    resolve: (responses: GatherResponse[]) => void;
    responses: GatherResponse[];
    expectedCount: number;
    timer: NodeJS.Timeout;
  }>();

  async scatterGather(
    request: { type: string; payload: unknown },
    targets: string[],
    timeoutMs: number = 5000
  ): Promise<GatherResponse[]> {
    const correlationId = generateCorrelationId();

    return new Promise(async (resolve) => {
      // Set up response collection
      this.pendingRequests.set(correlationId, {
        resolve,
        responses: [],
        expectedCount: targets.length,
        timer: setTimeout(() => this.handleTimeout(correlationId), timeoutMs),
      });

      // Scatter: Publish to all targets in parallel
      await Promise.all(targets.map(target =>
        this.broker.publish(`requests.${target}`, {
          correlationId,
          requestType: request.type,
          payload: request.payload,
          replyTo: 'responses.orchestrator',
          timeout: timeoutMs,
        })
      ));
    });
  }

  async handleResponse(message: Message): Promise<void> {
    const response: GatherResponse = JSON.parse(message.data);
    const pending = this.pendingRequests.get(response.correlationId);

    if (!pending) {
      // Late response (already timed out) - discard
      return message.ack();
    }

    // Gather: Collect response
    pending.responses.push(response);

    // Check if complete
    if (pending.responses.length >= pending.expectedCount) {
      clearTimeout(pending.timer);
      this.pendingRequests.delete(response.correlationId);
      pending.resolve(pending.responses);
    }

    await message.ack();
  }

  private handleTimeout(correlationId: string): void {
    const pending = this.pendingRequests.get(correlationId);
    if (pending) {
      this.pendingRequests.delete(correlationId);
      // Resolve with partial responses
      pending.resolve(pending.responses);
    }
  }
}
```

Use scatter-gather when you need results from multiple services to compose a response (product pages, search aggregation, multi-source pricing). For fire-and-forget fan-out where you don't need responses, simple pub-sub is more appropriate and simpler to implement.
In event-sourced systems, the event log serves as both the source of truth and the fan-out mechanism. Every state change is captured as an immutable event, and multiple consumers project different views from the same event stream.
Why This Matters:
Event sourcing fan-out differs from traditional pub-sub in crucial ways:
Implementation Considerations:
```typescript
// Event Sourcing: Multiple projections from same event stream

type AccountEvent =
  | { type: 'ACCOUNT_OPENED'; accountId: string; customerId: string; openedAt: string }
  | { type: 'MONEY_DEPOSITED'; accountId: string; amount: number; timestamp: string }
  | { type: 'MONEY_WITHDRAWN'; accountId: string; amount: number; timestamp: string }
  | { type: 'ACCOUNT_CLOSED'; accountId: string; closedAt: string };

// Projection 1: Current Balance
class BalanceProjection {
  private balances = new Map<string, number>();

  apply(event: AccountEvent): void {
    switch (event.type) {
      case 'ACCOUNT_OPENED':
        this.balances.set(event.accountId, 0);
        break;
      case 'MONEY_DEPOSITED':
        this.balances.set(
          event.accountId,
          (this.balances.get(event.accountId) || 0) + event.amount
        );
        break;
      case 'MONEY_WITHDRAWN':
        this.balances.set(
          event.accountId,
          (this.balances.get(event.accountId) || 0) - event.amount
        );
        break;
      case 'ACCOUNT_CLOSED':
        this.balances.delete(event.accountId);
        break;
    }
  }

  getBalance(accountId: string): number {
    return this.balances.get(accountId) || 0;
  }
}

// Projection 2: Monthly Transaction Summary
class MonthlySummaryProjection {
  private summaries = new Map<string, { deposits: number; withdrawals: number; count: number }>();

  apply(event: AccountEvent): void {
    if (event.type === 'MONEY_DEPOSITED' || event.type === 'MONEY_WITHDRAWN') {
      const month = event.timestamp.substring(0, 7); // YYYY-MM
      const key = `${event.accountId}:${month}`;
      const current = this.summaries.get(key) || { deposits: 0, withdrawals: 0, count: 0 };

      if (event.type === 'MONEY_DEPOSITED') {
        current.deposits += event.amount;
      } else {
        current.withdrawals += event.amount;
      }
      current.count++;
      this.summaries.set(key, current);
    }
  }
}

// Consumer: Fan-out to all projections
async function consumeAccountEvents(): Promise<void> {
  const balanceProjection = new BalanceProjection();
  const summaryProjection = new MonthlySummaryProjection();
  // ... more projections

  // Subscribe from earliest for full rebuild, or from checkpoint for live
  await eventStream.subscribe('account-events', { startFrom: checkpoint }, async (event) => {
    // Fan-out to all projections (same event, multiple handlers)
    balanceProjection.apply(event);
    summaryProjection.apply(event);
    // ... more projections

    // Checkpoint for recovery
    await saveCheckpoint(event.position);
  });
}
```

As subscriber count grows, fan-out performance becomes critical. Understanding the mechanics helps you design systems that scale efficiently.
| Factor | Impact | Mitigation |
|---|---|---|
| Subscriber count | Linear increase in broker work per message | Hierarchical fan-out; separate topics by domain |
| Message size | Network bandwidth multiplied by subscribers | Reference pattern (URI instead of payload); compression |
| Subscriber speed variance | Slow subscribers don't block fast ones (good) | Ensure backpressure per subscription; monitor lag |
| Acknowledgment overhead | Per-subscription ack tracking | Batch acknowledgments; tune ack deadlines |
| Message ordering | Ordering constraints limit parallelism | Relax ordering where possible; partition by entity |
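The last mitigation in the table, partitioning by entity, can be sketched as hash-based partition assignment: events for the same entity always land on the same partition (preserving per-entity order) while different entities process in parallel. The hash function below is illustrative; production systems typically use the partitioner built into their broker client:

```typescript
// Map an entity ID onto a fixed number of partitions. A given entity
// always hashes to the same partition, so its events stay ordered,
// while unrelated entities spread across partitions for parallelism.
function partitionFor(entityId: string, partitionCount: number): number {
  let hash = 0;
  for (const ch of entityId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // unsigned 32-bit rolling hash
  }
  return hash % partitionCount;
}
```

Choose a stable key (such as `orderId` or `accountId`) that matches the ordering guarantee consumers actually need; partitioning by a broader key than necessary reintroduces the parallelism limit.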
The Reference Pattern for Large Payloads
When events contain large data (images, documents, batch records), fan-out multiplies bandwidth consumption. The reference pattern separates data storage from event routing:
This keeps the message bus lean and fast, while large data flows directly from storage to consumer.
```typescript
// Reference Pattern: Large payload via object storage

interface LargeDocumentEvent {
  eventId: string;
  eventType: 'DOCUMENT_UPLOADED';
  timestamp: string;

  // Metadata (small, included in event)
  documentId: string;
  fileName: string;
  contentType: string;
  fileSizeBytes: number;
  uploadedBy: string;

  // Reference to large content (not included in event)
  contentRef: {
    bucket: string;
    key: string;
    region: string;
    presignedUrl?: string; // Optional: pre-signed URL with expiry
  };
}

// Publisher: Store content, publish reference
async function publishDocumentUploaded(
  document: Buffer,
  metadata: DocumentMetadata
): Promise<void> {
  // Step 1: Store large content in object storage
  const key = `documents/${metadata.documentId}/${metadata.fileName}`;
  await s3.putObject({
    Bucket: 'documents-bucket',
    Key: key,
    Body: document,
    ContentType: metadata.contentType,
  });

  // Step 2: Publish small event with reference
  const event: LargeDocumentEvent = {
    eventId: generateId(),
    eventType: 'DOCUMENT_UPLOADED',
    timestamp: new Date().toISOString(),
    documentId: metadata.documentId,
    fileName: metadata.fileName,
    contentType: metadata.contentType,
    fileSizeBytes: document.length,
    uploadedBy: metadata.userId,
    contentRef: {
      bucket: 'documents-bucket',
      key,
      region: 'us-east-1',
    },
  };

  // Event is small (~500 bytes) regardless of document size
  await pubsub.topic('documents.uploaded').publish(event);
}

// Consumer: Fetch content on demand
async function processDocumentEvent(event: LargeDocumentEvent): Promise<void> {
  // Only fetch if this consumer needs the content
  if (needsContent(event)) {
    const content = await s3.getObject({
      Bucket: event.contentRef.bucket,
      Key: event.contentRef.key,
    });
    await processContent(content.Body);
  } else {
    // Some consumers only need metadata
    await processMetadataOnly(event);
  }
}
```

When consumers are in different accounts or networks, include a time-limited presigned URL in the event. This allows consumers to fetch content without needing storage credentials, while the URL expires (typically 15 minutes to 1 hour) to limit exposure.
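Because delivery and processing can lag behind publishing, consumers should check a presigned URL's remaining lifetime before attempting a download. A minimal sketch, where `PresignedRef` and its `expiresAt` field are hypothetical names (not part of any SDK) recorded by the publisher at signing time:

```typescript
// Sketch: verify a presigned URL still has usable lifetime before fetching.
// `PresignedRef` and `expiresAt` are illustrative, not from a real SDK.
interface PresignedRef {
  presignedUrl: string;
  expiresAt: string; // ISO timestamp set when the URL was signed
}

function isUrlUsable(ref: PresignedRef, now: Date = new Date(), skewMs = 30_000): boolean {
  // Leave a safety margin for clock skew and the time to start the download.
  return new Date(ref.expiresAt).getTime() - now.getTime() > skewMs;
}
```

A consumer that finds the URL expired can fall back to fetching with its own storage credentials, or request a freshly signed URL from the publishing service.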
Certain patterns seem convenient but lead to fragile, hard-to-maintain systems. Recognizing these anti-patterns helps you avoid common pitfalls.
| Symptom | Likely Anti-Pattern | Root Cause |
|---|---|---|
| Producer code changes when consumers change | Orchestrated fan-out | Producer knows about consumers |
| Events have many optional fields | Polluted events | Events designed for consumers, not domain |
| All consumers process all event types | God topic | Missing topic segmentation |
| High disk usage on broker | Phantom subscriptions | Stale subscriptions accumulating messages |
| Producer latency tied to consumer latency | Synchronous disguised as async | Producer waiting for processing |
We've explored the major patterns for distributing events to multiple consumers. Let's consolidate the key concepts:
What's Next:
Now that we understand how events flow to multiple consumers, we'll explore message filtering—the techniques for ensuring each consumer receives precisely the events it needs, without wasting resources on irrelevant messages.
You now understand the major fan-out patterns that enable one-to-many event distribution. From simple pub-sub to hierarchical routing to scatter-gather aggregation, these patterns form the architectural vocabulary for building scalable, event-driven systems.