The fastest way to complete a request is to not do the work during the request. Asynchronous processing is the art of accepting a request, acknowledging it immediately, and completing the actual work in the background.
Consider these scenarios:
| Operation | Sync Processing | Async Processing |
|---|---|---|
| Send email | 500ms (SMTP) | 5ms (queue) → later |
| Process video | 30 seconds | 50ms (queue) → webhook when done |
| Generate report | 10 seconds | 50ms → polling/push notification |
| Third-party API | 200-2000ms | 5ms (queue) → retry on failure |
| Update search index | 500ms | 5ms → eventual consistency |
In each case, the user-facing response time drops from seconds to milliseconds. The work still happens—just not blocking the user.
By completing this page, you will understand when to use async processing, how to implement reliable job queues, patterns for handling async results, and the tradeoffs between latency and complexity that async introduces.
Not everything should be async. Adding asynchrony introduces complexity—eventual consistency, error handling challenges, and user experience considerations. Use async when benefits clearly outweigh costs.
Strong Candidates for Async: email and notification delivery, media processing, report generation, third-party API calls, and search index updates—the operations in the table above.
The Async Decision Framework:
1. Can the operation be done after responding to user?
└─ NO → Must be sync
└─ YES → Continue
2. Does the operation take more than 100-200ms?
└─ NO → Consider staying sync (complexity vs benefit)
└─ YES → Strong async candidate
3. Can the operation fail without immediate user notification?
└─ NO → Need robust error handling/notification
└─ YES → Natural async fit
4. Is eventual consistency acceptable for this data?
└─ NO → Consider sync, or async with polling/push
└─ YES → Async is appropriate
Every async operation needs: queue infrastructure, worker processes, error handling, retry logic, monitoring, and a way to communicate completion to users. Don't add this complexity for operations that could simply be faster synchronously. Profile first, then optimize.
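The decision framework above can be encoded as a small triage helper. This is purely illustrative—the type, function names, and the 150ms threshold are assumptions for the sketch, not a library API:

```typescript
// Illustrative encoding of the four-step async decision framework.
// All names and thresholds here are assumptions for the sketch.
interface OperationProfile {
  canDeferAfterResponse: boolean;           // Step 1: can it happen after responding?
  typicalDurationMs: number;                // Step 2: how long does it take?
  failureNeedsImmediateUserNotice: boolean; // Step 3: must the user know about failures now?
  requiresStrongConsistency: boolean;       // Step 4: is eventual consistency unacceptable?
}

function recommendProcessing(
  op: OperationProfile
): 'sync' | 'async' | 'async-with-notification' {
  if (!op.canDeferAfterResponse) return 'sync';      // Must be sync
  if (op.typicalDurationMs < 150) return 'sync';     // Complexity not worth the benefit
  if (op.requiresStrongConsistency) return 'sync';   // Or async with polling/push
  if (op.failureNeedsImmediateUserNotice) return 'async-with-notification';
  return 'async';
}

// Example: a welcome email is slow, deferrable, and tolerates eventual consistency
const emailDecision = recommendProcessing({
  canDeferAfterResponse: true,
  typicalDurationMs: 500,
  failureNeedsImmediateUserNotice: false,
  requiresStrongConsistency: false
});
console.log(emailDecision); // 'async'
```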
Message queues are the backbone of async processing. They provide durable, reliable handoff between the request handler (producer) and background workers (consumers).
Queue Selection:
| Queue System | Strengths | Best For | Latency |
|---|---|---|---|
| Redis (Bull/BullMQ) | Simple, fast, built-in delays/retries | Job queues, background tasks | ~1ms |
| RabbitMQ | Flexible routing, mature, AMQP | Complex routing, enterprise messaging | 1-5ms |
| Amazon SQS | Managed, scalable, no ops | AWS workloads, simplicity | 10-50ms |
| Apache Kafka | High throughput, replay, streaming | Event sourcing, log processing | 5-20ms |
| Google Cloud Pub/Sub | Managed, global, at-least-once | GCP workloads, fan-out | 50-100ms |
Queue Architecture:
```
┌──────────┐     ┌───────────┐     ┌──────────┐
│   API    │────▶│   Queue   │────▶│  Worker  │
│ Handler  │     │  (Redis/  │     │ Process  │
└──────────┘     │   SQS)    │     └──────────┘
     │           └───────────┘           │
     │                 │                 │
     ▼                 ▼                 ▼
[Immediate      [Durable Storage]    [Actual
 Response]                            Work]
```
The API handler validates the request, enqueues a job, and responds immediately.
The worker pulls jobs from the queue, performs the actual work, and records the result.
```typescript
// BullMQ implementation for Node.js
import { Queue, Worker, Job } from 'bullmq';
import Redis from 'ioredis';

const connection = new Redis({
  host: 'localhost',
  port: 6379,
  maxRetriesPerRequest: null // Required for BullMQ
});

// === DEFINE QUEUES ===

// Email queue with specific options
const emailQueue = new Queue('email', {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: {
      type: 'exponential',
      delay: 1000 // 1s, 2s, 4s
    },
    removeOnComplete: 100, // Keep last 100 completed
    removeOnFail: 1000     // Keep last 1000 failed
  }
});

// Video processing queue (longer timeouts)
const videoQueue = new Queue('video-processing', {
  connection,
  defaultJobOptions: {
    attempts: 2,
    timeout: 600000 // 10 minute timeout
  }
});

// === API HANDLER (Producer) ===

async function handleSignupRequest(req: Request) {
  // Validate and create user synchronously
  const user = await db.users.create({
    data: { email: req.body.email, name: req.body.name }
  });

  // Queue async tasks - non-blocking
  await emailQueue.add('welcome-email', {
    userId: user.id,
    email: user.email,
    template: 'welcome'
  }, {
    priority: 1 // High priority
  });

  await emailQueue.add('verify-email', {
    userId: user.id,
    email: user.email
  }, {
    delay: 60000 // Send after 1 minute
  });

  // Respond immediately
  return { success: true, user: { id: user.id, email: user.email } };
  // Total response time: ~50ms instead of 500ms+
}

// === WORKER (Consumer) ===

const emailWorker = new Worker('email', async (job: Job) => {
  console.log(`Processing job ${job.id}: ${job.name}`);

  switch (job.name) {
    case 'welcome-email':
      await sendWelcomeEmail(job.data.email);
      break;
    case 'verify-email':
      await sendVerificationEmail(job.data.email);
      break;
    default:
      throw new Error(`Unknown job type: ${job.name}`);
  }

  return { sent: true, timestamp: Date.now() };
}, {
  connection,
  concurrency: 10, // Process 10 emails concurrently
  limiter: {
    max: 100,
    duration: 1000 // Max 100 jobs per second (rate limit)
  }
});

// Worker event handling
emailWorker.on('completed', (job) => {
  console.log(`Job ${job.id} completed`);
});

emailWorker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed: ${err.message}`);
  // Alert, metrics, etc.
});

// Graceful shutdown
process.on('SIGTERM', async () => {
  await emailWorker.close();
  await emailQueue.close();
  process.exit(0);
});
```

Workers may process the same message twice (at-least-once delivery). Design jobs to be idempotent—processing twice should have the same effect as processing once. Use unique job IDs, check for existing results before processing, and use database transactions with conflict handling.
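The idempotency advice above can be made concrete. A minimal sketch of a dedupe check, using an in-memory Set as a stand-in for a durable store (in production you would use something atomic, such as a Redis SETNX or a database unique constraint); the names are illustrative:

```typescript
// Idempotent job processing sketch. The Set stands in for a durable,
// atomic store (e.g. Redis SETNX or a DB unique constraint).
const processedJobs = new Set<string>();
let emailsSent = 0;

function sendEmailOnce(jobId: string, email: string): boolean {
  // In a real worker this check-and-record must be atomic; here the
  // code is single-threaded, so a plain Set suffices for illustration.
  if (processedJobs.has(jobId)) {
    return false; // Duplicate delivery: skip, same effect as processing once
  }
  processedJobs.add(jobId);
  emailsSent += 1; // Stand-in for the real side effect (sending the email)
  return true;
}

// The same message delivered twice produces exactly one side effect
sendEmailOnce('job-42', 'user@example.com'); // true  (processed)
sendEmailOnce('job-42', 'user@example.com'); // false (skipped)
console.log(emailsSent); // 1
```

The key property: retries and duplicate deliveries become harmless no-ops, so at-least-once delivery behaves like exactly-once from the user's perspective.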
When work happens in the background, users need to know the result. Several patterns exist for communicating completion.
1. Fire-and-Forget:
Enqueue and never check the result. The simplest pattern, suitable when losing an occasional job is acceptable—analytics events, logging, and other best-effort work.
2. Polling:
Return a job ID, let client poll for status. Good for operations like report generation, where the client can check back on its own schedule.
3. WebSocket/Server-Sent Events:
Push updates to connected clients. Best for long-running operations with web clients that benefit from live progress, such as video processing.
4. Webhooks:
Call a client-provided URL when complete. Ideal for B2B and API integrations, where the consumer is another server rather than a browser.
```typescript
// Pattern 1: Fire-and-Forget
import crypto from 'crypto';
import { Server } from 'socket.io';

app.post('/api/analytics/track', async (req, res) => {
  // Don't await - fire and forget
  analyticsQueue.add('track-event', req.body).catch(console.error);

  // Immediate response
  res.status(202).json({ accepted: true });
});

// Pattern 2: Polling
app.post('/api/reports/generate', async (req, res) => {
  // Create job and return ID
  const job = await reportQueue.add('generate', {
    userId: req.user.id,
    reportType: req.body.type
  });

  // Return job ID for polling
  res.status(202).json({
    jobId: job.id,
    statusUrl: `/api/jobs/${job.id}/status`
  });
});

app.get('/api/jobs/:jobId/status', async (req, res) => {
  const job = await reportQueue.getJob(req.params.jobId);

  if (!job) {
    return res.status(404).json({ error: 'Job not found' });
  }

  const state = await job.getState();
  const progress = job.progress;

  if (state === 'completed') {
    return res.json({
      status: 'completed',
      result: job.returnvalue,
      downloadUrl: job.returnvalue?.url
    });
  }

  if (state === 'failed') {
    return res.json({ status: 'failed', error: job.failedReason });
  }

  return res.json({
    status: state, // 'waiting' | 'active' | 'delayed'
    progress,
    retryAfter: 1000 // Suggest polling interval
  });
});

// Pattern 3: WebSocket updates
const io = new Server(httpServer);

// Map user connections
const userSockets = new Map<string, Set<string>>();

io.on('connection', (socket) => {
  const userId = socket.handshake.auth.userId;

  // Track connection
  if (!userSockets.has(userId)) {
    userSockets.set(userId, new Set());
  }
  userSockets.get(userId)!.add(socket.id);

  socket.on('disconnect', () => {
    userSockets.get(userId)?.delete(socket.id);
  });
});

// Worker emits updates
const videoWorker = new Worker('video-processing', async (job) => {
  const { userId, videoId } = job.data;

  // Update progress throughout processing
  await job.updateProgress(10);
  notifyUser(userId, { type: 'progress', jobId: job.id, progress: 10 });

  await extractFrames(videoId);
  await job.updateProgress(50);
  notifyUser(userId, { type: 'progress', jobId: job.id, progress: 50 });

  const result = await transcodeVideo(videoId);
  await job.updateProgress(100);
  notifyUser(userId, { type: 'completed', jobId: job.id, result });

  return result;
}, { connection });

function notifyUser(userId: string, data: object) {
  const sockets = userSockets.get(userId);
  if (sockets) {
    for (const socketId of sockets) {
      io.to(socketId).emit('job-update', data);
    }
  }
}

// Pattern 4: Webhooks
app.post('/api/video/process', async (req, res) => {
  const job = await videoQueue.add('process', {
    videoUrl: req.body.videoUrl,
    webhookUrl: req.body.webhookUrl, // Client-provided callback
    webhookSecret: crypto.randomBytes(32).toString('hex')
  });

  res.status(202).json({
    jobId: job.id,
    message: 'Processing started. Webhook will be called on completion.'
  });
});

// In worker, on completion:
async function notifyWebhook(job: Job) {
  const { webhookUrl, webhookSecret } = job.data;

  const payload = {
    jobId: job.id,
    status: 'completed',
    result: job.returnvalue,
    timestamp: new Date().toISOString()
  };

  const signature = crypto
    .createHmac('sha256', webhookSecret)
    .update(JSON.stringify(payload))
    .digest('hex');

  await fetch(webhookUrl, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Signature': signature
    },
    body: JSON.stringify(payload)
  });
}
```

Start with polling—it's simplest and works everywhere. Add WebSocket for better UX when operations take longer than a few seconds and you have web clients. Use webhooks for B2B/API integrations. Fire-and-forget only when you truly don't need confirmation.
Async jobs fail—APIs time out, services crash, resources run out. Robust error handling is essential to prevent data loss and maintain system reliability.
Retry Strategies:
| Strategy | Pattern | Best For |
|---|---|---|
| Immediate | Retry instantly | Transient glitches (network blip) |
| Fixed Delay | Wait N seconds between retries | Rate-limited APIs |
| Exponential Backoff | 1s, 2s, 4s, 8s... | Unknown failure cause, general-purpose |
| Exponential + Jitter | Random within backoff range | High-concurrency to prevent thundering herd |
| Linear Backoff | 1s, 2s, 3s, 4s... | Gentle ramp-up, known recovery time |
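The "Exponential + Jitter" row from the table can be sketched as a pure delay function. This is the full-jitter variant (random delay between zero and the exponential bound); the base and cap values are illustrative:

```typescript
// Exponential backoff with full jitter: pick a random delay in
// [0, min(cap, base * 2^attempt)]. Randomizing spreads retries out,
// preventing a thundering herd when many jobs fail simultaneously.
function backoffWithJitter(attempt: number, baseMs = 1000, capMs = 300_000): number {
  const exponentialBound = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exponentialBound;
}

// Attempt 0 retries within ~1s, attempt 3 within ~8s; later attempts
// are capped at 5 minutes regardless of how large 2^attempt grows.
for (let attempt = 0; attempt < 5; attempt++) {
  console.log(`attempt ${attempt}: retry in ${Math.round(backoffWithJitter(attempt))}ms`);
}
```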
```typescript
// Comprehensive retry configuration
import CircuitBreaker from 'opossum';

const emailQueue = new Queue('email', {
  connection,
  defaultJobOptions: {
    attempts: 5,
    backoff: {
      type: 'exponential',
      delay: 1000 // Base delay
    }
    // Retry timeline: 1s, 2s, 4s, 8s, 16s (total ~31s)
  }
});

// Custom backoff with jitter
const paymentQueue = new Queue('payment', {
  connection,
  defaultJobOptions: {
    attempts: 10,
    backoff: { type: 'custom' }
  }
});

// Custom backoff function
paymentQueue.on('failed', async (job, err, prev) => {
  // Calculate next delay with jitter
  const baseDelay = 1000 * Math.pow(2, job.attemptsMade);
  const jitter = Math.random() * 1000;
  const nextDelay = Math.min(baseDelay + jitter, 300000); // Max 5 min
  console.log(`Job ${job.id} retry in ${nextDelay}ms`);
});

// Dead Letter Queue (DLQ) for failed jobs
const dlq = new Queue('dead-letter', { connection });

const emailWorker = new Worker('email', async (job) => {
  try {
    await sendEmail(job.data);
  } catch (error: any) {
    // Classify error
    if (isRetryable(error)) {
      throw error; // Will be retried
    }

    // Non-retryable: move to DLQ
    await dlq.add('failed-email', {
      originalJob: job.data,
      error: error.message,
      failedAt: new Date().toISOString(),
      attempts: job.attemptsMade
    });

    // Don't throw - job is "handled"
    return { movedToDLQ: true };
  }
}, { connection });

function isRetryable(error: Error): boolean {
  // Network errors, timeouts, 5xx - retryable
  if (error.message.includes('ECONNREFUSED')) return true;
  if (error.message.includes('timeout')) return true;
  if (error.message.includes('503')) return true;

  // Invalid data, auth errors - not retryable
  if (error.message.includes('400')) return false;
  if (error.message.includes('401')) return false;
  if (error.message.includes('invalid')) return false;

  // Unknown - retry to be safe
  return true;
}

// DLQ processor - for manual review or different handling
const dlqWorker = new Worker('dead-letter', async (job) => {
  // Log for manual review
  console.error('DLQ Job:', JSON.stringify(job.data, null, 2));

  // Send alert
  await alerting.send({
    title: 'Job in Dead Letter Queue',
    details: job.data,
    severity: 'warning'
  });

  // Store in persistent storage for later analysis
  await db.failedJobs.create({
    data: {
      queueName: job.data.originalQueue,
      payload: job.data.originalJob,
      error: job.data.error,
      failedAt: job.data.failedAt
    }
  });
}, { connection });

// Circuit breaker for dependency failures
const emailService = new CircuitBreaker(sendEmailToProvider, {
  timeout: 3000,                // Fail if takes > 3s
  errorThresholdPercentage: 50, // Open if 50% fail
  resetTimeout: 30000           // Try again after 30s
});

emailService.on('open', () => {
  console.warn('Email service circuit OPEN - pausing');
  emailQueue.pause(); // Stop processing while provider is down
});

emailService.on('halfOpen', () => {
  console.log('Email service circuit half-open - testing');
  emailQueue.resume();
});

emailService.on('close', () => {
  console.log('Email service circuit CLOSED - normal operation');
});

// Use circuit breaker in worker
const emailWorkerWithCB = new Worker('email', async (job) => {
  await emailService.fire(job.data);
}, { connection });
```

Some messages will never succeed no matter how many times you retry—invalid data, deleted resources, authorization revoked. These 'poison messages' can block queue processing. Always limit max retries, create a dead-letter queue for persistent failures, and monitor DLQ size. A growing DLQ indicates a systemic problem.
Event-driven architecture extends async processing from individual jobs to system-wide decoupling. Instead of direct service-to-service calls, services emit events that other services can consume.
From Request-Response to Events:
```typescript
// Event-driven order processing
import { Kafka, Producer, Consumer } from 'kafkajs';

const kafka = new Kafka({
  clientId: 'order-service',
  brokers: ['kafka:9092']
});

const producer = kafka.producer();

// === ORDER SERVICE (Producer) ===

async function createOrder(orderData: OrderInput): Promise<Order> {
  // Create order in database - the only sync operation
  const order = await db.orders.create({ data: orderData });

  // Emit event for downstream processing
  await producer.send({
    topic: 'orders',
    messages: [{
      key: order.id,
      value: JSON.stringify({
        type: 'OrderCreated',
        timestamp: new Date().toISOString(),
        data: {
          orderId: order.id,
          userId: order.userId,
          items: order.items,
          totalAmount: order.totalAmount
        }
      })
    }]
  });

  // Respond immediately - all other work happens async
  return order;
  // Response time: ~50ms instead of 500ms+
}

// === PAYMENT SERVICE (Consumer) ===

const paymentConsumer = kafka.consumer({ groupId: 'payment-service' });

await paymentConsumer.subscribe({ topic: 'orders' });

await paymentConsumer.run({
  eachMessage: async ({ message }) => {
    const event = JSON.parse(message.value!.toString());

    if (event.type === 'OrderCreated') {
      try {
        // Process payment
        const payment = await processPayment(event.data);

        // Emit result event
        await producer.send({
          topic: 'payments',
          messages: [{
            key: event.data.orderId,
            value: JSON.stringify({
              type: 'PaymentProcessed',
              data: { orderId: event.data.orderId, paymentId: payment.id }
            })
          }]
        });
      } catch (error: any) {
        // Emit failure event
        await producer.send({
          topic: 'payments',
          messages: [{
            key: event.data.orderId,
            value: JSON.stringify({
              type: 'PaymentFailed',
              data: { orderId: event.data.orderId, error: error.message }
            })
          }]
        });
      }
    }
  }
});

// === INVENTORY SERVICE (Consumer) ===

const inventoryConsumer = kafka.consumer({ groupId: 'inventory-service' });

await inventoryConsumer.subscribe({ topic: 'orders' });

await inventoryConsumer.run({
  eachMessage: async ({ message }) => {
    const event = JSON.parse(message.value!.toString());

    if (event.type === 'OrderCreated') {
      // Reserve inventory
      await reserveInventory(event.data.items);

      await producer.send({
        topic: 'inventory',
        messages: [{
          key: event.data.orderId,
          value: JSON.stringify({
            type: 'InventoryReserved',
            data: { orderId: event.data.orderId }
          })
        }]
      });
    }
  }
});

// === NOTIFICATION SERVICE (Multi-event Consumer) ===

const notificationConsumer = kafka.consumer({ groupId: 'notification-service' });

await notificationConsumer.subscribe({ topics: ['orders', 'payments', 'shipping'] });

await notificationConsumer.run({
  eachMessage: async ({ topic, message }) => {
    const event = JSON.parse(message.value!.toString());

    switch (event.type) {
      case 'OrderCreated':
        await sendEmail(event.data.userId, 'order-confirmation', event.data);
        break;
      case 'PaymentProcessed':
        await sendEmail(event.data.userId, 'payment-receipt', event.data);
        break;
      case 'PaymentFailed':
        await sendEmail(event.data.userId, 'payment-failed', event.data);
        break;
      case 'OrderShipped':
        await sendEmail(event.data.userId, 'shipping-notification', event.data);
        break;
    }
  }
});
```

Events should be immutable facts: 'OrderCreated', not 'CreateOrder'. Include all necessary data in the event (avoid requiring a call back to the originating service). Add a schema version for evolution. Use past tense for events (OrderCreated, PaymentProcessed). Separate event types for different outcomes (PaymentProcessed vs PaymentFailed).
When async operations span multiple services, traditional ACID transactions don't work across service boundaries. The Saga pattern manages distributed transactions through a sequence of local transactions with compensating actions for rollbacks.
Saga Execution:
Order Saga:
1. Create Order (Order Service)
└─ Compensation: Cancel Order
2. Reserve Inventory (Inventory Service)
└─ Compensation: Release Inventory
3. Process Payment (Payment Service)
└─ Compensation: Refund Payment
4. Ship Order (Shipping Service)
└─ Compensation: Cancel Shipment
If step 3 fails:
- Execute compensation for step 2 (release inventory)
- Execute compensation for step 1 (cancel order)
- Saga marked as failed
```typescript
// Saga Orchestration Pattern
interface SagaStep<T> {
  name: string;
  execute: (context: T) => Promise<T>;
  compensate: (context: T) => Promise<T>;
}

class SagaOrchestrator<T extends object> {
  private steps: SagaStep<T>[] = [];
  private completedSteps: SagaStep<T>[] = [];

  addStep(step: SagaStep<T>): this {
    this.steps.push(step);
    return this;
  }

  async execute(initialContext: T): Promise<{ success: boolean; context: T; error?: Error }> {
    let context = { ...initialContext };

    try {
      // Execute each step in order
      for (const step of this.steps) {
        console.log(`Executing: ${step.name}`);
        context = await step.execute(context);
        this.completedSteps.push(step);
      }

      return { success: true, context };
    } catch (error) {
      console.error(`Saga failed at step, rolling back...`);

      // Compensate in reverse order
      for (const step of this.completedSteps.reverse()) {
        try {
          console.log(`Compensating: ${step.name}`);
          context = await step.compensate(context);
        } catch (compError) {
          // Compensation failed - requires manual intervention
          console.error(`CRITICAL: Compensation failed for ${step.name}`);
          await alertOperations(step.name, compError);
        }
      }

      return { success: false, context, error: error as Error };
    }
  }
}

// Define Order Saga
interface OrderContext {
  orderId?: string;
  userId: string;
  items: Array<{ productId: string; quantity: number }>;
  reservationId?: string;
  paymentId?: string;
  totalAmount: number;
}

const orderSaga = new SagaOrchestrator<OrderContext>()
  .addStep({
    name: 'CreateOrder',
    execute: async (ctx) => {
      const order = await orderService.create({
        userId: ctx.userId,
        items: ctx.items,
        status: 'PENDING'
      });
      return { ...ctx, orderId: order.id };
    },
    compensate: async (ctx) => {
      if (ctx.orderId) {
        await orderService.cancel(ctx.orderId);
      }
      return ctx;
    }
  })
  .addStep({
    name: 'ReserveInventory',
    execute: async (ctx) => {
      const reservation = await inventoryService.reserve({
        orderId: ctx.orderId!,
        items: ctx.items
      });
      return { ...ctx, reservationId: reservation.id };
    },
    compensate: async (ctx) => {
      if (ctx.reservationId) {
        await inventoryService.release(ctx.reservationId);
      }
      return ctx;
    }
  })
  .addStep({
    name: 'ProcessPayment',
    execute: async (ctx) => {
      const payment = await paymentService.charge({
        orderId: ctx.orderId!,
        userId: ctx.userId,
        amount: ctx.totalAmount
      });
      return { ...ctx, paymentId: payment.id };
    },
    compensate: async (ctx) => {
      if (ctx.paymentId) {
        await paymentService.refund(ctx.paymentId);
      }
      return ctx;
    }
  })
  .addStep({
    name: 'ConfirmOrder',
    execute: async (ctx) => {
      await orderService.confirm(ctx.orderId!);
      // Emit event for downstream (shipping, notification)
      await eventBus.emit('OrderConfirmed', { orderId: ctx.orderId });
      return ctx;
    },
    compensate: async (ctx) => {
      // Order was confirmed but saga failed after - unlikely but handle
      await orderService.markFailed(ctx.orderId!);
      return ctx;
    }
  });

// Usage
async function handleOrderRequest(orderData: OrderInput) {
  const result = await orderSaga.execute({
    userId: orderData.userId,
    items: orderData.items,
    totalAmount: calculateTotal(orderData.items)
  });

  if (result.success) {
    return { success: true, orderId: result.context.orderId };
  } else {
    return { success: false, error: result.error?.message };
  }
}
```

During saga execution, the system is in an inconsistent state—inventory reserved but payment not yet processed. Design for this: show 'order processing' status, handle concurrent operations on the same resources, use optimistic locking. Sagas trade strong consistency for availability and partition tolerance.
While async processing reduces latency for users, it introduces its own performance considerations that impact end-to-end processing time.
Queue Latency:
Time from enqueue to dequeue is not zero. It depends on queue depth relative to worker throughput, worker concurrency, polling or delivery intervals, and serialization and network overhead.
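A back-of-envelope estimate for that wait, using a simplified steady-state model (this is arithmetic for intuition, not a queue-library API; the function name is illustrative):

```typescript
// Rough steady-state estimate: wait ≈ queue depth / effective throughput,
// where throughput = workers × concurrency / average job duration.
// Ignores variance and bursts; real queues need percentile monitoring.
function estimateQueueWaitMs(
  queueDepth: number,
  workers: number,
  concurrencyPerWorker: number,
  avgJobDurationMs: number
): number {
  const jobsPerMs = (workers * concurrencyPerWorker) / avgJobDurationMs;
  return queueDepth / jobsPerMs;
}

// 1000 queued jobs, 2 workers × 10 concurrent jobs each, 200ms per job:
// throughput is 0.1 jobs/ms, so the last job waits roughly 10 seconds.
console.log(estimateQueueWaitMs(1000, 2, 10, 200)); // 10000
```

The takeaway: enqueueing is fast, but completion time is governed by backlog and worker capacity, so monitor queue depth alongside processing time.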
```typescript
// Batch processing for efficiency
const notificationWorker = new Worker('notifications', async (job) => {
  // Job contains batch of notifications
  const notifications = job.data.notifications;

  // Get all user emails in single query
  const userIds = [...new Set(notifications.map(n => n.userId))];
  const users = await db.users.findMany({
    where: { id: { in: userIds } },
    select: { id: true, email: true, preferences: true }
  });
  const userMap = new Map(users.map(u => [u.id, u]));

  // Send all emails
  await Promise.all(
    notifications.map(async (n) => {
      const user = userMap.get(n.userId);
      if (user && user.preferences.emailEnabled) {
        await sendEmail(user.email, n.template, n.data);
      }
    })
  );

  return { sent: notifications.length };
}, {
  connection,
  concurrency: 5
});

// Batch collector - accumulate and flush
class BatchCollector<T> {
  private batch: T[] = [];
  private flushTimeout?: NodeJS.Timeout;

  constructor(
    private readonly queue: Queue,
    private readonly jobName: string,
    private readonly maxBatchSize: number = 100,
    private readonly maxWaitMs: number = 1000
  ) {}

  async add(item: T): Promise<void> {
    this.batch.push(item);

    if (this.batch.length >= this.maxBatchSize) {
      await this.flush();
    } else if (!this.flushTimeout) {
      // Schedule flush if not already scheduled
      this.flushTimeout = setTimeout(() => this.flush(), this.maxWaitMs);
    }
  }

  private async flush(): Promise<void> {
    if (this.flushTimeout) {
      clearTimeout(this.flushTimeout);
      this.flushTimeout = undefined;
    }

    if (this.batch.length === 0) return;

    const items = this.batch;
    this.batch = [];

    await this.queue.add(this.jobName, { notifications: items });
  }
}

const notificationBatcher = new BatchCollector(
  notificationQueue,
  'batch-send',
  100,  // flush at 100 items
  1000  // or after 1 second
);

// Usage: add individual items, they're batched automatically
await notificationBatcher.add({ userId: '123', template: 'welcome', data: {} });

// End-to-end latency monitoring
const jobMetrics = new Map<string, { enqueued: number; started?: number; completed?: number }>();

// Track enqueue time
queue.on('added', (job) => {
  jobMetrics.set(job.id!, { enqueued: Date.now() });
});

// Track processing start
worker.on('active', (job) => {
  const metrics = jobMetrics.get(job.id!);
  if (metrics) {
    metrics.started = Date.now();
    const waitTime = metrics.started - metrics.enqueued;
    histogram.observe({ type: 'queue_wait_time' }, waitTime);
  }
});

// Track completion
worker.on('completed', (job) => {
  const metrics = jobMetrics.get(job.id!);
  if (metrics) {
    metrics.completed = Date.now();
    const processingTime = metrics.completed - (metrics.started || metrics.enqueued);
    const totalTime = metrics.completed - metrics.enqueued;
    histogram.observe({ type: 'processing_time' }, processingTime);
    histogram.observe({ type: 'total_time' }, totalTime);
    jobMetrics.delete(job.id!);
  }
});
```

Async processing is a powerful latency optimization strategy—it reduces user-facing response time by deferring work to background processes. This enables near-instant responses for operations that would otherwise take seconds or minutes.
What's Next:
Async processing defers work; connection pooling makes the remaining synchronous work faster. The final page explores how connection pooling eliminates the overhead of establishing new connections, reducing latency for database access, HTTP clients, and other networked resources.
You now understand how to use asynchronous processing to dramatically reduce user-facing latency. From job queues to event-driven architecture to saga patterns, these techniques enable instant responses while complex work completes in the background.