Individual Promises are useful, but the true power of the Future/Promise pattern emerges when you compose multiple async operations. Real-world applications don't make one async call in isolation—they orchestrate dozens or hundreds of operations with complex dependencies, parallel execution, timeout constraints, and fallback strategies.
The composition question: Given multiple Promises, how do we:

- Run them in parallel and wait for all of them?
- Take the first result and ignore the rest?
- Tolerate individual failures while still collecting the successes?
- Limit how many run at once?
This page answers these questions with battle-tested composition patterns that form the vocabulary of professional async programming.
By the end of this page, you will master Promise composition: Promise.all() for parallel execution, Promise.race() for competition, Promise.allSettled() for resilient aggregation, Promise.any() for first-success semantics, and advanced patterns like concurrency limiting and conditional chaining. These patterns are essential for building efficient, resilient async systems.
The pattern: When you have multiple independent async operations and need all results before proceeding, Promise.all() runs them in parallel and resolves when all succeed.
Key characteristics:

- All operations start immediately and run in parallel
- Resolves with an array of results in the same order as the inputs
- Rejects as soon as ANY input rejects (fail-fast); the other results are discarded
```typescript
// Promise.all(): Wait for ALL to succeed

// Basic usage: fetch multiple resources in parallel
async function loadDashboard(userId: string): Promise<Dashboard> {
  // All three fetch operations start immediately
  // Total time = max(user, orders, notifications), not sum
  const [user, orders, notifications] = await Promise.all([
    fetchUser(userId),          // Takes ~200ms
    fetchOrders(userId),        // Takes ~350ms
    fetchNotifications(userId)  // Takes ~150ms
  ]);
  // Total time: ~350ms (not 700ms if sequential)
  return { user, orders, notifications };
}

// Order preservation: results match input order
const urls = ['/api/a', '/api/b', '/api/c'];
const [resultA, resultB, resultC] = await Promise.all(
  urls.map(url => fetch(url).then(r => r.json()))
);
// Results are in order, regardless of which request finished first

// Fail-fast behavior
async function fragileOperation() {
  try {
    const results = await Promise.all([
      Promise.resolve('success 1'),
      Promise.reject(new Error('failure!')), // This fails
      Promise.resolve('success 3'),
    ]);
  } catch (error) {
    // Catches immediately when ANY promise rejects
    console.log((error as Error).message); // "failure!"
    // Note: 'success 1' and 'success 3' are lost
  }
}

// When to use Promise.all():
// - Loading data where ALL pieces are required
// - Validating multiple inputs (any failure = invalid)
// - Pre-warming caches for multiple keys
// - Batch operations where partial success is useless
```

A common mistake: `for (const item of items) { await fetch(item); }` runs sequentially, one at a time, so total time is the sum of all. Use `await Promise.all(items.map(item => fetch(item)))` to run in parallel; total time is the max of all. This can be a 10-100x speedup for I/O-bound operations.
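The sequential-vs-parallel timing claim is easy to check with a simulated task (a minimal sketch using only timers; the 50ms latency and n=4 are illustrative assumptions):

```typescript
// Simulated async task: resolves after ~50ms, standing in for an I/O call
const task = () => new Promise<void>(r => setTimeout(r, 50));

// One at a time: each await waits for the previous task to finish
async function sequential(n: number): Promise<number> {
  const start = Date.now();
  for (let i = 0; i < n; i++) await task();
  return Date.now() - start; // ~n × 50ms
}

// All at once: every task starts immediately, await the slowest
async function parallel(n: number): Promise<number> {
  const start = Date.now();
  await Promise.all(Array.from({ length: n }, task));
  return Date.now() - start; // ~50ms regardless of n
}

(async () => {
  const seq = await sequential(4); // roughly 200ms
  const par = await parallel(4);   // roughly 50ms
  console.log(seq > par); // true
})();
```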
The problem with Promise.all(): It fails fast—one rejection loses all results. But what if you want to know which operations succeeded and which failed? What if partial success is acceptable?
The solution: Promise.allSettled() waits for all Promises to settle (resolve or reject) and returns the outcome of each.
Key characteristics:
- Never rejects: the returned Promise always fulfills once every input has settled
- Each result is `{ status: 'fulfilled', value }` or `{ status: 'rejected', reason }`
- Results preserve input order, so each outcome can be matched to its input
```typescript
// Promise.allSettled(): Get ALL results, regardless of individual failures

// Sending notifications to multiple channels - some may fail
async function notifyAllChannels(
  message: string,
  channels: NotificationChannel[]
): Promise<NotificationReport> {
  const results = await Promise.allSettled(
    channels.map(channel => channel.send(message))
  );

  // Categorize results
  const successes: string[] = [];
  const failures: { channel: string; error: string }[] = [];

  results.forEach((result, index) => {
    if (result.status === 'fulfilled') {
      successes.push(channels[index].name);
    } else {
      failures.push({
        channel: channels[index].name,
        error: result.reason.message
      });
    }
  });

  return {
    totalSent: successes.length,
    totalFailed: failures.length,
    successes,
    failures,
    partialSuccess: successes.length > 0 && failures.length > 0
  };
}

// Usage example
const report = await notifyAllChannels('Server alert!', [
  emailChannel,   // succeeds
  slackChannel,   // succeeds
  smsChannel,     // fails (quota exceeded)
  webhookChannel  // succeeds
]);
// report = {
//   totalSent: 3, totalFailed: 1,
//   successes: ['email', 'slack', 'webhook'],
//   failures: [{ channel: 'sms', error: 'Quota exceeded' }],
//   partialSuccess: true
// }

// Batch processing with error tracking
async function processBatch<T, R>(
  items: T[],
  processor: (item: T) => Promise<R>
): Promise<BatchResult<R>> {
  const results = await Promise.allSettled(items.map(processor));

  return {
    succeeded: results
      .filter((r): r is PromiseFulfilledResult<R> => r.status === 'fulfilled')
      .map(r => r.value),
    // flatMap keeps the ORIGINAL index, so each failure maps to the right item
    // (filtering first would re-index and pair errors with the wrong items)
    failed: results.flatMap((r, i) =>
      r.status === 'rejected' ? [{ item: items[i], error: r.reason }] : []
    )
  };
}
```

Sometimes you don't want to wait for all Promises—you want the first one to complete. This pattern has two flavors:
Promise.race(): Resolves/rejects with the first settled Promise (whether success or failure)
Promise.any(): Resolves with the first successful Promise (ignores rejections until all fail)
```typescript
// Promise.race(): First to settle wins (success OR failure)

// Classic use case: Implementing timeouts
function withTimeout<T>(
  promise: Promise<T>,
  timeoutMs: number
): Promise<T> {
  const timeout = new Promise<never>((_, reject) => {
    setTimeout(() => reject(new Error('Operation timed out')), timeoutMs);
  });
  return Promise.race([promise, timeout]);
}

// Usage
try {
  const result = await withTimeout(fetchData(), 5000);
} catch (error) {
  if ((error as Error).message === 'Operation timed out') {
    // Handle timeout
  } else {
    // Handle fetch error
  }
}

// Racing multiple data sources
async function fetchWithFallback(endpoint: string): Promise<Data> {
  return Promise.race([
    fetchFromPrimary(endpoint),
    createDelayedFallback(endpoint, 2000) // Fallback after 2s
  ]);
}

function createDelayedFallback(endpoint: string, delay: number): Promise<Data> {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      fetchFromBackup(endpoint).then(resolve, reject);
    }, delay);
  });
}

// Promise.any(): First SUCCESS wins (rejections ignored)

// Try multiple CDNs, use whichever responds first successfully
async function fetchFromFastestCdn(resourcePath: string): Promise<Resource> {
  // Resolves with first successful response
  // Other requests continue but results are ignored
  // Only rejects if ALL CDNs fail (AggregateError)
  return Promise.any([
    fetchFromCdn('https://cdn1.example.com' + resourcePath),
    fetchFromCdn('https://cdn2.example.com' + resourcePath),
    fetchFromCdn('https://cdn3.example.com' + resourcePath),
  ]);
}

// Handling AggregateError when all fail
try {
  const resource = await Promise.any([
    Promise.reject(new Error('CDN 1 down')),
    Promise.reject(new Error('CDN 2 down')),
    Promise.reject(new Error('CDN 3 down')),
  ]);
} catch (error) {
  if (error instanceof AggregateError) {
    console.log('All failed:', error.errors);
    // error.errors = [Error('CDN 1 down'), Error('CDN 2 down'), ...]
  }
}
```

| Aspect | Promise.race() | Promise.any() |
|---|---|---|
| Settles when | First promise settles (success OR failure) | First promise succeeds (failures ignored initially) |
| Empty input | Never settles (pending forever) | Rejects with AggregateError |
| All reject | Rejects with first rejection | Rejects with AggregateError containing all errors |
| Use case | Timeouts, first response | Redundant sources, fallbacks, hedged requests |
| Risk | A fast failure beats a slow success | Potentially ignores important failures |
At scale, tail latency matters. Google's research shows that sending the same request to multiple backends (hedging) and using the first response significantly reduces p99 latency. Promise.any() is the primitive for this: send to 3 replicas, use whichever answers first.
A critical skill in async programming is recognizing when operations depend on each other (must be sequential) vs. when they're independent (can be parallel). The difference often determines whether a workflow takes 2 seconds or 20 seconds.
```typescript
// CASE 1: Sequential - each step depends on the previous
async function processUserOrder(userId: string): Promise<Receipt> {
  // Must be sequential: each step uses the previous result
  const user = await fetchUser(userId);              // 100ms
  const cart = await fetchCart(user.cartId);         // 150ms
  const payment = await processPayment(user, cart);  // 500ms
  const receipt = await generateReceipt(payment);    // 50ms
  // Total: 800ms (sum of all)
  return receipt;
}

// CASE 2: Parallel - operations are independent
async function loadUserDashboard(userId: string): Promise<Dashboard> {
  // Can be parallel: no dependencies between these calls
  const [profile, orders, notifications, recommendations] = await Promise.all([
    fetchProfile(userId),         // 100ms
    fetchOrders(userId),          // 200ms
    fetchNotifications(userId),   // 80ms
    fetchRecommendations(userId)  // 300ms
  ]);
  // Total: 300ms (max of all)
  return { profile, orders, notifications, recommendations };
}

// CASE 3: Mixed - some sequential, some parallel
async function processCheckout(userId: string): Promise<OrderResult> {
  // Phase 1: Get user and cart (can be parallel)
  const [user, cart] = await Promise.all([
    fetchUser(userId),
    fetchCart(userId)
  ]);

  // Phase 2: Depends on Phase 1, but these can be parallel
  const [paymentMethod, shippingOptions] = await Promise.all([
    getPaymentMethod(user.defaultPaymentId),
    calculateShipping(cart.items, user.address)
  ]);

  // Phase 3: Depends on Phase 2
  const order = await createOrder(user, cart, paymentMethod, shippingOptions);

  // Phase 4: Notifications (fire-and-forget, don't await)
  Promise.allSettled([
    sendEmailConfirmation(user.email, order),
    sendSmsConfirmation(user.phone, order),
    updateAnalytics(order)
  ]).then(results => {
    logNotificationResults(results); // Log but don't block
  });

  return order;
}

// Anti-pattern: Sequential when it could be parallel
async function slowDashboard(userId: string): Promise<Dashboard> {
  // ❌ BAD: Unnecessarily sequential
  const profile = await fetchProfile(userId);                 // 100ms
  const orders = await fetchOrders(userId);                   // 200ms
  const notifications = await fetchNotifications(userId);     // 80ms
  const recommendations = await fetchRecommendations(userId); // 300ms
  // Total: 680ms 😢
  return { profile, orders, notifications, recommendations };
}
```

The Decision Framework:
For each async operation, ask: does it depend on the result of a previous operation? If yes, it must run after that operation (a sequential `await`). If no, group it with the other independent operations in a `Promise.all()` so they run in parallel.
Dependency Graph Visualization:
```
Sequential (must wait):        Parallel (can overlap):

A → B → C → D                  A ─┐
[100ms each]                   B ─┼→ [wait for max]
Total: 400ms                   C ─┤
                               D ─┘
                               Total: max(A,B,C,D)
```
Every await is a potential performance bug if the awaited Promise doesn't depend on previous results. Review your async functions: if you see multiple awaits in sequence, ask whether any can be parallelized with Promise.all().
The problem: Promise.all() starts ALL operations immediately. For 1,000 items, that's 1,000 concurrent operations. This can:

- Exhaust connection pools, sockets, and file descriptors
- Overwhelm downstream services and trigger rate limiting (HTTP 429s)
- Spike memory usage while thousands of in-flight operations hold buffers
The solution: Limit concurrency—run N operations at a time, starting new ones as others complete.
```typescript
// Concurrency-limited parallel execution

// Implementation: Pool-based concurrency limiter
async function mapWithConcurrency<T, R>(
  items: T[],
  mapper: (item: T) => Promise<R>,
  concurrency: number
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  const executing: Set<Promise<void>> = new Set();

  async function enqueue(item: T, index: number) {
    const promise = mapper(item).then(result => {
      results[index] = result;
    });
    const wrapped = promise.finally(() => executing.delete(wrapped));
    executing.add(wrapped);

    // If pool is full, wait for one to complete
    if (executing.size >= concurrency) {
      await Promise.race(executing);
    }
  }

  // Start processing all items
  for (let i = 0; i < items.length; i++) {
    await enqueue(items[i], i);
  }

  // Wait for remaining operations
  await Promise.all(executing);
  return results;
}

// Usage: Process 10,000 items, max 50 concurrent
const urls = Array.from({ length: 10000 }, (_, i) => `/api/item/${i}`);
const results = await mapWithConcurrency(
  urls,
  url => fetch(url).then(r => r.json()),
  50 // Max 50 concurrent requests
);

// Library solution: p-limit (npm)
import pLimit from 'p-limit';

const limit = pLimit(10); // Max 10 concurrent

const limited = await Promise.all(
  urls.map(url => limit(() => fetch(url)))
);

// Library solution: p-map (npm)
import pMap from 'p-map';

const mapped = await pMap(
  urls,
  url => fetch(url).then(r => r.json()),
  { concurrency: 10 }
);

// Real-world example: Rate-limited API calls
const rateLimit = pLimit(5); // 5 concurrent max

async function bulkUpdateUsers(updates: UserUpdate[]) {
  return Promise.all(
    updates.map(update =>
      rateLimit(async () => {
        const result = await api.updateUser(update);
        await delay(100); // Additional rate limiting
        return result;
      })
    )
  );
}
```

| Scenario | Recommended Limit | Reasoning |
|---|---|---|
| Database queries | 5-20 | Connection pool size, query load |
| HTTP requests (same host) | 6-10 | Browser default ~6, server ~10 |
| HTTP requests (diverse hosts) | 50-100 | Spread across many servers |
| File I/O | 10-50 | Disk I/O parallelism, OS buffers |
| CPU-bound work | Number of CPU cores | Beyond core count, more concurrency just adds context-switch overhead |
| External API with rate limit | Match rate limit | Avoid hitting 429 errors |
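For the last row, one simple way to stay under a per-second quota is to space out task starts. A minimal sketch (the fixed-spacing model is an assumption; production systems usually use a token bucket or the API's rate-limit headers):

```typescript
// Start at most `perSecond` tasks per second by spacing their start times.
function rateLimited<T>(
  tasks: Array<() => Promise<T>>,
  perSecond: number
): Promise<T[]> {
  const intervalMs = 1000 / perSecond;
  return Promise.all(
    tasks.map(
      (task, i) =>
        new Promise<T>((resolve, reject) => {
          setTimeout(() => task().then(resolve, reject), i * intervalMs);
        })
    )
  );
}

// Usage sketch: three fake "API calls" at 10 starts/second (100ms apart)
rateLimited([async () => 1, async () => 2, async () => 3], 10)
  .then(results => console.log(results)); // → [ 1, 2, 3 ]
```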
Beyond the basic combinators, complex async workflows require advanced patterns. These patterns appear repeatedly in production systems.
```typescript
// Retry with exponential backoff
async function withRetry<T>(
  operation: () => Promise<T>,
  options: {
    maxRetries: number;
    initialDelayMs: number;
    maxDelayMs: number;
    backoffFactor: number;
    retryIf?: (error: Error) => boolean;
  }
): Promise<T> {
  let lastError: Error | undefined;
  let delay = options.initialDelayMs;

  for (let attempt = 0; attempt <= options.maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error as Error;

      // Check if we should retry this error
      if (options.retryIf && !options.retryIf(lastError)) {
        throw lastError;
      }

      if (attempt < options.maxRetries) {
        // Wait before retrying
        await new Promise(r => setTimeout(r, delay));
        delay = Math.min(delay * options.backoffFactor, options.maxDelayMs);
      }
    }
  }

  throw lastError;
}

// Usage
const data = await withRetry(
  () => fetch('/api/data').then(r => {
    if (!r.ok) throw new Error(`HTTP ${r.status}`);
    return r.json();
  }),
  {
    maxRetries: 3,
    initialDelayMs: 100,
    maxDelayMs: 5000,
    backoffFactor: 2,
    retryIf: (err) => err.message.includes('503') // Only retry 503s
  }
);
```

Notice how these patterns compose: withRetry(withTimeout(operation)) creates a robust operation that times out and retries. Each pattern is a building block; combining them creates production-grade resilience without complex code.
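To see the composition concretely, here is a self-contained sketch. The flaky operation and its latencies are simulated, and the helpers mirror the shapes shown on this page but are trimmed for brevity (fixed retry delay, simplified signatures):

```typescript
// Per-attempt timeout (same shape as withTimeout above, trimmed)
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error('Operation timed out')), ms)
  );
  return Promise.race([promise, timeout]);
}

// Retry with a fixed delay (simplified from the backoff version above)
async function withRetry<T>(
  op: () => Promise<T>,
  maxRetries: number,
  delayMs: number
): Promise<T> {
  let lastError: Error | undefined;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err as Error;
      if (attempt < maxRetries) await new Promise(r => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}

// Simulated flaky operation: the first call is too slow, the second is fast
function makeFlaky() {
  let calls = 0;
  const op = () =>
    new Promise<string>(resolve => {
      calls++;
      const latency = calls === 1 ? 500 : 10; // first call blows the timeout
      setTimeout(() => resolve('ok'), latency);
    });
  return { op, calls: () => calls };
}

const flaky = makeFlaky();
withRetry(() => withTimeout(flaky.op(), 100), 2, 10)
  .then(result => console.log(result, 'after', flaky.calls(), 'calls'));
// → ok after 2 calls
```

The first attempt is killed by the 100ms timeout, the retry succeeds: a timeout and a retry policy combined in one expression.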
We've now mastered the vocabulary of Promise composition—the patterns that transform isolated async operations into sophisticated concurrent workflows. Let's consolidate:

- Promise.all(): parallel execution with fail-fast semantics; use when every result is required
- Promise.allSettled(): resilient aggregation; every outcome is reported and partial success is usable
- Promise.race(): first to settle wins; the primitive behind timeouts
- Promise.any(): first success wins; the primitive behind fallbacks and hedged requests
- Concurrency limiting: parallelism without exhausting connections, memory, or rate limits
- Retry with exponential backoff: resilience against transient failures, composable with timeouts
What's Next:
We've covered how to compose successful Promises. But what about when things go wrong? The next page dives deep into error handling with Futures—propagation semantics, recovery strategies, cleanup, and designing async error handling that's robust and maintainable.
You now command the full vocabulary of Promise composition: parallel execution, racing, resilient aggregation, concurrency control, and advanced patterns. These tools transform async spaghetti into elegant, efficient concurrent workflows.