Understanding long polling conceptually is straightforward: hold the connection, respond when data arrives. But building a production-grade long polling system requires mastering dozens of subtle mechanics that distinguish amateur implementations from battle-tested infrastructure.
In this page, we'll dissect long polling at the protocol level, examining exactly how bytes flow across the wire, how servers manage thousands of waiting connections efficiently, and how the reconnection dance maintains reliability despite network chaos.
By the end of this page, you'll understand the HTTP-level mechanics of long polling, server-side patterns for managing pending requests, the client reconnection protocol, and how to handle the complex state machine that emerges from concurrent long-lived connections.
Long polling relies on specific HTTP behaviors that aren't always obvious. Let's examine exactly what happens at the protocol level when a long poll request is made.
The Wire-Level Exchange:
When a client initiates a long poll, here's the actual HTTP traffic:
```http
GET /api/events/poll?since=1704067200000 HTTP/1.1
Host: api.example.com
Connection: keep-alive
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
X-Long-Poll-Timeout: 30
Cache-Control: no-cache

--- Connection established, waiting for response ---
--- 27.3 seconds later ---

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 247
Date: Mon, 01 Jan 2024 12:00:27 GMT
X-Poll-Timestamp: 1704067227000

{
  "events": [
    {
      "id": "evt_12345",
      "type": "notification",
      "data": {"message": "New comment on your post"},
      "timestamp": 1704067227000
    }
  ],
  "nextSince": 1704067227001
}
```
Critical Protocol Details:
1. Connection Persistence:
The Connection: keep-alive header ensures the TCP connection can be reused. However, in long polling, this is somewhat misleading—each long poll typically creates a new HTTP request, though the underlying TCP connection may be reused.
2. Cache Prevention:
The Cache-Control: no-cache header is essential. Without it, aggressive proxies or browsers might cache responses, returning stale data instead of waiting for fresh events. Some implementations also add:
```http
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
```
3. Content-Length and Chunked Encoding:
Long poll responses must either specify Content-Length or use Transfer-Encoding: chunked. With Content-Length, the server cannot send headers until it knows the full body, so the entire response is buffered before transmission. Chunked encoding would allow streaming, but long poll responses are typically small enough that buffering the complete body is the norm.
4. Connection Timeout Negotiation:
The X-Long-Poll-Timeout header (a custom, implementation-dependent header) tells the server how long the client is willing to wait. This lets hold times be negotiated per request rather than hardcoded on the server.
Many proxies have default timeouts (60 seconds for nginx, 30 seconds for some cloud load balancers). If your long poll timeout exceeds the proxy timeout, connections will be silently terminated. Always configure timeouts below the minimum proxy timeout in your infrastructure stack.
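As a small sketch of that advice, one way to derive a safe hold time is to take the strictest intermediary timeout and subtract a margin. The proxy names and millisecond values here are illustrative assumptions; substitute the limits measured in your own infrastructure.

```typescript
// Known idle timeouts in the request path, in milliseconds.
// These values are illustrative - measure your own infrastructure.
const proxyIdleTimeouts = {
  nginx: 60_000,            // e.g. proxy_read_timeout
  cloudLoadBalancer: 30_000 // e.g. an LB idle timeout
};

// Pick a poll timeout below the strictest proxy, with a safety margin,
// so the server always responds before any intermediary cuts the line.
function safePollTimeout(marginMs = 5_000): number {
  const strictest = Math.min(...Object.values(proxyIdleTimeouts));
  return Math.max(strictest - marginMs, 1_000);
}
```

With the assumed values above, `safePollTimeout()` yields 25 seconds: below the 30-second load balancer limit with room for the response to flush.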
HTTP/1.1 vs HTTP/2 Considerations:
Long polling behaves differently under HTTP/2:
| Aspect | HTTP/1.1 | HTTP/2 |
|---|---|---|
| Connection per request | Often 1:1 (limited by browser ~6) | Multiplexed streams |
| Head-of-line blocking | Problem—one slow poll blocks others | No—streams independent |
| Header compression | None | HPACK compression |
| Server push | Not available | Available but not used for long poll |
| Connection limit impact | Significant—6 long polls = no room for assets | Minimal—hundreds of concurrent streams |
HTTP/2's stream multiplexing elegantly solves the connection limit problem that plagued HTTP/1.1 long polling implementations.
Managing long-lived requests server-side requires patterns distinct from typical request handling. Instead of process-and-respond, the server must park requests and resume them later.
The Pending Request Store:
The core data structure for long polling is a map from the request identifier (user, channel, or topic) to the pending request context:
```typescript
interface PendingRequest {
  userId: string;
  channelId: string;
  response: Response;
  timer: NodeJS.Timeout;
  createdAt: number;
  lastEventId: string | null;
}

class LongPollManager {
  // Map from channel ID to set of pending requests
  private pending: Map<string, Set<PendingRequest>> = new Map();
  // Map from request to its channel (for cleanup)
  private requestToChannel: Map<PendingRequest, string> = new Map();

  private readonly defaultTimeout = 30000;
  private readonly maxPendingPerChannel = 10000;

  /**
   * Register a new long poll request.
   * Returns the pending request, or null if rejected (limits exceeded).
   */
  register(
    channelId: string,
    userId: string,
    response: Response,
    lastEventId: string | null
  ): PendingRequest | null {
    // Check capacity limits
    const channelPending = this.pending.get(channelId);
    if (channelPending && channelPending.size >= this.maxPendingPerChannel) {
      return null; // Reject - channel overloaded
    }

    // Create pending request record
    const request: PendingRequest = {
      userId,
      channelId,
      response,
      timer: setTimeout(() => this.timeout(request), this.defaultTimeout),
      createdAt: Date.now(),
      lastEventId,
    };

    // Register in data structures
    if (!this.pending.has(channelId)) {
      this.pending.set(channelId, new Set());
    }
    this.pending.get(channelId)!.add(request);
    this.requestToChannel.set(request, channelId);

    // Track metrics
    metrics.gauge('longpoll.pending', this.getTotalPending());

    return request;
  }

  /**
   * Deliver events to all pending requests for a channel
   */
  async broadcast(channelId: string, events: Event[]): Promise<number> {
    const channelPending = this.pending.get(channelId);
    if (!channelPending || channelPending.size === 0) {
      return 0; // No one waiting
    }

    let delivered = 0;
    const toNotify = Array.from(channelPending);

    for (const request of toNotify) {
      // Filter events based on lastEventId (don't re-send seen events)
      const newEvents = this.filterEvents(events, request.lastEventId);
      if (newEvents.length > 0) {
        this.respond(request, { events: newEvents });
        delivered++;
      }
    }

    return delivered;
  }

  /**
   * Send response and cleanup
   */
  private respond(request: PendingRequest, data: object): void {
    // Clear timeout
    clearTimeout(request.timer);

    // Remove from tracking structures
    const channelId = this.requestToChannel.get(request);
    if (channelId) {
      this.pending.get(channelId)?.delete(request);
      this.requestToChannel.delete(request);
    }

    // Send response
    try {
      request.response.json(data);
    } catch (error) {
      // Client already disconnected - log but don't throw
      console.warn('Failed to send long poll response:', error);
    }

    // Track metrics
    const duration = Date.now() - request.createdAt;
    metrics.histogram('longpoll.response_time', duration);
  }

  /**
   * Handle request timeout
   */
  private timeout(request: PendingRequest): void {
    this.respond(request, { events: [], timeout: true });
    metrics.increment('longpoll.timeout');
  }

  /**
   * Handle client disconnect
   */
  cleanup(request: PendingRequest): void {
    clearTimeout(request.timer);
    const channelId = this.requestToChannel.get(request);
    if (channelId) {
      this.pending.get(channelId)?.delete(request);
      this.requestToChannel.delete(request);
    }
    metrics.increment('longpoll.client_disconnect');
  }

  private getTotalPending(): number {
    let total = 0;
    for (const channel of this.pending.values()) {
      total += channel.size;
    }
    return total;
  }

  private filterEvents(events: Event[], lastEventId: string | null): Event[] {
    if (!lastEventId) return events;
    const lastIndex = events.findIndex(e => e.id === lastEventId);
    return lastIndex >= 0 ? events.slice(lastIndex + 1) : events;
  }
}
```
Memory Considerations:
Each pending request consumes memory for:
- the PendingRequest record itself (user ID, channel ID, cursor, timestamps)
- the held-open Response object reference
- an armed setTimeout timer
- entries in the pending and requestToChannel maps

Total per connection: roughly 600-900 bytes
For 100,000 concurrent long polls: ~60-90 MB just for request tracking, plus the HTTP connection overhead managed by the runtime.
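That back-of-envelope arithmetic can be captured in a tiny sizing helper. The 750-byte default below is just the midpoint of the rough ~600-900 byte estimate above, not a measured figure:

```typescript
// Rough memory needed for long poll bookkeeping alone, excluding
// the TCP/HTTP connection overhead managed by the runtime.
function pendingRequestMemoryMB(
  concurrentPolls: number,
  bytesPerRequest = 750 // midpoint of the ~600-900 byte estimate
): number {
  return (concurrentPolls * bytesPerRequest) / (1024 * 1024);
}

// 100,000 concurrent polls at ~750 bytes each is roughly 70 MB
```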
Don't confuse long poll request memory with TCP connection memory. Each long poll holds an open HTTP request, which keeps a TCP connection alive. Node.js/Go can handle 100K+ concurrent connections efficiently, but ensure your server is tuned: increase file descriptor limits (ulimit -n), adjust TCP buffer sizes, and monitor memory carefully.
The client side of long polling is deceptively complex. A robust client must handle multiple edge cases: successful responses, timeouts, errors, and network changes—all while maintaining a seamless user experience.
The Complete Client State Machine:
```
Long Polling Client State Machine
═════════════════════════════════════════════

            ┌─────────────────┐
            │  DISCONNECTED   │◀──── Initial state
            └────────┬────────┘
                     │ start() called
                     ▼
            ┌─────────────────┐
       ┌───▶│   CONNECTING    │
       │    └────────┬────────┘
       │             │ Connection established
       │             ▼
       │    ┌─────────────────┐
       │    │     POLLING     │◀───┐
       │    └────────┬────────┘    │
       │             │             │ Data received
       │     ┌───────┼───────┐     │ OR timeout
       │     ▼       ▼       ▼     │
       │   Error  Timeout  Data    │
       │     │       │       │     │
       │     │       └───────┴─────┘
       │     ▼
       │    ┌─────────────────┐
       └────│     BACKOFF     │  reconnect after delay
            └────────┬────────┘
                     │ max retries exceeded
                     ▼
            ┌─────────────────┐
            │     FAILED      │
            └─────────────────┘
```
```typescript
type LongPollState = 'disconnected' | 'connecting' | 'polling' | 'backoff' | 'failed';

interface LongPollOptions {
  endpoint: string;
  pollTimeout: number;     // How long server holds connection (match server)
  requestTimeout: number;  // Client-side abort timeout (slightly > pollTimeout)
  maxRetries: number;      // Max consecutive failures before giving up
  baseBackoff: number;     // Initial backoff delay
  maxBackoff: number;      // Maximum backoff delay
  jitterFactor: number;    // Randomization factor (0-1)
  onEvent: (events: Event[]) => void;
  onStateChange?: (state: LongPollState) => void;
  onError?: (error: Error) => void;
}

class LongPollClient {
  private state: LongPollState = 'disconnected';
  private lastEventId: string | null = null;
  private retryCount = 0;
  private abortController: AbortController | null = null;
  private reconnectTimer: number | null = null;

  constructor(private options: LongPollOptions) {}

  /**
   * Start the long polling connection
   */
  start(): void {
    if (this.state !== 'disconnected' && this.state !== 'failed') {
      console.warn('Already connected or connecting');
      return;
    }
    this.setState('connecting');
    this.poll();
  }

  /**
   * Stop long polling gracefully
   */
  stop(): void {
    this.abortController?.abort();
    if (this.reconnectTimer) {
      clearTimeout(this.reconnectTimer);
      this.reconnectTimer = null;
    }
    this.setState('disconnected');
  }

  /**
   * Execute single poll cycle
   */
  private async poll(): Promise<void> {
    this.abortController = new AbortController();

    try {
      this.setState('polling');

      const url = new URL(this.options.endpoint);
      if (this.lastEventId) {
        url.searchParams.set('since', this.lastEventId);
      }

      const response = await fetch(url.toString(), {
        method: 'GET',
        headers: {
          'Accept': 'application/json',
          'Cache-Control': 'no-cache',
          'X-Long-Poll-Timeout': String(this.options.pollTimeout / 1000),
        },
        // Combine manual stop() with the client-side timeout
        // (AbortSignal.any requires a modern runtime)
        signal: AbortSignal.any([
          this.abortController.signal,
          AbortSignal.timeout(this.options.requestTimeout),
        ]),
      });

      // Success - reset retry counter
      this.retryCount = 0;

      if (response.status === 200) {
        // Data received
        const data = await response.json();
        if (data.events && data.events.length > 0) {
          // Update cursor for next poll
          this.lastEventId = data.events[data.events.length - 1].id;
          // Deliver events to application
          this.options.onEvent(data.events);
        }
        // Immediately reconnect for next batch
        this.scheduleReconnect(0);
      } else if (response.status === 204) {
        // Timeout - no data, but not an error
        this.scheduleReconnect(0);
      } else {
        // Unexpected status
        throw new Error(`Unexpected response status: ${response.status}`);
      }
    } catch (error) {
      this.handleError(error as Error);
    }
  }

  /**
   * Handle poll errors with exponential backoff
   */
  private handleError(error: Error): void {
    // Don't treat manual abort as error
    if (error.name === 'AbortError' && this.state === 'disconnected') {
      return;
    }

    this.retryCount++;
    this.options.onError?.(error);

    if (this.retryCount >= this.options.maxRetries) {
      this.setState('failed');
      return;
    }

    // Calculate backoff with exponential increase and jitter
    const exponentialDelay = Math.min(
      this.options.baseBackoff * Math.pow(2, this.retryCount - 1),
      this.options.maxBackoff
    );
    const jitter = exponentialDelay * this.options.jitterFactor * Math.random();
    const delay = exponentialDelay + jitter;

    this.setState('backoff');
    this.scheduleReconnect(delay);
  }

  /**
   * Schedule reconnection after delay
   */
  private scheduleReconnect(delay: number): void {
    if (this.state === 'failed' || this.state === 'disconnected') {
      return;
    }
    this.reconnectTimer = window.setTimeout(() => {
      this.reconnectTimer = null;
      this.poll();
    }, delay);
  }

  /**
   * Update state and notify listener
   */
  private setState(state: LongPollState): void {
    if (this.state !== state) {
      this.state = state;
      this.options.onStateChange?.(state);
    }
  }

  /**
   * Get current state
   */
  getState(): LongPollState {
    return this.state;
  }
}

// Usage example
const client = new LongPollClient({
  endpoint: 'https://api.example.com/events/poll',
  pollTimeout: 30000,
  requestTimeout: 35000, // 5s buffer over server timeout
  maxRetries: 10,
  baseBackoff: 1000,
  maxBackoff: 30000,
  jitterFactor: 0.3,
  onEvent: (events) => {
    events.forEach(event => handleEvent(event));
  },
  onStateChange: (state) => {
    updateConnectionIndicator(state);
  },
  onError: (error) => {
    console.error('Long poll error:', error);
  },
});

client.start();
```
After receiving data or a timeout, the client reconnects with zero delay. This maintains the "always connected" behavior that makes long polling feel real-time. The brief gap between requests (typically < 100ms) is imperceptible to users and doesn't miss events (server-side event buffering handles the gap).
A critical aspect of long polling is exactly-once delivery of events. Between poll cycles, how do we ensure the client receives all events without duplicates?
The Cursor Mechanism:
Long polling uses a cursor (also called sequence number, offset, or event ID) to track position in the event stream. The client sends its last seen cursor, and the server returns only newer events.
```typescript
// Event storage with cursor support
interface EventStore {
  // Append new event and return its cursor
  append(channelId: string, event: object): Promise<string>;

  // Get events after cursor, up to limit
  getAfter(channelId: string, cursor: string | null, limit: number): Promise<{
    events: Array<{ cursor: string; data: object }>;
    hasMore: boolean;
  }>;
}

// Implementation using timestamp-based cursors
class TimestampEventStore implements EventStore {
  private events: Map<string, Array<{ cursor: string; data: object; timestamp: number }>> = new Map();
  private sequence = 0;

  async append(channelId: string, event: object): Promise<string> {
    const timestamp = Date.now();
    const cursor = `${timestamp}-${++this.sequence}`;

    if (!this.events.has(channelId)) {
      this.events.set(channelId, []);
    }
    this.events.get(channelId)!.push({
      cursor,
      data: event,
      timestamp,
    });

    return cursor;
  }

  async getAfter(
    channelId: string,
    cursor: string | null,
    limit: number = 100
  ): Promise<{ events: Array<{ cursor: string; data: object }>; hasMore: boolean }> {
    const channelEvents = this.events.get(channelId) || [];

    let startIndex = 0;
    if (cursor) {
      // Parse cursor to get timestamp-sequence
      const [ts, seq] = cursor.split('-').map(Number);
      startIndex = channelEvents.findIndex(e => {
        const [ets, eseq] = e.cursor.split('-').map(Number);
        return ets > ts || (ets === ts && eseq > seq);
      });
      if (startIndex < 0) startIndex = channelEvents.length;
    }

    const events = channelEvents.slice(startIndex, startIndex + limit);
    const hasMore = startIndex + limit < channelEvents.length;

    return { events, hasMore };
  }
}

const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

// Long poll handler using cursor
app.get('/events/poll', async (req, res) => {
  const channelId = req.query.channel as string;
  const since = req.query.since as string | undefined;
  const timeout = Math.min(
    parseInt(req.headers['x-long-poll-timeout'] as string || '30') * 1000,
    60000 // Max 60 seconds
  );

  // First check for existing events.
  // NOTE: an event appended between this check and the listener
  // registration below can be missed - see the warning that follows.
  const initial = await eventStore.getAfter(channelId, since || null, 100);
  if (initial.events.length > 0) {
    return res.json({ events: initial.events, hasMore: initial.hasMore });
  }

  // No events - wait for new ones
  const cleanup = () => {
    eventBus.off(`channel:${channelId}`, onEvent);
  };

  const timer = setTimeout(() => {
    cleanup();
    res.status(204).end();
  }, timeout);

  const onEvent = async () => {
    // Small delay to allow batching
    await sleep(50);
    clearTimeout(timer);
    cleanup();

    // Fetch all events since original cursor
    const result = await eventStore.getAfter(channelId, since || null, 100);
    res.json({ events: result.events, hasMore: result.hasMore });
  };

  eventBus.once(`channel:${channelId}`, onEvent);

  req.on('close', () => {
    clearTimeout(timer);
    cleanup();
  });
});
```
Cursor Design Considerations:
| Cursor Type | Format | Pros | Cons |
|---|---|---|---|
| Timestamp | 1704067200000 | Simple, sortable | Duplicates possible, clock skew |
| Timestamp + Sequence | 1704067200000-1 | Unique, sortable | Slightly complex parsing |
| UUID | evt_abc123... | Globally unique | Not sortable, requires index |
| Monotonic Sequence | 12345 | Simplest, fastest | Single point of generation |
| Composite | {partition}-{seq} | Scalable | Complex client handling |
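One pitfall with the timestamp + sequence format: cursors must be compared numerically, because a plain string comparison orders "...-10" before "...-2". A minimal comparator (the function name is illustrative):

```typescript
// Compare two "timestamp-sequence" cursors numerically.
// Returns negative if a < b, zero if equal, positive if a > b.
function compareCursors(a: string, b: string): number {
  const [aTs, aSeq] = a.split('-').map(Number);
  const [bTs, bSeq] = b.split('-').map(Number);
  return aTs !== bTs ? aTs - bTs : aSeq - bSeq;
}
```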
Cursor Ordering Invariants:
The cursor system must guarantee:
- Monotonicity: every new event receives a cursor strictly greater than all previously issued cursors
- Uniqueness: no two events share a cursor, even within the same millisecond
- Stable ordering: comparing two cursors yields the same result on client and server, every time
If an event is created between the client checking for events and registering its listener, it could be missed. The solution: always check for events AFTER registering the listener, with appropriate synchronization. Alternatively, use a small time window overlap where the client re-requests recently-seen events.
A single user often needs updates from multiple sources: notifications, chat messages, feed updates. Creating separate long poll connections for each would be wasteful. Multiplexing allows a single long poll to subscribe to multiple channels.
The Multiplexed Long Poll Pattern:
The client subscribes to multiple channels with a single request:

```http
POST /events/subscribe HTTP/1.1
Content-Type: application/json

{
  "channels": [
    { "id": "notifications:user123", "since": "1704067200000-0" },
    { "id": "chat:room456", "since": "1704067190000-5" },
    { "id": "presence:team789", "since": null }
  ],
  "timeout": 30
}
```

```typescript
// Server implementation
interface ChannelSubscription {
  id: string;
  since: string | null;
}

app.post('/events/subscribe', async (req, res) => {
  const { channels, timeout = 30 } = req.body as {
    channels: ChannelSubscription[];
    timeout: number;
  };

  const timeoutMs = Math.min(timeout * 1000, 60000);
  const results: Map<string, any[]> = new Map();

  // Check for existing events on all channels
  for (const channel of channels) {
    const events = await eventStore.getAfter(channel.id, channel.since, 50);
    if (events.events.length > 0) {
      results.set(channel.id, events.events);
    }
  }

  // If we have any events, return immediately
  if (results.size > 0) {
    return res.json({
      channels: Object.fromEntries(results),
    });
  }

  // No events - set up listeners on all channels
  const listeners: Map<string, (...args: any[]) => void> = new Map();
  let resolved = false;

  const cleanup = () => {
    for (const [channelId, listener] of listeners) {
      eventBus.off(`channel:${channelId}`, listener);
    }
    listeners.clear();
  };

  const resolve = async () => {
    if (resolved) return;
    resolved = true;
    clearTimeout(timer);
    cleanup();

    // Fetch current state of all channels
    const fresh: Map<string, any[]> = new Map();
    for (const channel of channels) {
      const events = await eventStore.getAfter(channel.id, channel.since, 50);
      if (events.events.length > 0) {
        fresh.set(channel.id, events.events);
      }
    }

    if (fresh.size > 0) {
      res.json({ channels: Object.fromEntries(fresh) });
    } else {
      res.status(204).end();
    }
  };

  const timer = setTimeout(resolve, timeoutMs);

  // Register listener for each channel
  for (const channel of channels) {
    const listener = () => {
      // Small batching delay
      setTimeout(resolve, 50);
    };
    listeners.set(channel.id, listener);
    eventBus.once(`channel:${channel.id}`, listener);
  }

  // Handle client disconnect
  req.on('close', () => {
    clearTimeout(timer);
    cleanup();
  });
});
```
Multiplexing Benefits:
| Metric | Separate Connections | Multiplexed |
|---|---|---|
| Connections per user | N (one per channel) | 1 |
| Server memory | N × connection overhead | 1 × connection overhead |
| TCP slots consumed | N | 1 |
| HTTP request overhead | N × headers | 1 × headers (channels in body) |
| Cursor management | N separate cursors | Single atomic cursor set |
Chrome Connection Limit Context:
Browsers limit concurrent HTTP/1.1 connections per domain (6 in Chrome). With separate long polls:
- each subscribed channel permanently occupies one of the 6 slots
- three channels leave only 3 slots for page assets and API calls
- adding more channels can stall everything else the page loads

Multiplexing solves this completely.
Allow clients to update their channel subscriptions without disconnecting. The server can accept 'subscribe' and 'unsubscribe' requests that modify the pending request's channel list. This enables dynamic UIs where channels change based on what the user is viewing.
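One client-side way to sketch this: keep a mutable subscription set and rebuild the channel list from it on every poll cycle, so subscribe/unsubscribe calls take effect on the next request without tearing down the client. The class is hypothetical; the request body shape follows the multiplexed example above.

```typescript
// Tracks which channels the next multiplexed poll should request.
// Changes apply on the next poll cycle - no reconnect needed.
class SubscriptionSet {
  private cursors = new Map<string, string | null>();

  subscribe(channelId: string, since: string | null = null): void {
    if (!this.cursors.has(channelId)) this.cursors.set(channelId, since);
  }

  unsubscribe(channelId: string): void {
    this.cursors.delete(channelId);
  }

  // Record the latest cursor seen for a channel after each response.
  advance(channelId: string, cursor: string): void {
    if (this.cursors.has(channelId)) this.cursors.set(channelId, cursor);
  }

  // Build the body for the next subscribe request.
  toRequestBody(timeout = 30): {
    channels: { id: string; since: string | null }[];
    timeout: number;
  } {
    return {
      channels: [...this.cursors].map(([id, since]) => ({ id, since })),
      timeout,
    };
  }
}
```

A UI can call `subscribe`/`unsubscribe` as the user navigates; the long poll loop simply calls `toRequestBody()` before each cycle and `advance()` after each response.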
When multiple events occur in rapid succession, naive long polling would respond after the first event, forcing the client to immediately reconnect and potentially miss the burst. Batching solves this by waiting briefly to collect related events.
The Batching Window:
```typescript
class BatchingLongPollManager {
  private pending: Map<string, PendingRequest[]> = new Map();
  private batchTimers: Map<string, NodeJS.Timeout> = new Map();

  private readonly batchWindow = 100;  // ms to wait for more events
  private readonly maxBatchSize = 100; // max events per response

  /**
   * Trigger event delivery with batching
   */
  async triggerEvent(channelId: string, event: Event): Promise<void> {
    // If no pending requests, nothing to do
    if (!this.pending.has(channelId) || this.pending.get(channelId)!.length === 0) {
      return;
    }

    // If batch timer already running, let it handle this event
    if (this.batchTimers.has(channelId)) {
      return;
    }

    // Start batch timer
    const timer = setTimeout(
      () => this.flushBatch(channelId),
      this.batchWindow
    );
    this.batchTimers.set(channelId, timer);
  }

  /**
   * Deliver batched events
   */
  private async flushBatch(channelId: string): Promise<void> {
    // Clear timer reference
    this.batchTimers.delete(channelId);

    const requests = this.pending.get(channelId);
    if (!requests || requests.length === 0) {
      return;
    }

    // Fetch all pending events
    const events = await eventStore.getRecent(channelId, this.maxBatchSize);

    // Deliver to each waiting request, filtering by their cursor.
    // NOTE: string cursor comparison is only safe if cursors are
    // fixed-width; otherwise compare the numeric components.
    for (const request of [...requests]) {
      const filtered = events.filter(e =>
        !request.lastEventId || e.cursor > request.lastEventId
      );
      if (filtered.length > 0) {
        // respond() writes the response and removes the request from
        // the pending structures, as in LongPollManager earlier
        this.respond(request, { events: filtered });
      }
    }
  }

  /**
   * Force immediate flush (timeout or max batch size)
   */
  forceFlush(channelId: string): void {
    const timer = this.batchTimers.get(channelId);
    if (timer) {
      clearTimeout(timer);
      this.batchTimers.delete(channelId);
    }
    this.flushBatch(channelId);
  }
}
```
Batching Tradeoffs:
| Batch Window | Latency Impact | Efficiency Gain | Use Case |
|---|---|---|---|
| 0ms (no batching) | Minimal | None | Interactive chat |
| 50-100ms | Imperceptible | Moderate | Most applications |
| 200-500ms | Noticeable | High | Feed updates |
| 1000ms+ | Significant | Maximum | Analytics dashboards |
Coalescing:
For certain event types, newer events supersede older ones. Coalescing replaces queued events with the latest version:
```typescript
// Example: Presence updates
// Instead of: ["user online", "user away", "user busy", "user online"]
// Coalesce to: ["user online"] (latest state only)
class CoalescingEventBuffer {
  private buffer: Map<string, Event> = new Map();

  add(coalescingKey: string, event: Event): void {
    // Replace any existing event with same key
    this.buffer.set(coalescingKey, event);
  }

  flush(): Event[] {
    const events = Array.from(this.buffer.values());
    this.buffer.clear();
    return events;
  }
}
```
Coalescing is ideal for state updates where only the current state matters (online status, typing indicators, cursor position).
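A usage sketch, re-declaring a minimal version of the buffer so the snippet stands alone: four presence updates for the same user coalesce into a single event carrying only the final state.

```typescript
type PresenceEvent = { type: string; data: { status: string } };

// Minimal re-declaration of the coalescing buffer above.
class CoalescingEventBuffer {
  private buffer = new Map<string, PresenceEvent>();

  add(coalescingKey: string, event: PresenceEvent): void {
    this.buffer.set(coalescingKey, event); // latest write wins
  }

  flush(): PresenceEvent[] {
    const events = [...this.buffer.values()];
    this.buffer.clear();
    return events;
  }
}

// Four status changes for one user collapse into the final state.
const buf = new CoalescingEventBuffer();
for (const status of ['online', 'away', 'busy', 'online']) {
  buf.add('presence:user123', { type: 'presence', data: { status } });
}
// flush() now returns a single event with status "online"
```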
Long-lived connections encounter more error conditions than traditional request-response. A robust implementation must handle each gracefully.
Error Taxonomy:
| Error Type | Detection | Server Response | Client Action |
|---|---|---|---|
| Network failure | Connection reset | N/A | Backoff + reconnect |
| Server timeout | Timer fires | 204 No Content | Immediate reconnect |
| Client timeout | Abort controller | N/A | Warning + reconnect |
| Authentication expired | Token validation | 401 Unauthorized | Refresh token + retry |
| Rate limited | Request counter | 429 Too Many Requests | Backoff using Retry-After |
| Server overload | Queue full | 503 Service Unavailable | Exponential backoff |
| Invalid cursor | Cursor not found | 400 Bad Request + reset cursor | Start from null cursor |
| Channel not found | Lookup fails | 404 Not Found | Remove subscription |
```typescript
type ResponseAction =
  | { action: 'process' | 'skip' | 'backoff'; reconnectDelay: number }
  | { action: 'reauthenticate'; callback: () => Promise<void> }
  | { action: 'fail'; error: Error };

// Excerpt - fields like consecutiveErrors, lastEventId, options, and
// refreshAuthToken() are assumed to exist on the full class.
class ResilientLongPollClient {
  private async handleResponse(response: Response): Promise<ResponseAction> {
    switch (response.status) {
      case 200:
        return { action: 'process', reconnectDelay: 0 };

      case 204:
        return { action: 'skip', reconnectDelay: 0 };

      case 401:
        return {
          action: 'reauthenticate',
          callback: async () => {
            await this.refreshAuthToken();
          },
        };

      case 429: {
        const retryAfter = parseInt(
          response.headers.get('Retry-After') || '60'
        );
        return { action: 'backoff', reconnectDelay: retryAfter * 1000 };
      }

      case 400: {
        // Check if cursor reset needed
        const body = await response.json().catch(() => null);
        if (body?.error === 'INVALID_CURSOR') {
          this.resetCursor();
          return { action: 'skip', reconnectDelay: 0 };
        }
        return { action: 'fail', error: new Error('Bad request') };
      }

      case 502:
      case 503:
      case 504:
        return {
          action: 'backoff',
          reconnectDelay: this.calculateExponentialBackoff(),
        };

      default:
        return { action: 'backoff', reconnectDelay: 5000 };
    }
  }

  private calculateExponentialBackoff(): number {
    const base = 1000;
    const max = 60000;
    const jitter = 0.3;
    const delay = Math.min(
      base * Math.pow(2, this.consecutiveErrors),
      max
    );
    return delay * (1 + jitter * Math.random());
  }

  private resetCursor(): void {
    // Warning: May cause duplicate events
    console.warn('Cursor reset - events may be replayed');
    this.lastEventId = null;
    this.options.onCursorReset?.();
  }
}
```
When a server fails, all clients reconnect simultaneously. Without jittered backoff, this creates a "retry storm" that can prevent recovery. Always use randomized delays. For 10,000 clients with 30-second jitter, reconnects spread across 30 seconds instead of hitting at once.
We've dissected the internal mechanics that make long polling work reliably at scale. Let's consolidate the key insights:
- Protocol level: cache-prevention headers, timeout negotiation, and poll timeouts kept below every proxy timeout in the path
- Server side: a pending request store parks connections per channel, with capacity limits, timeouts, and disconnect cleanup
- Client side: an explicit state machine reconnects with zero delay on success and jittered exponential backoff on failure
- Delivery: cursors track stream position across poll cycles; multiplexing, batching, and coalescing cut connection and event overhead
- Errors: each failure class (timeouts, expired auth, rate limits, invalid cursors) needs its own explicit recovery path
What's Next:
Now that we understand the mechanics, we'll examine the most challenging aspect of long polling: timeout handling. Managing server-side timeouts, proxy timeouts, and client timeouts requires careful orchestration to maintain reliable connections.
You now understand the internal mechanics of long polling: protocol behavior, server-side management, client reconnection, cursors, multiplexing, batching, and error handling. Next, we'll tackle the timeout handling strategies that keep long polling connections reliable.