Network connections are inherently unreliable. Servers restart, networks hiccup, mobile devices switch between WiFi and cellular, and proxies time out idle connections. A production-grade SSE implementation must handle these disruptions gracefully—reconnecting automatically and recovering lost messages without user intervention.

The SSE specification includes built-in reconnection capabilities, but understanding their mechanics and limitations is essential for building truly resilient systems. Beyond the basics, sophisticated recovery strategies ensure no messages are lost even during extended outages.
By the end of this page, you will understand EventSource's automatic reconnection behavior, how to use Last-Event-ID for message recovery, custom retry strategies including exponential backoff with jitter, server-side best practices for supporting reconnection, and production patterns for zero message loss.
One of SSE's most valuable features is automatic reconnection—a capability built into the EventSource specification that WebSocket lacks. Understanding this behavior helps you leverage it effectively.

**Default Reconnection Behavior**

When an SSE connection is interrupted, the EventSource object automatically attempts to reconnect.
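A minimal sketch of this default behavior follows (assuming an `/api/events` endpoint); note that no retry logic is written by hand, because the browser schedules reconnection itself:

```typescript
// Minimal sketch: the browser handles reconnection on its own.
// The retry interval is implementation-defined (typically a few seconds)
// unless the server overrides it with a `retry:` field.
const source = new EventSource('/api/events');

source.onopen = () => {
  console.log('Stream open');
};

source.onmessage = (event) => {
  console.log('Received:', event.data);
};

source.onerror = () => {
  if (source.readyState === EventSource.CONNECTING) {
    // A reconnect attempt is already scheduled; nothing to do here.
    console.log('Connection lost, browser will retry automatically');
  }
};
```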
**Reconnection Triggers**

EventSource attempts reconnection in several scenarios:
| Scenario | Behavior | Notes |
|---|---|---|
| Server closes connection | Reconnects after retry interval | Normal restart/redeployment scenario |
| Network interruption | Reconnects when network available | Browser detects network change |
| Connection timeout | Reconnects after retry interval | Proxy/firewall terminated idle connection |
| Server sends empty response | Reconnects immediately | Unusual edge case |
| HTTP 5xx error | Reconnects after retry interval | Server-side transient error |
| HTTP 4xx error | No retry (connection closed) | Client error - requires intervention |
| HTTP 204 | No retry (connection closed) | Server explicitly says 'no content' |
| Close called programmatically | No retry | Intentional disconnection |
```javascript
const events = new EventSource('/api/events');

// Track connection state
let connectionAttempts = 0;
let lastConnectedAt = null;

events.onopen = () => {
  connectionAttempts++;
  lastConnectedAt = new Date();
  console.log(`Connected (attempt #${connectionAttempts}) at ${lastConnectedAt}`);

  // Reset reconnection UI
  hideReconnectionBanner();
};

events.onerror = (error) => {
  // Check the readyState to understand what's happening
  switch (events.readyState) {
    case EventSource.CONNECTING:
      // Actively trying to reconnect
      console.log('Reconnecting...');
      showReconnectionBanner('Reconnecting to server...');
      break;

    case EventSource.OPEN:
      // Error during open connection (rare)
      console.error('Error while connected:', error);
      break;

    case EventSource.CLOSED:
      // Permanently closed - won't retry
      console.error('Connection permanently closed');
      showReconnectionBanner('Connection failed. Please refresh the page.');
      break;
  }
};

// Note: EventSource doesn't expose the actual retry interval
// or provide hooks into the retry mechanism directly.
// For more control, see custom retry strategies below.
```

EventSource treats HTTP 4xx responses as permanent failures and stops retrying. This can be problematic if your auth token expires during an active connection—the reconnection attempt returns 401, and SSE gives up. Implement token refresh logic that detects auth failures and creates a new EventSource with fresh credentials.
The `id` field in SSE messages enables a powerful recovery mechanism: when reconnecting, the client automatically sends the last received event ID to the server, allowing the server to resume from where the client left off.

**How Last-Event-ID Works**
```http
# Initial connection request
GET /api/events HTTP/1.1
Host: example.com
Accept: text/event-stream

# Server response - streaming events with IDs
HTTP/1.1 200 OK
Content-Type: text/event-stream

id: 1001
event: message
data: {"text": "Hello"}

id: 1002
event: message
data: {"text": "World"}

id: 1003
event: notification
data: {"type": "alert"}

# Connection drops here...

# Reconnection request - note the Last-Event-ID header!
GET /api/events HTTP/1.1
Host: example.com
Accept: text/event-stream
Last-Event-ID: 1003

# Server should resume from 1004 onwards
HTTP/1.1 200 OK
Content-Type: text/event-stream

id: 1004
event: message
data: {"text": "First message after reconnect"}
```

**Server-Side Implementation**

The server must handle the Last-Event-ID header to provide message recovery:
```javascript
const express = require('express');
const app = express();

// In-memory message store (use Redis/DB in production)
const messageStore = new Map(); // id -> { event, data, timestamp }
let currentEventId = 0;

// Store messages for recovery
function storeMessage(event, data) {
  const id = String(++currentEventId);
  messageStore.set(id, { id, event, data, timestamp: Date.now() });

  // Cleanup old messages (keep last 1000 or last hour)
  pruneOldMessages();

  return id;
}

function pruneOldMessages() {
  const ONE_HOUR = 60 * 60 * 1000;
  const cutoff = Date.now() - ONE_HOUR;

  for (const [id, msg] of messageStore) {
    if (msg.timestamp < cutoff) {
      messageStore.delete(id);
    }
  }
}

// Get messages after a given ID
function getMessagesSince(lastEventId) {
  if (!lastEventId) return [];

  const messages = [];
  let foundLast = false;

  for (const [id, msg] of messageStore) {
    if (foundLast) {
      messages.push(msg);
    }
    if (id === lastEventId) {
      foundLast = true;
    }
  }

  return messages;
}

// Minimal client registry used by the endpoint and broadcast() below
const clients = new Map(); // clientId -> res
let nextClientId = 0;

function registerClient(res) {
  const clientId = ++nextClientId;
  clients.set(clientId, res);
  return clientId;
}

function unregisterClient(clientId) {
  clients.delete(clientId);
}

// SSE endpoint with recovery support
app.get('/api/events', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  // Check for recovery request
  const lastEventId = req.headers['last-event-id'];

  if (lastEventId) {
    console.log(`Client reconnecting, last ID: ${lastEventId}`);

    // Send missed messages
    const missedMessages = getMessagesSince(lastEventId);
    console.log(`Sending ${missedMessages.length} missed messages`);

    for (const msg of missedMessages) {
      res.write(`id: ${msg.id}\n`);
      res.write(`event: ${msg.event}\n`);
      res.write(`data: ${JSON.stringify(msg.data)}\n\n`);
    }
  }

  // Set recommended retry interval
  res.write('retry: 5000\n\n');

  // Send connection confirmation
  res.write(': connected\n\n');

  // Subscribe to future events
  const clientId = registerClient(res);

  req.on('close', () => {
    unregisterClient(clientId);
  });
});

// When broadcasting, store and include ID
function broadcast(event, data) {
  const id = storeMessage(event, data);
  const message = `id: ${id}\nevent: ${event}\ndata: ${JSON.stringify(data)}\n\n`;

  for (const client of clients.values()) {
    client.write(message);
  }
}

app.listen(3000);
```

In distributed systems, sequential numeric IDs become challenging. Consider using composite IDs like 'server-1:12345' or timestamp-based IDs like '1705312200000-abc'. This ensures uniqueness across servers while maintaining orderability for recovery purposes.
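As a rough illustration of the timestamp-based approach (the helper names and ID format below are assumptions for this sketch, not part of the server code above), an ID generator can combine a millisecond timestamp, a per-process sequence number, and a server name:

```typescript
// Hypothetical composite event IDs for a multi-server deployment:
// "<millisecond timestamp>-<per-process sequence>-<server name>".
// The timestamp and sequence keep IDs roughly ordered for recovery,
// while the server name guarantees uniqueness across instances.
let sequence = 0;

export function nextEventId(serverName: string): string {
  sequence += 1;
  return `${Date.now()}-${sequence}-${serverName}`;
}

// Approximate ordering helper: compare by timestamp, then by sequence.
// (Sequences from different servers only break ties loosely.)
export function compareEventIds(a: string, b: string): number {
  const [tsA, seqA] = a.split('-').map(Number);
  const [tsB, seqB] = b.split('-').map(Number);
  return tsA - tsB || seqA - seqB;
}

// Example: nextEventId('server-1') might produce "1705312200000-1-server-1".
```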
While EventSource's built-in retry is convenient, production systems often need more sophisticated retry logic—exponential backoff, jitter, maximum retry limits, and custom error handling.

**The `retry:` Field**

Servers can suggest retry intervals using the `retry:` field:
```http
HTTP/1.1 200 OK
Content-Type: text/event-stream

retry: 10000
: Client should wait 10 seconds before reconnecting

data: First message

: During high load, increase retry interval
retry: 30000

data: Server under load, backing off

: Back to normal
retry: 5000

data: Load normalized
```
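A server can adjust this hint dynamically. The sketch below (the connection counter and load thresholds are illustrative assumptions) picks a retry value based on how many clients are currently connected:

```typescript
// Sketch: send a larger retry hint when the server is under load.
import type { ServerResponse } from 'http';

let activeConnections = 0;

function retryHintMs(): number {
  if (activeConnections > 5000) return 30000; // heavy load: back clients off
  if (activeConnections > 1000) return 10000; // moderate load
  return 5000;                                // normal operation
}

export function beginSseStream(res: ServerResponse): void {
  activeConnections += 1;

  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });

  // The retry hint is just another line on the stream; send it first.
  res.write(`retry: ${retryHintMs()}\n\n`);

  res.on('close', () => {
    activeConnections -= 1;
  });
}
```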
**Advanced Retry with Custom Implementation**

For full control over retry behavior, implement a wrapper around EventSource:
```typescript
interface RetryConfig {
  initialDelay: number;      // Start delay (ms)
  maxDelay: number;          // Maximum delay (ms)
  backoffMultiplier: number; // Delay multiplier per attempt
  jitterFactor: number;      // Random jitter (0-1)
  maxRetries: number;        // Max attempts (0 = infinite)
}

const DEFAULT_RETRY_CONFIG: RetryConfig = {
  initialDelay: 1000,
  maxDelay: 60000,
  backoffMultiplier: 2,
  jitterFactor: 0.1,
  maxRetries: 0, // Infinite
};

class ResilientEventSource {
  private url: string;
  private eventSource: EventSource | null = null;
  private config: RetryConfig;
  private retryCount = 0;
  private currentDelay: number;
  private retryTimeout: ReturnType<typeof setTimeout> | null = null;
  private isIntentionallyClosed = false;
  private lastEventId: string | null = null;

  // Event handlers
  onmessage: ((event: MessageEvent) => void) | null = null;
  onerror: ((error: Event, retryInfo: RetryInfo) => void) | null = null;
  onopen: (() => void) | null = null;
  onreconnecting: ((info: RetryInfo) => void) | null = null;

  constructor(url: string, config: Partial<RetryConfig> = {}) {
    this.url = url;
    this.config = { ...DEFAULT_RETRY_CONFIG, ...config };
    this.currentDelay = this.config.initialDelay;
    this.connect();
  }

  private connect() {
    if (this.isIntentionallyClosed) return;

    // Include last event ID in URL if reconnecting
    let connectionUrl = this.url;
    if (this.lastEventId) {
      const separator = this.url.includes('?') ? '&' : '?';
      connectionUrl = `${this.url}${separator}lastEventId=${this.lastEventId}`;
    }

    this.eventSource = new EventSource(connectionUrl);

    this.eventSource.onopen = () => {
      // Reset retry state on successful connection
      this.retryCount = 0;
      this.currentDelay = this.config.initialDelay;
      this.onopen?.();
    };

    this.eventSource.onmessage = (event) => {
      if (event.lastEventId) {
        this.lastEventId = event.lastEventId;
      }
      this.onmessage?.(event);
    };

    this.eventSource.onerror = (error) => {
      if (this.isIntentionallyClosed) return;

      // Close the failed connection
      this.eventSource?.close();
      this.eventSource = null;

      // Check retry limits
      if (this.config.maxRetries > 0 && this.retryCount >= this.config.maxRetries) {
        this.onerror?.(error, {
          attempt: this.retryCount,
          willRetry: false,
          nextRetryIn: 0,
        });
        return;
      }

      // Calculate next delay with jitter
      const jitter = this.currentDelay * this.config.jitterFactor * Math.random();
      const delay = Math.min(
        this.currentDelay + jitter,
        this.config.maxDelay
      );

      this.retryCount++;

      const retryInfo: RetryInfo = {
        attempt: this.retryCount,
        willRetry: true,
        nextRetryIn: delay,
      };

      this.onreconnecting?.(retryInfo);
      this.onerror?.(error, retryInfo);

      // Schedule retry
      this.retryTimeout = setTimeout(() => {
        this.currentDelay = Math.min(
          this.currentDelay * this.config.backoffMultiplier,
          this.config.maxDelay
        );
        this.connect();
      }, delay);
    };
  }

  addEventListener(event: string, handler: (e: MessageEvent) => void) {
    this.eventSource?.addEventListener(event, (e) => {
      if ((e as MessageEvent).lastEventId) {
        this.lastEventId = (e as MessageEvent).lastEventId;
      }
      handler(e as MessageEvent);
    });
  }

  close() {
    this.isIntentionallyClosed = true;
    if (this.retryTimeout) {
      clearTimeout(this.retryTimeout);
    }
    this.eventSource?.close();
  }

  // Force immediate reconnection
  reconnect() {
    this.eventSource?.close();
    this.currentDelay = this.config.initialDelay;
    this.retryCount = 0;
    this.connect();
  }

  get readyState(): number {
    return this.eventSource?.readyState ?? EventSource.CLOSED;
  }
}

interface RetryInfo {
  attempt: number;
  willRetry: boolean;
  nextRetryIn: number;
}

// Usage
const events = new ResilientEventSource('/api/events', {
  initialDelay: 1000,
  maxDelay: 30000,
  backoffMultiplier: 1.5,
  jitterFactor: 0.2,
  maxRetries: 10,
});

events.onopen = () => console.log('Connected!');

events.onmessage = (e) => console.log('Message:', e.data);

events.onreconnecting = (info) => {
  console.log(`Reconnecting (attempt ${info.attempt}) in ${info.nextRetryIn}ms`);
  showReconnectionUI(info);
};

events.onerror = (error, info) => {
  if (!info.willRetry) {
    console.error('Max retries exceeded, giving up');
    showPermanentErrorUI();
  }
};
```

Without jitter, when a server restarts, all clients reconnect at exactly the same moment, potentially overwhelming the server again. Jitter spreads the reconnection attempts over time, smoothing the load. This is especially critical at scale—even 10% jitter across 100,000 clients significantly reduces the reconnection thundering herd.
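The back-of-the-envelope sketch below (client counts and delays are made up for illustration) shows why: without jitter every client lands in the same millisecond, while a 10% jitter window spreads the same clients over roughly 500 ms:

```typescript
// Illustrative only: how jitter spreads reconnect attempts after a restart.
function reconnectTimes(clients: number, baseDelayMs: number, jitterFactor: number): number[] {
  return Array.from({ length: clients }, () =>
    baseDelayMs + baseDelayMs * jitterFactor * Math.random()
  );
}

// Count the busiest single millisecond.
function peakPerMillisecond(times: number[]): number {
  const buckets = new Map<number, number>();
  for (const t of times) {
    const bucket = Math.floor(t);
    buckets.set(bucket, (buckets.get(bucket) ?? 0) + 1);
  }
  return Math.max(...Array.from(buckets.values()));
}

const noJitter = reconnectTimes(100_000, 5000, 0);
const withJitter = reconnectTimes(100_000, 5000, 0.1);

console.log('Peak per ms, no jitter:', peakPerMillisecond(noJitter));    // 100000
console.log('Peak per ms, 10% jitter:', peakPerMillisecond(withJitter)); // ~200 (spread over ~500 ms)
```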
Real-world systems require more than basic reconnection. These patterns address common production scenarios that simple retry logic doesn't handle.

**Pattern 1: Auth Token Refresh During Reconnection**
```typescript
class AuthAwareEventSource {
  private events: EventSource | null = null;
  private url: string;
  private getToken: () => Promise<string>;
  private refreshToken: () => Promise<string>;

  constructor(
    url: string,
    tokenProvider: { get: () => Promise<string>; refresh: () => Promise<string> }
  ) {
    this.url = url;
    this.getToken = tokenProvider.get;
    this.refreshToken = tokenProvider.refresh;
  }

  async connect() {
    const token = await this.getToken();
    const urlWithToken = `${this.url}?token=${encodeURIComponent(token)}`;

    this.events = new EventSource(urlWithToken);

    this.events.onerror = async (error) => {
      if (this.events?.readyState === EventSource.CLOSED) {
        // Connection failed - might be auth issue
        await this.handleConnectionFailure();
      } else {
        // Reconnecting - EventSource handles automatically
        console.log('SSE reconnecting...');
      }
    };
  }

  private async handleConnectionFailure() {
    try {
      // Try refreshing the token
      console.log('Connection failed, attempting token refresh...');
      const newToken = await this.refreshToken();

      // Retry connection with new token
      await this.connect();
      console.log('Reconnected with refreshed token');
    } catch (refreshError) {
      console.error('Token refresh failed:', refreshError);
      this.onAuthFailure?.();
    }
  }

  onAuthFailure: (() => void) | null = null;

  close() {
    this.events?.close();
  }
}

// Usage with token refresh
const events = new AuthAwareEventSource('/api/events', {
  get: () => Promise.resolve(localStorage.getItem('accessToken') || ''),
  refresh: async () => {
    const response = await fetch('/auth/refresh', {
      method: 'POST',
      credentials: 'include',
    });
    const { accessToken } = await response.json();
    localStorage.setItem('accessToken', accessToken);
    return accessToken;
  },
});

events.onAuthFailure = () => {
  // Redirect to login
  window.location.href = '/login?returnTo=' + encodeURIComponent(window.location.href);
};

await events.connect();
```

**Pattern 2: Coordinated Multi-Stream Recovery**

When applications use multiple SSE streams, recovery must be coordinated:
```typescript
class StreamCoordinator {
  private streams: Map<string, EventSource> = new Map();
  private lastEventIds: Map<string, string> = new Map();
  private reconnectInProgress = false;

  addStream(name: string, url: string, handlers: StreamHandlers) {
    const fullUrl = this.buildUrl(url, name);
    const stream = new EventSource(fullUrl);

    stream.onmessage = (event) => {
      if (event.lastEventId) {
        this.lastEventIds.set(name, event.lastEventId);
      }
      handlers.onMessage(event);
    };

    stream.onerror = () => {
      this.handleStreamError(name, url, handlers);
    };

    this.streams.set(name, stream);
  }

  private buildUrl(baseUrl: string, streamName: string): string {
    const lastId = this.lastEventIds.get(streamName);
    if (lastId) {
      const sep = baseUrl.includes('?') ? '&' : '?';
      return `${baseUrl}${sep}lastEventId=${lastId}`;
    }
    return baseUrl;
  }

  private async handleStreamError(
    name: string,
    url: string,
    handlers: StreamHandlers
  ) {
    // Close failed stream
    this.streams.get(name)?.close();
    this.streams.delete(name);

    // Coordinate with other streams - don't spam reconnections
    if (this.reconnectInProgress) {
      console.log(`Stream '${name}' waiting for coordinated reconnect`);
      return;
    }

    // Check if this is a systemic failure
    const healthyStreams = Array.from(this.streams.values())
      .filter(s => s.readyState === EventSource.OPEN);

    if (healthyStreams.length === 0) {
      // All streams failed - likely server-side issue
      console.log('All streams failed, coordinated reconnection...');
      await this.coordinatedReconnect();
    } else {
      // Just this stream failed - reconnect independently
      console.log(`Stream '${name}' reconnecting`);
      setTimeout(() => {
        this.addStream(name, url, handlers);
      }, 1000 + Math.random() * 2000); // Jittered delay
    }
  }

  private async coordinatedReconnect() {
    this.reconnectInProgress = true;

    // Close all streams
    for (const stream of this.streams.values()) {
      stream.close();
    }

    // Wait for server to recover
    await this.waitForServerHealth();

    // Reconnect all streams with jitter
    this.reconnectInProgress = false;
    // ... reconnection logic
  }

  private async waitForServerHealth(): Promise<void> {
    // Exponential backoff health check
    let delay = 1000;
    while (delay < 60000) {
      try {
        const response = await fetch('/api/health');
        if (response.ok) return;
      } catch {}
      await new Promise(r => setTimeout(r, delay));
      delay *= 2;
    }
  }
}

interface StreamHandlers {
  onMessage: (event: MessageEvent) => void;
  onError?: (error: Event) => void;
}
```

**Pattern 3: Checkpoint Recovery for Large Message Streams**
```typescript
// For high-volume streams, store processed checkpoints
// This allows recovery even after page reload

class CheckpointedEventSource {
  private events: EventSource | null = null;
  private checkpoint: string | null = null;
  private storageKey: string;

  constructor(private url: string, streamId: string) {
    this.storageKey = `sse_checkpoint_${streamId}`;
    this.checkpoint = this.loadCheckpoint();
  }

  private loadCheckpoint(): string | null {
    try {
      return localStorage.getItem(this.storageKey);
    } catch {
      return null;
    }
  }

  private saveCheckpoint(id: string) {
    try {
      localStorage.setItem(this.storageKey, id);
      this.checkpoint = id;
    } catch {
      // localStorage might be full or disabled
    }
  }

  connect(handlers: {
    onMessage: (event: MessageEvent) => Promise<void>; // Async for processing
    onError?: (error: Event) => void;
  }) {
    let connectionUrl = this.url;
    if (this.checkpoint) {
      const sep = this.url.includes('?') ? '&' : '?';
      connectionUrl = `${this.url}${sep}afterId=${this.checkpoint}`;
    }

    this.events = new EventSource(connectionUrl);

    this.events.onmessage = async (event) => {
      // Process message
      await handlers.onMessage(event);

      // Only checkpoint after successful processing
      if (event.lastEventId) {
        this.saveCheckpoint(event.lastEventId);
      }
    };

    this.events.onerror = (error) => {
      handlers.onError?.(error);
      // EventSource will auto-reconnect, and we'll send the checkpoint
    };
  }

  // Allow manual checkpoint clear (e.g., when user logs out)
  clearCheckpoint() {
    try {
      localStorage.removeItem(this.storageKey);
      this.checkpoint = null;
    } catch {}
  }

  close() {
    this.events?.close();
  }
}

// Server must handle 'afterId' parameter for recovery
// GET /api/events?afterId=12345

// Usage
const events = new CheckpointedEventSource('/api/events', 'main-stream');

events.connect({
  onMessage: async (event) => {
    const data = JSON.parse(event.data);
    await processMessageWithRetry(data); // Your processing logic
    // Checkpoint saved after this returns successfully
  },
  onError: (error) => {
    showReconnectionNotice();
  },
});
```

Only save checkpoints AFTER successfully processing a message. If you checkpoint before processing and the processing fails, you'll lose that message permanently. This pattern requires idempotent processing—the same message might be delivered twice if failure occurs between processing and checkpointing.
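One common way to get that idempotency is to deduplicate by event ID before applying a message. The sketch below is an assumption-level illustration (the `applyMessage` helper and in-memory ID set are hypothetical; a real app would persist recent IDs alongside the checkpoint):

```typescript
// Sketch of idempotent, at-least-once processing: duplicates are detected
// by event ID, so a message redelivered after a crash between processing
// and checkpointing is safely ignored the second time.
const processedIds = new Set<string>();

async function applyMessage(data: unknown): Promise<void> {
  // Real handling would go here (update a store, refresh UI state, ...).
  console.log('Applying', data);
}

export async function processOnce(event: MessageEvent): Promise<void> {
  const id = event.lastEventId;
  if (id && processedIds.has(id)) {
    return; // Duplicate delivery: already processed.
  }
  await applyMessage(JSON.parse(event.data));
  if (id) {
    processedIds.add(id);
  }
}
```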
Effective reconnection handling requires server-side support. These practices ensure clients can recover smoothly from any disconnection scenario.

**Essential Server Behaviors**
- Include an `id:` field on every event for recovery tracking.
- Send a `retry:` field to suggest appropriate reconnection intervals.

The example below ties these behaviors together with a Redis-backed message store, Last-Event-ID recovery, keepalives, and reconnection surge protection:
```javascript
const express = require('express');
const Redis = require('ioredis');

const app = express();
const redis = new Redis();

// Constants
const MESSAGE_TTL_SECONDS = 3600; // 1 hour recovery window
const MAX_STORED_MESSAGES = 10000;
const KEEPALIVE_INTERVAL_MS = 25000;

// Reconnection surge protection
class ConnectionThrottler {
  constructor(maxConnectionsPerSecond = 100) {
    this.maxPerSecond = maxConnectionsPerSecond;
    this.currentSecond = 0;
    this.connectionsThisSecond = 0;
  }

  canConnect() {
    const now = Math.floor(Date.now() / 1000);
    if (now !== this.currentSecond) {
      this.currentSecond = now;
      this.connectionsThisSecond = 0;
    }
    if (this.connectionsThisSecond >= this.maxPerSecond) {
      return false;
    }
    this.connectionsThisSecond++;
    return true;
  }
}

const throttler = new ConnectionThrottler(100);

// Store message in Redis with automatic expiration
async function storeMessage(event, data) {
  const id = await redis.incr('sse:event_counter');
  const message = JSON.stringify({ id, event, data, timestamp: Date.now() });

  await redis.multi()
    .zadd('sse:messages', id, message)
    .expire('sse:messages', MESSAGE_TTL_SECONDS * 2)
    .exec();

  // Trim to max size
  await redis.zremrangebyrank('sse:messages', 0, -MAX_STORED_MESSAGES - 1);

  return String(id);
}

// Get messages after a given ID
async function getMessagesSince(lastEventId) {
  if (!lastEventId) return [];

  // Get all messages with score > lastEventId
  const messages = await redis.zrangebyscore(
    'sse:messages',
    Number(lastEventId) + 1,
    '+inf'
  );

  return messages.map(m => JSON.parse(m));
}

// SSE endpoint
app.get('/api/events', async (req, res) => {
  // Surge protection
  if (!throttler.canConnect()) {
    res.status(503)
      .set('Retry-After', '2')
      .send('Too many connections, try again shortly');
    return;
  }

  // Set SSE headers
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache, no-store, must-revalidate');
  res.setHeader('Connection', 'keep-alive');
  res.setHeader('X-Accel-Buffering', 'no');

  // Handle Last-Event-ID for recovery
  const lastEventId = req.headers['last-event-id'];
  if (lastEventId) {
    console.log(`Recovery request from ID: ${lastEventId}`);
    const missedMessages = await getMessagesSince(lastEventId);
    console.log(`Sending ${missedMessages.length} missed messages`);

    for (const msg of missedMessages) {
      res.write(`id: ${msg.id}\nevent: ${msg.event}\ndata: ${JSON.stringify(msg.data)}\n\n`);
    }
  }

  // Suggest retry interval
  res.write('retry: 5000\n\n');

  // Connection confirmation
  res.write(': connected\n\n');

  // Keepalive
  const keepaliveTimer = setInterval(() => {
    if (!res.writableEnded) {
      res.write(': keepalive\n\n');
    }
  }, KEEPALIVE_INTERVAL_MS);

  // Subscribe to Redis pub/sub for new messages
  const subscriber = new Redis();
  subscriber.subscribe('sse:broadcast');
  subscriber.on('message', (channel, message) => {
    if (!res.writableEnded) {
      res.write(message);
    }
  });

  // Cleanup on disconnect
  req.on('close', () => {
    clearInterval(keepaliveTimer);
    subscriber.unsubscribe();
    subscriber.disconnect();
  });
});

// Broadcast to all clients
async function broadcast(event, data) {
  const id = await storeMessage(event, data);
  const message = `id: ${id}\nevent: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
  await redis.publish('sse:broadcast', message);
}

// Graceful shutdown
process.on('SIGTERM', async () => {
  console.log('Shutting down, closing SSE connections...');
  // The framework will handle closing connections
  // Clients will receive close event and reconnect
  process.exit(0);
});

app.listen(3000);
```

In multi-server deployments, use a shared message store (Redis, Kafka) so any server can handle recovery requests. The reconnecting client might connect to a different server than before, but should still receive all missed messages.
We've covered reconnection handling for Server-Sent Events from end to end: EventSource reconnects automatically and exposes its state through readyState, the Last-Event-ID mechanism lets servers replay missed messages, custom wrappers add exponential backoff, jitter, and retry limits, and server-side support (message storage, retry hints, keepalives, and surge protection) closes the loop for zero message loss.
**What's Next**

Now that we understand reconnection handling, we'll explore practical use cases for SSE—understanding which applications benefit most from this technology and how to implement them effectively.
You now have comprehensive knowledge of SSE reconnection handling. You can implement custom retry strategies, build recovery-aware servers, handle authentication edge cases, and ensure zero message loss even during extended outages. Next, we'll explore practical SSE use cases.