When you're scrolling Facebook and a notification appears—"Your friend Sarah just posted"—or a like count ticks up before your eyes, it feels like magic. The feed seems to know what's happening in real-time, updating without you doing anything.
But behind this seamless experience is an enormously complex real-time infrastructure that must maintain persistent connections with hundreds of millions of devices, route every update to exactly the users who care about it, and keep each client's view consistent with the server.
Real-time updates are what make social media feel alive. Without them, Facebook would feel like email—static content you have to explicitly check. With them, the platform becomes a living, breathing window into your social world.
By the end of this page, you will understand the different real-time update mechanisms (WebSockets, long polling, push notifications), learn how to design scalable connection management systems, master techniques for routing updates to interested users, and explore consistency challenges in real-time systems.
Not all real-time updates are created equal. Different events have different latency requirements, delivery guarantees, and routing complexity.
| Update Type | Example | Latency Target | Delivery Guarantee |
|---|---|---|---|
| New Content | Friend posted a photo | < 30 seconds | At-most-once (best effort) |
| Engagement Updates | Like count increased | < 5 seconds | Eventual (coalescing OK) |
| Social Context | Friend reacted to a post | < 10 seconds | At-most-once |
| Typing Indicators | Friend is typing... | < 1 second | Best effort (ephemeral) |
| Read Receipts | Friend saw your post | < 5 seconds | At-most-once |
| Live Video | Friend started live stream | < 3 seconds | At-least-once (important) |
| Breaking News | Trending/breaking content | < 60 seconds | At-most-once |
```typescript
// Different update granularities require different architectures

// Full Post Update - send complete post data
interface NewPostUpdate {
  type: 'new_post';
  post: FullPostData; // All content, author info, etc.
  timestamp: number;
}

// Incremental Counter Update - send only the delta
interface EngagementDelta {
  type: 'engagement_delta';
  postId: string;
  field: 'likes' | 'comments' | 'shares';
  delta: number; // +1, -1, etc.
}

// Coalesced Update - batched changes
interface BatchedUpdate {
  type: 'batch';
  postId: string;
  likes: number; // Absolute count (not delta)
  comments: number;
  lastUpdated: number;
}

// Presence Update - ephemeral state
interface PresenceUpdate {
  type: 'presence';
  userId: string;
  state: 'online' | 'away' | 'offline';
  lastSeen?: number;
}

// Strategy: use deltas for high-frequency updates (engagement),
// full data for low-frequency updates (new posts), and
// absolute counts when coalescing (batch updates).
```

A viral post might receive 1,000 likes per second. Sending 1,000 individual updates is wasteful. Instead, coalesce them: send one batch update every 2 seconds with the current count. Users can't perceive the difference, and you cut message volume by over 99%.
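The coalescing idea above can be sketched in a few lines. This is an illustrative example, not production code: `LikeCoalescer` and its `flush` callback are hypothetical names, and here we sum deltas per window, whereas a real `BatchedUpdate` would carry absolute counts, which is safer if a window's message is dropped.

```typescript
// Sketch: accumulate per-post engagement deltas and emit one
// message per post per flush window (e.g. every 2 seconds).
type Flush = (postId: string, likes: number) => void;

class LikeCoalescer {
  private counts = new Map<string, number>();

  constructor(private flush: Flush) {}

  // Record one like/unlike; nothing is sent yet.
  add(postId: string, delta: number): void {
    this.counts.set(postId, (this.counts.get(postId) ?? 0) + delta);
  }

  // Called on a timer: emit a single update per touched post.
  // Returns the number of messages actually sent.
  flushAll(): number {
    let sent = 0;
    for (const [postId, likes] of this.counts) {
      this.flush(postId, likes);
      sent++;
    }
    this.counts.clear();
    return sent;
  }
}
```

With this shape, 1,000 likes on one post in a window collapse into a single outbound message.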
Maintaining persistent connections with billions of clients is one of the hardest problems in distributed systems. Each connection consumes memory, file descriptors, and CPU for heartbeats.
| Protocol | Connection Type | Throughput | Latency | Battery Impact |
|---|---|---|---|---|
| WebSocket | Full-duplex persistent | High | ~50ms | Medium |
| Server-Sent Events | Half-duplex persistent | Medium | ~50ms | Low |
| Long Polling | Sequential requests | Low | ~100ms | High |
| MQTT | Lightweight persistent | High | ~50ms | Very Low |
| HTTP/2 Push | Server-initiated | Medium | ~100ms | Medium |
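Whichever transport is chosen, clients drop and re-establish connections constantly, so every client needs a reconnect policy. A common pattern (a sketch under my own naming, not any specific client library) is capped exponential backoff with jitter, so that a server restart doesn't trigger a synchronized reconnect stampede:

```typescript
// Sketch: compute the delay before reconnect attempt N.
// baseMs and capMs are illustrative defaults, not measured values.
function reconnectDelayMs(
  attempt: number,
  baseMs: number = 1000,
  capMs: number = 30000,
): number {
  // Exponential growth, capped so retries never wait too long.
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  // "Equal jitter": half fixed, half random, to spread out clients.
  return exp / 2 + Math.random() * (exp / 2);
}

// Usage idea: on socket close, schedule the next connect attempt
// with setTimeout(connect, reconnectDelayMs(attempt++)), and reset
// attempt to 0 once a connection survives its first heartbeat.
```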
```python
# Connection server capacity planning

# Per-server limits (modern hardware)
MAX_CONNECTIONS_PER_SERVER = 100_000  # limited by file descriptors
MEMORY_PER_CONNECTION_KB = 10         # state, buffers
HEARTBEAT_CPU_MS = 0.1                # per connection per second

# Global scale
CONCURRENT_USERS = 500_000_000  # peak users online simultaneously
# (not all 2B users are online at once; typical: ~25%)

# Server count needed
servers_needed = CONCURRENT_USERS / MAX_CONNECTIONS_PER_SERVER
# = 500M / 100K = 5,000 connection servers

# Memory per server
memory_per_server_kb = MAX_CONNECTIONS_PER_SERVER * MEMORY_PER_CONNECTION_KB
# = 100K * 10 KB = 1 GB per server

# Heartbeat CPU per server
heartbeat_cpu_ms = MAX_CONNECTIONS_PER_SERVER * HEARTBEAT_CPU_MS
# = 100K * 0.1 ms = 10,000 ms = 10 seconds of CPU per second
# -> requires multi-core (typically 16+ cores per server)

# Connection registry storage
REGISTRY_ENTRY_BYTES = 100  # userId -> serverId mapping
registry_storage_bytes = CONCURRENT_USERS * REGISTRY_ENTRY_BYTES
# = 500M * 100 B = 50 GB (fits in a Redis cluster)

# Message throughput (updates per second)
UPDATES_PER_USER_PER_MINUTE = 2  # avg new content, engagement
total_updates_per_second = CONCURRENT_USERS * UPDATES_PER_USER_PER_MINUTE / 60
# = 500M * 2 / 60 = ~16.7 million messages/second
```
```typescript
// Distributed connection registry
// Maps users to their current connection servers

interface ConnectionRegistry {
  // Primary lookup: user -> connection server
  userToServer: Map<UserId, ServerId>;

  // Reverse lookup: server -> users (for reconnection handling)
  serverToUsers: Map<ServerId, Set<UserId>>;

  // Connection metadata
  connectionMeta: Map<UserId, ConnectionMetadata>;
}

interface ConnectionMetadata {
  userId: string;
  serverId: string;
  connectedAt: number;
  lastHeartbeat: number;
  clientType: 'ios' | 'android' | 'web' | 'desktop';
  capabilities: string[]; // Supported update types
}

// Registry operations
class DistributedRegistry {
  private redis: RedisCluster;

  async registerConnection(userId: string, serverId: string, meta: ConnectionMetadata) {
    const key = `conn:${userId}`;
    const serverKey = `server:${serverId}`;

    await this.redis.multi()
      .hset(key, 'server', serverId)
      .hset(key, 'meta', JSON.stringify(meta))
      .sadd(serverKey, userId)
      .expire(key, 300) // TTL for stale connections
      .exec();
  }

  async unregisterConnection(userId: string) {
    const serverId = await this.getServerForUser(userId);

    await this.redis.multi()
      .del(`conn:${userId}`)
      .srem(`server:${serverId}`, userId)
      .exec();
  }

  async getServerForUser(userId: string): Promise<string | null> {
    return this.redis.hget(`conn:${userId}`, 'server');
  }

  async getUsersOnServer(serverId: string): Promise<string[]> {
    return this.redis.smembers(`server:${serverId}`);
  }

  // Heartbeat-based liveness checking
  async refreshConnection(userId: string) {
    await this.redis.expire(`conn:${userId}`, 300);
  }
}
```

Connections drop constantly—mobile networks are unreliable, users switch apps, devices sleep. Never assume a connection exists; always handle "user not connected" gracefully by falling back to push notifications or queuing updates for the next session.
When a post is created or engagement occurs, the update must be routed to all interested users. This requires determining who is interested and where they are connected.
```typescript
// Update Routing Service

class UpdateRouter {
  async routeNewPost(post: Post) {
    // Step 1: Get potentially interested users
    const followers = await socialGraph.getFollowers(post.authorId);

    // Step 2: Filter to online users only (efficiency)
    const onlineFollowers = await connectionRegistry.filterOnline(followers);

    // Step 3: Further filter by relevance (reduce noise)
    const relevantFollowers = await this.filterByRelevance(onlineFollowers, post);

    // Step 4: Group by connection server
    const serverGroups = await this.groupByServer(relevantFollowers);

    // Step 5: Enqueue messages per server
    for (const [serverId, userIds] of serverGroups) {
      await messageQueue.enqueue(`server:${serverId}`, {
        type: 'new_post',
        post: this.compactPost(post),
        targetUsers: userIds,
      });
    }
  }

  async filterByRelevance(userIds: string[], post: Post): Promise<string[]> {
    // Don't push every post to every follower.
    // Use lightweight ranking to determine who would care.
    const results = [];
    for (const userId of userIds) {
      const affinity = await affinityScore.get(userId, post.authorId);
      // Only push to users with a strong relationship
      if (affinity > 0.5) {
        results.push(userId);
      }
    }
    return results;
  }

  async routeEngagement(engagement: Engagement) {
    // Step 1: Get the post author (always notify)
    const post = await postStore.get(engagement.postId);

    // Step 2: Get current viewers of this post
    const viewers = await viewerTracker.getCurrentViewers(engagement.postId);

    // Step 3: Route to author + viewers
    const targets = [post.authorId, ...viewers];
    const serverGroups = await this.groupByServer(targets);

    for (const [serverId, userIds] of serverGroups) {
      await messageQueue.enqueue(`server:${serverId}`, {
        type: 'engagement_update',
        postId: engagement.postId,
        engagementType: engagement.type,
        count: await engagementStore.getCount(engagement.postId, engagement.type),
        targetUsers: userIds,
      });
    }
  }
}
```

For frequently updated content, a subscription model scales better than per-event routing.
```typescript
// Pub/Sub for Feed Subscriptions

interface Subscription {
  userId: string;
  topic: string;     // post:{postId}, author:{authorId}, etc.
  createdAt: number;
  expiresAt: number; // Auto-cleanup for inactive subscriptions
}

class FeedPubSub {
  private redis: RedisCluster;

  // When a user views a post, subscribe to its updates
  async subscribeToPost(userId: string, postId: string) {
    const topic = `post:${postId}`;
    await this.redis.sadd(`topic:${topic}`, userId);
    await this.redis.expire(`topic:${topic}`, 3600); // 1 hour TTL

    // Track the user's subscriptions for cleanup
    await this.redis.sadd(`user_subs:${userId}`, topic);
  }

  // When engagement happens, notify subscribers
  async publishToPost(postId: string, update: EngagementUpdate) {
    const topic = `post:${postId}`;
    const subscribers = await this.redis.smembers(`topic:${topic}`);

    // Batch by connection server
    const serverGroups = await this.groupByServer(subscribers);
    for (const [serverId, userIds] of serverGroups) {
      await this.deliverToServer(serverId, userIds, update);
    }
  }

  // Cleanup when a user disconnects
  async unsubscribeAll(userId: string) {
    const topics = await this.redis.smembers(`user_subs:${userId}`);
    for (const topic of topics) {
      await this.redis.srem(`topic:${topic}`, userId);
    }
    await this.redis.del(`user_subs:${userId}`);
  }
}

// Advantages of Pub/Sub:
// - O(1) lookup for interested users (vs O(n) follower expansion)
// - Automatic scoping to active viewers
// - Natural TTL-based cleanup
// - Scales with active interest, not total follower count
```

For posts with high engagement (viral content), push updates only to users actively viewing the post—not all followers. This transforms a million-follower fan-out into a thousand-active-viewer targeted delivery.
The client must incorporate real-time updates into the feed UI without disrupting the user experience. This requires careful handling of update ordering, conflict resolution, and UI updates.
```typescript
// Client-side feed state manager

interface FeedState {
  posts: Map<PostId, Post>; // Normalized post storage
  ordering: PostId[];       // Current feed order
  pendingUpdates: Update[]; // Buffered updates
  lastSyncToken: string;    // For incremental sync
  scrollPosition: number;   // Current scroll position
}

class FeedStateManager {
  private state: FeedState;
  private socket: WebSocket;

  // Handle an incoming real-time update
  handleUpdate(update: Update) {
    switch (update.type) {
      case 'new_post':
        this.handleNewPost(update.post);
        break;
      case 'engagement_delta':
        this.handleEngagementDelta(update);
        break;
      case 'post_deleted':
        this.handlePostDeleted(update.postId);
        break;
    }
  }

  handleNewPost(post: Post) {
    // Don't immediately insert at top (jarring UX).
    // Instead, show a "New posts available" indicator.

    // Store the post
    this.state.posts.set(post.id, post);

    // Add to pending if the user has scrolled
    if (this.state.scrollPosition > 0) {
      this.state.pendingUpdates.push({
        type: 'new_post',
        postId: post.id,
        timestamp: Date.now(),
      });

      // Update the UI indicator
      this.showNewPostsIndicator(this.state.pendingUpdates.length);
    } else {
      // User is at the top; insert directly with animation
      this.insertPostAtTop(post);
    }
  }

  handleEngagementDelta(update: EngagementDelta) {
    const post = this.state.posts.get(update.postId);
    if (!post) return; // Post not in view

    // Apply the delta to local state
    post.engagement[update.field] += update.delta;

    // Update the UI immediately (no buffering needed)
    this.updatePostUI(update.postId);
  }

  // When the user clicks "Show new posts"
  flushPendingUpdates() {
    // Sort pending posts by timestamp
    const newPosts = this.state.pendingUpdates
      .filter(u => u.type === 'new_post')
      .map(u => this.state.posts.get(u.postId))
      .sort((a, b) => b.createdAt - a.createdAt);

    // Scroll to top with animation
    this.scrollToTop();

    // Insert all pending posts
    for (const post of newPosts) {
      this.insertPostAtTop(post, { animate: true });
    }

    // Clear pending state
    this.state.pendingUpdates = [];
    this.hideNewPostsIndicator();
  }
}
```
```typescript
// Incremental sync protocol

interface SyncResponse {
  updates: Update[];   // New/changed content since last sync
  deletions: string[]; // Removed content
  syncToken: string;   // New sync cursor
  hasMore: boolean;    // Pagination
}

class SyncManager {
  async syncFeed(): Promise<void> {
    const token = await storage.get('feedSyncToken') || '0';
    const response = await api.syncFeed({ since: token, limit: 100 });

    // Apply updates
    for (const update of response.updates) {
      await this.applyUpdate(update);
    }

    // Handle deletions
    for (const postId of response.deletions) {
      await this.removePost(postId);
    }

    // Save the new token
    await storage.set('feedSyncToken', response.syncToken);

    // Continue if more updates are available
    if (response.hasMore) {
      await this.syncFeed();
    }
  }

  // Handle reconnection after an offline period
  async onReconnect() {
    // 1. Flush the local action queue to the server
    await this.flushPendingActions();

    // 2. Sync to get the latest state
    await this.syncFeed();

    // 3. Resolve any conflicts
    await this.resolveConflicts();

    // 4. Update the UI
    this.refreshFeedUI();
  }

  async flushPendingActions() {
    const pending = await actionQueue.getAll();
    for (const action of pending) {
      try {
        await this.sendAction(action);
        await actionQueue.remove(action.id);
      } catch (e) {
        if (e.isConflict) {
          // Action no longer valid (e.g., liking a deleted post)
          await actionQueue.remove(action.id);
          await this.revertLocalAction(action);
        }
        // Other errors: keep in the queue for retry
      }
    }
  }
}
```

Users expect their actions to reflect immediately. When you tap "Like", the UI should update in under 100ms—don't wait for server confirmation. This requires careful handling of rollbacks when the server rejects the action, but it's essential for a responsive experience.
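The optimistic-update-with-rollback pattern described above can be sketched as follows. `PostState`, `optimisticLike`, and the `sendLike` callback are illustrative names, not a real client API; the essential shape is "snapshot, apply locally, reconcile with the server result."

```typescript
// Sketch: apply a like locally before the server confirms it.
interface PostState {
  liked: boolean;
  likeCount: number;
}

async function optimisticLike(
  post: PostState,
  sendLike: () => Promise<boolean>, // resolves false if the server rejects
): Promise<PostState> {
  const before = { ...post }; // snapshot for rollback

  // 1. Apply immediately: the UI reflects the tap in <100ms.
  post.liked = true;
  post.likeCount += 1;

  // 2. Reconcile with the server in the background.
  try {
    const ok = await sendLike();
    if (!ok) Object.assign(post, before); // server rejected: roll back
  } catch {
    Object.assign(post, before); // network error: roll back (or queue a retry)
  }
  return post;
}
```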
When users aren't actively connected, push notifications become the fallback for important updates. This requires integration with platform-specific push services (APNs for iOS, FCM for Android).
```typescript
// Push notification evaluation service

interface NotificationCandidate {
  userId: string;
  event: Event;
  priority: 'high' | 'medium' | 'low';
}

class NotificationEvaluator {
  async shouldSendPushNotification(candidate: NotificationCandidate): Promise<boolean> {
    const userId = candidate.userId;

    // Check 1: User notification settings
    const settings = await notificationSettings.get(userId);
    if (!settings.pushEnabled) return false;
    if (!settings.allowedTypes.includes(candidate.event.type)) return false;

    // Check 2: Rate limiting (prevent notification spam)
    const recentCount = await this.getRecentNotificationCount(userId, '1h');
    if (recentCount > 20) {
      // Too many notifications, skip low-priority
      if (candidate.priority === 'low') return false;
    }
    if (recentCount > 50) {
      // Way too many, skip medium too
      if (candidate.priority !== 'high') return false;
    }

    // Check 3: Quiet hours (user preference)
    if (settings.quietHoursEnabled) {
      const userLocalTime = await this.getUserLocalTime(userId);
      if (this.isQuietHour(userLocalTime, settings.quietHours)) {
        if (candidate.priority !== 'high') return false;
      }
    }

    // Check 4: Recent app activity (don't notify if the user just used the app)
    const lastActivity = await this.getLastAppActivity(userId);
    if (Date.now() - lastActivity < 5 * 60 * 1000) { // 5 minutes
      // User was recently active; they might see it organically
      if (candidate.priority === 'low') return false;
    }

    // Check 5: Deduplication (don't repeat similar notifications)
    const similar = await this.findSimilarRecentNotification(userId, candidate);
    if (similar) {
      // Coalesce: update the count instead of sending a new notification
      await this.updateNotificationCount(similar);
      return false;
    }

    return true;
  }

  // Coalescing: "John and 5 others liked your post"
  async coalesceEngagementNotifications(
    userId: string,
    postId: string,
    engagementType: string
  ): Promise<NotificationPayload> {
    const recent = await this.getRecentEngagements(postId, engagementType, '1h');

    if (recent.length === 1) {
      return {
        title: `${recent[0].userName} liked your post`,
        body: recent[0].postPreview,
      };
    } else {
      return {
        title: `${recent[0].userName} and ${recent.length - 1} others liked your post`,
        body: recent[0].postPreview,
      };
    }
  }
}
```

| Event Type | Priority | Rationale |
|---|---|---|
| Message received | High | Direct person-to-person communication |
| Friend request | High | Actionable social event |
| Comment on my post | Medium | Meaningful engagement |
| Like on my post | Low | High frequency, low urgency |
| Friend posted | Low | Informational, not urgent |
| Trending topic | Low | Discovery, not personal |
Every notification competes with every other app for user attention. Send too many low-value notifications and users will disable notifications entirely. The goal is to send the RIGHT notifications, not the MOST notifications.
Real-time updates can arrive out of order, duplicate, or conflict with the current feed state. Handling these edge cases is critical for a coherent user experience.
```typescript
// Update ordering and deduplication

class UpdateOrderer {
  // Logical clock for update ordering
  private lastSeenClock: Map<string, number> = new Map();

  // Dedup window
  private seenUpdates: Set<string> = new Set();

  shouldApplyUpdate(update: Update): boolean {
    // Deduplication by update ID
    if (this.seenUpdates.has(update.id)) {
      return false; // Already processed
    }
    this.seenUpdates.add(update.id);

    // Size-bound the dedup set
    if (this.seenUpdates.size > 10000) {
      // Remove the oldest entries (Sets iterate in insertion order)
      const toRemove = [...this.seenUpdates].slice(0, 5000);
      toRemove.forEach(id => this.seenUpdates.delete(id));
    }

    return true;
  }

  orderUpdates(updates: Update[]): Update[] {
    // Sort by logical clock (server-assigned sequence number)
    return updates.sort((a, b) => a.sequenceNumber - b.sequenceNumber);
  }

  handleEngagementUpdate(postId: string, newCount: number, timestamp: number) {
    // For counters, use last-write-wins with a timestamp
    const key = `engagement:${postId}`;
    const lastTimestamp = this.lastSeenClock.get(key) || 0;

    if (timestamp > lastTimestamp) {
      this.lastSeenClock.set(key, timestamp);
      return true;  // Apply this update
    } else {
      return false; // Stale update, ignore
    }
  }
}

// Alternative: use a CRDT for conflict-free merging
class GCounterCRDT {
  // Per-server counters that can be merged
  constructor(public counts: Map<ServerId, number> = new Map()) {}

  merge(other: GCounterCRDT): GCounterCRDT {
    // Take the max of each server's count
    const merged = new Map<ServerId, number>();
    for (const [server, count] of this.counts) {
      merged.set(server, Math.max(count, other.counts.get(server) || 0));
    }
    for (const [server, count] of other.counts) {
      if (!merged.has(server)) {
        merged.set(server, count);
      }
    }
    return new GCounterCRDT(merged);
  }

  value(): number {
    return [...this.counts.values()].reduce((a, b) => a + b, 0);
  }
}
```

| Data Type | Consistency | Conflict Resolution |
|---|---|---|
| Post content | Strong (author write) | No conflict (single writer) |
| Like counts | Eventual | Last-write-wins with server clock |
| Feed order | Eventual | Re-fetch on next load |
| User's own actions | Read-your-writes | Optimistic with rollback |
| Delete operations | Strong | Immediate propagation |
Users don't notice if the like count is 1,023 vs 1,027. They DO notice if the count goes backward (1,027 → 1,023). Design for monotonically increasing client-side state, allowing temporary over-counting but never under-counting.
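The never-go-backward rule is small enough to capture directly. This is a hedged sketch with an invented name (`MonotonicCount`): the client accepts server-reported absolute counts but only ever displays the highest value seen this session, so out-of-order snapshots can't make the number visibly shrink.

```typescript
// Sketch: a display counter that is monotonic within a session.
class MonotonicCount {
  private shown = 0;

  // Accept a server-reported absolute count; ignore stale (lower) values.
  update(serverCount: number): number {
    if (serverCount > this.shown) {
      this.shown = serverCount;
    }
    return this.shown; // the value the UI should render
  }
}
```

With this wrapper, receiving snapshots 1,023 then 1,027 then a late 1,023 renders 1,023 → 1,027 → 1,027, never backward.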
We've explored the infrastructure that makes feeds feel alive with real-time updates: tiered update types with different latency targets and delivery guarantees, connection servers backed by a distributed registry, interest-based routing and pub/sub for viral content, client-side ordering and reconciliation, and push notifications as the offline fallback.
What's Next:
With real-time updates covered, the next page explores Caching Strategies—the multi-layer caching architecture that makes Facebook's feed fast despite the personalization complexity.
You now understand how feeds maintain freshness through real-time updates. Connection management, update routing, client synchronization, and push notifications work together to create the illusion that your feed is always current.