In competitive online gaming, latency is the ultimate enemy. Professional esports players can react in 150-200 milliseconds; at this level, an additional 30ms of network delay is the difference between landing a shot and missing entirely. A 100ms spike at the wrong moment can cost a tournament.
Online gaming represents the most latency-demanding application of UDP. Unlike streaming video where a small buffer hides delays, games require immediate, bidirectional, continuous state synchronization between players and servers. This extreme requirement has driven innovations in network protocol design, prediction algorithms, and distributed systems architecture.
By the end of this page, you will understand why gaming demands UDP, analyze client-side prediction and server reconciliation, comprehend dead reckoning and lag compensation, understand hit detection and anti-cheat considerations, and architect game server infrastructure for global scale.
Online games must create the illusion that all players exist in a shared, consistent world—despite physical distances introducing unavoidable delays. Understanding the latency budget is essential for game network design.
The Total Response Time:
Player A presses button
↓ (1ms) Local input processing
Client sends command to server
↓ (30ms) Network latency (client → server)
Server processes command
↓ (16ms) Server tick (60 Hz = 16.67ms)
Server broadcasts state update
↓ (30ms) Network latency (server → client B)
Client B receives and renders
↓ (16ms) Render frame
Player B sees action
─────────────────────────────────
Total: ~93ms minimum (best case)
In practice, with jitter, processing delays, and typical internet latency, players often experience 80-150ms total response time.
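The budget above is just a sum of stages. The sketch below makes that explicit; the interface and stage values are illustrative, not taken from any real engine:

```typescript
// Hypothetical latency-budget calculator: sums the one-way stages
// from the diagram above. All names and values are illustrative.
interface LatencyBudget {
  inputMs: number;     // local input processing
  uplinkMs: number;    // network latency, client -> server
  tickMs: number;      // server tick interval (worst-case wait)
  downlinkMs: number;  // network latency, server -> other client
  renderMs: number;    // one render frame
}

function totalResponseMs(b: LatencyBudget): number {
  return b.inputMs + b.uplinkMs + b.tickMs + b.downlinkMs + b.renderMs;
}

// The best case from the diagram: 1 + 30 + 16 + 30 + 16 = 93ms
const bestCase = totalResponseMs({
  inputMs: 1, uplinkMs: 30, tickMs: 16, downlinkMs: 30, renderMs: 16,
});
```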
| Genre | Acceptable Latency | Tolerance | Examples |
|---|---|---|---|
| First-Person Shooters (FPS) | <50ms ideal, <100ms acceptable | Very Low | Counter-Strike, Valorant, Call of Duty |
| Fighting Games | <50ms (often less) | Extremely Low | Street Fighter, Mortal Kombat, Tekken |
| Racing Games | <80ms ideal | Low | Forza, Gran Turismo, F1 |
| MOBA | <100ms acceptable | Moderate | League of Legends, Dota 2 |
| Battle Royale | <100ms ideal | Moderate | Fortnite, PUBG, Apex Legends |
| MMO/RPG | <200ms acceptable | High | World of Warcraft, Final Fantasy XIV |
| Turn-Based | <1000ms acceptable | Very High | Hearthstone, Civilization |
Why TCP Fails for Real-Time Games:
Consider what happens when a packet is lost in a fast-paced shooter:
With TCP:
- The lost packet must be retransmitted, costing at least one extra round trip.
- Every packet behind it waits in the receive buffer (head-of-line blocking), freezing the visible game state.
- When the retransmission finally arrives, it carries position data that is already obsolete.
With UDP:
- The lost update is simply never delivered; nothing behind it is blocked.
- The next packet, arriving one tick later, carries newer, complete state that supersedes the lost one.
- The player sees at most a tiny correction instead of a multi-frame freeze.
Server 'tick rate' is how often the server processes input and updates game state. At 64 ticks/second, each tick is 15.6ms. Higher tick rates (128Hz) mean finer granularity and less delay between input and processing, but require more bandwidth and CPU. Competitive games often use 128Hz; casual games may use 20-30Hz.
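The tick-rate arithmetic is worth making concrete (the function name here is illustrative):

```typescript
// Tick interval in milliseconds for a given server tick rate in Hz.
// An input can wait up to one full interval before the next tick
// processes it, so higher tick rates shrink that worst-case delay.
function tickIntervalMs(tickRate: number): number {
  return 1000 / tickRate;
}

tickIntervalMs(64);   // competitive baseline: ~15.6ms per tick
tickIntervalMs(128);  // high tick rate: ~7.8ms per tick
tickIntervalMs(20);   // casual/MMO-style: 50ms per tick
```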
Games use two primary network architectures, each with distinct tradeoffs. The choice depends on game type, player count, and competitive requirements.
Client-Server Architecture:
                ┌───────────────┐
                │ Authoritative │
                │    Server     │
                │ (Game State)  │
                └───────┬───────┘
        ┌───────────────┼───────────────┐
        │               │               │
   ┌────┴───┐      ┌────┴───┐      ┌────┴───┐
   │ Client │      │ Client │      │ Client │
   │ (Pred) │      │ (Pred) │      │ (Pred) │
   └────────┘      └────────┘      └────────┘
Peer-to-Peer Architecture:
┌────────┐ ┌────────┐
│ Host │◄══════════════════►│ Peer │
│ (Auth) │ │ │
└───┬────┘ └───┬────┘
│ │
│ ┌────────┐ │
└─────────►│ Peer │◄─────────┘
│ │
└────────┘
Modern Hybrid: Listen Servers and Relay
Many games use hybrid approaches:
- Listen servers: one player's machine acts as both client and authoritative server. Cheap to run, but gives the host a latency advantage.
- Relay servers: peers exchange traffic through a neutral relay, which hides IP addresses and simplifies NAT traversal without full server-side simulation.
- Mixed deployment: dedicated servers for ranked or competitive play, P2P or listen servers for casual and custom matches.
In P2P games, the host has zero latency to the authoritative state—a significant competitive advantage. Fighting games address this with rollback netcode (see later section). Competitive shooters almost universally use dedicated servers to ensure fair play.
If clients waited for server confirmation of every action, games would feel sluggish and unplayable at any significant latency. Client-side prediction solves this by having clients predict the outcome of their actions immediately.
The Prediction Concept:
Client Server
│ │
T=0 │ Player presses 'Move Forward' │
│ → Immediately predicts movement │
│ → Shows player moving locally │
│ │
│ ────────[Input]────────────────►│
T=60ms│ │ Server receives input
│ │ Simulates same movement
│ │
│◄────────[Confirmation]─────────│
T=120ms│ Server confirms position │
│ → Matches prediction (no change) │
│ │
From the player's perspective, movement is instant—they see themselves move at T=0, not T=120ms.
Key Prediction Rules:
1. Client and Server Run Same Simulation: both execute identical movement code, so predictions usually match the server's result exactly.
2. Client Predicts Ahead by RTT: the client's local view runs roughly one round trip ahead of the last state the server has confirmed.
3. Server State is Authoritative: whenever prediction and server disagree, the server wins and the client corrects.
4. Only Predict What You Control: other players' inputs are unknowable locally, so their entities are interpolated rather than predicted.
```typescript
class ClientPrediction {
  private pendingInputs: Input[] = [];
  private localSequence: number = 0;
  private confirmedState: GameState;
  private predictedState: GameState;

  // Called every frame on client
  processLocalInput(input: Input) {
    // 1. Apply input locally (prediction)
    this.predictedState = this.applyInput(this.predictedState, input);

    // 2. Tag with sequence number
    input.sequence = ++this.localSequence;

    // 3. Store for later reconciliation
    this.pendingInputs.push(input);

    // 4. Send to server
    this.sendToServer(input);
  }

  // Called when server sends authoritative state
  onServerUpdate(serverState: GameState, lastProcessedSequence: number) {
    // 1. Accept server's state as truth
    this.confirmedState = serverState;

    // 2. Discard acknowledged inputs
    this.pendingInputs = this.pendingInputs.filter(
      input => input.sequence > lastProcessedSequence
    );

    // 3. Re-apply unacknowledged inputs (reconciliation)
    this.predictedState = this.confirmedState;
    for (const input of this.pendingInputs) {
      this.predictedState = this.applyInput(this.predictedState, input);
    }
  }
}
```

If prediction matches server confirmation (most of the time), the player sees smooth, responsive gameplay. If a mismatch occurs (collision with another player, server-side validation failure), the client must correct, but well-designed games minimize visible corrections.
Predictions can be wrong. When they are, the client must reconcile its predicted state with the server's authoritative state—ideally without jarring visual corrections.
Why Predictions Fail:
| Cause | Example | Frequency |
|---|---|---|
| Another player | Enemy shot you, affecting your position | Common |
| Server validation | Server rejected impossible action | Occasional |
| Timing differences | Network jitter caused different simulation order | Common |
| Cheating detection | Server blocked suspicious input | Rare (hopefully) |
| Bug/desync | Client-server simulation diverged | Rare |
Reconciliation Process:
Time Client Predicted Server Says Action
────────────────────────────────────────────────────
T=0 Position: (100, 0) - Player runs forward
T=50 Position: (105, 0) - Still predicting...
T=100 Position: (110, 0) Input #5 ACK Server confirms (100, 0)!
↓ ↓
MISMATCH: Client predicted (110, 0), server says (100, 0)
Reconciliation:
1. Reset to server state (100, 0)
2. Re-apply inputs #6, #7, #8 (unconfirmed)
3. Arrive at corrected position (106, 0)
4. Blend/snap visuals to corrected position
Visual Correction Strategies:
- Snap: for errors below the perception threshold, instantly set the corrected position; the player never notices.
- Smooth blend: spread larger corrections over several frames so the player glides to the corrected position rather than teleporting.
- Tolerance bands: ignore tiny client/server differences entirely to avoid constant micro-corrections.
'Rubber banding' is when players visibly snap back to previous positions—a sign of frequent mispredictions or large corrections. It indicates severe latency, packet loss, or simulation desync. Good games minimize this through better prediction, tolerance bands, and smooth blending.
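One common way to implement smooth blending is to track the visual error as an offset that decays toward zero each frame. This is a minimal sketch; all names and the decay constant are illustrative:

```typescript
// Instead of snapping to the reconciled position, keep a visual
// offset (visual position minus corrected position) and shrink it
// exponentially each frame, spreading the correction over ~100-200ms.
type Offset2D = { x: number; y: number };

function blendCorrection(
  visualOffset: Offset2D,
  decayPerSecond: number,  // larger = faster convergence
  dtSeconds: number        // frame delta time
): Offset2D {
  const k = Math.exp(-decayPerSecond * dtSeconds);
  return { x: visualOffset.x * k, y: visualOffset.y * k };
}
```

Each frame the renderer draws at `correctedPosition + visualOffset`, so the entity converges smoothly instead of rubber-banding.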
For other players (entities you don't control), you receive their positions from the server—but updates arrive at discrete intervals (e.g., every 16ms at 60Hz). How do you render smooth motion between updates?
The Problem:
Server Updates:   [State@T100]  [State@T116]  [State@T132]
                       │             │             │
Client Receives:       ├────16ms────►├────16ms────►
                       │             │             │
Render at 144Hz:  ...│..│..│..│..│..│..│..│..│..│..│...
                  (need positions for all these frames!)
Solution 1: Interpolation (Most Common)
Render other players in the past, smoothly interpolating between received positions:
Server sends: Position at T=100, Position at T=116
Client renders at T=108: Interpolate 50% between positions
Interpolated Position = Position@T100 + 0.5 * (Position@T116 - Position@T100)
Interpolation delay (typically 1-3 network updates) ensures you always have two points to interpolate between.
Solution 2: Dead Reckoning (Extrapolation)
Predict where an entity will be based on current velocity and acceleration:
Last known: Position = (100, 0), Velocity = (5, 0)
Time elapsed: 20ms
Predicted Position = (100, 0) + (5, 0) * 0.020 = (100.1, 0)
Extrapolation is risky—the entity might have changed direction—but eliminates visual delay.
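The extrapolation arithmetic above as a small helper (names are illustrative):

```typescript
// Dead reckoning: project the last known position forward along
// the last known velocity. Constant acceleration could be added as
// a second-order term, at the cost of larger errors when wrong.
type Vec2 = { x: number; y: number };

function deadReckon(pos: Vec2, vel: Vec2, elapsedSeconds: number): Vec2 {
  return {
    x: pos.x + vel.x * elapsedSeconds,
    y: pos.y + vel.y * elapsedSeconds,
  };
}

// Example from the text: (100, 0) + (5, 0) * 0.020 = (100.1, 0)
deadReckon({ x: 100, y: 0 }, { x: 5, y: 0 }, 0.02);
```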
Comparison:
| Aspect | Interpolation | Extrapolation (Dead Reckoning) |
|---|---|---|
| Visual accuracy | High (uses actual data) | Variable (guesses future) |
| Latency | Adds delay (30-100ms) | No added delay |
| Correction on update | Smooth (blend between states) | Can be jarring (direction change) |
| Best for | Competitive games (accuracy) | Casual games (responsiveness) |
| Risk | Shooting 'behind' fast movers | Prediction errors |
```typescript
class EntityInterpolator {
  private stateBuffer: {time: number, state: State}[] = [];

  // Called when server sends entity state
  onEntityUpdate(serverTime: number, state: State) {
    this.stateBuffer.push({time: serverTime, state});

    // Keep only recent states
    while (this.stateBuffer.length > 10) {
      this.stateBuffer.shift();
    }
  }

  // Called every render frame
  getInterpolatedState(renderTime: number): State {
    // Render ~100ms in the past (interpolation delay)
    const targetTime = renderTime - INTERPOLATION_DELAY;

    // Find bracketing states
    for (let i = 0; i < this.stateBuffer.length - 1; i++) {
      const before = this.stateBuffer[i];
      const after = this.stateBuffer[i + 1];

      if (before.time <= targetTime && targetTime <= after.time) {
        // Interpolate between before and after
        const t = (targetTime - before.time) / (after.time - before.time);
        return this.lerp(before.state, after.state, t);
      }
    }

    // No bracketing states; extrapolate from last known
    return this.extrapolate(this.stateBuffer.at(-1)!, renderTime);
  }
}
```

Interpolation means you're always seeing other players' past states (~50-100ms ago). Combined with your RTT to the server, this means your 'world' is significantly behind the server. Lag compensation (next section) addresses the gameplay implications of this delay.
Due to interpolation delay and network latency, when you shoot at an enemy, your view of their position is outdated. How can the server determine if the shot hit?
The Problem Visualized:
Time →
Server: Enemy at position A───────►B (moved to B)
↑
Client: Sees enemy at position A (100ms ago)
Fires shot at A
Should hit be registered at:
- A (where player aimed)?
- B (where enemy actually is now)?
Solution: Server-Side Rewind
The server records historical positions and 'rewinds' to check hits:
1. Client fires at timestamp T=1000 (client local time)
2. Server receives shot at T=1060 (60ms network delay)
3. Server calculates: Shot was fired at client's T=1000
4. Server rewinds world state to T=1000 - RTT/2 ≈ T=940
5. Server checks: Was enemy at position A at T=940?
6. If yes → HIT REGISTERED
7. Server applies damage, notifies both clients
The Rewind Algorithm:
```typescript
class LagCompensation {
  // Store historical hitboxes for each player
  private hitboxHistory: Map<PlayerId, {time: number, hitbox: Hitbox}[]>;

  processShot(shooter: Player, aim: Ray, clientTime: number) {
    // 1. Calculate what time the shooter saw when firing
    const viewTime = this.serverTime - shooter.ping / 2 - INTERPOLATION_DELAY;

    // 2. For each potential target, get hitbox at that time
    for (const target of this.players) {
      if (target === shooter) continue;

      const historicalHitbox = this.getHitboxAtTime(target.id, viewTime);

      // 3. Check if shot ray intersects historical hitbox
      if (historicalHitbox && this.rayIntersects(aim, historicalHitbox)) {
        // HIT! Apply damage at CURRENT time
        this.applyDamage(target, shooter.weapon.damage);
        this.notifyHit(shooter, target);
        return;
      }
    }

    // Miss
    this.notifyMiss(shooter);
  }

  getHitboxAtTime(playerId: PlayerId, time: number): Hitbox | null {
    const history = this.hitboxHistory.get(playerId);
    // Interpolate between stored snapshots to reconstruct hitbox
    return this.interpolateHitbox(history, time);
  }
}
```

Lag compensation creates a 'peeker's advantage': a player peeking around a corner sees their enemy before the enemy's client shows the peeker. With lag compensation, the peeker's shots register even though the victim was 'behind cover' from their own perspective. There's no perfect solution; games balance between favoring shooters (satisfying hits) and defenders (fair damage avoidance).
Rollback netcode is a specialized networking approach developed for fighting games, where frame-perfect timing is essential. Unlike delay-based netcode (which adds input lag to synchronize players), rollback simulates immediately and corrects mistakes afterward.
Delay-Based vs. Rollback:
- Delay-based: no frame is simulated until both players' inputs for it have arrived. Synchronization is simple, but every action carries input lag proportional to network delay.
- Rollback: each client applies its local input immediately and predicts the opponent's input. When the real input arrives and differs, the game rewinds to the mispredicted frame and resimulates. Local inputs always feel instant.
How Rollback Works:
Frame 10: Simulate with predicted input for opponent (assume no action)
Frame 11: Simulate with predicted input
Frame 12: Receive actual input from Frame 10 → prediction was WRONG!
Rollback Process:
1. Restore game state to Frame 10
2. Apply ACTUAL opponent input for Frame 10
3. Re-simulate Frame 11 with actual input
4. Re-simulate Frame 12 with actual input
5. Current frame now has correct state
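The rollback loop above can be sketched as follows. This is a minimal illustration under stated assumptions (deterministic step function, cheap state copies, opponent predicted to repeat their last input); the names are ours, not GGPO's actual API:

```typescript
// Toy deterministic simulation: state is a frame counter plus a log
// of combined inputs. Any deterministic step function works here.
type Input = number;  // e.g. a button bitmask
type State = { frame: number; log: Input[] };

function step(s: State, localInput: Input, remoteInput: Input): State {
  return { frame: s.frame + 1, log: [...s.log, localInput ^ remoteInput] };
}

class RollbackSession {
  private states: State[] = [{ frame: 0, log: [] }];  // snapshot per frame
  private localInputs: Input[] = [];
  private predictedRemote: Input[] = [];

  // Simulate the next frame immediately, predicting the remote input.
  advance(localInput: Input): State {
    const predicted =
      this.predictedRemote[this.predictedRemote.length - 1] ?? 0;
    this.localInputs.push(localInput);
    this.predictedRemote.push(predicted);
    const next = step(this.states[this.states.length - 1], localInput, predicted);
    this.states.push(next);
    return next;
  }

  // Called when the opponent's real input for `frame` finally arrives.
  onRemoteInput(frame: number, actual: Input): State {
    if (this.predictedRemote[frame] === actual) {
      return this.states[this.states.length - 1];  // prediction held
    }
    // Misprediction: restore the snapshot before `frame`, resimulate.
    this.predictedRemote[frame] = actual;
    let s = this.states[frame];
    for (let f = frame; f < this.localInputs.length; f++) {
      s = step(s, this.localInputs[f], this.predictedRemote[f]);
      this.states[f + 1] = s;
    }
    return s;
  }
}
```

The key property is that resimulation replays the stored local inputs unchanged, so the player's own actions are never lost, only the opponent's are corrected.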
The GGPO Library:
GGPO (Good Game, Peace Out) pioneered rollback netcode for fighting games. Key features:
- Transmits only player inputs (a few bytes per frame), never full game state.
- Speculatively executes frames using predicted remote inputs, rolling back and resimulating when predictions prove wrong.
- Supports a small configurable input delay to bound how far rollbacks can reach.
- Released as open source under the MIT license, and its approach now underpins rollback implementations across the genre.
Rollback netcode requires deterministic simulation (both players compute identical results from identical inputs), fast state save/restore, and efficient resimulation (must redo multiple frames in one render frame). Games with complex physics or large state are harder to implement with rollback.
Games must transmit state updates efficiently—minimizing bandwidth while providing enough information for smooth gameplay. Packet design is a critical optimization.
Typical Game Packet Contents:
| Field | Size | Purpose |
|---|---|---|
| Sequence Number | 2-4 bytes | Order packets, detect loss |
| Timestamp | 4 bytes | Server knows when input was made |
| Input Bitmask | 2 bytes | Which buttons pressed (WASD, shoot, jump) |
| Aim Direction | 4-8 bytes | Where player is looking (yaw/pitch or quaternion) |
| Action Data | variable | Specific actions (item use, chat) |
| ACK of Server | 2-4 bytes | Last server packet received |
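A sketch of how the input bitmask and aim fields from the table might be packed. The button layout and helper names are illustrative, not from any particular engine:

```typescript
// Eight buttons fit in one byte, leaving the 2-byte field room to grow.
const BTN = {
  forward: 1 << 0, back: 1 << 1, left: 1 << 2, right: 1 << 3,
  jump: 1 << 4, crouch: 1 << 5, shoot: 1 << 6, reload: 1 << 7,
} as const;

function packButtons(pressed: (keyof typeof BTN)[]): number {
  return pressed.reduce((mask, b) => mask | BTN[b], 0);
}

function isPressed(mask: number, b: keyof typeof BTN): boolean {
  return (mask & BTN[b]) !== 0;
}

// Quantize yaw (0-360 degrees) into 16 bits: ~0.0055 degree steps,
// half the size of a 32-bit float with no perceptible aim error.
function quantizeYaw(deg: number): number {
  return Math.round(((deg % 360 + 360) % 360) / 360 * 65535);
}
```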
Server-to-Client Packet (State Update):
┌─────────────────────────────────────────────────────┐
│ Header (12 bytes) │
│ ├ Sequence: 12345 │
│ ├ ACK: 9876 (last client input processed) │
│ ├ Timestamp: 1705500000 │
│ └ Entity Count: 15 │
├─────────────────────────────────────────────────────┤
│ Local Player (always full, ~40 bytes) │
│ ├ Position: (1234.5, 567.8, 90.1) │
│ ├ Velocity: (5.0, 0.0, 0.0) │
│ ├ Health: 85 │
│ └ Ammo: 24 │
├─────────────────────────────────────────────────────┤
│ Other Entities (delta compressed, ~20 bytes each) │
│ ├ Entity #5: Position only (others unchanged) │
│ ├ Entity #8: Position + Animation │
│ └ ... │
└─────────────────────────────────────────────────────┘
Bandwidth Optimization Techniques:
- Delta compression: send only fields that changed since the last acknowledged snapshot.
- Quantization: store angles, positions, and velocities in fewer bits (e.g., a 16-bit yaw instead of a 32-bit float).
- Bit packing: collapse booleans and button states into bitmasks.
- Interest management: only send entities a player can currently see or affect.
- Update prioritization: nearby, fast-moving entities update every tick; distant ones less often.
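Delta compression, which the packet layout above relies on, can be sketched with a change mask: one bitfield says which fields follow, and unchanged fields are omitted entirely. Field names and the flat number array are illustrative simplifications:

```typescript
// Send a change mask plus only the fields that differ from the last
// snapshot the receiver acknowledged. Decoding applies the changes on
// top of that same baseline, so both sides stay in sync.
interface EntityState { x: number; y: number; health: number; anim: number }

const FIELD_X = 1, FIELD_Y = 2, FIELD_HEALTH = 4, FIELD_ANIM = 8;

function deltaEncode(prev: EntityState, next: EntityState) {
  let mask = 0;
  const values: number[] = [];
  if (next.x !== prev.x)           { mask |= FIELD_X;      values.push(next.x); }
  if (next.y !== prev.y)           { mask |= FIELD_Y;      values.push(next.y); }
  if (next.health !== prev.health) { mask |= FIELD_HEALTH; values.push(next.health); }
  if (next.anim !== prev.anim)     { mask |= FIELD_ANIM;   values.push(next.anim); }
  return { mask, values };
}

function deltaDecode(
  prev: EntityState,
  delta: { mask: number; values: number[] }
): EntityState {
  const out = { ...prev };
  let i = 0;
  if (delta.mask & FIELD_X)      out.x = delta.values[i++];
  if (delta.mask & FIELD_Y)      out.y = delta.values[i++];
  if (delta.mask & FIELD_HEALTH) out.health = delta.values[i++];
  if (delta.mask & FIELD_ANIM)   out.anim = delta.values[i++];
  return out;
}
```

An idle entity costs one mask byte per update instead of a full state, which is how "~20 bytes each" stays achievable.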
A typical competitive game targets 1-5 KB/s per player (8-40 Kbps). At 64 ticks/second, that's ~15-80 bytes per update. Every byte matters! Games like Counter-Strike have meticulously optimized netcode developed over 20+ years.
Competitive games require server infrastructure distributed globally to minimize player latency. Understanding how major games deploy their servers illustrates UDP networking at scale.
Regional Server Distribution:
                    Global Game Backend
                            │
        ┌───────────────────┼───────────────────┐
        │                   │                   │
 ┌──────┴──────┐     ┌──────┴──────┐     ┌──────┴──────┐
 │   NA-WEST   │     │   EUROPE    │     │    ASIA     │
 │   Cluster   │     │   Cluster   │     │   Cluster   │
 └──────┬──────┘     └──────┬──────┘     └──────┬──────┘
        │                   │                   │
 ┌──────┴──────┐     ┌──────┴──────┐     ┌──────┴──────┐
 │ LA          │     │ London      │     │ Tokyo       │
 │ Seattle     │     │ Frankfurt   │     │ Singapore   │
 │ Dallas      │     │ Stockholm   │     │ Seoul       │
 └─────────────┘     └─────────────┘     └─────────────┘
| Component | Role | Technologies |
|---|---|---|
| Game Servers | Run actual game simulation | Custom engines on bare metal/VMs |
| Matchmaking | Group players by skill/region | Distributed queues, ranking algorithms |
| Relay Networks | Optimize UDP routing | Steam Datagram Relay, Riot Direct |
| Edge PoPs | Reduce latency to players | Anycast, geographic load balancing |
| Anti-Cheat | Detect/prevent cheating | Server-side validation, client monitoring |
| Analytics | Track performance, player behavior | Time-series databases, real-time dashboards |
Custom Routing Networks:
Major game publishers build their own network infrastructure to bypass internet routing inefficiencies:
Steam Datagram Relay (Valve):
- Routes game traffic through relays on Valve's private backbone instead of the public internet.
- Hides server and player IP addresses, absorbing DDoS attacks at the relay edge.
- Often finds lower-latency paths than default internet routing.
Riot Direct (Riot Games):
- Riot's private network for League of Legends traffic, built on direct peering agreements with ISPs.
- Gets player packets off the congested public internet and onto Riot-controlled links as early as possible.
- Designed to reduce both average latency and routing instability across regions.
Why Build Custom Networks?
- Internet routing (BGP) optimizes for cost and policy, not latency; a private backbone can pick the fastest path.
- Controlled, consistent paths mean less jitter and fewer mid-match route changes.
- Relays hide game server IPs, making targeted DDoS attacks against matches far harder.
Modern games use container orchestration (Kubernetes, Agones) to dynamically provision game servers based on demand. A match request triggers server allocation within seconds. After the match, the server is recycled. This enables efficient resource utilization while maintaining low-latency bare-metal performance.
Online gaming pushes UDP networking to its limits, requiring ultra-low latency, efficient bandwidth usage, and sophisticated application-layer reliability. Let's consolidate the key insights:
The UDP Gaming Stack:
┌────────────────────────────────────────────────┐
│ Game Logic (prediction, reconciliation) │
├────────────────────────────────────────────────┤
│ Networking Layer (delta compression, ACKs) │
├────────────────────────────────────────────────┤
│ UDP (raw speed, no guarantees) │
├────────────────────────────────────────────────┤
│ Custom Relay Network (optimized routing) │
└────────────────────────────────────────────────┘
Gaming demonstrates that UDP is not just 'TCP without reliability'—it's a platform for building application-specific reliability and consistency exactly suited to the use case. The techniques developed for games (prediction, reconciliation, lag compensation) represent advanced distributed systems engineering.
You have now completed the UDP Applications module. You understand how DNS, DHCP, SNMP, streaming media, and gaming each leverage UDP's characteristics for their specific requirements. From name resolution to real-time multiplayer, you can analyze why UDP is chosen and how applications build reliability atop an unreliable transport.