HTTP/3 is not merely "HTTP/2 over QUIC." It is a fundamental redesign of HTTP's mapping to a transport protocol, leveraging QUIC's unique capabilities to solve problems that HTTP/2 inherited from TCP. Where HTTP/2 added multiplexing at the application layer atop TCP's single byte stream, HTTP/3 uses QUIC's native stream multiplexing, achieving true parallel request handling without head-of-line blocking.
The Evolution:
Standardized in RFC 9114 (June 2022), HTTP/3 is now supported by all major browsers and carries approximately 25-30% of web traffic as of 2024.
By the end of this page, you will understand how HTTP/3 maps to QUIC's stream architecture, the QPACK header compression mechanism and why HTTP/2's HPACK couldn't be used, how HTTP/3 eliminates head-of-line blocking that affected HTTP/2, server push in HTTP/3 and its practical limitations, and the deployment landscape and future trajectory of HTTP/3.
HTTP/2 was a significant advancement over HTTP/1.1, introducing multiplexing, header compression (HPACK), and server push. However, HTTP/2 inherited a fundamental limitation from TCP: transport-layer head-of-line (HoL) blocking.
The Multiplexing Promise:
HTTP/2's key feature was multiplexing multiple HTTP requests and responses over a single TCP connection. Instead of opening 6+ connections like HTTP/1.1, a browser could send all requests over one connection, interleaving frames from different request/response streams.
The Problem:
TCP provides a single, ordered byte stream. When the HTTP/2 multiplexed frames are packed into this stream and a single TCP packet is lost, TCP stalls the entire stream waiting for the retransmission—even though other HTTP/2 streams' frames were delivered successfully.
Ironically, HTTP/2's multiplexing made this worse than HTTP/1.1 with multiple connections. With 6 separate TCP connections, packet loss on one connection only stalls 1/6 of the requests. With HTTP/2's single connection, packet loss stalls all requests.
| Protocol | Connection Model | Packet Loss Impact |
|---|---|---|
| HTTP/1.1 (6 conn) | 6 parallel TCP connections | Loss stalls 1/6 of requests (17%) |
| HTTP/2 (TCP) | 1 TCP connection, multiplexed | Loss stalls ALL requests (100%) |
| HTTP/3 (QUIC) | 1 QUIC connection, native streams | Loss stalls only the affected stream |
At 2% packet loss (common on mobile networks), HTTP/2 over TCP performs measurably worse than HTTP/1.1 with multiple connections. This counterintuitive result—where the "newer" protocol is slower—drove the development of HTTP/3. QUIC was designed specifically to provide multiplexing without HoL blocking.
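The table's intuition can be checked with a back-of-envelope simulation. This is a deliberately crude model (one packet per request, and a lost packet on a TCP connection stalls every request multiplexed onto it), not a network benchmark; all names are my own.

```python
import random

def stalled_fraction(num_connections: int, num_requests: int,
                     loss_prob: float, trials: int = 2000,
                     seed: int = 42) -> float:
    """Fraction of requests stalled by loss, under a toy model:
    each request is one packet, and one lost packet on a TCP
    connection stalls every request sharing it (HoL blocking)."""
    rng = random.Random(seed)
    per_conn = num_requests // num_connections
    stalled = 0
    for _ in range(trials):
        for _ in range(num_connections):
            # Any loss on this connection stalls all of its requests.
            if any(rng.random() < loss_prob for _ in range(per_conn)):
                stalled += per_conn
    return stalled / (trials * num_requests)

# 30 requests at 2% per-packet loss (typical mobile conditions):
h1 = stalled_fraction(num_connections=6, num_requests=30, loss_prob=0.02)
h2 = stalled_fraction(num_connections=1, num_requests=30, loss_prob=0.02)
print(f"HTTP/1.1, 6 connections: {h1:.0%} of requests stalled")
print(f"HTTP/2,   1 connection:  {h2:.0%} of requests stalled")
```

Under this model the single multiplexed connection stalls several times more requests than six parallel ones, which is exactly the regression HTTP/3's per-stream loss recovery removes.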
HTTP/3 exploits QUIC's native stream multiplexing. Each HTTP request/response exchange uses a dedicated QUIC stream. Stream independence means packet loss on one stream doesn't block others.
Request/Response Streams:
For each HTTP request, the client opens a new bidirectional QUIC stream:
Each request/response pair is self-contained on its stream, isolated from other concurrent requests.
```
HTTP/3 Request/Response on QUIC Streams:
========================================

Client opens bidirectional stream for each request:

Stream 0 (client-initiated bidi):
  Client → Server: HEADERS frame (GET /index.html)
  Server → Client: HEADERS frame (200 OK, content-type: text/html)
  Server → Client: DATA frame (first 16KB of HTML)
  Server → Client: DATA frame (rest of HTML + FIN)

Stream 4 (client-initiated bidi):
  Client → Server: HEADERS frame (GET /style.css)
  Server → Client: HEADERS frame (200 OK, content-type: text/css)
  Server → Client: DATA frame (CSS content + FIN)

Stream 8 (client-initiated bidi):
  Client → Server: HEADERS frame (GET /script.js)
  Server → Client: HEADERS frame (200 OK, content-type: application/javascript)
  Server → Client: DATA frame (JS content + FIN)

Key insight:
- If Stream 4's packet is lost, only CSS stalls
- Stream 0 (HTML) and Stream 8 (JS) continue unaffected
- Application receives resources as they complete, not in order

Special streams (unidirectional):
  Stream 2 (client→server): QPACK encoder instructions
  Stream 3 (server→client): QPACK encoder instructions
  Stream 6 (client→server): Control stream (SETTINGS)
  Stream 7 (server→client): Control stream (SETTINGS)
```

The critical difference from HTTP/2: QUIC doesn't enforce ordering across streams. When the application reads from Stream 8, it receives data as available, not delayed by Stream 4's issues. This maps perfectly to HTTP's independent request semantics.
HTTP/3 defines its own framing layer on top of QUIC streams. These HTTP/3 frames are distinct from QUIC frames—HTTP/3 frames are carried within QUIC STREAM frames.
HTTP/3 Frame Structure:
Each HTTP/3 frame consists of a variable-length integer frame type, a variable-length integer payload length, and a payload of exactly that many bytes.
This simple structure allows easy parsing and extensibility.
```
HTTP/3 Frame Format:
====================

+------------+------------+-------------------+
| Type       | Length     | Payload           |
| (varint)   | (varint)   | (Length bytes)    |
+------------+------------+-------------------+

Core Frame Types:
=================

DATA (0x00): Carries HTTP message body data.
  +------------+------------+------------------+
  | Type: 0x00 | Length     | Message Body     |
  +------------+------------+------------------+

HEADERS (0x01): Carries QPACK-encoded HTTP headers.
  +------------+------------+------------------+
  | Type: 0x01 | Length     | Encoded Headers  |
  +------------+------------+------------------+

CANCEL_PUSH (0x03): Cancels a server push before completion.
  +------------+------------+------------------+
  | Type: 0x03 | Length     | Push ID          |
  +------------+------------+------------------+

SETTINGS (0x04): Configuration parameters (on control stream only).
  +------------+------------+------------------+
  | Type: 0x04 | Length     | Setting entries  |
  +------------+------------+------------------+

PUSH_PROMISE (0x05): Announces a server push (on request stream).
  +------------+------------+------------------+
  | Type: 0x05 | Length     | Push ID+Headers  |
  +------------+------------+------------------+

GOAWAY (0x07): Initiates graceful connection shutdown.
  +------------+------------+------------------+
  | Type: 0x07 | Length     | Stream/Push ID   |
  +------------+------------+------------------+

MAX_PUSH_ID (0x0d): Client grants server permission to push.
  +------------+------------+------------------+
  | Type: 0x0d | Length     | Max Push ID      |
  +------------+------------+------------------+
```

| Frame | Type ID | Sent On | Purpose |
|---|---|---|---|
| DATA | 0x00 | Request streams | Carry request/response body |
| HEADERS | 0x01 | Request streams | Carry request/response headers (QPACK encoded) |
| CANCEL_PUSH | 0x03 | Control stream | Cancel a promised push |
| SETTINGS | 0x04 | Control stream | Connection configuration |
| PUSH_PROMISE | 0x05 | Request stream | Announce upcoming push |
| GOAWAY | 0x07 | Control stream | Graceful shutdown |
| MAX_PUSH_ID | 0x0d | Control stream | Permit server pushes |
HTTP/3 has fewer frame types than HTTP/2 because QUIC handles many concerns at the transport layer: there is no WINDOW_UPDATE (QUIC provides flow control), no PRIORITY (deprecated in favor of the extensible priorities mechanism), no RST_STREAM (QUIC's RESET_STREAM serves that role), and no PING (QUIC has its own PING frame). This simplifies HTTP/3 considerably.
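The type-length-payload layout is simple enough to implement directly. Below is a minimal sketch of the RFC 9000 variable-length integer encoding plus the RFC 9114 frame layout; the function names are my own, not a real library's API.

```python
def encode_varint(value: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer (RFC 9000 §16).
    The top two bits of the first byte select the total length: 1/2/4/8 bytes."""
    if value < 2**6:
        return value.to_bytes(1, "big")
    if value < 2**14:
        return (value | 0x4000).to_bytes(2, "big")
    if value < 2**30:
        return (value | 0x8000_0000).to_bytes(4, "big")
    if value < 2**62:
        return (value | 0xC000_0000_0000_0000).to_bytes(8, "big")
    raise ValueError("varint out of range")

def decode_varint(buf: bytes, pos: int = 0):
    """Return (value, next_pos)."""
    length = 1 << (buf[pos] >> 6)           # top 2 bits give the length
    value = int.from_bytes(buf[pos:pos + length], "big")
    value &= (1 << (8 * length - 2)) - 1    # mask off the length bits
    return value, pos + length

def encode_frame(frame_type: int, payload: bytes) -> bytes:
    """HTTP/3 frame = type varint + length varint + payload (RFC 9114 §7.1)."""
    return encode_varint(frame_type) + encode_varint(len(payload)) + payload

def decode_frame(buf: bytes, pos: int = 0):
    ftype, pos = decode_varint(buf, pos)
    length, pos = decode_varint(buf, pos)
    return ftype, buf[pos:pos + length], pos + length

wire = encode_frame(0x00, b"hello")   # a DATA frame
ftype, payload, _ = decode_frame(wire)
print(hex(ftype), payload)            # prints: 0x0 b'hello'
```

The value 15293 encodes to the two bytes `0x7b 0xbd`, matching the worked example in RFC 9000.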
HTTP headers are repetitive and verbose. Header compression significantly reduces overhead, especially for small requests. HTTP/2 used HPACK for header compression, but HPACK has a fundamental incompatibility with QUIC: it requires strict ordering.
Why HPACK Doesn't Work for HTTP/3:
HPACK maintains a dynamic table of recently-seen headers. When encoding a header that matches a table entry, HPACK sends just the index (1-2 bytes instead of many). The encoder and decoder must process headers in identical order to keep tables synchronized.
With QUIC, requests may arrive out of order (different streams). If Request B references a header inserted by Request A, but Request A's packet is delayed, Request B can't be decoded—reintroducing head-of-line blocking at the HTTP layer!
QPACK: The Solution:
QPACK (RFC 9204) redesigns header compression for unordered delivery. It uses the same encoding concepts as HPACK but adds explicit synchronization mechanisms.
```
QPACK Encoding with Synchronization:
====================================

Initial state:
  Static table: 99 entries (predefined)
  Dynamic table: empty (insert count = 0)

Time 1: Client sends Request A on Stream 0
  Encodes: ":method: GET" (static index 17)
           ":path: /api/users" (literal, insert to dynamic)
  Sends on encoder stream: Insert ":path: /api/users" at IC=1
  Sends on Stream 0: HEADERS with RIC=1
    (This header block requires insert count >= 1)

Time 2: Client sends Request B on Stream 4 (before IC=1 acked)
  Wants to reference ":path: /api/users" (dynamic entry at IC=1)

  Option A (no blocking): Encode literally, don't reference
    Sends on Stream 4: HEADERS with RIC=0
    Larger but won't block

  Option B (allow blocking): Reference dynamic entry
    Sends on Stream 4: HEADERS with RIC=1
    Decoder will wait for encoder stream to catch up

Time 3: Decoder receives Stream 4 packet
  If RIC=1 but only IC=0 processed:
    Option A: Headers already decoded (no reference)
    Option B: Block until encoder stream delivers IC=1

Time 4: Decoder processes encoder stream, IC=1 inserted
  Decoder sends Section Ack on decoder stream
  Any blocked headers now decodable

Time 5: Encoder receives ack
  Encoder knows decoder has IC=1
  Safe to evict old entries, reference IC=1 freely
```

QPACK allows endpoints to choose their tradeoff. Setting max_blocked_streams=0 means never block (lower compression, lowest latency). Higher values allow more blocking (better compression, potential latency). In practice, most implementations allow some blocking, since the encoder stream is small and fast.
Let's trace a complete HTTP/3 request/response cycle, understanding exactly what happens at each layer.
The Complete Flow:
```
HTTP/3 Request: GET https://example.com/api/data
================================================

1. CONNECTION ESTABLISHMENT (if not already connected)
   ───────────────────────────────────────────────────
   QUIC 1-RTT or 0-RTT handshake
   After handshake:
   - Control stream 6 (client→server): client SETTINGS
   - Control stream 7 (server→client): server SETTINGS
   - QPACK encoder streams established (2, 3)
   - QPACK decoder streams established (if needed)

2. REQUEST PREPARATION
   ───────────────────
   HTTP/3 client prepares request headers:
     :method = GET
     :scheme = https
     :authority = example.com
     :path = /api/data
     accept = application/json
     accept-encoding = gzip, deflate, br
     user-agent = Mozilla/5.0...

3. QPACK ENCODING
   ──────────────
   QPACK encodes headers:
     :method GET   → static index 17
     :scheme https → static index 23
     :authority    → literal with name reference, value "example.com"
     :path         → literal, value "/api/data"
     accept        → literal or dynamic reference
     ...
   Encoded header block: ~50 bytes (vs ~200 bytes uncompressed)
   Required Insert Count: 0 (only static references)

4. HTTP/3 FRAMING
   ──────────────
   Wrap in HEADERS frame:
   ┌──────────────────────────────────┐
   │ Type: 0x01 (HEADERS)             │
   │ Length: 50                       │
   │ Payload: [QPACK encoded headers] │
   └──────────────────────────────────┘

5. QUIC STREAM
   ───────────
   Open new bidirectional stream (ID=0 for first request)
   Send STREAM frame containing HTTP/3 HEADERS frame:
     QUIC STREAM Frame:
       Stream ID: 0
       Offset: 0
       Length: 53
       Data: [HTTP/3 HEADERS frame bytes]
       FIN: true (no request body for GET)

6. QUIC PACKET
   ───────────
   Pack STREAM frame into QUIC short header packet:
     QUIC Packet:
       Header: 0x40 (short header, key phase 0)
       DCID: [server's connection ID]
       Packet Number: 12 (encrypted)
       Payload (encrypted):
         STREAM frame (ID=0, HTTP/3 request)
         [possibly other frames like ACK]

7. UDP DATAGRAM
   ────────────
   QUIC packet becomes UDP payload:
     UDP: [src port 54321] [dst port 443] [length] [checksum]
     IP:  [client IP] → [server IP]
   Packet sent!
```

```
HTTP/3 Response: 200 OK with JSON body
======================================

1. SERVER RECEIVES REQUEST
   ───────────────────────
   UDP datagram → QUIC packet → STREAM frame → HTTP/3 HEADERS
   QPACK decodes headers
   Server processes: GET /api/data

2. SERVER PREPARES RESPONSE
   ────────────────────────
   Response headers:
     :status = 200
     content-type = application/json
     content-length = 1234
     cache-control = max-age=60
   Response body:
     {"users": [...], "total": 42, ...}

3. SERVER SENDS RESPONSE
   ─────────────────────
   On same Stream 0 (bidirectional):

   Packet 1:
     STREAM frame (ID=0):
       HTTP/3 HEADERS frame:
         :status 200, content-type, etc.
       HTTP/3 DATA frame (partial):
         First 1000 bytes of JSON

   Packet 2:
     STREAM frame (ID=0):
       HTTP/3 DATA frame:
         Remaining 234 bytes of JSON
       FIN bit set (stream complete)

4. CLIENT RECEIVES RESPONSE
   ────────────────────────
   Packet 1 arrives:
     QPACK decodes response headers
     Application notified: 200 OK, JSON, 1234 bytes
     First 1000 bytes delivered to app

   Packet 2 arrives:
     Final 234 bytes delivered
     Stream 0 marked complete
     Application has full response

Total latency (0-RTT case):
  Request sent at T=0
  Response headers at T=RTT
  Full body at T=RTT + transmission time
  No connection establishment overhead!
```

Notice how cleanly HTTP/3 maps to QUIC. One stream per request, bidirectional for request/response, independent from other streams. The complexity is in QPACK (header compression synchronization) and the control/encoder streams; the core request pattern is simpler than HTTP/2.
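Steps 3-4 of the response flow can be sketched as plain data flow: the server emits HEADERS and DATA frames on one stream, and the client reassembles the body regardless of how QUIC packetizes them. The `Frame` type and function names here are hypothetical, not a real library API.

```python
from typing import NamedTuple

class Frame(NamedTuple):
    kind: str       # "HEADERS" or "DATA"
    payload: bytes
    fin: bool = False

def serve(body: bytes, chunk: int = 1000):
    """Server side: one HEADERS frame, then DATA frames; FIN on the last."""
    yield Frame("HEADERS", b":status=200")
    for i in range(0, len(body), chunk):
        yield Frame("DATA", body[i:i + chunk], fin=i + chunk >= len(body))

def client_reassemble(frames):
    """Client side: collect headers and concatenate DATA until FIN."""
    status, buf = None, b""
    for f in frames:
        if f.kind == "HEADERS":
            status = f.payload
        else:
            buf += f.payload
        if f.fin:
            break
    return status, buf

body = b"{" + b"x" * 1232 + b"}"    # a 1234-byte JSON-ish body, as in the diagram
status, received = client_reassemble(serve(body))
print(status, len(received))         # prints: b':status=200' 1234
```

Because each request lives on its own stream, this reassembly loop never has to coordinate with any other request's frames.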
HTTP/3 inherits server push from HTTP/2, with modifications for QUIC's stream model. Server push allows a server to send resources before the client requests them—potentially improving page load time by anticipating client needs.
How HTTP/3 Push Works:
```
HTTP/3 Server Push Example:
===========================

Client requests: GET /index.html (Stream 0)
Server decides to push: /style.css

On Stream 0 (original request stream):
  Server sends:
    PUSH_PROMISE Frame {
      Type: 0x05
      Length: ...
      Push ID: 0
      Encoded Header Field Section:
        :method GET
        :scheme https
        :authority example.com
        :path /style.css
    }
    HEADERS Frame (response to index.html)
    DATA Frame (index.html content)

On Stream 3 (server-initiated unidirectional, for Push ID 0):
  Server sends:
    Push stream type indicator
    Push ID: 0
    HEADERS Frame {
      :status 200
      content-type: text/css
      ...
    }
    DATA Frame {
      (CSS content)
    }

Client behavior:
- Receives PUSH_PROMISE, stores promise for /style.css
- When parser sees <link href="/style.css">, checks cache
- Push already in progress! No new request needed.
- Uses pushed response when Stream 3 completes.

If client doesn't want push:
  Client sends CANCEL_PUSH(Push ID: 0) on control stream
  Server should stop sending on Stream 3
```

Despite being specified in HTTP/2 and HTTP/3, server push has seen minimal real-world adoption. Studies show push often hurts performance: servers push resources clients already have cached, and push consumes bandwidth that could serve explicitly-requested resources. Chrome has disabled HTTP/2 push by default. Consider push deprecated in practice; use preload hints instead.
Why Push Failed:
No cache check: Server doesn't know if client has /style.css cached. Pushing cached resources wastes bandwidth.
Priority inversion: Pushed resources may compete with critical resources client explicitly requested.
Complex implementation: Correct push logic requires sophisticated understanding of page structure.
Better alternatives: <link rel=preload> hints and Early Hints (103 responses) let clients control what's fetched.
CDN conflicts: Push from origin may conflict with CDN caching strategies.
The Future: HTTP/3 includes push for completeness, but the industry has largely moved on. Most HTTP/3 deployments disable or ignore push entirely.
HTTP/3 connections are long-lived, potentially carrying thousands of requests. Proper management—configuration, errors, and shutdown—is essential for robust implementations.
SETTINGS Frame:
Each endpoint sends a SETTINGS frame on its control stream immediately after opening the connection. Settings are not negotiated—each side declares its own limits and the peer must respect them.
Key HTTP/3 Settings:
| Setting | ID | Default | Meaning |
|---|---|---|---|
| SETTINGS_MAX_FIELD_SECTION_SIZE | 0x06 | Unlimited | Max decompressed header block size |
| SETTINGS_QPACK_MAX_TABLE_CAPACITY | 0x01 | 0 | Max dynamic table size encoder may use |
| SETTINGS_QPACK_BLOCKED_STREAMS | 0x07 | 0 | Max streams that can block on QPACK |
| SETTINGS_ENABLE_CONNECT_PROTOCOL | 0x08 | 0 | Extended CONNECT for WebSocket/WebTransport |
| SETTINGS_H3_DATAGRAM | 0x33 | 0 | HTTP Datagrams (unreliable) support |
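Because settings are declared rather than negotiated, each endpoint simply applies the peer's declared values as caps on its own sending behavior, falling back to the defaults for anything absent. A minimal sketch; the dictionary keys are my own naming, not wire identifiers.

```python
# Defaults per the table above: unlimited field section size,
# no QPACK dynamic table, no blocked streams.
H3_DEFAULTS = {
    "max_field_section_size": float("inf"),
    "qpack_max_table_capacity": 0,
    "qpack_blocked_streams": 0,
}

def effective_limits(peer_settings: dict) -> dict:
    """Peer SETTINGS override defaults; absent settings keep defaults."""
    return {**H3_DEFAULTS, **peer_settings}

limits = effective_limits({"qpack_max_table_capacity": 4096,
                           "qpack_blocked_streams": 16})
# The encoder may now use up to 4096 bytes of dynamic table and
# leave up to 16 streams blocked on QPACK references.
print(limits["qpack_max_table_capacity"], limits["qpack_blocked_streams"])  # prints: 4096 16
```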
Graceful Shutdown (GOAWAY):
When a server wants to stop accepting new requests (for maintenance, load shedding, etc.), it sends GOAWAY:
```
GOAWAY Frame {
  Type: 0x07
  Stream ID: 100   // Last request stream that will be processed
}
```
This tells the client that requests on streams with IDs up to 100 may still complete, while requests on higher-numbered streams will not be processed. Clients should open a new connection and retry the unprocessed requests there.
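That partition is simple to express in code. A sketch of client-side GOAWAY handling, with hypothetical names:

```python
def handle_goaway(last_stream_id: int, in_flight: dict):
    """Sketch of client GOAWAY handling: requests on streams with
    ID <= last_stream_id may still complete on this connection;
    higher IDs were not processed and are safe to retry elsewhere."""
    keep = {sid: req for sid, req in in_flight.items() if sid <= last_stream_id}
    retry = [req for sid, req in in_flight.items() if sid > last_stream_id]
    return keep, retry

# Four in-flight requests when GOAWAY(Stream ID: 100) arrives:
in_flight = {96: "GET /a", 100: "GET /b", 104: "GET /c", 108: "GET /d"}
keep, retry = handle_goaway(100, in_flight)
print(sorted(keep))   # prints: [96, 100]
print(retry)          # prints: ['GET /c', 'GET /d']
```

Retrying only the rejected requests is safe because the server guarantees it never started processing them.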
Error Handling:
HTTP/3 defines error codes for various failure modes:
```
HTTP/3 Error Codes (sent in QUIC CONNECTION_CLOSE or RESET_STREAM):
===================================================================

Connection Errors (close entire connection):

  H3_NO_ERROR (0x0100):                Graceful closure, no error
  H3_GENERAL_PROTOCOL_ERROR (0x0101):  Unspecific protocol error
  H3_INTERNAL_ERROR (0x0102):          Implementation error
  H3_STREAM_CREATION_ERROR (0x0103):   Stream violates HTTP/3 rules
  H3_CLOSED_CRITICAL_STREAM (0x0104):  Control or QPACK stream closed
  H3_FRAME_UNEXPECTED (0x0105):        Frame not allowed on this stream
  H3_FRAME_ERROR (0x0106):             Frame malformed
  H3_EXCESSIVE_LOAD (0x0107):          Connection closed due to excessive load
  H3_SETTINGS_ERROR (0x0109):          SETTINGS frame error
  H3_REQUEST_REJECTED (0x010b):        Request not processed (retry possible)
  H3_REQUEST_CANCELLED (0x010c):       Request cancelled by client

QPACK Errors:

  QPACK_DECOMPRESSION_FAILED (0x0200): Could not decompress headers
  QPACK_ENCODER_STREAM_ERROR (0x0201): Encoder stream error
  QPACK_DECODER_STREAM_ERROR (0x0202): Decoder stream error
```

HTTP/3's performance benefits compound with connection reuse. Keep connections alive, use 0-RTT for resumed connections, and only close when necessary. A well-managed HTTP/3 connection can carry thousands of requests with near-zero per-request overhead.
HTTP/3 has achieved remarkable deployment velocity. Understanding the current landscape helps contextualize where the technology stands and where it's heading.
Browser Support:
All major browsers support HTTP/3, including Chrome, Edge, Firefox, and Safari.
Browsers attempt HTTP/3 when servers advertise support (via Alt-Svc or DNS HTTPS records).
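Alt-Svc advertisement is a plain response header, for example `alt-svc: h3=":443"; ma=86400`. Here is a minimal parser sketch covering only this common form (not the full RFC 7838 grammar); the function name and return shape are my own.

```python
import re

def parse_alt_svc(header: str) -> dict:
    """Parse the common  proto=":port"; ma=seconds  form of Alt-Svc.
    Returns {protocol: {"authority": ..., "max_age": seconds}}."""
    services = {}
    pattern = r'(?P<proto>[\w-]+)="(?P<authority>[^"]*)"(?:;\s*ma=(?P<ma>\d+))?'
    for m in re.finditer(pattern, header):
        services[m["proto"]] = {
            "authority": m["authority"],
            # RFC 7838 default freshness is 24 hours when ma is absent.
            "max_age": int(m["ma"]) if m["ma"] else 86400,
        }
    return services

adv = parse_alt_svc('h3=":443"; ma=86400, h2=":443"')
print(adv["h3"])   # prints: {'authority': ':443', 'max_age': 86400}
```

A browser seeing `h3` here records that the origin speaks HTTP/3 on that port and races or switches to QUIC on subsequent requests.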
Server/CDN Support:
| Provider | Status | Notes |
|---|---|---|
| Cloudflare | ✅ Production | Default for all domains since 2020 |
| Google Cloud CDN | ✅ Production | HTTP/3 load balancing available |
| AWS CloudFront | ✅ Production | Generally available since 2022 |
| Fastly | ✅ Production | HTTP/3 available for all customers |
| Akamai | ✅ Production | HTTP/3 with QUIC |
| Microsoft Azure CDN | ✅ Production | HTTP/3 support in Azure Front Door |
| nginx | ✅ Production | Native QUIC since 1.25.0 (2023) |
| Caddy | ✅ Production | Built-in HTTP/3 support |
| HAProxy | ⚠️ Experimental | QUIC support in development |
| Apache | ⚠️ Limited | Via mod_h2 with external QUIC support |
Traffic Share:
As of 2024, HTTP/3 carries roughly 25-30% of web requests, and that share continues to grow as more origins and CDNs enable it by default.
Adoption Drivers: major CDNs offer one-click or default enablement, browser support is universal, and the performance gains are largest for the mobile and high-latency users who make up most global traffic.
Adoption Barriers: some corporate and carrier networks block or throttle UDP, userspace QUIC stacks consume more server CPU than kernel TCP, and debugging and monitoring tooling for QUIC is still maturing.
HTTP/3 is no longer experimental. It's deployed at scale by the largest web properties, supported by all major browsers, and available from major CDNs with one-click enablement. New deployments should seriously consider HTTP/3 as the default, with TCP/HTTP2 as fallback.
Real-world performance comparisons show HTTP/3's advantages, particularly under adverse network conditions that expose HTTP/2's HoL blocking problem.
Both Google and Cloudflare have published measurements demonstrating these advantages, with the largest gains for users on lossy or high-latency network paths.
When HTTP/3 Matters Most:
| Condition | HTTP/3 Advantage | Reason |
|---|---|---|
| High packet loss (>1%) | Large (20-50%+) | HoL blocking elimination |
| High latency (>200ms) | Significant (10-30%) | 0-RTT, reduced handshake |
| Mobile networks | Significant (15-25%) | Connection migration, 0-RTT |
| Network transitions | Dramatic (seconds→ms) | Connection survives change |
| Low latency, no loss | Minimal (~5%) | Benefits less pronounced |
| Stable wired connection | Small | TCP performs well here |
On low-latency, low-loss networks (like datacenter-to-datacenter or fiber home connections), HTTP/3's advantages are smaller. HTTP/2 performs well when TCP's HoL blocking rarely manifests. HTTP/3's value is in making the web work better for users who aren't on ideal networks—which is most of the world.
We've explored HTTP/3 and its foundation on QUIC in depth, from stream mapping and QPACK through framing, server push, connection management, and deployment.
Module Complete: QUIC Protocol
You've now completed a comprehensive exploration of the QUIC protocol and HTTP/3, including how HTTP/3 maps requests onto QUIC streams, how QPACK compresses headers without reintroducing head-of-line blocking, and where the protocol stands in real-world deployment.
This knowledge prepares you for working with modern internet infrastructure, understanding web performance optimization, and appreciating the engineering decisions that shape how billions of people experience the web daily.
You have completed the QUIC Protocol module. You now possess deep understanding of the transport protocol that powers HTTP/3 and represents the future of reliable internet communication. QUIC's combination of encrypted-by-default transport, native multiplexing, connection migration, and 0-RTT establishes new performance baselines that will define web experience for the next decade.