HTTP has undergone remarkable evolution since its 1991 debut. What began as a one-line text protocol transferring simple hypertext documents has transformed into a sophisticated binary multiplexed system powering modern internet infrastructure.
Each HTTP version addressed limitations revealed by the previous generation. HTTP/1.0 added metadata absent in HTTP/0.9. HTTP/1.1 introduced connection reuse that HTTP/1.0 lacked. HTTP/2 solved efficiency problems inherent in text-based HTTP/1.1. HTTP/3 overcame TCP limitations that HTTP/2 couldn't escape.
This evolution wasn't theoretical—it was driven by practical necessity. As web pages grew from kilobytes to megabytes, as smartphones demanded mobile-optimized performance, as user expectations rose to near-instant responsiveness, HTTP evolved to meet these demands.
Understanding HTTP versions is essential for modern web engineering. You'll encounter all major versions in production systems, make deployment decisions between them, and debug version-specific behaviors. This page provides that comprehensive understanding.
By the end of this page, you will understand the complete evolution from HTTP/0.9 to HTTP/3, the specific innovations each version introduced, the limitations each version addressed, how modern HTTP/2 and HTTP/3 achieve dramatically improved performance, and how to choose appropriate HTTP versions for your applications.
HTTP began with beautiful simplicity. The original protocol, now retrospectively called HTTP/0.9, consisted of a single request type with no headers, no metadata, and no status codes.
The Complete Protocol:
Request:
GET /path/to/document.html
Response:
<HTML>
Welcome to CERN...
</HTML>
[connection closed]
That's it. One line from client, HTML content from server, connection closed. No version number (added retrospectively), no headers, no status indicators.
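The whole protocol fits in a few lines of code. Here's a toy sketch in Python (loopback only, hypothetical content) that speaks HTTP/0.9: one request line out, raw HTML back, connection closed.

```python
import socket
import threading

def http09_server(sock):
    """Toy HTTP/0.9 server: read one line, send HTML, close."""
    conn, _ = sock.accept()
    request = conn.recv(1024).decode()       # e.g. "GET /index.html\r\n"
    method, path = request.split()           # no version, no headers, no status
    conn.sendall(b"<HTML>Welcome to CERN...</HTML>")
    conn.close()                             # one request per connection

# Start the server on an ephemeral loopback port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=http09_server, args=(server,), daemon=True).start()

# HTTP/0.9 client: one line out, raw HTML back, no status line.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /index.html\r\n")
response = client.recv(4096).decode()
client.close()
print(response)  # <HTML>Welcome to CERN...</HTML>
```

Implementing both sides in twenty lines is exactly why adoption was so fast.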
Design Rationale:
Tim Berners-Lee designed HTTP/0.9 for a specific context: sharing simple hypertext documents among researchers on trusted academic networks, with no need for images, forms, or authentication.
The simplicity was intentional and crucial. Anyone could implement an HTTP client or server in an afternoon. This accessibility drove rapid adoption across diverse systems.
| Feature | HTTP/0.9 Support | Notes |
|---|---|---|
| Request Methods | GET only | No POST, PUT, DELETE, etc. |
| Headers | None | No metadata at all |
| Status Codes | None | No 200, 404, 500, etc. |
| Version Indicator | None | Added retrospectively |
| Content Types | HTML only | No images, CSS, etc. |
| Persistent Connections | No | One request per connection |
| Caching | None | No cache control |
| Authentication | None | No built-in auth |
Limitations:
HTTP/0.9's simplicity quickly became a constraint:
No Error Handling: If a document didn't exist, the server might return an error page as HTML. The client couldn't distinguish success from failure.
No Content Types: Servers could only return HTML. No images, no stylesheets, no other formats.
No Metadata: No way to specify encoding, language, caching, or any other information about the response.
No Methods: No way to submit data (forms), update resources, or delete content.
Inefficient Connections: Each request required a new TCP connection—expensive for pages with multiple resources.
As the web grew beyond academic documents to commercial sites with images, forms, and dynamic content, these limitations became untenable. HTTP/1.0 followed within five years.
HTTP/0.9's extreme simplicity enabled the web's initial adoption. Complexity would have slowed implementation and fragmented the ecosystem. The lesson: start simple, prove value, then add complexity. Many protocols fail by starting complex.
By the mid-1990s, the web had exploded beyond hypertext. Commercial sites, online shopping, search engines—all demanded capabilities HTTP/0.9 lacked. HTTP/1.0 (RFC 1945) added the essential infrastructure.
Key Innovations:
1. Headers
Both requests and responses gained headers—key-value pairs carrying metadata:
GET /index.html HTTP/1.0
User-Agent: Mozilla/1.0
Accept: text/html, image/gif

HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 1234
Server: CERN/3.0

<HTML>...</HTML>
Headers enabled content negotiation (the Accept headers), payload metadata (Content-Type, Content-Length), client and server identification (User-Agent, Server), and room to extend the protocol without changing its core.
2. Status Codes
Servers returned numeric codes indicating request outcome:
- 200 OK — Success
- 301 Moved Permanently — Resource relocated
- 404 Not Found — Resource doesn't exist
- 500 Internal Server Error — Server failed

Clients could now programmatically handle different outcomes.
# Request
GET /products/shoes.html HTTP/1.0
Host: shop.example.com
User-Agent: Mozilla/2.0
Accept: text/html, image/gif, image/jpeg
Accept-Language: en-US
Accept-Encoding: gzip

# Response
HTTP/1.0 200 OK
Date: Sun, 17 Jan 2025 10:30:00 GMT
Server: Apache/1.3
Content-Type: text/html; charset=UTF-8
Content-Length: 15234
Expires: Sun, 17 Jan 2025 11:30:00 GMT
Last-Modified: Sat, 16 Jan 2025 08:00:00 GMT

<!DOCTYPE HTML>
<html>
<head><title>Shoes</title></head>
<body>...page content...</body>
</html>

3. Additional Methods
HTTP/1.0 introduced methods beyond GET:
4. Content Types
The Content-Type header enabled serving diverse content:
- text/html — HTML documents
- text/plain — Plain text
- image/gif, image/jpeg — Images
- application/octet-stream — Binary files

5. Basic Authentication
GET /secure HTTP/1.0
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
Critical Limitation: One Request Per Connection
HTTP/1.0's fatal flaw: each request required a new TCP connection. A page with 50 resources meant 50 separate TCP handshakes, 50 slow-start ramp-ups, and 50 connection teardowns.
This was extremely inefficient, especially as pages grew to include dozens of images, stylesheets, and scripts.
With a 100ms round-trip time, each connection costs a full round trip for the TCP handshake before any data flows; add a TLS handshake and connection setup alone can reach 300ms. For a page with 50 resources fetched this way, connection overhead alone can exceed 15 seconds. This fundamental inefficiency drove the development of HTTP/1.1's persistent connections.
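The arithmetic behind that claim can be sketched directly. The round-trip time and handshake counts below are illustrative assumptions, not measurements:

```python
# Back-of-envelope cost of HTTP/1.0's one-connection-per-request model,
# assuming a 100ms RTT and a classic TLS 1.2 handshake (illustrative numbers).
RTT = 0.100                      # seconds, assumed round-trip time

tcp_handshake = 1 * RTT          # one round trip before the request can be sent
tls_handshake = 2 * RTT          # TLS 1.2 full handshake (TLS 1.3 needs only 1 RTT)
setup_per_connection = tcp_handshake + tls_handshake

resources = 50                   # a media-heavy page of the era
overhead = resources * setup_per_connection

print(f"{setup_per_connection * 1000:.0f} ms setup per resource")   # 300 ms
print(f"{overhead:.1f} s of pure connection overhead")              # 15.0 s
```

None of this time transfers a single byte of page content, which is why persistent connections were transformative.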
HTTP/1.1 (RFC 2068, later RFC 2616, then RFC 7230-7235) addressed HTTP/1.0's efficiency issues and added crucial features. It remained the dominant protocol for nearly two decades—a remarkable lifespan in technology.
Key Innovation 1: Persistent Connections (Keep-Alive)
The biggest change: connections remain open by default for multiple requests.
# HTTP/1.0 behavior (simplified)
Connect → Request 1 → Response 1 → Disconnect
Connect → Request 2 → Response 2 → Disconnect
Connect → Request 3 → Response 3 → Disconnect
# HTTP/1.1 behavior
Connect → Request 1 → Response 1 → Request 2 → Response 2 → Request 3 → Response 3 → Disconnect
One TCP handshake serves multiple requests. For pages with dozens of resources, this was transformative.
Key Innovation 2: Host Header Requirement
HTTP/1.1 requires the Host header:
GET /page.html HTTP/1.1
Host: www.example.com
This enabled virtual hosting—multiple websites on a single IP address. Before this, each website needed its own IP. With IPv4 addresses limited, virtual hosting was essential for the web's growth.
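The routing a virtual-hosting server performs can be sketched in a few lines. The hostnames and document roots below are hypothetical:

```python
# Hypothetical document roots for two sites sharing one IP address.
VHOSTS = {
    "www.example.com": "/var/www/example",
    "shop.example.org": "/var/www/shop",
}

def resolve_root(request_text):
    """Pick a site's document root from the Host header (required in HTTP/1.1)."""
    for line in request_text.split("\r\n")[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "host":
            host = value.strip().split(":")[0]   # drop an optional :port suffix
            return VHOSTS.get(host, "/var/www/default")
    return "/var/www/default"   # HTTP/1.0 requests may omit Host entirely

req = "GET /page.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n"
print(resolve_root(req))  # /var/www/example
```

Without the Host header, the server has only the destination IP to go on, which is why HTTP/1.0 forced one site per address.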
Key Innovation 3: Chunked Transfer Encoding
Servers can send responses in chunks without knowing total size upfront:
HTTP/1.1 200 OK
Transfer-Encoding: chunked
25
<37 bytes of content>
1a
<26 bytes of content>
0
This enables streaming responses and dynamically generated content.
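A minimal encoder/decoder pair makes the framing concrete. This sketch handles only the happy path (no trailers, no chunk extensions):

```python
def encode_chunked(chunks):
    """Encode byte chunks in HTTP/1.1 chunked transfer encoding."""
    out = b""
    for chunk in chunks:
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    return out + b"0\r\n\r\n"   # zero-length chunk terminates the body

def decode_chunked(data):
    """Decode a chunked body back into the original payload."""
    body, pos = b"", 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)           # chunk size is hexadecimal
        if size == 0:
            return body
        start = eol + 2
        body += data[start:start + size]
        pos = start + size + 2                  # skip chunk data and trailing CRLF

wire = encode_chunked([b"Hello, ", b"streaming ", b"world!"])
assert decode_chunked(wire) == b"Hello, streaming world!"
```

Because each chunk declares its own size, the server can start sending before it knows the total length.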
| Feature | HTTP/1.0 | HTTP/1.1 |
|---|---|---|
| Connection Reuse | New connection per request | Persistent by default |
| Virtual Hosting | One site per IP | Host header enables multi-site |
| Streaming Response | Requires Content-Length | Chunked encoding |
| Cache Control | Expires header only | Cache-Control with fine directives |
| Range Requests | Not supported | Resume downloads, byte ranges |
| Conditional Requests | Limited (If-Modified-Since) | ETag, If-None-Match added |
| Methods | GET, POST, HEAD | Added PUT, DELETE, OPTIONS, TRACE |
Key Innovation 4: Advanced Caching
The Cache-Control header replaced the simple Expires:
Cache-Control: public, max-age=3600, must-revalidate
Directives include:
- max-age: Seconds until stale
- no-cache: Must revalidate before use
- no-store: Don't cache at all
- private: Don't cache in shared caches
- public: Can cache in shared caches

Key Innovation 5: Conditional Requests with ETags
# First request
GET /resource HTTP/1.1
# Response includes ETag
HTTP/1.1 200 OK
ETag: "abc123"
# Later request
GET /resource HTTP/1.1
If-None-Match: "abc123"
# If unchanged:
HTTP/1.1 304 Not Modified
ETags enable precise cache validation without transferring content.
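The revalidation logic is easy to simulate. Deriving the ETag from a content hash is one common implementation choice, not something the spec mandates:

```python
import hashlib

def make_etag(content):
    # Many servers derive ETags from a content hash (an implementation choice).
    return '"' + hashlib.sha256(content).hexdigest()[:12] + '"'

def respond(content, if_none_match=None):
    """Return (status, body) honoring a conditional If-None-Match request."""
    etag = make_etag(content)
    if if_none_match == etag:
        return 304, b""          # Not Modified: client reuses its cached copy
    return 200, content

page = b"<html>...</html>"
status, body = respond(page)                     # first fetch: full response
assert status == 200
status, body = respond(page, make_etag(page))    # revalidation: empty 304
assert (status, body) == (304, b"")
```

The 304 response carries no body, so an unchanged resource costs only a round trip of headers.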
Key Innovation 6: Range Requests
GET /large-video.mp4 HTTP/1.1
Range: bytes=1000000-1999999
HTTP/1.1 206 Partial Content
Content-Range: bytes 1000000-1999999/50000000
Enables resumable downloads and video seeking.
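Server-side handling of a single-range request can be sketched as follows (no multi-range or suffix-range support):

```python
def serve_range(resource, range_header):
    """Handle a 'bytes=start-end' Range header (single range only)."""
    unit, _, spec = range_header.partition("=")
    assert unit == "bytes"
    start_s, _, end_s = spec.partition("-")
    start = int(start_s)
    end = int(end_s) if end_s else len(resource) - 1   # open-ended: to the end
    body = resource[start:end + 1]
    content_range = f"bytes {start}-{end}/{len(resource)}"
    return 206, content_range, body

video = bytes(range(256)) * 10          # 2560 stand-in bytes for a media file
status, header, body = serve_range(video, "bytes=1000-1999")
assert status == 206 and len(body) == 1000
assert header == "bytes 1000-1999/2560"
```

Video players issue exactly this kind of request when the user seeks, fetching only the bytes around the new playback position.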
Persistent Limitation: Head-of-Line Blocking
Despite improvements, HTTP/1.1 has a fundamental issue: requests are processed sequentially. If request 1 is slow, requests 2, 3, 4 wait—even if they're ready.
To mitigate head-of-line blocking, browsers open multiple connections (typically 6) per domain. Sites used 'domain sharding'—distributing resources across cdn1.example.com, cdn2.example.com, etc. These hacks worked but added complexity. HTTP/2 properly solved the problem.
HTTP/2 (RFC 7540) represented the most significant change since HTTP/0.9. Based on Google's SPDY protocol, it completely reimagined HTTP's wire format while preserving its semantics.
Fundamental Change: Binary Framing
HTTP/1.1 is text-based—human-readable but inefficient to parse. HTTP/2 is binary:
HTTP/1.1: GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n
HTTP/2: [Binary frame: length, type, flags, stream-id, payload]
Binary framing is more efficient to parse, less error-prone (no ambiguous whitespace or line endings), and more compact on the wire.
Key Innovation 1: Multiplexing
Multiple requests share a single connection, interleaved as frames:
Connection carries:
Stream 1: Request for /index.html → Response frames
Stream 3: Request for /style.css → Response frames
Stream 5: Request for /script.js → Response frames
Frames interleave freely:
[Stream 1 DATA][Stream 3 DATA][Stream 5 DATA][Stream 1 DATA]...
No head-of-line blocking at HTTP level. A slow response on stream 1 doesn't block streams 3 and 5.
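The interleave-and-reassemble idea can be simulated with a toy frame format. Note that real HTTP/2 frames use a 9-byte header carrying length, type, flags, and stream id; this sketch keeps only the parts needed to show multiplexing:

```python
import struct

def frame(stream_id, payload):
    # Simplified frame: 4-byte stream id + 2-byte length + payload.
    # (A real HTTP/2 frame header also carries a type and flags.)
    return struct.pack("!IH", stream_id, len(payload)) + payload

def demux(wire):
    """Reassemble per-stream payloads from an interleaved frame sequence."""
    streams, pos = {}, 0
    while pos < len(wire):
        stream_id, length = struct.unpack_from("!IH", wire, pos)
        pos += 6
        streams[stream_id] = streams.get(stream_id, b"") + wire[pos:pos + length]
        pos += length
    return streams

# Frames from three responses interleave freely on one connection.
wire = (frame(1, b"<html>") + frame(3, b"body{") + frame(5, b"console.")
        + frame(1, b"</html>") + frame(3, b"}") + frame(5, b"log(1)"))
streams = demux(wire)
assert streams[1] == b"<html></html>"
assert streams[3] == b"body{}"
assert streams[5] == b"console.log(1)"
```

Each stream reassembles correctly no matter how its frames were ordered on the wire, which is what lets a slow response avoid blocking the others.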
Key Innovation 2: Header Compression (HPACK)
HTTP/1.1 headers are verbose and repetitive. The same headers (Host, User-Agent, Accept) appear in every request. HTTP/2's HPACK compresses headers:
- A static table of common header fields known to both endpoints
- A dynamic table of fields seen earlier on the same connection
- Huffman encoding for literal strings
- Pseudo-headers replacing the request line (:method GET, :path /index.html)

Compression ratios of 80-90% are common, especially for repeated requests.
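The core indexing idea can be sketched as follows. This is not real HPACK, which adds a predefined static table, table-size eviction, and Huffman coding; it only shows how repeated headers collapse to small integers:

```python
class HeaderTable:
    """Simplified HPACK-style indexing: encoder and decoder each maintain an
    identical table, so a repeated header field becomes one small integer."""
    def __init__(self):
        self.table = []                          # (name, value) pairs

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                out.append(self.table.index(pair))   # index instead of full text
            else:
                self.table.append(pair)
                out.append(pair)                     # literal; indexed next time
        return out

    def decode(self, encoded):
        headers = []
        for item in encoded:
            if isinstance(item, int):
                headers.append(self.table[item])
            else:
                self.table.append(item)
                headers.append(item)
        return headers

enc, dec = HeaderTable(), HeaderTable()
req = [(":method", "GET"), (":path", "/index.html"), ("user-agent", "demo/1.0")]
first = enc.encode(req)          # all literals: the table starts empty
second = enc.encode(req)         # all indexes: [0, 1, 2]
assert second == [0, 1, 2]
assert dec.decode(first) == req and dec.decode(second) == req
```

After the first request, an identical header list costs three small integers instead of repeating every name and value, which is where the large compression ratios for repeated requests come from.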
Key Innovation 3: Server Push
Servers can proactively send resources before clients request them:
1. Client requests /index.html
2. Server responds with /index.html
3. Server also pushes /style.css and /script.js (knowing they'll be needed)
4. When the client later needs /style.css, it's already cached

# Server sends PUSH_PROMISE frame
PUSH_PROMISE: I'm going to send /style.css
# Then sends the resource
[DATA frames for /style.css]
Key Innovation 4: Stream Prioritization
Clients indicate resource priority, for example asking that the HTML document and render-blocking CSS be delivered before images. Servers use these priorities to optimize delivery order.
Key Innovation 5: Flow Control
Per-stream and connection-level flow control prevents fast senders from overwhelming slow receivers. Each endpoint advertises receive capacity; senders respect limits.
HTTP/2 Semantics Unchanged:
Despite revolutionary wire format changes, the methods, status codes, header fields, and URL semantics of HTTP/1.1 all carry over unchanged.
Application code works unchanged. The improvement is transparent.
HTTP/2 solves HTTP-level head-of-line blocking but runs over TCP, which has its own ordering requirement. A single lost TCP packet blocks all HTTP/2 streams until retransmission completes—TCP-level head-of-line blocking. This limitation drove HTTP/3's development.
HTTP/3 (RFC 9114) represents a fundamental architectural shift: abandoning TCP for QUIC (Quick UDP Internet Connections). This addresses limitations intrinsic to TCP that HTTP/2 couldn't overcome.
The TCP Problem:
TCP provides reliable, ordered delivery. If packet 5 of 10 is lost, packets 6 through 10 may arrive but cannot be delivered to the application; TCP buffers them until packet 5 is retransmitted and received.
For HTTP/2 multiplexing multiple streams over one TCP connection, a lost packet in Stream 1 blocks Streams 2, 3, 4—even though their packets arrived fine. This is TCP head-of-line blocking.
The QUIC Solution:
QUIC (RFC 9000) runs over UDP and implements reliability at the application layer with stream-level independence.
Key Innovation 1: Stream Independence
QUIC Connection
├── Stream 1: /index.html [packet lost → retransmit → blocks only stream 1]
├── Stream 2: /style.css [continues receiving and delivering]
└── Stream 3: /script.js [continues receiving and delivering]
Each stream is independently reliable. Application data from ready streams flows immediately.
| Aspect | HTTP/2 over TCP | HTTP/3 over QUIC |
|---|---|---|
| Transport Protocol | TCP | QUIC over UDP |
| Head-of-Line Blocking | TCP-level blocking persists | Eliminated (stream independent) |
| Connection Setup | TCP handshake + TLS handshake | Combined (0-RTT possible) |
| Connection Migration | IP change breaks connection | Connection survives IP changes |
| Encryption | Optional TLS layer | Built-in mandatory encryption |
| Congestion Control | OS kernel TCP stack | Application-layer, customizable |
| Header Compression | HPACK | QPACK (adapted for QUIC) |
Key Innovation 2: Connection Migration
TCP connections are identified by the 4-tuple: (source IP, source port, dest IP, dest port). If your phone switches from WiFi to cellular, the source IP changes—TCP connection breaks, new handshake required.
QUIC connections use a Connection ID independent of network addresses. When IP changes, the Connection ID persists—no reconnection, no data loss.
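A toy model shows why the lookup key matters; the addresses and Connection ID below are made up:

```python
# Sketch: why a QUIC connection survives an IP change and a TCP connection
# does not. All addresses and IDs here are hypothetical.

# TCP identifies a connection by its 4-tuple; a new source IP means no match.
tcp_conns = {("203.0.113.5", 51000, "198.51.100.7", 443): "session-state"}

def tcp_lookup(src_ip, src_port, dst_ip, dst_port):
    return tcp_conns.get((src_ip, src_port, dst_ip, dst_port))

# QUIC identifies a connection by a Connection ID carried in every packet.
quic_conns = {b"conn-id-42": "session-state"}

def quic_lookup(connection_id, src_ip):
    # The source IP plays no part in the lookup.
    return quic_conns.get(connection_id)

# Phone moves from WiFi (203.0.113.5) to cellular (192.0.2.99):
assert tcp_lookup("192.0.2.99", 51000, "198.51.100.7", 443) is None    # broken
assert quic_lookup(b"conn-id-42", "192.0.2.99") == "session-state"     # survives
```

The server still finds the session state for the migrated packet, so the transfer continues without a new handshake.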
This is crucial for mobile scenarios:
Key Innovation 3: 0-RTT Connection Resumption
For returning connections, QUIC can send application data immediately without waiting for handshake completion:
First connection: Full handshake (1 RTT)
Reconnection: Application data in first packet (0-RTT)
The client caches cryptographic parameters from previous connections. On reconnection, it encrypts data using cached keys—server responds immediately.
Security Note: 0-RTT data is not replay-protected. Servers must handle carefully (idempotent requests only).
Key Innovation 4: Built-in Encryption
QUIC integrates TLS 1.3 directly. All QUIC traffic is encrypted—there's no unencrypted QUIC. This provides confidentiality for both payload and most transport metadata, resistance to middlebox interference, and protection against protocol ossification.
Key Innovation 5: Improved Congestion Control
Unlike TCP's kernel-space congestion control, QUIC operates in user space. This enables faster deployment of new algorithms, per-application tuning, and iteration without operating-system updates.
QPACK: Header Compression for QUIC
HTTP/3 uses QPACK instead of HPACK. HPACK assumed ordered delivery (TCP). QUIC's out-of-order delivery required QPACK, which uses separate streams for header compression state.
Major platforms (Google, Facebook, Cloudflare, Akamai) already serve significant traffic over HTTP/3. Browser support is universal among modern browsers. For sites using CDNs, enabling HTTP/3 is often a configuration change—the performance benefits come automatically.
Understanding when each HTTP version excels helps make deployment decisions.
Scenario 1: High-Latency Networks (>100ms RTT)
Mobile networks, satellite connections, intercontinental links:
HTTP/1.1: Severely impacted. Each request waits for response; latency multiplied by request count. Domain sharding helps but adds DNS lookups.
HTTP/2: Major improvement. Single connection, multiplexed requests reduce round-trip impact. Header compression reduces bytes.
HTTP/3: Best performance. 0-RTT eliminates initial handshake latency. Connection migration handles network changes gracefully.
Scenario 2: Lossy Networks (>1% packet loss)
WiFi congestion, developing markets, network transitions:
HTTP/1.1: Each lost packet affects only its connection. With 6 connections, loss in one doesn't impact others. (Accidental advantage of multiple connections)
HTTP/2: Worse than HTTP/1.1! All streams share one TCP connection; any loss blocks all streams until retransmission.
HTTP/3: Best performance. Stream independence means loss in one stream doesn't block others. QUIC's improved loss recovery further helps.
Scenario 3: Many Small Requests (APIs)
HTTP/1.1: Overhead per request significant relative to payload. Keep-alive helps but requests still sequential per connection.
HTTP/2: Excellent. Header compression eliminates repetitive headers. Multiplexing sends all requests immediately.
HTTP/3: Excellent, with additional connection migration benefit for mobile API consumers.
| Scenario | HTTP/1.1 | HTTP/2 | HTTP/3 |
|---|---|---|---|
| Low latency, no loss | Good | Excellent | Excellent |
| High latency, no loss | Poor | Good | Excellent (0-RTT) |
| Any latency, high loss | Acceptable | Poor (HOL blocking) | Excellent |
| Many small requests | Poor (overhead) | Excellent (compression) | Excellent |
| Large downloads | Good | Good | Good |
| Mobile/WiFi transitions | Poor (reconnect) | Poor (reconnect) | Excellent (migration) |
| Browser compatibility | Universal | Universal modern | Modern browsers |
Real-World Performance Data:
Studies comparing HTTP versions show:
HTTP/2 vs HTTP/1.1: 10-30% page load improvement typical. Higher improvement for pages with many resources.
HTTP/3 vs HTTP/2: 5-15% improvement typical. Higher improvement on lossy/unstable networks (mobile).
0-RTT benefit: 100ms+ saved on subsequent connections in high-latency scenarios.
Connection migration: Prevents complete connection resets during network changes (seconds saved).
The Practical Reality:
Most sites should support HTTP/2 and HTTP/3:
HTTP/2: Universally supported by modern browsers and servers. Significant improvement over HTTP/1.1 with minimal complexity.
HTTP/3: Supported by all modern browsers. CDNs often handle transparently. Benefits mobile users significantly.
Fallback: Browsers automatically negotiate the best supported protocol. HTTP/3 → HTTP/2 → HTTP/1.1 fallback happens seamlessly.
Enable HTTP/3 and HTTP/2 via your CDN or load balancer—browsers fallback automatically. Focus optimization efforts on what HTTP can't fix: server processing time, image optimization, JavaScript performance. Protocol improvements are valuable but rarely the biggest opportunity.
How do clients and servers agree on which HTTP version to use? Each version uses different mechanisms.
HTTP/1.0 → HTTP/1.1: Version in Request Line
The client indicates version in the request:
GET / HTTP/1.1
Host: example.com
Server responds with its version. If client sends HTTP/1.1 and server only supports HTTP/1.0, server responds with HTTP/1.0.
HTTP/2 Discovery:
Option 1: ALPN (Application-Layer Protocol Negotiation)
During TLS handshake, client advertises supported protocols:
ClientHello: ALPN = [h2, http/1.1]
ServerHello: ALPN = h2
h2 = HTTP/2. If server doesn't support HTTP/2, it selects http/1.1.
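In Python, a client opts into ALPN by configuring its TLS context; the commented-out lines sketch how the negotiated protocol would be read after a real connection (example.com stands in for any HTTPS host):

```python
import ssl

# Client-side ALPN: advertise h2 and http/1.1 during the TLS handshake.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After connecting, the negotiated protocol is exposed on the TLS socket:
#
#   import socket
#   with socket.create_connection(("example.com", 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#           tls.selected_alpn_protocol()   # "h2" if the server chose HTTP/2
```

If the server supports neither advertised protocol, `selected_alpn_protocol()` returns None and the client falls back to plain HTTP/1.1 semantics.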
Option 2: HTTP Upgrade (non-TLS)
Rarely used, but defined:
GET / HTTP/1.1
Host: example.com
Upgrade: h2c
Connection: Upgrade, HTTP2-Settings
HTTP/1.1 101 Switching Protocols
Upgrade: h2c
[binary HTTP/2 frames follow]
In practice, HTTP/2 is almost always used with HTTPS (ALPN).
HTTP/3 Discovery:
HTTP/3 uses QUIC, which uses UDP port 443. Browsers can't know upfront that a server supports QUIC.
Alt-Svc Header:
Servers advertise HTTP/3 support via the Alt-Svc response header:
HTTP/2 200 OK
Alt-Svc: h3=":443"; ma=86400
This means: 'I also support HTTP/3 (h3) on the same hostname, port 443, and this info is valid for 86400 seconds.'
Browsers cache Alt-Svc and attempt QUIC connections for subsequent requests. If QUIC fails (blocked by firewall), they fallback to TCP (HTTP/2).
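Parsing a simple Alt-Svc value can be sketched as follows (single alternative only; the real grammar allows comma-separated lists and additional parameters):

```python
def parse_alt_svc(value):
    """Parse an Alt-Svc header like 'h3=":443"; ma=86400' (simplified)."""
    proto_part, *params = [p.strip() for p in value.split(";")]
    protocol, _, authority = proto_part.partition("=")
    info = {"protocol": protocol, "authority": authority.strip('"')}
    for param in params:
        k, _, v = param.partition("=")
        info[k.strip()] = v.strip()          # e.g. ma = freshness lifetime (s)
    return info

alt = parse_alt_svc('h3=":443"; ma=86400')
assert alt["protocol"] == "h3"
assert alt["authority"] == ":443"            # same host, port 443
assert alt["ma"] == "86400"                  # cache this advert for one day
```

A browser would store this result keyed by origin and attempt a QUIC connection on the next request while the entry is still fresh.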
# First request (over HTTP/2)
GET / HTTP/2
Host: example.com

# Response indicates HTTP/3 availability
HTTP/2 200 OK
Alt-Svc: h3=":443"; ma=86400
Content-Type: text/html
...

# Browser caches Alt-Svc information
# Next request/page load attempts QUIC (UDP) connection

# If QUIC connection succeeds:
GET / HTTP/3
Host: example.com
[over QUIC/UDP]

# If QUIC blocked (firewall), fallback to HTTP/2:
GET / HTTP/2
[over TCP]

DNS SVCB/HTTPS Records:
Newer mechanism: DNS records advertise protocol support:
example.com. 300 IN HTTPS 1 . alpn="h3,h2" ipv4hint=93.184.216.34
This tells clients before any connection that the server supports h3 (HTTP/3) and h2 (HTTP/2). Browsers can attempt QUIC immediately without waiting for Alt-Svc.
Version Fallback:
Browsers implement graceful fallback: attempt HTTP/3 when it's advertised, fall back to HTTP/2 over TLS if QUIC fails, and finally to HTTP/1.1 if needed.
This happens automatically—users see no interruption. Network operators blocking UDP won't break websites; browsers simply use TCP.
Server Configuration:
Modern servers/CDNs typically require a valid TLS certificate, ALPN support for HTTP/2 negotiation, and, for HTTP/3, UDP port 443 reachable plus an Alt-Svc header or DNS HTTPS record.
CDNs like Cloudflare, Fastly, and Akamai handle this automatically.
Some enterprise networks block UDP 443 (QUIC) or non-standard protocols. HTTP/3 will fail, and clients fallback to HTTP/2. This is by design—HTTP/3 is an optimization, not a requirement. Ensure your service works over HTTP/2 as the baseline.
With multiple HTTP versions available, how should you configure your applications?
General Recommendation: Support All Modern Versions
Configure your server/CDN to support HTTP/3 where available, HTTP/2 as the workhorse protocol, and HTTP/1.1 as the universal fallback.
Clients negotiate the best version automatically. You get optimal performance without breaking compatibility.
When to Prioritize HTTP/3: audiences on mobile or otherwise high-latency, lossy networks; globally distributed users; clients that frequently transition between networks (WiFi to cellular).

When HTTP/2 Is Sufficient: stable, low-loss networks; desktop-heavy audiences; environments where UDP 443 may be blocked.

When HTTP/1.1 Still Matters: legacy clients and embedded devices; simple internal services; the guaranteed-compatible baseline every deployment should keep.
Backend/API Considerations:
For backend services and APIs, HTTP/1.1 or HTTP/2 over a fast, reliable internal network is usually sufficient; HTTP/3's loss recovery and connection migration matter most on the client-facing edge.
Load Balancer Placement:
[Client] — HTTP/3 → [CDN/Edge] — HTTP/2 → [Load Balancer] — HTTP/1.1 → [Backend]
Terminate HTTP/3 at edge (CDN), HTTP/2 at load balancer. Backend connections are often HTTP/1.1 or HTTP/2—simpler, sufficient for internal communication.
Cost-Benefit Analysis:
HTTP/3 implementation effort: typically low when a CDN or modern load balancer terminates it for you; higher if you operate your own edge (QUIC-capable server software, UDP 443 open through firewalls).

HTTP/3 benefit: modest on fast, stable networks; substantial for mobile and high-loss users, who see the largest real-world gains.
Use a CDN that handles HTTP/2 and HTTP/3 automatically. Focus engineering effort on what CDNs can't optimize: server processing, database queries, application architecture. Protocol performance is largely solved—application performance remains the challenge.
We've traced HTTP's evolution across three decades—from a one-line text protocol to a sophisticated binary multiplexed system. The key insights:

- HTTP/0.9: a single GET line returning HTML; radical simplicity drove adoption.
- HTTP/1.0: headers, status codes, methods, and content types; but one connection per request.
- HTTP/1.1: persistent connections, the Host header, chunked encoding, and rich caching; still limited by HTTP-level head-of-line blocking.
- HTTP/2: binary framing, multiplexing, and HPACK compression; TCP-level head-of-line blocking remains.
- HTTP/3: QUIC over UDP with independent streams, 0-RTT resumption, connection migration, and built-in TLS 1.3.
What's Next:
With HTTP's version evolution understood, the final page of this module explores the broader Web Architecture—how browsers, servers, proxies, CDNs, and DNS work together to deliver web content. You'll understand the complete system that HTTP operates within, enabling you to design, debug, and optimize web applications holistically.
You now understand HTTP's complete version evolution: the problems each version solved, the innovations it introduced, and the limitations that drove the next generation. This knowledge enables informed decisions about protocol configuration and debugging version-specific behaviors in production systems.