Consider the postal service. When you drop a letter in a mailbox, what guarantee do you have? The postal service promises to try to deliver your letter. They'll collect it, sort it, transport it across the country, and attempt delivery to the addressed location. They do their best.
But do they guarantee delivery? No. Letters occasionally get lost. Packages get damaged. Address errors happen. The postal service provides best-effort delivery—they'll try their hardest, but they make no absolute promises.
Now consider registered mail. For an extra fee, the postal service provides tracking, requires signatures, insures against loss, and guarantees delivery or proof of non-delivery. This is guaranteed delivery—more expensive, slower, but with certainty.
UDP operates like regular mail. It does its best to deliver your datagram, but makes no promises. If you need guarantees, you'll need to arrange them yourself—or use a different service (TCP).
This best-effort model is not a compromise or a cost-cutting measure. It's a deliberate architectural choice that enables the internet to scale, remain resilient, and serve diverse applications efficiently.
By the end of this page, you will understand the best-effort delivery model in depth, how it arose from IP's fundamental design, why it remains the right choice for specific application classes, and how it contrasts with guaranteed delivery services.
Best-effort delivery is a network service model in which the network attempts to deliver data but provides no guarantees about whether packets arrive, when they arrive, in what order they arrive, or how much bandwidth is available along the way.
In the best-effort model, every packet is treated equally. There are no priorities, no reservations, no special treatment. If the network is congested, packets are dropped without discrimination. If paths are unequal, packets take whatever route is available. The network does its best given current conditions.
The key insight:
Best-effort doesn't mean 'unreliable' or 'low quality.' It means the network makes no additional commitments beyond attempting delivery. In practice, modern networks deliver packets successfully with high probability—usually 99%+ on well-provisioned networks. Best-effort describes the service contract, not the actual performance.
The alternative: Guaranteed delivery
In a guaranteed delivery model, the network commits to specific performance levels: a minimum bandwidth, a maximum latency, bounded jitter, and an agreed loss rate for each accepted flow.
Traditional telephone networks (circuit-switched) provided such guarantees. ATM (Asynchronous Transfer Mode) networks attempted to bring similar guarantees to data networking.
Why the internet chose best-effort:
The internet was designed for resilience and scalability, not guarantees. Keeping routers free of per-flow state means any router can fail or reroute traffic without breaking promises to individual flows, and the network can grow without tracking every conversation passing through it.
Best-effort isn't just UDP's philosophy—it's IP's fundamental service model. UDP preserves IP's best-effort nature at the transport layer. TCP, by contrast, builds reliability on top of IP's unreliable foundation. UDP is IP made accessible to applications.
To understand UDP's best-effort nature, we must understand the layer beneath it. IP (Internet Protocol) is itself a best-effort protocol, and UDP simply exposes this reality to applications.
What IP provides: addressing (every host has an IP address), hop-by-hop routing of packets toward their destination, and fragmentation when a packet exceeds a link's maximum size.
What IP does not provide: delivery guarantees, ordering, duplicate suppression, congestion control, or any notification to the sender when a packet is dropped.
The 'dumb network, smart endpoints' philosophy:
The internet was deliberately designed with intelligence at the edges (hosts) rather than in the core (routers). Routers are simple: receive packets, look up destinations, forward packets. They maintain no state about conversations, flows, or connections.
This design has profound consequences: routers stay simple and fast, a failure in the core affects only the packets passing through it at that moment, and new applications can be deployed by changing only the endpoints.
UDP as IP with ports:
This often-used description is remarkably accurate. UDP adds only source and destination port numbers (to route data to the correct process), a length field, and an optional checksum.
Nothing else. UDP doesn't add reliability, ordering, or any other service. It exposes IP's best-effort model directly to applications, with just enough additional structure to route data to the correct process.
UDP is 'transparent' to IP's characteristics in the same way that clear glass is transparent to light. It passes through IP's best-effort nature without modification. What you see at the application layer is essentially what IP provides, just with port numbers added.
Over the decades, there have been numerous efforts to move beyond pure best-effort delivery to provide guaranteed Quality of Service (QoS). Understanding these attempts—and their limited success—illuminates why best-effort remains dominant.
QoS approaches:
| Approach | Mechanism | Guarantees | Outcome |
|---|---|---|---|
| IntServ (RSVP) | Per-flow reservation through all routers | Bandwidth, delay, jitter | Too complex to scale; largely abandoned |
| DiffServ | Traffic classification + priority queuing | Relative priority, not absolute | Widely deployed in enterprise/ISP networks |
| MPLS | Label-switched paths with traffic engineering | Path guarantees, bandwidth allocation | Used in carrier backbones |
| ATM | Virtual circuits with QoS classes | Strong guarantees per circuit | Replaced by Ethernet for cost reasons |
| SDN/QoE | Centralized control with dynamic policies | Application-aware routing | Emerging in data centers and CDNs |
Why best-effort keeps winning:
1. Over-provisioning is cheaper
Rather than implementing complex QoS mechanisms, network operators often find it cheaper to simply add more bandwidth. A 10 Gbps link with best-effort is simpler than a 1 Gbps link with elaborate QoS.
2. Cross-domain coordination is hard
QoS works well within a single administrative domain. But the internet spans thousands of domains, each with different policies and equipment. End-to-end QoS would require unprecedented coordination.
3. Applications adapt
Modern applications (video streaming, VoIP, gaming) have developed sophisticated techniques to handle network variability: adaptive bitrate selection, jitter buffers, forward error correction, and selective retransmission of only the data that still matters.
These application-layer adaptations often work better than network-layer QoS because they understand application semantics.
4. Best-effort is 'good enough'
For the vast majority of traffic, best-effort delivers acceptable performance. Only a small fraction of traffic truly requires guarantees, and those flows can often be handled specially without redesigning the entire network.
Best-effort describes the service model, not the performance. A well-engineered best-effort network delivers high-quality service almost all the time. The 'effort' is often excellent. The key difference is that no guarantees are made—performance may degrade during congestion or failures.
Best-effort delivery isn't just about what's missing—it's about what becomes possible. The absence of guarantees enables characteristics that wouldn't be achievable otherwise.
Statistical Multiplexing
Best-effort networks use statistical multiplexing—sharing bandwidth based on actual usage rather than reserved capacity.
Consider a link shared by 100 users, each of whom might need 1 Mbps occasionally but averages 100 Kbps. With guaranteed reservations, you'd need 100 Mbps of capacity. With statistical multiplexing, 10-20 Mbps might suffice because not everyone needs full bandwidth simultaneously.
This efficiency is central to the internet's economics. We can provide shared connectivity far more cheaply than dedicated circuits.
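To make the arithmetic concrete, here is a small simulation sketch. The 100 users and roughly 10% duty cycle come from the example above; the trial count and the simple random on/off model are illustrative assumptions.

```python
# Statistical multiplexing sketch: 100 users, each needing 1 Mbps
# about 10% of the time (matching the 100 Kbps average above).
import random

USERS, TRIALS, ACTIVE_PROB, PER_USER_MBPS = 100, 10_000, 0.10, 1

demands = []
for _ in range(TRIALS):
    active = sum(random.random() < ACTIVE_PROB for _ in range(USERS))
    demands.append(active * PER_USER_MBPS)

demands.sort()
print("Capacity needed with per-user reservations:", USERS * PER_USER_MBPS, "Mbps")
print("99th-percentile actual demand:", demands[int(0.99 * TRIALS)], "Mbps")
```

Typical runs show the 99th-percentile demand landing well under 20 Mbps, which is where the "10-20 Mbps might suffice" figure comes from.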
Graceful Degradation
When demand exceeds capacity, best-effort networks degrade gracefully: every flow sees somewhat higher latency and loss, but everyone continues to receive service.
With guaranteed service, exceeding capacity means requests are rejected entirely—new connections fail, or existing connections are torn down to preserve guarantees for others.
Flexibility for Diverse Applications
Best-effort treats all packets equally, which means new applications and protocols can be deployed without asking the network for permission or negotiating special treatment.
When the internet was created, no one anticipated video streaming, VoIP, online gaming, or IoT. Yet best-effort accommodated all of them without network-level changes.
Best-effort succeeds because 'good enough' is often better than 'perfect.' A service that's 99.9% reliable and cheap is often preferable to a service that's 99.999% reliable but expensive and complex. The internet scaled to billions of users precisely because it didn't try to guarantee everything.
Let's bring the concept of best-effort back to UDP specifically. What does it mean in practice when your application uses UDP?
When you call sendto() with UDP, the operating system copies your data into a datagram, prepends the UDP and IP headers, and queues it for transmission. The call returns "successfully" as soon as that local hand-off is complete.
What "successfully" does NOT mean: that the datagram reached the destination host, that it arrived intact, or that the receiving application ever read it.
sendto() succeeding means only that the OS accepted the datagram. Everything after that is best-effort.
```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# This will "succeed" even if the destination doesn't exist.
# Best-effort: we attempted delivery, that's all we promise.
sock.sendto(b"Hello", ("192.168.1.100", 12345))
print("Send returned successfully")
# No exception means the OS accepted the data.
# It says NOTHING about whether delivery occurred.

# Even an unroutable destination will "succeed" locally.
# 203.0.113.1 is a documentation address that no real host uses.
sock.sendto(b"Hello Mars", ("203.0.113.1", 12345))
print("This also 'succeeds' at the UDP layer")

# Only local errors are reported, for example:
# - Socket not created
# - Local network interface down
# - Destination unreachable per the local routing table

sock.close()
```

What happens in the network:
After sendto() returns, your datagram enters the best-effort delivery system: the local interface transmits it, each router along the path queues and forwards it if it has capacity, and the destination host hands it to the receiving socket if that socket exists and its buffer has room.
Best-effort means every step tries but none guarantees. Under normal conditions, most datagrams succeed. Under stress, failure becomes more common.
A successful sendto() creates a dangerous illusion. The function returns success, suggesting the message was sent. But no network activity may have occurred, or the datagram may have been dropped milliseconds later. Applications must never assume that sendto() success means delivery success.
Congestion is the critical stress test for best-effort delivery. When network capacity is exceeded, how does best-effort handle the situation?
The congestion problem: when packets arrive at a router faster than its outgoing link can transmit them, the router queues what it can and drops the rest.
How routers handle congestion:
| Algorithm | Behavior | Fairness |
|---|---|---|
| Tail Drop | Drop packets when queue is full | Unfair to bursty senders; can cause synchronization |
| Random Early Detection (RED) | Start dropping probabilistically before queue fills | Better flow desynchronization; somewhat fairer |
| Weighted Fair Queuing | Separate queues per flow; round-robin service | Fair among active flows |
| Active Queue Management (AQM) | Signal congestion via drops or ECN marks | Enables end-to-end congestion control |
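As an illustration of the RED row above, here is a minimal sketch of its drop decision. The thresholds and averaging weight are illustrative assumptions; real implementations add further refinements.

```python
# Random Early Detection sketch: drop probability grows linearly as
# the smoothed queue length moves between a min and max threshold.
import random

MIN_TH, MAX_TH, MAX_DROP_PROB, WEIGHT = 20, 80, 0.10, 0.02
avg_queue = 0.0

def red_should_drop(current_queue_len):
    global avg_queue
    # Exponentially weighted moving average of the queue length
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
    if avg_queue < MIN_TH:
        return False      # queue is short: never drop
    if avg_queue >= MAX_TH:
        return True       # queue persistently long: always drop
    # In between: drop with probability proportional to queue occupancy
    drop_prob = MAX_DROP_PROB * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < drop_prob
```

Dropping a few packets early, before the queue overflows, signals senders to slow down and avoids the synchronized bursts that tail drop can cause.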
UDP and congestion control:
Here's UDP's controversial aspect: UDP has no congestion control.
TCP reduces its sending rate when it detects congestion (packet loss or delay). This is cooperative behavior—TCP senders share available capacity fairly.
UDP senders, by default, don't respond to congestion. A UDP application can send at full rate regardless of network conditions. This creates potential problems: an aggressive UDP flow can crowd out well-behaved TCP traffic, and many such flows together can push a congested link into sustained overload.
Why this isn't chaos in practice: most high-volume UDP traffic comes from protocols (QUIC, WebRTC, major streaming stacks) that implement their own congestion or rate control, and much of the remaining UDP traffic consists of small, low-rate flows such as DNS queries.
The IETF's RFC 8085 provides guidance for using UDP responsibly. It recommends that UDP applications sending more than a few datagrams per RTT should implement congestion control. Applications like QUIC and WebRTC follow these guidelines. Raw UDP applications should be aware of their network impact.
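RFC 8085 does not prescribe a particular mechanism; as one hedged example, a UDP sender could bound its own rate with a token bucket along these lines (the rate and burst values are illustrative assumptions):

```python
# Token-bucket rate limiting sketch for a UDP sender: a datagram is
# only sent when enough "byte tokens" have accumulated.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed, up to the burst capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # caller should delay or drop this datagram

bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=10_000)  # ~1 Mbps
# Usage: if bucket.allow(len(datagram)): sock.sendto(datagram, addr)
```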
Applications using UDP must design around best-effort delivery. Here are proven strategies:
Strategy 1: Accept Loss
For applications where old data is worthless (live audio, game state updates, sensor telemetry), simply accept that some datagrams will be lost and process only the newest data that arrives, as in the sketch below.
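A minimal receiver sketch of the accept-loss pattern, assuming a hypothetical wire format of a 4-byte big-endian sequence number followed by the payload:

```python
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))  # illustrative port

latest_seq = -1
while True:
    data, addr = sock.recvfrom(2048)
    seq = struct.unpack("!I", data[:4])[0]
    if seq <= latest_seq:
        continue  # stale or duplicate update: silently drop it
    latest_seq = seq
    payload = data[4:]
    # apply_update(payload)  # only the newest state matters
```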
Strategy 2: Retry Before Giving Up
For request-response applications, retry on timeout:
```python
import socket

def request_with_retries(sock, request, addr, max_retries=4, timeout=0.5):
    for attempt in range(max_retries):
        sock.sendto(request, addr)
        sock.settimeout(timeout)
        try:
            response, _ = sock.recvfrom(2048)
            return response
        except socket.timeout:
            timeout *= 2  # Exponential backoff before the next attempt
    raise TimeoutError(f"No response after {max_retries} attempts")
```
Strategy 3: Use Redundancy
Send redundant data so loss can be recovered: transmit duplicates of critical datagrams, include recent state in every update, or add forward error correction (FEC) so missing packets can be reconstructed, as sketched below.
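Here is a minimal parity-based FEC sketch, assuming fixed-size packets and at most one loss per group:

```python
# One XOR parity packet per group of data packets: any single lost
# packet in the group can be rebuilt from the parity and the survivors.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover_missing(survivors, parity):
    missing = parity
    for p in survivors:
        missing = xor_bytes(missing, p)
    return missing

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(group)
print(recover_missing([group[0], group[2]], parity))  # prints b'BBBB'
```

The trade-off is bandwidth for latency: the extra parity packet costs capacity, but the receiver never has to wait for a retransmission.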
Strategy 4: Implement Selective Reliability
Not all data needs equal reliability: critical events (a player scored, a payment was confirmed) are acknowledged and retransmitted, while frequent state updates (positions, voice frames) are sent fire-and-forget.
This selective approach delivers reliability where needed without penalizing time-sensitive data.
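A sketch of how a sender might implement this split, assuming a hypothetical framing of a reliable/unreliable flag, a 4-byte sequence number, and the payload, with ACKs echoing the sequence number:

```python
import struct
import time

class SelectiveSender:
    def __init__(self, sock, addr):
        self.sock, self.addr = sock, addr
        self.seq = 0
        self.pending = {}  # seq -> (packet, last_send_time)

    def send(self, payload, reliable=False):
        self.seq += 1
        flag = b"R" if reliable else b"U"
        packet = flag + struct.pack("!I", self.seq) + payload
        self.sock.sendto(packet, self.addr)
        if reliable:
            self.pending[self.seq] = (packet, time.monotonic())

    def on_ack(self, seq):
        self.pending.pop(seq, None)  # delivered: stop tracking it

    def retransmit_overdue(self, timeout=0.2):
        now = time.monotonic()
        for seq, (packet, sent) in list(self.pending.items()):
            if now - sent > timeout:
                self.sock.sendto(packet, self.addr)
                self.pending[seq] = (packet, now)
```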
Strategy 5: Monitor and Adapt
Best-effort performance varies. Track metrics such as loss rate, round-trip time, and jitter, and adapt: lower the send rate or media quality when loss climbs, raise them when the network recovers. A loss-tracking sketch follows below.
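One hedged way to estimate loss on the receiving side, assuming the sender numbers its datagrams consecutively:

```python
# Receiver-side loss estimation from sequence numbers.
class LossTracker:
    def __init__(self):
        self.highest_seq = None
        self.received = 0

    def on_packet(self, seq):
        self.received += 1
        if self.highest_seq is None or seq > self.highest_seq:
            self.highest_seq = seq

    def loss_rate(self, first_seq=1):
        if self.highest_seq is None:
            return 0.0
        expected = self.highest_seq - first_seq + 1
        return max(0.0, 1.0 - self.received / expected)

# The receiver reports loss_rate() back to the sender periodically;
# the sender lowers its bitrate when the reported loss rises.
```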
Strategy 6: Design for Graceful Degradation
When conditions deteriorate, fail gracefully: step down video quality, fall back to audio-only, lengthen update intervals, and tell the user what is happening instead of silently stalling. A simple quality-ladder sketch follows below.
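A quality-ladder sketch driven by measured loss; the thresholds and rungs are illustrative assumptions, not recommendations:

```python
# Degrade one rung when loss is high, recover one rung when it is low.
QUALITY_LADDER = ["1080p video", "480p video", "audio only", "text status only"]

def choose_quality(loss_rate, current_level=0):
    if loss_rate > 0.20 and current_level < len(QUALITY_LADDER) - 1:
        return current_level + 1  # conditions worsened: step down
    if loss_rate < 0.02 and current_level > 0:
        return current_level - 1  # conditions improved: step back up
    return current_level
```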
Best-effort means planning for failure. Not 'if' packets are lost, but 'when' packets are lost. Applications that assume perfect delivery will fail for real users on real networks. Applications that embrace best-effort deliver good experiences even when the network doesn't cooperate.
We've explored the best-effort delivery model comprehensively. The key insights: best-effort is IP's native service model and UDP exposes it directly to applications; the absence of guarantees is precisely what enables statistical multiplexing, graceful degradation, and flexibility for unforeseen applications; and applications cope through retries, redundancy, selective reliability, and adaptation.
The philosophical takeaway:
Best-effort is not a compromise or a shortcut—it's a philosophy. It says: provide a simple, scalable foundation and let endpoints implement the semantics they need. This philosophy built the internet, enabled unforeseen applications, and continues to power the most demanding real-time services.
What's next:
We've now covered UDP's characteristics (minimal design), connectionlessness (no sessions), unreliability (no guarantees), and best-effort (try without promising). In our final page, we'll bring it all together by examining UDP's advantages—the concrete benefits that make UDP the right choice for specific applications.
You now understand the best-effort delivery model—what it means, why it's the internet's foundation, and how applications successfully operate within it. Next, we'll explore the specific advantages UDP provides.