Networks are not merely conduits for moving bits—they are shared resource systems providing bandwidth, connectivity, storage, and computational capabilities to many simultaneous users. Understanding network resources—their nature, how they're shared, and how conflicts are resolved—is fundamental to network design, operation, and application development.
When you stream a video while your neighbor downloads files while a business runs video conferences, the same network infrastructure serves all. How does this work? How are resources allocated? What happens when demand exceeds supply?
By the end of this page, you will:
• Identify the key resources that networks provide
• Understand bandwidth, capacity, and throughput distinctions
• Explain resource sharing mechanisms and their trade-offs
• Analyze Quality of Service (QoS) concepts and implementations
• Apply resource concepts to network capacity planning and troubleshooting
Networks provide and manage several distinct types of resources. Each resource type has unique characteristics, constraints, and management challenges.
| Resource | Description | Measurement | Scarcity Impact |
|---|---|---|---|
| Bandwidth | Data transmission capacity of links | bps (bits per second) | Congestion, delays, dropped packets |
| Buffer Memory | Temporary storage in network devices | Bytes, packets | Packet loss when buffers overflow |
| Processing Power | CPU cycles in devices for packet processing | Packets per second, CPU % | Forwarding delays, device overload |
| Addresses | IP addresses, port numbers, MAC addresses | Count of available | Address exhaustion (IPv4), NAT required |
| Connectivity | Physical/logical paths between locations | Paths, uptime | Unreachability if paths fail |
| Time/Slots | Time-division access in shared media | Microseconds, slots | Collision, access delays (WiFi) |
Bandwidth: The Primary Resource
Bandwidth is typically the most visible and constrained network resource. It's important to distinguish related but different concepts:
Bandwidth (Capacity): The maximum data rate a link can carry, determined by the physical medium and link technology. This is the advertised figure ("100 Mbps").

Throughput (Actual): The data rate actually achieved across the link, reduced by protocol overhead, congestion, and competing traffic.

Goodput (Application): The rate of useful application data delivered, excluding headers, retransmissions, and acknowledgments.
The distinction matters: A user expects to download at link speed but actually receives goodput—potentially much less than the advertised bandwidth.
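To see how protocol overhead alone separates goodput from advertised bandwidth, here is a minimal back-of-the-envelope sketch using standard Ethernet and TCP/IPv4 header sizes (the 100 Mbps link speed is just an illustrative figure):

```python
# Back-of-the-envelope goodput for full-size TCP segments over Ethernet.
# Standard overheads: Ethernet preamble 8 + header 14 + FCS 4 + inter-frame gap 12
# = 38 bytes per frame; IPv4 header 20 + TCP header 20 = 40 bytes per packet.
link_bps = 100_000_000           # advertised bandwidth: 100 Mbps (illustrative)
mtu = 1500                       # IP packet size carried in each frame
frame_overhead = 38
ip_tcp_overhead = 40

payload = mtu - ip_tcp_overhead              # 1460 bytes of application data
wire_bytes = mtu + frame_overhead            # 1538 bytes on the wire
efficiency = payload / wire_bytes            # ~0.949

print(f"Best-case goodput: {link_bps * efficiency / 1e6:.1f} Mbps "
      f"({efficiency:.1%} of advertised bandwidth)")
# ACK traffic, retransmissions, and handshakes reduce the real figure further.
```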
The Bandwidth-Delay Product (BDP) is the amount of data 'in flight' on a link:
BDP = Bandwidth × Round-Trip Time
Example: 100 Mbps link with 50ms RTT: BDP = 100 Mbps × 0.050s = 5 Mbits = 625 KB
TCP windows must be at least this large to fully utilize the link. This is why high-bandwidth, high-latency links (transcontinental fiber) require special TCP tuning.
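As a quick check of that arithmetic, here is a small sketch computing the BDP for the example above and for an assumed long-haul path (the 1 Gbps / 150 ms figures are illustrative, not from the text):

```python
def bandwidth_delay_product(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Return the bandwidth-delay product in bytes (bits in flight / 8)."""
    return bandwidth_bps * rtt_seconds / 8

# The example from the text: 100 Mbps link, 50 ms RTT
print(f"{bandwidth_delay_product(100_000_000, 0.050) / 1_000:.0f} KB")        # 625 KB

# An assumed transcontinental path: 1 Gbps, 150 ms RTT, needs a far larger TCP window
print(f"{bandwidth_delay_product(1_000_000_000, 0.150) / 1_000_000:.2f} MB")  # 18.75 MB
```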
Networks are fundamentally shared resources. Unlike dedicated point-to-point connections, modern networks allow many users to share the same infrastructure. This sharing dramatically reduces cost but creates resource allocation challenges.
Statistical Multiplexing:
Packet switching exploits statistical multiplexing—the observation that not all users need resources simultaneously:
Example Calculation: a worked sketch at the end of this subsection (after the oversubscription discussion below) runs the numbers for 1000 users, assuming a per-user peak rate and activity level.
This dramatic efficiency gain makes the shared Internet economically viable. Dedicated 10 Mbps per user would be prohibitively expensive.
The Trade-Off: Statistical multiplexing provides efficiency but not guarantees. When many users are simultaneously active, congestion occurs. This is the fundamental nature of shared networks.
Network providers oversubscribe—they sell more capacity than they physically have, betting on statistical multiplexing. An ISP selling '100 Mbps' to 1000 customers doesn't have 100 Gbps of upstream capacity—they might have 10 Gbps, assuming most users are idle most of the time.
This usually works. But during peak hours or unusual events (everyone working from home during a pandemic), oversubscription leads to congestion. Understanding this explains why 'my Internet is slow in the evening.'
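The sketch below runs the numbers behind these claims. The 100 Mbps per-customer rate, 1000 customers, and 10 Gbps of upstream capacity come from the text above; the 5% activity factor is an assumption chosen for illustration:

```python
# Illustrative statistical-multiplexing arithmetic. The 5% activity factor is
# an assumption for this sketch; the other figures follow the text above.
users = 1000
peak_rate_bps = 100_000_000        # each customer is sold 100 Mbps
activity_factor = 0.05             # assume a user transmits ~5% of the time
provisioned_bps = 10_000_000_000   # 10 Gbps of upstream capacity

dedicated_bps = users * peak_rate_bps                   # capacity with no sharing
expected_bps = users * peak_rate_bps * activity_factor  # average demand

print(f"Dedicated capacity needed: {dedicated_bps / 1e9:.0f} Gbps")
print(f"Expected average demand:   {expected_bps / 1e9:.0f} Gbps")
print(f"Oversubscription ratio:    {dedicated_bps / provisioned_bps:.0f}:1")
# If far more than 5% of users are active at once, demand exceeds the
# 10 Gbps actually provisioned and congestion follows.
```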
Congestion occurs when network demand exceeds available capacity. It's the inevitable consequence of shared resources and is one of the most significant challenges in network design and operation.
Congestion Control Mechanisms:
TCP Congestion Control: TCP actively adjusts its sending rate to avoid congestion, probing for available capacity and backing off when loss signals congestion.

Algorithms: Classic slow start and congestion avoidance (Tahoe/Reno), plus modern variants such as CUBIC and BBR, which differ in how aggressively they probe and how they detect congestion.

Network-Based Control: Routers can participate as well, using active queue management such as Random Early Detection (RED) and Explicit Congestion Notification (ECN) to signal congestion before queues overflow.
"""Simplified TCP Congestion Control SimulationDemonstrates slow start and congestion avoidance""" class SimpleTCPCongestionControl: def __init__(self): self.cwnd = 1 # Congestion window (MSS units) self.ssthresh = 64 # Slow start threshold self.phase = "slow_start" self.history = [] def on_ack_received(self): """Called when ACK is received (successful transmission)""" if self.phase == "slow_start": # Exponential growth: double window self.cwnd += 1 # For each ACK, increase by 1 MSS if self.cwnd >= self.ssthresh: self.phase = "congestion_avoidance" else: # Linear growth: increase by 1/cwnd per ACK # Net effect: +1 MSS per RTT self.cwnd += 1 / self.cwnd self.history.append(('ack', self.cwnd, self.phase)) def on_packet_loss(self): """Called when packet loss is detected""" # Multiplicative decrease self.ssthresh = max(self.cwnd / 2, 2) self.cwnd = 1 # Restart slow start (TCP Tahoe style) self.phase = "slow_start" self.history.append(('loss', self.cwnd, self.phase)) # Simulate 20 successful transmissions, then a losscc = SimpleTCPCongestionControl() print("Simulation: 20 ACKs, then loss, then 10 more ACKs")print("-" * 50) for i in range(20): cc.on_ack_received() print(f"ACK {i+1}: cwnd = {cc.cwnd:.2f}, phase = {cc.phase}") print("\n*** PACKET LOSS DETECTED ***\n")cc.on_packet_loss()print(f"After loss: cwnd = {cc.cwnd:.2f}, ssthresh = {cc.ssthresh:.2f}")print() for i in range(10): cc.on_ack_received() print(f"ACK {i+21}: cwnd = {cc.cwnd:.2f}, phase = {cc.phase}")TCP congestion control is designed to be fair—multiple flows sharing a bottleneck should each get approximately equal share. However, 'fair' is nuanced:
• UDP doesn't respond to congestion—it can starve TCP flows
• Some TCP variants are more aggressive than others
• Short flows never exit slow start, getting less than long flows
• Flows with different RTTs get different shares (RTT-unfairness)
Despite imperfections, TCP's congestion control has kept the Internet stable for decades.
Quality of Service (QoS) refers to mechanisms that provide differentiated treatment to different types of traffic. Instead of treating all packets equally (best-effort), QoS enables prioritization based on application needs.
| Application | Bandwidth | Latency | Jitter | Loss Tolerance |
|---|---|---|---|---|
| VoIP | Low (100 Kbps) | Critical (<150ms) | Critical (<30ms) | Low (<1%) |
| Video Conference | Medium (2-10 Mbps) | Critical (<150ms) | Critical (<30ms) | Low (<1%) |
| Video Streaming | High (5-25 Mbps) | Moderate (buffering) | Moderate | Very Low |
| Online Gaming | Low-Medium | Critical (<50ms) | Critical | Medium |
| Web Browsing | Variable | Noticeable (but tolerated) | Not critical | Must retry |
| File Transfer | As available | Not critical | Not critical | Zero (TCP ensures) |
| Email | Low | Minutes acceptable | Not applicable | Zero (TCP) |
QoS Mechanisms:
Traffic Classification: Identify and mark traffic (for example, DSCP values in the IP header) so downstream devices know how each class should be treated.

Traffic Shaping: Smooth bursts by queuing traffic and releasing it at a configured rate, commonly implemented with a token bucket (see the sketch below).

Queuing Disciplines: Decide which queued packet is transmitted next; priority queuing, weighted fair queuing, and class-based queuing are common choices.

Admission Control: Refuse new flows when accepting them would violate guarantees for existing traffic, as used in reservation-based QoS.
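As one concrete example of traffic shaping, here is a minimal token-bucket sketch; the class name, rate, and burst values are illustrative, not a reference to any particular product:

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: a packet may be sent only if enough
    tokens are available; tokens accumulate at the configured rate."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec     # sustained sending rate
        self.capacity = burst_bytes        # maximum burst size
        self.tokens = burst_bytes          # start with a full bucket
        self.last_refill = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Credit tokens earned since the last check, capped at the burst size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True        # conforming traffic: send now
        return False           # non-conforming: queue (shaping) or drop (policing)

# Shape to 1 Mbps (125 KB/s) with a 10 KB burst allowance
shaper = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=10_000)
print(shaper.allow(1500))   # True: fits within the initial burst
```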
QoS only works when enforced everywhere along the path. Your enterprise can prioritize VoIP, but once traffic leaves your network to the ISP, those markings may be ignored or re-marked. The public Internet is largely best-effort—which is why real-time applications use adaptive techniques (buffering, quality switching) rather than relying on QoS guarantees.
IP addresses are a finite resource—perhaps the most constrained in modern networking. The IPv4 address space exhaustion and the transition to IPv6 represent one of the largest infrastructure changes in Internet history.
| Characteristic | IPv4 | IPv6 |
|---|---|---|
| Address Size | 32 bits | 128 bits |
| Total Addresses | ~4.3 billion | ~340 undecillion (3.4×10³⁸) |
| Format | Dotted decimal (192.168.1.1) | Hexadecimal (2001:db8::1) |
| Status | Exhausted globally | Vast availability |
| Private Addresses | 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 | fc00::/7 (ULA) |
| Broadcast | Supported | Replaced by multicast |
The IPv4 Exhaustion Crisis:
The Internet's architects in the 1970s allocated 32-bit addresses, believing 4.3 billion addresses would be plenty. They were wrong:
Timeline: IANA allocated its last unreserved IPv4 /8 blocks in early 2011, and the regional registries exhausted their freely allocatable pools over the following years.
Mitigation Strategies:
Network Address Translation (NAT): Lets many devices share one public address by rewriting addresses and ports at the network edge; the 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 private ranges exist for exactly this purpose.

Classless Inter-Domain Routing (CIDR): Replaced rigid class A/B/C allocations with variable-length prefixes, so address blocks can be sized to actual need and aggregated in routing tables.
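A short sketch using Python's standard ipaddress module illustrates CIDR block sizing, the RFC 1918 private space that NAT relies on, and the scale gap between the two address families:

```python
import ipaddress

# CIDR: prefix lengths can match actual need instead of rigid class A/B/C sizes
for prefix in ("203.0.113.0/24", "198.51.100.0/22", "10.0.0.0/8"):
    net = ipaddress.ip_network(prefix)
    print(f"{prefix}: {net.num_addresses:,} addresses")

# RFC 1918 private space: what NAT hides behind a single public address
print(ipaddress.ip_address("192.168.1.1").is_private)   # True

# The scale gap between the two address families
print(f"IPv4: {2**32:,} addresses")
print(f"IPv6: {2**128:.3e} addresses")
```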
IPv6 Deployment:
Despite being standardized in 1998, IPv6 adoption took decades because:
• IPv6 is not backward compatible—requires parallel infrastructure
• NAT 'solved' the immediate problem, reducing urgency
• Costs of dual-stack outweigh benefits for many organizations
• No compelling features for users—speed/security arguments weak
Adoption accelerated when mobile operators (needing massive address space) and large content providers (Google, Facebook) deployed IPv6, creating critical mass.
Beyond raw connectivity, networks provide services that are themselves shared resources. These services have capacity limits and require management.
Service Resource Planning:
Network service capacity planning requires understanding:
Baseline Load: Current measured demand (peak and average) for each service.

Growth Projections: Expected increase in users, devices, and per-user traffic over the planning horizon.

Headroom: Spare capacity held in reserve so bursts and failures don't immediately cause congestion; a common rule of thumb is to plan upgrades before sustained utilization passes roughly 80%.

Failure Modes: How load shifts when a link, device, or site fails, and whether the remaining capacity can absorb it.

Scaling Strategy: Whether to scale up (bigger devices and links) or scale out (more devices and paths), and how quickly either can be done.
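To make the factors above concrete, here is a hedged sketch that projects when a link will cross its headroom threshold; the baseline load, growth rate, and 80% threshold are assumed planning inputs, not recommendations:

```python
def months_until_upgrade(baseline_mbps: float, capacity_mbps: float,
                         monthly_growth: float, headroom_pct: float = 0.8) -> int:
    """Months until projected peak load crosses the headroom threshold,
    assuming load compounds by `monthly_growth` each month."""
    if monthly_growth <= 0:
        raise ValueError("monthly_growth must be positive for this projection")
    months = 0
    load = baseline_mbps
    while load < capacity_mbps * headroom_pct:
        load *= 1 + monthly_growth
        months += 1
    return months

# Assumed planning inputs: 400 Mbps peak baseline on a 1 Gbps link,
# 5% traffic growth per month, upgrade before sustained 80% utilization.
print(months_until_upgrade(baseline_mbps=400, capacity_mbps=1000, monthly_growth=0.05))
```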
Cloud services create an illusion of infinite resources—'just spin up more instances.' In reality:
• Cloud regions have physical capacity limits
• Quotas limit individual account usage
• Scaling takes time (seconds to minutes)
• Costs grow with usage (potentially dramatically)
Even in the cloud, capacity planning remains essential. The model changes from 'buy equipment' to 'manage costs and quotas.'
Effective resource management requires visibility. Network monitoring provides data for capacity planning, troubleshooting, and optimization.
| Method | What It Measures | Tools | Use Case |
|---|---|---|---|
| SNMP Polling | Device counters (bytes, errors, CPU) | Cacti, Nagios, PRTG | Capacity trends, health dashboards |
| Flow Analysis | Traffic patterns (source, dest, protocol) | NetFlow, sFlow, ntopng | Who's using bandwidth? What protocols? |
| Packet Capture | Full packet contents | Wireshark, tcpdump | Deep troubleshooting, forensics |
| Synthetic Probing | Simulated transactions | Smokeping, ThousandEyes | Application experience, SLA verification |
| Log Analysis | Event records from devices | ELK stack, Splunk | Security, troubleshooting, audit |
| APM | Application-level metrics | New Relic, Datadog, Dynatrace | End-user experience, service dependencies |
Key Metrics to Monitor:
Bandwidth/Utilization: How much of each link's capacity is in use, typically tracked as short-interval averages plus peak values.

Latency: Round-trip times between key points; rising latency is often the first sign of congestion.

Errors: Interface errors, discards, and CRC failures that point to faulty hardware or cabling.

Availability: Uptime of devices, links, and services, measured against SLA targets.

Session Counts: Concurrent connections on firewalls, NAT gateways, and load balancers, which have hard session limits.
"""Simple bandwidth utilization calculation from SNMP countersDemonstrates fundamental network monitoring concepts""" from datetime import datetimefrom dataclasses import dataclass @dataclassclass InterfaceCounters: """SNMP counter values at a point in time""" timestamp: datetime if_in_octets: int # Bytes received if_out_octets: int # Bytes transmitted interface_speed: int # Interface speed in bps def calculate_utilization(prev: InterfaceCounters, curr: InterfaceCounters) -> dict: """ Calculate bandwidth utilization between two measurement points """ time_delta = (curr.timestamp - prev.timestamp).total_seconds() if time_delta <= 0: raise ValueError("Time delta must be positive") # Calculate bits transferred (counter difference × 8) in_bits = (curr.if_in_octets - prev.if_in_octets) * 8 out_bits = (curr.if_out_octets - prev.if_out_octets) * 8 # Calculate rates (bits per second) in_bps = in_bits / time_delta out_bps = out_bits / time_delta # Calculate utilization percentage in_util = (in_bps / curr.interface_speed) * 100 out_util = (out_bps / curr.interface_speed) * 100 return { 'interval_seconds': time_delta, 'in_bps': in_bps, 'out_bps': out_bps, 'in_mbps': in_bps / 1_000_000, 'out_mbps': out_bps / 1_000_000, 'in_utilization_pct': in_util, 'out_utilization_pct': out_util } # Example: 1 Gbps interface, 5-minute polling intervalinterface_speed = 1_000_000_000 # 1 Gbps sample_1 = InterfaceCounters( timestamp=datetime(2024, 1, 15, 10, 0, 0), if_in_octets=1_000_000_000, if_out_octets=500_000_000, interface_speed=interface_speed) sample_2 = InterfaceCounters( timestamp=datetime(2024, 1, 15, 10, 5, 0), # 5 minutes later if_in_octets=1_450_000_000, # 450 MB received if_out_octets=650_000_000, # 150 MB sent interface_speed=interface_speed) result = calculate_utilization(sample_1, sample_2) print("Interface Utilization Report")print("=" * 40)print(f"Interval: {result['interval_seconds']:.0f} seconds")print(f"Inbound: {result['in_mbps']:.2f} Mbps ({result['in_utilization_pct']:.1f}%)")print(f"Outbound: {result['out_mbps']:.2f} Mbps ({result['out_utilization_pct']:.1f}%)")Don't just monitor—alert on actionable conditions:
• Utilization >80% — Warning; may hit capacity
• Utilization >90% — Critical; likely causing issues
• Sudden drops — Possible failure
• Error rates increasing — Hardware or cable issues
• Latency increases — Congestion or path changes
Avoid alert fatigue: Too many false positives cause real alerts to be ignored.
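A small sketch of how the thresholds above might be applied to the utilization percentages produced by calculate_utilization in the earlier example (the function name and messages are illustrative):

```python
def classify_utilization(utilization_pct: float) -> str:
    """Map a utilization percentage onto the alert levels listed above."""
    if utilization_pct > 90:
        return "CRITICAL: likely causing issues"
    if utilization_pct > 80:
        return "WARNING: may hit capacity"
    return "OK"

# In practice, feed in result['in_utilization_pct'] from the polling example above.
for sample in (1.2, 84.0, 95.5):
    print(f"{sample:5.1f}% -> {classify_utilization(sample)}")
```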
Networks are shared resource systems providing bandwidth, connectivity, addresses, and services to many simultaneous users. Understanding resource characteristics, sharing mechanisms, and management techniques is essential for network design, operation, and troubleshooting.
Module Complete:
With this page, you've completed Module 1: Network Fundamentals. You now possess a solid foundation in:
• What computer networks are and why they exist
• How networks are structured
• The applications networks serve
• How communication occurs
• How resources are shared and managed
This foundation prepares you for deeper exploration of the Internet, protocols, network models, and the layered architecture that makes modern networking possible.
Congratulations! You've completed the Network Fundamentals module. You now have a comprehensive understanding of what computer networks are, how they're structured, what applications they serve, how communication occurs, and how resources are managed. This foundation will support everything you learn in subsequent modules about protocols, architectures, and implementation.