Having mastered the individual components of fragmentation calculations—fragment sizes, offsets, counts, and overhead—it's time to integrate these skills through comprehensive problem-solving. This page presents carefully designed problems that mirror exam questions and real-world network analysis scenarios.
Each problem builds on concepts from previous pages, combining fragment-size, offset, count, and overhead calculations. Work through each problem step by step, checking your understanding before viewing the solution.
This page covers: (1) Basic fragmentation calculations, (2) Multi-hop fragmentation scenarios, (3) Protocol overhead analysis, (4) Reverse engineering from fragment captures, (5) Network design optimization, and (6) Comprehensive exam-style problems.
Let's begin with fundamental problems that test core calculation skills.
Problem 1.1: Standard Ethernet Fragmentation
An IP datagram has a total length of 4,500 bytes (including a 20-byte header). It must be transmitted over an Ethernet link with MTU 1,500 bytes.
Calculate: a) Maximum data per fragment b) Number of fragments c) Data size in each fragment d) Fragment Offset field value for each fragment e) Total bytes transmitted (including all headers)
Before viewing the solution, work through the problem yourself. Remember: Maximum fragment data = ((MTU - Header) ÷ 8) × 8, and Fragment Offset is in 8-byte units.
```python
import math

# Problem 1.1: Standard Ethernet Fragmentation
print("Problem 1.1 Solution")
print("=" * 60)

# Given
total_length = 4500  # Including header
ip_header = 20
mtu = 1500

# a) Maximum data per fragment
original_data = total_length - ip_header  # 4480 bytes
max_frag_data = ((mtu - ip_header) // 8) * 8
print(f"a) Original data: {original_data} bytes")
print(f"   Max data per fragment: {max_frag_data} bytes")

# b) Number of fragments
num_fragments = math.ceil(original_data / max_frag_data)
print(f"\nb) Number of fragments: {num_fragments}")

# c) Data size in each fragment
print(f"\nc) Data sizes:")
remaining = original_data
for i in range(num_fragments):
    if remaining >= max_frag_data:
        frag_data = max_frag_data
    else:
        frag_data = remaining
    print(f"   Fragment {i+1}: {frag_data} bytes")
    remaining -= frag_data

# d) Fragment Offset values
print(f"\nd) Fragment Offset values:")
cumulative = 0
for i in range(num_fragments):
    offset_bytes = cumulative
    offset_field = cumulative // 8
    frag_data = max_frag_data if i < num_fragments - 1 else original_data - cumulative
    mf = 1 if i < num_fragments - 1 else 0
    print(f"   Fragment {i+1}: Offset = {offset_field} (byte {offset_bytes}), MF = {mf}")
    cumulative += frag_data

# e) Total bytes transmitted
total_transmitted = original_data + (num_fragments * ip_header)
print(f"\ne) Total bytes transmitted: {total_transmitted} bytes")
print(f"   ({original_data} data + {num_fragments * ip_header} headers)")
print(f"   Overhead: {(num_fragments - 1) * ip_header} bytes additional headers")
```

Problem 1.2: Non-Standard MTU
A 3,200-byte IP datagram (20-byte header, 3,180 bytes data) encounters a link with MTU 1,006 bytes.
Calculate: a) Maximum fragment data (note the 8-byte alignment!) b) Number of fragments c) Size of the last fragment's data d) Bytes "wasted" due to alignment per fragment
```python
import math

# Problem 1.2: Non-Standard MTU with alignment
print("Problem 1.2 Solution")
print("=" * 60)

# Given
original_data = 3180
mtu = 1006
ip_header = 20

# a) Maximum fragment data with 8-byte alignment
available = mtu - ip_header         # 986 bytes
max_frag_data = (available // 8) * 8  # 984 bytes
print(f"a) Available space: {available} bytes")
print(f"   Max fragment data (8-byte aligned): {max_frag_data} bytes")
print(f"   Note: {available} ÷ 8 = {available/8} → floor to {available//8} × 8 = {max_frag_data}")

# b) Number of fragments
num_fragments = math.ceil(original_data / max_frag_data)
print(f"\nb) Number of fragments: {num_fragments}")
print(f"   {original_data} ÷ {max_frag_data} = {original_data/max_frag_data:.4f} → ⌈⌉ = {num_fragments}")

# c) Last fragment data size
last_frag_data = original_data - (num_fragments - 1) * max_frag_data
print(f"\nc) Last fragment data: {last_frag_data} bytes")
print(f"   3180 - (3 × 984) = 3180 - 2952 = {last_frag_data}")

# d) Wasted bytes per fragment
wasted_per_frag = available - max_frag_data
total_wasted = wasted_per_frag * (num_fragments - 1)  # Last fragment can be any size
print(f"\nd) Wasted per non-final fragment: {wasted_per_frag} bytes")
print(f"   Total wasted: {total_wasted} bytes (first {num_fragments-1} fragments)")
print(f"   Last fragment uses only {last_frag_data}/{available} available bytes")
```

Problem 2.1: Cascading Fragmentation
A 10,000-byte IP datagram (20-byte header, 9,980 bytes data) traverses a path with the following MTUs:

- Link 1: MTU 4,000 bytes
- Link 2: MTU 1,500 bytes
- Link 3: MTU 576 bytes
Calculate: a) Fragments after each link b) Total fragments reaching destination c) Fragment Offset values of all final fragments d) Total header overhead at destination
```python
import math

# Problem 2.1: Cascading Fragmentation
print("Problem 2.1 Solution: Cascading Fragmentation")
print("=" * 60)

def max_frag_data(mtu, header=20):
    return ((mtu - header) // 8) * 8

def fragment_at_hop(current_fragments, mtu):
    """Fragment a list of (offset_bytes, size) tuples at the given MTU."""
    max_data = max_frag_data(mtu)
    new_fragments = []
    for offset_bytes, size in current_fragments:
        if size <= max_data:
            new_fragments.append((offset_bytes, size))
        else:
            curr_offset = offset_bytes
            remaining = size
            while remaining > 0:
                frag_size = min(remaining, max_data)
                new_fragments.append((curr_offset, frag_size))
                curr_offset += frag_size
                remaining -= frag_size
    return new_fragments

# Initial datagram
original_data = 9980
fragments = [(0, original_data)]  # (offset, size)

mtu_path = [4000, 1500, 576]

for hop, mtu in enumerate(mtu_path):
    print(f"\n--- Link {hop+1}: MTU {mtu} ---")
    print(f"Max fragment data: {max_frag_data(mtu)} bytes")
    fragments = fragment_at_hop(fragments, mtu)
    print(f"Fragments after this link: {len(fragments)}")

# b) Final fragment count
print("\n" + "=" * 60)
print(f"b) Total fragments at destination: {len(fragments)}")

# c) Fragment Offset values
print(f"\nc) Final fragment details:")
for i, (offset, size) in enumerate(fragments):
    offset_field = offset // 8
    mf = 0 if i == len(fragments) - 1 else 1
    print(f"   Frag {i+1:2d}: Offset = {offset_field:4d} (byte {offset:5d}), "
          f"Size = {size:3d} bytes, MF = {mf}")

# d) Header overhead
ip_header = 20
total_headers = len(fragments) * ip_header
original_header = ip_header
overhead = total_headers - original_header
print(f"\nd) Header overhead:")
print(f"   Total headers: {len(fragments)} × {ip_header} = {total_headers} bytes")
print(f"   Original header: {original_header} bytes")
print(f"   Overhead: {overhead} bytes")

# Verify total data
total_data = sum(size for _, size in fragments)
print(f"\nVerification: Total data = {total_data} bytes "
      f"({'✓' if total_data == original_data else '✗'})")
```

Problem 2.2: Shortcut Verification
Compare the final fragment count from Problem 2.1 with the count you'd get by fragmenting the original datagram directly at the minimum MTU on the path. Are the two equal?
```python
import math

# Problem 2.2: Shortcut comparison
print("Problem 2.2: Shortcut Comparison")
print("=" * 60)

original_data = 9980
min_mtu = 576
ip_header = 20

max_data_at_min = ((min_mtu - ip_header) // 8) * 8
direct_fragments = math.ceil(original_data / max_data_at_min)

print(f"Original data: {original_data} bytes")
print(f"Minimum MTU: {min_mtu}")
print(f"Max fragment data at min MTU: {max_data_at_min} bytes")
print(f"Direct fragmentation: ⌈{original_data}/{max_data_at_min}⌉ = {direct_fragments} fragments")
print(f"\nThe cascaded result from Problem 2.1 (20 fragments) is slightly HIGHER.")
print("Key insight: fragmenting once at the minimum MTU gives the MINIMUM")
print("possible fragment count. Cascaded fragmentation can produce a few extra")
print("fragments, because each router fragments arriving fragments independently")
print("and never coalesces them, leaving short 'tail' pieces at every stage.")
```

Problem 3.1: VoIP Packet Fragmentation
A VoIP application sends 160 bytes of audio data every 20 ms using UDP (8-byte header) over IPv4 (20-byte header). The packets traverse a VPN tunnel that adds:

- ESP header: 8 bytes
- ESP trailer: 10 bytes
- ESP authentication data: 12 bytes
- New outer IP header: 20 bytes
The physical network has MTU 1,500 bytes.
Calculate: a) Complete encapsulated packet size b) Will fragmentation occur? c) If using larger audio payloads (1,000 bytes), would fragmentation occur? d) Maximum audio payload before fragmentation
```python
import math

# Problem 3.1: VoIP Packet Fragmentation
print("Problem 3.1 Solution: VoIP over IPsec VPN")
print("=" * 60)

# Given parameters
audio_data = 160
udp_header = 8
inner_ip = 20
esp_header = 8
esp_trailer = 10
esp_auth = 12
outer_ip = 20
mtu = 1500

print("Encapsulation stack:")
print(f"  Audio data:  {audio_data} bytes")
print(f"  UDP header:  {udp_header} bytes")
print(f"  Inner IP:    {inner_ip} bytes")
print(f"  ESP header:  {esp_header} bytes")
print(f"  ESP trailer: {esp_trailer} bytes")
print(f"  ESP auth:    {esp_auth} bytes")
print(f"  Outer IP:    {outer_ip} bytes")

# a) Complete encapsulated size
inner_packet = audio_data + udp_header + inner_ip
ipsec_payload = inner_packet + esp_trailer
ipsec_total = esp_header + ipsec_payload + esp_auth
outer_packet = outer_ip + ipsec_total

print(f"\na) Encapsulated packet size:")
print(f"   Inner packet: {inner_packet} bytes")
print(f"   After IPsec:  {ipsec_total} bytes")
print(f"   Outer packet: {outer_packet} bytes")

# b) Fragmentation check
print(f"\nb) MTU: {mtu} bytes")
if outer_packet <= mtu:
    print(f"   {outer_packet} ≤ {mtu}: NO fragmentation ✓")
else:
    print(f"   {outer_packet} > {mtu}: Fragmentation required!")

# c) With larger audio payload
print(f"\nc) With 1,000 bytes audio:")
larger_audio = 1000
larger_inner = larger_audio + udp_header + inner_ip
larger_ipsec = esp_header + larger_inner + esp_trailer + esp_auth
larger_outer = outer_ip + larger_ipsec
print(f"   Outer packet: {larger_outer} bytes")
if larger_outer <= mtu:
    print(f"   {larger_outer} ≤ {mtu}: NO fragmentation")
else:
    print(f"   {larger_outer} > {mtu}: Fragmentation REQUIRED")
    # Calculate fragments
    outer_data = larger_ipsec
    max_frag = ((mtu - outer_ip) // 8) * 8
    frags = math.ceil(outer_data / max_frag)
    print(f"   Fragments: {frags}")

# d) Maximum audio payload
print(f"\nd) Maximum audio payload without fragmentation:")
# Work backwards from the MTU
max_outer_data = mtu - outer_ip                             # 1480
max_ipsec_payload = max_outer_data - esp_header - esp_auth  # 1460
max_inner = max_ipsec_payload - esp_trailer                 # 1450
max_audio = max_inner - udp_header - inner_ip               # 1422
print(f"   Max outer IP data: {max_outer_data} bytes")
print(f"   Max IPsec payload: {max_ipsec_payload} bytes")
print(f"   Max inner packet:  {max_inner} bytes")
print(f"   Max audio data:    {max_audio} bytes")
```

Problem 3.2: DNS Response Fragmentation
A DNS server responds with a 2,048-byte answer (including 12-byte DNS header) over UDP. The response traverses a path with minimum MTU 576 bytes.
Calculate: a) Complete IP datagram size b) Number of fragments c) Why DNS traditionally limited responses to 512 bytes d) Efficiency (data/total transmitted) for this response
```python
import math

# Problem 3.2: DNS Response Fragmentation
print("Problem 3.2 Solution: DNS Response Fragmentation")
print("=" * 60)

# Given
dns_response = 2048  # Including DNS header
udp_header = 8
ip_header = 20
min_mtu = 576

# a) Complete IP datagram size
udp_data = dns_response  # The DNS message is the UDP payload
ip_data = udp_data + udp_header
ip_datagram = ip_data + ip_header

print(f"a) IP datagram size:")
print(f"   DNS response: {dns_response} bytes")
print(f"   UDP datagram: {udp_data + udp_header} bytes")
print(f"   IP datagram:  {ip_datagram} bytes")

# b) Number of fragments
max_frag_data = ((min_mtu - ip_header) // 8) * 8
num_fragments = math.ceil(ip_data / max_frag_data)

print(f"\nb) Fragmentation at MTU {min_mtu}:")
print(f"   Max fragment data: {max_frag_data} bytes")
print(f"   IP payload to fragment: {ip_data} bytes")
print(f"   Fragments: ⌈{ip_data}/{max_frag_data}⌉ = {num_fragments}")

# c) Historical DNS limit
print(f"\nc) Why the 512-byte DNS limit:")
traditional_dns = 512
traditional_udp = traditional_dns + udp_header  # 520
traditional_ip = traditional_udp + ip_header    # 540
print(f"   512-byte DNS + 8 UDP + 20 IP = {traditional_ip} bytes")
print(f"   {traditional_ip} < {min_mtu} (minimum MTU): NO fragmentation")
print(f"   This ensured DNS worked across ALL IPv4 networks without fragmentation")

# d) Efficiency
total_transmitted = ip_data + (num_fragments * ip_header)
efficiency = (dns_response / total_transmitted) * 100
overhead = (num_fragments - 1) * ip_header

print(f"\nd) Efficiency analysis:")
print(f"   DNS data (what we care about): {dns_response} bytes")
print(f"   Total transmitted: {total_transmitted} bytes")
print(f"   Header overhead: {overhead} bytes")
print(f"   Efficiency: {efficiency:.1f}%")
print(f"   For comparison, unfragmented efficiency: "
      f"{(dns_response/ip_datagram)*100:.1f}%")
```

Problem 4.1: Reconstruct Original Datagram
A packet capture shows four IP fragments with the following details:
| Fragment | Total Length | ID | Flags | Offset (field) |
|---|---|---|---|---|
| A | 556 | 0x1234 | MF=1 | 0 |
| B | 556 | 0x1234 | MF=1 | 67 |
| C | 556 | 0x1234 | MF=1 | 134 |
| D | 280 | 0x1234 | MF=0 | 201 |
All fragments have IHL=5 (20-byte header).
Calculate: a) Data size in each fragment b) Byte range covered by each fragment c) Original datagram's total data size d) Original datagram's Total Length e) Verify continuity (no gaps or overlaps)
```python
# Problem 4.1: Reconstruct Original Datagram
print("Problem 4.1 Solution: Reconstruct from Capture")
print("=" * 60)

# Captured fragments
fragments = [
    {'name': 'A', 'total_len': 556, 'ihl': 5, 'mf': 1, 'offset': 0},
    {'name': 'B', 'total_len': 556, 'ihl': 5, 'mf': 1, 'offset': 67},
    {'name': 'C', 'total_len': 556, 'ihl': 5, 'mf': 1, 'offset': 134},
    {'name': 'D', 'total_len': 280, 'ihl': 5, 'mf': 0, 'offset': 201},
]

print("a) Data size in each fragment:")
for f in fragments:
    header_size = f['ihl'] * 4
    data_size = f['total_len'] - header_size
    f['data_size'] = data_size
    f['header_size'] = header_size
    print(f"   Fragment {f['name']}: {f['total_len']} - {header_size} = {data_size} bytes")

print("\nb) Byte range covered by each fragment:")
for f in fragments:
    start_byte = f['offset'] * 8
    end_byte = start_byte + f['data_size'] - 1
    f['start_byte'] = start_byte
    f['end_byte'] = end_byte
    print(f"   Fragment {f['name']}: bytes {start_byte:4d} to {end_byte:4d} "
          f"(offset {f['offset']} × 8 = {start_byte}, +{f['data_size']}-1 = {end_byte})")

print("\nc) Original datagram data size:")
last = fragments[-1]  # Fragment with MF=0
original_data = last['offset'] * 8 + last['data_size']
print(f"   Last fragment offset × 8 + last fragment data")
print(f"   = {last['offset']} × 8 + {last['data_size']}")
print(f"   = {last['offset'] * 8} + {last['data_size']}")
print(f"   = {original_data} bytes")

print("\nd) Original datagram Total Length:")
original_total = original_data + 20  # Original had one 20-byte header
print(f"   Data + original header = {original_data} + 20 = {original_total} bytes")

print("\ne) Continuity verification:")
sorted_frags = sorted(fragments, key=lambda f: f['offset'])
all_continuous = True
for i in range(len(sorted_frags) - 1):
    current = sorted_frags[i]
    next_f = sorted_frags[i + 1]
    expected_next_start = current['end_byte'] + 1
    actual_next_start = next_f['start_byte']
    if actual_next_start == expected_next_start:
        status = "✓"
    elif actual_next_start > expected_next_start:
        status = f"GAP of {actual_next_start - expected_next_start} bytes!"
        all_continuous = False
    else:
        status = f"OVERLAP of {expected_next_start - actual_next_start} bytes!"
        all_continuous = False
    print(f"   {current['name']}→{next_f['name']}: "
          f"Expected next at {expected_next_start}, found at {actual_next_start} {status}")

print(f"\n   Continuity: {'PASSED ✓' if all_continuous else 'FAILED ✗'}")
```

Problem 4.2: Missing Fragment Detection
A reassembly buffer contains fragments for ID 0xABCD:
| Fragment | Total Length | Offset | MF |
|---|---|---|---|
| 1 | 1500 | 0 | 1 |
| 2 | 1500 | 185 | 1 |
| 3 | 1500 | 555 | 0 |
Determine: a) Is this fragment set complete? b) If not, what's missing? c) Expected offset for the missing fragment
```python
# Problem 4.2: Missing Fragment Detection
print("Problem 4.2 Solution: Missing Fragment Detection")
print("=" * 60)

# Captured fragments (IHL=5 assumed)
fragments = [
    {'num': 1, 'total_len': 1500, 'offset': 0, 'mf': 1},
    {'num': 2, 'total_len': 1500, 'offset': 185, 'mf': 1},
    {'num': 3, 'total_len': 1500, 'offset': 555, 'mf': 0},
]

ip_header = 20

# Calculate data sizes and byte ranges
for f in fragments:
    f['data'] = f['total_len'] - ip_header
    f['start'] = f['offset'] * 8
    f['end'] = f['start'] + f['data'] - 1

print("Fragment analysis:")
for f in fragments:
    print(f"   Fragment {f['num']}: Offset={f['offset']:3d} ({f['start']:5d}-{f['end']:5d}), "
          f"Data={f['data']:4d}, MF={f['mf']}")

# Check for gaps
print("\na) Completeness check:")
sorted_frags = sorted(fragments, key=lambda f: f['offset'])

missing = []
prev_end = -1

for f in sorted_frags:
    expected_start = prev_end + 1
    actual_start = f['start']
    if actual_start > expected_start:
        gap_start = expected_start
        gap_end = actual_start - 1
        gap_size = gap_end - gap_start + 1
        missing.append((gap_start, gap_end, gap_size))
        print(f"   GAP detected: bytes {gap_start} to {gap_end} ({gap_size} bytes)")
    prev_end = f['end']

if not missing:
    print("   Fragment set is COMPLETE ✓")
else:
    print(f"\nb) Missing fragment analysis:")
    for gap_start, gap_end, gap_size in missing:
        expected_offset = gap_start // 8
        print(f"   Missing: bytes {gap_start} to {gap_end}")
        print(f"   Expected offset (field value): {expected_offset}")
        print(f"   Expected data size: {gap_size} bytes")

print(f"\nc) Expected offset for missing fragment:")
if missing:
    gap_start = missing[0][0]
    print(f"   Byte position {gap_start} → offset field = {gap_start} ÷ 8 = {gap_start // 8}")
    print(f"   Fragment 2 ends at byte {sorted_frags[1]['end']}")
    print(f"   Fragment 3 starts at byte {sorted_frags[2]['start']}")
    print(f"   Missing fragment should have offset = {(sorted_frags[1]['end'] + 1) // 8}")
```

Problem 5.1: Tunnel MTU Configuration
You're configuring a GRE tunnel between two sites. The underlying network has MTU 1,500 bytes. Each GRE packet adds:

- GRE header: 4 bytes
- New outer IP header: 20 bytes
Calculate: a) Maximum tunnel MTU to avoid fragmentation of tunneled packets b) Recommended TCP MSS for hosts using this tunnel c) Maximum UDP payload without fragmentation d) If an application sends 1400-byte UDP payloads, will fragmentation occur?
```python
# Problem 5.1: GRE Tunnel MTU Configuration
print("Problem 5.1 Solution: GRE Tunnel Configuration")
print("=" * 60)

# Given
physical_mtu = 1500
gre_header = 4
outer_ip = 20
inner_ip = 20
tcp_header = 20  # Minimum
udp_header = 8

# a) Maximum tunnel MTU
tunnel_overhead = gre_header + outer_ip  # 24 bytes
max_tunnel_mtu = physical_mtu - tunnel_overhead

print(f"a) Maximum tunnel MTU:")
print(f"   Physical MTU: {physical_mtu}")
print(f"   GRE + outer IP overhead: {tunnel_overhead} bytes")
print(f"   Max tunnel MTU: {max_tunnel_mtu} bytes")
print(f"   (This is the max inner IP datagram size)")

# b) Recommended TCP MSS
tcp_mss = max_tunnel_mtu - inner_ip - tcp_header

print(f"\nb) Recommended TCP MSS:")
print(f"   Tunnel MTU: {max_tunnel_mtu}")
print(f"   Minus inner IP header: -{inner_ip}")
print(f"   Minus TCP header: -{tcp_header}")
print(f"   TCP MSS: {tcp_mss} bytes")

# c) Maximum UDP payload
max_udp_payload = max_tunnel_mtu - inner_ip - udp_header

print(f"\nc) Maximum UDP payload:")
print(f"   Tunnel MTU: {max_tunnel_mtu}")
print(f"   Minus inner IP: -{inner_ip}")
print(f"   Minus UDP header: -{udp_header}")
print(f"   Max UDP payload: {max_udp_payload} bytes")

# d) Check a 1400-byte UDP payload
print(f"\nd) 1400-byte UDP payload check:")
test_payload = 1400
inner_datagram = test_payload + udp_header + inner_ip  # 1428
print(f"   UDP payload: {test_payload}")
print(f"   Inner IP datagram: {inner_datagram} bytes")
print(f"   Tunnel MTU: {max_tunnel_mtu} bytes")

if inner_datagram <= max_tunnel_mtu:
    print(f"   {inner_datagram} ≤ {max_tunnel_mtu}: NO fragmentation ✓")
else:
    print(f"   {inner_datagram} > {max_tunnel_mtu}: Fragmentation REQUIRED")

print(f"\nRecommendation:")
print(f"   Configure 'ip mtu {max_tunnel_mtu}' on tunnel interfaces")
print(f"   Or use 'ip tcp adjust-mss {tcp_mss}' for TCP optimization")
```

Problem 5.2: Data Center Jumbo Frame Analysis
A data center uses 9,000-byte MTU internally. Traffic exits to the Internet through a firewall with 1,500-byte MTU. A server sends 8,000-byte NFS data blocks.
Calculate: a) Can the 8,000-byte blocks traverse the internal network without fragmentation? b) How many fragments at the Internet edge? c) Total overhead if the server sends 1 million blocks/day d) Efficiency improvement if blocks were sized for 1,500-byte MTU
```python
import math

# Problem 5.2: Data Center Jumbo Frame Analysis
print("Problem 5.2 Solution: Jumbo Frame Analysis")
print("=" * 60)

# Given
internal_mtu = 9000
edge_mtu = 1500
nfs_data = 8000
ip_header = 20

# a) Internal network check
print("a) Internal network (MTU 9000):")
ip_datagram = nfs_data + ip_header
print(f"   NFS data: {nfs_data} bytes")
print(f"   IP datagram: {ip_datagram} bytes")
if ip_datagram <= internal_mtu:
    print(f"   {ip_datagram} ≤ {internal_mtu}: NO fragmentation internally ✓")
else:
    print(f"   Fragmentation required internally")

# b) Fragments at Internet edge
print(f"\nb) At Internet edge (MTU {edge_mtu}):")
max_frag_data = ((edge_mtu - ip_header) // 8) * 8
num_fragments = math.ceil(nfs_data / max_frag_data)
print(f"   Max fragment data: {max_frag_data} bytes")
print(f"   Fragments per NFS block: {num_fragments}")

# c) Daily overhead for 1M blocks
blocks_per_day = 1_000_000
extra_headers_per_block = (num_fragments - 1) * ip_header
daily_overhead = blocks_per_day * extra_headers_per_block

print(f"\nc) Daily overhead (1M blocks):")
print(f"   Extra headers per block: {extra_headers_per_block} bytes")
print(f"   Daily overhead: {daily_overhead:,} bytes")
print(f"   = {daily_overhead / 1e6:.1f} MB")
print(f"   = {daily_overhead / 1e9:.3f} GB")

# d) Efficiency comparison
print(f"\nd) Efficiency comparison:")

# Current: 8000-byte blocks fragmented at the edge
current_transmitted = nfs_data + (num_fragments * ip_header)
current_efficiency = nfs_data / current_transmitted * 100

# Alternative: send 1480-byte datagrams that never need fragmentation
optimized_data = max_frag_data  # 1480 bytes per datagram
datagrams_optimized = math.ceil(nfs_data / optimized_data)
optimized_transmitted = nfs_data + (datagrams_optimized * ip_header)
optimized_efficiency = nfs_data / optimized_transmitted * 100

print(f"   Current (8000-byte blocks):")
print(f"     Transmitted: {current_transmitted} bytes for {nfs_data} bytes data")
print(f"     Efficiency: {current_efficiency:.2f}%")
print(f"\n   Optimized (1480-byte blocks):")
print(f"     Datagrams: {datagrams_optimized}")
print(f"     Transmitted: {optimized_transmitted} bytes for {nfs_data} bytes data")
print(f"     Efficiency: {optimized_efficiency:.2f}%")

# Both approaches carry one 20-byte header per 1480-byte chunk, so the byte
# overhead is the same. The real advantage of MTU-sized datagrams lies elsewhere.
print(f"\n   Note: byte overhead is comparable either way. The real win of")
print(f"   MTU-sized datagrams is avoiding reassembly at the destination and")
print(f"   preventing all-or-nothing loss of large fragmented datagrams.")
```

Problem 6.1: Complete Fragmentation Analysis
A host sends an ICMP echo request with 4,000 bytes of data. The ICMP header is 8 bytes, the IP header 20 bytes. The packet traverses:

- Local Ethernet: MTU 1,500 bytes
- WAN link: MTU 1,000 bytes
- Remote Ethernet: MTU 1,500 bytes
Complete analysis: a) Original datagram total size b) IP payload size (data to be fragmented) c) Fragments after first fragmentation point d) Final fragments at destination e) Fragment Offset and MF flag for each final fragment f) Total bytes transmitted (all fragments) g) Header overhead percentage
```python
import math

# Problem 6.1: Complete ICMP Fragmentation Analysis
print("Problem 6.1 Solution: Complete ICMP Fragmentation Analysis")
print("=" * 70)

# Given
icmp_data = 4000
icmp_header = 8
ip_header = 20
mtu_path = [1500, 1000, 1500]  # Fragmentation occurs wherever a packet exceeds the MTU

# a) Original datagram size
ip_payload = icmp_data + icmp_header     # 4008 bytes
original_total = ip_payload + ip_header  # 4028 bytes

print(f"a) Original datagram:")
print(f"   ICMP data:   {icmp_data} bytes")
print(f"   ICMP header: {icmp_header} bytes")
print(f"   IP payload:  {ip_payload} bytes")
print(f"   Total size:  {original_total} bytes")

# b) IP payload
print(f"\nb) IP payload (data to fragment): {ip_payload} bytes")

def max_frag_data(mtu):
    return ((mtu - ip_header) // 8) * 8

def split(fragments, mtu):
    """Refragment (offset, size) pieces that exceed the link's max data size."""
    max_data = max_frag_data(mtu)
    result = []
    for offset, size in fragments:
        while size > 0:
            piece = min(size, max_data)
            result.append((offset, piece))
            offset += piece
            size -= piece
    return result

# c) First fragmentation point: the local Ethernet.
#    The 4,028-byte datagram exceeds MTU 1500, so the sending host fragments it.
frags = split([(0, ip_payload)], 1500)
print(f"\nc) First fragmentation point: local Ethernet (MTU 1500)")
print(f"   4,028 > 1,500, so the host fragments before transmission")
print(f"   Fragments: {len(frags)} (data sizes: {[s for _, s in frags]})")

# d) The WAN link (MTU 1000) refragments each piece; the remote Ethernet does not
final = split(frags, 1000)
shortcut = math.ceil(ip_payload / max_frag_data(1000))
print(f"\nd) Final fragments at destination: {len(final)}")
print(f"   (Note: a single pass at the minimum MTU would give ⌈{ip_payload}/976⌉ = {shortcut};")
print(f"   cascading yields more because the WAN router refragments each")
print(f"   arriving fragment independently)")

# e) Fragment details
print(f"\ne) Fragment Offset and MF for each final fragment:")
print(f"   {'Frag':<6} {'Offset':<10} {'OBytes':<10} {'Data':<8} {'MF':<4}")
print(f"   {'-'*6} {'-'*10} {'-'*10} {'-'*8} {'-'*4}")
for i, (offset, size) in enumerate(final):
    mf = 0 if i == len(final) - 1 else 1
    print(f"   {i+1:<6} {offset // 8:<10} {offset:<10} {size:<8} {mf}")

# f) Total bytes transmitted
num_fragments = len(final)
total_transmitted = ip_payload + num_fragments * ip_header
print(f"\nf) Total bytes transmitted:")
print(f"   IP payload: {ip_payload} bytes")
print(f"   Headers: {num_fragments} × {ip_header} = {num_fragments * ip_header} bytes")
print(f"   Total: {total_transmitted} bytes")

print(f"\n   Individual fragment sizes (Total Length):")
for i, (_, size) in enumerate(final):
    print(f"   Fragment {i+1}: {size + ip_header} bytes")

# g) Overhead percentage
extra_headers = (num_fragments - 1) * ip_header
overhead_pct = (extra_headers / original_total) * 100
print(f"\ng) Header overhead:")
print(f"   Original header: {ip_header} bytes")
print(f"   Extra from fragmentation: {extra_headers} bytes")
print(f"   Overhead percentage: {overhead_pct:.2f}%")

# Verification
print(f"\n{'='*70}")
total_data = sum(size for _, size in final)
print(f"Verification: fragment data sums to {total_data} bytes "
      f"({'✓' if total_data == ip_payload else '✗'})")
```

Problem 6.2: Exam Quick Reference
Here's a consolidated reference for solving fragmentation problems efficiently.
| Calculation | Formula |
|---|---|
| Max fragment data | ((MTU - header) ÷ 8) × 8 |
| Number of fragments | ⌈payload ÷ max_frag_data⌉ |
| Fragment N offset (bytes) | Sum of data in fragments 0 to N-1 |
| Fragment offset (field) | offset_bytes ÷ 8 |
| Last fragment data | payload - ((num_fragments - 1) × max_frag_data) |
| Original size from last frag | last_offset × 8 + last_data_size |
| Header overhead | (num_fragments - 1) × header_size |
| Total transmitted | payload + (num_fragments × header_size) |
| Efficiency | payload ÷ total_transmitted × 100% |
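The reference table above can be collected into a small set of reusable Python helpers. This is a sketch (the function names are illustrative, not from the course), assuming a 20-byte header with no IP options (IHL = 5):

```python
import math

IP_HEADER = 20  # bytes, assuming no IP options (IHL = 5)

def max_frag_data(mtu: int, header: int = IP_HEADER) -> int:
    """Largest 8-byte-aligned data size that fits in one fragment."""
    return ((mtu - header) // 8) * 8

def num_fragments(payload: int, mtu: int) -> int:
    """Number of fragments needed for an IP payload at a given MTU."""
    return math.ceil(payload / max_frag_data(mtu))

def offset_field(offset_bytes: int) -> int:
    """Fragment Offset field value (8-byte units)."""
    return offset_bytes // 8

def last_fragment_data(payload: int, mtu: int) -> int:
    """Data carried by the final fragment."""
    return payload - (num_fragments(payload, mtu) - 1) * max_frag_data(mtu)

def total_transmitted(payload: int, mtu: int, header: int = IP_HEADER) -> int:
    """Payload plus one header per fragment."""
    return payload + num_fragments(payload, mtu) * header

# Spot-check against Problem 1.1 (4,480-byte payload, MTU 1,500):
assert max_frag_data(1500) == 1480
assert num_fragments(4480, 1500) == 4
assert last_fragment_data(4480, 1500) == 40
assert total_transmitted(4480, 1500) == 4560
```

Keep in mind these helpers model a single fragmentation pass; as Problem 2.2 shows, cascaded fragmentation across several hops can yield a few more fragments than `num_fragments` predicts at the minimum MTU.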
Congratulations! You've completed the comprehensive study of IP fragmentation calculations. Let's summarize the key skills you've developed.
Where These Skills Apply:

- Exam questions on IP fragmentation, offsets, and MTU calculations
- Packet capture analysis: reconstructing originals and detecting missing fragments
- VPN, GRE tunnel, and TCP MSS configuration
- Network design and performance optimization (jumbo frames, payload sizing)
You've mastered IP fragmentation calculations—a fundamental skill for any network professional. You can now analyze, predict, and optimize fragmentation behavior in any IPv4 network scenario. Apply these skills in labs, packet captures, and real networks to solidify your understanding.