Fixed partitioning offers simplicity and predictability, but these benefits come with an inherent cost that we've alluded to but not yet rigorously examined: internal fragmentation.
Internal fragmentation is memory waste that occurs inside allocated regions—space that is reserved for a process but never actually used. Unlike external fragmentation (which we'll encounter with variable partitioning), internal fragmentation cannot be eliminated through compaction or reorganization. It is an intrinsic consequence of using fixed-size allocations for variable-size demands.
Understanding internal fragmentation is crucial for three reasons:
Quantitative Impact: Internal fragmentation can waste 30-50% or more of system memory under typical workloads. This is not a minor inefficiency: it can leave half of your memory allocated but doing no useful work.
Design Decisions: The choice of partition size directly controls internal fragmentation. Understanding this relationship enables better system configuration.
Fundamental Trade-off: Internal fragmentation illustrates a core computer science trade-off between allocation granularity and overhead. This same trade-off appears throughout systems design.
Let's develop both intuitive understanding and formal analytical tools for reasoning about internal fragmentation.
By the end of this page, you will:
• Define internal fragmentation precisely and distinguish it from external fragmentation
• Calculate internal fragmentation for specific allocation scenarios
• Derive expected fragmentation under statistical workload models
• Understand the classic 'half-partition' rule of thumb
• Analyze how partition sizing decisions affect fragmentation
• Recognize internal fragmentation patterns in modern systems
Internal fragmentation is the difference between the memory allocated to a process and the memory actually used by that process, when this difference occurs because allocation granularity exceeds request precision.
Let:
• A = the memory allocated to the process (the partition size)
• R = the memory the process actually requires
• F = the internal fragmentation

Then:

F = A - R (when A ≥ R)
The fragmentation rate for a single allocation is:
Fragmentation Rate = F / A = (A - R) / A = 1 - (R / A)
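These two formulas are trivial to express in code. The sketch below is a minimal illustration; the helper names `frag_bytes` and `frag_rate` are ours, not part of any system described here:

```c
#include <stdint.h>

/* Internal fragmentation for one allocation: F = A - R (valid when A >= R).
 * Sizes are in bytes; names mirror the formulas in the text. */
uint32_t frag_bytes(uint32_t allocated, uint32_t requested) {
    return allocated - requested;                 /* F = A - R */
}

/* Fragmentation rate for one allocation: 1 - (R / A). */
double frag_rate(uint32_t allocated, uint32_t requested) {
    return 1.0 - (double)requested / allocated;
}
```

For the running example of a 3 MB process in an 8 MB partition, these return 5 MB and 0.625 (62.5%).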
The term "internal" distinguishes this waste from "external" fragmentation:
Internal Fragmentation: Waste inside allocated blocks. The memory is assigned to a process but unused. It cannot be given to other processes.
External Fragmentation: Waste outside allocated blocks. Free memory exists but is scattered in chunks too small to satisfy requests. The memory is unassigned but unusable.
| Characteristic | Internal Fragmentation | External Fragmentation |
|---|---|---|
| Location of waste | Inside allocated blocks | Between allocated blocks |
| Memory status | Allocated but unused | Free but unusable |
| Caused by | Fixed allocation sizes | Variable allocation sizes |
| Can be recovered by | Process termination only | Compaction (moving blocks) |
| Occurs in | Fixed partitioning, paging | Variable partitioning, malloc |
| Predictability | Highly predictable | Workload-dependent |
Consider an 8 MB partition allocated to a 3 MB process:
+----------------------------------+
| 8 MB PARTITION |
+----------------------------------+
| 3 MB USED | 5 MB WASTED |
| (Process A) | (Internal Frag) |
+---------------+------------------+
37.5% 62.5%
The 5 MB of internal fragmentation is assigned to Process A but unused: it cannot be given to any other process and is recovered only when Process A terminates.
Internal fragmentation is often invisible in system monitoring. A partition appears "in use" even though much of it is empty. The operating system tracks partition allocation status, not actual memory usage within partitions. This invisibility makes internal fragmentation a silent killer of system efficiency—you may not realize 50% of your memory is wasted until you analyze workload patterns.
Internal fragmentation arises from a fundamental mismatch between allocation granularity and request precision. Let's analyze the specific causes in fixed partitioning systems.
The most direct cause is the use of predetermined, unchangeable partition sizes. When the partition size doesn't match process requirements, waste is inevitable.
Example: With 8 MB partitions, a 3 MB process wastes 5 MB (62.5%), a 7.5 MB process wastes only 0.5 MB, and an 8.1 MB process cannot run at all despite nearly fitting.
Processes have diverse memory requirements that don't conform to partition boundaries. Even with perfect workload knowledge, the continuous spectrum of process sizes can't match a discrete set of partition sizes.
Processes often request more memory than they actually use: stack space reserved but never touched, heap headroom for worst-case growth, and buffers sized pessimistically.
This "requested vs. actual" gap adds to partition-level waste.
```c
// Measuring internal fragmentation at runtime
#include <stdint.h>

typedef struct {
    uint32_t partition_size;         // Allocated partition size
    uint32_t process_declared_size;  // Size requested by process
    uint32_t process_actual_usage;   // Actual memory touched (via page faults)
    uint32_t partition_waste;        // partition_size - declared_size
    uint32_t process_waste;          // declared_size - actual_usage
    uint32_t total_waste;            // partition_size - actual_usage
} FragmentationRecord;

// Calculate fragmentation for a partition
FragmentationRecord calculate_fragmentation(int partition_index) {
    PartitionDescriptor* p = &system_config.partitions[partition_index];
    ProcessInfo* proc = get_process_info(p->process_id);

    FragmentationRecord rec;
    rec.partition_size = p->size;
    rec.process_declared_size = proc->requested_memory;
    rec.process_actual_usage = measure_actual_usage(p->process_id);

    // Partition waste: unused portion of partition
    rec.partition_waste = p->size - proc->requested_memory;

    // Process waste: requested but unused (e.g., empty stack/heap space)
    rec.process_waste = proc->requested_memory - rec.process_actual_usage;

    // Total waste: all unused memory in partition
    rec.total_waste = p->size - rec.process_actual_usage;

    return rec;
}

// System-wide fragmentation summary
void report_fragmentation_summary(void) {
    uint64_t total_allocated = 0;
    uint64_t total_used = 0;
    uint64_t total_wasted = 0;
    int occupied_count = 0;

    for (int i = 0; i < system_config.num_partitions; i++) {
        PartitionDescriptor* p = &system_config.partitions[i];
        if (p->state == PARTITION_OCCUPIED) {
            FragmentationRecord rec = calculate_fragmentation(i);
            total_allocated += rec.partition_size;
            total_used += rec.process_actual_usage;
            total_wasted += rec.total_waste;
            occupied_count++;

            log_debug("Partition %d: %u alloc, %u used, %u waste (%.1f%%)",
                      i, rec.partition_size, rec.process_actual_usage,
                      rec.total_waste,
                      100.0 * rec.total_waste / rec.partition_size);
        }
    }

    if (occupied_count > 0) {
        double waste_rate = 100.0 * total_wasted / total_allocated;
        log_info("System fragmentation: %llu allocated, %llu used, "
                 "%llu wasted (%.1f%%)",
                 total_allocated, total_used, total_wasted, waste_rate);
    }
}
```

Internal fragmentation has two layers:
Partition-level waste: Partition size minus process requested size. This is the classic definition.
Process-level waste: Process requested size minus actual usage. This occurs when processes over-estimate their needs.
Total waste is the sum. In practice, process-level waste can be significant, especially for programs with large but rarely-used buffers or pessimistic heap sizing.
To design efficient systems, we need to predict fragmentation for expected workloads. Let's develop the mathematical framework.
Assume process sizes are uniformly distributed between 0 and the partition size S. This is a simplifying assumption, but it provides useful bounds.
For a randomly selected process size R uniformly distributed on (0, S]:
Expected Request: E[R] = S/2
Expected Fragmentation: E[F] = E[S - R] = S - E[R] = S - S/2 = S/2
Expected Fragmentation Rate: E[F]/S = 1/2 = 50%
This is the famous "half-partition rule": under uniform distribution, we expect to waste half the partition size on average.
Rule of Thumb: When process sizes are uniformly distributed up to the partition size, expect approximately 50% internal fragmentation.
Derivation: If R ~ Uniform(0, S), then E[S-R] = S - E[R] = S - S/2 = S/2.
Application: For quick estimation, assume you'll lose half your memory to internal fragmentation in equal-size partitioning with diverse workloads.
The uniform distribution is often optimistic. Real workloads frequently show:
Bimodal Distribution: Many small processes and some large ones, with few medium-sized. This can increase or decrease fragmentation depending on partition sizing.
Power-Law Distribution: A few process sizes dominate the workload. If partitions don't match these common sizes, fragmentation is high.
Bounded Distribution: Processes cluster in a narrower range than (0, S]. This reduces fragmentation if partitions are sized to match.
For a general distribution with probability density function f(r) over process sizes:
Expected Fragmentation = S - E[R] = S - ∫₀ˢ r·f(r)dr
If process sizes concentrate near the partition size (high E[R]/S), fragmentation is low. If they concentrate near zero, fragmentation is high.
| Distribution Type | Process Size Range | Expected Size | Expected Fragmentation | Waste % |
|---|---|---|---|---|
| Uniform(0, S) | [0, S] | S/2 | S/2 | 50% |
| Uniform(S/2, S) | [S/2, S] | 3S/4 | S/4 | 25% |
| Uniform(0, S/2) | [0, S/2] | S/4 | 3S/4 | 75% |
| Constant(S) | S exactly | S | 0 | 0% |
| Constant(S/4) | S/4 exactly | S/4 | 3S/4 | 75% |
| Triangular(0, S, S) | [0, S], mode S | 2S/3 | S/3 | 33% |
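The table's rows can be sanity-checked by evaluating E[F] = S - ∫ r·f(r)dr numerically. The sketch below (midpoint rule; function names are ours) does this for Uniform(S/2, S), which should come out to S/4:

```c
#include <stddef.h>

/* Numerically estimate E[F] = S - integral of r*f(r) dr over [0, S]
 * using the midpoint rule. f is any density supported on [0, S]. */
double expected_fragmentation(double S, double (*f)(double r, double S),
                              size_t steps) {
    double dr = S / steps;
    double mean = 0.0;                  /* accumulates E[R] */
    for (size_t i = 0; i < steps; i++) {
        double r = (i + 0.5) * dr;      /* midpoint of the i-th slice */
        mean += r * f(r, S) * dr;
    }
    return S - mean;                    /* E[F] = S - E[R] */
}

/* Example density: Uniform(S/2, S), i.e. f(r) = 2/S on [S/2, S]. */
double uniform_upper_half(double r, double S) {
    return (r >= S / 2) ? 2.0 / S : 0.0;
}
```

With S = 8 this converges to 2.0, matching the S/4 (25%) row of the table; other rows can be checked by swapping in the corresponding density.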
For a system with n partitions, if partitions have sizes S₁, S₂, ..., Sₙ and corresponding expected process sizes E[R₁], E[R₂], ..., E[Rₙ]:
Total Expected Fragmentation = Σᵢ (Sᵢ - E[Rᵢ])
System Fragmentation Rate = (Total Expected Fragmentation) / (Total Memory)
With equal-size partitions (all Sᵢ = S) and independent uniformly distributed processes:
System Fragmentation Rate = n×(S/2) / (n×S) = 1/2 = 50%
The half-partition rule scales to full systems under uniform assumptions.
Let's apply our framework to realistic scenarios and calculate actual fragmentation numbers.
System Configuration:
• 8 equal partitions of 8 MB each (64 MB of user memory)
• Process sizes uniformly distributed between 1 MB and 8 MB

Calculation:
• Expected process size: E[R] = (1 + 8) / 2 = 4.5 MB
• Expected waste per partition: 8 - 4.5 = 3.5 MB
• System fragmentation rate: 3.5 / 8 ≈ 44%
```c
// Monte Carlo simulation of fragmentation
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define SIMULATIONS 100000
#define NUM_PARTITIONS 8
#define PARTITION_SIZE (8 * 1024 * 1024)  // 8 MB

// Simulate process size (uniformly distributed 1-8 MB)
uint32_t random_process_size(void) {
    // Random size between 1 MB and PARTITION_SIZE
    uint32_t min = 1024 * 1024;  // 1 MB
    uint32_t max = PARTITION_SIZE;
    return min + (rand() % (max - min + 1));
}

void simulate_fragmentation(void) {
    double total_waste = 0;
    double total_allocated = 0;

    for (int sim = 0; sim < SIMULATIONS; sim++) {
        // Simulate one system state: all partitions occupied
        for (int p = 0; p < NUM_PARTITIONS; p++) {
            uint32_t process_size = random_process_size();
            uint32_t waste = PARTITION_SIZE - process_size;
            total_waste += waste;
            total_allocated += PARTITION_SIZE;
        }
    }

    double avg_waste_rate = total_waste / total_allocated;
    double avg_waste_per_partition =
        total_waste / ((double)SIMULATIONS * NUM_PARTITIONS);

    printf("Simulation Results (%d trials):\n", SIMULATIONS);
    printf("  Average waste per partition: %.2f MB\n",
           avg_waste_per_partition / (1024 * 1024));
    printf("  System fragmentation rate: %.2f%%\n", avg_waste_rate * 100);
    printf("  Effective memory utilization: %.2f%%\n",
           (1 - avg_waste_rate) * 100);
}
```

System Configuration:
Calculation:
The bimodal distribution increases fragmentation because small processes dominate and each wastes most of its partition.
Bimodal workloads (many small + some large) are common in practice and particularly punishing for equal-size partitioning. The small processes waste most of their partitions, while the large processes just barely fit. Neither size class is well-served.
Unequal-size partitioning addresses this by dedicating small partitions to small processes, reducing per-allocation waste.
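To make the effect concrete, here is a minimal helper (ours, with illustrative sizes in MB) comparing total waste for a hypothetical bimodal workload of six 1 MB and two 7 MB processes, first under eight equal 8 MB partitions and then under a 6×2 MB + 2×8 MB mix:

```c
#include <stdint.h>

/* Total waste for processes assigned one-to-one to partitions.
 * Sums (partition - process) over each pairing; the caller is
 * responsible for ensuring every process fits its partition. */
uint64_t total_waste(const uint32_t *parts, const uint32_t *procs, int n) {
    uint64_t waste = 0;
    for (int i = 0; i < n; i++)
        waste += parts[i] - procs[i];
    return waste;
}
```

Under these assumed numbers, the equal-size layout wastes 44 MB of 64 MB allocated (about 69%), while the mixed layout wastes 8 MB of 28 MB allocated (about 29%), because the small processes land in small partitions.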
System Configuration (optimized for bimodal workload):
If 6 small and 2 large processes are running:
But wait — this is worse! The large partitions are oversized. Let's reoptimize:
Better Configuration:
This example illustrates why partition sizing requires careful workload analysis—naive choices can worsen fragmentation.
Internal fragmentation has cascading effects beyond simple memory waste. Let's analyze the full impact.
With N MB total memory and 50% fragmentation, only N/2 MB performs useful work: effective memory is halved, and so is the achievable degree of multiprogramming.
Example: A 64 MB system with 50% fragmentation effectively operates as a 32 MB system for multiprogramming calculations.
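The arithmetic behind this example can be sketched in a couple of lines (function names are ours, for illustration only):

```c
/* Effective memory after internal fragmentation. */
double effective_memory_mb(double total_mb, double frag_rate) {
    return total_mb * (1.0 - frag_rate);
}

/* Multiprogramming degree supported by the effective memory,
 * for a given average process size. */
int max_degree(double total_mb, double frag_rate, double avg_proc_mb) {
    return (int)(effective_memory_mb(total_mb, frag_rate) / avg_proc_mb);
}
```

For the 64 MB system above at 50% fragmentation, effective memory is 32 MB, and with (say) 4 MB average processes the degree of multiprogramming drops from 16 to 8.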
For a CPU-bound system with memory as the only constraint:
Theoretical Maximum Throughput ∝ (Memory / Average Process Size)
Actual Throughput ∝ (Memory / Average Partition Size)
Throughput Ratio = (Average Process Size) / (Average Partition Size) = 1 - Fragmentation Rate
With 50% fragmentation, throughput is halved compared to perfect allocation.
For I/O-bound systems, the impact is even worse because multiprogramming degree directly affects CPU utilization during I/O waits.
| Fragmentation Rate | Effective Memory | Relative Throughput | Impact Assessment |
|---|---|---|---|
| 10% | 90% | ~90% | Acceptable |
| 25% | 75% | ~75% | Noticeable |
| 50% | 50% | ~50% | Significant |
| 75% | 25% | ~25% | Severe |
| 90% | 10% | ~10% | System failure imminent |
Track internal fragmentation as a key system metric: log allocated versus actually-used memory for each partition, aggregate the system-wide waste rate, and flag partitions whose waste exceeds a threshold.
This visibility transforms fragmentation from invisible waste to actionable insight.
While internal fragmentation is inherent to fixed partitioning, several strategies can reduce its severity.
Analyze historical workloads and size partitions to match common process sizes:
Process:
1. Log the memory size of every process over a representative period
2. Bucket the observed sizes (e.g., by powers of two)
3. Choose partition sizes that match the populated buckets
4. Estimate the resulting fragmentation before committing to the configuration
```c
// Analyze workload to recommend partition configuration
#include <stdint.h>
#include <stdio.h>

#define MB (1024 * 1024)   // assumed by the bucket sizes below

typedef struct {
    uint32_t size;
    int count;
} SizeBucket;

#define NUM_BUCKETS 10

void analyze_workload(ProcessLog* log, int log_size) {
    // Create size buckets (e.g., 1MB, 2MB, 4MB, 8MB, ...)
    SizeBucket buckets[NUM_BUCKETS] = {
        {1 * MB, 0},   {2 * MB, 0},   {4 * MB, 0},   {8 * MB, 0},
        {16 * MB, 0},  {32 * MB, 0},  {64 * MB, 0},  {128 * MB, 0},
        {256 * MB, 0}, {512 * MB, 0}
    };

    // Count processes in each bucket
    for (int i = 0; i < log_size; i++) {
        uint32_t size = log[i].memory_size;
        // Find smallest bucket that fits
        for (int b = 0; b < NUM_BUCKETS; b++) {
            if (size <= buckets[b].size) {
                buckets[b].count++;
                break;
            }
        }
    }

    // Report recommendations
    printf("Workload Analysis (N=%d processes):\n", log_size);
    for (int b = 0; b < NUM_BUCKETS; b++) {
        if (buckets[b].count > 0) {
            double pct = 100.0 * buckets[b].count / log_size;
            printf("  <= %3u MB: %5d processes (%.1f%%)\n",
                   buckets[b].size / MB, buckets[b].count, pct);
        }
    }

    // Estimate fragmentation if we use these bucket sizes as partitions
    // (estimate_fragmentation is assumed to be defined elsewhere)
    double est_frag = estimate_fragmentation(log, log_size, buckets);
    printf("  Estimated fragmentation with matched partitions: %.1f%%\n",
           est_frag);
}
```

Smaller partitions reduce per-allocation waste but increase partition overhead and reduce maximum process size:
Trade-off Analysis:
Practical Limit: Very small partitions increase partition-management overhead (more entries to track, more allocation decisions) and cap the largest process the system can run.
As covered in the previous page, using partitions of different sizes matched to workload distribution reduces fragmentation for heterogeneous workloads.
No strategy within fixed partitioning can eliminate internal fragmentation when process sizes vary. The only complete solutions are to abandon fixed sizes entirely (variable partitioning) or to allocate in much smaller fixed units so the waste per allocation is small (paging).
Fixed partitioning optimizes for simplicity, accepting fragmentation as an inherent cost.
While fixed partitioning is historical, internal fragmentation remains a fundamental concept in modern systems. Understanding it in the partition context prepares you to recognize it everywhere.
Internal fragmentation appears whenever we choose fixed-size allocation units for efficiency:
Why Fixed Sizes? Fixed-size units make allocation and deallocation fast and simple, and they eliminate external fragmentation entirely.
The Cost: Every request is rounded up to a whole unit, wasting on average about half a unit per allocation.
Modern systems often accept small internal fragmentation (e.g., 2 KB per 4 KB page) to gain the benefits of fixed-size allocation. The historical lesson of fixed partitioning informs these design decisions.
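The page-rounding case is easy to quantify. A minimal sketch (function name ours) computes the waste when a request is rounded up to a fixed unit such as a 4 KB page:

```c
#include <stdint.h>

/* Internal fragmentation when an allocation is rounded up to whole
 * fixed-size units (e.g. 4 KB pages): waste occurs only in the last unit. */
uint32_t rounding_waste(uint32_t request, uint32_t unit) {
    uint32_t rem = request % unit;
    return rem == 0 ? 0 : unit - rem;
}
```

For example, a 10 KB request on 4 KB pages occupies three pages and wastes 2 KB, while an exact multiple of the page size wastes nothing.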
When designing any allocation system, consider:
• What is the distribution of request sizes?
• What allocation granularity minimizes expected waste for that distribution?
• Is the worst-case waste per allocation bounded and acceptable?
• Can unit sizes be matched to the most common request sizes?
These questions, derived from partition fragmentation analysis, apply universally.
We have comprehensively analyzed internal fragmentation—the memory waste that occurs when fixed allocation sizes don't match variable request sizes.
We've examined partition sizing and fragmentation. But how does the operating system decide which partition to assign to a process? The next page covers partition assignment strategies—the algorithms and policies that match processes to partitions in both equal and unequal-size schemes.
You now possess both intuitive and mathematical understanding of internal fragmentation. This analytical framework will serve you in evaluating any fixed-size allocation scheme, from historical partitioning to modern paging and memory allocators.