Before virtual memory, before paging, before segmentation—there was a deceptively simple idea: give one program all of memory.
In the earliest computer systems, memory management was not a concern because there was only one program running at any given time. The operating system occupied a fixed portion of memory, and the remaining space was entirely available to a single user program. This arrangement, known as single partition allocation or monoprogramming, represents the most fundamental form of memory management.
Understanding single partition allocation is essential not merely for historical context, but because it reveals the core problems that all subsequent memory management techniques were designed to solve. Every modern memory subsystem—from Linux's sophisticated virtual memory manager to embedded real-time operating systems—evolved in response to the limitations we will examine in this page.
By the end of this page, you will understand the complete architecture of single partition memory systems, including the memory map layout, protection mechanisms, the role of the operating system, and the fundamental inefficiencies that drove the evolution toward multiprogramming. You'll be able to analyze why this simple model fails as system requirements grow.
To appreciate single partition allocation, we must understand the computing environment that created it.
The Early Computing Era (1950s-1960s)
The first electronic computers were astronomically expensive—often costing millions of dollars in today's currency. Memory was measured in kilobytes, not gigabytes. A single computer might serve an entire organization, accessed through batch processing where jobs were submitted on punch cards and results collected hours or days later.
In this environment, the concept of multiple programs sharing memory simultaneously seemed unnecessary and impossibly complex. The operating system itself was minimal—often just a monitor program that loaded user programs, executed them, and provided basic I/O services.
The Monoprogramming Model
Under monoprogramming, execution followed a strict sequence:

- The operating system loads a single program from the job queue into the user area
- Control transfers to the program, which runs to completion with exclusive use of the CPU and memory
- When the program finishes (or crashes), control returns to the OS, which loads the next job
This model had one major advantage: simplicity. There was no need for memory protection between programs (only one existed), no need for CPU scheduling algorithms, and no need for complex address translation. The program owned the machine completely during its execution window.
| Aspect | Early Systems (1950s-60s) | Impact on Memory Management |
|---|---|---|
| Memory Size | 4KB - 64KB typical | Entire memory visible to single program |
| CPU Cost | Millions of dollars | CPU idle time was extremely expensive |
| I/O Speed | Very slow (tape, cards) | CPU waited extensively for I/O |
| User Interaction | Batch processing | No interactive sessions needed |
| Program Size | Often < total memory | Single program could fit entirely |
| Operating System | Simple monitor | Minimal memory footprint |
When a computer costs more than most houses, every second of idle CPU time represents wasted money. This economic pressure would eventually drive the development of multiprogramming—but in the earliest systems, the hardware and software complexity required for multiprogramming was itself prohibitively expensive.
In a single partition system, physical memory is divided into exactly two regions: a fixed area holding the resident operating system, and the remaining contiguous space, which belongs entirely to a single user program.
The placement of the operating system itself became an important design decision that varied across different systems.
Layout 1: Operating system in low memory

╔═══════════════════════════════╗ ← Address 0xFFFFF (High)
║ ║
║ User Program Area ║
║ (Single User Process) ║
║ ║
╠═══════════════════════════════╣ ← OS Boundary
║ Operating System ║
║ (Kernel, Drivers, Handlers) ║
╚═══════════════════════════════╝ ← Address 0x00000 (Low)
Layout 2: Operating system in high memory

╔═══════════════════════════════╗ ← Address 0xFFFFF (High)
║ Operating System ║
║ (Kernel, Drivers, Handlers) ║
╠═══════════════════════════════╣ ← OS Boundary
║ ║
║ User Program Area ║
║ (Single User Process) ║
║ ║
╚═══════════════════════════════╝ ← Address 0x00000 (Low)
Layout 3: Operating system in ROM, user program in RAM

╔═══════════════════════════════╗ ← ROM Region
║ Operating System (ROM) ║
╠═══════════════════════════════╣
║ ║
║ User Program Area ║ ← RAM Region
║ (All of RAM) ║
║ ║
╚═══════════════════════════════╝
Physical Address Space Characteristics
The user program operated directly on physical addresses. When a programmer wrote code to access memory location 1000, the CPU accessed physical memory location 1000—there was no translation layer. This had profound implications:

- Programs had to be compiled for, or relocated at load time to, the exact addresses where they would reside
- A program could never be larger than the physical memory available in the user partition
- Every address the program computed, valid or wild, went straight to hardware with no intermediate check
In the simplest single-partition systems, there was no hardware-enforced boundary between the OS and user program. A wild pointer or buffer overflow in the user program could corrupt the operating system, requiring a full system restart. This vulnerability motivated the development of hardware protection mechanisms.
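To make the hazard concrete, here is a minimal C sketch that models physical memory as a flat array, with an assumed 16KB OS region at low addresses. The sizes and addresses are illustrative only, not taken from any real machine:

#include <stdio.h>

#define MEMORY_SIZE 65536   /* 64KB of simulated physical memory */
#define OS_SIZE     16384   /* OS resident in the low 16KB       */

static unsigned char memory[MEMORY_SIZE];

int main(void) {
    /* Pretend the OS keeps a critical value at physical address 100 */
    memory[100] = 0x42;

    /* A buggy user program computes a wild pointer target. With no
     * hardware check, the store silently lands in OS territory. */
    unsigned int wild_address = 100;   /* below OS_SIZE! */
    memory[wild_address] = 0xFF;       /* OS data corrupted */

    printf("OS value at address 100 is now 0x%02X\n", memory[100]);
    /* Prints 0xFF: kernel state was overwritten and nothing stopped
     * it -- exactly the failure mode described above. */
    return 0;
}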
The process of loading a program in a single partition system was straightforward but inflexible. Understanding this process reveals why modern systems needed more sophisticated approaches.
The Loading Sequence
When a user submitted a job for execution, the following sequence occurred:
PROCEDURE LoadAndExecuteProgram(job: JobDescriptor):
    // Phase 1: Prepare Memory
    user_base := OS_BOUNDARY                      // First address after OS
    user_limit := PHYSICAL_MEMORY_SIZE - OS_SIZE

    IF job.size > user_limit THEN
        ReportError("Program too large for available memory")
        RETURN FAILURE
    END IF

    // Phase 2: Load Program Image
    load_address := user_base
    FOR EACH segment IN job.segments:
        CopyFromStorage(segment.source, load_address, segment.size)
        load_address := load_address + segment.size
    END FOR

    // Phase 3: Perform Address Fixups (if absolute loading)
    // Programs compiled for address 0 need adjustment
    IF job.requires_relocation THEN
        FOR EACH relocation_entry IN job.relocation_table:
            memory_location := user_base + relocation_entry.offset
            current_value := ReadMemory(memory_location)
            adjusted_value := current_value + user_base
            WriteMemory(memory_location, adjusted_value)
        END FOR
    END IF

    // Phase 4: Transfer Control
    entry_point := user_base + job.entry_offset
    SaveOSState()
    TransferControl(entry_point)    // Jump to user program

    // Phase 5: Program completes and returns here
    RestoreOSState()
    PrepareForNextJob()
    RETURN SUCCESS
END PROCEDURE

Address Binding in Single Partition Systems
Address binding—the process of mapping symbolic addresses in source code to physical memory addresses—occurred at different times depending on system sophistication:
| Binding Time | Description | Flexibility | Example |
|---|---|---|---|
| Compile Time | Addresses fixed when program compiled | None | Early mainframe programs |
| Load Time | Addresses computed when program loaded | Moderate | Programs with relocation info |
| Execution Time | Addresses computed during execution | High | Not available in pure single-partition |
Most single-partition systems used load-time binding. The compiler generated code with addresses relative to zero, and the loader adjusted all addresses by adding the base address where the program was actually loaded. This required the program to include a relocation dictionary—a list of every address in the program that needed adjustment.
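As an illustration, here is a small C sketch of that load-time fixup loop. It assumes a toy image format in which the relocation dictionary simply lists the word offsets that hold absolute addresses; the structures are hypothetical, not a real loader's:

#include <stdint.h>
#include <stdio.h>

/* Program image as compiled for base address 0. Words 1 and 3
 * contain absolute addresses that must be patched at load time. */
static uint32_t image[] = { 0xA1, 8, 0xB2, 12 };

/* Relocation dictionary: indices of words holding addresses */
static const int reloc_table[] = { 1, 3 };

static void relocate(uint32_t *img, const int *relocs, int n,
                     uint32_t base) {
    for (int i = 0; i < n; i++) {
        img[relocs[i]] += base;   /* patch: compiled address + load base */
    }
}

int main(void) {
    uint32_t user_base = 16384;   /* first address after a 16KB OS */
    relocate(image, reloc_table, 2, user_base);
    printf("word 1 = %u, word 3 = %u\n",
           (unsigned)image[1], (unsigned)image[3]);
    /* 8 -> 16392 and 12 -> 16396: every embedded address now points
     * into the region where the program actually resides. */
    return 0;
}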
Some clever programmers wrote 'position-independent code' (PIC) that used only relative addresses—jumps and calls specified as offsets from the current instruction, not absolute addresses. Such code could run at any memory location without modification. This technique, while challenging, foreshadowed modern shared library mechanisms.
Single partition allocation suffers from systematic memory underutilization. Understanding these inefficiencies is crucial for appreciating why more complex schemes were developed.
The Fundamental Utilization Problem
Consider a system with 64KB of total memory, where the OS occupies 16KB. The user partition is 48KB. Now consider the following job sequence:
| Job | Memory Required | Memory Wasted | Utilization |
|---|---|---|---|
| A | 48KB | 0KB | 100% |
| B | 32KB | 16KB | 67% |
| C | 8KB | 40KB | 17% |
| D | 24KB | 24KB | 50% |
On average, we're wasting 20KB per job—nearly half the available memory. This waste is called internal fragmentation: memory allocated to a job but not used by it.
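The arithmetic is easy to check in code. This short C program recomputes the waste and utilization figures from the table, assuming the 48KB partition from the example above:

#include <stdio.h>

#define PARTITION_KB 48   /* user partition size from the example */

int main(void) {
    int jobs[] = { 48, 32, 8, 24 };   /* jobs A-D, sizes in KB */
    int n = 4, total_wasted = 0;

    for (int i = 0; i < n; i++) {
        int wasted = PARTITION_KB - jobs[i];
        total_wasted += wasted;
        printf("Job %c: uses %2dKB, wastes %2dKB (%3.0f%% utilization)\n",
               'A' + i, jobs[i], wasted, 100.0 * jobs[i] / PARTITION_KB);
    }
    printf("Average waste: %.0fKB per job (%.0f%% of the partition)\n",
           (double)total_wasted / n,
           100.0 * total_wasted / (n * PARTITION_KB));
    return 0;
}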
CPU Utilization Disaster
Beyond memory waste, single-partition systems suffer from catastrophic CPU underutilization during I/O operations.
Consider a typical program that:

- Reads a record from an input device (10ms of I/O wait)
- Computes on that record (2ms of CPU time)
- Writes the result to an output device (10ms of I/O wait)
CPU Utilization = 2ms / (10ms + 2ms + 10ms) ≈ 9%
The CPU sits idle for 91% of the program's execution time, waiting for I/O devices. In a monoprogramming environment, there is nothing else the CPU can do—no other program is in memory to execute.
This represents an enormous economic waste. If the computer costs $1000/hour to operate, 91% of that money is paying for an idle CPU.
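The same calculation in code: a minimal C sketch assuming the 10ms/2ms/10ms cycle and the $1000/hour operating cost quoted above:

#include <stdio.h>

/* CPU utilization for one read -> compute -> write cycle */
static double cpu_utilization(double io_in_ms, double compute_ms,
                              double io_out_ms) {
    return compute_ms / (io_in_ms + compute_ms + io_out_ms);
}

int main(void) {
    double u = cpu_utilization(10.0, 2.0, 10.0);
    double hourly_cost = 1000.0;   /* $/hour, from the example above */

    printf("CPU utilization: %.0f%%\n", u * 100.0);   /* ~9%   */
    printf("Cost of idle CPU: $%.0f/hour\n",
           hourly_cost * (1.0 - u));                  /* ~$909 */
    return 0;
}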
| Workload Type | I/O Intensity | Typical CPU Utilization | Economic Efficiency |
|---|---|---|---|
| Scientific Computation | Low (minimal I/O) | 70-90% | Acceptable |
| Data Processing | Medium | 30-50% | Poor |
| Transaction Processing | High | 5-15% | Terrible |
| Interactive Programs | Very High | 1-5% | Catastrophic |
The combination of memory waste and CPU idleness created an irresistible economic pressure. If we could keep multiple programs in memory and switch the CPU between them during I/O waits, CPU utilization could approach 100%. This insight drove the development of multiprogramming and, ultimately, modern operating systems.
Even in a single-partition system with only one user program, protection is essential. The operating system must be protected from the user program—otherwise, a bug (or malicious code) could corrupt the kernel and crash the entire machine.
The Protection Problem
With both the OS and user program sharing the same physical address space, nothing inherently prevents the user program from:

- Reading or overwriting the kernel's code and data structures
- Jumping directly into the middle of OS routines
- Overwriting the interrupt vectors that control how the system responds to events
- Issuing privileged instructions, such as halting the CPU or reprogramming I/O devices
Without hardware protection, the operating system must trust every user program to behave correctly—an untenable situation in any multi-user environment.
The classic hardware answer is a pair of registers: a base register holding the first address of the user region, and a limit register holding the region's size. On every user-mode memory access, the hardware checks that base ≤ address < base + limit. This provides complete containment—the program cannot access anything outside its allocated region.
// Hardware protection check on every memory access
PROCEDURE CheckMemoryAccess(address: Address, access_type: AccessType):
    // Get current protection context
    mode := CPU.GetCurrentMode()    // USER or SUPERVISOR
    base := CPU.BaseRegister
    limit := CPU.LimitRegister

    // Supervisor mode (OS) has unrestricted access
    IF mode = SUPERVISOR THEN
        RETURN ALLOW
    END IF

    // User mode: enforce containment
    IF address < base THEN
        // Attempt to access below user region (OS territory)
        TriggerProtectionFault("Address below base register")
        RETURN DENY
    END IF

    IF address >= base + limit THEN
        // Attempt to access above user region
        TriggerProtectionFault("Address exceeds limit")
        RETURN DENY
    END IF

    // Address is within valid range
    RETURN ALLOW
END PROCEDURE

PROCEDURE TriggerProtectionFault(reason: String):
    // Switch to supervisor mode
    CPU.SetMode(SUPERVISOR)

    // Save user program state
    SaveContext(CurrentProcess)

    // Vector to OS fault handler
    PC := PROTECTION_FAULT_HANDLER

    // OS will typically terminate the offending program
END PROCEDURE

Protection mechanisms require the CPU to support at least two modes: supervisor mode (kernel mode), where protection checks are bypassed, and user mode, where all accesses are validated. The transition between modes is carefully controlled—typically only through traps and interrupts. This dual-mode architecture remains fundamental to all modern CPUs.
Single partition allocation, while largely obsolete for general-purpose computing, continues to exist in specific domains. Understanding these modern applications reinforces the concepts and reveals when simplicity trumps sophistication.
Case Study 1: Early MS-DOS (1981-1995)
MS-DOS is perhaps the most widespread example of single-partition memory management in personal computing. The original IBM PC had 640KB of conventional memory, organized as:
┌─────────────────────────────────┐ 1MB (0x100000)
│ Upper Memory Area               │ 384KB
│ (ROM, Video, Reserved)          │
├─────────────────────────────────┤ 640KB (0xA0000)
│                                 │
│ User Program Area               │ ~550KB usable
│ (Single MS-DOS Program)         │
│                                 │
├─────────────────────────────────┤
│ DOS Kernel & Drivers            │ ~60-100KB
├─────────────────────────────────┤ 1KB (0x400)
│ Interrupt Vectors               │ 1KB
└─────────────────────────────────┘ 0 (0x00000)
MS-DOS epitomized single-partition thinking:

- One application ran at a time, loaded into the user area just above the DOS kernel
- Programs accessed hardware and memory directly; the 8086's real mode offered no protection between application and OS
- A misbehaving program could overwrite DOS itself, and a crash usually meant rebooting the machine
- Terminate-and-stay-resident (TSR) programs were later bolted on as a workaround for the one-program limit, not true multiprogramming
Case Study 2: Embedded Hard Real-Time Systems
Surprisingly, single-partition allocation thrives in modern embedded systems where:

- Memory is measured in kilobytes, not gigabytes
- One dedicated program runs for the lifetime of the device
- Deterministic, hard real-time behavior is required
- Safety certification favors the simplest design that can possibly work
Examples include:
| Application | Memory | Why Single Partition Works |
|---|---|---|
| Automotive ECU | 256KB | Single control loop, must be deterministic |
| Medical Pacemaker | 64KB | One program, certified for safety |
| Industrial PLC | 512KB | Single control program, hard real-time |
| Smart Card Chip | 16KB | Extreme memory constraints |
In these systems, the overhead of a full memory management system would consume precious resources better used for the actual application.
/*
 * Typical embedded system memory layout (single partition)
 * Example: ARM Cortex-M microcontroller with 256KB Flash, 64KB RAM
 */

#include <stdint.h>

// Linker script defines memory regions
// MEMORY {
//     FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
//     RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
// }

/* Symbols provided by the linker script */
extern uint32_t _stack_top;
extern uint32_t _data_flash, _data_start, _data_end;
extern uint32_t _bss_start, _bss_end;

void NMI_Handler(void);
void HardFault_Handler(void);
void Reset_Handler(void);
int main(void);

/* Vector table at fixed address (start of Flash) */
__attribute__((section(".vectors")))
const uint32_t vector_table[] = {
    (uint32_t)&_stack_top,         // Initial stack pointer
    (uint32_t)&Reset_Handler,      // Reset handler
    (uint32_t)&NMI_Handler,        // NMI handler
    (uint32_t)&HardFault_Handler,  // Hard fault handler
    // ... more interrupt vectors
};

/* The ENTIRE application lives in one partition */
/* No OS in traditional sense - just startup code + application */

void Reset_Handler(void) {
    // Initialize .data section (copy from Flash to RAM)
    uint32_t *src = &_data_flash;
    uint32_t *dst = &_data_start;
    while (dst < &_data_end) {
        *dst++ = *src++;
    }

    // Zero-fill .bss section
    dst = &_bss_start;
    while (dst < &_bss_end) {
        *dst++ = 0;
    }

    // No memory management to initialize!
    // No page tables, no heap manager, no protection setup

    // Jump directly to application
    main();

    // If main() returns, halt
    while (1);
}

/* Memory map at runtime:
 *
 * FLASH (256KB):
 * ┌─────────────────────┐ 0x08040000
 * │ Unused Flash        │
 * ├─────────────────────┤
 * │ Application Code    │ Single program, directly in Flash
 * │ Constant Data       │
 * ├─────────────────────┤
 * │ Vector Table        │ Fixed at Flash start
 * └─────────────────────┘ 0x08000000
 *
 * RAM (64KB):
 * ┌─────────────────────┐ 0x20010000
 * │ Stack (grows down)  │
 * ├─────────────────────┤
 * │ Heap (optional)     │
 * ├─────────────────────┤
 * │ .bss (zero-init)    │
 * ├─────────────────────┤
 * │ .data (initialized) │
 * └─────────────────────┘ 0x20000000
 */

In safety-critical embedded systems, simpler is often better. A single-partition system is easier to verify, easier to test, and has fewer failure modes. The memory overhead of a sophisticated MMU might consume 10% of available RAM—unacceptable when RAM is measured in kilobytes.
Single partition allocation, for all its simplicity, has fundamental limitations that become increasingly painful as computing requirements grow. These limitations directly motivated the development of multiple partition schemes, paging, and virtual memory.
Fundamental Limitations
| Limitation | Solution Approach | Resulting Technology |
|---|---|---|
| One program at a time | Multiple partitions in memory | Multiple Partition Allocation |
| Poor memory utilization | Variable-sized partitions | Variable Partitioning |
| Program size limit | Load programs in pieces | Overlays, then Virtual Memory |
| No isolation | Hardware address translation | Relocation Registers, then Paging |
| Inflexible allocation | Non-contiguous allocation | Paging and Segmentation |
| No sharing | Shared memory regions | Shared Libraries, Memory-Mapped Files |
The Transition to Multiprogramming
The economic pressure was undeniable. A computer costing $500/hour that sat idle 90% of the time was wasting $450/hour. The solution was obvious: keep multiple programs in memory and switch the CPU between them when one performs I/O.
But multiprogramming introduces new challenges:

- How should memory be divided among multiple resident programs?
- How do we protect programs from each other, not just the OS from a single program?
- How can a program run correctly when its load address isn't known until runtime?
- Which program should the CPU run next, and when should it switch?
These questions led directly to: multiple partition allocation (next page), protection mechanisms, relocation hardware, and ultimately virtual memory.
Single partition allocation taught us the fundamental problems: protection, utilization, and program size limits. Every subsequent memory management technique is a response to these problems. In the next page, we explore multiple partition allocation—the first step toward multiprogramming and modern memory management.
We have explored single partition allocation in depth—from its historical origins to its modern applications. Let's consolidate the essential concepts:

- Memory is split into exactly two regions: a resident operating system and a single contiguous user partition
- Programs run directly on physical addresses, with binding done at compile time or load time
- Base and limit registers, checked on every user-mode access, protect the OS from the user program
- Internal fragmentation wastes memory, and I/O waits leave the CPU idle for most of a typical job's lifetime
- These inefficiencies created the economic pressure that led to multiprogramming
What's Next:
With single partition allocation's limitations clear, we're ready to explore multiple partition allocation—the scheme that enables multiprogramming by dividing memory among several programs simultaneously. This introduces new challenges in partition sizing, allocation strategies, and protection—all topics we'll examine in the following pages.
You now understand single partition memory allocation: its architecture, protection requirements, loading process, utilization characteristics, and fundamental limitations. This foundation is essential for understanding why more sophisticated memory management techniques were developed. Next, we examine multiple partition allocation.