By the early 1960s, multiprogramming had solved the CPU utilization problem—computers hummed along at 80% utilization or better. But a growing chorus of computer scientists pointed out a critical flaw: the human programmer was terribly underutilized.
Consider the programmer's experience: Write code on paper. Punch it onto cards. Submit the deck. Wait hours (sometimes overnight) for output. Discover a typo. Fix the card. Submit again. Wait again. A simple debugging session that might take 30 minutes on a modern computer consumed days of real time.
This wasn't just frustrating; it was economically wasteful. Programmers' salaries were rising, and every hour spent waiting for output was an hour of expensive skilled labor lost. What if, instead of maximizing computer utilization, we maximized programmer productivity?
By the end of this page, you will understand how time-sharing systems emerged, how they differed fundamentally from batch multiprogramming, the technical challenges they solved (preemptive scheduling, virtual memory, terminal I/O), and how they laid the groundwork for all interactive computing we enjoy today.
The term 'time-sharing' entered use in the late 1950s; John McCarthy at MIT laid out the vision in an influential 1959 memo. The idea was radical: share the computer's time among multiple interactive users, giving each the illusion of having the entire machine to themselves.
McCarthy and other pioneers recognized that human computer use was inherently 'bursty'. A programmer types a command, then pauses to think. Types another line, pauses again. In the seconds between keystrokes, an entire batch job could complete! Why not use those intervals to serve other users?
The Time-Sharing Philosophy:
Time-sharing inverted the economic calculation. In batch processing, computer time was precious and human time was cheap. Time-sharing assumed the opposite: human time (especially skilled programmers) was precious, and some computer capacity could be 'wasted' on interactive response to save human time. This shift anticipated modern computing economics perfectly.
What Made Time-Sharing Different:
Interactive Terminals: Instead of punched cards, users sat at typewriter-like terminals connected to the computer. They could type commands and see responses immediately.
Short Time Slices: The OS rapidly switched between users—every 100 milliseconds or so—giving each a 'turn' at the CPU. Users couldn't perceive these switches.
Response Time Focus: The OS was designed to keep response time under about 1 second for simple operations. Users needed to feel the computer was responsive.
Conversational Interaction: Programs could prompt for input, wait for the user's response, then continue—a style of interaction impossible with batch processing.
Multiprogramming used non-preemptive scheduling—jobs ran until they voluntarily gave up the CPU (typically by initiating I/O). This was fine for batch jobs that alternated between compute and I/O phases.
But time-sharing couldn't work this way. Consider a compute-bound scientific job that performs millions of calculations without any I/O. Under non-preemptive scheduling, this job would monopolize the CPU indefinitely—all interactive users would freeze.
The Solution: Preemptive Scheduling
Time-sharing systems introduced preemption: the OS forcibly takes the CPU from a running process, even if that process hasn't finished or blocked. This required new hardware support: a programmable interval timer that interrupts the CPU after a set period, returning control to the OS no matter what the running program is doing.
The Time Quantum:
The timer was set to fire after a fixed interval called the time quantum (also called time slice). Typical values were 100-500 milliseconds. When the quantum expired:
Timer Interrupt Fires
↓
Hardware saves PC, switches to kernel mode
↓
Timer ISR (Interrupt Service Routine) executes
↓
OS saves full process state to PCB
↓
OS selects next process (scheduling decision)
↓
OS loads new process's state from PCB
↓
OS sets timer for new quantum
↓
OS returns to user mode, new process resumes
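The effect of the interval timer can even be demonstrated from user space. The sketch below is a rough analogy in POSIX C (assuming a Unix-like system): setitimer() arms a 100 ms timer whose SIGALRM handler plays the role of the timer ISR, interrupting a compute-bound loop no matter how busy it is.

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_timer(int sig) {        /* stands in for the kernel's timer ISR */
    (void)sig;
    ticks++;                           /* a real OS would save state and pick the next process */
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_timer;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGALRM, &sa, NULL);

    /* Fire every 100 ms: a "quantum" in miniature. */
    struct itimerval quantum = { {0, 100000}, {0, 100000} };
    setitimer(ITIMER_REAL, &quantum, NULL);

    while (ticks < 20)                 /* a compute-bound loop with no I/O... */
        ;                              /* ...gets interrupted anyway */

    printf("timer fired %d times while we were busy computing\n", (int)ticks);
    return 0;
}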
Choosing the Quantum:
The quantum length involved tradeoffs:
| Quantum Size | Advantages | Disadvantages |
|---|---|---|
| Very Short (10ms) | Excellent responsiveness, fair distribution | High context switch overhead, throughput loss |
| Short (100ms) | Good responsiveness, reasonable throughput | Moderate overhead |
| Long (500ms) | Low overhead, good throughput | Perceptible delays, uneven response |
| Very Long (>1s) | Minimal overhead | Poor interactive response, approaches non-preemptive behavior |
Research found that users perceive delays longer than about 100-200ms as 'sluggish'. This set an upper bound on useful quantum length for interactive systems. Modern systems use quanta of 1-10ms, enabled by faster context switching on modern hardware.
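To make the overhead tradeoff concrete, suppose (purely as an assumption for illustration) that a full context switch costs about 1 ms. The fraction of CPU time lost to switching is roughly switch_cost / (quantum + switch_cost), which this small C program tabulates for the quantum sizes in the table:

#include <stdio.h>

int main(void) {
    const double switch_cost = 1.0;               /* assumed context-switch cost, in ms */
    const double quanta[] = {10, 100, 500, 1000}; /* quantum sizes from the table, in ms */

    for (int i = 0; i < 4; i++) {
        double overhead = switch_cost / (quanta[i] + switch_cost) * 100.0;
        printf("quantum %5.0f ms -> about %.1f%% of CPU time spent switching\n",
               quanta[i], overhead);
    }
    return 0;
}

Under that assumption, a 10 ms quantum loses roughly 9% of the CPU to switching, while a 500 ms quantum loses under 0.2% but produces delays users can feel.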
Round Robin Scheduling:
The classic time-sharing scheduling algorithm was Round Robin (RR): processes were arranged in a circular queue, and each received one quantum before the next took its turn. This guaranteed bounded waiting: with n ready processes and quantum q, no process waits more than (n - 1) × q before it runs again.
Process Queue (circular):
┌───────────────────────────────────────────┐
│ │
▼ │
┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐ │
│ P1 │──▶│ P2 │──▶│ P3 │──▶│ P4 │──┘
└───────┘ └───────┘ └───────┘ └───────┘
▲
│
Currently
Running
After P1's quantum expires, P2 runs, then P3, then P4, then back to P1. Each user's terminal feels responsive because their process runs every few hundred milliseconds.
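A toy simulation makes the interleaving visible. In the C sketch below the four processes and their CPU demands are invented for illustration; each gets a 100 ms quantum in turn until it finishes.

#include <stdio.h>

/* Toy round-robin: 4 processes with assumed CPU demands, 100 ms quantum. */
int main(void) {
    int remaining[4] = {250, 120, 400, 80};   /* ms of CPU each process still needs */
    const int quantum = 100;
    int done = 0, t = 0;

    while (done < 4) {
        for (int p = 0; p < 4; p++) {
            if (remaining[p] == 0) continue;          /* already finished, skip */
            int run = remaining[p] < quantum ? remaining[p] : quantum;
            printf("t=%4d ms: P%d runs %3d ms\n", t, p + 1, run);
            t += run;
            remaining[p] -= run;
            if (remaining[p] == 0) {
                printf("t=%4d ms: P%d finishes\n", t, p + 1);
                done++;
            }
        }
    }
    return 0;
}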
Time-sharing dramatically increased the number of simultaneous users—from a handful of batch jobs to dozens or even hundreds of interactive sessions. Memory became the critical bottleneck.
Consider: each interactive user needs a text editor, command interpreter, and any programs they're running. With 50 users, that's 50 copies of the editor, 50 copies of the command interpreter... Memory requirements exploded.
Virtual Memory emerged as the solution—one of the most important innovations in computing history. The core idea: give each process the illusion of having vast, private memory, while the OS transparently manages where data actually lives (RAM or disk).
The Address Translation Hardware:
Virtual memory required substantial hardware support—the Memory Management Unit (MMU):
Virtual Address (from CPU)
│
▼
┌───────────────────────────────────────┐
│ Virtual Address │
│ ┌─────────────────┬────────────────┐ │
│ │ Page Number │ Offset │ │
│ │ (20 bits) │ (12 bits) │ │
│ └────────┬────────┴───────┬────────┘ │
└───────────┼────────────────┼──────────┘
│ │
┌──────▼──────┐ │
│ Page Table │ │
│ Lookup │ │
│ │ │
│ VPage → PFrame │
└──────┬──────┘ │
│ │
▼ │
┌────────────────────────────▼──────────┐
│ Physical Address │
│ ┌─────────────────┬────────────────┐ │
│ │ Frame Number │ Offset │ │
│ │ (20 bits) │ (12 bits) │ │
│ └─────────────────┴────────────────┘ │
└───────────────────────────────────────┘
│
▼
Physical RAM
For a 4KB page size (12-bit offset), a 32-bit virtual address has 20 bits for the page number. The page table translates this to a physical frame number, which is combined with the unchanged offset to produce the physical address.
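The translation is pure bit manipulation, which the following C sketch mimics with a toy page table (the mapping of virtual page 2 to physical frame 7 is made up for illustration): shift off the 12-bit offset to get the page number, look up the frame, and splice the offset back on.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u                 /* 4 KB pages -> 12-bit offset */
#define OFFSET_BITS 12u

int main(void) {
    /* A toy page table: virtual page number -> physical frame number.   */
    /* (A real 20-bit page table has ~1M entries; we fill in only one.)  */
    uint32_t page_table[16] = {0};
    page_table[2] = 7;                    /* virtual page 2 lives in frame 7 */

    uint32_t vaddr  = 0x00002ABC;         /* virtual address to translate */
    uint32_t vpn    = vaddr >> OFFSET_BITS;        /* page number = 0x2   */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);     /* offset      = 0xABC */
    uint32_t paddr  = (page_table[vpn] << OFFSET_BITS) | offset;

    printf("virtual 0x%08X -> physical 0x%08X\n", vaddr, paddr);   /* 0x7ABC */
    return 0;
}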
Page table lookups would double memory access time (one access to read the page table, one for the actual data). The Translation Lookaside Buffer (TLB)—a hardware cache of recent page translations—makes most lookups nearly instant. TLB hit rates of 99%+ are essential for performance.
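A back-of-the-envelope calculation shows why. Assuming a 100 ns memory access and treating the TLB lookup itself as free, a 99% hit rate keeps the effective access time close to a single memory access instead of two:

#include <stdio.h>

int main(void) {
    double mem_ns   = 100.0;    /* assumed cost of one RAM access, ns */
    double hit_rate = 0.99;     /* TLB hit rate                       */

    /* Hit: one memory access. Miss: a page-table read plus the access. */
    double eat = hit_rate * mem_ns + (1.0 - hit_rate) * (2.0 * mem_ns);
    printf("effective access time: %.0f ns (vs %.0f ns with no TLB)\n",
           eat, 2.0 * mem_ns);
    return 0;
}

This prints an effective access time of 101 ns, versus 200 ns if every access required a page-table read first.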
Benefits for Time-Sharing:
Virtual memory solved multiple time-sharing challenges:
Protection: each process gets a private address space, so one user's bug (or malice) cannot touch another user's memory.
Overcommitment: only the pages a process is actively using need to sit in RAM; the rest wait on disk, so total demand can exceed physical memory.
Sharing: a single read-only copy of a common program, such as the editor or command interpreter, can be mapped into every user's address space instead of loading 50 separate copies.
Time-sharing systems introduced a fundamentally new I/O pattern: interactive terminal I/O. Unlike batch processing where entire jobs were read from cards and entire outputs written to printers, interactive systems had to handle character-by-character input from potentially dozens of simultaneous users.
The Terminal:
Early terminals were typewriter-like devices (Teletypes or 'TTYs') that printed output on paper and sent input character-by-character as the user typed. Each keystroke generated an interrupt to the computer.
Terminal (TTY)
┌─────────────────────────────────────────┐
│ ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ (paper output) │
│ $ ls -la │
│ total 48 │
│ drwxr-xr-x 5 user ... │
│ $ _ (cursor) │
├─────────────────────────────────────────┤
│ [keyboard] │
└─────────────────────────────────────────┘
                    │
                    │  Serial line (RS-232)
                    │  ~110 baud (about 10 characters/sec)
                    ▼
┌─────────────────────────────────────────┐
│ Computer │
│ ┌───────────────────────────────────┐ │
│ │ Terminal Multiplexer (hardware) │ │
│ │ • Buffers input/output │ │
│ │ • Generates interrupts │ │
│ │ • Handles multiple lines │ │
│ └───────────────────────────────────┘ │
└─────────────────────────────────────────┘
Line Discipline:
The OS implemented a line discipline (or 'line driver')—a software layer that processed terminal I/O between raw hardware and applications:
Application
│
│ read()/write() - whole lines
▼
┌─────────────────────────────────────────┐
│ Line Discipline │
│ • Buffers input until newline │
│ • Handles editing (backspace, etc.) │
│ • Echoes characters back │
│ • Interprets control characters │
│ • Handles flow control (^S/^Q) │
└─────────────────────────────────────────┘
│
│ Character at a time
▼
Terminal Driver (hardware interface)
│
▼
Physical Terminal
The line discipline accumulated characters until the user pressed Enter, then delivered the complete line to the application. This meant applications didn't need to handle every keystroke—just complete commands.
The 'tty' subsystem persists in Unix/Linux today. Commands like 'tty' (print terminal name), 'stty' (set terminal options), and device names like /dev/tty still reflect 1960s Teletype origins. Even 'terminal emulators' like xterm or Terminal.app simulate the behavior of these physical devices.
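That line-buffered behavior remains the default on Unix-like systems, and a program can switch it off through the POSIX termios interface, the modern face of the line discipline. A minimal sketch (assuming it is run on an interactive terminal):

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    struct termios t, saved;
    tcgetattr(STDIN_FILENO, &t);          /* read the current line-discipline settings */
    saved = t;

    t.c_lflag &= ~(ICANON | ECHO);        /* no line buffering, no automatic echo */
    t.c_cc[VMIN] = 1;                     /* read() returns after a single byte */
    t.c_cc[VTIME] = 0;
    tcsetattr(STDIN_FILENO, TCSANOW, &t);

    printf("Press any key (no Enter needed): ");
    fflush(stdout);
    char c;
    read(STDIN_FILENO, &c, 1);            /* delivered per keystroke, not per line */
    printf("\ngot '%c'\n", c);

    tcsetattr(STDIN_FILENO, TCSANOW, &saved);   /* restore canonical mode */
    return 0;
}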
Several systems pioneered time-sharing and shaped all subsequent interactive computing. Their innovations remain visible in modern operating systems.
CTSS - Compatible Time-Sharing System (MIT, 1961):
CTSS was the first general-purpose time-sharing system. Developed at MIT's Computation Center on a modified IBM 7094, it demonstrated that time-sharing was practical; the 'Compatible' in its name referred to running the standard batch system in the background while interactive users worked in the foreground.
Multics - Multiplexed Information and Computing Service (1965-1969):
Multics was the most ambitious computing project of the 1960s: a collaboration between MIT, Bell Labs, and GE to build the definitive time-sharing system, a 'computer utility'. It pioneered the hierarchical file system, dynamic linking, protection rings for security, and implementation in a high-level language (PL/I).
Bell Labs withdrew from Multics in 1969, but two researchers, Ken Thompson and Dennis Ritchie, missed the environment. They created a simpler system on a spare PDP-7 minicomputer. As a play on 'Multics' (multiplexed), they called it 'UNICS' (uniplexed), later simplified to 'Unix'. Many core Unix ideas trace back to Multics and other early time-sharing systems, including the hierarchical filesystem, the command shell, and the process model.
TSS/360 - Time Sharing System/360 (IBM, 1966-1971):
IBM's attempt to add time-sharing to its System/360 line was troubled but influential. It demonstrated the difficulty of retrofitting time-sharing onto batch-oriented hardware.
TOPS-10/TENEX (DEC, 1970s):
Digital Equipment Corporation's TOPS-10 and the BBN-developed TENEX (later adopted by DEC as TOPS-20) ran on the PDP-10 and became wildly popular in universities and research labs. TENEX introduced innovations later adopted by Unix and others, such as demand-paged virtual memory and command/filename completion.
| System | Year | Notable Innovation | Legacy |
|---|---|---|---|
| CTSS | 1961 | First practical time-sharing | Proved the concept viable |
| Multics | 1965 | Security rings, hierarchical FS | Direct ancestor of Unix |
| TSS/360 | 1967 | Time-sharing on mainframes | VM/370 hypervisor |
| TOPS-10 | 1970 | User-friendly commands | Influenced MS-DOS, CP/M |
| Unix | 1969 | Simplicity, portability | Linux, macOS, Android, etc. |
Time-sharing systems needed a new interface between user and machine. Batch systems used JCL—rigid, formal, processed by operators. Interactive systems needed something a human could type and see responses to in real-time.
The Shell emerged as this interface: a program that reads user commands, interprets them, and executes appropriate actions. It's called a 'shell' because it wraps around the kernel, providing the outer layer users interact with.
Shell Responsibilities:
Reading: accept a line of input typed at the terminal.
Parsing: split the line into a program name and its arguments.
Execution: locate the program, create a process to run it, and pass along the arguments.
Reporting: wait for the program to finish and show its output or error status.
Later shells added conveniences such as I/O redirection, scripting, and job control.
The Interactive Loop:
A shell operates in a simple loop, the Read-Eval-Print Loop (REPL). A minimal version in C (error handling mostly omitted) looks like this:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    char line[1024], *argv[64];
    while (1) {
        printf("$ ");                                        /* print the prompt */
        if (fgets(line, sizeof line, stdin) == NULL) break;  /* EOF ends the shell */

        int argc = 0;                                        /* split the line into tokens */
        for (char *t = strtok(line, " \t\n"); t && argc < 63; t = strtok(NULL, " \t\n"))
            argv[argc++] = t;
        argv[argc] = NULL;
        if (argc == 0) continue;
        if (strcmp(argv[0], "exit") == 0) break;             /* a built-in, handled by the shell itself */

        pid_t pid = fork();                                  /* create a child process */
        if (pid == 0) {
            execvp(argv[0], argv);                           /* child: become the requested program */
            perror("exec"); _exit(1);                        /* reached only if exec fails */
        }
        waitpid(pid, NULL, 0);                               /* parent: wait for completion */
    }
    return 0;
}
This simple structure—read command, fork child, execute program, wait for completion—remains the fundamental model of Unix shells today.
A crucial insight: the shell is just an ordinary user program, not part of the kernel. Users could write their own shells, or choose among alternatives. This flexibility led to the proliferation of shells: sh, csh, ksh, bash, zsh, fish, and many more. The separation of shell from kernel encouraged experimentation and evolution.
Pipes: The Killer Feature:
Perhaps the most powerful shell concept was the pipe—connecting the output of one program to the input of another:
cat log.txt | grep ERROR | wc -l
This command reads a log file, filters for lines containing 'ERROR', and counts them—three simple programs combined to solve a complex task. The shell creates pipes between processes automatically.
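Under the hood the shell builds such pipelines from the pipe(), fork(), dup2(), and exec() system calls. This sketch (error handling omitted) wires up just the first two stages of the command above, equivalent to 'cat log.txt | grep ERROR':

#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                              /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                     /* first child: "cat log.txt" */
        dup2(fd[1], STDOUT_FILENO);        /* its stdout now feeds the pipe */
        close(fd[0]); close(fd[1]);
        execlp("cat", "cat", "log.txt", (char *)NULL);
        _exit(1);
    }
    if (fork() == 0) {                     /* second child: "grep ERROR" */
        dup2(fd[0], STDIN_FILENO);         /* its stdin now reads from the pipe */
        close(fd[0]); close(fd[1]);
        execlp("grep", "grep", "ERROR", (char *)NULL);
        _exit(1);
    }
    close(fd[0]); close(fd[1]);            /* parent closes both ends... */
    while (wait(NULL) > 0) ;               /* ...and waits for both children */
    return 0;
}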
The philosophy this enabled—small, focused programs that do one thing well, combined through pipes—became the 'Unix philosophy' and influenced software design for decades.
Time-sharing fundamentally changed how humans relate to computers. It established patterns that remain foundational today:
Cloud Computing: Time-Sharing Reborn:
The parallels between 1960s time-sharing and modern cloud computing are striking:
| 1960s Time-Sharing | 2020s Cloud Computing |
|---|---|
| Expensive mainframe, many users | Expensive datacenter, many tenants |
| Pay for compute time used | Pay for compute time used |
| Terminal connects to remote system | Browser connects to remote services |
| Shared resources, isolated users | Shared resources, isolated VMs/containers |
| Utility computing vision | Infrastructure as a Service |
| System operators manage hardware | Cloud provider manages hardware |
The 'utility computing' vision—computing as a metered utility like electricity—was explicitly articulated by time-sharing pioneers in the 1960s. Amazon Web Services, Google Cloud, and Azure are the realization of this 60-year-old vision, using virtualization techniques descended from those same systems.
Time-sharing systems marked the transition from batch-oriented, machine-centric computing to interactive, human-centric computing. This revolution introduced concepts and technologies that remain fundamental today: preemptive multitasking, virtual memory, interactive terminals and shells, and the model of many isolated users sharing one machine.
What's Next:
While time-sharing flourished on large, expensive mainframes and minicomputers, a revolution was brewing. Advances in chip technology would soon make it possible to put a computer on a desk—and eventually in a pocket. The next page explores the personal computer revolution and how it democratized computing.
You now understand how time-sharing systems transformed computing from batch-oriented data processing to interactive conversation between human and machine. The concepts of preemptive multitasking, virtual memory, and interactive shells born in this era remain the foundation of all modern personal computing. Next, we'll explore how personal computers brought these capabilities to the masses.