After studying FCFS, the inefficiency is glaring: why travel 146 cylinders to service a request when another waits only 2 cylinders away? The intuition is obvious: always service the closest request first.
This simple insight leads to Shortest Seek Time First (SSTF), our first optimizing disk scheduling algorithm. SSTF applies greedy optimization at every scheduling decision, selecting whichever pending request requires the minimum head movement.
The results can be dramatic. Where FCFS might achieve only 30-40% of optimal efficiency, SSTF often reaches 70-90%. But this performance comes with a crucial tradeoff that makes SSTF unsuitable for many production environments.
By the end of this page, you will deeply understand SSTF—its algorithm, implementation details, performance characteristics, and critical limitations. You'll learn to calculate SSTF schedules, analyze its starvation problem mathematically, and understand when this algorithm is appropriate despite its fairness issues.
Shortest Seek Time First (SSTF) selects the pending request with the minimum seek distance from the current head position at each scheduling decision.
Algorithm Definition:
Formal Description:
Given current head position $H$ and pending requests $\{r_1, r_2, \ldots, r_n\}$ at cylinders $\{c_1, c_2, \ldots, c_n\}$:
$$\text{Next request} = \arg\min_{i \in \{1, \ldots, n\}} |H - c_i|$$
Tie-Breaking: When multiple requests are equidistant, common strategies include choosing the lower cylinder number, choosing the higher cylinder number, continuing in the current direction of head movement, or falling back to FCFS order among the tied requests.
"""SSTF (Shortest Seek Time First) Disk Scheduling Algorithm Implements greedy seek optimization with comprehensive metrics tracking.Demonstrates the performance gains and starvation potential.""" from dataclasses import dataclass, fieldfrom typing import List, Optional, Tuple, Setfrom enum import Enumimport heapq class TieBreakStrategy(Enum): """Strategy for breaking ties when multiple requests are equidistant.""" LOWER_CYLINDER = "lower" # Choose lower cylinder number HIGHER_CYLINDER = "higher" # Choose higher cylinder number MAINTAIN_DIRECTION = "direction" # Continue current direction FCFS = "fcfs" # First-come among tied @dataclassclass DiskRequest: """Represents a single disk I/O request.""" request_id: int cylinder: int arrival_time: float process_id: int arrival_order: int = 0 # For FCFS tie-breaking def __hash__(self): return hash(self.request_id) def __eq__(self, other): return self.request_id == other.request_id @dataclassclass SSTFScheduler: """ SSTF Disk Scheduling Implementation Maintains a set of pending requests and always services the closest request to the current head position. """ initial_head_position: int total_cylinders: int tie_break: TieBreakStrategy = TieBreakStrategy.LOWER_CYLINDER # Internal state pending: Set[DiskRequest] = field(default_factory=set) head_position: int = field(init=False) last_direction: int = field(default=1) # 1 = outward, -1 = inward # Metrics total_seek_distance: int = 0 requests_serviced: int = 0 service_order: List[int] = field(default_factory=list) seek_sequence: List[Tuple[int, int, int]] = field(default_factory=list) starvation_ages: List[int] = field(default_factory=list) def __post_init__(self): self.head_position = self.initial_head_position def submit_request(self, request: DiskRequest) -> None: """Add a new request to the pending set.""" self.pending.add(request) def find_nearest(self) -> Optional[DiskRequest]: """ Find the request closest to current head position. 
Applies tie-breaking strategy when multiple requests equidistant. """ if not self.pending: return None # Group requests by distance distance_map: dict[int, List[DiskRequest]] = {} for req in self.pending: dist = abs(self.head_position - req.cylinder) if dist not in distance_map: distance_map[dist] = [] distance_map[dist].append(req) min_distance = min(distance_map.keys()) candidates = distance_map[min_distance] if len(candidates) == 1: return candidates[0] # Apply tie-breaking strategy if self.tie_break == TieBreakStrategy.LOWER_CYLINDER: return min(candidates, key=lambda r: r.cylinder) elif self.tie_break == TieBreakStrategy.HIGHER_CYLINDER: return max(candidates, key=lambda r: r.cylinder) elif self.tie_break == TieBreakStrategy.MAINTAIN_DIRECTION: # Prefer request in direction of last movement for req in candidates: if (req.cylinder - self.head_position) * self.last_direction > 0: return req return candidates[0] # Fallback else: # FCFS return min(candidates, key=lambda r: r.arrival_order) def service_request(self, request: DiskRequest) -> dict: """ Service a request: move head and update state. Returns metadata about the service operation. 
""" old_position = self.head_position seek_distance = abs(request.cylinder - old_position) # Update direction if request.cylinder != old_position: self.last_direction = 1 if request.cylinder > old_position else -1 # Move head self.head_position = request.cylinder self.pending.remove(request) # Track starvation: how many requests were skipped positions_before = len([r for r in self.pending if abs(r.cylinder - old_position) < seek_distance]) # Update metrics self.total_seek_distance += seek_distance self.requests_serviced += 1 self.service_order.append(request.cylinder) self.seek_sequence.append((old_position, request.cylinder, seek_distance)) return { "request_id": request.request_id, "cylinder": request.cylinder, "seek_distance": seek_distance, "from": old_position, "to": request.cylinder, "pending_after": len(self.pending), } def run_scheduling(self) -> dict: """ Execute SSTF scheduling until all requests serviced. """ results = [] while self.pending: nearest = self.find_nearest() if nearest: result = self.service_request(nearest) results.append(result) return { "algorithm": "SSTF", "service_order": self.service_order, "total_seek_distance": self.total_seek_distance, "average_seek_distance": ( self.total_seek_distance / self.requests_serviced if self.requests_serviced > 0 else 0 ), "requests_serviced": self.requests_serviced, "seek_sequence": self.seek_sequence, "detailed_results": results, } def demonstrate_sstf(): """ Demonstrate SSTF with the same scenario as FCFS for comparison. 
Disk configuration: - 200 cylinders (0-199) - Head starts at cylinder 50 - Request queue: 95, 180, 34, 119, 11, 123, 62, 64 """ scheduler = SSTFScheduler( initial_head_position=50, total_cylinders=200, tie_break=TieBreakStrategy.LOWER_CYLINDER ) # Submit requests (same as FCFS example) requests = [95, 180, 34, 119, 11, 123, 62, 64] for i, cylinder in enumerate(requests): scheduler.submit_request(DiskRequest( request_id=i, cylinder=cylinder, arrival_time=0.0, process_id=1000 + i, arrival_order=i, )) results = scheduler.run_scheduling() # Display results print("=== SSTF Disk Scheduling Demo ===") print(f"Initial head position: 50") print(f"Request queue: {requests}") print(f"Service Order: 50 -> {' -> '.join(map(str, results['service_order']))}") print(f"Total Seek Distance: {results['total_seek_distance']} cylinders") print(f"Average Seek Distance: {results['average_seek_distance']:.2f} cylinders") print("Seek Sequence:") for i, (from_cyl, to_cyl, dist) in enumerate(results['seek_sequence']): print(f" Move {i+1}: {from_cyl} -> {to_cyl} (distance: {dist})") # Compare with FCFS fcfs_total = 644 # From previous example print(f"=== Comparison with FCFS ===") print(f"FCFS Total Seek: {fcfs_total} cylinders") print(f"SSTF Total Seek: {results['total_seek_distance']} cylinders") print(f"Improvement: {(1 - results['total_seek_distance']/fcfs_total)*100:.1f}%") return results def demonstrate_starvation(): """ Demonstrate how SSTF can cause starvation. Scenario: Continuous requests near head, one request at far end. 
""" print("=== SSTF Starvation Demonstration ===") scheduler = SSTFScheduler( initial_head_position=100, total_cylinders=200 ) # Request at far end scheduler.submit_request(DiskRequest( request_id=0, cylinder=199, arrival_time=0, process_id=1000, arrival_order=0 )) # Many requests near current position for i in range(1, 20): scheduler.submit_request(DiskRequest( request_id=i, cylinder=95 + (i % 10), # Cylinders 95-104 arrival_time=float(i), process_id=1000 + i, arrival_order=i, )) results = scheduler.run_scheduling() # Find when request 0 (cylinder 199) was serviced position_of_far_request = results['service_order'].index(199) + 1 print(f"Request at cylinder 199 (submitted first) was serviced: #{position_of_far_request}") print(f"It waited for {position_of_far_request - 1} other requests to complete!") print("This demonstrates SSTF starvation - early requests can wait indefinitely.") if __name__ == "__main__": demonstrate_sstf() demonstrate_starvation()Let's trace SSTF execution using the exact same scenario as our FCFS example, allowing direct comparison.
Scenario: 200 cylinders (0-199), head initially at cylinder 50, request queue 95, 180, 34, 119, 11, 123, 62, 64 (the same workload as the FCFS example).
| Step | Head Position | Pending Requests | Distances from Head | Selected | Seek Distance |
|---|---|---|---|---|---|
| Initial | 50 | {95,180,34,119,11,123,62,64} | — | — | — |
| 1 | 50→62 | {95,180,34,119,11,123,62,64} | {45,130,16,69,39,73,12,14} | 62 | 12 |
| 2 | 62→64 | {95,180,34,119,11,123,64} | {33,118,28,57,51,61,2} | 64 | 2 |
| 3 | 64→34 | {95,180,34,119,11,123} | {31,116,30,55,53,59} | 34 | 30 |
| 4 | 34→11 | {95,180,119,11,123} | {61,146,85,23,89} | 11 | 23 |
| 5 | 11→95 | {95,180,119,123} | {84,169,108,112} | 95 | 84 |
| 6 | 95→119 | {180,119,123} | {85,24,28} | 119 | 24 |
| 7 | 119→123 | {180,123} | {61,4} | 123 | 4 |
| 8 | 123→180 | {180} | {57} | 180 | 57 |

(Each row lists the pending set and its distances before the selection is made.)
SSTF Performance Metrics:
$$\text{Total Seek Distance} = 12 + 2 + 30 + 23 + 84 + 24 + 4 + 57 = 236 \text{ cylinders}$$
$$\text{Average Seek Distance} = \frac{236}{8} = 29.5 \text{ cylinders}$$
Comparison with FCFS:
| Metric | FCFS | SSTF | Improvement |
|---|---|---|---|
| Total Seek Distance | 644 cylinders | 236 cylinders | 63.4% reduction |
| Average Seek Distance | 80.5 cylinders | 29.5 cylinders | 63.4% reduction |
| Service Order | 95→180→34→119→11→123→62→64 | 62→64→34→11→95→119→123→180 | Locality-optimized |
| Efficiency vs Optimal | 32.3% | 88.1% | Near-optimal |

For this request set, SSTF comes close to optimal but does not reach it. The optimal schedule sweeps left first (50→34→11, then straight right to 180) for a total of 208 cylinders; SSTF's greedy first hop to the nearest request at 62 forces the head back across the low cylinders later, costing 236. This is the characteristic greedy limitation: SSTF typically lands within 10-20% of optimal, but its locally optimal choices occasionally sacrifice the globally best ordering.
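To double-check the trace, here is a minimal pure-Python sketch of the greedy selection (no assumptions beyond the queue above; ties would go to whichever request appears first in the list):

```python
def sstf_order(head, requests):
    """Return (service order, total seek) for naive SSTF from `head`."""
    pending = list(requests)
    order, total = [], 0
    while pending:
        # Greedy step: pick the pending cylinder closest to the head
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        order.append(nearest)
        pending.remove(nearest)
    return order, total

order, total = sstf_order(50, [95, 180, 34, 119, 11, 123, 62, 64])
print(order)  # [62, 64, 34, 11, 95, 119, 123, 180]
print(total)  # 236
```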
Understanding SSTF's performance requires analyzing its expected behavior under various workloads.
Expected Performance vs FCFS:
For uniform random requests on a disk with $N$ cylinders, FCFS's expected seek distance is approximately $N/3$ per request—the average distance between two independent uniformly distributed positions on the disk.
The improvement factor depends heavily on queue depth—the more pending requests, the better SSTF can optimize. With $k$ pending requests:
$$\text{Improvement Factor} \approx \sqrt{k}$$
For a typical queue depth of 16 requests: $$\text{SSTF Expected Seek} \approx \frac{N}{12} \text{ per request}$$
This represents a 4x improvement over FCFS for random workloads.
Relationship to Optimal:
SSTF is a greedy algorithm that makes locally optimal decisions. For one-dimensional optimization problems like disk scheduling, greedy approaches achieve strong results: in practice SSTF usually lands within 10-20% of the optimal schedule.
The worst case occurs when SSTF must eventually reach isolated requests after exhausting a local cluster.
Example of Suboptimal SSTF: with the head at cylinder 10 and pending requests {8, 14, 0}, SSTF services 8 (seek 2), then 14 (seek 6), then 0 (seek 14), for 22 cylinders total. The optimal order 14, 8, 0 costs only 4 + 6 + 8 = 18. SSTF's local optimization caused it to miss the better global ordering.
SSTF's advantage scales with queue depth. At queue depth 1, SSTF degenerates to FCFS. At queue depth 100+, SSTF approaches optimal efficiency. This is why SSTF is particularly valuable in high-load scenarios where many requests are pending. The figures below assume a 1,000-cylinder disk and the $\sqrt{k}$ improvement model above.
| Queue Depth | Expected SSTF Seek | FCFS Seek | Improvement Factor |
|---|---|---|---|
| 1 | 333 cyl | 333 cyl | 1.0x (no improvement) |
| 4 | 167 cyl | 333 cyl | 2.0x |
| 16 | 83 cyl | 333 cyl | 4.0x |
| 64 | 42 cyl | 333 cyl | 8.0x |
| 256 | 21 cyl | 333 cyl | 16.0x |
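The table values follow directly from the $\sqrt{k}$ model; a quick sketch (assuming $N = 1000$, so FCFS averages $N/3 \approx 333$ cylinders per request):

```python
import math

# Expected per-request seek under the sqrt(k) model: SSTF ≈ (N/3) / sqrt(k)
N = 1000
for k in [1, 4, 16, 64, 256]:
    fcfs = N / 3
    sstf = fcfs / math.sqrt(k)
    print(f"k={k:3d}: SSTF ≈ {sstf:4.0f} cyl, improvement {math.sqrt(k):4.1f}x")
```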
SSTF's critical weakness is starvation—requests far from the current head position may wait indefinitely while closer requests continuously arrive and are serviced first.
Starvation Scenario:
Consider a disk where the head sits near cylinder 100: a request for cylinder 199 is pending, but new requests keep arriving in the 95-104 range. Every new arrival is closer to the head than 199, so SSTF services it first, and the far request is postponed indefinitely.
This is analogous to the Shortest Job First (SJF) CPU scheduling problem—short jobs can starve long jobs.
"""SSTF Starvation Analysis Demonstrates how SSTF can cause unbounded waiting timesfor requests at disk extremes under continuous load.""" import randomfrom dataclasses import dataclassfrom typing import List, Optionalimport statistics @dataclassclass StarvationMetrics: """Tracks waiting time statistics for starvation analysis.""" request_id: int cylinder: int arrival_order: int service_order: int wait_count: int # How many requests were serviced before this one def analyze_starvation( head_start: int, total_cylinders: int, hot_zone_center: int, hot_zone_width: int, cold_requests: List[int], hot_request_count: int, seed: int = 42) -> dict: """ Analyze SSTF starvation behavior. Creates a scenario with: - "Cold" requests at specified extreme positions - "Hot" requests continuously generated near hot_zone_center Returns statistics on how long cold requests wait. """ random.seed(seed) # Initialize pending requests pending = [] request_id = 0 # Add cold requests (at extremes) for cyl in cold_requests: pending.append({ "id": request_id, "cylinder": cyl, "arrival": request_id, }) request_id += 1 cold_count = len(cold_requests) # Add hot requests (near center) for _ in range(hot_request_count): cyl = hot_zone_center + random.randint(-hot_zone_width, hot_zone_width) cyl = max(0, min(total_cylinders - 1, cyl)) pending.append({ "id": request_id, "cylinder": cyl, "arrival": request_id, }) request_id += 1 # Run SSTF head = head_start serviced = [] while pending: # Find nearest nearest_idx = min( range(len(pending)), key=lambda i: abs(pending[i]["cylinder"] - head) ) req = pending.pop(nearest_idx) head = req["cylinder"] serviced.append(StarvationMetrics( request_id=req["id"], cylinder=req["cylinder"], arrival_order=req["arrival"], service_order=len(serviced), wait_count=len(serviced) - req["arrival"] )) # Analyze cold request behavior cold_metrics = [s for s in serviced if s.request_id < cold_count] hot_metrics = [s for s in serviced if s.request_id >= cold_count] return { 
"cold_requests": { "count": len(cold_metrics), "avg_wait": statistics.mean(m.wait_count for m in cold_metrics), "max_wait": max(m.wait_count for m in cold_metrics), "service_positions": [m.service_order for m in cold_metrics], }, "hot_requests": { "count": len(hot_metrics), "avg_wait": statistics.mean(m.wait_count for m in hot_metrics), "max_wait": max(m.wait_count for m in hot_metrics), }, "starvation_ratio": ( statistics.mean(m.wait_count for m in cold_metrics) / statistics.mean(m.wait_count for m in hot_metrics) if hot_metrics else float('inf') ), } # Demonstrationif __name__ == "__main__": # Scenario: 1000 cylinder disk # Hot zone: cylinders 450-550 (center at 500) # Cold requests: cylinders 10, 990 (at extremes) # 100 hot requests result = analyze_starvation( head_start=500, total_cylinders=1000, hot_zone_center=500, hot_zone_width=50, cold_requests=[10, 990], hot_request_count=100, ) print("=== SSTF Starvation Analysis ===") print("Scenario: Head at 500, hot zone 450-550, cold at 10 and 990") print("Cold Requests (at extremes):") print(f" Average wait: {result['cold_requests']['avg_wait']:.1f} requests") print(f" Maximum wait: {result['cold_requests']['max_wait']} requests") print(f" Service positions: {result['cold_requests']['service_positions']}") print("Hot Requests (near center):") print(f" Average wait: {result['hot_requests']['avg_wait']:.1f} requests") print(f" Maximum wait: {result['hot_requests']['max_wait']} requests") print(f"Starvation Ratio: {result['starvation_ratio']:.1f}x") print("(Cold requests wait this many times longer than hot requests)")In the worst case, SSTF starvation is unbounded—a request can wait forever. If new requests continuously arrive closer to the head than a pending request, that pending request will never be served. This is unacceptable for production systems with fairness or latency SLAs.
Formal Starvation Condition:
A request at cylinder $c$ will starve if, at every scheduling decision, there exists at least one other pending request at cylinder $c'$ such that:
$$|c' - H| < |c - H|$$
where $H$ is the current head position.
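The condition is easy to check mechanically. A tiny helper makes it concrete (`is_deferred` is a hypothetical name, not from the implementation above):

```python
def is_deferred(head, c, pending):
    """True if SSTF will pass over cylinder c again this round:
    some other pending request sits strictly closer to the head."""
    return any(abs(p - head) < abs(c - head) for p in pending if p != c)

# Head at 100: the request at 199 is deferred whenever anything
# from the 95-104 hot zone is pending.
print(is_deferred(100, 199, [199, 97, 103]))  # True
print(is_deferred(100, 199, [199]))           # False
```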
Starvation Probability:
For a heavily loaded system with request rate $\lambda$ and service rate $\mu$, the probability that a request at distance $d$ from the hot zone experiences excessive delay grows exponentially with:
$$P(\text{starve}) \propto e^{\alpha \cdot d}$$
where $\alpha$ depends on workload intensity.
Practical Impact:
In production systems, starvation manifests as tail-latency spikes and I/O timeouts for processes whose data lies far from the disk's hot zone, and as background tasks that appear to hang while foreground I/O monopolizes the head.
Several variants of SSTF address its limitations while preserving most of its performance benefits:
"""Aged SSTF Implementation Combines SSTF's seek optimization with aging to prevent starvation.Requests gradually increase in priority as they wait longer.""" from dataclasses import dataclassfrom typing import Set, Optionalimport time @dataclassclass AgedRequest: request_id: int cylinder: int arrival_time: float @dataclassclass AgedSSTFScheduler: """ SSTF with aging to prevent starvation. Priority = seek_distance - (age_weight * age) Lower priority = higher chance of selection. As age increases, effective "distance" decreases until any request will eventually be selected. """ head_position: int age_weight: float = 10.0 # Cylinders reduced per second of waiting pending: Set[AgedRequest] = None current_time: float = 0.0 def __post_init__(self): if self.pending is None: self.pending = set() def submit(self, request: AgedRequest) -> None: self.pending.add(request) def get_effective_distance(self, request: AgedRequest) -> float: """ Calculate aged priority for a request. effective_distance = actual_distance - (age_weight * wait_time) This ensures that even distant requests will eventually have negative effective distance and be selected. 
""" actual_distance = abs(request.cylinder - self.head_position) wait_time = self.current_time - request.arrival_time age_bonus = self.age_weight * wait_time return actual_distance - age_bonus def select_next(self) -> Optional[AgedRequest]: """Select request with minimum effective distance.""" if not self.pending: return None return min(self.pending, key=self.get_effective_distance) def service(self, request: AgedRequest) -> int: """Service a request and return seek distance.""" seek = abs(request.cylinder - self.head_position) self.head_position = request.cylinder self.pending.remove(request) return seek def advance_time(self, delta: float) -> None: """Advance simulation time (for aging calculation).""" self.current_time += delta def demonstrate_aged_sstf(): """Show how aging prevents starvation.""" scheduler = AgedSSTFScheduler(head_position=100, age_weight=5.0) # Submit distant request first scheduler.submit(AgedRequest(0, 199, arrival_time=0.0)) # Submit close requests for i in range(1, 6): scheduler.submit(AgedRequest(i, 95 + i, arrival_time=0.1 * i)) print("=== Aged SSTF Demo ===") print("Without aging, request at 199 would be served last.") print("With age_weight=5.0 cylinders/second:") service_order = [] # Simulate with time advancement while scheduler.pending: next_req = scheduler.select_next() if next_req: print(f"Time {scheduler.current_time:.1f}s: ", end="") print(f"Select cylinder {next_req.cylinder} ", end="") print(f"(eff_dist={scheduler.get_effective_distance(next_req):.1f})") scheduler.service(next_req) service_order.append(next_req.cylinder) # Simulate seek time (0.01s per cylinder simplified) scheduler.advance_time(0.1) print(f"Service order: {service_order}") print("Note: Distant request served earlier than pure SSTF due to aging.") if __name__ == "__main__": demonstrate_aged_sstf()Implementing SSTF efficiently requires careful attention to data structures and algorithmic complexity.
| Data Structure | Find Nearest | Insert | Memory | Best Use Case |
|---|---|---|---|---|
| Unsorted List | O(n) | O(1) | O(n) | Small queue depth (<20) |
| Sorted Array | O(log n) + O(1) | O(n) | O(n) | Mostly static queue |
| Balanced BST | O(log n) | O(log n) | O(n) | Dynamic, moderate queue |
| Skip List | O(log n) avg | O(log n) | O(n) expected | High concurrency |
| Two Sorted Lists | O(1) + O(log n) | O(log n) | O(n) | Separate left/right of head |
Optimal Structure: Two Sorted Lists
The most efficient SSTF implementation maintains two sorted structures: one holding pending requests at cylinders below the head (with its largest element accessible first) and one holding requests above the head (with its smallest element accessible first).
Finding the nearest request then requires only checking two candidates—the largest cylinder below the head and the smallest cylinder above it—and taking whichever is closer.
This reduces selection to O(1) peeks after the initial sort, with O(log n) insertions when the structures are heap- or tree-backed (a plain sorted array pays O(n) per insert for element shifting).
Handling Head Movement:
When the head moves, some requests may need to transfer between lists: any pending request that now lies on the other side of the new head position belongs in the opposite structure. With heap-backed lists this can be done lazily, migrating misplaced elements only when they surface at the top.
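A sketch of this idea using two heaps (an assumption on my part—the text describes "sorted lists", and Python's `heapq` delivers the same asymptotics with lazy migration; `TwoListSSTF` is an illustrative name):

```python
import heapq

class TwoListSSTF:
    """Nearest-request selection via two heaps split around the head.
    left:  max-heap (stored negated) of cylinders at or below the head
    right: min-heap of cylinders above the head
    """
    def __init__(self, head: int):
        self.head = head
        self.left = []   # negated cylinders <= head
        self.right = []  # cylinders > head

    def insert(self, cyl: int) -> None:  # O(log n)
        if cyl <= self.head:
            heapq.heappush(self.left, -cyl)
        else:
            heapq.heappush(self.right, cyl)

    def _migrate(self) -> None:
        # After the head moves, a heap's top may sit on the wrong side.
        # Each heap's extreme element is at its top, so migrating tops
        # until both sides are consistent repairs the whole split.
        while self.left and -self.left[0] > self.head:
            heapq.heappush(self.right, -heapq.heappop(self.left))
        while self.right and self.right[0] <= self.head:
            heapq.heappush(self.left, -heapq.heappop(self.right))

    def pop_nearest(self):  # amortized O(log n)
        self._migrate()
        if not self.left and not self.right:
            return None
        dl = self.head - (-self.left[0]) if self.left else float("inf")
        dr = self.right[0] - self.head if self.right else float("inf")
        # Tie-break toward the lower cylinder (dl <= dr)
        cyl = -heapq.heappop(self.left) if dl <= dr else heapq.heappop(self.right)
        self.head = cyl
        return cyl

# Same workload as the running example: head 50, eight pending requests.
sched = TwoListSSTF(50)
for c in [95, 180, 34, 119, 11, 123, 62, 64]:
    sched.insert(c)
order = []
while (c := sched.pop_nearest()) is not None:
    order.append(c)
print(order)  # [62, 64, 34, 11, 95, 119, 123, 180]
```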
Modern disk controllers implement SSTF-like algorithms in hardware/firmware using specialized sorted buffers. The controller's Native Command Queuing (NCQ) with queue depth 32 allows drives to internally reorder requests for optimal head movement, transparent to the operating system.
SSTF is appropriate in specific circumstances where its starvation risk is acceptable: lightly loaded systems where the queue regularly drains, batch workloads with no per-request latency requirements, and short bursts where raw throughput matters more than fairness.
While SSTF demonstrates optimal seek minimization, production systems almost universally use SCAN or C-SCAN variants that provide similar performance with starvation guarantees. Pure SSTF is primarily useful as a theoretical benchmark and for understanding greedy optimization.
We have thoroughly examined Shortest Seek Time First scheduling. Let's consolidate the key insights:
- SSTF greedily services the pending request with the minimum seek distance from the current head position.
- On the running example it cut total seek distance by roughly two-thirds versus FCFS, and its advantage grows roughly with $\sqrt{k}$ as queue depth $k$ increases.
- Greedy selection is not globally optimal—SSTF typically lands within 10-20% of the optimal schedule.
- Its critical flaw is unbounded starvation of requests far from the hot zone; aging variants restore fairness at a small seek cost.
What's Next:
SSTF demonstrates that greedy optimization dramatically improves performance but creates fairness problems. In the next page, we'll study SCAN (Elevator Algorithm)—an ingenious approach that provides SSTF-like performance while guaranteeing bounded wait times. SCAN achieves this by imposing directional discipline on head movement.
You now understand SSTF at an expert level—its greedy selection strategy, dramatic performance gains, critical starvation vulnerability, and appropriate use cases. This sets the stage for understanding why elevator algorithms like SCAN became the standard approach to disk scheduling.