In 1987, Barbara Liskov delivered a keynote address titled "Data Abstraction and Hierarchy" at the OOPSLA conference. In that talk, she articulated a principle that would become one of the five pillars of SOLID design:
"What is wanted here is something like the following substitution property: If for each object o₁ of type S there is an object o₂ of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o₁ is substituted for o₂, then S is a subtype of T."
This formal definition might seem abstract, but it captures a profound truth about type relationships in object-oriented programming. Let's unpack it completely.
By the end of this page, you will understand Liskov's precise formulation—what each part means, why she phrased it this way, and how to apply this rigorous definition to evaluate your own type hierarchies. You'll gain the theoretical foundation that turns intuition into principled analysis.
Before diving into the principle, it's worth understanding who Barbara Liskov is and why her work carries such weight.
Barbara Liskov is a pioneering computer scientist whose contributions have shaped how we think about programming languages, data abstraction, and distributed computing. She is an Institute Professor at MIT—the highest faculty honor—and in 2008, she received the A.M. Turing Award, often called the "Nobel Prize of Computing."
Her key contributions include:

- The CLU programming language (mid-1970s), one of the first languages with built-in support for data abstraction and a direct ancestor of today's classes and interfaces
- The Argus language and system for building reliable distributed programs
- The substitution principle itself, later formalized with Jeannette Wing
- Decades of foundational work on replication and fault tolerance in distributed systems
When we cite LSP, we're not invoking some arbitrary coding guideline. We're applying a principle formulated by one of the most rigorous and accomplished computer scientists in history. The principle emerged from decades of work on what makes programs correct and maintainable.
Liskov's background in formal verification and data abstraction is crucial context. The substitution principle isn't just a practical guideline—it's grounded in the theory of how abstract types relate to one another. Understanding her formulation helps us apply the principle with precision rather than intuition.
Let's break down Liskov's original definition piece by piece:
"If for each object o₁ of type S there is an object o₂ of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o₁ is substituted for o₂, then S is a subtype of T."
| Symbol/Phrase | Meaning | In Practice |
|---|---|---|
| S and T | Two types where S is potentially a subtype of T | The child class (S) and parent class (T) |
| o₁ of type S | Any instance of the proposed subtype | Any object created from the child class |
| o₂ of type T | An instance of the base type | Any object created from the parent class |
| Program P | Any code that uses type T | Client code written against the parent/interface |
| behavior unchanged | P produces the same observable results | Same outputs, same effects, same exceptions (or lack thereof) |
| S is a subtype of T | S can safely be used wherever T is used | The inheritance relationship is semantically valid |
The key insight:
The definition is stated from the perspective of programs that use the types, not from the perspective of the types themselves. This is crucial. Liskov is saying:
"Look at every piece of code that works with type T. If that code also works correctly with type S—without modification, without special cases, without knowing that S is different from T—then and only then is S a legitimate subtype of T."
This shifts our focus from the implementation of the subtype to its observable behavior as seen by clients.
Think of it this way: You're not asking "Does S extend T?" or "Does S have all of T's methods?" You're asking "If I hand code that expects T an instance of S instead, will that code still work correctly?" The inheritance mechanism is irrelevant—only observable behavior matters.
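To make the cast of characters concrete, here is a minimal sketch; the `Program` type, the `behaviorUnchanged` helper, and the `Counter`/`LoggingCounter` pair are illustrative inventions, not part of Liskov's formulation or of the examples later on this page. A client program P is modeled as a function of the base type, and the substitution question is whether P's observable result is the same whether it receives an o₂ of type T or an o₁ of type S.

```typescript
// P: any client program written purely in terms of the base type, summarized
// here as a function from that type to some observable result.
type Program<T, R> = (obj: T) => R;

// The substitution question for one program P and one pair of objects:
// is P's result unchanged when o1 (of the proposed subtype) replaces o2?
// (Side effects and exceptions are ignored in this simplification.)
function behaviorUnchanged<T, R>(p: Program<T, R>, o2: T, o1: T): boolean {
  return JSON.stringify(p(o2)) === JSON.stringify(p(o1));
}

// T: a small base type with an observable contract.
class Counter {
  protected count = 0;
  increment(): void {
    this.count += 1;
  }
  value(): number {
    return this.count;
  }
}

// S: a proposed subtype that adds internal bookkeeping but changes nothing
// a client of Counter can observe.
class LoggingCounter extends Counter {
  private log: string[] = [];
  increment(): void {
    this.log.push(`increment at count=${this.count}`);
    super.increment();
  }
}

// A program P defined in terms of Counter; it knows nothing about LoggingCounter.
const p: Program<Counter, number> = (c) => {
  c.increment();
  c.increment();
  return c.value();
};

console.log(behaviorUnchanged(p, new Counter(), new LoggingCounter())); // true
```

Keep in mind that Liskov's condition quantifies over all programs P defined in terms of T; a check like this can only sample a few programs and inputs, so it can expose violations but never prove substitutability.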
The phrase "behavior of P is unchanged" is the heart of LSP, but it's also the trickiest to interpret. Let's be precise about what it does and doesn't mean.
What "unchanged behavior" DOES mean:
What "unchanged behavior" DOES NOT mean:
```typescript
// Example: Unchanged behavior with different implementation

abstract class Sorter<T> {
  // Contract: returns array sorted in ascending order
  abstract sort(items: T[], compare: (a: T, b: T) => number): T[];
}

class QuickSorter<T> extends Sorter<T> {
  sort(items: T[], compare: (a: T, b: T) => number): T[] {
    // Uses quicksort internally
    // Different algorithm, but same behavioral contract
    return this.quickSort([...items], compare);
  }

  private quickSort(arr: T[], compare: (a: T, b: T) => number): T[] {
    if (arr.length <= 1) return arr;
    const pivot = arr[Math.floor(arr.length / 2)];
    const left = arr.filter(x => compare(x, pivot) < 0);
    const middle = arr.filter(x => compare(x, pivot) === 0);
    const right = arr.filter(x => compare(x, pivot) > 0);
    return [...this.quickSort(left, compare), ...middle, ...this.quickSort(right, compare)];
  }
}

class MergeSorter<T> extends Sorter<T> {
  sort(items: T[], compare: (a: T, b: T) => number): T[] {
    // Uses mergesort internally
    // Slower for small arrays, but same behavioral contract
    return this.mergeSort([...items], compare);
  }

  private mergeSort(arr: T[], compare: (a: T, b: T) => number): T[] {
    if (arr.length <= 1) return arr;
    const mid = Math.floor(arr.length / 2);
    const left = this.mergeSort(arr.slice(0, mid), compare);
    const right = this.mergeSort(arr.slice(mid), compare);
    return this.merge(left, right, compare);
  }

  private merge(left: T[], right: T[], compare: (a: T, b: T) => number): T[] {
    const result: T[] = [];
    while (left.length && right.length) {
      if (compare(left[0], right[0]) <= 0) {
        result.push(left.shift()!);
      } else {
        result.push(right.shift()!);
      }
    }
    return [...result, ...left, ...right];
  }
}

// Client code works with ANY Sorter - behavior is unchanged
function processData<T>(sorter: Sorter<T>, data: T[], compare: (a: T, b: T) => number): T[] {
  const sorted = sorter.sort(data, compare);
  // Contract: sorted is always in ascending order
  // Doesn't matter if quicksort or mergesort was used
  return sorted;
}

// Both work identically from the client's perspective:
const quickResult = processData(new QuickSorter<number>(), [3, 1, 4, 1, 5], (a, b) => a - b);
const mergeResult = processData(new MergeSorter<number>(), [3, 1, 4, 1, 5], (a, b) => a - b);
// Both produce [1, 1, 3, 4, 5] - behavior is unchanged
```

QuickSorter and MergeSorter have completely different implementations—different algorithms, different performance characteristics, different memory usage. But the observable behavior is identical: given an unsorted array, they return a sorted array. This is what LSP allows and requires.
In 1994, Barbara Liskov and Jeannette Wing published a paper titled "A Behavioral Notion of Subtyping" that gave a more rigorous treatment of the substitution principle. This formalization introduced the concepts that we now use to reason precisely about LSP:
The Three Constraint Rules:

1. Preconditions cannot be strengthened in a subtype: the child may require the same or less from callers, never more.
2. Postconditions cannot be weakened in a subtype: the child may guarantee the same or more to callers, never less.
3. Invariants of the supertype must be preserved by the subtype.
These rules might seem abstract, but they have concrete implications for how you design methods in subclasses.
Contravariance of preconditions (can accept more):
If the parent method requires x > 0, the child can accept x >= 0 or even any x. This is safe because any caller that satisfies the parent's stricter requirement will automatically satisfy the child's looser requirement.
Covariance of postconditions (can promise more):
If the parent method promises to return 0 <= result <= 100, the child can promise 50 <= result <= 75. This is safe because any caller expecting the parent's range will be satisfied by the child's narrower range.
```typescript
// Demonstrating Liskov-Wing rules

abstract class NumberProcessor {
  /**
   * Precondition: value must be between 0 and 100
   * Postcondition: returns a value between 0 and 200
   * Invariant: does not modify external state
   */
  abstract process(value: number): number;
}

// VALID: Looser precondition, tighter postcondition
class SafeProcessor extends NumberProcessor {
  /**
   * Precondition: value can be ANY number (looser - accepts more)
   * Postcondition: returns value between 0 and 100 (tighter - promises more)
   */
  process(value: number): number {
    // Accepts ANY value, not just 0-100
    const clamped = Math.max(-1000, Math.min(1000, value));
    // Returns 0-100, which is within parent's 0-200 range
    return Math.abs(clamped) % 101;
  }
}

// INVALID: Stricter precondition - violates contravariance
class UnsafeProcessor extends NumberProcessor {
  /**
   * Precondition: value must be strictly positive (stricter - rejects valid inputs!)
   */
  process(value: number): number {
    if (value <= 0) {
      // Parent accepts 0-100; we're rejecting 0!
      throw new Error("Value must be positive"); // VIOLATION!
    }
    return value * 2;
  }
}

// INVALID: Weaker postcondition - violates covariance
class WeakProcessor extends NumberProcessor {
  /**
   * Postcondition: returns ANY number (weaker - promises less!)
   */
  process(value: number): number {
    // Parent promises 0-200; we might return 500!
    return value * 5; // Could exceed 200 - VIOLATION!
  }
}

// Client code trusts parent's contract
function runProcessing(processor: NumberProcessor): void {
  const result = processor.process(50); // Valid input per parent
  // Client expects result in 0-200 range per parent's contract
  if (result > 200) {
    // This should NEVER happen according to parent contract
    console.log("Unexpected!"); // But WeakProcessor violates this
  }
}
```

Preconditions can get LOOSER (contravariance), postconditions can get STRICTER (covariance). Never the reverse. Think of it this way: the child can be MORE accepting of inputs and MORE reliable about outputs, but never LESS accepting or LESS reliable.
The Liskov-Wing paper introduced another crucial concept: the history constraint. This addresses how an object's state changes over time.
The history constraint: Subtypes must not allow state changes that the base type would not allow.
This is about the evolution of an object's state, not just its current state. If a parent class guarantees that certain state transitions never happen, the child must maintain that guarantee.
```typescript
// History Constraint Violation Example

class OrderStatus {
  protected _status: 'pending' | 'confirmed' | 'shipped' | 'delivered';

  constructor() {
    this._status = 'pending';
  }

  get status() {
    return this._status;
  }

  /**
   * INVARIANT (history constraint):
   * Status can only progress forward: pending → confirmed → shipped → delivered
   * Once advanced, it CANNOT go backward
   */
  confirm(): void {
    if (this._status !== 'pending') {
      throw new Error('Can only confirm pending orders');
    }
    this._status = 'confirmed';
  }

  ship(): void {
    if (this._status !== 'confirmed') {
      throw new Error('Can only ship confirmed orders');
    }
    this._status = 'shipped';
  }

  deliver(): void {
    if (this._status !== 'shipped') {
      throw new Error('Can only deliver shipped orders');
    }
    this._status = 'delivered';
  }
}

// VIOLATION: Child allows backward state transitions
class EditableOrderStatus extends OrderStatus {
  /**
   * VIOLATES history constraint!
   * Parent guarantees status only moves forward
   * But this allows going backward
   */
  reset(): void {
    this._status = 'pending'; // History constraint violated!
  }
}

// Client code that trusts history constraint
function processOrder(order: OrderStatus): void {
  order.confirm();
  order.ship();

  // After ship(), client TRUSTS that status will NEVER be 'pending' again
  // This is the history constraint

  // ... time passes ...

  // Client code can safely assume:
  console.log(order.status !== 'pending'); // Should ALWAYS be true

  // But with EditableOrderStatus, reset() could be called elsewhere,
  // making this assumption false - the history constraint is broken
}
```

Why history constraints matter:
When code observes an object's state and makes decisions based on that observation, it relies on implicit history constraints. If an OrderStatus has been shipped, other code might send the customer a shipping notification, release reserved inventory, or start the delivery clock.
All of these actions assume the order stays shipped (or moves to delivered). If a subclass allows "resetting" to pending, it breaks every piece of code that made decisions based on the shipping status.
If your class represents a state machine with defined transitions, the valid transitions are a history constraint. Subclasses must honor the state machine's rules—they can't add transitions that would bypass or reverse the defined flow.
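For contrast with EditableOrderStatus, here is a sketch of a subclass that does honor the history constraint; the AuditedOrderStatus name and its audit trail are illustrative additions, not part of the original example.

```typescript
// A subtype that extends OrderStatus without adding or reversing transitions.
class AuditedOrderStatus extends OrderStatus {
  private transitions: string[] = [];

  confirm(): void {
    super.confirm(); // same guard, same forward-only transition
    this.transitions.push('pending → confirmed');
  }

  ship(): void {
    super.ship();
    this.transitions.push('confirmed → shipped');
  }

  deliver(): void {
    super.deliver();
    this.transitions.push('shipped → delivered');
  }

  // New capability for new clients; existing clients of OrderStatus are unaffected.
  get auditTrail(): readonly string[] {
    return this.transitions;
  }
}

// processOrder(new AuditedOrderStatus()) behaves exactly as it does with OrderStatus:
// every state change still goes through the parent's guarded methods, so the
// state machine only moves forward and the history constraint holds.
```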
Armed with Liskov's formal definition, you can now systematically evaluate any proposed inheritance relationship. Here's a practical way to do it:
The substitution thought experiment:
When evaluating whether Child is a valid subtype of Parent, imagine:
1. Take every piece of client code ever written against Parent.
2. Assume none of that code knows that Child exists.
3. Replace all Parent instances with Child instances.

If the code still works correctly in all cases, Child is LSP-compliant. If any test fails, any assertion breaks, any exception is thrown unexpectedly, or any incorrect result is produced—Child violates LSP.
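The thought experiment can also be mechanized as a sketch: write one contract test against the parent type and run it once per proposed subtype. The runSorterContract helper below is illustrative and reuses the Sorter hierarchy from earlier on this page; in a real project the assertions would live in your test runner of choice.

```typescript
// One suite, written only against the Sorter contract...
function runSorterContract(name: string, makeSorter: () => Sorter<number>): void {
  const compare = (a: number, b: number) => a - b;

  const sorted = makeSorter().sort([3, 1, 4, 1, 5], compare);
  console.assert(
    JSON.stringify(sorted) === JSON.stringify([1, 1, 3, 4, 5]),
    `${name}: must return items in ascending order`
  );

  const empty = makeSorter().sort([], compare);
  console.assert(empty.length === 0, `${name}: must handle empty input`);
}

// ...executed once per proposed subtype. A failure for any subtype means that
// subtype is not substitutable for Sorter.
runSorterContract('QuickSorter', () => new QuickSorter<number>());
runSorterContract('MergeSorter', () => new MergeSorter<number>());
```

Running the identical assertions against every subtype is the test-suite equivalent of Liskov's "for all programs P": if a subtype needs its own weaker expectations, that is an LSP violation showing up as a test change.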
It's far better to design LSP-compliant hierarchies from the start than to detect violations later. Before creating a subclass, consciously ask: "Will every client of the parent class work correctly with this subclass?" If there's any doubt, reconsider the inheritance relationship.
Barbara Liskov's formulation gives us the theoretical precision to reason about subtyping correctly. Let's consolidate what we've learned:

- Liskov's definition judges subtyping from the client's perspective: S is a subtype of T only if every program written against T behaves unchanged when handed an S.
- "Unchanged behavior" means identical observable behavior, not identical implementation, performance, or internal state.
- The Liskov-Wing rules: preconditions may only get looser, postconditions may only get tighter, and the supertype's invariants must be preserved.
- The history constraint: subtypes must not allow state transitions that the base type rules out.
- The substitution thought experiment turns these rules into a systematic way to evaluate any proposed hierarchy.
What's next:
We've covered the formal definition. The next page explores LSP as a behavioral contract—how to think about the implicit agreements between types and their users, and how to make those contracts explicit and enforceable.
You now understand Barbara Liskov's original formulation of the substitution principle, including the Liskov-Wing rules about preconditions, postconditions, invariants, and history constraints. This formal foundation enables precise reasoning about whether an inheritance relationship is semantically valid.