There is a seductive trap in software design: the belief that with enough analysis, experience, and insight, we can anticipate every future change our system might need to accommodate. Armed with the Open/Closed Principle, developers sometimes fall into a pattern of building elaborate extension mechanisms for changes that never materialize—while being blindsided by changes they never imagined.
This page confronts an uncomfortable truth: we are remarkably bad at predicting the future of our software. The history of software engineering is littered with systems that were carefully designed to be open for extension in precisely the wrong dimensions, while remaining stubbornly closed in the areas where change actually occurred.
By the end of this page, you will understand why future requirements are fundamentally unpredictable, how this uncertainty should shape your approach to OCP, and what strategies allow you to embrace uncertainty rather than futilely resist it. You'll learn to distinguish between the impossible dream of perfect foresight and the practical art of evolutionary design.
Software requirements emerge from a complex intersection of business strategy, market conditions, user behavior, technological capability, and competitive pressure. Each of these factors is itself unpredictable, and their interactions create a combinatorial explosion of possible futures.
Why prediction fails:
Consider a typical software project at inception. The development team can see today's requirements, constraints, and technology choices with reasonable clarity. What they cannot know is which of the forces catalogued in the next section (business pivots, shifting user behavior, competitive moves, new technologies, regulation, scale) will upend those requirements, or when.
The further we project into the future, the more our predictions diverge from reality. A design decision intended to accommodate change two years from now is essentially a gamble—and the house usually wins.
Case studies of successful prediction create a false impression. We hear about the teams who correctly anticipated a major change and designed for it. We don't hear about the far more numerous teams who predicted wrongly—either building extension points that remained forever unused, or preparing for changes that never came while being devastated by changes they never imagined.
To understand why change is unpredictable, we must examine the different sources of unpredictability in software systems. Each source creates different challenges for design foresight.
| Source | Nature of Unpredictability | Design Impact |
|---|---|---|
| Business Pivots | Strategic direction shifts based on market feedback, funding, or leadership changes | Core assumptions invalidated; extension points may become irrelevant |
| User Discovery | Actual user behavior differs from imagined use cases | Features designed for flexibility in unused areas; rigidity where flexibility needed |
| Competitive Response | Competitor actions force reactive feature development | Planned timelines disrupted; rushed changes to closed components |
| Technological Shifts | New technologies change what's possible and expected | Today's extension points may not accommodate tomorrow's paradigms |
| Regulatory Changes | Legal requirements can mandate fundamental changes | Compliance may require modifying what was designed as closed |
| Scale Effects | Problems that emerge only at certain scales | Performance changes may require structural modifications |
| Integration Pressure | Requirements from partners or platforms we integrate with | External constraints on internal design decisions |
The compounding effect:
These sources don't operate in isolation. A business pivot triggered by competitive pressure might coincide with a regulatory change—and the combination requires changes no one could have predicted from either factor alone.
Example: The Payment Processing Evolution
Consider a team building a payment system in 2015. They could reasonably anticipate some kinds of variation, such as the need to support additional card processors over time. But could they predict the dimensions along which payments actually changed in the years that followed, such as customers expecting to pay with cryptocurrency?
No amount of careful analysis in 2015 would have identified these specific dimensions of change. A team that built extensive extension mechanisms for multiple card processors might find themselves with no easy path to cryptocurrency—because the abstraction was built around assumptions that don't transfer.
When we design for extensibility, we're essentially choosing which 'axes' the system can easily extend along. But future requirements often demand extension along axes we never considered—or require changes that cross multiple axes simultaneously. We can't build extension points for dimensions we haven't imagined.
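To make the 'axes' idea concrete, here is a minimal TypeScript sketch. All names here are hypothetical illustrations, not drawn from any real team's design: an abstraction built around the card-processor axis cannot absorb a cryptocurrency payment, because that change runs along an axis the interface never modeled.

```typescript
// Hypothetical abstraction built around one anticipated axis of change:
// which card processor performs the charge. Assumptions about card numbers
// and a synchronous authorize/capture flow are baked into the contract.
interface CardCharge {
  cardNumber: string;
  expiryMonth: number;
  expiryYear: number;
  amountCents: number;
  currency: string;
}

interface CardProcessor {
  authorize(charge: CardCharge): Promise<string>; // returns an authorization code
  capture(authCode: string): Promise<void>;
}

// Extending along the anticipated axis is easy: add another implementation.
class FakeCardProcessor implements CardProcessor {
  async authorize(charge: CardCharge): Promise<string> {
    return `auth-${charge.currency}-${charge.amountCents}`;
  }
  async capture(authCode: string): Promise<void> {
    console.log(`captured ${authCode}`);
  }
}

// A cryptocurrency payment has no card number and no authorize/capture
// split; settlement confirms asynchronously. It varies along an axis this
// interface never modeled, so supporting it means changing the "closed"
// CardProcessor contract rather than adding one more implementation.
```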
Let's examine real-world patterns where prediction failed, illustrating how even experienced teams building with the best intentions find their foresight inadequate.
Typical examples: one team builds an elaborate notification platform around a NotificationChannel interface, a NotificationRouter, a ChannelFactory, and a NotificationTemplateEngine; another builds a pricing engine around PricingRule implementations that support composition and evaluation order. In each case, the teams were competent and thoughtful. They applied OCP correctly. Their failure wasn't in execution—it was in the fundamental assumption that they could predict the future. The lesson isn't 'their extension points were wrong'; it's that extension points are guesses, and most guesses are wrong.
A fundamental asymmetry exists between design time and runtime. At design time, we have the least information about what our system will need to do. Yet design time is when we must make structural decisions that constrain future possibilities.
At design time we know the requirements as currently stated, the constraints we can see, and our assumptions about how the system will be used. Only once the system is running do we learn how users actually behave, which features matter, where scale and performance bite, and which of our assumptions were wrong.
The paradox: We must design for a future we cannot know, using information that becomes more accurate only as we proceed—when it's often too late to change course without significant cost.
Implications for OCP:
This knowledge asymmetry fundamentally challenges the notion that we can 'close' the right parts of our system. If we don't know what will change, how can we know what should be closed? The answer is that we can't—at least, not with the confidence that OCP seems to promise.
This doesn't mean OCP is useless. It means OCP must be applied with humility: acknowledging that our extension points are educated guesses, not prophecies.
When developers act as though they know what changes will come, they often over-invest in specific extension mechanisms. When (not if) predictions prove wrong, the over-engineering becomes a liability rather than an asset. Paradoxically, acting with more certainty often leads to worse outcomes than acting with appropriate uncertainty.
If we accept that we cannot predict the future, how should we approach OCP? The key is shifting from prediction-based design to adaptation-based design.
A practical heuristic: Don't abstract until you have three concrete cases. With one case, you're guessing. With two, you might see a pattern but still mostly guessing. With three, you have enough evidence to design an abstraction that actually fits the real variability in your system.
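As a sketch of this heuristic in practice (the discount scenario and all names below are invented for illustration), notice how the abstraction is extracted only after three concrete cases reveal the real shape of the variability:

```typescript
// Case 1 arrives: a flat percentage discount. One case, so no abstraction;
// just a function that solves today's requirement.
function percentageDiscount(totalCents: number, percent: number): number {
  return Math.round(totalCents * (1 - percent / 100));
}

// Case 2 arrives: free shipping over a threshold. A pattern may be forming,
// but two points are still mostly a guess about its real shape.
function freeShippingDiscount(totalCents: number, shippingCents: number): number {
  return totalCents >= 5000 ? totalCents - shippingCents : totalCents;
}

// Case 3 arrives: buy-one-get-one on selected items. Only now is it clear
// that rules need item-level data, not just a total, and that they must be
// applied in sequence. The abstraction is extracted from evidence, not guessed.
interface OrderLine {
  sku: string;
  unitCents: number;
  quantity: number;
}

interface OrderData {
  lines: OrderLine[];
  shippingCents: number;
}

interface DiscountRule {
  // Takes the order and the running total, returns the adjusted total.
  apply(order: OrderData, runningTotalCents: number): number;
}
```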
A mature understanding of OCP recognizes that 'closed for modification' is never permanent. It represents a current state of stability, not an eternal guarantee.
The lifecycle of closure:
1. Initial Implementation — Code is written to solve current requirements. It is neither particularly open nor closed—it simply works.
2. First Change Requirement — A change is needed that the original design didn't anticipate. We have a choice: modify the existing code or introduce an extension point.
3. Extension Point Creation — If the change represents a pattern likely to recur, we refactor to create an extension point. The specific dimension becomes 'open for extension.'
4. Temporary Closure — The refactored code is now 'closed' along that dimension. But this closure is provisional—it reflects our current understanding.
5. New Unanticipated Change — A change arrives that doesn't fit our extension point. We face the choice again: modify the 'closed' code or find a workaround.
6. Evolving Abstraction — If the new change is valuable, we may modify the abstraction itself. What was closed opens again, incorporating new understanding.

The cycle repeats. There is no final state where the code is perfectly closed. Each period of closure is temporary, lasting until our understanding proves insufficient.
```typescript
// Stage 1: Initial simple implementation
class OrderProcessor {
  process(order: Order): void {
    // Direct implementation - not thinking about OCP yet
    this.validateOrder(order);
    this.calculateTotal(order);
    this.chargePayment(order);
    this.sendConfirmationEmail(order);
  }
}

// Stage 2: First change - need SMS notifications too
// We COULD just modify sendConfirmationEmail... but we see a pattern
// So we introduce an extension point

interface NotificationSender {
  send(order: Order): void;
}

class OrderProcessor {
  constructor(private notifiers: NotificationSender[]) {}

  process(order: Order): void {
    this.validateOrder(order);
    this.calculateTotal(order);
    this.chargePayment(order);
    // Now "closed" for notification changes
    this.notifiers.forEach(n => n.send(order));
  }
}

// Stage 3: Time passes. Now we need...
// - Notifications with user preferences
// - Different notifications for different order types
// - Notifications that depend on order state changes

// Our NotificationSender abstraction doesn't fit!
// It knows nothing about preferences, order types, or state changes.

// Stage 4: We must modify what was "closed" - evolve the abstraction
interface NotificationContext {
  order: Order;
  eventType: OrderEventType;
  user: User;
}

interface SmartNotificationSender {
  canHandle(context: NotificationContext): boolean;
  send(context: NotificationContext): Promise<void>;
}

// The cycle continues - this too will be modified when
// requirements arrive that we can't predict today
```

Designating code as 'closed' is really a soft commitment: 'We believe this code is stable enough that we don't expect to modify it.' It's a working hypothesis, not a binding contract. When evidence proves the hypothesis wrong, good engineers update their design rather than contort their code to preserve an outdated abstraction.
Given that we cannot predict the future, here are actionable guidelines for applying OCP in the real world:
| Guideline | Rationale | Example |
|---|---|---|
| Start Concrete | Abstractions without concrete experience are guesses | Don't create PaymentProcessor interface until you have two payment types |
| Earn Your Abstractions | Let change requests justify extensions | Add strategy pattern when the second algorithm arrives, not the first |
| Keep Closed Code Modifiable | Well-tested, clear code is safe to change | High test coverage means opening 'closed' code isn't scary |
| Track Prediction Accuracy | Learn from past predictions to improve future ones | Review: did those extension points get used? Were changes unexpected? |
| Accept Rework as Normal | Refactoring isn't failure; it's learning | When predictions fail, refactor enthusiastically rather than defensively |
| Prefer Shallow Hierarchies | Deep inheritance/composition trees are hard to modify | Flat structures are easier to reorganize when understanding evolves |
| Limit Blast Radius | Small, focused modules limit the impact of wrong predictions | Changes to one bounded context shouldn't cascade across the system |
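A brief sketch of the first two guidelines (the shipping example, names, and rates below are invented for illustration): version one stays concrete, and a strategy interface is extracted only when a second real algorithm arrives.

```typescript
// Version 1: one real requirement, so one concrete implementation and no
// speculative interface (Start Concrete).
function standardShippingCents(weightKg: number): number {
  return 500 + Math.ceil(weightKg) * 120;
}

// Later a second real algorithm arrives (express delivery). Only now is a
// strategy extracted, and its signature is shaped by two actual cases
// rather than a guess (Earn Your Abstractions).
interface ShippingRateStrategy {
  rateCents(weightKg: number): number;
}

const standard: ShippingRateStrategy = {
  rateCents: (weightKg) => 500 + Math.ceil(weightKg) * 120,
};

const express: ShippingRateStrategy = {
  rateCents: (weightKg) => 1500 + Math.ceil(weightKg) * 300,
};

function quoteShippingCents(weightKg: number, strategy: ShippingRateStrategy): number {
  return strategy.rateCents(weightKg);
}
```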
The meta-guideline: Design as if you will be wrong—because you will be. The goal is not to be right about the future; it's to make being wrong as painless as possible.
This mindset transforms how we think about OCP. Instead of closing code against the changes we predict, we keep code healthy enough that any change—predicted or not—remains feasible. OCP becomes less about preventing modification and more about ensuring that modifications, when necessary, are contained and safe.
We've confronted an uncomfortable truth: the future of our software is unknowable. This recognition is not a weakness—it's wisdom that leads to better, more adaptive designs.
What's next:
Accepting that prediction fails raises a new question: if we try to prepare for unpredictable change, we risk over-engineering—building for changes that never come. The next page explores this tension, examining the real cost of over-engineering and how to recognize when extensibility investments are likely to pay off.
You now understand why software change is fundamentally unpredictable and how this reality should shape your approach to OCP. The key insight: design for changeability rather than predicted changes. Next, we'll explore the cost of over-engineering—what happens when we try too hard to prepare for an unknowable future.