We've explored three types of wrong abstraction: over-abstraction (too much), under-abstraction (too little), and premature abstraction (too soon). Each has distinct causes and costs, but they share one thing: they all leave detectable traces in the codebase.
This page synthesizes our learning into a diagnostic framework—a systematic approach to recognizing wrong abstractions in any codebase and deciding what to do about them.
By the end of this page, you will have a comprehensive checklist for diagnosing abstraction problems, understand the code smells that signal each type of wrong abstraction, and know the prioritization strategies for remediation.
Effective diagnosis requires a specific mindset. You're not looking for "bad code" in a general sense—you're looking for abstractions that fail to serve their purpose. An abstraction exists to simplify understanding, reduce duplication, or enable flexibility. When it fails at these goals, it's wrong.
The diagnostic mindset is problem-driven: start from pain the team actually feels (slow navigation, changes that fan out across many files, workarounds at call sites) and trace it back to the abstraction responsible, rather than hunting for pattern violations.
Refactoring for the sake of matching patterns or principles, without addressing real pain, is refactoring theater. If the code isn't causing problems, leave it alone. Diagnosis should identify actual issues, not create projects.
Over-abstraction leaves specific traces in codebases. Here are the key code smells that indicate too much abstraction:
| Smell | Description | Example |
|---|---|---|
| Speculative Generality | Features built for imaginary future uses | Plugin system with no plugins |
| Single Implementations | Interfaces with exactly one implementation | IUserService implemented only by UserService |
| Parallel Hierarchies | Interface hierarchies mirroring implementation hierarchies | IBaseRepo, IUserRepo matching BaseRepo, UserRepo |
| Middleman Classes | Classes that only delegate to other classes | UserFacade that just calls UserService methods |
| Configuration Unused | Configurable options that are never varied | Injectable Logger factory but only one logger type |
| Pattern Obsession | Design patterns applied without matching problems | Strategy pattern for single algorithm |
| Deep Inheritance | Many inheritance levels with minimal behavior per level | Entity → BaseEntity → AbstractBaseEntity → ConcreteEntity |
The navigation test:
A practical way to detect over-abstraction is the navigation test. Ask: "Can I trace from user action to database in under 10 files?" If simple operations require navigating through many layers of abstraction, the system is likely over-abstracted.
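To make the test concrete, here is a hedged TypeScript sketch (hypothetical names) of the kind of layering that fails it: each class is another file to open, but only the last one does real work.

```typescript
// Hypothetical sketch: four files to read before anything happens, because
// every layer above the repository only forwards the call.
declare const db: { query(sql: string, params: unknown[]): Promise<unknown> };

class UserRepository {
  findById(id: string) {
    return db.query("SELECT * FROM users WHERE id = ?", [id]);
  }
}

class UserService {
  constructor(private repo: UserRepository) {}
  getUser(id: string) { return this.repo.findById(id); }
}

class UserFacade {
  constructor(private service: UserService) {}
  getUser(id: string) { return this.service.getUser(id); }
}

class UserController {
  constructor(private facade: UserFacade) {}
  getUser(id: string) { return this.facade.getUser(id); }
}
```

Inlining the facade (and arguably the service) loses nothing, because those layers contain no behavior of their own.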
The explanation test:
Another practical test: Can you explain to a new team member why each layer exists? If you can't articulate the specific problem each abstraction solves, those abstractions may be over-engineering.
Count your interfaces versus their implementations. If the ratio is close to 1:1 throughout the codebase, you likely have speculative abstraction. Interfaces should exist because multiple implementations exist or will demonstrably exist.
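As a concrete, hypothetical example of the 1:1 pattern, using the IUserService case from the table:

```typescript
// Hypothetical sketch of a 1:1 interface: it has exactly one implementation
// and no test double or alternative backend that needs it.
interface User { id: string; name: string }

interface IUserService {
  findUser(id: string): Promise<User | null>;
}

class UserService implements IUserService {
  async findUser(id: string): Promise<User | null> {
    // ... look the user up ...
    return null;
  }
}

// Unless a second implementation is genuinely on the way, depending on the
// concrete class is simpler and removes a file, a name, and a layer of indirection.
class SignupHandler {
  constructor(private users: UserService) {}
}
```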
Under-abstraction also leaves distinctive traces. These smells indicate insufficient abstraction:
| Smell | Description | Example |
|---|---|---|
| Primitive Obsession | Domain concepts as primitive types | orderId, customerId, productId all as string |
| Duplicated Logic | Same algorithm in multiple places | Tax calculation in cart.ts, checkout.ts, invoice.ts |
| Shotgun Surgery | Single change requires many-file modifications | Email format change touches 15 files |
| Feature Envy | Methods more interested in other class's data | Order methods constantly accessing Customer fields |
| Long Parameter Lists | Functions with 5+ parameters | createUser(name, email, phone, address, city, state, zip) |
| Inconsistent Behavior | Same concept behaves differently by context | Date formatting differs between reports and UI |
| Data Clumps | Same group of variables travels together | startDate, endDate, timezone always together |
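The Primitive Obsession and Data Clumps rows above often share one fix: give the recurring concept a type of its own. A minimal TypeScript sketch, with hypothetical names:

```typescript
// Before: domain concepts as bare strings, and the same trio of parameters
// travelling together through every signature.
function scheduleReport(
  startDate: string,
  endDate: string,
  timezone: string,
  customerId: string
): void { /* ... */ }

// After: the clump becomes a value type, and the ID gets a branded type so a
// customer ID can no longer be passed where an order ID is expected.
type CustomerId = string & { readonly __brand: "CustomerId" };

interface DateRange {
  start: Date;
  end: Date;
  timezone: string;
}

function scheduleReportTyped(range: DateRange, customer: CustomerId): void { /* ... */ }
```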
The grep test:
Search for duplicated logic patterns using grep or code search. If the same algorithm (even with slight variations) appears in multiple files, you have duplication that might warrant abstraction.
```bash
grep -r "taxRate" --include="*.ts" | wc -l
```
Many hits suggest tax calculation is scattered rather than centralized.
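A hedged sketch of what those hits tend to look like: the same formula re-implemented with small variations that quietly drift apart (file names from the table above; the code itself is hypothetical).

```typescript
// cart.ts
export const cartTotalWithTax = (subtotal: number, taxRate: number) =>
  subtotal + subtotal * taxRate;

// invoice.ts: the "same" calculation, but it rounds, so the two disagree
// by a cent on some orders.
export const invoiceTotal = (subtotal: number, taxRate: number) =>
  Math.round(subtotal * (1 + taxRate) * 100) / 100;
```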
The change propagation test:
Review recent pull requests. Count how many files each logical change touched. If single-concept changes routinely touch many files, you have under-abstraction.
Review your most-changed files over the past quarter. If specific files come up in every sprint, those files likely contain under-abstracted logic that should be extracted. High churn often indicates logic that conceptually belongs elsewhere.
Premature abstraction has its own signatures. These patterns suggest abstraction happened before understanding matured:
| Smell | Description | Example |
|---|---|---|
| Framework for One | Reusable framework with single user | Data validation library used by one service |
| Configurable Everything | Everything parameterized but rarely varied | Dozens of config options, defaults always used |
| Workarounds in Consumers | Consumers fighting the abstraction | Special-case logic to handle abstraction limitations |
| Misfit Extensions | New requirements awkwardly shoehorned | Apple Pay as a "payment provider" when it's a flow |
| Dead Extension Points | Hooks/callbacks that nothing uses | beforeSave, afterSave callbacks never registered |
| Speculative Parameters | Function parameters that are always the same value | includeDeleted: boolean always false |
| Abstraction Oscillation | Repeated refactoring of same abstraction | PaymentProvider redesigned 3 times in 2 years |
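Dead Extension Points and Speculative Parameters are especially easy to confirm in code. A hypothetical sketch:

```typescript
// Hooks nobody registers and a flag nobody flips: complexity paid for
// flexibility that was never needed.
interface SaveOptions {
  beforeSave?: (record: unknown) => void; // never passed by any caller
  afterSave?: (record: unknown) => void;  // never passed by any caller
  includeDeleted?: boolean;               // every caller passes false (or omits it)
}

function saveRecord(record: unknown, options: SaveOptions = {}): void {
  options.beforeSave?.(record);
  // ... persist the record ...
  options.afterSave?.(record);
}

// The honest version, until a real second caller demands otherwise:
function saveRecordSimple(record: unknown): void {
  // ... persist the record ...
}
```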
The git history test:
Examine the history of abstraction-heavy areas. If an interface or abstract class has been repeatedly modified to accommodate new use cases, it was likely abstracted prematurely. Good abstractions stabilize once established.
The consumer frustration test:
Ask developers who consume an abstraction: "Do you ever wish you could bypass this?" If the answer is frequently yes, the abstraction is likely wrong—either premature, or simply the wrong shape for actual needs.
Count the workarounds. When code says "this is a special case" or "this breaks the normal pattern," those are signs that the "normal pattern" was designed before reality was understood. More workarounds = more premature abstraction.
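The Apple Pay example from the table shows what those workarounds look like from the consumer's side. A hedged TypeScript sketch, with hypothetical names:

```typescript
interface Receipt { id: string }

interface PaymentProvider {
  name: string;
  charge(amount: number): Promise<Receipt>;
}

// Hypothetical helper for the redirect/session-based flow.
declare function startApplePaySession(amount: number): Promise<Receipt>;

// The "normal pattern" assumed every payment is a single charge() call, so
// the flow-based provider gets special-cased at the call site; the consumer
// fights the abstraction instead of using it.
async function checkout(provider: PaymentProvider, amount: number): Promise<Receipt> {
  if (provider.name === "applepay") {
    return startApplePaySession(amount); // workaround: bypasses PaymentProvider entirely
  }
  return provider.charge(amount);
}
```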
Here is a comprehensive checklist for auditing abstractions in any codebase. For each significant abstraction (interface, abstract class, generic framework), evaluate these criteria:
- Does it have more than one real implementation or caller, or concrete evidence that another is coming?
- Can you state, in one sentence, the specific problem it solves?
- Can consumers use it without workarounds or special cases?
- Has it stayed stable since it was introduced, rather than being repeatedly reshaped to fit new use cases?
- If you inlined it tomorrow, would anything actually get harder?
Scoring: the more "no" answers, the weaker the case for keeping the abstraction in its current form.
This checklist isn't rigidly quantitative—it's a thinking tool. The goal is to force explicit consideration of whether each abstraction earns its keep.
Consider running this audit quarterly on core abstractions, especially as requirements evolve. An abstraction that was appropriate at creation may no longer be appropriate as the system grows or pivots.
Once you've diagnosed a wrong abstraction, how do you fix it? The remediation strategy depends on the type of wrong abstraction:
- Over-abstraction: collapse the layers. Inline middleman classes, delete single-implementation interfaces and unused extension points, and keep the concrete code.
- Under-abstraction: extract the missing concept. Introduce value types for primitive-obsessed data and pull duplicated logic into one shared module.
- Premature abstraction: unwind to concrete code, accept some duplication, and re-abstract only once the real pattern is visible.
Avoid "big bang" refactoring. Fix wrong abstractions incrementally, in small safe steps, with comprehensive test coverage. Each step should leave the system in a working state. Large rewrites often introduce more problems than they solve.
In any codebase, you'll find multiple wrong abstractions. Which should you fix first? Prioritize by pain and churn: fix first the abstractions causing the most friction in the code you touch most often, and defer those sitting in stable, rarely-visited areas.
The opportunity cost lens:
When deciding whether to fix a wrong abstraction, consider opportunity cost. Every hour spent on refactoring is an hour not spent on features, bug fixes, or other improvements. Ask:
- How much is this abstraction costing us right now, in time and bugs?
- Will we be working in this area again soon, or is it stable?
- What is the smallest change that removes most of the pain?
Some wrong abstractions aren't worth fixing. If code is rarely touched and causes minimal pain, leave it alone. Perfect is the enemy of shipped.
Not all fixes need to be dedicated projects. The Boy Scout Rule ("leave it better than you found it") means opportunistically improving abstractions when you're already working in an area. This amortizes refactoring cost across feature development.
We've covered substantial ground in this module, exploring the many ways abstraction can go wrong and how to diagnose and fix each type. Let's consolidate the essential insights:
- Over-abstraction adds layers, interfaces, and flexibility that nobody uses; the cost is indirection and navigation overhead.
- Under-abstraction leaves duplication, primitive obsession, and shotgun surgery; the cost is that every change touches many files.
- Premature abstraction locks in a design before the real pattern is known; the cost is workarounds, misfit extensions, and repeated redesign.
- Diagnose from real pain rather than pattern-matching, and fix incrementally, prioritizing high-churn, high-pain areas.
The mastery goal:
The goal isn't to never create wrong abstractions—that's impossible. Requirements change, understanding deepens, and what was right yesterday becomes wrong tomorrow. The goal is to recognize wrong abstractions quickly and fix them efficiently.
With the diagnostic tools and remediation strategies from this module, you can evaluate any abstraction, determine whether it's appropriate, and take action if it isn't. This ability to tune abstraction levels over time is what distinguishes mature codebases from legacy nightmares.
You've completed the module on The Cost of Wrong Abstractions. You now have a comprehensive framework for understanding over-abstraction, under-abstraction, and premature abstraction—their causes, costs, symptoms, and fixes. Apply these principles to create systems that are appropriately abstracted: complex enough to handle real requirements, simple enough to understand and maintain.