Recognizing that complexity is costly is only half the battle. The real work is knowing how to simplify. This isn't about dumbing down systems—it's about finding the elegant minimum that achieves your goals without unnecessary elaboration.
This page is your practical toolkit. For each common source of complexity, we'll examine the complexity it creates, the simpler alternative, and when each approach applies.
You'll walk away with repeatable patterns for simplification—not just theory, but concrete techniques you can apply to your codebase today.
By the end of this page, you'll have a library of simplification patterns: configuration over code, composition over inheritance, inline over abstracted, synchronous over async, and several more. You'll understand when each applies and how to evaluate the tradeoff between simplicity and other concerns.
One of the most common sources of complexity is building systems that are too configurable. Teams create elaborate configuration schemas, plugin architectures, and domain-specific languages—when a simple, hardcoded approach would suffice.
The Problem: Premature Configurability
A team building a notification system might design a full rule engine: configurable triggers, a condition-expression language, pluggable action channels, per-rule priorities, schedules, and throttling.
This seems reasonable—until you realize that notifications have been the same for 3 years and the "business users" always ask engineering to make changes anyway.
```typescript
// ❌ COMPLEX: Over-engineered notification system
interface NotificationRule {
  id: string;
  trigger: TriggerExpression;
  condition: ConditionExpression;
  actions: ActionConfiguration[];
  priority: number;
  schedule?: ScheduleConfig;
  throttle?: ThrottleConfig;
}

class NotificationEngine {
  private ruleParser: RuleParser;
  private conditionEvaluator: ConditionEvaluator;
  private actionExecutor: ActionExecutor;
  private scheduler: NotificationScheduler;
  private throttler: RateLimiter;
  private channelRegistry: Map<string, NotificationChannel>;

  async processEvent(event: DomainEvent): Promise<void> {
    const rules = await this.loadRules(event.type);
    for (const rule of rules) {
      const context = this.buildContext(event);
      if (this.conditionEvaluator.evaluate(rule.condition, context)) {
        const throttleKey = this.throttler.getKey(rule, event);
        if (await this.throttler.shouldProcess(throttleKey)) {
          await this.scheduler.schedule(rule, context);
        }
      }
    }
  }

  // ... 500 more lines of infrastructure
}

// ✅ SIMPLE: Just send the notifications you actually send
class NotificationService {
  constructor(
    private emailSender: EmailSender,
    private pushSender: PushNotificationSender
  ) {}

  async orderShipped(order: Order, user: User): Promise<void> {
    await this.emailSender.send({
      to: user.email,
      template: 'order-shipped',
      data: { orderId: order.id, trackingUrl: order.trackingUrl }
    });

    if (user.pushEnabled) {
      await this.pushSender.send({
        userId: user.id,
        title: 'Order Shipped!',
        body: `Your order #${order.id} is on its way`
      });
    }
  }

  async paymentFailed(order: Order, user: User): Promise<void> {
    await this.emailSender.send({
      to: user.email,
      template: 'payment-failed',
      data: { orderId: order.id, retryUrl: order.paymentRetryUrl }
    });
  }

  // Add new methods as needed. Each is clear, testable, debuggable.
  // Total: ~100 lines covering all actual use cases.
}
```

Configure when: Changes are frequent (daily/weekly), require no logic changes, and non-engineers genuinely need to make them.
Code when: Changes are infrequent (monthly or less), require testing, or "non-engineers" actually means "engineers wearing a product hat."
Inheritance seems appealing for code reuse but creates tight coupling and rigid hierarchies. Composition offers the same code reuse with dramatically less complexity.
The Inheritance Trap
The ways inheritance violates KISS are subtle. The initial hierarchy looks reasonable:
Animal → Mammal → Dog
But requirements evolve. Now you need a FlyingMammal (bats), a SwimmingMammal (whales), and a FlyingSwimmingBird (ducks). The clean hierarchy becomes a diamond of death.
```typescript
// ❌ COMPLEX: Deep inheritance creates fragile hierarchies
abstract class BasePaymentProcessor {
  abstract validatePayment(payment: Payment): ValidationResult;
  abstract processPayment(payment: Payment): PaymentResult;
  abstract handleFailure(payment: Payment, error: Error): void;
  protected logTransaction(payment: Payment): void { /* ... */ }
  protected updateMetrics(result: PaymentResult): void { /* ... */ }
}

abstract class CardPaymentProcessor extends BasePaymentProcessor {
  protected validateCardNumber(card: Card): boolean { /* ... */ }
  protected checkFraudSignals(payment: Payment): FraudScore { /* ... */ }
}

class StripeCardProcessor extends CardPaymentProcessor {
  // Must understand and work with layers of inherited behavior
  // Changes to base classes ripple down unpredictably
  // Testing requires understanding the full hierarchy
}

class StripeCardProcessorWithRetry extends StripeCardProcessor {
  // 4 levels deep - who knows what's happening?
}

// ✅ SIMPLE: Composition - mix behaviors as needed
interface PaymentValidator {
  validate(payment: Payment): ValidationResult;
}

interface PaymentGateway {
  process(payment: Payment): Promise<PaymentResult>;
}

interface TransactionLogger {
  log(payment: Payment, result: PaymentResult): void;
}

interface FraudChecker {
  checkFraud(payment: Payment): FraudScore;
}

// Compose behaviors explicitly
class StripePaymentProcessor {
  constructor(
    private validator: PaymentValidator,
    private gateway: PaymentGateway,
    private logger: TransactionLogger,
    private fraudChecker: FraudChecker
  ) {}

  async processPayment(payment: Payment): Promise<PaymentResult> {
    // Every step is visible. No hidden inherited behavior.
    const validation = this.validator.validate(payment);
    if (!validation.valid) {
      return { success: false, errors: validation.errors };
    }

    const fraudScore = this.fraudChecker.checkFraud(payment);
    if (fraudScore.isHighRisk) {
      return { success: false, errors: ['Flagged for manual review'] };
    }

    const result = await this.gateway.process(payment);
    this.logger.log(payment, result);
    return result;
  }
}

// Easy to test: inject mocks for each component
// Easy to modify: change one component without affecting others
// Easy to understand: all behavior is explicit in this class
```

Why Composition Is Simpler
The Gang of Four design patterns book, published in 1994, already stated: "Favor composition over inheritance." Thirty years later, this remains critical advice.
Inheritance is still useful for true is-a relationships where substitutability matters (LSP). A Rectangle IS-A Shape—polymorphism is the goal. But most "inheritance" in production code is actually "reuse"—which composition handles better.
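As a contrast with the reuse-driven hierarchies above, here is a minimal sketch of inheritance used for genuine substitutability (the `Shape` types here are hypothetical illustrations, not part of the payment example):

```typescript
// Polymorphism is the goal: any Shape can stand in for any other.
abstract class Shape {
  abstract area(): number;
}

class Rectangle extends Shape {
  constructor(private width: number, private height: number) { super(); }
  area(): number { return this.width * this.height; }
}

class Circle extends Shape {
  constructor(private radius: number) { super(); }
  area(): number { return Math.PI * this.radius ** 2; }
}

// Callers depend only on the abstraction - this is the LSP payoff:
// the function works unchanged for every current and future Shape.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

const total = totalArea([new Rectangle(2, 3), new Circle(1)]);
// total === 6 + Math.PI
```

The test for whether inheritance is appropriate: does code like `totalArea` exist, treating all subclasses uniformly? If you only ever instantiate the leaf classes directly, you wanted composition.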
Premature abstraction is one of the most common sources of unnecessary complexity. The instinct to DRY (Don't Repeat Yourself) is good, but abstraction has costs.
The Abstraction Premium
Every abstraction requires a name to learn, an indirection to follow when reading, and a contract to maintain when changing.
For abstractions used many times, this premium is worth paying. For abstractions used 2-3 times, the cost often exceeds the benefit.
The Rule of Three
Don't abstract until you have three instances of the same code. With one instance, abstraction is speculative. With two, the pattern may be coincidental. With three, you have enough examples to create a good abstraction.
```typescript
// ❌ OVER-ABSTRACTED: Tiny abstractions add cognitive overhead

// utils/validators/stringValidators.ts
export function isNonEmpty(s: string): boolean {
  return s.length > 0;
}

// utils/validators/emailValidators.ts
export function hasAtSign(email: string): boolean {
  return email.includes('@');
}

// utils/validators/compositeValidators.ts
export function createValidator<T>(...validators: ((t: T) => boolean)[]) {
  return (value: T) => validators.every(v => v(value));
}

// Usage - now you have to track 3 files
const validateEmail = createValidator(isNonEmpty, hasAtSign);
if (validateEmail(input)) { /* ... */ }

// ✅ INLINE: Just write it where it's used

// In the component that needs validation
function handleSubmit(email: string): void {
  // Inline validation - obvious and editable
  if (!email || !email.includes('@')) {
    setError('Please enter a valid email');
    return;
  }
  // Continue with submission...
}

// If you need it in 3+ places, THEN extract:
function isValidEmail(email: string): boolean {
  return !!email && email.includes('@') && email.includes('.');
}
```

When Abstraction Helps
Good candidates for abstraction: code duplicated three or more times, concepts with a clear domain name, and logic that changes together for the same reason.
When to Stay Inline
Trivial expressions should stay inline: `x > 0` doesn't need `isPositive(x)`.

It's easier to extract an abstraction from inline code than to inline an inappropriate abstraction. Start inline. When you see real duplication (3+ times), extract. This is safer than predicting what abstraction you'll need.
Asynchronous architecture (queues, events, eventual consistency) offers real benefits: decoupling, resilience, scalability. But it also introduces significant complexity. Many systems adopt async patterns prematurely.
The Sync Default
Synchronous, request-response processing is dramatically simpler:
| Concern | Synchronous | Asynchronous |
|---|---|---|
| Error handling | Immediate, in context | Delayed, needs compensation |
| Debugging | Stack traces, logs | Distributed tracing, correlation IDs |
| Testing | Deterministic | Non-deterministic, timing-dependent |
| Data consistency | Immediate, ACID | Eventual, compensation needed |
| Coupling | Request-time dependency | Decoupled in time |
| Scalability | Scales with capacity | Absorbs spikes, retries |
| Latency | Sum of processing times | Amortized, can be lower |
```typescript
// ❌ OVER-COMPLEX: Event-driven for a simple CRUD operation

// Creating a user triggers a cascade of events
class UserRegistrationHandler {
  async handle(command: RegisterUserCommand): Promise<void> {
    // Create user
    const user = await this.userRepo.create(command);
    // Publish event (now we need event infrastructure)
    await this.eventBus.publish('UserRegistered', { userId: user.id });
  }
}

// Separate handlers - each could fail independently
class WelcomeEmailHandler {
  async handle(event: UserRegisteredEvent): Promise<void> {
    await this.emailService.sendWelcome(event.userId);
    // What if this fails? Retry? Dead letter queue? Manual intervention?
  }
}

class AnalyticsHandler {
  async handle(event: UserRegisteredEvent): Promise<void> {
    await this.analytics.trackSignup(event.userId);
    // Same failure questions
  }
}

// Now you need:
// - Event bus infrastructure
// - Retry policies for each handler
// - Dead letter queue handling
// - Distributed tracing to follow the flow
// - Understanding what "registered" means (is email sent?)

// ✅ SIMPLE: Synchronous orchestration

class UserService {
  async registerUser(command: RegisterUserCommand): Promise<User> {
    // Transaction: if any step fails, the whole operation fails
    // Caller knows immediately what happened
    const user = await this.userRepo.create(command);

    // If email fails, registration fails - user can retry
    await this.emailService.sendWelcome(user);

    // Analytics is fire-and-forget, but visible here
    this.analytics.trackSignup(user.id).catch(e =>
      this.logger.warn('Analytics failed', e)
    );

    return user;
  }
}

// One place with all the logic
// Clear failure modes
// No event infrastructure needed
// Easy to test end-to-end
```

Use async when: processing is slow (more than a few seconds) and users shouldn't wait; operations must survive caller failures; you need to smooth traffic spikes; or operations are truly independent. Don't use async just because microservices tutorials showed you message queues.
Every component in a system—every service, database, queue, cache—adds operational burden. The simplest architecture uses the minimum components that satisfy requirements.
The Component Tax
Each component requires deployment pipelines, monitoring, alerting, upgrades, security patching, and someone on the team who understands it.
This tax is paid per component, regardless of complexity. A simple logging service costs almost as much to operate as a complex order service.
```typescript
// ❌ OVER-COMPONENTIZED: Microservices for a small team

// Architecture for a 5-person startup:
// - API Gateway (Kong)
// - User Service (Node + PostgreSQL)
// - Order Service (Node + PostgreSQL)
// - Inventory Service (Node + PostgreSQL)
// - Payment Service (Node + PostgreSQL)
// - Notification Service (Node + Redis)
// - Message Queue (RabbitMQ)
// - Cache (Redis)
// - Service Mesh (Istio)
// - Container Orchestration (Kubernetes)
// - Logging (ELK Stack)
// - Monitoring (Prometheus + Grafana)

// Components: 15+
// Personnel: 5
// Result: Everyone is on-call for everything; features crawl

// ✅ RIGHT-SIZED: Modular monolith for the same team

// Architecture:
// - Single Application (Node.js)
// - Single Database (PostgreSQL)
// - Single Cache (Redis - optional, maybe pg is enough)
// - Simple Deployment (Heroku, Railway, or one EC2)

// Components: 2-3
// Personnel: 5
// Result: Team focuses on features, not infrastructure

// src/modules/users/
// - user.service.ts
// - user.controller.ts
// - user.repository.ts
//
// src/modules/orders/
// - order.service.ts
// - order.controller.ts
// - order.repository.ts
//
// src/modules/payments/
// - payment.service.ts
// - ...

// Same code organization, same boundaries
// But deployed as one process, one database
// When you genuinely need to scale, THEN extract

import { UserService } from './modules/users';
import { OrderService } from './modules/orders';
import { PaymentService } from './modules/payments';

// All in one process - sub-millisecond "network" calls
// Transaction across modules is trivial
// One deploy, one log stream, one monitoring dashboard
```

The Modular Monolith Approach
A modular monolith gives you clear module boundaries, a single deployment and log stream, trivial cross-module transactions, and a straightforward extraction path if a module later needs to scale independently.
The key insight: boundaries matter, but distribution doesn't—until it does. You can have clean, separated modules without the operational cost of separate services.
Almost every successful microservices company started with a monolith: Amazon, Netflix, Twitter. They extracted services when they had concrete scaling problems and dedicated teams to own them. Starting with microservices is starting with complexity.
Object-Relational Mapping (ORM) tools promise to hide database complexity. In practice, they often add complexity while obscuring what's actually happening.
The ORM Trap
ORMs seem simpler initially: no SQL to write, rows arrive as objects, and basic CRUD is a single method call. But the complexity is hidden, not eliminated: the ORM still generates SQL, and you eventually have to understand both the generated queries and the ORM's own semantics, such as lazy versus eager loading.
```typescript
// ❌ ORM COMPLEXITY: Hidden n+1 queries

// This looks simple...
async function getOrdersWithProducts(userId: string): Promise<Order[]> {
  const orders = await Order.findAll({
    where: { userId },
    include: [{ model: Product }]
  });
  return orders;
}

// But what SQL does this generate?
// With 50 orders and 3 products each:
// - 1 query for orders
// - 50 queries for products (if lazy loading)
// - OR 1 query with JOINs that returns 150 rows (if eager)
// - Memory mapping 150 row results to objects

// Debugging requires ORM expertise, not SQL knowledge

// ✅ DIRECT SQL: Explicit and controllable

async function getOrdersWithProducts(userId: string): Promise<OrderWithProducts[]> {
  const result = await db.query(`
    SELECT
      o.id AS order_id,
      o.created_at,
      o.total,
      json_agg(json_build_object(
        'id', p.id,
        'name', p.name,
        'price', p.price,
        'quantity', op.quantity
      )) AS products
    FROM orders o
    JOIN order_products op ON o.id = op.order_id
    JOIN products p ON op.product_id = p.id
    WHERE o.user_id = $1
    GROUP BY o.id
    ORDER BY o.created_at DESC
  `, [userId]);

  return result.rows;
}

// One query. You know exactly what hits the database.
// You can EXPLAIN ANALYZE it. You can optimize the indexes.
// Debugging is straightforward SQL investigation.
```

The Query Builder Middle Ground
You don't have to choose between full ORM and raw string SQL. Query builders like Knex, Kysely, or Prisma's raw queries offer SQL-shaped code with automatic parameterization, composable query fragments, and (in TypeScript) typed results.
This gives you safety without hiding what's happening.
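To make the middle ground concrete, here is a deliberately tiny, hypothetical builder (not any real library's API) showing the core idea: the call chain mirrors the SQL it produces, and parameter binding is automatic rather than string-concatenated:

```typescript
// A toy query builder - illustrative only, not a real library's API.
class Query {
  private wheres: string[] = [];
  private params: unknown[] = [];

  constructor(private table: string, private columns: string[]) {}

  where(column: string, value: unknown): this {
    this.params.push(value);
    // Placeholders are generated; values never enter the SQL text
    this.wheres.push(`${column} = $${this.params.length}`);
    return this;
  }

  toSQL(): { text: string; values: unknown[] } {
    const where = this.wheres.length
      ? ` WHERE ${this.wheres.join(' AND ')}`
      : '';
    return {
      text: `SELECT ${this.columns.join(', ')} FROM ${this.table}${where}`,
      values: this.params
    };
  }
}

const q = new Query('orders', ['id', 'total'])
  .where('user_id', 'u1')
  .where('status', 'shipped')
  .toSQL();

// q.text   -> "SELECT id, total FROM orders WHERE user_id = $1 AND status = $2"
// q.values -> ['u1', 'shipped']
```

Real builders add joins, type inference, and escaping of identifiers, but the principle is the same: you can read the generated SQL off the code, which is exactly what a heavyweight ORM hides.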
ORMs can be valuable for simple CRUD applications where most operations are single-table inserts/updates/deletes. They reduce boilerplate for straightforward cases. But the moment you're tuning performance or writing complex queries, be prepared to understand—and bypass—the ORM.
Dan McKinley's influential essay "Choose Boring Technology" articulates a crucial simplicity pattern: Unknown technology introduces unknown complexity.
Innovation Tokens
McKinley suggests teams have a limited budget of "innovation tokens"—capacity to deal with the unknown. Every new, unproven technology spends a token.
Spend your innovation tokens on your core product, not on infrastructure.
| Need | Novel Choice | Boring Choice | Innovation Cost |
|---|---|---|---|
| Database | CockroachDB, TiDB | PostgreSQL | ~6+ months to understand distributed DB edge cases |
| Queue | Kafka, Pulsar | SQS, Redis | ~3 months to understand partitioning, retention, exactly-once |
| Cache | Custom distributed cache | Redis, Memcached | ~2 months to handle invalidation, consistency |
| API | GraphQL + Federation | REST | ~3 months to understand N+1s, caching, federation |
| Language | Rust, Elixir | Node.js, Python, Go | ~6 months to achieve team proficiency |
| Container Orchestration | Nomad, custom K8s | Managed K8s, Heroku | ~6 months to understand networking, observability |
Boring Doesn't Mean Bad
PostgreSQL, Redis, Linux, HTTP, JSON—these technologies are "boring" because they're well-understood. This isn't a weakness: their failure modes are documented, their experts are hireable, and nearly every problem you hit has already been solved publicly.
Novel technology can be the right choice when it solves a genuine problem that boring technology can't. But the bar should be high: "This new database is exciting" isn't sufficient justification.
```typescript
// A "boring" but highly effective tech stack

// Web Framework: Express.js
// - 10+ years of production hardening
// - Everyone knows it
// - Every problem has a documented solution

// Database: PostgreSQL
// - Most powerful open-source RDBMS
// - JSON support for flexible schema when needed
// - Full-text search built-in
// - Extensions for nearly anything (PostGIS, pg_trgm, etc.)

// Cache: Redis
// - Simple, fast, reliable
// - Data structures beyond key-value
// - Well-understood clustering and replication

// Queue: Simple database table or Redis
// - No Kafka complexity for 99% of use cases
// - Easy to debug and monitor
// - Upgrade to real queue when volume demands

// Deployment: Single cloud provider managed services
// - AWS RDS, ElastiCache, ECS/Fargate
// - OR Heroku, Render, Railway for even simpler
// - No Kubernetes unless you have the team for it

// This stack powers companies from 0 to millions of users
// Total innovation tokens spent: 0
// All tokens reserved for actual product differentiation
```

New technologies are tempting because they're exciting and look good on resumes. But your users don't care what database you use—they care if the product works. Every hour debugging Kafka is an hour not spent on features your customers will pay for.
You now have a practical toolkit for simplification. Let's consolidate the patterns: configuration over code only when changes are frequent, composition over inheritance, inline over premature abstraction, synchronous over async by default, fewer components over many, explicit SQL over heavyweight ORMs, and boring technology over novel.
The Simplification Mindset
Each pattern follows the same principle: start with the simplest approach that could work, then add complexity only when forced by concrete evidence.
This isn't laziness—it's discipline. The easy path is to add sophistication, abstractions, and infrastructure. The hard path is to resist until the need is undeniable.
What's Next:
The final page of this module addresses a critical question: When does simplicity conflict with other goals? We'll examine tensions between simplicity and performance, extensibility, correctness, and other concerns—and develop frameworks for navigating these trade-offs.
You now have concrete patterns to replace complexity with simplicity. Each pattern is a decision framework: understand the trade-offs, recognize when each applies, and default to the simpler option until evidence suggests otherwise. In the next page, we'll tackle the hard question of what to do when simplicity conflicts with other valid goals.