Organizations migrate to microservices seeking independence—independent deployments, independent scaling, independent teams. But many end up with something far worse than the monolith they left behind: a distributed monolith.
A distributed monolith looks like microservices from the outside (many services, network calls, APIs) but behaves like a monolith on the inside (coordinated deployments, shared data, cascading failures). It combines all the operational complexity of distributed systems with none of the benefits of true service independence.
This is not a rare edge case. Many organizations that report "struggling with microservices" are actually struggling with distributed monoliths. Understanding this anti-pattern is essential for anyone designing or evolving microservices architectures.
By the end of this page, you will understand exactly what makes a system a distributed monolith, recognize the warning signs in existing systems, understand why it's worse than a traditional monolith, and learn strategies for prevention and recovery.
A distributed monolith is a system of services that are nominally independent but are so tightly coupled that they must be changed, deployed, and scaled together. The services have network boundaries between them, but not independence boundaries.
The Defining Characteristic:
The core test for a distributed monolith is simple: Can you deploy Service A to production without coordinating with the teams that own Services B, C, and D?
If the answer is consistently "no"—if deploying one service routinely requires changes or deployments in other services—you have a distributed monolith, regardless of how many separate repositories, deployment pipelines, or Docker containers you have.
Types of Coupling That Create Distributed Monoliths:
1. Deployment Coupling Services must be deployed together in a specific order, or they break. Service B assumes Service A has the latest API version. Service C has hard-coded expectations about Service B's behavior.
2. Data Coupling Multiple services read and write the same data directly. They share a database, bypass service APIs, or make assumptions about each other's data models. Changing one service's schema breaks others.
3. Temporal Coupling Services must all be available simultaneously for any operation to complete. A chain of synchronous calls means if any service is down, everything is down.
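Temporal coupling also compounds arithmetically: the availability of a synchronous call chain is the product of each service's availability, so every hop lowers the ceiling. A quick illustration (the 99.9% per-service figure is an assumption chosen for the example):

```python
# Availability of a synchronous call chain is the product of each
# service's availability: one slow or down service stalls the whole chain.
def chain_availability(per_service: float, chain_length: int) -> float:
    return per_service ** chain_length

# Five services at 99.9% each yield roughly 99.5% end-to-end availability.
print(f"{chain_availability(0.999, 5):.4f}")  # 0.9950
```

In other words, chaining five "three nines" services synchronously costs you most of a nine before any real failure handling is considered.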
4. Behavioral Coupling Services have implicit expectations about each other's behavior, timing, or side effects. They work by coincidence, not by contract.
| Characteristic | Monolith | True Microservices | Distributed Monolith |
|---|---|---|---|
| Deployment | Single deployment unit | Independent deploys | Coordinated deploys |
| Network Calls | Local function calls | Network calls, isolated | Network calls, coupled |
| Data Storage | Shared database | Database per service | Shared or duplicated without ownership |
| Failure Isolation | Full system at risk | Isolated blast radius | Cascading failures |
| Team Autonomy | Limited | High | Limited (despite separate services) |
| Operational Complexity | Low | Medium-High | Highest |
| Performance | Local calls (fast) | Network overhead (managed) | Network overhead (unmanaged) |
A distributed monolith has all the costs of distributed systems (network latency, partial failures, serialization overhead, operational complexity) with none of the benefits (independent deployment, isolated scaling, team autonomy). It's strictly worse than either a well-designed monolith or well-designed microservices.
Distributed monoliths don't announce themselves. They emerge gradually as coupling accumulates. Watch for these warning signs:
Deployment Symptoms:
Operational Symptoms:
Performance Symptoms:
Data Symptoms:
Distributed monoliths emerge gradually. Each coupling decision seems minor—"just one shared table," "just one synchronized deployment," "just one direct call." But couplings accumulate, and by the time the pain is obvious, unwinding is expensive. Vigilance is required from day one.
Understanding how distributed monoliths emerge helps you prevent them:
Pattern 1: Hasty Decomposition
A team splits a monolith quickly, often under deadline pressure. They create service boundaries based on convenience (existing code modules, database tables) rather than domain analysis. The result: services that are technically separate but semantically coupled.
Example: The monolith had UserService, OrderService, and PaymentService modules. The team extracts each into a microservice without changing how they interact. They still share the same database, call each other synchronously for every operation, and have no fallback for failures.
Pattern 2: Shared Database Shortcut
Teams recognize that data independence requires effort—event publishing, eventual consistency, data migration. To "move faster," they share the database, promising to "fix it later." Later never comes, and now multiple services are coupled through data.
Example: Order Service and Shipping Service both need address data. Instead of properly owning data, both write to the same Address table. Now neither can change the schema without coordinating with the other.
Pattern 3: Synchronous Everything
Teams default to synchronous REST calls for all inter-service communication because it's familiar. Request-response is easy to understand and debug locally. But it creates temporal coupling—all services must be available simultaneously.
Example: Creating an order requires synchronous calls to Inventory (to reserve), Pricing (to calculate), Customer (to validate), and Payment (to charge). If any service is slow or down, order creation fails completely.
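The failure mode in this example can be simulated in a few lines. This is a hedged sketch, not real service code: the service names come from the example above, and the calls are stubbed in memory to show that one unavailable dependency fails the entire operation.

```python
# Minimal simulation of the synchronous order-creation chain:
# if any downstream call fails, the whole operation fails.
class ServiceDown(Exception):
    pass

def call(service: str, healthy: set[str]) -> str:
    if service not in healthy:
        raise ServiceDown(service)
    return f"{service}: ok"

def create_order(healthy: set[str]) -> list[str]:
    # Every step is a blocking call; there is no fallback, retry, or queue.
    return [call(s, healthy) for s in ("inventory", "pricing", "customer", "payment")]

print(create_order({"inventory", "pricing", "customer", "payment"}))
try:
    create_order({"inventory", "pricing", "customer"})  # payment is down
except ServiceDown as down:
    print("order creation failed:", down)
```

The point is structural: with four mandatory synchronous dependencies, the order path has four independent ways to fail completely.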
Pattern 4: Shared Libraries with Behavior
Utility libraries grow from shared utilities (logging, metrics) to shared business logic (domain models, validation rules). Now services can't evolve independently because changing the library breaks them all.
Example: A "common" library contains the Order domain object with validation logic. Both Order Service and Reporting Service use it. Changing Order validation breaks reporting, even though reporting doesn't create orders.
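A minimal sketch of this coupling, with hypothetical names: a shared `Order` class carries validation, so a rule change intended for Order Service silently changes behavior in Reporting Service as well.

```python
# Hypothetical "common" library: business rules live in a shared class,
# so tightening validation for one consumer changes all consumers.
from dataclasses import dataclass

@dataclass
class Order:
    total: float

    def validate(self) -> bool:
        # v1 rule. If Order Service tightens this (e.g. total >= 1.00),
        # Reporting Service's handling of historical low-value orders
        # breaks, even though reporting never creates orders.
        return self.total > 0

# Reporting re-runs validation it never needed, on historical data.
historical = [Order(0.50), Order(120.00)]
valid = [o for o in historical if o.validate()]
print(len(valid))  # 2 under the v1 rule; a tightened rule would drop one
```

Shared code is safe when it is behavior-free (logging, metrics, serialization helpers); shared business rules are coupling in disguise.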
Pattern 5: API Coupling
Services expose rich, fine-grained APIs that mirror internal structure. Consumers depend on implementation details, not stable abstractions. Any internal change breaks consumers.
Example: Order Service exposes GET /orders/{id}/lines/{lineId}/product for individual order line products. Consumers build UIs around this structure. When Order Service refactors to embed products differently, every consumer breaks.
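One common remedy is to publish a consumer-oriented view instead of mirroring internal structure. The sketch below is an assumption-laden illustration (field names and shapes are invented): the internal representation stays private, and only the stable view is promised to consumers.

```python
# Instead of exposing internal structure (orders/{id}/lines/{lineId}/product),
# expose a consumer-oriented view that survives internal refactoring.
def order_view(order: dict) -> dict:
    # The internal shape (lines, embedded products) is private; only
    # the contract returned here is promised to consumers.
    return {
        "id": order["id"],
        "products": [line["product"]["name"] for line in order["lines"]],
    }

internal = {"id": "o-1", "lines": [{"product": {"name": "Widget"}}]}
print(order_view(internal))
```

When Order Service later restructures how products are embedded, only `order_view` changes; consumers keep working.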
Unwinding a distributed monolith is expensive—often requiring multi-quarter efforts to fix data ownership, change communication patterns, and migrate consumers. Preventing coupling in the first place, through discipline and architectural governance, is dramatically cheaper.
Preventing distributed monoliths requires intentional practices throughout the development lifecycle:
Strategy 1: Enforce Data Ownership
Every piece of data has exactly one owner service. Other services access data through that owner's API, never directly. This prevents the shared database trap.
Implementation:
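A minimal sketch of the ownership rule, using the address example from earlier (class and method names are assumptions, and storage is stubbed in memory): Shipping never touches the address table; it goes through the owner's API.

```python
# Sketch: address data is owned by Customer Service. Other services
# depend on its API, never on its schema or database.
class CustomerService:
    def __init__(self) -> None:
        self._addresses = {"cust-1": "221B Baker St"}  # private storage

    def get_address(self, customer_id: str) -> str:
        return self._addresses[customer_id]

class ShippingService:
    def __init__(self, customers: CustomerService) -> None:
        self._customers = customers  # coupled to the API, not the table

    def label_for(self, customer_id: str) -> str:
        return f"Ship to: {self._customers.get_address(customer_id)}"

print(ShippingService(CustomerService()).label_for("cust-1"))
```

Because only Customer Service can see `_addresses`, it can change its schema freely; the `get_address` contract is the only thing it must keep stable.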
Strategy 2: Async by Default
Default to asynchronous communication; use synchronous calls only when an immediate response is genuinely required. This prevents temporal coupling.
Implementation:
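The async-by-default pattern can be sketched with an in-memory queue standing in for a real broker (event and service names are assumptions): the producer publishes and returns, and the consumer catches up whenever it runs, so the two services need not be available at the same moment.

```python
# Sketch of async-by-default: Order Service publishes an event and
# returns immediately; Shipping processes the queue on its own schedule.
from collections import deque

queue: deque = deque()  # stand-in for a broker (Kafka, RabbitMQ, ...)

def place_order(order_id: str) -> None:
    queue.append({"type": "OrderPlaced", "order_id": order_id})  # fire and forget

shipments: list = []

def shipping_consumer() -> None:
    while queue:
        event = queue.popleft()
        if event["type"] == "OrderPlaced":
            shipments.append(event["order_id"])

place_order("o-42")   # succeeds even if Shipping is down right now
shipping_consumer()   # Shipping catches up later
print(shipments)
```

The key property is that `place_order` completes regardless of Shipping's availability; the synchronous chain in Pattern 3 cannot make that claim.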
Strategy 3: Contract-First Development
Define service contracts (APIs, events) before implementation. Contracts are versioned, documented, and owned. Changes require consumer analysis.
Implementation:
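A consumer-driven contract check can be sketched very simply (field names and types are assumptions for illustration): the consumer declares the fields it relies on, and the provider's response is verified against that declaration before release.

```python
# Sketch of a consumer-driven contract check: the consumer records the
# fields it depends on; the provider must keep satisfying them.
CONSUMER_CONTRACT = {"order_id": str, "status": str}

def satisfies_contract(response: dict, contract: dict) -> bool:
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in contract.items()
    )

provider_response = {"order_id": "o-1", "status": "shipped", "internal_flag": True}
# Extra fields are fine; missing or retyped contracted fields fail the build.
print(satisfies_contract(provider_response, CONSUMER_CONTRACT))  # True
```

Real tooling (e.g. Pact-style contract testing) adds versioning and provider verification pipelines, but the core idea is exactly this check, run in CI for every consumer.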
Strategy 4: Deployment Independence Testing
Regularly verify that each service can be deployed independently. If deployment independence breaks, fix it before it accumulates.
Implementation:
Strategy 5: Architecture Governance
Establish architectural principles and review mechanisms to catch coupling early. This isn't bureaucracy—it's quality assurance for architecture.
Implementation:
| Practice | What It Prevents | How to Verify |
|---|---|---|
| Database per service | Data coupling | Audit database access credentials and queries |
| Async communication | Temporal coupling | Review service dependencies and measure sync call counts |
| Contract testing | API coupling | Track contract test coverage and breaking changes |
| Independent deployability | Deployment coupling | Regular solo deployments to staging/production |
| Behavior-free shared code | Library coupling | Audit shared libraries for business logic |
Where possible, automate coupling detection. Lint rules that detect direct database access to other services. Build checks that identify new dependencies on shared libraries. Metrics that alert when synchronous call counts increase. Make the right thing easy and the wrong thing hard.
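One such automated check can be sketched in a few lines (the table names and SQL snippet are assumptions): scan a service's source for SQL statements that touch tables owned by other services, and fail the build on any hit.

```python
# Sketch of an automated coupling check: flag SQL that reads or writes
# tables this service does not own.
import re

FOREIGN_TABLES = {"addresses", "customers"}  # owned by other services

def find_foreign_table_access(source: str) -> set:
    tables = set(
        re.findall(r"(?:FROM|JOIN|UPDATE|INSERT INTO)\s+(\w+)", source, re.IGNORECASE)
    )
    return tables & FOREIGN_TABLES

code = 'db.execute("SELECT street FROM addresses WHERE id = ?")'
violations = find_foreign_table_access(code)
print(violations)  # {'addresses'} -> fail the build
```

A regex scan is crude (ORMs and dynamic SQL evade it), but even a crude check run in CI catches the "just one shared table" shortcut before it ships.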
If you're already in a distributed monolith, recovery is possible but requires sustained effort:
Step 1: Acknowledge the Problem
Many organizations resist admitting they have a distributed monolith. "We have microservices—we just need better tooling." Until leadership acknowledges the architectural problem, resources won't be allocated to fix it.
Action: Present evidence—deployment coordination frequency, cross-service change rates, incident reports showing cascading failures. Make the pain quantifiable.
Step 2: Map the Coupling
Understand where coupling exists. Create a coupling map showing:
This map helps prioritize remediation efforts.
Step 3: Establish Data Ownership
The hardest and most important step. For each entity, designate one owning service. Then:
This is iterative—tackle one entity at a time.
Step 4: Introduce Async Communication
For each synchronous dependency, evaluate:
Convert opportunistically as you touch code—don't try to fix everything at once.
Step 5: Stabilize Contracts
Introduce contract testing and versioning:
Step 6: Progressive Decoupling
Fix coupling incrementally, prioritizing:
Avoid big-bang rewrites. Use Strangler Fig pattern to gradually migrate.
Recovering from a distributed monolith is not a sprint. It typically takes quarters to years of sustained effort, depending on coupling depth. Set expectations appropriately. Quick fixes don't exist—only sustained architectural improvement through disciplined effort.
When facing distributed monolith problems, sometimes the answer isn't "better microservices"—it's reconsidering whether you need microservices at all.
The Modular Monolith:
A modular monolith is a single deployable unit with well-defined internal modules that could become services later. It provides many microservices benefits without the distributed systems complexity:
Benefits:
Trade-offs:
When to Consider:
Conversion Path:
A modular monolith provides a clean path to microservices if/when needed:
This "microservices-ready monolith" avoids both monolith pain (no boundaries) and distributed monolith pain (false boundaries).
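Module boundaries in a modular monolith can be enforced mechanically. The sketch below is a hedged illustration (module names and the allowlist are assumptions): an explicit dependency allowlist, checked in CI, keeps modules as decoupled as future services would be.

```python
# Sketch of enforcing module boundaries inside a modular monolith:
# an allowlist of which module may depend on which, checked in CI.
ALLOWED = {
    "orders": {"catalog"},    # orders may call catalog's public API
    "shipping": {"orders"},
    "catalog": set(),         # catalog depends on nothing
}

def check_import(importer: str, imported: str) -> bool:
    return imported in ALLOWED.get(importer, set())

print(check_import("shipping", "orders"))   # True: allowed dependency
print(check_import("catalog", "shipping"))  # False: would invert the boundary
```

Tools such as import-linter implement this idea for real codebases; the payoff is that each module already has the API discipline of a service, so extraction later is mostly mechanical.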
The Honesty Test:
If your microservices have:
...you don't have microservices. You have a distributed monolith. A modular monolith would be simpler, faster, and more reliable.
The goal isn't microservices—it's effective software delivery. If a modular monolith achieves your goals with less complexity, it's the better choice. Microservices are a solution to organizational scaling problems, not a universal architecture. Choose the approach that solves your actual problems.
Let's examine a realistic distributed monolith scenario and recovery approach:
The Situation: E-Commerce Platform
A mid-size e-commerce company migrated from a monolith to microservices two years ago. They now have 35 services. But:
Diagnosis: Clear Distributed Monolith
Evidence of coupling:
Recovery Approach:
Phase 1 (Q1): Establish Data Ownership
Phase 2 (Q2): Critical Path Decoupling
Phase 3 (Q3-Q4): Progressive Migration
Phase 4 (Year 2): Operational Independence
Outcomes After Year 1:
Key Learnings:
This case study illustrates pattern, not prescription. Your distributed monolith has different coupling patterns, different pain points, different organizational dynamics. Use the diagnostic approaches and principles, but develop a recovery plan tailored to your specific situation.
The distributed monolith is the most common and costly mistake in microservices adoption. Recognizing and addressing it is essential for realizing microservices benefits. Let's consolidate the key insights:
Module Complete:
This concludes Module 1: Service Boundaries. We've covered the fundamental importance of service boundaries, Domain-Driven Design methodology, bounded contexts, service granularity, and the distributed monolith anti-pattern.
You now have the conceptual foundation to define service boundaries that enable—rather than hinder—the benefits of microservices architecture. The next module explores Inter-Service Communication—how services talk to each other effectively.
You now understand the distributed monolith anti-pattern—what it is, how to recognize it, how it forms, how to prevent it, and how to recover from it. Combined with the previous pages on boundaries, DDD, bounded contexts, and granularity, you have a comprehensive foundation for designing service boundaries that deliver true microservices independence.