One of the core promises of microservices is independent deployability—the ability for teams to release changes to their services without coordinating with every consumer. Yet this promise rings hollow if API changes break dependent services.
In a monolithic application, refactoring an internal function signature is safe—the compiler catches all call sites. In microservices, changing an API can silently break dozens of consumers at runtime. The service that changed might pass all its tests, only to discover during deployment that it has caused an outage across the entire platform.
Service versioning is the discipline that resolves this tension. Done well, it enables continuous evolution while maintaining stability. Done poorly—or not at all—it forces organizations back into coordinated big-bang releases, negating the benefits of microservices.
By the end of this page, you will understand versioning strategies for APIs and events, learn to distinguish breaking from non-breaking changes, master deprecation and sunset workflows, and implement versioning schemes that enable confident evolution while protecting consumers.
Before diving into versioning strategies, let's understand why this problem is both difficult and critical.
In a microservices ecosystem:

- Providers and consumers deploy on independent schedules; you cannot force consumers to upgrade when you release
- The full set of consumers is often unknown to the provider
- Incompatibilities surface at runtime, not at compile time
This is fundamentally different from library versioning (where consumers choose when to upgrade) or monolithic code (where refactoring is atomic).
Coordinated Deployments: Without versioning discipline, teams must coordinate releases. "Everyone deploy at 2 AM on Sunday." This negates microservices' key benefit.
Fear of Change: When any change might break unknown consumers, teams stop evolving their APIs. Technical debt accumulates.
Breaking Production: Silent compatibility breaks reach production. Users experience errors. Incident response is expensive.
Consumer Lock-In: Consumers build against implicit behaviors. Making any change becomes nearly impossible.
| Level | Characteristics | Risk |
|---|---|---|
| 0: No versioning | YOLO changes; hope nothing breaks | Constant production incidents |
| 1: Ad-hoc versioning | Version when breaking; inconsistent schemes | Confusion; missed breaking changes |
| 2: Consistent versioning | Semver; all APIs versioned; changelog | Breaking changes still possible |
| 3: Contract testing | Automated compatibility checks in CI | Breaking changes caught before merge |
| 4: Proactive management | Deprecation policies; consumer tracking; sunset planning | Controlled, predictable evolution |
Every behavior of your API becomes an implicit contract. Return null instead of empty array once, and some consumer will depend on it. Fix a typo in an error message, and some consumer's error handling breaks. Versioning must be deliberate because implicit contracts form silently.
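To make this concrete, here is a hypothetical consumer pair (the types and functions are illustrative, not from any real client): one reads the items list defensively, the other bakes the server's historical null-for-empty behavior into its logic.

```typescript
// Hypothetical response shape: the server historically returned
// null (rather than []) when an order had no items.
interface OrderResponse {
  id: string;
  items: { sku: string }[] | null;
}

// Defensive consumer: survives both null and [] for "no items".
function countItems(order: OrderResponse): number {
  return order.items?.length ?? 0;
}

// Fragile consumer: encodes the implicit null contract. If the server
// starts returning [] instead of null, this check's meaning silently
// changes even though no schema was "broken".
function hasItemsFragile(order: OrderResponse): boolean {
  return order.items !== null;
}
```

Note that the fragile consumer returns `true` for an empty array, which is exactly the kind of silent behavior shift an implicit contract produces.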
Several strategies exist for versioning APIs. Each has trade-offs around URL cleanliness, client complexity, and operational overhead.
Version number appears in the URL path:
```
GET /v1/orders/123
GET /v2/orders/123
```
Pros:
- Explicit: the version is visible in every URL, log line, and shared link
- Easy to route at the gateway or load-balancer level
- Cache-friendly: each version has distinct URLs

Cons:
- URLs change between versions, breaking bookmarks and hard-coded links
- Mixes a representation concern (the version) into the resource identifier
Version specified as query parameter:
```
GET /orders/123?version=2
GET /orders/123?api-version=2024-01-15
```
Pros:
- Base URLs stay stable across versions
- Easy to default to the latest version when the parameter is omitted

Cons:
- Easy for clients to omit accidentally, silently pinning them to the default
- The version mixes with functional query parameters, cluttering requests
- Harder for caches and routing rules to handle than a path prefix
Version in custom request header:
```
GET /orders/123
Accept: application/vnd.company.order.v2+json
```

Or:

```
GET /orders/123
Api-Version: 2
```
Pros:
- URLs stay clean and stable across versions
- Fits HTTP content negotiation, especially the `Accept` media-type form

Cons:
- The version is invisible in browser address bars and many log formats
- Requires layer-7 (header-aware) routing at the gateway
- Caches must be configured with a `Vary` header on the version header
| Strategy | Visibility | Routing Ease | Cache-Friendly | Best For |
|---|---|---|---|---|
| URL Path (/v1/) | High | Easy | Yes (per version) | Most APIs; recommended default |
| Query Parameter | Medium | Medium | Requires Vary header | When URL stability matters |
| Header | Low | Requires L7 | Requires Vary header | Content negotiation scenarios |
| Subdomain (v1.api.) | High | Easy at DNS | Yes | Major versions with full separation |
Most major API platforms (Google, Stripe, AWS) use URL path versioning. It's explicit, easy to implement, and works well with infrastructure tooling. Unless you have specific requirements favoring alternatives, URL path versioning is the safe default.
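As a minimal sketch of how URL path versioning is consumed by infrastructure, here is a helper (the name is hypothetical) that a gateway or middleware could use to split the version prefix off an incoming path:

```typescript
// Split a URL-path-versioned request path into its version number
// and the remaining resource path. Returns null for unversioned paths.
function splitVersionedPath(
  path: string
): { version: number; resource: string } | null {
  const match = path.match(/^\/v(\d+)(\/.*)$/);
  if (!match) return null;
  return { version: Number(match[1]), resource: match[2] };
}
```

Because the version lives in the path, this kind of parsing works identically in an application router, an API gateway, or a CDN routing rule.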
Major Version Only (v1, v2): Used when breaking changes are infrequent; issue a new major version for any breaking change.

Semantic Versioning (v1.2.3): Major.Minor.Patch following semver rules.

Date-Based (2024-01-15): The version is the release date.

No Explicit Version (Evolvable): Use API design patterns that never break (additive changes only).
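Whichever scheme you pick, the bump decision can be mechanical. A small sketch (the function and type names are hypothetical) that maps a change classification onto the next semantic version:

```typescript
// The most significant kind of change in a release, in semver terms.
type ChangeKind = "breaking" | "feature" | "fix";

// Compute the next semver identifier from the current version and
// the change classification: breaking → major, feature → minor,
// fix → patch.
function nextVersion(current: string, change: ChangeKind): string {
  const [major, minor, patch] = current.split(".").map(Number);
  switch (change) {
    case "breaking":
      return `${major + 1}.0.0`;
    case "feature":
      return `${major}.${minor + 1}.0`;
    case "fix":
      return `${major}.${minor}.${patch + 1}`;
  }
}
```

The hard part is never the arithmetic; it is classifying the change correctly, which the next section addresses.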
The fundamental question in versioning is: does this change require a new version? This depends on whether the change is "breaking"—would it cause correctly-written clients to fail?
A change is breaking if it can cause previously working clients to:

- Receive errors for requests that previously succeeded (for example, a newly required parameter)
- Fail to parse a response (removed or renamed fields, changed types)
- Misbehave because the meaning of a field, status, or default changed
Some changes are technically non-breaking but can still cause problems:
Adding request fields that affect behavior: If you add an `includeDeleted: boolean` parameter to a list endpoint with a default of `false`, old clients keep the same behavior only if deleted items were already excluded. If the old endpoint returned deleted items and the new default filters them out, any client that relied on seeing them breaks.
Adding enum values: Strict deserializers might fail on unknown enum values. The change is only safe if clients ignore unknown values—which not all do.
Performance changes: Making an endpoint 10x slower isn't a "breaking change" per schema, but it can break clients with tight timeouts.
Pagination changes: Changing from offset to cursor pagination is technically additive (new parameters) but can break clients relying on offset behavior.
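The unknown-enum-value pitfall above can be defused on the consumer side with a tolerant reader. A sketch; the status values and the `"unknown"` sentinel are illustrative assumptions:

```typescript
// Enum values this consumer was built against.
const KNOWN_STATUSES = ["pending", "paid", "shipped"] as const;
type OrderStatus = (typeof KNOWN_STATUSES)[number] | "unknown";

// Map any unrecognized wire value to a sentinel instead of throwing,
// so the producer can add enum values without breaking this consumer.
function parseStatus(raw: string): OrderStatus {
  return (KNOWN_STATUSES as readonly string[]).includes(raw)
    ? (raw as OrderStatus)
    : "unknown";
}
```

Consumers that deserialize strictly (rejecting unknown values outright) are the ones that turn an "additive" enum change into a production incident.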
If you're debating whether a change is breaking, treat it as breaking. Use a new version or feature flag. The cost of a minor version bump is low; the cost of breaking production is high. Conservative classification prevents incidents.
```typescript
// Automated breaking change detection in CI
interface SchemaChange {
  path: string;
  changeType: 'added' | 'removed' | 'modified';
  breaking: boolean;
  description: string;
}

function analyzeSchemaChanges(
  oldSchema: OpenAPISchema,
  newSchema: OpenAPISchema
): SchemaChange[] {
  const changes: SchemaChange[] = [];

  // Check for removed paths (breaking)
  for (const path of Object.keys(oldSchema.paths || {})) {
    if (!newSchema.paths?.[path]) {
      changes.push({
        path,
        changeType: 'removed',
        breaking: true,
        description: `Endpoint ${path} was removed`,
      });
    }
  }

  // Check for removed response fields (breaking)
  for (const [path, pathItem] of Object.entries(oldSchema.paths || {})) {
    for (const [method, operation] of Object.entries(pathItem)) {
      const oldFields = extractResponseFields(operation);
      const newOperation = newSchema.paths?.[path]?.[method];
      const newFields = newOperation
        ? extractResponseFields(newOperation)
        : new Set();

      for (const field of oldFields) {
        if (!newFields.has(field)) {
          changes.push({
            path: `${path}.${method}.response.${field}`,
            changeType: 'removed',
            breaking: true,
            description: `Response field '${field}' was removed`,
          });
        }
      }
    }
  }

  // Check for new required request parameters (breaking)
  for (const [path, pathItem] of Object.entries(newSchema.paths || {})) {
    for (const [method, operation] of Object.entries(pathItem)) {
      const oldOperation = oldSchema.paths?.[path]?.[method];

      for (const param of operation.parameters || []) {
        if (param.required) {
          const existedBefore = oldOperation?.parameters?.some(
            p => p.name === param.name
          );
          if (!existedBefore) {
            changes.push({
              path: `${path}.${method}.parameters.${param.name}`,
              changeType: 'added',
              breaking: true,
              description: `Required parameter '${param.name}' was added`,
            });
          }
        }
      }
    }
  }

  // Check for type changes (breaking)
  // ... additional checks for field type modifications

  return changes;
}

// CI pipeline integration
async function checkBackwardCompatibility(): Promise<void> {
  const mainBranchSchema = await fetchSchemaFromBranch('main');
  const currentSchema = await loadCurrentSchema();

  const changes = analyzeSchemaChanges(mainBranchSchema, currentSchema);
  const breakingChanges = changes.filter(c => c.breaking);

  if (breakingChanges.length > 0) {
    console.error('❌ Breaking changes detected:');
    breakingChanges.forEach(c => {
      console.error(`  - ${c.path}: ${c.description}`);
    });
    console.error('\nTo proceed, either:');
    console.error('  1. Make the changes backward-compatible');
    console.error('  2. Create a new API version (v2)');
    console.error('  3. Add breaking-change-approved label (requires tech lead)');
    process.exit(1);
  }

  console.log('✅ No breaking changes detected');
}
```

When you release a new version, the old version must continue working until all consumers migrate. This requires running multiple versions simultaneously—which has significant operational implications.
Single Service, Multiple Versions (Code-Level): One service deployment handles all versions internally:
```
/v1/orders → OrderController.getV1()
/v2/orders → OrderController.getV2()
```
Pros:
Cons:
Separate Deployments (Service-Level): Distinct deployments for each major version:
```
order-service-v1 (legacy, maintenance mode)
order-service-v2 (active development)
```
Pros:
Cons:
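With separate deployments, something in front of the services must route each version prefix to its backend. A minimal routing-table sketch; the upstream hostnames are invented for illustration:

```typescript
// Hypothetical upstream addresses, one per deployed major version.
const upstreams: Record<string, string> = {
  v1: "http://order-service-v1.internal",
  v2: "http://order-service-v2.internal",
};

// Resolve an incoming request path to the upstream URL it should be
// proxied to; unknown versions and unversioned paths get null.
function resolveUpstream(path: string): string | null {
  const match = path.match(/^\/(v\d+)(\/.*)$/);
  if (!match) return null;
  const base = upstreams[match[1]];
  return base ? base + match[2] : null;
}
```

In practice this table usually lives in gateway or ingress configuration rather than application code, but the mapping is the same.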
```typescript
// Single service handling multiple API versions
import express from 'express';

const app = express();

// Version extraction middleware
function extractVersion(req: express.Request): number {
  // From URL path: /v2/orders/123 → 2
  const pathMatch = req.path.match(/^\/v(\d+)\//);
  if (pathMatch) return parseInt(pathMatch[1]);

  // From header: Api-Version: 2
  const headerVersion = req.get('Api-Version');
  if (headerVersion) return parseInt(headerVersion);

  // Default to latest stable
  return 2;
}

// Shared business logic
async function getOrderById(orderId: string): Promise<OrderEntity> {
  return orderRepository.findById(orderId);
}

// Version-specific response formatting
function formatOrderResponse(order: OrderEntity, version: number): object {
  switch (version) {
    case 1:
      // V1: Legacy format with deprecated fields
      return {
        id: order.id,
        customer_id: order.customerId, // snake_case in v1
        items: order.items.map(item => ({
          product_id: item.productId,
          qty: item.quantity, // 'qty' in v1, 'quantity' in v2
          price: item.price,
        })),
        total: order.total,
        status: order.status.toLowerCase(), // lowercase in v1
        created: order.createdAt.toISOString(),
      };

    case 2:
      // V2: Current format with camelCase
      return {
        id: order.id,
        customerId: order.customerId,
        items: order.items.map(item => ({
          productId: item.productId,
          quantity: item.quantity,
          unitPrice: item.price,
          lineTotal: item.quantity * item.price,
        })),
        total: order.total,
        currency: order.currency, // New in v2
        status: order.status,
        statusHistory: order.statusHistory, // New in v2
        createdAt: order.createdAt.toISOString(),
        updatedAt: order.updatedAt.toISOString(), // New in v2
      };

    default:
      throw new Error(`Unsupported API version: ${version}`);
  }
}

// Version-specific request parsing
function parseCreateOrderRequest(
  body: unknown,
  version: number
): CreateOrderInput {
  switch (version) {
    case 1: {
      // V1: snake_case, no idempotency key
      const v1Body = body as V1CreateOrderBody;
      return {
        customerId: v1Body.customer_id,
        items: v1Body.items.map(item => ({
          productId: item.product_id,
          quantity: item.qty,
        })),
      };
    }

    case 2: {
      // V2: camelCase, requires idempotency key
      const v2Body = body as V2CreateOrderBody;
      if (!v2Body.idempotencyKey) {
        throw new ValidationError('idempotencyKey is required in v2');
      }
      return {
        customerId: v2Body.customerId,
        items: v2Body.items,
        idempotencyKey: v2Body.idempotencyKey,
      };
    }

    default:
      throw new Error(`Unsupported API version: ${version}`);
  }
}

// Routes for both versions
app.get(['/v1/orders/:id', '/v2/orders/:id'], async (req, res) => {
  const version = extractVersion(req);
  const order = await getOrderById(req.params.id);

  if (!order) {
    return res.status(404).json({ error: 'Order not found' });
  }

  res.json(formatOrderResponse(order, version));
});

app.post(['/v1/orders', '/v2/orders'], async (req, res) => {
  const version = extractVersion(req);

  try {
    const input = parseCreateOrderRequest(req.body, version);
    const order = await createOrder(input);
    res.status(201).json(formatOrderResponse(order, version));
  } catch (error) {
    if (error instanceof ValidationError) {
      return res.status(400).json({ error: error.message });
    }
    throw error;
  }
});
```

Supporting many simultaneous versions is expensive. Every version needs testing, documentation, and maintenance. Limit concurrent versions (no more than 2-3) and aggressively sunset old versions. Version proliferation is a sign of poor migration support or unclear deprecation policies.
Creating new versions is only half the story. Without deprecation and sunset processes, old versions accumulate forever, creating maintenance burden and security risk.
1. Deprecation Announcement: Mark the version deprecated, add `Deprecation` and `Sunset` headers, publish a migration guide, and notify consumers.
2. Deprecation Period: The old version remains fully supported while consumers migrate; track remaining usage by consumer.
3. Sunset Warning: Escalate warnings and directly contact consumers still on the old version.
4. Sunset: Remove the version and return `410 Gone` with a pointer to its successor.
| Phase | Duration | Actions |
|---|---|---|
| Deprecation | Day 0 | Mark deprecated; add headers; announce |
| Migration | Days 1-90 | Migration support; consumer tracking |
| Warning | Days 91-120 | Escalate warnings; contact remaining users |
| Grace Period | Days 121-150 | Final warnings; feature freeze on the deprecated version |
| Sunset | Day 151 | Remove version; return 410 Gone |
| Cleanup | Day 152+ | Remove code; archive docs |
```typescript
// Deprecation middleware with standard headers
interface DeprecationConfig {
  version: string;
  deprecatedDate: Date;
  sunsetDate: Date;
  migrationGuide: string;
  replacedBy?: string;
}

const deprecatedVersions: Record<string, DeprecationConfig> = {
  'v1': {
    version: 'v1',
    deprecatedDate: new Date('2024-01-15'),
    sunsetDate: new Date('2024-06-15'),
    migrationGuide: 'https://docs.company.com/api/migration/v1-to-v2',
    replacedBy: 'v2',
  },
};

function deprecationMiddleware(
  req: express.Request,
  res: express.Response,
  next: express.NextFunction
): void {
  const version = extractVersion(req);
  const config = deprecatedVersions[`v${version}`];

  if (!config) {
    return next();
  }

  const now = new Date();

  // Check if past sunset date
  if (now >= config.sunsetDate) {
    res.status(410).json({
      error: 'GONE',
      message: `API version ${config.version} has been sunset as of ${config.sunsetDate.toISOString()}`,
      migrationGuide: config.migrationGuide,
      replacedBy: config.replacedBy,
    });
    return;
  }

  // Add deprecation/sunset headers (Sunset is standardized in RFC 8594)
  res.set('Deprecation', config.deprecatedDate.toISOString());
  res.set('Sunset', config.sunsetDate.toISOString());
  res.set('Link', `<${config.migrationGuide}>; rel="deprecation"`);
  if (config.replacedBy) {
    res.set('Link', `</${config.replacedBy}/${req.path}>; rel="successor-version"`);
  }

  // Custom headers for programmatic detection
  res.set('X-API-Deprecated', 'true');
  res.set('X-API-Sunset-Date', config.sunsetDate.toISOString());

  // Log usage for tracking
  logDeprecatedUsage({
    version: config.version,
    endpoint: req.path,
    method: req.method,
    clientId: extractClientId(req),
    timestamp: now,
  });

  // Warning in response body for non-HEAD requests
  const originalJson = res.json.bind(res);
  res.json = (body: unknown) => {
    if (typeof body === 'object' && body !== null) {
      return originalJson({
        ...body,
        _deprecation: {
          warning: `This API version is deprecated and will be removed on ${config.sunsetDate.toDateString()}`,
          migrationGuide: config.migrationGuide,
          sunsetDate: config.sunsetDate.toISOString(),
        },
      });
    }
    return originalJson(body);
  };

  next();
}

// Consumer tracking for targeted outreach
async function getDeprecatedVersionUsage(): Promise<DeprecatedUsageReport> {
  const usage = await analyticsService.query({
    metric: 'api_requests',
    filter: { deprecated: true },
    groupBy: ['version', 'clientId', 'endpoint'],
    range: '30d',
  });

  return {
    byVersion: aggregateByVersion(usage),
    byClient: aggregateByClient(usage),
    topEndpoints: getTopEndpoints(usage),
    migrationProgress: calculateMigrationProgress(usage),
  };
}
```

You can't sunset if you don't know who's affected. Track API usage by consumer identity (API key, OAuth client ID, etc.). Before any sunset, review the consumer list. High-value consumers get personal outreach. Unknown consumers get extra time or harder warnings.
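On the consumer side, the `Deprecation` and `Sunset` headers can be surfaced automatically so deprecations never go unnoticed. A client-side sketch; the helper name is hypothetical, and headers are assumed to arrive as a lowercase-keyed map:

```typescript
// Inspect response headers and return a human-readable warning if the
// API version is deprecated, so callers can log or alert on it.
function deprecationWarning(headers: Map<string, string>): string | null {
  const deprecated = headers.get("deprecation");
  if (!deprecated) return null;

  const sunset = headers.get("sunset");
  return sunset
    ? `API deprecated since ${deprecated}; sunset on ${sunset}`
    : `API deprecated since ${deprecated}`;
}
```

Wiring a check like this into a shared HTTP client library means every consuming team sees deprecation warnings in their own logs, without the provider having to chase them down individually.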
API versioning techniques apply to request-response interactions. Event versioning presents unique challenges because:

- Events are persisted: consumers may replay events published months or years ago, so every historical schema version stays live
- Producers don't know who consumes an event, or when
- There is no request: consumers cannot ask for a specific version; they receive whatever was published
Schema in Event: Include the schema version in each event:
```json
{
  "schemaVersion": 2,
  "type": "OrderPlaced",
  "data": { ... }
}
```
Consumers check version and handle appropriately.
Schema Registry: A centralized registry (e.g., Confluent Schema Registry) tracks schema evolution: producers register each new schema version, the registry rejects incompatible changes, and consumers resolve schemas by ID.
Avro/Protobuf Evolution: Binary formats with built-in schema evolution rules:
| Change Type | Avro Compatibility | Protobuf Compatibility |
|---|---|---|
| Add optional field | ✅ Backward & Forward | ✅ New field ignored by old consumers |
| Add required field | ❌ Breaks old consumers | N/A (no required in proto3) |
| Remove field | ✅ Old consumers see default | ✅ Old consumers see default |
| Rename field | ❌ Breaking | ✅ If field number unchanged |
| Change type | ❌ Breaking | ❌ Breaking |
| Add enum value | ✅ Forward compatible | ✅ Unknown becomes 0/default |
```typescript
// Event versioning with explicit schema versions
interface VersionedEvent {
  eventId: string;
  eventType: string;
  schemaVersion: number;
  timestamp: Date;
  payload: unknown;
}

// Event schema registry (in-memory example)
const eventSchemas: Record<string, Record<number, EventSchema>> = {
  'OrderPlaced': {
    1: {
      version: 1,
      fields: ['orderId', 'customerId', 'items', 'total'],
    },
    2: {
      version: 2,
      fields: ['orderId', 'customerId', 'items', 'total', 'currency', 'channel'], // New fields in v2
    },
  },
};

// Producer: Always publishes latest version
function publishOrderPlacedEvent(order: Order): VersionedEvent {
  return {
    eventId: crypto.randomUUID(),
    eventType: 'OrderPlaced',
    schemaVersion: 2, // Always latest
    timestamp: new Date(),
    payload: {
      orderId: order.id,
      customerId: order.customerId,
      items: order.items,
      total: order.total,
      currency: order.currency, // v2 field
      channel: order.channel, // v2 field
    },
  };
}

// Consumer: Handles multiple versions
class OrderEventHandler {
  async handle(event: VersionedEvent): Promise<void> {
    if (event.eventType !== 'OrderPlaced') return;

    // Normalize to latest version
    const normalizedPayload = this.normalizeToV2(event);

    // Process with normalized data
    await this.processOrderPlaced(normalizedPayload);
  }

  private normalizeToV2(event: VersionedEvent): OrderPlacedV2Payload {
    switch (event.schemaVersion) {
      case 1: {
        // Upgrade v1 to v2 with defaults
        const v1 = event.payload as OrderPlacedV1Payload;
        return {
          orderId: v1.orderId,
          customerId: v1.customerId,
          items: v1.items,
          total: v1.total,
          currency: 'USD', // Default for v1 events
          channel: 'unknown', // Default for v1 events
        };
      }

      case 2:
        return event.payload as OrderPlacedV2Payload;

      default:
        throw new Error(`Unknown schema version: ${event.schemaVersion}`);
    }
  }
}

// Schema registry integration (e.g., Confluent)
class SchemaRegistryClient {
  async registerSchema(
    subject: string,
    schema: object
  ): Promise<{ id: number }> {
    // POST to schema registry
    const response = await fetch(`${this.baseUrl}/subjects/${subject}/versions`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ schema: JSON.stringify(schema) }),
    });
    return response.json();
  }

  async checkCompatibility(
    subject: string,
    schema: object
  ): Promise<{ compatible: boolean }> {
    // POST to compatibility check endpoint
    const response = await fetch(
      `${this.baseUrl}/compatibility/subjects/${subject}/versions/latest`,
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ schema: JSON.stringify(schema) }),
      }
    );
    return response.json();
  }
}
```

The safest event evolution strategy is additive-only: only add new optional fields, never remove or change existing fields. This provides both forward and backward compatibility without explicit versioning. Combined with a schema registry that enforces this policy, you can evolve events safely.
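The additive-only policy is easy to enforce mechanically: accept a new schema only if it preserves every existing field. A sketch using plain field-name lists as a stand-in for real schema objects (the function names are illustrative):

```typescript
// Return the fields whose removal would violate the additive-only rule.
function removedFields(oldFields: string[], newFields: string[]): string[] {
  const next = new Set(newFields);
  return oldFields.filter(f => !next.has(f));
}

// A change is additive-only iff no existing field was removed.
// (New fields in newFields are always allowed.)
function isAdditiveOnly(oldFields: string[], newFields: string[]): boolean {
  return removedFields(oldFields, newFields).length === 0;
}
```

A real registry compares full schema documents (types, optionality, defaults), but the gate is the same shape: compute the destructive diff and reject the registration when it is non-empty.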
Service versioning is the mechanism that makes microservices independence real. Without it, the promise of independent deployability remains theoretical. With it, teams can evolve their services continuously while protecting their consumers.
Let's consolidate the key insights:

- Every observable behavior of your API becomes an implicit contract; version deliberately
- When in doubt, treat a change as breaking; the cost of a version bump is far lower than the cost of an outage
- URL path versioning is the pragmatic default for most APIs
- Limit concurrent versions to 2-3 and run a disciplined deprecation-to-sunset lifecycle, with `Deprecation`/`Sunset` headers and per-consumer usage tracking
- For events, prefer additive-only evolution backed by a schema registry that enforces compatibility
Module Complete:
You've now completed the Inter-Service Communication module. You understand the fundamental choice between synchronous and asynchronous communication, how to design and validate API contracts, which protocols to use for different scenarios, how to handle errors across service boundaries, and how to evolve services through versioning.
These skills form the foundation of building microservices that are truly independent—able to be developed, deployed, and evolved by autonomous teams while maintaining system-wide reliability.
You've mastered inter-service communication in microservices architectures. From sync vs async patterns to API contracts, protocol selection, error handling, and versioning—you now have the knowledge to design communication patterns that enable independent teams to build reliable distributed systems.