Imagine you've cached user data with this structure:
{ "id": "123", "name": "Alice", "email": "alice@example.com" }
Now you deploy a new version that expects:
{ "id": "123", "fullName": "Alice Smith", "email": "alice@example.com", "verified": true }
What happens to the old cached entries?
They're still valid data, but in the wrong format. Your new code reads fullName and gets undefined. It checks verified, and because undefined is falsy, every user silently appears unverified. Even worse, if you cache the new format, old instances of your service (still running during the deployment) may write the old format back, corrupting your cache.
This is the cache versioning problem: how do you manage cached data when its structure changes, not just its values?
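The failure above takes only a few lines to reproduce. This sketch uses the example shapes from the scenario; the cast is the key problem, since JSON.parse gives no runtime guarantee that the cached bytes match the new interface:

```typescript
// A cache entry written by the previous deployment (old shape)
const cachedRaw = '{"id":"123","name":"Alice","email":"alice@example.com"}';

// The shape the new deployment expects
interface UserV2 {
  id: string;
  fullName: string;
  email: string;
  verified: boolean;
}

// The cast asserts the new shape, but the data still has the old one
const user = JSON.parse(cachedRaw) as UserV2;

console.log(user.fullName); // undefined: the old entry only has 'name'
console.log(user.verified); // undefined: falsy, so the user looks unverified
```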
By the end of this page, you will understand how to design and implement cache versioning strategies. You'll learn version encoding approaches, migration patterns for gradual rollouts, and techniques for maintaining cache integrity across deployments.
Cache versioning becomes critical when any of these conditions exist:
1. Rolling Deployments: During deployment, multiple versions of your service run simultaneously. Old and new versions read and write the same cache. Without versioning, they can corrupt each other's entries.
2. Schema Evolution: Business requirements change. Fields are added, renamed, removed, or restructured. Cached data in the old format must be handled gracefully.
3. Serialization Format Changes: Switching from JSON to Protocol Buffers, or changing compression algorithms, makes old cached data unreadable.
4. A/B Testing and Feature Flags: Different user groups might receive different data shapes. Caching without version awareness can serve wrong data to wrong groups.
5. Multi-Region Deployments: Regions update at different times. A cache warm-up in one region shouldn't break another region running older code.
| Scenario | Problem | User Impact |
|---|---|---|
| Rolling deployment | Old server reads new format, fails to parse | Errors, degraded experience |
| Schema field added | New server reads old format, missing required field | Logic errors, incorrect behavior |
| Schema field renamed | Both old and new servers write conflicting formats | Data inconsistency, flickering UI |
| Serialization change | New server can't deserialize old format | Complete cache miss, load on backend |
| Feature flag split | User A receives data cached by User B's code path | Wrong feature variant shown |
In a rolling deployment with 100 instances updating over 10 minutes, you have a 10-minute window where old and new code coexist. Any cache entry written by old code can be read by new code, and vice versa. Without versioning, every deployment is a potential cache corruption event.
There are three primary approaches to encoding version information in cached data. Each has different tradeoffs for complexity, performance, and flexibility.
Key Prefix Versioning
Include the version in the cache key itself. Different versions use different key namespaces.
Pattern: {version}:{entity}:{id} → v2:user:123
Advantages:
- Complete isolation: old and new code never read each other's entries, so a rolling deployment cannot corrupt either namespace
- Low complexity: a one-line key builder, with no per-entry version parsing

Disadvantages:
- Cold cache on every version bump: the new namespace starts empty, shifting load to the backend
- Memory roughly doubles while two versions coexist during the transition
```typescript
/**
 * Key Prefix Versioning
 *
 * Cache keys include a version prefix, ensuring complete
 * isolation between schema versions.
 */

// Version is typically a constant in your codebase
const CACHE_SCHEMA_VERSION = 'v3';

class KeyPrefixVersionedCache {
  constructor(
    private cache: CacheClient,
    private version: string = CACHE_SCHEMA_VERSION,
  ) {}

  /**
   * Build versioned cache key
   */
  private buildKey(baseKey: string): string {
    return `${this.version}:${baseKey}`;
  }

  async get<T>(key: string): Promise<T | null> {
    const versionedKey = this.buildKey(key);
    const raw = await this.cache.get(versionedKey);
    return raw ? JSON.parse(raw) : null;
  }

  async set<T>(key: string, value: T, ttlSeconds: number): Promise<void> {
    const versionedKey = this.buildKey(key);
    await this.cache.setex(versionedKey, ttlSeconds, JSON.stringify(value));
  }

  async delete(key: string): Promise<void> {
    const versionedKey = this.buildKey(key);
    await this.cache.del(versionedKey);
  }

  /**
   * Clear all entries for a specific version (useful for cleanup)
   */
  async clearVersion(version: string): Promise<void> {
    const pattern = `${version}:*`;
    const keys = await this.cache.keys(pattern);
    if (keys.length > 0) {
      await this.cache.del(...keys);
    }
  }

  /**
   * Migrate from old version: read from old, write to new
   * Useful for warming new version's cache
   */
  async migrateEntry<T>(
    key: string,
    oldVersion: string,
    transform: (old: unknown) => T,
    ttlSeconds: number,
  ): Promise<T | null> {
    const oldKey = `${oldVersion}:${key}`;
    const raw = await this.cache.get(oldKey);
    if (!raw) return null;

    const oldData = JSON.parse(raw);
    const newData = transform(oldData);
    await this.set(key, newData, ttlSeconds);
    return newData;
  }
}

// Usage across deployments:
// v1: { name: "Alice" }
// v2: { name: "Alice", verified: false }
// v3: { fullName: "Alice", verified: false }

const cache = new KeyPrefixVersionedCache(redis, 'v3');

// New code writes to v3 namespace
await cache.set('user:123', { fullName: 'Alice', verified: false }, 3600);

// Old code (v2) writes to v2 namespace - no conflict
// await v2Cache.set('user:123', { name: 'Alice', verified: false }, 3600);

// Each reads only their own namespace - no corruption
```

| Strategy | Key Isolation | Memory Efficiency | Migration Path | Complexity |
|---|---|---|---|---|
| Key Prefix | Complete | Low (doubles during transition) | Cold cache | Low |
| Envelope | None | High | Gradual | Medium |
| Type Discriminator | None | High | Gradual | Medium |
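The envelope row of the table has no code sample on this page, so here is a minimal sketch of the idea (the `Envelope`, `wrap`, and `unwrap` names are illustrative, not from the original): the version travels inside the stored value, all versions share one key namespace, and old entries are normalized on read.

```typescript
// Envelope versioning (sketch): the version is stored alongside the data,
// so memory isn't doubled, but every read must inspect the envelope.
interface Envelope<T> {
  version: number;
  data: T;
}

const CURRENT_VERSION = 3;

function wrap<T>(data: T): string {
  const envelope: Envelope<T> = { version: CURRENT_VERSION, data };
  return JSON.stringify(envelope);
}

function unwrap<T>(raw: string, migrate: (data: unknown, from: number) => T): T {
  const envelope = JSON.parse(raw) as Envelope<unknown>;
  if (envelope.version === CURRENT_VERSION) {
    return envelope.data as T;
  }
  // Older entry: normalize to the current shape on read
  return migrate(envelope.data, envelope.version);
}

// Example: v2 stored { name }, v3 stores { fullName }
const v2Raw = JSON.stringify({ version: 2, data: { name: 'Alice' } });
const user = unwrap<{ fullName: string }>(v2Raw, (data) => ({
  fullName: (data as { name: string }).name,
}));
console.log(user.fullName); // "Alice"
```

This is the same `{ version, data }` shape the migration patterns below build on; the tradeoff versus key prefixing is gradual migration at the cost of versioning logic on every read.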
Version changes in production rarely happen instantaneously. During rolling deployments, A/B tests, or canary releases, multiple versions coexist. Migration patterns help manage this transition period gracefully.
```typescript
/**
 * Cache Migration Patterns
 *
 * Strategies for handling version transitions during
 * rolling deployments and gradual rollouts.
 */

// ============================================
// Pattern 1: Read-Migrate-Write
// ============================================

/**
 * On cache read, if old version detected:
 * 1. Migrate to current version
 * 2. Write migrated version back (async)
 * 3. Return migrated data
 *
 * Gradually migrates cache as entries are accessed.
 */
class ReadMigrateWriteCache<T> {
  constructor(
    private cache: CacheClient,
    private currentVersion: number,
    private migrator: (data: unknown, version: number) => T,
  ) {}

  async get(key: string): Promise<T | null> {
    const raw = await this.cache.get(key);
    if (!raw) return null;

    const { version, data } = JSON.parse(raw);

    if (version === this.currentVersion) {
      return data as T;
    }

    // Migrate
    const migrated = this.migrator(data, version);

    // Write back asynchronously (fire-and-forget)
    this.writeVersion(key, migrated).catch(() => {});

    return migrated;
  }

  private async writeVersion(key: string, data: T): Promise<void> {
    const ttl = await this.cache.ttl(key);
    const remainingTtl = Math.max(60, ttl);
    await this.cache.setex(key, remainingTtl, JSON.stringify({
      version: this.currentVersion,
      data,
    }));
  }
}

// ============================================
// Pattern 2: Version Tolerance (Read Any, Write New)
// ============================================

/**
 * Accept reads from multiple versions (within tolerance window),
 * but always write in current version.
 *
 * Useful when migration logic is complex or versions differ subtly.
 */
class VersionTolerantCache<T> {
  constructor(
    private cache: CacheClient,
    private currentVersion: number,
    private toleratedVersions: Set<number>,
    private normalizers: Map<number, (data: unknown) => T>,
  ) {}

  async get(key: string): Promise<T | null> {
    const raw = await this.cache.get(key);
    if (!raw) return null;

    const { version, data } = JSON.parse(raw);

    // Reject unknown versions
    if (!this.toleratedVersions.has(version)) {
      console.warn(`Rejecting cache entry with unknown version ${version}`);
      await this.cache.del(key); // Remove corrupt entry
      return null;
    }

    // Normalize to current format
    const normalizer = this.normalizers.get(version);
    return normalizer ? normalizer(data) : data as T;
  }

  async set(key: string, data: T, ttlSeconds: number): Promise<void> {
    await this.cache.setex(key, ttlSeconds, JSON.stringify({
      version: this.currentVersion,
      data,
    }));
  }
}

// Setup for version 3, tolerating 2 and 3
const userCache = new VersionTolerantCache<User>(
  cache,
  3, // Current version
  new Set([2, 3]), // Tolerated versions
  new Map([
    [2, (data: unknown) => migrateV2ToV3(data as UserV2)],
    [3, (data: unknown) => data as User],
  ]),
);

// ============================================
// Pattern 3: Write-Behind Migration
// ============================================

/**
 * Actively migrate cache entries in background,
 * independent of user requests.
 *
 * Useful for large caches or when you want
 * migration complete before old code is fully removed.
 */
class BackgroundCacheMigrator {
  constructor(
    private cache: CacheClient,
    private fromVersion: number,
    private toVersion: number,
    private migrator: (data: unknown) => unknown,
  ) {}

  async migrateAllEntries(keyPattern: string, batchSize: number = 100): Promise<MigrationStats> {
    const stats: MigrationStats = {
      scanned: 0,
      migrated: 0,
      skipped: 0,
      failed: 0,
    };

    let cursor = '0';
    do {
      // Scan for matching keys
      const [nextCursor, keys] = await this.cache.scan(
        cursor,
        'MATCH', keyPattern,
        'COUNT', batchSize.toString()
      );
      cursor = nextCursor;

      // Process batch
      await Promise.all(keys.map(async (key) => {
        stats.scanned++;
        try {
          const migrated = await this.migrateEntry(key);
          if (migrated) {
            stats.migrated++;
          } else {
            stats.skipped++;
          }
        } catch (error) {
          stats.failed++;
          console.error(`Failed to migrate ${key}:`, error);
        }
      }));

      // Rate limit to avoid overloading cache
      await this.sleep(100);
    } while (cursor !== '0');

    return stats;
  }

  private async migrateEntry(key: string): Promise<boolean> {
    const raw = await this.cache.get(key);
    if (!raw) return false;

    const { version, data } = JSON.parse(raw);

    // Skip if already migrated
    if (version >= this.toVersion) {
      return false;
    }

    // Skip if too old (not supported)
    if (version < this.fromVersion) {
      await this.cache.del(key);
      return false;
    }

    // Migrate and write back
    const migrated = this.migrator(data);
    const ttl = await this.cache.ttl(key);
    await this.cache.setex(key, Math.max(60, ttl), JSON.stringify({
      version: this.toVersion,
      data: migrated,
    }));

    return true;
  }

  private sleep(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

interface MigrationStats {
  scanned: number;
  migrated: number;
  skipped: number;
  failed: number;
}

// ============================================
// Pattern 4: Dual-Write During Transition
// ============================================

/**
 * Write to both old and new version keys during transition.
 * Old code reads old key, new code reads new key.
 * Removed after deployment complete.
 */
class DualWriteCache {
  constructor(
    private cache: CacheClient,
    private oldVersion: string,
    private newVersion: string,
    private inTransition: boolean = true,
  ) {}

  async set<T>(key: string, data: T, ttlSeconds: number): Promise<void> {
    const promises = [];

    // Always write to new format
    promises.push(
      this.cache.setex(
        `${this.newVersion}:${key}`,
        ttlSeconds,
        JSON.stringify(data)
      )
    );

    // During transition, also write old format
    if (this.inTransition) {
      const oldFormatData = this.convertToOldFormat(data);
      promises.push(
        this.cache.setex(
          `${this.oldVersion}:${key}`,
          ttlSeconds,
          JSON.stringify(oldFormatData)
        )
      );
    }

    await Promise.all(promises);
  }

  private convertToOldFormat<T>(data: T): unknown {
    // Convert new format to old format for backward compatibility
    // Implementation depends on your schema differences
    return data;
  }
}
```

The best way to handle version migrations is to avoid breaking changes in the first place. By designing cached data schemas with forward compatibility in mind, many version transitions become seamless.
Principles for Forward-Compatible Schemas:
```typescript
/**
 * Forward-Compatible Schema Design
 *
 * Example of designing cached schemas that gracefully
 * handle version evolution without breaking changes.
 */

// ============================================
// BAD: Breaking change requiring migration
// ============================================

// Version 1
interface UserBadV1 {
  name: string; // Full name
  email: string;
}

// Version 2 - BREAKING: renamed field
interface UserBadV2 {
  fullName: string; // Renamed from 'name'
  email: string;
}
// This BREAKS old code that expects 'name'
// This BREAKS new code reading old entries with 'name'

// ============================================
// GOOD: Non-breaking evolution
// ============================================

// Version 1
interface UserGoodV1 {
  name: string;
  email: string;
}

// Version 2 - COMPATIBLE: added fields
interface UserGoodV2 {
  name: string; // Keep for backward compatibility
  fullName?: string; // Optional new field (alias for richness)
  email: string;
  verified?: boolean; // Optional with implicit default (false)
}

// Version 3 - COMPATIBLE: more fields, deprecation notice
interface UserGoodV3 {
  /** @deprecated Use fullName instead */
  name: string; // Still present but deprecated
  fullName?: string;
  email: string;
  verified?: boolean;
  /**
   * Added in V3. Old entries won't have this.
   * Default: 'en-US'
   */
  locale?: string;
}

/**
 * Reader that handles all versions gracefully
 */
class ForwardCompatibleReader {
  read(raw: unknown): NormalizedUser {
    const data = raw as Record<string, unknown>;

    return {
      fullName: (data.fullName as string) || (data.name as string) || 'Unknown',
      email: (data.email as string) || '',
      verified: (data.verified as boolean) ?? false, // Default false
      locale: (data.locale as string) ?? 'en-US', // Default en-US
    };
  }
}

interface NormalizedUser {
  fullName: string;
  email: string;
  verified: boolean;
  locale: string;
}

// ============================================
// Extensible Enums with Unknown Handling
// ============================================

enum UserStatusBad {
  ACTIVE = 'ACTIVE',
  SUSPENDED = 'SUSPENDED',
}
// Adding DELETED in new version breaks old code!

enum UserStatusGood {
  ACTIVE = 'ACTIVE',
  SUSPENDED = 'SUSPENDED',
  DELETED = 'DELETED', // Added in V2
  ARCHIVED = 'ARCHIVED', // Added in V3
}

function handleUserStatus(status: string): void {
  const knownStatus = status as UserStatusGood;

  switch (knownStatus) {
    case UserStatusGood.ACTIVE:
      // Handle active
      break;
    case UserStatusGood.SUSPENDED:
      // Handle suspended
      break;
    case UserStatusGood.DELETED:
      // Handle deleted
      break;
    case UserStatusGood.ARCHIVED:
      // Handle archived
      break;
    default:
      // IMPORTANT: Handle unknown statuses gracefully
      // This allows old code to handle new status values
      console.warn(`Unknown status: ${status}, treating as ACTIVE`);
      // Handle as default case
      break;
  }
}

// ============================================
// Versioned Writer with Compatibility Markers
// ============================================

const SCHEMA_VERSION = 3;

interface VersionedEntry<T> {
  schemaVersion: number;
  writtenBy: string; // Service/version that wrote this
  data: T;
}

function writeWithVersion<T>(data: T): VersionedEntry<T> {
  return {
    schemaVersion: SCHEMA_VERSION, // Current schema version
    writtenBy: `user-service-v2.3.1`, // Helps debugging
    data,
  };
}

// When reading, check schemaVersion to decide handling
function readVersioned<T>(entry: VersionedEntry<unknown>): T {
  if (entry.schemaVersion > SCHEMA_VERSION) {
    // Entry from future version - be cautious
    console.warn(`Reading entry from newer schema ${entry.schemaVersion}`);
    // Try to read anyway, forward-compat design should allow it
  }
  return entry.data as T;
}
```

For schemas that require strict forward and backward compatibility, consider using Protocol Buffers or Apache Avro. These formats make schema evolution rules explicit, and their tooling can reject incompatible changes before they reach production. The upfront investment pays off for long-lived systems with complex schemas.
Version management isn't complete without cleanup. Old version code paths, migration logic, and stale cached data accumulate as tech debt. A deliberate cleanup strategy keeps your caching system maintainable.
```typescript
/**
 * Version Monitoring and Cleanup
 */

class VersionMonitoringCache {
  private versionReadCounts = new Map<number, number>();

  constructor(
    private cache: CacheClient,
    private metrics: MetricsClient,
  ) {}

  async get<T>(key: string): Promise<T | null> {
    const raw = await this.cache.get(key);
    if (!raw) return null;

    const { version, data } = JSON.parse(raw);

    // Track version reads
    this.trackVersionRead(version);

    return data as T;
  }

  private trackVersionRead(version: number): void {
    // Increment local counter
    const current = this.versionReadCounts.get(version) || 0;
    this.versionReadCounts.set(version, current + 1);

    // Emit metric for monitoring/alerting
    this.metrics.increment('cache.version.read', {
      version: version.toString(),
    });
  }

  /**
   * Get version distribution statistics
   */
  getVersionStats(): Map<number, number> {
    return new Map(this.versionReadCounts);
  }

  /**
   * Check if it's safe to remove handling for a version
   */
  async isVersionDeprecatable(version: number): Promise<{
    safe: boolean;
    readsLastDay: number;
    recommendation: string;
  }> {
    // Query metrics backend for recent reads
    const readsLastDay = await this.metrics.querySum(
      'cache.version.read',
      { version: version.toString() },
      '24h'
    );

    if (readsLastDay === 0) {
      return {
        safe: true,
        readsLastDay: 0,
        recommendation: `Version ${version} has had 0 reads in 24h. Safe to remove.`,
      };
    }

    if (readsLastDay < 10) {
      return {
        safe: false,
        readsLastDay,
        recommendation: `Version ${version} has ${readsLastDay} reads. Consider waiting for natural TTL expiration.`,
      };
    }

    return {
      safe: false,
      readsLastDay,
      recommendation: `Version ${version} still has ${readsLastDay} reads. Not safe to remove yet.`,
    };
  }
}

/**
 * Automated cleanup job for deprecated versions
 */
async function runVersionCleanupJob(
  cache: CacheClient,
  deprecatedVersions: number[],
  keyPattern: string,
): Promise<CleanupResult> {
  const result: CleanupResult = {
    scanned: 0,
    deleted: 0,
    preserved: 0,
    errors: 0,
  };

  const deprecatedSet = new Set(deprecatedVersions);
  let cursor = '0';

  do {
    const [nextCursor, keys] = await cache.scan(cursor, 'MATCH', keyPattern, 'COUNT', '100');
    cursor = nextCursor;

    for (const key of keys) {
      result.scanned++;
      try {
        const raw = await cache.get(key);
        if (!raw) continue;

        const { version } = JSON.parse(raw);

        if (deprecatedSet.has(version)) {
          await cache.del(key);
          result.deleted++;
        } else {
          result.preserved++;
        }
      } catch (error) {
        result.errors++;
      }
    }
  } while (cursor !== '0');

  console.log(`Cleanup complete: ${result.deleted} deleted, ${result.preserved} preserved`);
  return result;
}

interface CleanupResult {
  scanned: number;
  deleted: number;
  preserved: number;
  errors: number;
}

interface MetricsClient {
  increment(metric: string, tags: Record<string, string>): void;
  querySum(metric: string, tags: Record<string, string>, window: string): Promise<number>;
}
```

Cache versioning is essential for maintaining cache integrity across schema changes, deployments, and gradual rollouts. The right versioning strategy depends on your deployment model, migration needs, and tolerance for complexity.
What's next:
We've covered the three main cache invalidation strategies: time-based expiration, event-based invalidation, and cache versioning. The final page addresses what happens despite our best efforts—dealing with stale data. How do you detect, measure, and mitigate the impact of stale cached data in production?
You now understand cache versioning: the encoding strategies, migration patterns, forward-compatible schema design, and cleanup practices. These techniques ensure your cache remains consistent and correct as your data schemas evolve.