Resource leaks are among the most insidious bugs in software. Unlike crashes or incorrect outputs, leaks don't fail fast—they fail slow. A leaked file handle doesn't crash your application; it quietly consumes operating system resources until, after days of stable operation, the system suddenly can't open any files. A leaked database connection doesn't throw an error; it silently fills your connection pool until your application becomes unresponsive under load.
This slow failure mode makes leaks particularly dangerous: by the time symptoms appear, the code that introduced the leak has long since passed review, testing, and deployment, and nothing obvious connects the failure back to it.
Detecting leaks requires a multi-layered approach: static analysis to catch potential leaks before runtime, dynamic analysis to track resource usage during execution, and systematic testing to expose leak patterns that evade other detection methods.
By the end of this page, you will understand how to detect resource leaks using static analysis tools, implement runtime leak detection mechanisms, use memory and resource profilers effectively, design automated leak detection into your test suites, and establish monitoring to catch leaks that escape to production.
Before diving into detection techniques, we need to understand what constitutes a resource leak and how different types of leaks manifest.
What is a resource leak?
A resource leak occurs when a program acquires a limited resource but fails to release it when that resource is no longer needed. The resource remains allocated, unavailable for reuse, even though nothing in the program is using it.
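For instance, consider this deliberately minimal Node.js sketch (the `countLines` helper is invented purely for illustration): it acquires a file descriptor and simply never releases it.

```typescript
import fs from 'fs';

// A contrived leak: the descriptor is acquired but never released.
// After countLines() returns, nothing in the program references the handle,
// yet the OS still counts it against the process's open-file limit.
function countLines(path: string): number {
  const fd = fs.openSync(path, 'r');        // acquire
  const text = fs.readFileSync(fd, 'utf8'); // use (reading via a numeric fd does not close it)
  return text.split('\n').length;           // missing fs.closeSync(fd): leaked
}

// Called repeatedly, each invocation leaks one descriptor until the
// process eventually fails with EMFILE ("too many open files").
```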
Types of resource leaks:
| Leak Type | Resource | Symptoms | Detection Method |
|---|---|---|---|
| Memory leak | Heap memory | Growing memory usage, eventual OOM | Memory profiler, heap dumps |
| Handle leak | File handles, sockets | "Too many open files" errors | lsof, handle monitors |
| Connection leak | DB/network connections | Pool exhaustion, timeouts | Pool metrics, connection tracking |
| Thread leak | OS threads | Thread exhaustion, context switch overhead | Thread dumps, monitoring |
| Lock leak | Mutexes, semaphores | Deadlocks, resource starvation | Lock analysis tools |
| GPU resource leak | Textures, buffers | Rendering failures, GPU OOM | GPU profilers |
The lifecycle of a leak:
Every leak follows a predictable pattern:
1. Acquisition: the code obtains a limited resource (opens a file, borrows a connection, allocates a buffer).
2. Missed release: some code path, often an error branch or early return, skips the cleanup.
3. Accumulation: as the operation repeats, unreleased resources pile up.
4. Exhaustion: a limit is reached (heap size, file descriptor limit, pool capacity) and the application fails, usually far from the code that caused it.
Many leaks are low-frequency—they only occur under specific conditions or code paths. A leak that happens once per 10,000 requests may take weeks to cause problems, making it extremely difficult to correlate the eventual failure with the original bug. At five requests per second, for instance, that rate leaks roughly 43 handles a day, so a default limit of 1,024 file descriptors survives for more than three weeks before the first failure.
Static analysis examines code without executing it, identifying potential leaks by analyzing resource acquisition and release patterns. Modern static analyzers can catch many common leak patterns at compile time.
How static analysis detects leaks:
Static analyzers track the 'ownership' of resources through code paths. When a resource is acquired but the analyzer cannot prove it will be released on all paths (including exception paths), it reports a potential leak.
```typescript
// ANTI-PATTERN: Leaked resource on exception path
function processFile(path: string): ProcessResult {
  const file = fs.openSync(path, 'r');

  // If parse() throws, file handle is never closed!
  const content = readContent(file);
  const result = parse(content); // May throw

  fs.closeSync(file);
  return result;
}

// Static analyzer warning:
// "Resource 'file' acquired at line 2 may not be released
//  if exception occurs at line 6"

// CORRECT: Ensure cleanup on all paths
function processFileSafe(path: string): ProcessResult {
  const file = fs.openSync(path, 'r');
  try {
    const content = readContent(file);
    return parse(content);
  } finally {
    fs.closeSync(file); // Always executes
  }
}
```

Popular static analysis tools for leak detection:
| Language | Tool | Leak Detection Features |
|---|---|---|
| Java | SpotBugs (FindBugs) | Stream/resource closure, connection leaks |
| Java | SonarQube | Comprehensive resource leak rules (S2095, S2093) |
| C/C++ | Clang Static Analyzer | Memory leaks, double-free, use-after-free |
| C/C++ | Coverity | Enterprise-grade memory and resource analysis |
| C# | Roslyn Analyzers (CA2000) | IDisposable leak detection |
| Python | Pylint / Ruff | File handle, context manager suggestions |
| Go | staticcheck | Deferred resource cleanup analysis |
| Rust | Compiler (borrow checker) | Built-in ownership guarantees |
Run static analysis on every pull request. Configure leak detection rules as blocking errors, not warnings. A potential leak caught at code review is infinitely cheaper to fix than one discovered in production after a week-long incident investigation.
Static analysis can't catch all leaks—some depend on runtime conditions that can't be determined from code alone. Runtime leak detection instruments your application to track resource acquisition and release during execution.
Pattern 1: Resource tracking registry
Maintain a registry that tracks all active resources. Each acquisition adds to the registry; each release removes. Unreleased resources are visible as registry entries.
```typescript
// Comprehensive resource tracking for leak detection
interface ResourceEntry {
  id: string;
  type: string;
  acquiredAt: Date;
  stackTrace: string;
  metadata?: Record<string, unknown>;
}

class ResourceLeakDetector {
  private static instance: ResourceLeakDetector;
  private resources = new Map<string, ResourceEntry>();
  private enabled = process.env.NODE_ENV !== 'production';

  static getInstance(): ResourceLeakDetector {
    if (!this.instance) {
      this.instance = new ResourceLeakDetector();
    }
    return this.instance;
  }

  trackAcquisition(
    id: string,
    type: string,
    metadata?: Record<string, unknown>
  ): void {
    if (!this.enabled) return;

    this.resources.set(id, {
      id,
      type,
      acquiredAt: new Date(),
      stackTrace: new Error().stack || '',
      metadata,
    });
  }

  trackRelease(id: string): void {
    if (!this.enabled) return;
    this.resources.delete(id);
  }

  getActiveResources(): ResourceEntry[] {
    return Array.from(this.resources.values());
  }

  getResourcesByType(type: string): ResourceEntry[] {
    return this.getActiveResources().filter(r => r.type === type);
  }

  getLeakReport(): LeakReport {
    const resources = this.getActiveResources();
    const byType = new Map<string, ResourceEntry[]>();

    for (const resource of resources) {
      const existing = byType.get(resource.type) || [];
      existing.push(resource);
      byType.set(resource.type, existing);
    }

    return {
      totalLeaks: resources.length,
      byType: Object.fromEntries(byType),
      oldestLeak: resources.sort(
        (a, b) => a.acquiredAt.getTime() - b.acquiredAt.getTime()
      )[0],
    };
  }

  assertNoLeaks(options: { ignoreTypes?: string[] } = {}): void {
    const resources = this.getActiveResources().filter(
      r => !options.ignoreTypes?.includes(r.type)
    );

    if (resources.length > 0) {
      const report = resources.map(r =>
        `[${r.type}] ${r.id} acquired at ${r.acquiredAt.toISOString()}\n` +
        `Stack: ${r.stackTrace.split('\n').slice(0, 5).join('\n')}`
      ).join('\n\n');

      throw new Error(
        `Resource leaks detected (${resources.length}):\n\n${report}`
      );
    }
  }

  clear(): void {
    this.resources.clear();
  }
}

// Usage: Wrap resource operations
class TrackedConnection implements IConnection {
  private readonly id: string;

  constructor(private readonly inner: IConnection) {
    this.id = crypto.randomUUID();
    ResourceLeakDetector.getInstance().trackAcquisition(
      this.id,
      'DatabaseConnection',
      { connectionTime: new Date() }
    );
  }

  async execute(query: string): Promise<Result> {
    return this.inner.execute(query);
  }

  dispose(): void {
    this.inner.dispose();
    ResourceLeakDetector.getInstance().trackRelease(this.id);
  }
}
```

Pattern 2: Leak detection in connection pools
Connection pools are a common source of leaks. Enhance your pool with leak detection capabilities:
```typescript
// Connection pool with leak detection
interface LeakDetectionConfig {
  enabled: boolean;
  leakThresholdMs: number;  // Time after which unreturned connection is suspicious
  checkIntervalMs: number;  // How often to scan for leaks
  onLeakDetected: (leak: LeakInfo) => void;
}

interface LeakInfo {
  connectionId: string;
  acquiredAt: Date;
  elapsedMs: number;
  stackTrace: string;
}

class LeakDetectingConnectionPool {
  private borrowedConnections = new Map<string, {
    connection: Connection;
    acquiredAt: Date;
    stackTrace: string;
  }>();
  private leakCheckInterval?: NodeJS.Timeout;

  constructor(
    private readonly config: LeakDetectionConfig,
    private readonly innerPool: ConnectionPool
  ) {
    if (config.enabled) {
      this.startLeakDetection();
    }
  }

  async borrow(): Promise<Connection> {
    const connection = await this.innerPool.borrow();
    const wrappedId = crypto.randomUUID();

    this.borrowedConnections.set(wrappedId, {
      connection,
      acquiredAt: new Date(),
      stackTrace: new Error().stack || '',
    });

    // Return a proxy that tracks return
    return this.wrapConnection(connection, wrappedId);
  }

  private wrapConnection(conn: Connection, id: string): Connection {
    const self = this;
    return new Proxy(conn, {
      get(target, prop) {
        if (prop === 'close' || prop === 'release') {
          return () => {
            self.borrowedConnections.delete(id);
            return (target as any)[prop]();
          };
        }
        return (target as any)[prop];
      }
    });
  }

  private startLeakDetection(): void {
    this.leakCheckInterval = setInterval(() => {
      this.checkForLeaks();
    }, this.config.checkIntervalMs);
  }

  private checkForLeaks(): void {
    const now = Date.now();
    for (const [id, info] of this.borrowedConnections) {
      const elapsedMs = now - info.acquiredAt.getTime();
      if (elapsedMs > this.config.leakThresholdMs) {
        this.config.onLeakDetected({
          connectionId: id,
          acquiredAt: info.acquiredAt,
          elapsedMs,
          stackTrace: info.stackTrace,
        });
      }
    }
  }

  getStats(): PoolStats {
    return {
      borrowed: this.borrowedConnections.size,
      available: this.innerPool.available,
      leakCandidates: Array.from(this.borrowedConnections.entries())
        .filter(([_, info]) =>
          Date.now() - info.acquiredAt.getTime() > this.config.leakThresholdMs
        ).length,
    };
  }

  shutdown(): void {
    if (this.leakCheckInterval) {
      clearInterval(this.leakCheckInterval);
    }
  }
}
```

Always capture the stack trace when a resource is acquired. When a leak is detected, the acquisition stack trace is your map to finding the bug. Without it, you know a resource leaked but have no idea where it came from.
Memory leaks are the most common form of resource leak. Modern runtimes provide powerful profiling tools to identify objects that should have been garbage collected but remain reachable.
Understanding memory leak patterns in managed languages:
In garbage-collected languages, memory leaks happen differently than in manual memory management. Objects aren't 'leaked' in the traditional sense—they're still reachable through some reference chain that prevents collection:
```typescript
// ANTI-PATTERN 1: Event listener accumulation
class LeakyComponent {
  private handler = () => this.handleEvent();

  mount() {
    // Listener registered but never removed
    eventBus.on('data', this.handler);
  }

  // Forgot to implement unmount() with:
  // eventBus.off('data', this.handler);
}

// Creates leak: each component instance stays in memory
// because eventBus holds reference via handler

// FIX: Always pair registration with cleanup
class SafeComponent {
  private handler = () => this.handleEvent();

  mount() {
    eventBus.on('data', this.handler);
  }

  unmount() {
    eventBus.off('data', this.handler); // Remove reference
  }
}

// ANTI-PATTERN 2: Unbounded cache
class LeakyCache {
  private cache = new Map<string, LargeObject>();

  get(key: string): LargeObject {
    let obj = this.cache.get(key);
    if (!obj) {
      obj = this.createLargeObject(key);
      this.cache.set(key, obj); // Never evicted!
    }
    return obj;
  }
}

// FIX: Use LRU cache with size limit
import LRU from 'lru-cache';

class SafeCache {
  private cache = new LRU<string, LargeObject>({
    max: 1000,                   // Maximum entries
    maxSize: 100 * 1024 * 1024,  // Maximum 100MB
    sizeCalculation: (obj) => obj.sizeBytes,
  });

  get(key: string): LargeObject {
    let obj = this.cache.get(key);
    if (!obj) {
      obj = this.createLargeObject(key);
      this.cache.set(key, obj);
    }
    return obj;
  }
}

// ANTI-PATTERN 3: Closure capturing too much
class LeakyProcessor {
  processItems(items: LargeItem[]) {
    items.forEach(item => {
      // This closure captures 'items' array, keeping all of it alive
      someAsyncQueue.push(() => {
        console.log(`Processing ${item.id} of ${items.length}`);
      });
    });
  }
}

// FIX: Capture only what you need
class SafeProcessor {
  processItems(items: LargeItem[]) {
    const itemCount = items.length; // Extract primitive
    items.forEach(item => {
      const itemId = item.id; // Extract only needed data
      someAsyncQueue.push(() => {
        console.log(`Processing ${itemId} of ${itemCount}`);
      });
    });
  }
}
```

Using heap snapshots to find leaks:
Heap snapshots capture the state of memory at a point in time. Comparing two snapshots reveals objects that were allocated but not freed between them.
```typescript
// Node.js heap snapshot capture for leak analysis
import v8 from 'v8';
import fs from 'fs';

class HeapAnalyzer {
  private snapshotCount = 0;

  captureSnapshot(label?: string): string {
    // Trigger GC first for accurate snapshot
    if (global.gc) {
      global.gc();
    }

    const filename = `heap-${Date.now()}-${++this.snapshotCount}.heapsnapshot`;
    const snapshotPath = `./diagnostics/${filename}`;

    // Capture and write snapshot (ensure target directory exists first)
    fs.mkdirSync('./diagnostics', { recursive: true });
    v8.writeHeapSnapshot(snapshotPath);

    console.log(`Heap snapshot saved: ${snapshotPath}`);
    if (label) console.log(`  Label: ${label}`);
    console.log(`  Heap used: ${Math.round(process.memoryUsage().heapUsed / 1024 / 1024)}MB`);

    return snapshotPath;
  }
}

// Test pattern: snapshot before and after to detect leaks
describe('memory leak detection', () => {
  const analyzer = new HeapAnalyzer();

  it('should not leak memory during operations', async () => {
    // Force GC and capture baseline
    if (global.gc) global.gc();
    const baselineHeap = process.memoryUsage().heapUsed;
    analyzer.captureSnapshot('baseline');

    // Perform operations that might leak
    for (let i = 0; i < 1000; i++) {
      await performPotentiallyLeakyOperation();
    }

    // Force GC and measure
    if (global.gc) global.gc();
    await new Promise(r => setTimeout(r, 100)); // Let finalizers run

    const afterHeap = process.memoryUsage().heapUsed;
    analyzer.captureSnapshot('after-operations');

    // Calculate leak
    const growthBytes = afterHeap - baselineHeap;
    const growthMB = growthBytes / 1024 / 1024;

    // Assert acceptable growth (some overhead is normal)
    expect(growthMB).toBeLessThan(10); // 10MB threshold
  });
});
```

Load heap snapshots in Chrome DevTools (Memory tab). Compare two snapshots to see 'Objects allocated between Snapshot 1 and Snapshot 2'. Look for growing arrays, maps, or custom objects that should have been released. The 'Retained Size' column shows the total memory kept alive by each object.
Non-memory resources like file handles and network connections are finite OS-level resources. Their leaks are often more immediately catastrophic than memory leaks because their limits are typically much lower.
Monitoring file handle usage:
```typescript
// Linux/macOS: Monitor open file descriptors
import { execSync } from 'child_process';

class FileDescriptorMonitor {
  getOpenDescriptorCount(): number {
    const pid = process.pid;
    try {
      // Linux
      const result = execSync(`ls -la /proc/${pid}/fd 2>/dev/null | wc -l`);
      return parseInt(result.toString().trim()) - 3; // Subtract '.', '..' and the total line
    } catch {
      try {
        // macOS
        const result = execSync(`lsof -p ${pid} 2>/dev/null | wc -l`);
        return parseInt(result.toString().trim()) - 1;
      } catch {
        return -1; // Unable to determine
      }
    }
  }

  getOpenDescriptorDetails(): string[] {
    const pid = process.pid;
    try {
      const result = execSync(`lsof -p ${pid} 2>/dev/null`);
      return result.toString().split('\n');
    } catch {
      return ['Unable to get descriptor details'];
    }
  }

  getLimit(): { soft: number; hard: number } {
    try {
      const soft = execSync('ulimit -Sn').toString().trim();
      const hard = execSync('ulimit -Hn').toString().trim();
      return {
        soft: parseInt(soft),
        hard: parseInt(hard),
      };
    } catch {
      return { soft: -1, hard: -1 };
    }
  }
}

// Test pattern: verify no descriptor leaks
describe('file descriptor leak detection', () => {
  const monitor = new FileDescriptorMonitor();
  let baselineDescriptors = 0;

  beforeEach(() => {
    // Capture baseline
    baselineDescriptors = monitor.getOpenDescriptorCount();
  });

  afterEach(() => {
    // Allow async cleanup, then verify nothing leaked
    return new Promise<void>((resolve, reject) =>
      setTimeout(() => {
        const current = monitor.getOpenDescriptorCount();
        const leaked = current - baselineDescriptors;

        if (leaked > 0) {
          console.error(`Leaked ${leaked} file descriptors`);
          console.error('Open descriptors:', monitor.getOpenDescriptorDetails());
          reject(new Error(`File descriptor leak: ${leaked} descriptors`));
          return;
        }
        resolve();
      }, 100)
    );
  });

  it('should not leak file descriptors', async () => {
    // Operations that open files
    for (let i = 0; i < 100; i++) {
      await processFile(`./data/file-${i}.txt`);
    }
    // afterEach will verify no leak
  });
});
```

Database connection leak detection:
Connection pools provide metrics that reveal leaks. Monitor these metrics and set up alerts.
```typescript
// Connection pool metrics for leak detection
interface PoolMetrics {
  total: number;           // Total connections in pool
  idle: number;            // Available connections
  active: number;          // Currently borrowed
  waiting: number;         // Requests waiting for connection
  maxSeen: number;         // Highest active count observed
  leakCandidates: number;  // Connections held longer than threshold
}

class ConnectionPoolMonitor {
  private maxActiveEverSeen = 0;
  private readonly leakThresholdSeconds: number;

  constructor(
    private readonly pool: ConnectionPool,
    options: { leakThresholdSeconds?: number } = {}
  ) {
    this.leakThresholdSeconds = options.leakThresholdSeconds ?? 300; // 5 min default
  }

  getMetrics(): PoolMetrics {
    const stats = this.pool.getStats();
    this.maxActiveEverSeen = Math.max(this.maxActiveEverSeen, stats.active);

    return {
      total: stats.total,
      idle: stats.idle,
      active: stats.active,
      waiting: stats.waiting,
      maxSeen: this.maxActiveEverSeen,
      leakCandidates: this.countLeakCandidates(),
    };
  }

  private countLeakCandidates(): number {
    const now = Date.now();
    return this.pool.getBorrowedConnections()
      .filter(conn => {
        const heldSeconds = (now - conn.borrowedAt.getTime()) / 1000;
        return heldSeconds > this.leakThresholdSeconds;
      }).length;
  }

  checkHealth(): HealthCheckResult {
    const metrics = this.getMetrics();
    const issues: string[] = [];

    // Check for potential exhaustion
    if (metrics.idle === 0 && metrics.waiting > 0) {
      issues.push(`Pool exhausted: ${metrics.waiting} requests waiting`);
    }

    // Check for leak candidates
    if (metrics.leakCandidates > 0) {
      issues.push(`Potential leaks: ${metrics.leakCandidates} connections held > ${this.leakThresholdSeconds}s`);
    }

    // Check for growth pattern (may indicate slow leak)
    if (metrics.active > metrics.maxSeen * 0.9 && metrics.active > 5) {
      issues.push(`High watermark: ${metrics.active}/${this.maxActiveEverSeen} active`);
    }

    return {
      healthy: issues.length === 0,
      metrics,
      issues,
    };
  }
}

// Periodic health check and alerting
const monitor = new ConnectionPoolMonitor(pool, { leakThresholdSeconds: 300 });

setInterval(() => {
  const health = monitor.checkHealth();

  if (!health.healthy) {
    alerting.warn('Connection pool health issues', {
      issues: health.issues,
      metrics: health.metrics,
    });
  }

  // Export metrics for monitoring dashboard
  metrics.gauge('connection_pool.active', health.metrics.active);
  metrics.gauge('connection_pool.idle', health.metrics.idle);
  metrics.gauge('connection_pool.leak_candidates', health.metrics.leakCandidates);
}, 10000); // Every 10 seconds
```

The best place to catch leaks is in your test suite, before code reaches production. Design your test infrastructure to automatically detect and fail on resource leaks.
Pattern: Test lifecycle leak detection
```typescript
// vitest.config.ts or jest setup
import { beforeEach, afterEach, afterAll } from 'vitest';

// Global leak detection state
const leakDetector = {
  baselineDescriptors: 0,
  baselineMemory: 0,
  resourceRegistry: new ResourceRegistry(),
};

// Configure global test hooks
beforeEach(() => {
  // Capture baselines before each test
  if (global.gc) global.gc();
  leakDetector.baselineDescriptors = getOpenDescriptorCount();
  leakDetector.baselineMemory = process.memoryUsage().heapUsed;
  leakDetector.resourceRegistry.clear();
});

afterEach(async (context) => {
  // Allow async cleanup to complete
  await new Promise(resolve => setTimeout(resolve, 50));
  if (global.gc) global.gc();

  // Check for file descriptor leaks
  const currentDescriptors = getOpenDescriptorCount();
  const descriptorLeak = currentDescriptors - leakDetector.baselineDescriptors;
  if (descriptorLeak > 0) {
    throw new Error(
      `Test "${context.task.name}" leaked ${descriptorLeak} file descriptor(s)`
    );
  }

  // Check for tracked resource leaks
  const resourceLeaks = leakDetector.resourceRegistry.getLeaks();
  if (resourceLeaks.length > 0) {
    const details = resourceLeaks.map(r =>
      `  - [${r.type}] ${r.id}: ${r.stackTrace.split('\n')[1]}`
    ).join('\n');
    throw new Error(
      `Test "${context.task.name}" leaked ${resourceLeaks.length} resource(s):\n${details}`
    );
  }

  // Check for suspicious memory growth (potential leak)
  const currentMemory = process.memoryUsage().heapUsed;
  const memoryGrowth = currentMemory - leakDetector.baselineMemory;
  const memoryGrowthMB = memoryGrowth / 1024 / 1024;

  // Only warn on large leaks (individual tests may legitimately use memory)
  if (memoryGrowthMB > 50) {
    console.warn(
      `Warning: Test "${context.task.name}" grew heap by ${memoryGrowthMB.toFixed(2)}MB`
    );
  }
});

afterAll(() => {
  // Final leak check across entire suite
  if (global.gc) global.gc();
  const finalMemory = process.memoryUsage();
  console.log(`Final heap usage: ${Math.round(finalMemory.heapUsed / 1024 / 1024)}MB`);

  const finalResourceLeaks = leakDetector.resourceRegistry.getLeaks();
  if (finalResourceLeaks.length > 0) {
    console.error('Resources leaked across test suite:', finalResourceLeaks);
  }
});

// Export registry for tests to use
export function getResourceRegistry(): ResourceRegistry {
  return leakDetector.resourceRegistry;
}
```

Pattern: Soak testing for slow leaks
Some leaks only manifest after many iterations. Soak tests run the same operation many times to amplify small leaks.
```typescript
// Soak test pattern for detecting slow leaks
describe('soak tests for leak detection', () => {
  it('should not leak memory over 10,000 iterations', async () => {
    // Warm up to stabilize JIT and initial allocations
    for (let i = 0; i < 100; i++) {
      await performOperation();
    }

    // Capture baseline after warmup
    if (global.gc) global.gc();
    await sleep(100);
    const baselineMemory = process.memoryUsage().heapUsed;

    // Main soak run
    const iterations = 10_000;
    const measurePoints: number[] = [];

    for (let i = 0; i < iterations; i++) {
      await performOperation();

      // Periodic measurement
      if (i % 1000 === 0) {
        if (global.gc) global.gc();
        measurePoints.push(process.memoryUsage().heapUsed);
      }
    }

    // Final measurement
    if (global.gc) global.gc();
    await sleep(100);
    const finalMemory = process.memoryUsage().heapUsed;

    // Analyze trend
    const growth = finalMemory - baselineMemory;
    const growthPerIteration = growth / iterations;

    // Assert no significant per-iteration growth
    // Allow small overhead (1KB per 1000 iterations)
    expect(growthPerIteration).toBeLessThan(1); // < 1 byte per iteration

    // Check for monotonic growth pattern (leak signature)
    const isMonotonicallyGrowing = measurePoints.every((val, i) =>
      i === 0 || val >= measurePoints[i - 1]
    );

    if (isMonotonicallyGrowing && growth > 1024 * 1024) {
      throw new Error(
        `Memory monotonically increased by ${(growth / 1024 / 1024).toFixed(2)}MB ` +
        `over ${iterations} iterations - likely leak`
      );
    }
  }, 60000); // 60 second timeout for soak test

  it('should not leak file handles over repeated file operations', async () => {
    const baselineDescriptors = getOpenDescriptorCount();
    const iterations = 5000;

    for (let i = 0; i < iterations; i++) {
      await processFileAndClose(`./test-data/file-${i % 100}.txt`);
    }

    // Allow cleanup
    await sleep(200);
    const finalDescriptors = getOpenDescriptorCount();
    const growth = finalDescriptors - baselineDescriptors;

    // Should have essentially no growth
    expect(growth).toBeLessThan(5);
  }, 30000);
});
```

Soak tests are too slow for every commit. Run them nightly or weekly. They're your early warning system for slow leaks that would take weeks to cause production issues.
Despite best efforts, some leaks will reach production. Robust monitoring detects them before they cause outages.
Key metrics to monitor for leak detection:
| Metric | Normal Pattern | Leak Pattern | Alert Threshold |
|---|---|---|---|
| Heap memory | Sawtooth (up, GC, down) | Monotonic increase | 80% and growing |
| Open file descriptors | Stable with load | Grows without load correlation | 80% of limit |
| Active DB connections | Correlates with traffic | Grows without traffic | 90% of pool size |
| Thread count | Stable | Step increases, no decreases | Above expected thread count plus buffer |
| RSS memory | Matches heap + overhead | Grows faster than heap | Sustained growth above expected RSS |
```typescript
// Production leak detection monitoring
import { Gauge, register } from 'prom-client';

// Metrics for leak detection
const memoryMetrics = {
  heapUsed: new Gauge({
    name: 'nodejs_heap_used_bytes',
    help: 'Current heap memory used in bytes',
  }),
  heapTotal: new Gauge({
    name: 'nodejs_heap_total_bytes',
    help: 'Total heap memory allocated in bytes',
  }),
  external: new Gauge({
    name: 'nodejs_external_memory_bytes',
    help: 'Memory used by C++ objects bound to JS',
  }),
  rss: new Gauge({
    name: 'nodejs_rss_bytes',
    help: 'Resident Set Size - total process memory',
  }),
};

const resourceMetrics = {
  openDescriptors: new Gauge({
    name: 'process_open_fds',
    help: 'Number of open file descriptors',
  }),
  activeConnections: new Gauge({
    name: 'db_connections_active',
    help: 'Number of database connections in use',
  }),
  eventListenerCount: new Gauge({
    name: 'event_listeners_total',
    help: 'Total registered event listeners',
    labelNames: ['emitter'],
  }),
};

// Periodic metric collection
setInterval(() => {
  const mem = process.memoryUsage();
  memoryMetrics.heapUsed.set(mem.heapUsed);
  memoryMetrics.heapTotal.set(mem.heapTotal);
  memoryMetrics.external.set(mem.external);
  memoryMetrics.rss.set(mem.rss);

  resourceMetrics.openDescriptors.set(getOpenDescriptorCount());
  resourceMetrics.activeConnections.set(connectionPool.activeCount);
}, 15000); // Every 15 seconds

// Alerting rules (example for Prometheus)
/*
# Alert on memory leak pattern
- alert: MemoryLeakSuspected
  expr: |
    increase(nodejs_heap_used_bytes[1h]) > 100000000
    AND
    min_over_time(nodejs_heap_used_bytes[1h]) > 0.7 * max_over_time(nodejs_heap_used_bytes[1h])
  for: 30m
  labels:
    severity: warning
  annotations:
    summary: "Possible memory leak in {{ $labels.instance }}"
    description: "Heap has grown >100MB in last hour without significant GC reclamation"

# Alert on file descriptor exhaustion risk
- alert: FileDescriptorsHigh
  expr: process_open_fds / process_max_fds > 0.8
  for: 10m
  labels:
    severity: critical
  annotations:
    summary: "File descriptors approaching limit"
*/
```

A process using 80% of available memory isn't necessarily leaking. But a process whose minimum memory over each hour is higher than the previous hour's minimum—that's a leak. Design alerts around trends and patterns, not just static thresholds.
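The same trend heuristic can also run in-process. Below is a minimal sketch (the `HeapTrendDetector` class, its `onLeakSuspected` callback, and the three-window threshold are assumptions invented for this example, not a library API): it records the lowest heap usage observed in each window and flags a suspected leak when that floor keeps rising across consecutive windows.

```typescript
// Minimal in-process trend detector (illustrative sketch, not a library API).
// Tracks the minimum heap usage per window; a floor that keeps rising across
// consecutive windows suggests memory that GC can no longer reclaim.
class HeapTrendDetector {
  private windowMin = Number.POSITIVE_INFINITY;
  private previousMins: number[] = [];

  constructor(
    private readonly onLeakSuspected: (mins: number[]) => void, // hypothetical callback
    private readonly risingWindowsThreshold = 3
  ) {}

  // Call frequently (e.g. every 15s, alongside metric collection)
  sample(): void {
    this.windowMin = Math.min(this.windowMin, process.memoryUsage().heapUsed);
  }

  // Call once per window (e.g. hourly)
  closeWindow(): void {
    this.previousMins.push(this.windowMin);
    this.windowMin = Number.POSITIVE_INFINITY;

    const recent = this.previousMins.slice(-(this.risingWindowsThreshold + 1));
    const floorKeepsRising =
      recent.length > this.risingWindowsThreshold &&
      recent.every((v, i) => i === 0 || v > recent[i - 1]);

    if (floorKeepsRising) {
      this.onLeakSuspected(recent);
    }
  }
}

// Usage sketch: sample alongside metrics, close a window every hour
const trend = new HeapTrendDetector(mins =>
  console.warn('Heap floor rising across windows (possible leak):', mins)
);
setInterval(() => trend.sample(), 15_000);
setInterval(() => trend.closeWindow(), 60 * 60 * 1000);
```

Comparing window minima rather than instantaneous readings filters out the normal sawtooth of allocation and garbage collection.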
Resource leak detection requires a multi-layered strategy spanning static analysis, runtime instrumentation, testing, and production monitoring. Let's consolidate the key approaches:
- Static analysis flags acquisitions that cannot be proven to be released on every path; run it as a blocking check on each pull request.
- Runtime instrumentation, such as resource registries and leak-detecting connection pools, tracks every acquisition and release and records the acquisition stack trace.
- Profilers and heap snapshots reveal objects and handles that should have been freed by comparing before-and-after states.
- Test-suite integration fails tests that leak, while soak tests amplify slow leaks that only appear over thousands of iterations.
- Production monitoring watches trends, such as a rising memory floor, growing descriptor counts, and pool metrics, to catch the leaks that escape everything else.
What's next:
Now that we can verify cleanup and detect leaks, we need to address a practical challenge: testing code that uses real external resources. In the next page, we'll explore techniques for mocking external resources—replacing real files, databases, and network connections with test doubles that verify correct resource management behavior.
You now understand how to detect resource leaks across the entire development lifecycle—from static analysis at code review through runtime detection in tests to monitoring in production. These techniques form a comprehensive defense against the silent killer of resource exhaustion.