Unit tests with mocks verify that individual components handle resources correctly in isolation. But software is not made of isolated components—it's made of components that collaborate, pass resources between each other, and depend on shared infrastructure.
This collaboration creates an integration gap where resource management bugs hide.
Integration testing bridges this gap by testing resource management across component boundaries, often with real or near-real resources. These tests are slower and more complex than unit tests, but they catch bugs that unit tests simply cannot.
By the end of this page, you will understand how to design integration tests for resource management, how to test against real databases and files, how to verify resource cleanup across component boundaries, and how to catch environment-specific resource bugs.
Integration tests for resource management serve a different purpose than unit tests. While unit tests verify that a component knows how to manage resources, integration tests verify that the system manages resources correctly when components work together.
The limitations of unit tests with mocks:
| Scenario | Unit Test Result | Integration Test Result |
|---|---|---|
| Connection pool exhaustion | Mock pool always has connections | Real pool reaches limit under load |
| File locking conflicts | Mock files never lock | Real files block concurrent access |
| Transaction isolation bugs | Mock transactions are independent | Real transactions can deadlock |
| Cleanup timing issues | Mocks dispose synchronously | Real cleanup may be async |
| Resource configuration | Mocks ignore config | Real resources fail with wrong config |
| Memory pressure | Mocks are lightweight | Real resources trigger GC pressure |
The integration testing pyramid for resources:
Integration tests come in different scopes, each catching different classes of bugs.
Don't replace unit tests with integration tests—complement them. Use unit tests with mocks for fast feedback on component-level cleanup behavior. Use integration tests to verify that the pieces actually fit together correctly.
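To make that contrast concrete, here is a minimal, self-contained sketch (the `BoundedPool` and `LeakyRepository` classes are hypothetical stand-ins, not a real database driver): a repository that forgets to release its connection on error passes happily against an always-available mock, but exhausts a small real pool within a few calls.

```typescript
type Conn = { id: number };

// A tiny "real" pool with a hard upper bound, like a production pool.
class BoundedPool {
  private available: Conn[];
  constructor(max: number) {
    this.available = Array.from({ length: max }, (_, i) => ({ id: i }));
  }
  acquire(): Conn {
    const conn = this.available.pop();
    if (!conn) throw new Error('pool exhausted');
    return conn;
  }
  release(conn: Conn): void {
    this.available.push(conn);
  }
}

// Buggy repository: forgets to release the connection on error.
class LeakyRepository {
  constructor(private pool: { acquire(): Conn; release(c: Conn): void }) {}
  query(id: string): string {
    const conn = this.pool.acquire();
    if (id === 'bad') throw new Error('query failed'); // leak: no release
    const result = `row for ${id} via conn ${conn.id}`;
    this.pool.release(conn);
    return result;
  }
}

// Unit-style mock: an infinite pool. The leak is invisible here.
const mockPool = {
  acquire: () => ({ id: -1 }),
  release: (_c: Conn) => {},
};
const unitRepo = new LeakyRepository(mockPool);
for (let i = 0; i < 100; i++) {
  try { unitRepo.query('bad'); } catch { /* expected query failure */ }
}
// 100 failed queries, and the mock never complained.

// Integration-style bounded pool: the same bug exhausts it in 3 calls.
const realPool = new BoundedPool(3);
const integrationRepo = new LeakyRepository(realPool);
let exhausted = false;
for (let i = 0; i < 4; i++) {
  try {
    integrationRepo.query('bad');
  } catch (e) {
    if ((e as Error).message === 'pool exhausted') exhausted = true;
  }
}
// exhausted is now true: the bounded pool surfaces the leak the mock hid
```

The bug is the same in both halves; only the resource is different. That is the entire argument for keeping both kinds of tests.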
Database connection management is one of the most critical areas for integration testing. Connection leaks, transaction deadlocks, and pool exhaustion are common production issues that only manifest with real database connections.
Strategy 1: Test containers for isolated database testing
Use containerized databases that spin up fresh for each test run, ensuring complete isolation and reproducibility.
```typescript
// Database integration testing with Testcontainers
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { Pool } from 'pg';

describe('UserRepository database integration', () => {
  let container: StartedPostgreSqlContainer;
  let pool: Pool;
  let repository: UserRepository;

  // Start container once for all tests in this file
  beforeAll(async () => {
    container = await new PostgreSqlContainer()
      .withDatabase('testdb')
      .start();

    pool = new Pool({
      connectionString: container.getConnectionUri(),
      max: 10, // Match production pool size for realistic testing
      idleTimeoutMillis: 10000,
      connectionTimeoutMillis: 5000,
    });

    // Run migrations
    await runMigrations(pool);
  }, 60000); // Container startup can take time

  afterAll(async () => {
    await pool.end(); // Properly close pool
    await container.stop();
  });

  beforeEach(async () => {
    // Clean data but keep schema
    await truncateAllTables(pool);
    repository = new UserRepository(pool);
  });

  afterEach(async () => {
    // Verify no connection leaks after each test
    const stats = pool as any;
    const idleCount = stats.idleCount ?? 0;
    const totalCount = stats.totalCount ?? 0;
    const waitingCount = stats.waitingCount ?? 0;

    if (waitingCount > 0) {
      throw new Error(`Connection leak: ${waitingCount} requests waiting for connections`);
    }

    // All connections should be returned to idle
    if (totalCount > 0 && idleCount < totalCount) {
      await sleep(100); // Allow async release
      // Re-check
      if (stats.idleCount < stats.totalCount) {
        throw new Error(
          `Connection leak: ${stats.totalCount - stats.idleCount} connections still in use`
        );
      }
    }
  });

  it('should return connection to pool after query', async () => {
    const initialIdle = (pool as any).idleCount ?? 0;

    await repository.findById('user-1');

    await sleep(10); // Allow connection return
    const finalIdle = (pool as any).idleCount ?? 0;
    expect(finalIdle).toBeGreaterThanOrEqual(initialIdle);
  });

  it('should return connection on query error', async () => {
    // Force an error
    await expect(repository.findById('invalid-id'))
      .rejects.toThrow();

    // Connection must still be returned
    await sleep(10);
    const stats = pool as any;
    expect(stats.waitingCount ?? 0).toBe(0);
  });

  it('should not exhaust pool under concurrent access', async () => {
    // Create more concurrent requests than pool size
    const requests = Array(20).fill(null).map((_, i) =>
      repository.findById(`user-${i}`)
    );

    // All should complete without pool exhaustion
    await expect(Promise.all(requests)).resolves.toBeDefined();
  });

  it('should properly commit transactions', async () => {
    await repository.createUser({ id: 'tx-test', name: 'Test User' });

    // Query in a separate connection to verify commit
    const result = await pool.query('SELECT * FROM users WHERE id = $1', ['tx-test']);
    expect(result.rows).toHaveLength(1);
  });

  it('should rollback on error and release connection', async () => {
    await expect(repository.createUserWithValidation({
      id: 'rollback-test',
      name: '' // Invalid: empty name
    })).rejects.toThrow('Validation failed');

    // Verify rollback - user should not exist
    const result = await pool.query('SELECT * FROM users WHERE id = $1', ['rollback-test']);
    expect(result.rows).toHaveLength(0);

    // Verify connection returned
    await sleep(10);
    const stats = pool as any;
    expect(stats.waitingCount ?? 0).toBe(0);
  });
});
```

Most connection pool libraries expose statistics about active, idle, and waiting connections. Use these in `afterEach` hooks to detect leaks immediately, rather than waiting for a later test to fail mysteriously with "connection timeout".
File system operations have unique integration testing challenges: files persist between tests, file handles can leak, and platform differences can cause unexpected behavior.
Strategy: Isolated test directories with cleanup verification
```typescript
import { mkdtemp, rm, readdir, writeFile } from 'fs/promises';
import { tmpdir } from 'os';
import { join } from 'path';
import { execSync } from 'child_process';

// Test fixture that manages a temporary directory
class FileSystemTestFixture {
  private testDir?: string;
  private baselineHandles = 0;

  async setup(): Promise<string> {
    // Create isolated test directory
    this.testDir = await mkdtemp(join(tmpdir(), 'integration-test-'));

    // Capture baseline file handle count
    this.baselineHandles = this.getOpenHandleCount();

    return this.testDir;
  }

  async teardown(): Promise<void> {
    // Check for handle leaks
    const currentHandles = this.getOpenHandleCount();
    const leaked = currentHandles - this.baselineHandles;

    if (leaked > 0) {
      console.warn(`Potential file handle leak: ${leaked} handles`);
      this.logOpenHandles();
    }

    // Clean up test directory
    if (this.testDir) {
      await rm(this.testDir, { recursive: true, force: true });
    }

    // Throw after cleanup so directory is removed even on leak
    if (leaked > 2) { // Small tolerance for system noise
      throw new Error(`File handle leak detected: ${leaked} handles`);
    }
  }

  // Public so tests can snapshot handle counts around operations
  getOpenHandleCount(): number {
    try {
      const pid = process.pid;
      const result = execSync(`lsof -p ${pid} 2>/dev/null | wc -l`);
      return parseInt(result.toString().trim());
    } catch {
      return 0; // Can't check on this platform
    }
  }

  private logOpenHandles(): void {
    try {
      const pid = process.pid;
      const result = execSync(`lsof -p ${pid} 2>/dev/null`);
      console.log('Open handles:', result.toString());
    } catch {
      // Ignore
    }
  }

  async assertDirectoryEmpty(): Promise<void> {
    if (!this.testDir) return;
    const contents = await readdir(this.testDir);
    if (contents.length > 0) {
      throw new Error(
        `Test directory not cleaned up. Contains: ${contents.join(', ')}`
      );
    }
  }

  async getFileCount(): Promise<number> {
    if (!this.testDir) return 0;
    const contents = await readdir(this.testDir, { recursive: true });
    return contents.length;
  }
}

describe('FileProcessor integration', () => {
  const fixture = new FileSystemTestFixture();
  let testDir: string;
  let processor: FileProcessor;

  beforeAll(async () => {
    testDir = await fixture.setup();
  });

  afterAll(async () => {
    await fixture.teardown();
  });

  beforeEach(() => {
    processor = new FileProcessor(testDir);
  });

  afterEach(async () => {
    // Clean up files created by this test
    const files = await readdir(testDir);
    await Promise.all(files.map(f => rm(join(testDir, f), { recursive: true })));
  });

  it('should close file handles after processing', async () => {
    // Create test file
    const inputPath = join(testDir, 'input.txt');
    await writeFile(inputPath, 'test content');

    const handlesBefore = fixture.getOpenHandleCount();

    await processor.processFile(inputPath);

    // Allow async cleanup
    await sleep(50);

    const handlesAfter = fixture.getOpenHandleCount();
    expect(handlesAfter).toBeLessThanOrEqual(handlesBefore + 1); // Tolerance
  });

  it('should not leave temp files on failure', async () => {
    const inputPath = join(testDir, 'input.txt');
    await writeFile(inputPath, 'malformed content');

    await expect(processor.processFile(inputPath)).rejects.toThrow();

    // Count files - should only be the input file
    const fileCount = await fixture.getFileCount();
    expect(fileCount).toBe(1); // Only input.txt, no temp files
  });

  it('should handle concurrent file access', async () => {
    // Create multiple files
    for (let i = 0; i < 10; i++) {
      await writeFile(join(testDir, `file-${i}.txt`), `content ${i}`);
    }

    // Process all concurrently
    const results = await Promise.all(
      Array(10).fill(null).map((_, i) =>
        processor.processFile(join(testDir, `file-${i}.txt`))
      )
    );

    expect(results).toHaveLength(10);
  });

  it('should properly release file locks', async () => {
    const filePath = join(testDir, 'locked.txt');
    await writeFile(filePath, 'initial content');

    // Process file
    await processor.processFile(filePath);

    // Should be able to delete file (proves lock is released)
    await expect(rm(filePath)).resolves.not.toThrow();
  });
});
```

File locking, path handling, and permission behavior vary significantly between Windows, Linux, and macOS. If your software runs on multiple platforms, run integration tests on all target platforms. Containerized CI helps with Linux, but Windows requires separate runners.
When resources cross component boundaries, ownership and lifecycle become ambiguous. Integration tests must verify that resources are properly managed regardless of which component 'owns' them.
Pattern: Resource ownership contracts in tests
```typescript
// Testing resource ownership across service boundaries
describe('OrderService → PaymentService resource flow', () => {
  let pool: Pool;
  let orderService: OrderService;
  let paymentService: PaymentService;

  beforeAll(async () => {
    pool = await createTestPool();
    paymentService = new PaymentService(pool);
    orderService = new OrderService(pool, paymentService);
  });

  afterAll(async () => {
    await pool.end();
  });

  afterEach(async () => {
    await assertPoolHealthy(pool);
  });

  // Test the "caller owns, callee borrows" contract
  it('should maintain connection ownership when OrderService calls PaymentService', async () => {
    // OrderService acquires connection, passes to PaymentService
    // PaymentService should NOT close it
    const connectionsBefore = (pool as any).totalCount;

    await orderService.processOrderWithPayment({
      orderId: 'order-1',
      amount: 100,
    });

    await sleep(50);

    // No extra connections should be created
    const connectionsAfter = (pool as any).totalCount;
    expect(connectionsAfter).toBeLessThanOrEqual(connectionsBefore + 1);

    // All connections should be returned
    expect((pool as any).idleCount).toBeGreaterThan(0);
  });

  // Test transaction scope crossing boundaries
  it('should maintain transaction across service calls', async () => {
    // Start transaction in OrderService
    // PaymentService operations should use same transaction
    // Failure in either should rollback both

    // Simulate payment failure
    paymentService.setMockFailure(true);

    await expect(orderService.processOrderWithPayment({
      orderId: 'rollback-test',
      amount: 100,
    })).rejects.toThrow();

    // Verify complete rollback
    const orderExists = await orderService.orderExists('rollback-test');
    const paymentExists = await paymentService.paymentExists('rollback-test');

    expect(orderExists).toBe(false);
    expect(paymentExists).toBe(false);
  });

  // Test cleanup on partial failure
  it('should clean up resources when second service fails', async () => {
    const resourceTracker = new ResourceTracker();
    orderService.setResourceTracker(resourceTracker);
    paymentService.setResourceTracker(resourceTracker);

    paymentService.setMockFailure(true);

    await expect(orderService.processOrderWithPayment({
      orderId: 'partial-fail',
      amount: 100,
    })).rejects.toThrow();

    // All resources from both services should be cleaned up
    expect(resourceTracker.activeResources).toEqual([]);
  });
});

// Helper: Assert pool is in healthy state
async function assertPoolHealthy(pool: Pool): Promise<void> {
  const stats = pool as any;

  // No waiting requests
  if (stats.waitingCount > 0) {
    throw new Error(`Pool unhealthy: ${stats.waitingCount} waiting requests`);
  }

  // All connections idle
  if (stats.totalCount > 0 && stats.idleCount === 0) {
    await sleep(100); // Wait for async release
    if ((pool as any).idleCount === 0) {
      throw new Error('Pool unhealthy: no idle connections');
    }
  }
}
```

Testing request-scoped resources:
Many frameworks provide request-scoped resources that should be cleaned up when the request ends. Integration tests should verify this lifecycle.
```typescript
// Testing request-scoped resource cleanup with a real HTTP framework
import { createServer, Server } from 'http';
import { AddressInfo } from 'net';

describe('Request-scoped resource cleanup', () => {
  let server: Server;
  let serverUrl: string;
  let resourceTracker: ResourceTracker;

  beforeAll(async () => {
    resourceTracker = new ResourceTracker();
    const app = createApp({
      resourceTracker,
      connectionPool: await createTestPool(),
    });

    server = createServer(app);
    await new Promise<void>(resolve => server.listen(0, resolve));

    const address = server.address() as AddressInfo;
    serverUrl = `http://localhost:${address.port}`;
  });

  afterAll(async () => {
    await new Promise<void>(resolve => server.close(() => resolve()));
  });

  afterEach(() => {
    // Wait for async cleanup, then verify
    return new Promise<void>((resolve, reject) => {
      setTimeout(() => {
        const leaks = resourceTracker.getActiveResources();
        if (leaks.length > 0) {
          reject(new Error(`Resource leaks: ${JSON.stringify(leaks)}`));
        } else {
          resolve();
        }
      }, 100);
    });
  });

  it('should clean up resources after successful request', async () => {
    const response = await fetch(`${serverUrl}/api/users/123`);
    expect(response.status).toBe(200);
    // afterEach will verify cleanup
  });

  it('should clean up resources after error response', async () => {
    const response = await fetch(`${serverUrl}/api/users/invalid`);
    expect(response.status).toBe(400);
    // afterEach will verify cleanup
  });

  it('should clean up resources after exception', async () => {
    const response = await fetch(`${serverUrl}/api/crash`);
    expect(response.status).toBe(500);
    // afterEach will verify cleanup
  });

  it('should handle concurrent requests independently', async () => {
    const requests = Array(50).fill(null).map((_, i) =>
      fetch(`${serverUrl}/api/users/${i}`)
    );

    await Promise.all(requests);
    // All request-scoped resources should be cleaned up
  });

  it('should clean up on client disconnect', async () => {
    const controller = new AbortController();

    const requestPromise = fetch(`${serverUrl}/api/slow-operation`, {
      signal: controller.signal,
    });

    // Abort after request starts
    setTimeout(() => controller.abort(), 50);

    await expect(requestPromise).rejects.toThrow();

    // Resources for aborted request should still be cleaned up
  });
});
```

Some resource leaks only manifest under load. Low-frequency leaks and concurrency-related leaks require sustained traffic to become visible.
Strategy: Sustained load with resource monitoring
```typescript
// Load test infrastructure for resource leak detection
interface ResourceMetrics {
  timestamp: number;
  heapUsed: number;
  heapTotal: number;
  fileDescriptors: number;
  activeConnections: number;
  requestsCompleted: number;
  errorsCount: number;
}

// Note: the LoadTestReport interface and the getFileDescriptorCount,
// getActiveConnectionCount, generateLoad, and generateLoadBatch helpers
// are elided here; they wrap an FD counter (e.g. lsof), pool statistics,
// and an HTTP request generator.
class LoadTestRunner {
  private metrics: ResourceMetrics[] = [];
  private running = false;

  async runLoadTest(config: {
    targetUrl: string;
    durationSeconds: number;
    requestsPerSecond: number;
    warmupSeconds?: number;
  }): Promise<LoadTestReport> {
    const { targetUrl, durationSeconds, requestsPerSecond, warmupSeconds = 10 } = config;

    this.running = true;
    this.metrics = [];
    let requestsCompleted = 0;
    let errorsCount = 0;

    // Start metrics collection
    const metricsInterval = setInterval(() => {
      if (!this.running) return;
      this.metrics.push({
        timestamp: Date.now(),
        heapUsed: process.memoryUsage().heapUsed,
        heapTotal: process.memoryUsage().heapTotal,
        fileDescriptors: this.getFileDescriptorCount(),
        activeConnections: this.getActiveConnectionCount(),
        requestsCompleted,
        errorsCount,
      });
    }, 1000);

    // Warmup phase
    console.log(`Warming up for ${warmupSeconds}s...`);
    await this.generateLoad(targetUrl, warmupSeconds, requestsPerSecond);

    // Clear warmup metrics
    this.metrics = [];
    requestsCompleted = 0;
    errorsCount = 0;

    // Main test phase
    console.log(`Running load test for ${durationSeconds}s at ${requestsPerSecond} rps...`);
    const startTime = Date.now();

    while ((Date.now() - startTime) < durationSeconds * 1000) {
      const results = await this.generateLoadBatch(targetUrl, requestsPerSecond);
      requestsCompleted += results.successes;
      errorsCount += results.failures;
    }

    this.running = false;
    clearInterval(metricsInterval);

    return this.analyzeResults();
  }

  private analyzeResults(): LoadTestReport {
    const firstMetric = this.metrics[0];
    const lastMetric = this.metrics[this.metrics.length - 1];

    // Calculate trends
    const heapTrend = this.calculateTrend(this.metrics.map(m => m.heapUsed));
    const fdTrend = this.calculateTrend(this.metrics.map(m => m.fileDescriptors));
    const connTrend = this.calculateTrend(this.metrics.map(m => m.activeConnections));

    // Detect leaks
    const leakIndicators: string[] = [];

    // Memory: monotonic increase is suspicious
    if (heapTrend.percentIncrease > 50 && heapTrend.isMonotonic) {
      leakIndicators.push(
        `Memory grew ${heapTrend.percentIncrease.toFixed(1)}% monotonically`
      );
    }

    // File descriptors: should be stable
    if (lastMetric.fileDescriptors > firstMetric.fileDescriptors + 10) {
      leakIndicators.push(
        `File descriptors grew from ${firstMetric.fileDescriptors} to ${lastMetric.fileDescriptors}`
      );
    }

    // Connections: should return to baseline between bursts
    if (connTrend.percentIncrease > 20 && connTrend.isMonotonic) {
      leakIndicators.push(
        `Active connections trending upward (${connTrend.percentIncrease.toFixed(1)}%)`
      );
    }

    return {
      duration: (lastMetric.timestamp - firstMetric.timestamp) / 1000,
      totalRequests: lastMetric.requestsCompleted,
      totalErrors: lastMetric.errorsCount,
      heapGrowth: lastMetric.heapUsed - firstMetric.heapUsed,
      fdGrowth: lastMetric.fileDescriptors - firstMetric.fileDescriptors,
      leakIndicators,
      passed: leakIndicators.length === 0,
      metrics: this.metrics,
    };
  }

  private calculateTrend(values: number[]): { percentIncrease: number; isMonotonic: boolean } {
    const first = values[0];
    const last = values[values.length - 1];
    const percentIncrease = ((last - first) / first) * 100;

    const isMonotonic = values.every((val, i) =>
      i === 0 || val >= values[i - 1] * 0.95 // 5% tolerance
    );

    return { percentIncrease, isMonotonic };
  }
}

// Usage in test
describe('Load testing for resource leaks', () => {
  it('should not leak resources under sustained load', async () => {
    const runner = new LoadTestRunner();

    const report = await runner.runLoadTest({
      targetUrl: 'http://localhost:3000/api/process',
      durationSeconds: 60,
      requestsPerSecond: 100,
      warmupSeconds: 10,
    });

    console.log(`Completed ${report.totalRequests} requests in ${report.duration}s`);
    console.log(`Heap growth: ${(report.heapGrowth / 1024 / 1024).toFixed(2)}MB`);
    console.log(`FD growth: ${report.fdGrowth}`);

    if (report.leakIndicators.length > 0) {
      console.error('Leak indicators:', report.leakIndicators);
    }

    expect(report.passed).toBe(true);
  }, 120000);
});
```

When analyzing load test results for leaks, focus on trends rather than absolute values. A process using 500MB of memory isn't necessarily leaking—but a process whose minimum memory after GC keeps increasing over time definitely is. Track the baseline, not just the peak.
Resource behavior varies between environments—development, staging, and production often have different configurations, limits, and behaviors that affect resource management.
Common environment-specific resource issues:
| Aspect | Dev/Test | Production | Impact on Resource Management |
|---|---|---|---|
| Connection pool size | 5-10 | 100+ | Leaks visible sooner in dev, masked in prod |
| File descriptor limits | 1024 | 65536 | Leaks hit limits much faster in dev |
| Memory limits | Unlimited | Container limits | OOM kills in prod, not dev |
| Timeout settings | Long/none | Aggressive | Cleanup on timeout not tested |
| Concurrency | Single user | Many users | Race conditions not triggered |
| Network reliability | Localhost | Real network | Disconnect handling not tested |
```typescript
// Tests that simulate production-like resource constraints
describe('Resource management under production-like constraints', () => {
  // One pool shared across the scenarios below, created once and
  // ended once so later describes don't use an already-closed pool
  let pool: Pool;

  beforeAll(async () => {
    pool = new Pool({
      max: 3, // Very small - makes leaks visible quickly
      connectionTimeoutMillis: 1000, // Short timeout
      idleTimeoutMillis: 5000,
    });
  });

  afterAll(() => pool.end());

  describe('with small pool (simulating exhaustion potential)', () => {
    it('should not exhaust tiny pool under normal usage', async () => {
      const repository = new UserRepository(pool);

      // Sequential requests - should reuse connections
      for (let i = 0; i < 100; i++) {
        await repository.findById(`user-${i}`);
      }

      // Pool should not be exhausted
      expect((pool as any).waitingCount).toBe(0);
    });

    it('should queue when pool exhausted, not fail', async () => {
      const repository = new UserRepository(pool);

      // More concurrent than pool size
      const requests = Array(10).fill(null).map((_, i) =>
        repository.slowQuery(`user-${i}`) // Takes 100ms each
      );

      // Should complete (queuing), not fail
      await expect(Promise.all(requests)).resolves.toBeDefined();
    });
  });

  describe('with aggressive timeouts', () => {
    it('should cleanup resources when timeout cancels operation', async () => {
      const tracker = new ResourceTracker();
      const repository = new UserRepository(pool, { tracker });

      const controller = new AbortController();

      // Start slow operation
      const slowOperation = repository.verySlowQuery('test', {
        signal: controller.signal,
      });

      // Cancel quickly
      setTimeout(() => controller.abort(), 50);

      await expect(slowOperation).rejects.toThrow();

      // Resources should still be cleaned up
      await sleep(100); // Allow cleanup
      expect(tracker.activeResources).toHaveLength(0);
    });
  });

  describe('with memory pressure', () => {
    it('should release resources under memory pressure', async () => {
      // Allocate memory to create pressure
      const pressure: Buffer[] = [];

      try {
        // Create memory pressure
        for (let i = 0; i < 100; i++) {
          pressure.push(Buffer.alloc(10 * 1024 * 1024)); // 10MB each
        }
      } catch {
        // Expected - we're creating pressure
      }

      // Force GC if available
      if (global.gc) global.gc();

      // Now run normal operations
      const repository = new UserRepository(pool);

      // Should work despite memory pressure
      await expect(repository.findById('test')).resolves.toBeDefined();

      // Cleanup
      pressure.length = 0;
      if (global.gc) global.gc();
    });
  });

  describe('simulating network unreliability', () => {
    let proxyServer: ToxiproxyServer;

    beforeAll(async () => {
      // Toxiproxy allows simulating network failures
      proxyServer = await ToxiproxyServer.create();
    });

    afterAll(async () => {
      await proxyServer.stop();
    });

    it('should cleanup when connection is severed mid-operation', async () => {
      const proxy = await proxyServer.createProxy('db', {
        listen: ':15432',
        upstream: 'localhost:5432',
      });

      const proxiedPool = new Pool({ port: 15432 });
      const tracker = new ResourceTracker();
      const repository = new UserRepository(proxiedPool, { tracker });

      // Start query, then kill connection
      const queryPromise = repository.slowQuery('test');

      setTimeout(async () => {
        await proxy.addToxic('reset', { type: 'reset_peer' });
      }, 50);

      await expect(queryPromise).rejects.toThrow();

      // Resources should be cleaned up despite network failure
      await sleep(100);
      expect(tracker.activeResources).toHaveLength(0);

      await proxiedPool.end();
    });
  });
});
```

Integration tests for resources need special consideration in CI/CD pipelines. They require infrastructure, have longer run times, and need careful resource cleanup between runs.
CI/CD pipeline design for resource integration tests:
```yaml
name: Integration Tests

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]
  workflow_dispatch: # Allow manual runs of the load-test job
  schedule:
    # Run full suite including load tests nightly
    - cron: '0 2 * * *'

jobs:
  integration-tests:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: test
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

      redis:
        image: redis:7
        ports:
          - 6379:6379

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run migrations
        run: npm run db:migrate
        env:
          DATABASE_URL: postgresql://postgres:test@localhost:5432/testdb

      - name: Run integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgresql://postgres:test@localhost:5432/testdb
          REDIS_URL: redis://localhost:6379
          NODE_OPTIONS: '--expose-gc' # Enable GC for memory leak detection

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: integration-test-results
          path: test-results/

  load-tests:
    # Only run on schedule or manual trigger
    if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Start application
        run: |
          npm run start:test &
          sleep 10 # Wait for startup

      - name: Run load tests
        run: npm run test:load
        timeout-minutes: 30

      - name: Analyze and report
        run: npm run analyze:load-results

      - name: Upload load test results
        uses: actions/upload-artifact@v4
        with:
          name: load-test-results
          path: load-test-results/
```

Integration testing completes the resource management testing story. While unit tests verify that individual components handle resources correctly, integration tests verify that the system works together correctly across boundaries and under realistic conditions.
Module complete!
You now have a comprehensive understanding of testing resource management code. From verifying cleanup in unit tests, to detecting leaks with profiling, to mocking external resources, to integration testing across component boundaries—you have the tools to build confidence that your resource management code is production-ready.
The patterns and techniques in this module apply across all types of resources: memory, files, connections, handles, and beyond. The investment in thorough testing pays dividends in system reliability, reduced operational burden, and the confidence to ship changes without fear of resource-related incidents.
Congratulations! You've completed the Testing Resource Management module. You now understand how to verify cleanup, detect leaks, mock resources, and integration test resource management across your systems. Apply these techniques to build robust, leak-free applications.