Resources come in many forms, each with unique characteristics, management requirements, and failure modes. While the previous page established the abstract definition of resources, this page provides a concrete taxonomy—a comprehensive catalog of the resource types you'll encounter as a professional software engineer.
Understanding these categories is not merely academic. Each resource type has its own acquisition costs, failure modes, and management requirements; recognizing which category a resource belongs to tells you which management patterns to apply.
Let's systematically explore each major category.
By the end of this page, you will understand: (1) The major categories of system resources, (2) The specific characteristics of memory, file, connection, handle, and execution resources, (3) The management requirements for each type, and (4) How to recognize which category a resource belongs to.
Memory is the most fundamental resource in computing. Every object, every data structure, every buffer consumes memory. While managed languages with garbage collection abstract away much of memory management, memory resources still require attention in several contexts.
Categories of memory resources:
1. Heap Memory (Managed)
In languages like Java, C#, Python, and JavaScript, heap memory is managed by the garbage collector. Objects are allocated on the heap and automatically reclaimed when no longer referenced. This appears automatic, but several caveats remain:

- Unbounded collections (caches, registries, listener lists) grow until memory is exhausted
- Objects reachable from long-lived references are never reclaimed, even when logically dead
- The collector reclaims memory only; it knows nothing about files, sockets, or other resources an object may hold
```typescript
// Memory resource considerations in managed languages

// ❌ Unbounded cache - memory will grow indefinitely
const userCache = new Map<string, User>();

function getUser(id: string): User {
  if (!userCache.has(id)) {
    const user = loadFromDatabase(id);
    userCache.set(id, user); // Never evicted!
  }
  return userCache.get(id)!;
}

// ✅ Bounded cache with LRU eviction
import { LRUCache } from 'lru-cache';

const boundedCache = new LRUCache<string, User>({
  max: 10000,                // Maximum entries
  maxSize: 50 * 1024 * 1024, // Maximum 50MB
  sizeCalculation: (user) => JSON.stringify(user).length,
  ttl: 1000 * 60 * 60,       // 1 hour TTL
});
// Memory is now bounded and automatically managed
```

2. Native/Unmanaged Memory
Some languages allow direct allocation of memory outside the managed heap:
- C/C++: `malloc()`, `new`, stack allocations
- Node.js: `Buffer.allocUnsafe()`
- Java: `ByteBuffer.allocateDirect()`
- C#: unmanaged memory

Native memory bypasses garbage collection entirely. You must explicitly allocate and deallocate, or use patterns like RAII (Resource Acquisition Is Initialization) that tie memory lifetime to object scope.
```cpp
// Native memory management in C++
#include <memory>
#include <vector>

bool someCondition(); // assumed defined elsewhere

// ❌ Manual allocation - easy to leak
void risky() {
    int* data = new int[1000];
    if (someCondition()) {
        return; // Memory leaked! Never deleted.
    }
    delete[] data; // Only reached if condition is false
}

// ✅ RAII with smart pointers - cannot leak
void safe() {
    auto data = std::make_unique<int[]>(1000);
    if (someCondition()) {
        return; // data automatically deleted
    }
    // data automatically deleted here too
}

// ✅ RAII with standard containers
void safest() {
    std::vector<int> data(1000);
    // Memory managed automatically by vector
    // Cannot leak under any circumstances
}
```

3. Shared Memory
Shared memory regions allow multiple processes to access the same memory space:
- Created via System V `shmget` or `mmap` with `MAP_SHARED`

4. Memory-Mapped Files
Memory-mapped files map file contents directly into process address space:
- Created via `mmap()` (Unix) or `CreateFileMapping()` (Windows)

Files represent persistent data storage, one of the oldest and most fundamental resource types in computing. Despite their ubiquity, file resources are frequently mismanaged, causing data corruption, resource leaks, and cross-process conflicts.
The anatomy of a file resource:
When you 'open a file,' multiple resources are actually involved:

- A file descriptor (or handle): the numeric token your process holds
- A kernel open-file table entry tracking position and access mode
- Buffers, in user space and in the kernel, for pending reads and writes
- Possibly locks, if the file was opened with locking semantics
Each of these has lifecycle implications. Failing to close a file doesn't just leak a number—it leaves buffers unflushed, locks held, and kernel resources consumed.
| Mode | Description | Resource Implications |
|---|---|---|
| Read-only | Open for reading | Shared access, no write conflicts |
| Write-only | Open for writing | May truncate, exclusive write access |
| Read-write | Open for both | Combines both access patterns |
| Append | Write only at end | Writes always land at end; seek position ignored for writes |
| Exclusive create | Fail if exists | Atomic creation for uniqueness |
| Truncate | Clear on open | Existing content lost immediately |
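Exclusive create deserves a closer look, since it is the basis of simple lock files. Below is a minimal sketch, assuming Node.js, where the `wx` flag maps to open-for-write plus fail-if-exists; the lock path is hypothetical:

```typescript
import { writeFileSync, unlinkSync, existsSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';

// 'wx' = write + exclusive create: the open fails with EEXIST if the file
// already exists. The check-and-create is atomic in the kernel, which is
// what makes it safe for lock files.
const lockPath = join(tmpdir(), 'taxonomy-demo.lock'); // hypothetical path

function tryCreateLockFile(path: string): boolean {
  try {
    writeFileSync(path, String(process.pid), { flag: 'wx' });
    return true; // We created the file - we "own" the lock
  } catch (err: any) {
    if (err.code === 'EEXIST') return false; // Someone got there first
    throw err; // Any other error is unexpected
  }
}

if (existsSync(lockPath)) unlinkSync(lockPath); // Start from a clean state
const first = tryCreateLockFile(lockPath);  // succeeds: file did not exist
const second = tryCreateLockFile(lockPath); // fails: EEXIST
unlinkSync(lockPath); // Release the file resource we created
```

Note that the lock file itself is a resource: forgetting the final `unlinkSync` leaves the lock held forever, the file-system analogue of a leaked handle.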
File descriptor limits:
Operating systems impose strict limits on file descriptors:

- A per-process soft limit (e.g., `ulimit -n`, commonly 1024 by default on Linux)
- A per-process hard limit, raisable only with elevated privileges
- A system-wide limit on total open files (e.g., `/proc/sys/fs/file-max` on Linux)
Exceeding these limits causes EMFILE ('Too many open files') or ENFILE ('File table overflow') errors. These errors are particularly insidious because:

- They surface far from the leak, in whatever code happens to open the next descriptor
- They appear only under load, after leaked descriptors have accumulated over time
- They break unrelated operations: logging, network calls, and even error reporting all need descriptors too
Special file types:
Not all 'files' are regular disk files:

- Pipes and FIFOs
- Sockets (network and Unix domain)
- Device files (`/dev/...`)
- Virtual filesystem entries (`/proc`, `/sys` on Linux)

Each of these consumes a file descriptor and must be closed, even though no disk data is involved.
```typescript
import { open, readFile } from 'fs/promises';

// ❌ Leaky file handling
async function leakyRead(path: string): Promise<string> {
  const file = await open(path, 'r');
  const content = await file.readFile('utf-8');
  // file.close() never called - file descriptor leaked!
  return content;
}

// ❌ Leak on error path
async function leakyOnError(path: string): Promise<string> {
  const file = await open(path, 'r');
  const content = await file.readFile('utf-8');
  if (!content.startsWith('HEADER')) {
    throw new Error('Invalid format'); // file.close() skipped due to throw!
  }
  await file.close();
  return content;
}

// ✅ Proper file resource management with try/finally
async function safeRead(path: string): Promise<string> {
  const file = await open(path, 'r');
  try {
    const content = await file.readFile('utf-8');
    if (!content.startsWith('HEADER')) {
      throw new Error('Invalid format');
    }
    return content;
  } finally {
    await file.close(); // Always executes
  }
}

// ✅ Modern approach: use a helper that encapsulates cleanup
async function simplestRead(path: string): Promise<string> {
  // readFile opens, reads, and closes automatically
  return readFile(path, 'utf-8');
}
```

Connections represent bidirectional communication channels between your application and external systems. They are among the most expensive resources to acquire and the most impactful when mismanaged.
Network connections encompass multiple stacked resources:

- A socket (a file descriptor) at the OS level
- TCP state in the kernel: buffers, sequence tracking, congestion windows
- TLS session state (keys, negotiated parameters), if encrypted
- Application-protocol state, such as an authenticated database session
Each layer adds establishment overhead and cleanup requirements.
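Even a bare TCP exchange shows the acquire/use/release shape at the socket layer. A minimal sketch using Node's `net` module, with an ephemeral local server standing in for a real peer:

```typescript
import { createServer, connect } from 'net';

// Acquire a listening socket, a client socket, exchange data, then release
// both. Every step here consumes a file descriptor plus kernel TCP state.
async function roundTrip(): Promise<string> {
  return new Promise((resolve, reject) => {
    const server = createServer((socket) => {
      socket.end('hello'); // Server sends its payload and closes its side
    });
    server.listen(0, () => { // Port 0 = let the OS pick a free port
      const port = (server.address() as { port: number }).port;
      const client = connect(port, '127.0.0.1');
      let data = '';
      client.on('data', (chunk) => (data += chunk));
      client.on('end', () => {
        client.end();   // Release our side of the connection
        server.close(); // Release the listening socket too
        resolve(data);
      });
      client.on('error', reject);
    });
  });
}
```

Forgetting `server.close()` is a classic leak: the process keeps a listening descriptor (and stays alive) long after the work is done.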
The connection lifecycle:
┌─────────────────────────────────────────────────────────────┐
│ CONNECTION LIFECYCLE │
├─────────────────────────────────────────────────────────────┤
│ │
│ [Closed] ──Create──► [Connecting] ──Handshake──► [Open] │
│ │ │
│ ▼ │
│ [Closed] ◄──Close── [Closing] ◄──Shutdown── [Active] │
│ │
└─────────────────────────────────────────────────────────────┘
Connections can also enter error states: connection refused, connection reset, timeout. Error handling must account for all states.
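One common defense is to bound connection establishment with an explicit timeout, so a hung handshake surfaces as an error state instead of blocking forever. A sketch, with simulated connect attempts standing in for real ones:

```typescript
// Race the connect attempt against a timer; whichever settles first wins.
// The timer is cleared either way so it does not linger as its own leak.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('connect timeout')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Simulated connection attempts: one completes, one hangs forever
const connectFast = new Promise<string>((res) => setTimeout(() => res('open'), 10));
const connectHung = new Promise<string>(() => { /* never settles */ });

async function demo(): Promise<{ ok: string; failure: string }> {
  const ok = await withTimeout(connectFast, 1000); // resolves normally
  let failure = '';
  try {
    await withTimeout(connectHung, 50); // surfaces as a timeout error
  } catch (e) {
    failure = (e as Error).message;
  }
  return { ok, failure };
}
```

Real drivers expose this as configuration (see `connectionTimeoutMillis` in the pooling example below the table), but the underlying mechanism is the same race.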
| Connection Type | Establishment Cost | Typical Pool Size | Leak Impact |
|---|---|---|---|
| PostgreSQL | 5-50ms | 10-50 | Query failures, 'too many clients' |
| MySQL | 5-30ms | 10-100 | Connection refused errors |
| Redis | 1-10ms | 10-50 | Slower than expected (creates on demand) |
| MongoDB | 10-100ms | 5-50 | Timeout errors |
| HTTP/1.1 | 50-200ms (TLS) | Per-host limits | Socket exhaustion |
| HTTP/2 | 50-200ms (TLS) | Few, multiplexed | Stream limits hit |
| gRPC | 50-200ms (TLS) | Few, multiplexed | Stream/message limits |
| WebSocket | 50-200ms (TLS) | Application-dependent | Memory growth, client timeouts |
Connection pooling:
Due to high establishment costs, connections are almost always pooled. A pool:

- Establishes connections ahead of time, up to a configured minimum
- Lends an already-open connection to each caller instead of creating a new one
- Reclaims connections when callers release them
- Bounds total connections at a configured maximum
- Retires connections that sit idle or turn out to be broken
Connection pools transform connection usage from acquire-release to borrow-return semantics.
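The borrow-return discipline can be captured once in a helper so that no call site can forget the return. A sketch with a toy pool; `withBorrowed` is an illustrative name, not a library API:

```typescript
// Generic borrow-return helper: the resource goes back no matter how
// `use` exits (return, throw, or early exit).
async function withBorrowed<R, T>(
  borrow: () => Promise<R>,
  giveBack: (resource: R) => void,
  use: (resource: R) => Promise<T>
): Promise<T> {
  const resource = await borrow();
  try {
    return await use(resource);
  } finally {
    giveBack(resource); // Runs on success and on failure
  }
}

// Tiny fake pool to show the borrow/return accounting
const pool = {
  outstanding: 0,
  borrow: async () => { pool.outstanding++; return { id: 1 }; },
  giveBack: (_: { id: number }) => { pool.outstanding--; },
};

async function demoBorrow(): Promise<number> {
  await withBorrowed(pool.borrow, pool.giveBack, async () => 'result');
  try {
    await withBorrowed(pool.borrow, pool.giveBack, async () => {
      throw new Error('query failed'); // resource is still returned
    });
  } catch { /* expected */ }
  return pool.outstanding; // 0 - every borrow was returned
}
```

This is exactly the shape that `pool.query()` in the example below bakes in for the common case.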
```typescript
import { Pool, PoolClient } from 'pg';

// Connection pool configuration
const pool = new Pool({
  host: 'localhost',
  database: 'myapp',
  user: 'app',
  password: process.env.DB_PASSWORD,
  // Pool sizing
  min: 5,  // Minimum connections to maintain
  max: 20, // Maximum connections allowed
  // Timeouts
  idleTimeoutMillis: 30000,      // Close idle connections after 30s
  connectionTimeoutMillis: 5000, // Fail if can't connect in 5s
  // Health checks
  allowExitOnIdle: true, // Allow process to exit when idle
});

// ❌ Leaky: Connection borrowed but never returned
async function leakyQuery(id: string): Promise<User> {
  const client = await pool.connect(); // Borrow from pool
  const result = await client.query(
    'SELECT * FROM users WHERE id = $1',
    [id]
  );
  // client.release() never called!
  // Connection stuck - pool will exhaust at max size
  return result.rows[0];
}

// ❌ Leaky on error: Connection not returned if query fails
async function leakyOnError(id: string): Promise<User> {
  const client = await pool.connect();
  const result = await client.query(
    'SELECT * FROM users WHERE id = $1',
    [id]
  );
  // If query throws, release() is skipped
  client.release();
  return result.rows[0];
}

// ✅ Correct: Always return connection with try/finally
async function safeQuery(id: string): Promise<User> {
  const client = await pool.connect();
  try {
    const result = await client.query(
      'SELECT * FROM users WHERE id = $1',
      [id]
    );
    return result.rows[0];
  } finally {
    client.release(); // Always returns to pool
  }
}

// ✅ Even better: Use pool.query() for simple cases
async function simpleQuery(id: string): Promise<User> {
  // pool.query() automatically borrows, executes, and releases
  const result = await pool.query(
    'SELECT * FROM users WHERE id = $1',
    [id]
  );
  return result.rows[0];
}
```

Connection leaks often trigger a death spiral: pool exhausts → requests wait for connections → timeouts occur → error handlers may leak too → failed requests retry → more load → faster exhaustion. Detection and prevention are critical.
Handles are opaque references to kernel or system objects. Unlike connections (which represent channels) or memory (which represents storage), handles are tokens that grant capability. Possessing a valid handle entitles you to perform operations on the underlying resource.
Handle semantics:

- Opaque: the value itself (often a small integer) carries no intrinsic meaning
- Context-bound: valid only within the process or session that acquired it
- Capability-granting: possession, not identity, is what authorizes operations
- Recyclable: once closed, the same value may be handed out for a different resource
The handle lifecycle:
┌────────────────────────────────────────────────────────────┐
│ HANDLE LIFECYCLE │
├────────────────────────────────────────────────────────────┤
│ │
│ [Invalid] │
│ │ │
│ Acquire (open, create, connect) │
│ ▼ │
│ [Valid] ──────────► Use (read, write, query) │
│ │ │ │
│ │ ▼ │
│ │ May become stale │
│ │ (remote close, timeout) │
│ │ │ │
│ ▼ ▼ │
│ Release (close, dispose) │
│ │ │
│ ▼ │
│ [Invalid] ◄────────── Handle reused │
│ for new resource! │
│ │
└────────────────────────────────────────────────────────────┘
Handle reuse hazard:
When a handle is closed, its integer value becomes available for reuse. If you:

- close a handle but keep a stale copy of its value,
- then acquire a new resource that happens to receive that same value,
- and later use the stale copy,

you will silently operate on the wrong resource.
This is why using closed handles is undefined behavior, not a clean error.
```python
import os

# Demonstrating handle reuse hazard (educational only!)
def handle_reuse_hazard():
    # Open a file, get file descriptor
    fd1 = os.open("/tmp/file1.txt", os.O_CREAT | os.O_WRONLY)
    print(f"File 1 descriptor: {fd1}")  # e.g., "3"

    # Close the file - handle is now invalid
    os.close(fd1)

    # Open another file - might get the same descriptor!
    fd2 = os.open("/tmp/file2.txt", os.O_CREAT | os.O_WRONLY)
    print(f"File 2 descriptor: {fd2}")  # Possibly also "3"!

    # If fd1 == fd2 (handle reused), and we have a 'stale' fd1 reference:
    # Writing to fd1 would actually write to file2!

    # ❌ Bug: using stale handle
    # os.write(fd1, b"Meant for file 1")  # Would write to file 2!

    os.close(fd2)

# ✅ ALWAYS set handles to None/invalid after closing
class SafeFileHandle:
    def __init__(self):
        self._fd: int | None = None

    def open(self, path: str) -> None:
        self._fd = os.open(path, os.O_CREAT | os.O_RDWR)

    def close(self) -> None:
        if self._fd is not None:
            os.close(self._fd)
            self._fd = None  # Mark as invalid immediately

    def write(self, data: bytes) -> int:
        if self._fd is None:
            raise RuntimeError("Handle is not open")
        return os.write(self._fd, data)
```

Threads and processes are execution resources—they represent active computation capacity. Unlike passive resources (files, connections), these are active entities that consume CPU time, have their own state, and may hold other resources.
Thread resources include:

- Stack memory (typically 1-8 MB per thread)
- A kernel thread object and a scheduler entry
- Thread-local storage
- Whatever other resources the thread acquires while running (locks, connections, handles)
| Resource Aspect | Thread (within process) | Process (separate) |
|---|---|---|
| Memory overhead | ~1-8 MB (stack) | ~20-50 MB (address space) |
| Creation time | ~10-100 μs | ~1-100 ms |
| Context switch | ~1-10 μs | ~10-100 μs |
| Memory isolation | None (shared heap) | Full (separate address spaces) |
| Failure isolation | None (crash kills process) | Full (crash isolated) |
| Communication cost | Cheap (shared memory) | Expensive (IPC) |
Thread pool pattern:
Like connection pools, thread pools amortize creation cost and bound resource usage:

- A fixed or bounded set of worker threads is created up front
- Tasks are submitted to a queue instead of spawning new threads
- Idle workers pull queued tasks, reusing the same stacks and kernel objects
- Shutdown explicitly joins or terminates every worker
Thread pools transform 'spawn thread for each task' into 'submit task to pool.'
Process resources and zombies:
When a child process completes, it enters a 'zombie' state until the parent calls `wait()` or `waitpid()` to collect its exit status.

Failing to wait for child processes leaves zombies behind. Each zombie consumes a process-table entry, and enough of them can exhaust available PIDs.
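In Node.js the runtime reaps children for you, but the pattern still applies: always observe the child's exit and collect its status. A minimal sketch using `child_process`:

```typescript
import { spawn } from 'child_process';

// Wrap a child process so its exit status is always collected. This is the
// moral equivalent of waitpid(): the promise settles only once the child's
// lifecycle is fully accounted for.
function runChild(cmd: string, args: string[]): Promise<number> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { stdio: 'ignore' });
    child.on('error', reject);                       // e.g., command not found
    child.on('exit', (code) => resolve(code ?? -1)); // exit status collected
  });
}
```

For example, `runChild(process.execPath, ['-e', 'process.exit(7)'])` resolves to `7`. The key habit is structural: every spawn is paired with a handler that consumes the exit.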
```typescript
import { Worker, isMainThread, parentPort, workerData } from 'worker_threads';

// Simple thread pool implementation concept
class ThreadPool {
  private workers: Worker[] = [];
  private taskQueue: Array<{
    task: any;
    resolve: (value: any) => void;
    reject: (error: any) => void;
  }> = [];
  private availableWorkers: Set<Worker>;

  constructor(private size: number, private workerScript: string) {
    this.availableWorkers = new Set();
    this.initializeWorkers();
  }

  private initializeWorkers(): void {
    for (let i = 0; i < this.size; i++) {
      const worker = new Worker(this.workerScript);
      this.workers.push(worker);
      this.availableWorkers.add(worker);
      worker.on('message', (result) => {
        this.availableWorkers.add(worker);
        this.processQueue();
      });
      worker.on('error', (error) => {
        // Worker crashed - replace it
        this.replaceWorker(worker);
      });
    }
  }

  private replaceWorker(deadWorker: Worker): void {
    const index = this.workers.indexOf(deadWorker);
    const newWorker = new Worker(this.workerScript);
    this.workers[index] = newWorker;
    this.availableWorkers.add(newWorker);
    // Set up message handlers...
  }

  async execute<T>(task: any): Promise<T> {
    return new Promise((resolve, reject) => {
      this.taskQueue.push({ task, resolve, reject });
      this.processQueue();
    });
  }

  private processQueue(): void {
    while (this.taskQueue.length > 0 && this.availableWorkers.size > 0) {
      const { task, resolve, reject } = this.taskQueue.shift()!;
      const worker = this.availableWorkers.values().next().value;
      this.availableWorkers.delete(worker);
      worker.once('message', resolve);
      worker.once('error', reject);
      worker.postMessage(task);
    }
  }

  // ✅ Critical: Clean up all workers on shutdown
  async shutdown(): Promise<void> {
    const terminations = this.workers.map(w => w.terminate());
    await Promise.all(terminations);
    this.workers = [];
    this.availableWorkers.clear();
  }
}
```

Threads often hold other resources (connections, locks, file handles). If a thread is terminated or abandoned without cleanup, those resources leak.
Thread termination must include cleanup of all resources the thread acquired during execution.
Beyond the core resource types, many specialized resources exist in specific domains:
Synchronization resources:

- Mutexes and read-write locks
- Semaphores
- Condition variables
- Distributed locks (e.g., Redis- or ZooKeeper-based)
Synchronization resources are dangerous because leaking them (holding indefinitely) causes deadlocks rather than exhaustion.
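The 'hold briefly, always release' rule can be made structural by pairing acquisition with a `finally` block. A minimal in-process async mutex sketch (illustrative, not production-grade):

```typescript
// Minimal in-process async mutex. Leaking the lock (never releasing it)
// deadlocks every future waiter, which is why release lives in `finally`.
class AsyncMutex {
  private queue: Array<() => void> = [];
  private locked = false;

  async acquire(): Promise<() => void> {
    if (!this.locked) {
      this.locked = true; // Fast path: lock was free
    } else {
      // Wait in line; release() will resolve us and hand over the lock
      await new Promise<void>((resolve) => this.queue.push(resolve));
    }
    return () => this.release();
  }

  private release(): void {
    const next = this.queue.shift();
    if (next) next();          // Hand the lock directly to the next waiter
    else this.locked = false;  // No waiters - unlock
  }
}

async function withLock<T>(mutex: AsyncMutex, fn: () => Promise<T>): Promise<T> {
  const release = await mutex.acquire();
  try {
    return await fn();
  } finally {
    release(); // ✅ Always release, even if fn throws
  }
}
```

The distributed-lock example below follows the same try/finally shape, with a TTL added as a safety net in case the holder crashes before releasing.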
Timer and scheduling resources:

- `setTimeout` / `setInterval` handles
- OS timers and scheduled jobs (cron entries, delayed queue messages)
These are often overlooked because they don't block on acquire. But they consume memory and kernel resources, and failing to cancel them causes memory leaks and unexpected callbacks.
Cache entries as resources:
Cache entries, while just memory, behave as resources when they:

- pin references that would otherwise be garbage-collected
- require explicit invalidation to stay correct
- carry TTLs or size limits that must be enforced
```typescript
// Timer resources example
class OrderProcessor {
  private reminderTimers = new Map<string, NodeJS.Timeout>();

  async processOrder(orderId: string): Promise<void> {
    // Schedule reminder
    const timer = setTimeout(() => {
      this.sendReminder(orderId);
    }, 24 * 60 * 60 * 1000); // 24 hours
    this.reminderTimers.set(orderId, timer);
  }

  async completeOrder(orderId: string): Promise<void> {
    // ✅ Clean up timer resource when no longer needed
    const timer = this.reminderTimers.get(orderId);
    if (timer) {
      clearTimeout(timer);
      this.reminderTimers.delete(orderId);
    }
    await this.markComplete(orderId);
  }

  // ✅ Cleanup all timers on shutdown
  shutdown(): void {
    for (const timer of this.reminderTimers.values()) {
      clearTimeout(timer);
    }
    this.reminderTimers.clear();
  }
}

// Distributed lock example
import { RedisLock } from './redis-lock';

async function processExclusively(resourceId: string): Promise<void> {
  const lock = new RedisLock(redis);

  // Acquire distributed lock
  const acquired = await lock.acquire(`resource:${resourceId}`, {
    ttl: 30000,      // Auto-expire after 30s (safety)
    retryDelay: 100, // Retry every 100ms if not acquired
    retryCount: 50,  // Give up after 50 retries
  });

  if (!acquired) {
    throw new Error('Could not acquire lock');
  }

  try {
    await doExclusiveWork(resourceId);
  } finally {
    // ✅ ALWAYS release lock, even on error
    await lock.release();
  }
}
```

We've surveyed the major categories of resources you'll encounter in production software systems. Let's consolidate:

- Memory: heap, native, shared, and memory-mapped regions
- Files: descriptors, buffers, locks, and the many kinds of special files
- Connections: sockets, TLS sessions, and pooled database/HTTP channels
- Handles: opaque, recyclable tokens for kernel and system objects
- Threads and processes: execution capacity that itself holds other resources
- Specialized resources: synchronization primitives, timers, and cache entries
What's next:
Now that we understand the types of resources, we'll examine the resource lifecycle in detail—the phases every resource goes through from creation to destruction, and how these phases inform management patterns.
You now have a comprehensive taxonomy of resource types used in production software systems. This knowledge allows you to recognize resource responsibilities whenever you encounter them in code and apply appropriate management patterns. Next, we'll explore the resource lifecycle in depth.