Understanding thread concepts and lifecycle is essential—but building production concurrent systems requires practical thread management skills. Creating threads naively leads to resource exhaustion, poor performance, and operational nightmares.
The Production Reality:
In real applications, you rarely create and manage individual threads directly. Instead, you use abstractions that handle the complexity: executors, thread pools, and higher-level frameworks built on top of them.
These patterns have emerged from decades of experience with the pitfalls of raw thread management.
By the end of this page, you will understand how to create threads safely, manage thread pools effectively, configure thread behavior, and apply production-proven patterns for thread management. You'll know when to use raw threads vs. executors vs. higher-level abstractions.
Before exploring higher-level abstractions, we must understand raw thread creation. This foundation helps you appreciate why abstractions exist and when to use (or avoid) direct thread management.
```java
public class ThreadCreationPatterns {

    // ============================================
    // Method 1: Extend Thread class
    // ============================================
    static class CountingThread extends Thread {
        private final int countTo;
        private int result;

        public CountingThread(String name, int countTo) {
            super(name); // Set thread name
            this.countTo = countTo;
        }

        @Override
        public void run() {
            for (int i = 0; i < countTo; i++) {
                result += i;
            }
            System.out.println(getName() + " completed: " + result);
        }

        public int getResult() { return result; }
    }

    // ============================================
    // Method 2: Implement Runnable interface (Preferred)
    // ============================================
    static class ProcessingTask implements Runnable {
        private final String data;

        public ProcessingTask(String data) {
            this.data = data;
        }

        @Override
        public void run() {
            // Access thread info directly
            String threadName = Thread.currentThread().getName();
            System.out.println("[" + threadName + "] Processing: " + data);

            // Simulate work
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // ============================================
    // Method 3: Lambda expressions (concise)
    // ============================================
    public static void lambdaExample() {
        Thread thread = new Thread(() -> {
            System.out.println("Lambda thread running");
        }, "LambdaThread");
        thread.start();
    }

    // ============================================
    // Creating threads with configuration
    // ============================================
    public static Thread createConfiguredThread(Runnable task) {
        Thread thread = new Thread(task);

        // Name for debugging and monitoring
        thread.setName("Worker-" + System.nanoTime());

        // Priority: 1 (MIN) to 10 (MAX), default 5 (NORM)
        thread.setPriority(Thread.NORM_PRIORITY);

        // Daemon status (must set BEFORE start())
        thread.setDaemon(false);

        // Uncaught exception handler
        thread.setUncaughtExceptionHandler((t, e) -> {
            System.err.println("Thread " + t.getName() + " failed: " + e);
            // Log, alert, or handle the failure
        });

        return thread;
    }

    public static void main(String[] args) throws InterruptedException {
        // Method 1: Subclass
        CountingThread counter = new CountingThread("Counter", 1000);
        counter.start();

        // Method 2: Runnable
        Thread processor = new Thread(new ProcessingTask("DataItem"));
        processor.setName("Processor");
        processor.start();

        // Method 3: Lambda
        lambdaExample();

        // Wait for all
        counter.join();
        processor.join();
    }
}
```

Creating a new thread for every task is expensive (1–10 ms per thread), consumes significant memory per thread (8 KB–1 MB of stack), and can exhaust system resources under load. If you spawn 10,000 threads for 10,000 requests, your system will likely crash or become unresponsive. Thread pools solve this by reusing threads.
A thread pool is a collection of pre-created threads that wait for tasks to execute. Instead of creating a new thread for each task, tasks are submitted to the pool, which assigns them to available threads. This pattern amortizes thread-creation cost, bounds resource usage, and gives the system a natural queueing point for applying backpressure:
```
┌─────────────────────────────────────────────────────────────┐
│                  THREAD POOL ARCHITECTURE                   │
└─────────────────────────────────────────────────────────────┘

  Application Code
        │
        ▼  submit(task)
┌─────────────────────────────────────────────────────────────┐
│                      EXECUTOR SERVICE                       │
│                                                             │
│             ┌────────────────────────────────┐              │
│             │         BLOCKING QUEUE         │              │
│             │  ┌──────┬──────┬──────┬─────┐  │              │
│             │  │Task 1│Task 2│Task 3│ ... │  │              │
│             │  └──────┴──────┴──────┴─────┘  │              │
│             └────────────────┬───────────────┘              │
│                              │                              │
│  ┌───────────────────────────┴────────────────────────────┐ │
│  │                      THREAD POOL                       │ │
│  │                                                        │ │
│  │  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐  │ │
│  │  │ Worker 1 │ │ Worker 2 │ │ Worker 3 │ │ Worker 4 │  │ │
│  │  │ (active) │ │ (active) │ │  (idle)  │ │  (idle)  │  │ │
│  │  │          │ │          │ │          │ │          │  │ │
│  │  │  Task 1  │ │  Task 2  │ │ waiting  │ │ waiting  │  │ │
│  │  └──────────┘ └──────────┘ └──────────┘ └──────────┘  │ │
│  │                                                        │ │
│  │  Core Pool Size: 2         Maximum Pool Size: 4        │ │
│  │  Active Threads: 2         Queued Tasks: 1             │ │
│  └────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```

LIFECYCLE:
1. Tasks submitted via submit() or execute()
2. Task added to blocking queue
3. Idle worker takes task from queue
4. Worker executes task
5. Worker returns to idle, takes next task (or waits)
6. On shutdown: complete queued tasks, then terminate workers
```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.List;
import java.util.ArrayList;

public class ThreadPoolPatterns {

    // ============================================
    // Fixed Thread Pool: For CPU-bound work
    // ============================================
    public static void fixedPoolExample() throws Exception {
        // Number of threads = number of CPU cores
        int numCores = Runtime.getRuntime().availableProcessors();
        ExecutorService executor = Executors.newFixedThreadPool(numCores);

        List<Future<Integer>> futures = new ArrayList<>();

        // Submit tasks
        for (int i = 0; i < 100; i++) {
            final int taskId = i;
            Future<Integer> future = executor.submit(() -> {
                // CPU-intensive work
                return heavyComputation(taskId);
            });
            futures.add(future);
        }

        // Collect results
        int total = 0;
        for (Future<Integer> future : futures) {
            total += future.get(); // Blocks until result available
        }

        // Graceful shutdown
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);

        System.out.println("Total result: " + total);
    }

    // ============================================
    // Cached Thread Pool: For I/O-bound work
    // ============================================
    public static void cachedPoolExample() {
        // Creates threads as needed, reuses idle threads
        // Idle threads terminated after 60 seconds
        ExecutorService executor = Executors.newCachedThreadPool();

        // Good for many short-lived async tasks
        for (int i = 0; i < 1000; i++) {
            final int taskId = i;
            executor.execute(() -> {
                try {
                    // I/O-bound work (network call, file read)
                    Thread.sleep(100); // Simulate I/O
                    System.out.println("Task " + taskId + " completed");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        executor.shutdown();
    }

    // ============================================
    // Custom Thread Pool: Fine-grained control
    // ============================================
    public static ThreadPoolExecutor createCustomPool() {
        return new ThreadPoolExecutor(
            4,                                        // corePoolSize
            16,                                       // maximumPoolSize
            60L, TimeUnit.SECONDS,                    // keepAliveTime for excess threads
            new LinkedBlockingQueue<>(1000),          // Work queue with capacity
            new CustomThreadFactory("MyPool"),        // Thread factory
            new ThreadPoolExecutor.CallerRunsPolicy() // Rejection policy
        );
    }

    // ============================================
    // Custom Thread Factory for naming and configuration
    // ============================================
    static class CustomThreadFactory implements ThreadFactory {
        private final String namePrefix;
        private final AtomicInteger threadNumber = new AtomicInteger(1);

        public CustomThreadFactory(String namePrefix) {
            this.namePrefix = namePrefix;
        }

        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r,
                namePrefix + "-thread-" + threadNumber.getAndIncrement());
            t.setDaemon(false);
            t.setPriority(Thread.NORM_PRIORITY);
            t.setUncaughtExceptionHandler((thread, ex) -> {
                System.err.println("Thread " + thread.getName() + " failed: " + ex);
            });
            return t;
        }
    }

    // ============================================
    // Scheduled Thread Pool: For periodic tasks
    // ============================================
    public static void scheduledPoolExample() {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // Run once after delay
        scheduler.schedule(() -> {
            System.out.println("Delayed task executed");
        }, 5, TimeUnit.SECONDS);

        // Run repeatedly at fixed rate
        scheduler.scheduleAtFixedRate(() -> {
            System.out.println("Periodic task: " + System.currentTimeMillis());
        }, 0, 1, TimeUnit.SECONDS); // Initial delay, period

        // Run repeatedly with fixed delay between completions
        scheduler.scheduleWithFixedDelay(() -> {
            System.out.println("Fixed delay task");
        }, 0, 500, TimeUnit.MILLISECONDS);
    }

    private static int heavyComputation(int input) {
        int result = 0;
        for (int i = 0; i < 1000000; i++) {
            result += i * input % 100;
        }
        return result;
    }
}
```

For CPU-bound work: use the number of CPU cores (or cores minus one to leave headroom). For I/O-bound work: use more threads (2x–10x cores) since threads spend most of their time waiting.
For mixed workloads: consider separate pools for CPU and I/O tasks. Monitor and tune based on actual behavior—initial sizing is just a starting point.
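That sizing guidance can be condensed into the classic heuristic: threads ≈ cores × (1 + wait time / compute time). A minimal sketch, where the `PoolSizing` class and its parameters are illustrative names, not a library API:

```java
public class PoolSizing {

    /**
     * Classic sizing heuristic: threads = cores * (1 + waitTime / computeTime).
     * For pure CPU work (waitTime ~ 0) this yields ~cores; for I/O-heavy work
     * (e.g. 90ms waiting per 10ms computing) it yields ~10x cores.
     */
    public static int suggestedPoolSize(int cores, double waitTime, double computeTime) {
        return (int) Math.max(1, cores * (1 + waitTime / computeTime));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("CPU-bound pool size: " + suggestedPoolSize(cores, 0, 10));
        System.out.println("I/O-bound pool size: " + suggestedPoolSize(cores, 90, 10));
    }
}
```

Treat the result as a starting point for load testing, not a final answer: the wait/compute ratio varies per workload and over time.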
When a thread pool's work queue is full and no threads are available to execute new tasks, what happens? The answer is the rejection policy—a crucial configuration that determines system behavior under load.
| Policy | Behavior | Use Case | Risk |
|---|---|---|---|
| AbortPolicy | Throws RejectedExecutionException | Fail-fast systems that must know about overload | Exceptions must be handled or application crashes |
| CallerRunsPolicy | Task runs in the submitting thread | Throttling: slows submission when pool is saturated | May block critical threads; unexpected thread context |
| DiscardPolicy | Silently drops the task | Non-critical work where loss is acceptable | Data loss with no notification; debugging difficulty |
| DiscardOldestPolicy | Drops oldest queued task, retries submission | Prefer fresh data over stale | May starve long-running important tasks |
| Custom Policy | Implement RejectedExecutionHandler | Complex handling: alternative queue, logging, metrics | Requires careful implementation |
```typescript
/**
 * Implementing rejection policies for a thread pool
 */

type Task = () => Promise<void>;

interface RejectionPolicy {
  name: string;
  handle(task: Task, queue: Task[], stats: PoolStats): void;
}

interface PoolStats {
  submittedTasks: number;
  completedTasks: number;
  rejectedTasks: number;
  queueSize: number;
  queueCapacity: number;
}

// ============================================
// Policy 1: Abort - Throw exception
// ============================================
const abortPolicy: RejectionPolicy = {
  name: 'AbortPolicy',
  handle(task, queue, stats) {
    stats.rejectedTasks++;
    throw new Error(
      'Task rejected: pool saturated ' +
      `(queue: ${stats.queueSize}/${stats.queueCapacity})`
    );
  }
};

// ============================================
// Policy 2: Caller Runs - Execute in calling thread
// ============================================
const callerRunsPolicy: RejectionPolicy = {
  name: 'CallerRunsPolicy',
  handle(task, queue, stats) {
    console.warn(
      '[CallerRunsPolicy] Pool saturated, running task in caller thread'
    );
    // Running here instead of queueing puts backpressure on the caller
    task().catch(err => {
      console.error('Task failed in caller thread:', err);
    });
  }
};

// ============================================
// Policy 3: Discard - Silently drop
// ============================================
const discardPolicy: RejectionPolicy = {
  name: 'DiscardPolicy',
  handle(task, queue, stats) {
    stats.rejectedTasks++;
    // Task is simply not executed - be careful with this!
    console.debug('[DiscardPolicy] Task discarded');
  }
};

// ============================================
// Policy 4: Discard Oldest - Drop oldest, retry
// ============================================
const discardOldestPolicy: RejectionPolicy = {
  name: 'DiscardOldestPolicy',
  handle(task, queue, stats) {
    if (queue.length > 0) {
      queue.shift(); // Remove oldest
      stats.rejectedTasks++;
      console.debug('[DiscardOldestPolicy] Discarded oldest task');
      queue.push(task); // Add new task
    }
  }
};

// ============================================
// Policy 5: Custom - Persist to backup queue
// ============================================
class PersistToBackupPolicy implements RejectionPolicy {
  name = 'PersistToBackupPolicy';
  private backupQueue: Task[] = [];

  handle(task: Task, queue: Task[], stats: PoolStats): void {
    // Save to backup storage for later retry
    this.backupQueue.push(task);
    console.warn(
      `[PersistToBackupPolicy] Task saved to backup queue ` +
      `(backup size: ${this.backupQueue.length})`
    );
    // Could also: write to disk, send to message queue, etc.
  }

  drainBackupQueue(): Task[] {
    const tasks = [...this.backupQueue];
    this.backupQueue = [];
    return tasks;
  }
}

// ============================================
// Choosing the right policy
// ============================================
/*
Decision flowchart:

Is task loss acceptable?
├── YES: DiscardPolicy or DiscardOldestPolicy
│        └── Prefer fresh over stale? DiscardOldestPolicy
│            └── Otherwise: DiscardPolicy
└── NO: Must process every task
        └── Can caller block waiting?
            ├── YES: CallerRunsPolicy (provides backpressure)
            └── NO:
                └── Is there a backup system?
                    ├── YES: Custom policy (persist to queue/disk)
                    └── NO: AbortPolicy (fail fast, alert operators)

Best practice: Always monitor rejection counts and alert on thresholds!
*/
```

DiscardPolicy can hide serious problems. If your pool silently drops 50% of tasks, you may not notice until users complain about missing data.
Always combine discard policies with monitoring and alerting on rejection counts.
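One way to follow that advice in Java is a custom `RejectedExecutionHandler` that still discards, but makes each loss observable. A sketch, where the class name is illustrative and the `System.err` call stands in for a real metrics counter or alert:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

/** A discard policy that counts rejections so monitoring can alert on them. */
public class MonitoredDiscardPolicy implements RejectedExecutionHandler {
    private final AtomicLong rejectedCount = new AtomicLong();

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        long total = rejectedCount.incrementAndGet();
        // The task is still dropped, but the loss is now visible
        System.err.println("Discarded task; total rejections: " + total);
        // In production: increment a metrics counter, alert above a threshold
    }

    public long getRejectedCount() { return rejectedCount.get(); }

    public static void main(String[] args) throws InterruptedException {
        MonitoredDiscardPolicy policy = new MonitoredDiscardPolicy();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1), policy);

        // One running task + one queued task saturate this tiny pool;
        // the remaining submissions are handed to the policy
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("Rejected: " + policy.getRejectedCount());
    }
}
```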
Thread pools themselves have a lifecycle that must be managed. Improper shutdown can lead to leaked threads, lost work, or applications that refuse to exit.
| State | New Tasks | Queued Tasks | Active Tasks |
|---|---|---|---|
| RUNNING | Accepted | Processed | Executing |
| SHUTDOWN | Rejected | Processed | Executing |
| STOP | Rejected | Discarded | Interrupted |
| TIDYING | Rejected | Empty | None |
| TERMINATED | Rejected | Empty | None |
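These states can be observed through `ExecutorService`'s status methods. A small sketch showing the RUNNING to SHUTDOWN to TERMINATED transitions from the table, assuming only a short-lived task is in flight:

```java
import java.util.concurrent.*;

/** Observes an executor's lifecycle flags across shutdown. */
public class PoolStateDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.submit(() -> { /* short-lived task */ });

        // RUNNING: accepts new tasks
        System.out.println("isShutdown=" + pool.isShutdown());   // false

        pool.shutdown(); // SHUTDOWN: rejects new tasks, finishes queued ones
        System.out.println("isShutdown=" + pool.isShutdown());   // true

        try {
            pool.submit(() -> {}); // rejected in SHUTDOWN state
        } catch (RejectedExecutionException e) {
            System.out.println("New task rejected after shutdown()");
        }

        pool.awaitTermination(5, TimeUnit.SECONDS);
        // TERMINATED: queue empty, all workers gone
        System.out.println("isTerminated=" + pool.isTerminated()); // true
    }
}
```

Note there is no public API to distinguish STOP, TIDYING, and TERMINATED individually; `isTerminated()` only reports the final state.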
```java
import java.util.concurrent.*;
import java.util.List;

public class PoolLifecycleManagement {

    /**
     * Pattern 1: Graceful shutdown with timeout
     */
    public static void gracefulShutdown(ExecutorService executor) {
        executor.shutdown(); // No new tasks accepted
        try {
            // Wait for existing tasks to complete
            if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
                // Tasks didn't complete in time - force shutdown
                List<Runnable> cancelled = executor.shutdownNow();
                System.out.println("Cancelled " + cancelled.size() + " tasks");

                // Wait for forced shutdown to complete
                if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
                    System.err.println("Pool did not terminate");
                }
            }
        } catch (InterruptedException e) {
            // Current thread was interrupted while waiting
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }

    /**
     * Pattern 2: Two-phase shutdown with task preservation
     */
    public static void twoPhaseShutdown(
            ThreadPoolExecutor executor,
            long waitSeconds
    ) throws InterruptedException {
        // Phase 1: Stop accepting new tasks, let current work complete
        executor.shutdown();
        System.out.println("Phase 1: Shutdown initiated, waiting for completion...");

        if (executor.awaitTermination(waitSeconds, TimeUnit.SECONDS)) {
            System.out.println("All tasks completed gracefully");
            return;
        }

        // Phase 2: Cancel running tasks, drain queue
        System.out.println("Phase 2: Forcing shutdown...");
        List<Runnable> pendingTasks = executor.shutdownNow();

        // Preserve pending tasks (e.g., for retry on restart)
        savePendingTasks(pendingTasks);

        if (!executor.awaitTermination(waitSeconds, TimeUnit.SECONDS)) {
            System.err.println("WARNING: Some threads didn't terminate");
        }
    }

    /**
     * Pattern 3: Shutdown hook for application exit
     */
    public static void registerShutdownHook(ExecutorService executor) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("Shutdown hook: Stopping executor...");
            executor.shutdown();
            try {
                if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
                    executor.shutdownNow();
                }
            } catch (InterruptedException e) {
                executor.shutdownNow();
            }
            System.out.println("Shutdown hook: Executor stopped");
        }, "ExecutorShutdownHook"));
    }

    /**
     * Pattern 4: AutoCloseable wrapper for try-with-resources
     */
    public static class ManagedExecutor implements AutoCloseable {
        private final ExecutorService executor;
        private final long shutdownTimeoutSeconds;

        public ManagedExecutor(ExecutorService executor, long timeoutSeconds) {
            this.executor = executor;
            this.shutdownTimeoutSeconds = timeoutSeconds;
        }

        public ExecutorService get() {
            return executor;
        }

        @Override
        public void close() {
            executor.shutdown();
            try {
                if (!executor.awaitTermination(shutdownTimeoutSeconds, TimeUnit.SECONDS)) {
                    executor.shutdownNow();
                }
            } catch (InterruptedException e) {
                executor.shutdownNow();
                Thread.currentThread().interrupt();
            }
        }
    }

    // Usage with try-with-resources
    public static void managedExecutorExample() throws Exception {
        try (ManagedExecutor managed = new ManagedExecutor(
                Executors.newFixedThreadPool(4), 30
        )) {
            ExecutorService executor = managed.get();
            Future<String> result = executor.submit(() -> {
                return "Work complete";
            });
            System.out.println(result.get());
        } // Executor is automatically shut down when exiting the try block
    }

    private static void savePendingTasks(List<Runnable> tasks) {
        // In production: serialize to disk, send to message queue, etc.
        System.out.println("Saved " + tasks.size() + " pending tasks for retry");
    }
}
```

An ExecutorService with non-daemon threads will prevent your application from exiting even if main() returns. Always register shutdown hooks or use try-with-resources patterns to ensure pools are properly terminated.
Traditional OS threads are heavyweight—limited to thousands in typical systems. Modern runtimes are introducing virtual threads (or fibers, green threads, goroutines) that enable millions of concurrent threads with dramatically lower overhead.
The Core Idea:
Virtual threads are managed by the language runtime, not the OS. Many virtual threads are multiplexed onto a smaller pool of OS threads (carrier threads). When a virtual thread blocks, it's unscheduled from its carrier, allowing the carrier to run another virtual thread—no OS thread is wasted waiting.
```java
import java.util.concurrent.*;
import java.time.Duration;

/**
 * Java 21+ Virtual Threads (Project Loom)
 *
 * Virtual threads are lightweight, managed by the JVM, and can be created
 * by the millions. They're ideal for I/O-bound workloads.
 */
public class VirtualThreadsDemo {

    /**
     * Basic virtual thread creation
     */
    public static void basicExample() throws InterruptedException {
        // Create a virtual thread
        Thread virtualThread = Thread.ofVirtual()
            .name("my-virtual-thread")
            .unstarted(() -> {
                System.out.println("Running on: " + Thread.currentThread());
            });

        virtualThread.start();
        virtualThread.join();
    }

    /**
     * Virtual thread per task - the new scalable pattern
     */
    public static void virtualThreadPerTask() throws Exception {
        // Creates a new virtual thread for each submitted task
        // Can handle millions of concurrent tasks
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            // Submit many concurrent tasks - this is now efficient!
            for (int i = 0; i < 100_000; i++) {
                final int taskId = i;
                executor.submit(() -> {
                    // I/O-bound work with blocking calls
                    performBlockingOperation(taskId);
                });
            }
        } // Automatically shuts down
    }

    /**
     * Compare virtual vs platform threads
     */
    public static void compareScalability() throws Exception {
        int taskCount = 10_000;

        // Platform threads: Would crash or be very slow
        // Each thread = ~1MB of stack memory
        // 10,000 threads = ~10GB memory!
        /*
        try (ExecutorService platform = Executors.newFixedThreadPool(10_000)) {
            // This would likely fail with OutOfMemoryError
        }
        */

        // Virtual threads: No problem!
        // Each virtual thread ~ 1KB or less
        // 10,000 virtual threads ~ 10MB
        long start = System.currentTimeMillis();

        try (ExecutorService virtual = Executors.newVirtualThreadPerTaskExecutor()) {
            CountDownLatch latch = new CountDownLatch(taskCount);

            for (int i = 0; i < taskCount; i++) {
                virtual.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofSeconds(1)); // Simulate I/O wait
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        latch.countDown();
                    }
                });
            }

            latch.await();
        }

        long elapsed = System.currentTimeMillis() - start;
        System.out.println(taskCount + " tasks completed in " + elapsed + "ms");
        // With virtual threads: ~1 second (parallel)
        // With limited thread pool: taskCount/poolSize * 1 second (sequential)
    }

    /**
     * Structured concurrency (preview in Java 21)
     */
    public static void structuredConcurrencyExample() throws Exception {
        // Structured concurrency ensures child tasks complete before parent
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            // Fork multiple concurrent tasks
            StructuredTaskScope.Subtask<String> task1 = scope.fork(() -> {
                Thread.sleep(100);
                return fetchFromServiceA();
            });

            StructuredTaskScope.Subtask<String> task2 = scope.fork(() -> {
                Thread.sleep(100);
                return fetchFromServiceB();
            });

            // Wait for all tasks
            scope.join();
            scope.throwIfFailed();

            // Get results
            String result = task1.get() + " | " + task2.get();
            System.out.println("Combined: " + result);
        } // All threads are guaranteed complete when exiting scope
    }

    private static void performBlockingOperation(int id) {
        try {
            Thread.sleep(100); // Simulate blocking I/O
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private static String fetchFromServiceA() { return "A"; }
    private static String fetchFromServiceB() { return "B"; }
}
```

| Aspect | Platform (OS) Threads | Virtual Threads |
|---|---|---|
| Managed by | Operating system | JVM / Runtime |
| Memory per thread | ~1 MB (stack) | ~1 KB (growable) |
| Maximum practical count | ~10,000 | ~10,000,000+ |
| Creation time | ~1 ms | ~1 µs |
| Context switch cost | ~1-10 µs (kernel) | ~100 ns (user space) |
| Blocking behavior | Wastes OS thread | Releases carrier thread |
| Best for | CPU-bound work, parallelism | I/O-bound work, concurrency |
| Pooling needed? | Yes (essential) | No (thread-per-task is fine) |
Virtual threads excel at I/O-bound workloads with many concurrent connections (web servers, database clients, API calls). For CPU-bound work, platform threads with a sized pool are still preferable—there's no benefit to having more runnable threads than CPU cores for pure computation.
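One way to apply this guidance in a mixed workload (requires Java 21+) is to keep two executors side by side: a virtual-thread-per-task executor for blocking I/O, and a core-sized platform pool for computation. A sketch, where the class name and task bodies are illustrative:

```java
import java.util.concurrent.*;

/** Routes I/O-bound tasks to virtual threads, CPU-bound tasks to a sized pool. */
public class MixedWorkload {
    // One virtual thread per blocking I/O task; no pooling needed
    static final ExecutorService ioExecutor =
        Executors.newVirtualThreadPerTaskExecutor();

    // Platform threads capped at core count for pure computation
    static final ExecutorService cpuExecutor =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public static void main(String[] args) throws Exception {
        Future<String> io = ioExecutor.submit(() -> {
            Thread.sleep(50); // blocking call parks the virtual thread
            return "fetched";
        });

        Future<Long> cpu = cpuExecutor.submit(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
            return sum;
        });

        System.out.println(io.get() + " / " + cpu.get());
        ioExecutor.shutdown();
        cpuExecutor.shutdown();
    }
}
```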
After years of experience with concurrent systems, the industry has developed clear best practices for thread management. Following these guidelines prevents common pitfalls and makes systems more maintainable.
```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Production-ready thread pool configuration
 */
public class ProductionThreadPool {

    public static ThreadPoolExecutor createProductionPool(
            String poolName,
            int coreSize,
            int maxSize,
            int queueCapacity
    ) {
        // Custom thread factory with naming and exception handling
        ThreadFactory threadFactory = new ThreadFactory() {
            private final AtomicInteger counter = new AtomicInteger(1);

            @Override
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r);
                t.setName(poolName + "-worker-" + counter.getAndIncrement());
                t.setDaemon(false);
                t.setUncaughtExceptionHandler((thread, ex) -> {
                    // *** CRITICAL: Log uncaught exceptions ***
                    System.err.println("UNCAUGHT in " + thread.getName() + ": " + ex);
                    ex.printStackTrace();
                    // In production: send to logging/alerting system
                });
                return t;
            }
        };

        // Bounded queue for backpressure
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>(queueCapacity);

        // Rejection policy that logs and tracks metrics
        RejectedExecutionHandler rejectionHandler = (task, executor) -> {
            // *** CRITICAL: Never silently discard! ***
            System.err.println(
                "REJECTED: " + poolName +
                " (queue: " + executor.getQueue().size() +
                ", active: " + executor.getActiveCount() + ")"
            );
            // In production: increment rejection metric, alert if threshold exceeded
            throw new RejectedExecutionException(
                "Task rejected from pool: " + poolName
            );
        };

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            coreSize,
            maxSize,
            60L, TimeUnit.SECONDS,
            queue,
            threadFactory,
            rejectionHandler
        );

        // Allow core threads to be reclaimed if idle (optional)
        pool.allowCoreThreadTimeOut(true);

        // *** CRITICAL: Register shutdown hook ***
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("Shutting down " + poolName + "...");
            pool.shutdown();
            try {
                if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
                    pool.shutdownNow();
                }
            } catch (InterruptedException e) {
                pool.shutdownNow();
            }
        }, poolName + "-shutdown"));

        return pool;
    }

    /**
     * Monitor pool health - call periodically
     */
    public static void logPoolStats(ThreadPoolExecutor pool, String poolName) {
        System.out.printf(
            "[%s] Active: %d, Pool: %d, Queue: %d, Completed: %d%n",
            poolName,
            pool.getActiveCount(),
            pool.getPoolSize(),
            pool.getQueue().size(),
            pool.getCompletedTaskCount()
        );

        // Alert thresholds (utilization = queued / total capacity)
        int queued = pool.getQueue().size();
        double queueUtilization =
            (double) queued / (queued + pool.getQueue().remainingCapacity());
        if (queueUtilization > 0.8) {
            System.err.println("WARNING: " + poolName + " queue >80% full!");
        }
    }
}
```

When a thread's run() method throws an unchecked exception and there's no exception handler, the thread dies silently. In a thread pool, this is especially dangerous—tasks that fail don't get retried, and you may not notice until users report missing data. Always set exception handlers and log failures.
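One subtlety worth knowing: the uncaught exception handler only fires for tasks passed to execute(). Tasks passed to submit() are wrapped in a FutureTask, which captures the exception and surfaces it only when you call get(). A minimal demonstration:

```java
import java.util.concurrent.*;

/** Demonstrates how submit() hides task exceptions inside the Future. */
public class SwallowedExceptions {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);

        // With submit(), the exception is captured in the Future;
        // nothing is printed unless you call get()
        Runnable failing = () -> { throw new IllegalStateException("boom"); };
        Future<?> future = pool.submit(failing);

        try {
            future.get(); // the only place the failure surfaces
        } catch (ExecutionException e) {
            System.out.println("Caught via Future: " + e.getCause().getMessage());
        }

        pool.shutdown();
    }
}
```

With execute() instead of submit(), the same exception would propagate out of run() and reach the thread's UncaughtExceptionHandler. If you use submit(), make sure some code eventually calls get() on every Future, or failures vanish.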
We've covered the complete journey from raw thread creation to production-ready thread management. Let's consolidate the essential knowledge:
Module Complete:
You now have a comprehensive understanding of threads and processes—from the foundational concepts of process isolation and thread architecture, through the lifecycle of threads, to practical management techniques for production systems. This knowledge is the foundation for the synchronization and coordination patterns we'll explore in subsequent modules.
You've mastered threads and processes—the fundamental building blocks of concurrent programming. You understand process isolation, thread memory sharing, lifecycle management, and production-grade thread pool patterns. Next, you'll learn how to make these concurrent units work together safely through synchronization primitives and thread safety techniques.