When threads execute within the same process, they share a remarkable amount of state. This sharing is the source of threading's power—and its peril. Understanding exactly what is shared, why it's shared, and how to manage that sharing safely is fundamental to writing correct concurrent programs.
The resources threads share fall into several categories:

- The code segment (the program's instructions)
- The data segment (global and static variables)
- The heap (dynamically allocated memory)
- The file descriptor table
- Signal dispositions
- Process-wide attributes (working directory, environment, user IDs, resource limits)
Each shared resource introduces specific considerations for concurrent access. A misunderstanding of any one can lead to subtle, difficult-to-debug problems.
By the end of this page, you will understand every category of shared resource in detail: why it's shared, what problems can arise from concurrent access, and what strategies exist for safe sharing. You'll gain the knowledge needed to reason about thread safety for any shared resource.
The code segment (also called the text section) contains the compiled machine instructions of the program. This is the actual executable code—the functions, loops, and logic that threads execute.
Why It's Shared:
There's only one copy of the program code in memory, and all threads read from it. This is both efficient and logical: a single copy saves memory, keeps the CPU's instruction cache effective, and guarantees that every thread executes exactly the same program.
Is It Safe?
The code segment is read-only (marked as executable but not writable in the memory map). Threads cannot modify it at runtime. This means there are no data races on the instructions themselves: any number of threads can safely execute the same function at the same time.
```c
#include <pthread.h>
#include <stdio.h>

/* This function exists once in the code segment */
void shared_function(int thread_id, int value) {
    /* The function's code is shared, but: */
    /* - 'thread_id' and 'value' are on each thread's stack (private) */
    /* - The CPU registers holding them are per-thread (private) */
    int local_result = value * 2;  /* Stack variable - private to this call */
    printf("Thread %d computed: %d\n", thread_id, local_result);
    /* printf's code is also shared, but each call is independent */
}

void *worker(void *arg) {
    int id = *(int*)arg;
    /* Multiple threads call the same function simultaneously */
    /* Each has its own stack frame, so they don't interfere */
    for (int i = 0; i < 3; i++) {
        shared_function(id, i + id * 100);
    }
    return NULL;
}

int main() {
    pthread_t threads[3];
    int ids[] = {1, 2, 3};
    for (int i = 0; i < 3; i++) {
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 3; i++) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}
```

Because code is read-only and each thread has its own stack, multiple threads can execute the same function simultaneously without interference—as long as the function only uses local variables and parameters. Functions that rely solely on local state are called 'reentrant' and are inherently thread-safe.
Position-Independent Code (PIC):
Shared libraries use position-independent code, allowing the same library code to be mapped at different virtual addresses in different processes while being physically shared in RAM. Threads within the same process see the library at the same address, further reinforcing the shared nature of code.
The data segment contains global and static variables. This includes: initialized globals and statics (the .data section), zero-initialized globals and statics (the .bss section), and static variables declared inside function bodies.
Unlike the code segment, the data segment is read-write, and all threads can both read and modify it.
Why It's Shared:
Global and static variables are meant to have a single instance visible throughout the program. In a single-threaded world, this is straightforward. With threads, it becomes a source of complexity.
The Danger:
Concurrent access to shared mutable data without synchronization causes data races—one of the most common and pernicious bugs in concurrent programming.
```c
#include <pthread.h>
#include <stdio.h>

/* Global variable - shared between all threads */
int global_counter = 0;  /* Stored in .bss (zero-initialized) */

/* Static variable - also shared! */
static int static_counter = 100;  /* Stored in .data */

void *unsafe_increment(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        /* DATA RACE: Two threads may read the same value, */
        /* compute the same result, and write it back, */
        /* losing one increment. */
        global_counter++;
        /* This looks like one operation but is actually: */
        /* 1. Read global_counter from memory into register */
        /* 2. Add 1 to register */
        /* 3. Write register back to global_counter */
        /* A context switch between any steps causes lost updates */
    }
    return NULL;
}

int main() {
    pthread_t t1, t2;
    printf("Initial counter: %d\n", global_counter);
    pthread_create(&t1, NULL, unsafe_increment, NULL);
    pthread_create(&t2, NULL, unsafe_increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("Final counter: %d\n", global_counter);
    printf("Expected: 2000000\n");
    printf("Difference (lost updates): %d\n", 2000000 - global_counter);
    return 0;
}
/* Typical output:
 * Initial counter: 0
 * Final counter: 1234567 (some value less than 2000000)
 * Expected: 2000000
 * Difference (lost updates): 765433
 */
```

Key takeaway: counter++ is not atomic and can lose updates.

A particular trap: static variables declared inside functions. These persist across calls and are shared by all threads. The classic example is strtok(), which uses internal static state and is notoriously thread-unsafe. The thread-safe alternative is strtok_r(), which takes an explicit state pointer.
```c
#include <time.h>

/* Classic thread-unsafe pattern: static buffer in function */
char *get_timestamp(void) {
    static char buffer[64];  /* SHARED between all callers! */
    time_t now = time(NULL);
    strftime(buffer, sizeof(buffer), "%Y-%m-%d %H:%M:%S", localtime(&now));
    return buffer;
    /* If two threads call this simultaneously, they overwrite */
    /* each other's buffer contents! */
}

/* Thread-safe alternative: caller provides buffer */
void get_timestamp_safe(char *buffer, size_t size) {
    time_t now = time(NULL);
    struct tm tm_buf;
    localtime_r(&now, &tm_buf);  /* Thread-safe localtime */
    strftime(buffer, size, "%Y-%m-%d %H:%M:%S", &tm_buf);
}
```

The heap is the region of memory used for dynamic allocation (malloc, new, etc.). All threads share the same heap, meaning memory allocated by one thread is accessible by all threads (if they have the pointer).
Key Characteristics:

- There is a single heap per process; a pointer returned by malloc() in one thread is valid in every thread.
- Memory can be allocated in one thread and freed in another.
- Nothing prevents two threads from touching the same allocation at once; the data inside heap objects still needs synchronization.
Heap Allocator Thread Safety:
Modern malloc implementations (glibc, jemalloc, tcmalloc) are thread-safe. They use internal locking and/or per-thread arenas to prevent corruption from concurrent allocations. However, this has costs:
| Allocator | Strategy | Characteristics |
|---|---|---|
| glibc malloc | Arenas with locks | Good general-purpose performance; arenas reduce contention |
| jemalloc (FreeBSD, Rust) | Thread-local caches + arenas | Excellent multi-threaded performance; used by Firefox, Redis |
| tcmalloc (Google) | Thread-caching | Very fast for allocation-heavy workloads |
| mimalloc (Microsoft) | Free-list sharding | Excellent performance with minimal memory overhead |
```c
#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

/* Producer allocates on heap and passes pointer to consumer */
typedef struct {
    int id;
    char message[256];
} Task;

Task *task_queue[100];
int queue_head = 0, queue_tail = 0;
pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t queue_not_empty = PTHREAD_COND_INITIALIZER;

void *producer(void *arg) {
    for (int i = 0; i < 10; i++) {
        /* Allocate on heap - this memory is accessible to all threads */
        Task *task = malloc(sizeof(Task));
        task->id = i;
        snprintf(task->message, sizeof(task->message),
                 "Task %d from producer", i);

        pthread_mutex_lock(&queue_lock);
        task_queue[queue_tail++] = task;  /* Hand pointer to consumer */
        pthread_cond_signal(&queue_not_empty);
        pthread_mutex_unlock(&queue_lock);
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        pthread_mutex_lock(&queue_lock);
        while (queue_head == queue_tail) {
            pthread_cond_wait(&queue_not_empty, &queue_lock);
        }
        Task *task = task_queue[queue_head++];
        pthread_mutex_unlock(&queue_lock);

        /* Use the heap-allocated task from producer */
        printf("Consumer received: %s\n", task->message);

        /* Free memory allocated by another thread - perfectly valid */
        free(task);
    }
    return NULL;
}
```

A clean pattern for heap sharing: define clear ownership transfer semantics. The producer 'owns' the memory until it's placed in the queue. The consumer then 'owns' it and is responsible for freeing. At any point, exactly one thread 'owns' each object. This eliminates double-free and use-after-free by design.
All threads in a process share the file descriptor table. This means:

- A file, pipe, or socket opened by any thread is immediately usable by every other thread via the same descriptor number.
- If one thread closes a descriptor, it is closed for the entire process.
- Threads using the same descriptor share its state, including the file offset.
The Shared File Offset Problem:
When multiple threads read from or write to the same file descriptor, they share the file offset. This leads to unpredictable behavior:
```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>

int shared_fd;  /* File descriptor shared by all threads */

void *writer(void *arg) {
    int id = *(int*)arg;
    char buffer[100];
    for (int i = 0; i < 5; i++) {
        snprintf(buffer, sizeof(buffer), "Thread %d: Message %d\n", id, i);
        /* PROBLEM: These operations are NOT atomic together */
        /* Thread could be preempted between lseek and write */
        /* or between two write() calls */
        write(shared_fd, buffer, strlen(buffer));
        /* After each write, file offset advances */
        /* But another thread might write in between! */
    }
    return NULL;
}
/* Result: Garbled, interleaved output in the file */

/* Solution 1: Use mutex to serialize access */
pthread_mutex_t fd_lock = PTHREAD_MUTEX_INITIALIZER;

void *safe_writer(void *arg) {
    int id = *(int*)arg;
    char buffer[100];
    for (int i = 0; i < 5; i++) {
        snprintf(buffer, sizeof(buffer), "Thread %d: Message %d\n", id, i);
        pthread_mutex_lock(&fd_lock);
        write(shared_fd, buffer, strlen(buffer));
        pthread_mutex_unlock(&fd_lock);
    }
    return NULL;
}

/* Solution 2: Use pwrite() for atomic positioned write */
void *atomic_writer(void *arg) {
    int id = *(int*)arg;
    static _Atomic off_t global_offset = 0;  /* Atomic offset counter */
    char buffer[100];
    for (int i = 0; i < 5; i++) {
        int len = snprintf(buffer, sizeof(buffer),
                           "Thread %d: Message %d\n", id, i);
        off_t my_offset = atomic_fetch_add(&global_offset, len);
        pwrite(shared_fd, buffer, len, my_offset);  /* Atomic at offset */
    }
    return NULL;
}
```

| Resource Type | Sharing Behavior | Thread Safety Concern |
|---|---|---|
| Regular files | Shared offset, shared content | Interleaved reads/writes, race on offset |
| Pipes | Shared read/write ends | Interleaved data, partial reads |
| Sockets | Shared connection state | Interleaved sends, protocol corruption |
| stdin/stdout/stderr | fd 0, 1, 2 shared | Interleaved console output |
| Directory handles | Shared position in readdir | Missing/duplicate entries in listing |
Options for safe concurrent I/O:

1. Serialize access with mutexes.
2. Use pread()/pwrite() for positioned I/O that doesn't affect the shared offset.
3. Have each thread open its own file descriptor (separate offset per thread).
4. Design so only one thread handles each file descriptor.
Close-While-Using Hazard:
Another danger: if one thread closes a file descriptor while another is using it, the behavior is undefined. Worse, the kernel may reuse the file descriptor number for a new file, so the thread that didn't know about the close might now be reading/writing an entirely different file:
```
Thread A: read(fd, buffer, size);      // Starts read on file X
Thread B: close(fd);                   // Closes file X
Thread B: open("other", O_RDONLY);     // Kernel reuses 'fd' for file Y
Thread A:                              // Completes read from file Y! Silently wrong.
```
Solution: Coordinate the lifecycle of file descriptors. Use reference counting or clear ownership conventions.
Signal handling in multi-threaded programs is notoriously complex. The key facts:
- Signal dispositions are process-wide: when any thread calls signal() or sigaction(), you're setting the handler for all threads.
- Each thread has its own signal mask (set with pthread_sigmask()), so different threads can block different signals.
- A process-directed signal (e.g., from kill) is delivered to one arbitrarily chosen thread that doesn't have it blocked; thread-directed signals (sent with pthread_kill) go to a specific thread.

The Threading Challenges:

- A handler may run on whichever thread received the signal, and it may only call async-signal-safe functions (no printf, malloc, or pthread_mutex_lock).
- A signal can interrupt a blocking system call (such as read), causing it to return EINTR. All threads must handle this.
```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t got_signal = 0;

void signal_handler(int sig) {
    /* Only async-signal-safe operations here! */
    /* Can't use printf, malloc, pthread_mutex_lock, etc. */
    got_signal = 1;  /* sig_atomic_t write is safe */
}

/* RECOMMENDED PATTERN: Dedicated signal-handling thread */

sigset_t signal_mask;
pthread_t signal_thread;

void *signal_handler_thread(void *arg) {
    int sig;
    while (1) {
        /* sigwait() blocks until a signal from the set is pending */
        sigwait(&signal_mask, &sig);
        /* Now we're in a normal thread context - can use any function */
        printf("Received signal %d\n", sig);
        if (sig == SIGINT) {
            printf("Initiating graceful shutdown...\n");
            /* Can safely coordinate with other threads here */
            break;
        }
    }
    return NULL;
}

int main() {
    /* Step 1: Block signals in main thread BEFORE creating other threads */
    sigemptyset(&signal_mask);
    sigaddset(&signal_mask, SIGINT);
    sigaddset(&signal_mask, SIGTERM);
    pthread_sigmask(SIG_BLOCK, &signal_mask, NULL);

    /* Step 2: Create a dedicated thread to handle signals */
    pthread_create(&signal_thread, NULL, signal_handler_thread, NULL);

    /* Step 3: Create worker threads - they inherit blocked signals */
    /* Workers don't need to worry about signal handling at all */

    pthread_join(signal_thread, NULL);
    return 0;
}
```

The clean solution: block all signals in main() before creating threads. Then create a dedicated signal-handling thread that uses sigwait() to synchronously receive signals. This thread is in a normal context and can safely use any functions. Other threads never receive asynchronous signals.
Several additional process-level attributes are shared among all threads:
Current Working Directory:
- The process has a single current working directory. If any thread calls chdir("/tmp"), all threads now see /tmp as their working directory.

Environment Variables:

- All threads share one environment: getenv() and setenv() operate on the same environment.
- setenv() is not guaranteed to be thread-safe on all systems.

User and Group IDs:

- Credentials are process-wide: setuid() and similar calls affect all threads.

Resource Limits:

- Limits set with setrlimit() (file size, CPU time, stack size, etc.) are process-wide.
```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <limits.h>

void *worker1(void *arg) {
    /* Print current working directory */
    char cwd[PATH_MAX];
    getcwd(cwd, sizeof(cwd));
    printf("Worker 1: CWD = %s\n", cwd);

    /* Change directory */
    chdir("/tmp");
    printf("Worker 1: Changed CWD to /tmp\n");
    return NULL;
}

void *worker2(void *arg) {
    sleep(1);  /* Wait for worker1 to change directory */

    /* This thread sees the new working directory! */
    char cwd[PATH_MAX];
    getcwd(cwd, sizeof(cwd));
    printf("Worker 2: CWD = %s\n", cwd);  /* Will print /tmp */

    /* Similarly, environment changes are visible */
    printf("Worker 2: MY_VAR = %s\n", getenv("MY_VAR") ?: "(null)");
    return NULL;
}

int main() {
    /* Set an environment variable */
    setenv("MY_VAR", "shared_value", 1);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

If any thread calls chdir(), all threads are affected. This is a common source of bugs in multi-threaded file processing. The solution: use absolute paths exclusively, or use the *at() system calls (openat, faccessat, etc.) that work relative to a directory file descriptor rather than the current directory.
| Attribute | Shared? | Thread-Safe Modification? | Notes |
|---|---|---|---|
| Current working directory | Yes | No (use *at() functions) | chdir() affects all threads |
| Environment variables | Yes | Varies by platform | Prefer not to modify after startup |
| UID/GID | Yes | N/A (rarely changed) | Security requirement |
| Resource limits | Yes | Thread-safe (usually) | Affects all threads' usage |
| umask | Yes | Not thread-safe | Affects file creation permissions |
| Session/process group | Yes | N/A (set once) | Rarely modified in multi-threaded code |
Given the extent of resource sharing, the natural question is: how do we safely access shared resources? The answer is synchronization—ensuring that concurrent accesses don't conflict.
When Is Synchronization Needed?

Synchronization is needed when all three of the following conditions hold:

1. Two or more threads access the same memory location.
2. At least one of those accesses is a write.
3. The accesses are not already ordered by some synchronization mechanism.

If all three conditions hold, you have a potential data race and must synchronize.
```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Example 1: Mutex for complex data structure */
struct shared_data {
    int count;
    char buffer[1024];
    pthread_mutex_t lock;
};

void update_data(struct shared_data *data, const char *msg) {
    pthread_mutex_lock(&data->lock);
    /* Critical section - only one thread here at a time */
    data->count++;
    snprintf(data->buffer, sizeof(data->buffer), "%s (count=%d)",
             msg, data->count);
    pthread_mutex_unlock(&data->lock);
}

/* Example 2: Atomic for simple counter */
atomic_int atomic_counter = 0;

void increment_atomic(void) {
    /* No lock needed - hardware guarantees atomicity */
    atomic_fetch_add(&atomic_counter, 1);
}

/* Example 3: Read-write lock for cache */
struct cache {
    int data[1000];
    pthread_rwlock_t rwlock;
};

int read_cache(struct cache *c, int index) {
    pthread_rwlock_rdlock(&c->rwlock);  /* Multiple readers OK */
    int value = c->data[index];
    pthread_rwlock_unlock(&c->rwlock);
    return value;
}

void write_cache(struct cache *c, int index, int value) {
    pthread_rwlock_wrlock(&c->rwlock);  /* Exclusive access */
    c->data[index] = value;
    pthread_rwlock_unlock(&c->rwlock);
}
```

Match the synchronization mechanism to the access pattern. Mutexes for general protection. Atomics for simple counters (faster, no kernel involvement). Read-write locks when reads dominate. Condition variables when threads must wait for state changes. Over-synchronizing hurts performance; under-synchronizing causes bugs.
We've thoroughly examined the resources that threads share within a process. Let's consolidate the key insights:

- The code segment is shared but read-only, so concurrent execution of the same function is inherently safe.
- The data segment (globals and statics) is shared and mutable: the prime breeding ground for data races.
- The heap is shared; allocators are thread-safe, but the objects you allocate are not automatically protected.
- The file descriptor table and file offsets are shared: coordinate both access and descriptor lifetime.
- Signal dispositions are process-wide; a dedicated sigwait() thread is the cleanest handling pattern.
- Process attributes (working directory, environment, credentials, limits) are shared; avoid changing them once threads exist.
What's Next:
Having explored what threads share, we'll now examine what threads keep private. The next page covers Thread-Specific Resources—the stack, registers, thread-local storage, and other per-thread state that enables threads to work independently despite sharing so much else.
You now have deep knowledge of every category of shared resource in a multi-threaded program—and the synchronization challenges each presents. This understanding is essential for writing correct, efficient concurrent code.