What happens when you need a file system that's as fast as RAM itself? What if you could create files and directories that exist purely in memory, avoiding disk I/O entirely, yet behave exactly like regular files?
This is tmpfs—the temporary file system. Unlike traditional file systems that persist data to physical storage, tmpfs stores everything in the kernel's page cache and swap space. Files appear instantly, writes complete in microseconds, and the entire file system vanishes when unmounted or the system reboots.
tmpfs is everywhere in modern Linux:
- /tmp — Temporary files for applications
- /run — Runtime state (PID files, sockets, locks)
- /dev/shm — POSIX shared memory (shm_open)

Understanding tmpfs is essential for optimizing system performance and designing applications that leverage memory effectively.
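You can verify which of these directories are memory-backed on your own machine; a quick check (output varies by distribution):

```bash
# List every active tmpfs mount (findmnt is part of util-linux)
$ findmnt -t tmpfs

# The same mounts with usage figures
$ df -h -t tmpfs
```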
By the end of this page, you will understand tmpfs architecture from kernel implementation to practical deployment. You'll learn how tmpfs interacts with the page cache and swap, how to size and configure tmpfs mounts, and when to choose tmpfs over disk-based alternatives.
Before tmpfs, Linux had ramfs—a simpler memory-based file system. Understanding the difference illuminates key memory management concepts.
| Characteristic | ramfs | tmpfs |
|---|---|---|
| Size limit | None—can consume all RAM | Configurable maximum size |
| Swappable | No—pages are pinned in RAM | Yes—can swap to disk under memory pressure |
| Size reporting | Always shows 0 used | Accurate used/available reporting |
| Memory accounting | No accounting | Proper RSS/PSS attribution |
| OOM behavior | Can cause OOM without warning | Respects limits, fails writes gracefully |
| Use case | Largely obsolete | Standard for most uses |
Why tmpfs replaced ramfs:
Ramfs had a critical flaw: no size limits. A runaway process could fill ramfs indefinitely, consuming all physical RAM and triggering the OOM killer with no warning. Additionally, ramfs pages were pinned—they couldn't be swapped out under memory pressure, making the memory consumption irrecoverable.
tmpfs addressed these issues by:
- Enforcing a configurable maximum size, so a runaway writer gets "No space left on device" instead of exhausting RAM
- Allowing its pages to be swapped out under memory pressure
- Accounting memory properly, so df shows accurate usage
```bash
# Mounting ramfs (obsolete, shown for comparison)
$ sudo mount -t ramfs ramfs /mnt/ramfs
$ df -h /mnt/ramfs
Filesystem      Size  Used Avail Use% Mounted on
ramfs              0     0     0    - /mnt/ramfs
# Note: Size shows as 0—no limit enforcement!

# Mounting tmpfs with size limit
$ sudo mount -t tmpfs -o size=1G tmpfs /mnt/tmpfs
$ df -h /mnt/tmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.0G     0  1.0G   0% /mnt/tmpfs

# Write some data
$ dd if=/dev/zero of=/mnt/tmpfs/testfile bs=1M count=500
500+0 records out
524288000 bytes (524 MB) copied

$ df -h /mnt/tmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.0G  500M  524M  49% /mnt/tmpfs

# Attempting to exceed the limit
$ dd if=/dev/zero of=/mnt/tmpfs/overflow bs=1M count=1000
dd: error writing '/mnt/tmpfs/overflow': No space left on device
# Graceful failure—system remains stable
```

While tmpfs is preferred, ramfs persists for specific use cases: initramfs (the early boot file system) and situations requiring guaranteed non-swappable memory. For general use, always prefer tmpfs with explicit size limits.
tmpfs is implemented in the kernel's memory-management code (mm/shmem.c) and exposes a file system interface backed by the memory management subsystem. Unlike disk file systems that interact with block devices, tmpfs operates entirely within the virtual memory system.
Key architectural components:
How tmpfs stores data:
Page cache integration: tmpfs file content resides in the page cache, using the same mechanisms as disk file systems. The difference is that there's no backing block device—the page cache is the storage.
Shmem pages: tmpfs uses "shmem" (shared memory) pages, a special category in the kernel's memory accounting. These pages are:
- Counted in the Shmem field of /proc/meminfo (and reported as "shared" by free)
- Swap-backed: under memory pressure they are written to swap space rather than to a backing file
- Attributed to the processes that map them, which is what gives tmpfs the proper RSS/PSS accounting noted earlier
Inode and dentry structures: File metadata (names, permissions, timestamps) uses standard kernel data structures allocated from slab caches—minimal overhead compared to file content.
```bash
# View shmem (tmpfs) usage in meminfo
$ grep -i shmem /proc/meminfo
Shmem:            567890 kB   # Current shmem usage
ShmemHugePages:        0 kB   # Huge pages in shmem
ShmemPmdMapped:        0 kB   # PMD-mapped shmem

# This includes:
# - All tmpfs mounts
# - POSIX shared memory (/dev/shm)
# - Shared anonymous mappings (mmap MAP_SHARED|MAP_ANONYMOUS)

# View tmpfs mounts specifically
$ mount | grep tmpfs
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=3216140k,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,size=16080696k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)

# Detailed tmpfs metrics
$ df -h --type=tmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           3.1G  1.2M  3.1G   1% /run
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G   12K   16G   1% /tmp

# Check if tmpfs data has been swapped
$ vmstat 1 3
procs -----------memory---------- ---swap--
 r  b   swpd    free   buff    cache  si  so
 1  0      0 8234567 456789 12345678   0   0
 0  0      0 8234123 456789 12345890   0   0
 0  0      0 8233890 456789 12346012   0   0
# swpd=0 and si/so=0 means no swapping occurring
```

Modern kernels support PMD-sized huge pages (2 MB on x86-64) for tmpfs via Transparent Huge Pages (THP). This can significantly improve performance for large tmpfs files by reducing TLB misses. Enable it with the mount option huge=always or huge=within_size.
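To see that the page cache really is the storage, you can watch the Shmem counter move as data is written to a tmpfs mount. A minimal sketch (the mount point /mnt/demo and the sizes are arbitrary choices for illustration):

```bash
# Create a small tmpfs mount for the experiment
$ sudo mkdir -p /mnt/demo
$ sudo mount -t tmpfs -o size=512M tmpfs /mnt/demo

# Note the current shmem figure
$ grep '^Shmem:' /proc/meminfo

# Write 256 MB into the mount...
$ dd if=/dev/zero of=/mnt/demo/blob bs=1M count=256

# ...and Shmem (and the cache figure in free -h) grows by roughly 262144 kB
$ grep '^Shmem:' /proc/meminfo

# Deleting the file or unmounting releases the memory immediately
$ sudo umount /mnt/demo
```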
One of tmpfs's most important characteristics is its swappable nature. Unlike ramfs, tmpfs pages can be evicted to swap when the system experiences memory pressure. This behavior has significant implications for performance and reliability.
Controlling swap behavior:
You can influence whether tmpfs pages get swapped through the system's swappiness setting and cgroup memory limits:
```bash
# System-wide swappiness (0-100)
# Lower values = prefer evicting file cache over swapping
# Higher values = more willing to swap (including tmpfs)
$ cat /proc/sys/vm/swappiness
60    # Default value

# Reduce swappiness to keep tmpfs in RAM longer
$ echo 10 | sudo tee /proc/sys/vm/swappiness

# Per-process memory locking to prevent swapping
# The mlock() system call can lock tmpfs pages in RAM

# Disable swap for a single mount: Linux 6.4+ adds a 'noswap' mount option
$ sudo mount -t tmpfs -o size=1G,noswap tmpfs /mnt/locked_tmpfs
# On older kernels, use memory cgroup limits for fine-grained control

# Monitor which processes are causing tmpfs swap activity
$ sudo iotop -o
# Look for processes with SWAPIN activity on tmpfs paths

# View swap composition
$ sudo swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/sda3 partition  16G 234M   -2

# Check if any tmpfs data is in swap (approximate)
$ sudo smem -tw | head
Area                           Used      Cache   Noncache
firmware/hardware                 0          0          0
kernel image                      0          0          0
kernel dynamic memory       1234567     987654     246913
userspace memory            8765432    6543210    2222222
free memory                 7654321          0          0
```

If tmpfs contains sensitive data (encryption keys, passwords), be aware that swapped pages may persist on disk. For security-critical applications, consider: (1) disabling swap entirely, (2) using encrypted swap, or (3) using mlock() to prevent specific pages from swapping.
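Cgroup memory limits, mentioned above, are the practical way to bound how much RAM a workload's tmpfs writes can consume. A minimal sketch using cgroup v2 (it assumes the unified hierarchy is mounted at /sys/fs/cgroup, that swap is enabled, and an arbitrary group name demo):

```bash
# Create a cgroup and cap its memory
$ sudo mkdir /sys/fs/cgroup/demo
$ echo 256M | sudo tee /sys/fs/cgroup/demo/memory.max

# Move the current shell into the group, then write to tmpfs from it
$ echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs
$ dd if=/dev/zero of=/tmp/bigfile bs=1M count=512

# tmpfs pages are charged to the cgroup that first touches them; once
# memory.max is hit the kernel reclaims, pushing shmem pages out to swap
# (or OOM-killing the task if no swap is available)
$ grep '^shmem' /sys/fs/cgroup/demo/memory.stat
$ cat /sys/fs/cgroup/demo/memory.swap.current
```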
tmpfs supports numerous mount options for controlling size, permissions, and behavior. Proper configuration is crucial for system stability and security.
| Option | Description | Default | Example |
|---|---|---|---|
| size= | Maximum filesystem size | 50% of RAM | size=2G, size=50% |
| nr_blocks= | Max blocks (alternative to size) | Calculated from size | nr_blocks=131072 |
| nr_inodes= | Maximum number of inodes | Calculated | nr_inodes=1000000 |
| mode= | Root directory permissions | 1777 (sticky) | mode=0755 |
| uid= | Owner UID of root directory | 0 (root) | uid=1000 |
| gid= | Owner GID of root directory | 0 (root) | gid=1000 |
| huge= | Huge page policy | never | huge=always, huge=within_size |
| mpol= | NUMA memory policy | System default | mpol=interleave |
| noatime | Don't update access times | atime enabled | noatime |
| nosuid | Disallow setuid binaries | suid allowed | nosuid |
| noexec | Disallow execution | exec allowed | noexec |
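Because these are ordinary mount options, the size of a live tmpfs can also be changed with a remount, with no need to unmount first (the 2G figure below is just an example):

```bash
# Grow (or shrink) an existing tmpfs in place
$ sudo mount -o remount,size=2G /dev/shm

# Verify the new limit
$ df -h /dev/shm
```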
```bash
# ═══════════════════════════════════════════════════════════════
# Standard system tmpfs mounts (from /etc/fstab)
# ═══════════════════════════════════════════════════════════════

# /tmp - General temporary storage
# Size: 50% of RAM, sticky bit, noexec for security
tmpfs  /tmp      tmpfs  nosuid,nodev,noexec,size=50%,mode=1777  0 0

# /run - Runtime state (PID files, sockets)
# Smaller size, accessible to daemons, no exec
tmpfs  /run      tmpfs  nosuid,nodev,noexec,size=10%,mode=0755  0 0

# /dev/shm - POSIX shared memory
# Larger size to support applications using shm_open()
tmpfs  /dev/shm  tmpfs  nosuid,nodev,size=50%  0 0

# ═══════════════════════════════════════════════════════════════
# Application-specific tmpfs mounts
# ═══════════════════════════════════════════════════════════════

# Build directory - maximize performance for compilation
$ sudo mount -t tmpfs -o size=8G,nr_inodes=1000000,noatime tmpfs /build
# Large size for object files, many inodes for source trees

# Redis/Memcached data directory - low latency, sized to fit
$ sudo mount -t tmpfs -o size=4G,noexec,nosuid tmpfs /var/lib/redis
# noexec/nosuid for security since data comes from network

# Per-user private tmpfs (e.g., for secrets)
$ sudo mount -t tmpfs -o size=100M,mode=0700,uid=1000,gid=1000 tmpfs /home/user/.private

# ═══════════════════════════════════════════════════════════════
# Huge pages for large files (improves TLB efficiency)
# ═══════════════════════════════════════════════════════════════

# Enable huge pages for all files
$ sudo mount -t tmpfs -o size=4G,huge=always tmpfs /mnt/hugetmpfs

# Enable huge pages only for files >= hugepage size
$ sudo mount -t tmpfs -o size=4G,huge=within_size tmpfs /mnt/hugetmpfs

# Check huge page usage
$ cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

$ grep -i hugepages /proc/meminfo
AnonHugePages:    524288 kB
ShmemHugePages:        0 kB

# ═══════════════════════════════════════════════════════════════
# NUMA-aware tmpfs for multi-socket systems
# ═══════════════════════════════════════════════════════════════

# Interleave across all NUMA nodes
$ sudo mount -t tmpfs -o size=8G,mpol=interleave tmpfs /mnt/numa_tmpfs

# Bind to specific NUMA node
$ sudo mount -t tmpfs -o size=4G,mpol=bind:0 tmpfs /mnt/node0_tmpfs
```

The default tmpfs size (50% of RAM) is often too generous for /tmp, risking OOM situations. Best practice: size /tmp to 10-25% of RAM for general systems, size /dev/shm based on application requirements (some databases need large shared memory), and always monitor usage with df -h.
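On systemd-based systems, a .mount unit is an alternative to the fstab entries above. A sketch for a hypothetical /var/cache/build tmpfs (the path, size, and unit contents are illustrative; the unit file name must be the systemd-escaped form of the mount point):

```bash
# The required unit name for this mount point
$ systemd-escape -p --suffix=mount /var/cache/build
var-cache-build.mount

# Create the unit
$ sudo tee /etc/systemd/system/var-cache-build.mount <<'EOF'
[Unit]
Description=tmpfs for build artifacts

[Mount]
What=tmpfs
Where=/var/cache/build
Type=tmpfs
Options=size=2G,mode=0755,noatime

[Install]
WantedBy=multi-user.target
EOF

# Activate now and on every boot
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now var-cache-build.mount
```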
tmpfs finds applications across the entire Linux ecosystem. Understanding these use cases helps you recognize opportunities to leverage tmpfs in your own systems.
System directories using tmpfs:
- /run (or /var/run) — Runtime state that must be cleared on reboot: PID files, socket files, lock files. systemd and other init systems rely on /run being empty at boot.
- /tmp — Application temporary files. Using tmpfs for /tmp provides automatic cleanup on reboot and dramatically faster temp file operations.
- /dev/shm — POSIX shared memory implementation. Applications using shm_open() create files here for inter-process shared memory regions.
- /sys/fs/cgroup — cgroup hierarchy (when using cgroup v1). This is a tmpfs containing the control group file hierarchy.
- /run/user/$UID — Per-user runtime directory. XDG_RUNTIME_DIR for user-specific sockets and temp files.
```bash
# Typical system tmpfs mounts
$ mount | grep tmpfs
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=3216140k,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,size=3216136k,mode=700,uid=1000,gid=1000)

# Contents of /run
$ ls /run
agetty.reload  dbus      lock     screen     sudo        utmp
crond.reboot   dmeventd  log      shm        tmpfiles.d  xtables.lock
cups           initctl   lvmetad  sshd.pid   udev
```

tmpfs performance is fundamentally different from disk-based file systems. Understanding these characteristics helps you set appropriate expectations and make informed tradeoffs.
| Metric | tmpfs | SSD (NVMe) | HDD | Notes |
|---|---|---|---|---|
| Read latency | ~50-200 ns | ~10-100 µs | ~5-15 ms | tmpfs 100-1000x faster than SSD |
| Write latency | ~50-200 ns | ~10-100 µs | ~5-15 ms | No persistence overhead for tmpfs |
| Random IOPS (4K) | 10M+ | 500K-1M | 100-200 | Memory bandwidth limited only |
| Sequential throughput | 10+ GB/s | 3-7 GB/s | 100-200 MB/s | Limited by memory bus speed |
| Metadata operations | ~1 µs | ~10-50 µs | ~1-10 ms | Directory listing, stat(), etc. |
| Space efficiency | 1:1 RAM usage | 1:1 disk usage | 1:1 disk usage | tmpfs consumes RAM directly |
```bash
# ═══════════════════════════════════════════════════════════════
# Benchmark: fio comparison (tmpfs vs SSD)
# ═══════════════════════════════════════════════════════════════

# tmpfs random read benchmark
$ fio --name=tmpfs-randread --filename=/tmp/testfile --size=1G \
      --rw=randread --bs=4k --numjobs=4 --iodepth=32 --runtime=30

# Example results (tmpfs):
#   read: IOPS=2.45M, BW=9576MiB/s
#   lat (nsec): avg=51.2, stdev=123.4

# Same test on NVMe SSD:
#   read: IOPS=485k, BW=1895MiB/s
#   lat (usec): avg=262.1, stdev=1234.5

# ═══════════════════════════════════════════════════════════════
# Real-world comparison: extracting kernel source
# ═══════════════════════════════════════════════════════════════

$ time tar -xf linux-5.15.tar.xz -C /mnt/ssd/
real    0m45.123s

$ time tar -xf linux-5.15.tar.xz -C /tmp/     # tmpfs
real    0m8.456s
# 5.3x faster on tmpfs!

# ═══════════════════════════════════════════════════════════════
# Git operations comparison
# ═══════════════════════════════════════════════════════════════

# Clone to SSD
$ time git clone https://github.com/torvalds/linux /mnt/ssd/linux
real    2m34s

# Clone to tmpfs
$ time git clone https://github.com/torvalds/linux /tmp/linux
real    0m52s

# git status on large repository
$ time git status    # On SSD
real    0.89s

$ time git status    # On tmpfs
real    0.12s
```

tmpfs performance degrades drastically when pages are swapped. If your tmpfs workload exceeds available RAM and pages swap to disk, you'll experience SSD/HDD latencies instead of memory latencies. Monitor swap usage and size tmpfs appropriately.
Proper monitoring of tmpfs usage is essential to prevent out-of-memory situations and performance degradation.
```bash
# ═══════════════════════════════════════════════════════════════
# Basic tmpfs usage monitoring
# ═══════════════════════════════════════════════════════════════

# View all tmpfs mounts with usage
$ df -h --type=tmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            16G  234M   16G   2% /dev/shm
tmpfs           3.1G  1.2M  3.1G   1% /run
tmpfs            16G   12K   16G   1% /tmp

# Watch for high usage
$ watch -n 1 'df -h --type=tmpfs'

# ═══════════════════════════════════════════════════════════════
# Find what's consuming tmpfs space
# ═══════════════════════════════════════════════════════════════

# Large files in /tmp
$ sudo du -ah /tmp | sort -rh | head -20
500M    /tmp/large_download.iso
234M    /tmp/build_artifacts
45M     /tmp/cache

# Files by age (find old temp files)
$ sudo find /tmp -type f -mtime +7 -ls

# ═══════════════════════════════════════════════════════════════
# Memory pressure and swap correlation
# ═══════════════════════════════════════════════════════════════

# Monitor memory and swap together
$ while true; do
    echo "=== $(date) ==="
    echo "Memory:"
    free -h | grep -E '^(Mem|Swap)'
    echo "Tmpfs usage:"
    df -h --type=tmpfs | grep -v Filesystem
    echo "Shmem in meminfo:"
    grep Shmem /proc/meminfo
    sleep 5
done

# ═══════════════════════════════════════════════════════════════
# Identify processes using shared memory/tmpfs
# ═══════════════════════════════════════════════════════════════

# Which processes have files open in /dev/shm?
$ sudo lsof /dev/shm 2>/dev/null | head -20

# Which processes are using the most shared memory?
# (RssShmem in /proc/PID/status counts a process's resident shmem pages)
$ for pid in /proc/[0-9]*/; do
    shmem=$(awk '/^RssShmem/ {print $2}' "$pid/status" 2>/dev/null)
    if [ -n "$shmem" ] && [ "$shmem" -gt 0 ]; then
        name=$(cat "$pid/comm" 2>/dev/null)
        echo "$shmem kB - $name ($(basename $pid))"
    fi
done | sort -rn | head -10

# ═══════════════════════════════════════════════════════════════
# Prometheus/Grafana monitoring
# ═══════════════════════════════════════════════════════════════

# node_exporter exposes tmpfs metrics automatically
# Query: node_filesystem_avail_bytes{fstype="tmpfs"}
# Alert on: (1 - node_filesystem_avail_bytes{fstype="tmpfs"} /
#                node_filesystem_size_bytes{fstype="tmpfs"}) > 0.8
```

When usage climbs unexpectedly, track down the offending files with du, or clean up old temp files before the mount exhausts its quota.

tmpfs provides memory-speed file operations with the familiar file system interface, enabling dramatic performance improvements for appropriate workloads.
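For hosts without a Prometheus stack, the same 80% rule from the monitoring section above can be enforced with a small cron-friendly script; a sketch (the threshold value and reliance on GNU df's --output option are assumptions about your environment):

```bash
#!/bin/bash
# Warn when any tmpfs mount crosses a usage threshold
THRESHOLD=80

df --output=pcent,target --type=tmpfs | tail -n +2 | while read -r pcent target; do
    usage=${pcent%\%}                       # strip the trailing '%'
    if [ "$usage" -ge "$THRESHOLD" ]; then
        echo "WARNING: $target is ${usage}% full" >&2
    fi
done
```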
What's next:
We've explored tmpfs for general-purpose memory-backed file storage. Next, we'll examine devtmpfs—the specialized tmpfs variant that dynamically manages the /dev device node directory, automatically creating and removing device files as hardware is detected.
You now understand tmpfs architecture, swapping behavior, mount options, use cases, and performance characteristics. This knowledge enables you to leverage memory-backed storage for performance-critical applications while avoiding common pitfalls.