Every file system operates within absolute constraints—maximum file sizes, volume capacities, path lengths, and file counts that cannot be exceeded regardless of available hardware. These limits stem from fundamental design decisions: the bit-width of pointers, the size of addressing fields, and architectural choices made when the file system was first designed.
Understanding these limits is essential for capacity planning, application design, and choosing the right file system for a workload.
This page provides a comprehensive examination of size limits across major file systems, explaining not just what the limits are but why they exist.
By the end of this page, you will understand the maximum sizes for files, volumes, filenames, and paths across all major file systems; recognize the architectural decisions that create these limits; appreciate practical implications for data management and application design; and know when size constraints should influence file system selection.
File system size limits don't arise arbitrarily—they reflect fundamental architectural decisions made during file system design. Understanding these origins helps predict limits for any file system and appreciate why certain limits exist.
The 2GB file size limit in many older systems came from using a signed 32-bit integer (off_t) for file offsets. Signed 32-bit integers max out at 2,147,483,647 (2GB - 1 byte). Unsigned offsets extended this to 4GB, and 64-bit offsets effectively eliminated file size concerns. Until Large File Support (LFS) was adopted, off_t limited file sizes to 2GB even on capable file systems, which is why you might still encounter 2GB limits today: legacy APIs strike again.
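The arithmetic behind these offset limits is simple enough to verify directly; a quick Python sketch (illustrative only):

```python
# Maximum file offsets representable at each integer width.
limits = {
    "signed 32-bit (off_t, pre-LFS)": 2**31 - 1,      # 2,147,483,647 -> the 2GB wall
    "unsigned 32-bit (FAT32 size field)": 2**32 - 1,  # 4GB minus 1 byte
    "signed 64-bit (modern off_t)": 2**63 - 1,        # ~9.2 EB, effectively unlimited
}
for name, value in limits.items():
    print(f"{name}: {value:,} bytes ({value / 2**30:,.1f} GiB)")
```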
The Evolution of Addressing:
File system addressing has progressed through several generations:
First Generation (16-bit): FAT12/FAT16-era designs with 16-bit fields, limiting volumes to megabytes or a few gigabytes.
Second Generation (32-bit): FAT32, ext2/ext3, and similar designs with 32-bit fields, giving 2-4GB files and terabyte-scale volumes.
Third Generation (64-bit+): ext4, XFS, NTFS, Btrfs, and ZFS with 64-bit (or 128-bit) fields, pushing limits into the exabyte range.
Each generation expanded limits by orders of magnitude, driven by storage capacity growth that consistently exceeded predictions.
Maximum file size determines what individual files can hold. This matters for video production, disk images, database files, and backup archives.
| File System | Maximum File Size | Limiting Factor | Notes |
|---|---|---|---|
| FAT12 | 32 MB | Cluster chain + volume size | Obsolete for most purposes |
| FAT16 | 2 GB / 4 GB | 32-bit size field | 4GB with 64KB clusters; rare |
| FAT32 | 4 GB minus 1 byte | 32-bit size field | Most common practical limit encountered |
| exFAT | 16 EB (theoretical) | 64-bit size field | Designed specifically for large files |
| NTFS | 16 EB (theoretical) / 16 TB (practical) | Implementation limits | Windows versions impose practical limits |
| ext4 | 16 TB | 32-bit logical block numbers | With 4KB blocks; larger with the bigalloc feature |
| XFS | 8 EB | 64-bit addressing | Effectively unlimited for any practical purpose |
| ZFS | 16 EB | 128-bit addressing | Limited by 64-bit file size field in metadata |
| Btrfs | 16 EB | 64-bit addressing | Extent-based; effectively unlimited |
FAT32's 4GB file size limit is the most commonly encountered constraint in practice. Attempting to copy a 4.5GB video file to a FAT32 USB drive fails with confusing errors. Many users and even some applications don't understand why. This single limitation drives most FAT32-to-exFAT (or NTFS) migrations.
Deep Dive: FAT32's 4GB Limit
FAT32's limit comes from two sources:
Directory Entry Size Field: Each directory entry contains a 32-bit field for file size, maxing at 4,294,967,295 bytes (4GB - 1).
Cluster Chain Capacity: FAT32 uses 28 bits for cluster numbers (the top 4 bits of each 32-bit FAT entry are reserved), supporting ~268 million clusters. With 32KB clusters (typical for large volumes), cluster chains could theoretically address ~8TB per file. But the size field hits first.
Why not just expand the size field? Backward compatibility. Changing the directory entry structure would break every FAT32 implementation in existence—firmware, cameras, game consoles, embedded systems. exFAT was created specifically to support larger files while maintaining FAT's simplicity.
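Both ceilings are easy to recompute; a Python sketch of the arithmetic above:

```python
# FAT32's two independent ceilings; the smaller one wins.
size_field_limit = 2**32 - 1          # 32-bit directory-entry size field
max_clusters = 2**28                  # 32-bit FAT entries, top 4 bits reserved
cluster_size = 32 * 1024              # 32 KB, typical on large volumes

chain_limit = max_clusters * cluster_size
print(f"size field:    {size_field_limit:,} bytes (~4 GB)")
print(f"cluster chain: {chain_limit / 2**40:.0f} TiB theoretical")
assert size_field_limit < chain_limit  # the 4GB size field is hit first
```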
Deep Dive: NTFS Practical Limits
NTFS theoretically supports 16 exabytes (2⁶⁴ bytes) per file, but Windows imposes lower limits in practice: roughly 16 TB with 4 KB clusters on older versions, 256 TB with 64 KB clusters, and 8 PB with 2 MB clusters on recent Windows releases.
The limitation often comes from the Windows file APIs, not NTFS itself. 32-bit applications using non-LFS APIs still face 2GB or 4GB limits even on NTFS.
Deep Dive: ext4 File Size
ext4 file sizes are limited by the 32-bit logical block numbers in its extent tree. With 4KB blocks, this yields ~16TB. The bigalloc feature (cluster allocation) can increase this by treating multiple blocks as one allocation unit, but adoption is limited.
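Under the same arithmetic, ext4's per-file ceiling follows from 2³² addressable logical blocks times the block size; a quick check:

```python
# ext4 per-file limit: 2^32 addressable logical blocks x block size.
LOGICAL_BLOCKS = 2**32
for block_size in (1024, 2048, 4096):
    max_file_tib = LOGICAL_BLOCKS * block_size / 2**40
    print(f"{block_size:>4}-byte blocks -> {max_file_tib:.0f} TiB max file size")
```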
Deep Dive: 128-bit ZFS
ZFS uses 128-bit block pointers, theoretically supporting 2^128 blocks. At any conceivable block size, this exceeds the amount of data that could physically exist; ZFS's designers famously noted that fully populating a 128-bit pool would take more energy than boiling the oceans.
Practical limits come from 64-bit size fields in metadata (16 exabytes per file) and implementation constraints, not the addressing scheme.
Volume (or partition) size limits determine how much total storage can live within a single file system instance. With modern storage arrays and cloud volumes reaching petabytes, these limits matter for:
| File System | Maximum Volume Size | Limiting Factor | Practical Context |
|---|---|---|---|
| FAT12 | 32 MB | 12-bit cluster numbering | Legacy floppy disks |
| FAT16 | 2 GB / 4 GB | 16-bit cluster numbering | Larger with 64KB clusters |
| FAT32 | 2 TB (Windows) / 16 TB | 32-bit sector addressing; 28-bit clusters | Windows enforces 2TB; spec allows 16TB |
| exFAT | 128 PB | 64-bit cluster addressing | Designed for flash media scalability |
| NTFS | 16 EB (theoretical) / 256 TB (typical) | Cluster count × cluster size | 256 TB with 64KB clusters; 8 PB with 2MB clusters on recent Windows |
| ext4 | 1 EB | 48-bit block numbering | Commonly limited by tools to 16-50TB |
| XFS | 8 EB | 64-bit block numbering | Proven at petabyte scale in production |
| ZFS | 256 ZB (zettabytes) | 128-bit addressing | Zpool composed of multiple vdevs |
| Btrfs | 16 EB | 64-bit addressing | Can span multiple devices natively |
Understanding Cluster/Block Size Tradeoffs:
Volume size limits often depend on the block (cluster) size chosen at format time:
Larger Blocks Increase Volume Capacity: with a fixed maximum block count, doubling the block size doubles the maximum volume size.
Larger Blocks Waste Space: every file occupies whole blocks, so each file's final, partially filled block carries unused slack (internal fragmentation); a 1-byte file on 32KB clusters still consumes 32KB.
The Default Balance: most formatters choose block sizes that balance capacity and efficiency, typically 4KB for general-purpose volumes, reserving larger clusters for very large or media-dedicated volumes.
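The tradeoff can be quantified. A Python sketch, assuming a hypothetical fixed ceiling of 2³² clusters and a synthetic population of small files:

```python
import random

# Illustrative tradeoff: bigger clusters raise the maximum volume size but
# waste more slack in each file's final, partially-filled cluster.
MAX_CLUSTERS = 2**32
random.seed(0)
small_files = [random.randint(1, 64 * 1024) for _ in range(10_000)]

for cluster in (4096, 32768, 65536):
    max_volume_tib = MAX_CLUSTERS * cluster / 2**40
    # Slack per file = bytes needed to round its size up to a cluster boundary.
    slack = sum(-size % cluster for size in small_files)
    print(f"{cluster // 1024:>2} KiB clusters: max volume {max_volume_tib:,.0f} TiB, "
          f"slack across 10k small files: {slack / 2**20:.1f} MiB")
```

Note the slack term: `-size % cluster` in Python is the distance from `size` up to the next cluster boundary.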
Practical Considerations:
Theoretical limits rarely matter. Practical limits include:
Format Time: Creating an ext4 file system on a 100TB volume can take hours due to inode table initialization (unless lazy initialization is used)
fsck Duration: File system checks on multi-petabyte volumes can take days; journaling avoids full checks after a crash
Backup Windows: Full backup of 100TB+ volumes requires careful planning
Device Geometry: Some storage devices have sector count limits independent of file system
ZFS doesn't have a volume limit in the traditional sense because zpools can span unlimited vdevs (virtual devices). A single zpool can incorporate hundreds of disks, each potentially multi-terabyte, creating aggregate pools in the exabyte range. This is why ZFS is favored for large-scale storage—the 'volume' grows with you.
Real-World Volume Size Examples:
| Scenario | Typical Volume Size | Suitable File Systems |
|---|---|---|
| Boot partition | 500MB - 2GB | ext4, FAT32, NTFS |
| Desktop root | 50-500GB | ext4, Btrfs, NTFS |
| Workstation data | 1-10TB | ext4, XFS, NTFS, ZFS |
| Database storage | 1-50TB | XFS, ext4, ZFS |
| Virtualization pool | 10-200TB | XFS, ZFS, ext4 |
| Enterprise NAS | 100TB - 2PB | ZFS, XFS |
| HPC scratch | 500TB - 10PB | XFS |
| Cloud object store | Multi-PB | Custom (not traditional FS) |
For volumes exceeding 50TB, XFS and ZFS are the proven choices. ext4 works but lacks the large-scale operational tooling. FAT32 and NTFS are inappropriate at these scales.
Filename and path length limits affect user experience and application compatibility. These constraints create real problems:
| File System | Max Filename | Max Path | Character Set | Case Sensitivity |
|---|---|---|---|---|
| FAT32 | 255 chars (LFN) | Limited by OS | Unicode (LFN) | Case-preserving, insensitive |
| exFAT | 255 chars | Limited by OS | Unicode | Case-preserving, insensitive |
| NTFS | 255 chars | 32,767 chars | Unicode | Case-preserving, insensitive* |
| ext4 | 255 bytes | No limit (4096 typical) | Any except NUL, / | Case-sensitive |
| XFS | 255 bytes | No limit | Any except NUL, / | Case-sensitive |
| ZFS | 255 bytes | No limit | Any except NUL, / | Configurable |
| Btrfs | 255 bytes | No limit | Any except NUL, / | Case-sensitive |
*NTFS is case-sensitive at file system level but Windows APIs are typically case-insensitive; WSL can access NTFS case-sensitively.
Understanding Bytes vs. Characters:
Unix-derived file systems (ext4, XFS, ZFS, Btrfs) limit filenames to 255 bytes, not characters. With UTF-8 encoding, ASCII filenames get the full 255 characters, Cyrillic or Greek names (2 bytes per character) get ~127, CJK names (3 bytes per character) get ~85, and emoji (4 bytes each) get ~63.
Windows file systems (NTFS, FAT with LFN) count characters using UTF-16, providing a consistent 255 character limit regardless of script.
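The byte-versus-character distinction is easy to check directly; a short Python sketch:

```python
def utf8_len(name: str) -> int:
    """Byte length of a filename as a Unix file system counts it."""
    return len(name.encode("utf-8"))

# Same 255-byte budget, very different character counts per script:
for name in ["report", "отчёт", "報告書"]:
    print(f"{name!r}: {len(name)} characters, {utf8_len(name)} bytes")

assert utf8_len("a" * 255) == 255   # ASCII: full 255 characters fit
assert utf8_len("報" * 86) > 255    # CJK: 86 three-byte characters overflow
```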
The Path Length Problem:
While modern file systems support very long paths, operating systems and applications often don't: Windows historically enforces a 260-character MAX_PATH (opt-in long-path support raises this to 32,767), and POSIX systems define PATH_MAX, typically 4,096 bytes.
Relative operations via openat() bypass PATH_MAX concerns. Java/Maven projects, Node.js node_modules, and deeply nested directory structures routinely exceed Windows' 260-character path limit. Files become inaccessible, undeletable, or cause bizarre application crashes. Solutions include enabling Windows long path support, using subst to shorten paths, flattening directory structures, or moving to Linux.
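A pre-flight audit can catch these before they bite. A Python sketch (the 260-character figure is stock Windows MAX_PATH; the helper name is ours):

```python
import os

WINDOWS_MAX_PATH = 260  # stock Windows limit, including drive letter

def find_long_paths(root, limit=WINDOWS_MAX_PATH):
    """Yield file paths under `root` that exceed `limit` characters."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if len(full) > limit:
                yield full

# Example: audit a project tree before zipping it for Windows users.
# for path in find_long_paths("node_modules"):
#     print(len(path), path)
```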
Beyond file and volume sizes, file systems impose limits on other resources that can become constraints in specific scenarios.
| Limit Type | FAT32 | NTFS | ext4 | XFS | ZFS | Btrfs |
|---|---|---|---|---|---|---|
| Files per directory | 65,534 | ~4.3 billion | ~10 million* | Unlimited† | Unlimited† | ~2^64 |
| Total files on volume | 268M clusters | ~4.3 billion | ~4 billion | ~2^64 | ~2^48/dataset | ~2^64 |
| Link count per file | N/A | 1,024 | 65,000 | ~4.3 billion | Unlimited† | 65,535 |
| Max subdirectory depth | Varies by path | ~4,000 | No limit | No limit | No limit | No limit |
| Extended attr size | N/A | 64KB | 4KB inline† | 64KB | Large† | 16KB/~1MB |
| Max symbolic link length | N/A | N/A | 4,096 bytes | 1,024 bytes | 1,024 bytes | 1,024 bytes |
*ext4 uses HTree hashing; performance degrades around 10 million entries
†Performance degrades before hard limits
Files Per Directory: A Practical Concern
While file systems allow millions of files per directory, practical limits are lower:
FAT32 (65,534 entries): long filenames consume multiple directory entries each, so the effective count is lower, and lookups are linear scans that slow noticeably past a few thousand entries.
ext4 HTree (millions possible): hashed directory indexes keep individual lookups fast, but enumerating or sorting millions of entries is still slow.
Application Behaviors: Many applications choke on large directories regardless of file system capability:
ls -l becomes slow due to stat() calls on each entry, and GUI file managers, shell globbing, and backup tools degrade similarly.
Best Practice: Shard files across subdirectories. Instead of 1 million files in one directory, use /data/00/ through /data/ff/ (256 subdirectories) with ~4,000 files each.
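One way to implement that sharding scheme, using a hash of the filename to pick a bucket (the helper name and layout are illustrative):

```python
import hashlib
import os

def shard_path(base: str, filename: str, levels: int = 1) -> str:
    """Map a filename to a sharded subdirectory using leading hex bytes
    of its SHA-256 hash, e.g. /data/3a/photo.jpg for levels=1."""
    digest = hashlib.sha256(filename.encode("utf-8")).hexdigest()
    parts = [digest[2 * i : 2 * i + 2] for i in range(levels)]
    return os.path.join(base, *parts, filename)

print(shard_path("/data", "photo_000123.jpg"))
# One hex byte -> 256 buckets; 1 million files spread to ~4,000 per directory.
```

Hashing (rather than, say, the filename's first letters) keeps buckets evenly loaded regardless of naming patterns.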
ext4's 65,000 hard link limit per file can affect systems using hard links for deduplication or versioning (like rsnapshot). Subdirectories count against their parent's link count (each child's '..' entry links back to the parent), so the limit also caps subdirectories per directory. XFS supports ~4.3 billion links, eliminating this concern.
Inode Limits (ext4-specific):
ext4 pre-allocates inode tables at format time based on the "bytes-per-inode" ratio:
| Scenario | Bytes per Inode | Files per TB | Use Case |
|---|---|---|---|
| Default | 16,384 | ~64 million | General purpose |
| Mail storage | 4,096 | ~256 million | Many small files |
| Media storage | 1,048,576 | ~1 million | Large files only |
If inode tables fill before space does, no new files can be created despite free space. Use df -i to monitor inode usage.
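The same check df -i performs is available programmatically on POSIX systems via statvfs; a sketch:

```python
import os

# Inode headroom check, the programmatic analogue of `df -i` (POSIX only).
st = os.statvfs("/")
total, free = st.f_files, st.f_ffree
if total:  # file systems with dynamic inodes may report zero or shifting totals
    used_pct = 100 * (total - free) / total
    print(f"inodes: {total:,} total, {free:,} free ({used_pct:.1f}% used)")
```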
XFS, ZFS, and Btrfs allocate inodes dynamically, eliminating this concern.
Subdirectory Depth:
Most file systems have no hard limit on directory nesting depth, but PATH_MAX effectively bounds how deep a tree can be addressed by absolute path, recursive tools risk resource exhaustion, and very deep trees slow path resolution.
Let's examine how these limits impact real-world scenarios and when they should influence file system selection.
| Scenario | Critical Limits | Unsuitable FS | Recommended FS |
|---|---|---|---|
| 4K/8K Video Production | File size (>4GB common) | FAT32 | exFAT, NTFS, XFS |
| Large Database | File size + volume size | FAT32, older ext3 | XFS, ZFS |
| Mail Server (millions of msgs) | Files per dir, inodes | FAT32 | XFS, ext4 (tuned) |
| Node.js Development (Windows) | Path length | Any on Windows w/o LongPath | WSL/Linux or LongPath enabled |
| Backup Archive | Volume size, hard links | FAT32 | ZFS, XFS |
| Embedded/IoT | Simplicity, compatibility | ZFS, Btrfs (overhead) | FAT32, ext4 |
| Genomics Data | File size (multi-TB common) | FAT32, ext3 | XFS, ZFS, Lustre |
| Virtual Machine Storage | File size, random I/O | FAT32 | XFS, ZFS, ext4 |
Cross-Platform Considerations:
When storage must be accessible from multiple operating systems, limits compound:
USB/Removable Storage: exFAT is the usual compromise, since FAT32's 4GB file limit rules it out for video while NTFS and ext4 have patchy support outside their home platforms.
Network Shares: the protocol (SMB, NFS) adds its own path, filename, and file size constraints on top of the underlying file system's.
Cloud Storage: object stores impose per-object limits independent of any file system (Amazon S3, for example, caps objects at 5 TB).
Before deploying any storage for video, VMs, or databases, verify file size limits by attempting to create a file larger than 4GB. This trivial test reveals FAT32 volumes masquerading as more capable file systems—surprisingly common in external drives and memory cards.
We've examined the ceiling constraints that define file system scalability. The essential insights: limits flow from addressing bit-widths fixed at design time; FAT32's 4GB file limit is the constraint users hit most often; 64-bit file systems make file size effectively unlimited; and at scale, practical limits (format time, fsck duration, backup windows) bite long before theoretical ones.
What's Next:
We've covered features, performance, and limits. The next page examines Journaling Support—the mechanisms that protect file system consistency across crashes, their performance implications, and how different journaling approaches trade off speed, safety, and recovery time.
You now understand the size constraints that govern file system selection. From FAT32's 4GB barrier to ZFS's astronomical limits, these ceilings define what's possible within each storage architecture. This knowledge is essential for capacity planning and avoiding unexpected limits in production.