Traditional file systems present a single, monolithic namespace. A disk formatted with ext4 gives you one file system hierarchy, one set of mount options, one administrative domain. If you need to create backup snapshots, manage quotas for different purposes, or mount portions of your storage with different options, you need multiple partitions, each one stranding unused space inside its pre-allocated boundaries.
Btrfs Subvolumes shatter this limitation. A subvolume is a self-contained file system tree within the larger Btrfs file system. Each subvolume has its own directory hierarchy, can be mounted independently, can be snapshotted, and can have its own quotas—yet all subvolumes share the same underlying storage pool without artificial partition boundaries.
By the end of this page, you will understand subvolumes completely: what they are architecturally, how to create and manage them, mounting strategies, the relationship between subvolumes and snapshots, quota groups for space management, and practical organization patterns used in production systems.
A Btrfs subvolume is a separately mountable POSIX file tree that exists within a Btrfs file system. Think of subvolumes as lightweight, dynamically-sized partitions that share storage.
Key Characteristics:
- Lives inside an existing Btrfs file system: no partitioning, no fixed size
- Has its own directory hierarchy and its own inode namespace
- Can be mounted independently, anywhere in the VFS
- Can be snapshotted and tracked by its own quota group
- Draws from the same storage pool as every other subvolume
Subvolume vs. Directory:
A subvolume looks like a directory when accessed from its parent, but it's fundamentally different:
| Aspect | Regular Directory | Subvolume |
|---|---|---|
| Inodes | Same namespace as parent | Separate inode namespace |
| Mounting | Cannot mount independently | Can mount anywhere |
| Snapshots | Cannot snapshot directly | Can create snapshots |
| Quotas | No separate quota tracking | Quota groups apply |
| Hard links | Cross-directory hard links OK | Cannot cross subvolume boundary |
| Device crossing | Part of parent's tree | Separate FS tree |
| st_dev | Same device as parent | Different device number |
The ls -la Illusion:
When listing a parent directory containing a subvolume, the subvolume appears as a directory. But internally, it's marked as a subvolume root:
$ ls -la /
drwxr-xr-x 24 root root 4096 Jan 15 10:00 .
drwxr-xr-x 24 root root 4096 Jan 15 10:00 ..
drwxr-xr-x 4 root root 80 Jan 15 10:00 home # Actually a subvolume!
...
$ btrfs subvolume list /
ID 256 gen 1234 top level 5 path home
Two different 256s appear here, and they are unrelated. The listing shows subvolume ID 256 (IDs for newly created subvolumes start at 256). Separately, the root directory of every subvolume always has inode number 256, visible with ls -di; this is how tools recognize a subvolume root.
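This detail can be used programmatically. Below is a minimal sketch (the helper name is_subvol_root is made up here, and GNU coreutils stat is assumed) that treats inode number 256 as the marker of a subvolume root:

```shell
# Sketch: report whether a path is a Btrfs subvolume root by checking
# for inode number 256. The 256 rule is specific to Btrfs, so on other
# file systems this is only a heuristic, not a guarantee.
is_subvol_root() {
    [ "$(stat --format=%i "$1" 2>/dev/null)" = "256" ]
}
```

For example, `is_subvol_root /home && echo "subvolume"` would print only if /home is mounted from a subvolume root.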
Hard links cannot cross subvolume boundaries. Similarly, rename() across subvolumes fails with EXDEV, so tools like mv fall back to copying the data and deleting the source, exactly as they would between two separate file systems. This is because subvolumes are technically different file systems sharing storage.
Every Btrfs file system has a special top-level subvolume with ID 5 (or FS_TREE_OBJECTID in code). This is the root of the entire subvolume hierarchy.
Understanding the Top-Level:
Btrfs File System
└── Subvolume ID 5 (FS_TREE, top-level)
├── @ (subvolume ID 256, often /)
├── @home (subvolume ID 257, often /home)
├── @snapshots (subvolume ID 258)
│ ├── @-2024-01-01 (snapshot of @)
│ └── @-2024-01-15 (snapshot of @)
└── regular_dir/
└── regular_file.txt
The top-level subvolume (ID 5) is the parent of all other subvolumes. When you create a new subvolume, it becomes a child of whatever subvolume you're currently in.
Default Subvolume:
When you mount a Btrfs file system without specifying a subvolume, Btrfs mounts the default subvolume. Initially, this is the top-level (ID 5), but it can be changed:
# See current default
$ btrfs subvolume get-default /mnt
ID 5 (FS_TREE)
# Change default to a different subvolume
$ btrfs subvolume set-default 256 /mnt
# Now mounting /dev/sda1 mounts subvolume 256
$ mount /dev/sda1 /mnt # Mounts subvol ID 256
Common Patterns:
| Pattern | Default | Use Case |
|---|---|---|
| Simple | ID 5 (top-level) | Easy access to all subvols, less common in distros |
| Flat @ layout | @ subvol | openSUSE, Ubuntu: root is @, home is @home |
| Fedora-style | root subvol | Root at top level, snapshots nested |
| Nested | Custom root | Legacy compatibility, complex layouts |
Mounting the Top-Level:
Even if the default subvolume is changed, you can always access the top-level:
# Mount top-level explicitly
$ mount -o subvolid=5 /dev/sda1 /mnt/btrfs-top
# Or by path
$ mount -o subvol=/ /dev/sda1 /mnt/btrfs-top
# Now you can see all subvolumes
$ ls /mnt/btrfs-top
@ @home @snapshots
This is essential for snapshot management—you often need to access subvolumes that aren't mounted in your running system.
Many distributions use @ prefix for subvolume names (e.g., @, @home, @var). The @ is just a naming convention with no special meaning to Btrfs—it helps visually distinguish subvolumes from regular directories when browsing the top-level.
Subvolume management uses the btrfs subvolume command family. Let's explore the essential operations.
Creating Subvolumes:
# Create a subvolume in current directory
$ btrfs subvolume create /mnt/myfs/subvol_name
Create subvolume '/mnt/myfs/subvol_name'
# Create with specific ownership (runs as root, sets owner)
$ btrfs subvolume create /mnt/myfs/user_data
$ chown user:user /mnt/myfs/user_data
# Create nested subvolume
$ btrfs subvolume create /mnt/myfs/parent_subvol/child_subvol
New subvolumes start empty (just the root directory) and can be populated like any directory.
Listing Subvolumes:
# List all subvolumes
$ btrfs subvolume list /mnt
ID 256 gen 100 top level 5 path @
ID 257 gen 100 top level 5 path @home
ID 258 gen 100 top level 5 path @snapshots
ID 259 gen 99 top level 258 path @snapshots/@-2024-01-01
# Detailed listing with more info
$ btrfs subvolume list -t /mnt
ID gen top level path
-- --- --------- ----
256 100 5 @
257 100 5 @home
# Show subvolume info
$ btrfs subvolume show /mnt/@
@
Name: @
UUID: abc123-def456-...
Parent UUID: -
Received UUID: -
Creation time: 2024-01-01 00:00:00 +0000
Subvolume ID: 256
Generation: 100
Gen at creation: 1
Parent ID: 5
Top level ID: 5
Flags: -
Send transid: 0
Send time: 2024-01-01 00:00:00 +0000
Receive transid: 0
Receive time: -
Snapshot(s):
@snapshots/@-2024-01-01
Deleting Subvolumes:
# Delete a subvolume (must be empty of nested subvolumes)
$ btrfs subvolume delete /mnt/@snapshots/@-old-snapshot
Delete subvolume (no-commit): '/mnt/@snapshots/@-old-snapshot'
# Delete with immediate commit
$ btrfs subvolume delete -c /mnt/@snapshots/@-old-snapshot
# Delete multiple subvolumes
$ btrfs subvolume delete /mnt/@snapshots/@-snap-{1,2,3}
Important: You cannot delete a subvolume that contains other subvolumes. Nested subvolumes must be deleted first (leaf to root order).
Subvolume Properties:
# Make subvolume read-only
$ btrfs property set /mnt/@snapshots/@-2024-01-01 ro true

# Check if read-only
$ btrfs property get /mnt/@snapshots/@-2024-01-01 ro
ro=true

# Make writable again
$ btrfs property set /mnt/@snapshots/@-2024-01-01 ro false

# Read-only subvolumes are common for:
# - Snapshots intended as backups
# - Send/receive operations (source must be read-only)
# - Protection against accidental modification

When you delete a subvolume, its exclusive data is freed immediately. However, data shared with other subvolumes (via snapshots) is only freed when ALL referencing subvolumes are deleted. This is why deleting many snapshots can temporarily increase space usage as back-references are updated.
One of the most powerful aspects of subvolumes is independent mounting. This enables:
- Splitting one file system across several mount points (/, /home, /var/log) without partitioning
- Applying different VFS options (ro, noexec, nodev) to each mount
- Mounting a snapshot anywhere to inspect or restore old data
- Rolling back a system simply by changing which subvolume is mounted as /
Mounting by Subvolume Path:
# Mount specific subvolume by relative path from top-level
$ mount -o subvol=@ /dev/sda1 /
$ mount -o subvol=@home /dev/sda1 /home
# The path is relative to the top-level (ID 5)
$ mount -o subvol=@snapshots/@-2024-01-01 /dev/sda1 /mnt/snapshot
Mounting by Subvolume ID:
# Mount by numeric ID (more robust than path)
$ mount -o subvolid=256 /dev/sda1 /
$ mount -o subvolid=257 /dev/sda1 /home
# Find subvolume ID
$ btrfs subvolume list / | grep @home
ID 257 gen 100 top level 5 path @home
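When scripting fstab generation, the ID can be pulled out of the listing mechanically. A sketch, with subvol_id as a hypothetical helper name, that reads `btrfs subvolume list` output on stdin:

```shell
# Sketch: print the numeric ID of the subvolume whose path is $1,
# reading `btrfs subvolume list` output on stdin. In each line the ID
# is the second field and the path is the last field.
subvol_id() {
    awk -v p="$1" '$1 == "ID" && $NF == p { print $2 }'
}

# Example: btrfs subvolume list /mnt | subvol_id @home
```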
fstab Configuration:
Typical /etc/fstab entries for Btrfs subvolumes:
# Device Mount Point Type Options Dump Pass
/dev/sda1 / btrfs subvol=@,compress=zstd:1,defaults 0 0
/dev/sda1 /home btrfs subvol=@home,compress=zstd:1 0 0
/dev/sda1 /var/log btrfs subvol=@log,compress=zstd:1 0 0
/dev/sda1 /.snapshots btrfs subvol=@snapshots 0 0
Note on Device Repetition:
Notice that the same device (/dev/sda1) is used for multiple mount points. This is correct—each line mounts a different subvolume from the same Btrfs file system.
Mount Options and Subvolumes:
| Option Type | Behavior | Example |
|---|---|---|
| File system level | First mount wins | compress, space_cache |
| Per-file attribute | Set with chattr +C, not at mount time | nodatacow behavior for databases, VM images |
| VFS level | Can vary per mount | ro, noexec, nodev |
| Subvol selection | Per mount | subvol=, subvolid= |
Some Btrfs options (like 'compress' or 'space_cache') apply to the whole file system and are fixed by whichever subvolume is mounted first. Different values passed on later mounts of the same file system are silently ignored. To change them, unmount all subvolumes and remount.
Rollback Strategy:
Subvolume mounting enables powerful rollback capabilities:
# Current state: @ is root, @-backup is yesterday's snapshot
# Boot from live USB, mount top-level
$ mount -o subvolid=5 /dev/sda1 /mnt
# Move current broken root aside
$ mv /mnt/@ /mnt/@-broken
# Create writable snapshot from backup to become new root
$ btrfs subvolume snapshot /mnt/@-backup /mnt/@
# Reboot - system is rolled back
This pattern is used by tools like snapper and timeshift for system rollback.
In Btrfs, a snapshot is simply a subvolume that was created as a copy of another subvolume. There's no fundamental difference between a subvolume and a snapshot—both are FS trees with unique IDs. The term "snapshot" describes how it was created, not what it is.
Creating Snapshots:
# Create a snapshot of a subvolume
$ btrfs subvolume snapshot /mnt/@ /mnt/@snapshots/@-2024-01-15
Create a snapshot of '/mnt/@' in '/mnt/@snapshots/@-2024-01-15'
# Create a read-only snapshot (recommended for backups)
$ btrfs subvolume snapshot -r /mnt/@ /mnt/@snapshots/@-2024-01-15
# The snapshot is instantly usable - no delay regardless of size
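Timestamped names like the ones used on this page can be generated mechanically. A minimal sketch (snap_name is an invented helper, not a btrfs command):

```shell
# Sketch: build a timestamped snapshot name in the @-YYYY-MM-DD style
# used on this page, with a UTC time suffix so several snapshots per
# day don't collide.
snap_name() {
    printf '@-%s' "$(date -u +%Y-%m-%d-%H%M%S)"
}

# Usage (requires root and the top-level mounted at /mnt):
# btrfs subvolume snapshot -r /mnt/@ "/mnt/@snapshots/$(snap_name)"
```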
How Snapshots Work (COW in Action):
When you create a snapshot:
- Btrfs allocates a new subvolume ID and copies only the root node of the source's FS tree
- Reference counts on the metadata nodes and data extents below it are incremented
- No file data is copied, which is why creation is instant regardless of size

After the snapshot:
- Both subvolumes point at the same tree nodes and data extents
- A write to either side is copied-on-write to new extents, so the trees diverge only where modified
- An extent is freed only when no remaining subvolume references it
Before Snapshot:
═══════════════════════════════════════════════════════════════
Subvolume @ (ID 256)
├── Tree Root ──→ [Node A] ──→ [Leaf: file.txt → Extent X]
                               [Leaf: data.db → Extent Y]

Reference counts: Extent X (1), Extent Y (1)

After Snapshot (@ → @snap):
═══════════════════════════════════════════════════════════════
Subvolume @ (ID 256)
├── Tree Root ──┬─→ [Node A] ──→ [Leaf: file.txt → Extent X]
                │                [Leaf: data.db → Extent Y]
                │
Subvolume @snap (ID 260)
├── Tree Root ──┘ (Same node!)

Reference counts: Extent X (2), Extent Y (2)
Space used by snapshot: ~0 bytes (just a root pointer)

After Modifying file.txt in @:
═══════════════════════════════════════════════════════════════
Subvolume @ (ID 256)
├── Tree Root' ──→ [Node A'] ──→ [Leaf': file.txt → Extent X']
                                 [Leaf: data.db → Extent Y] (shared)
Subvolume @snap (ID 260)
├── Tree Root ───→ [Node A] ──→ [Leaf: file.txt → Extent X]
                                [Leaf: data.db → Extent Y] (shared)

Reference counts: Extent X (1), Extent X' (1), Extent Y (2)
Now file.txt differs between @ and @snap, data.db is still shared

Nested Subvolume Behavior:
A crucial detail: Snapshots do not include nested subvolumes. If subvolume @ contains subvolume @/nested, a snapshot of @ will not include @/nested's contents—it will appear as an empty directory.
# Setup: @ contains nested subvol
$ btrfs subvolume create /mnt/@/nested
$ echo "data" > /mnt/@/nested/file.txt
# Snapshot @
$ btrfs subvolume snapshot /mnt/@ /mnt/@-snap
# Check snapshot
$ ls /mnt/@-snap/nested/
# Empty! The nested subvolume was not included
$ btrfs subvolume list /mnt
ID 256 ... path @
ID 257 ... path @/nested # Original nested subvol
ID 258 ... path @-snap # Snapshot of @
# Note: no @-snap/nested subvolume!
This behavior is intentional—it allows snapshotting system partitions without including user data, or vice versa. There is no atomic multi-subvolume snapshot: to capture several subvolumes together, snapshot each one in turn, accepting that the copies are taken moments apart.
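The usual workaround is a loop over the subvolumes you care about. Here is a sketch that only builds the command lines (so they can be reviewed before running); snap_cmds is an invented name and the /mnt paths follow the layout used on this page:

```shell
# Sketch: emit one read-only snapshot command per subvolume instead of
# running them directly. The snapshots are taken sequentially, not
# atomically with respect to each other.
snap_cmds() {
    stamp="$1"; shift
    for sv in "$@"; do
        printf 'btrfs subvolume snapshot -r /mnt/%s /mnt/@snapshots/%s-%s\n' \
            "$sv" "$sv" "$stamp"
    done
}

# snap_cmds "$(date +%F)" @ @home   # review the output, then pipe to sh
```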
To avoid nested subvolume complications, many administrators use a flat layout: all subvolumes are direct children of the top-level (ID 5). This means @, @home, @var, @log are all siblings at the same level, making snapshots more predictable.
Btrfs's extent sharing makes traditional quotas meaningless—if a file is shared by 10 snapshots, who "owns" the space? Quota groups (qgroups) solve this by tracking both exclusive and shared space for each subvolume.
Enabling Quotas:
# Enable quota tracking on a file system
$ btrfs quota enable /mnt
# Check quota status
$ btrfs qgroup show /mnt
qgroupid rfer excl
-------- ---- ----
0/256 10.00GiB 5.00GiB
0/257 8.00GiB 8.00GiB
0/258 5.00GiB 0B
Understanding qgroup Output:
In the example above:
- rfer (referenced) is all space a subvolume can reach; excl (exclusive) is space referenced by that subvolume alone
- 0/256 references 10.00GiB, of which 5.00GiB is exclusive; the other 5.00GiB is shared with its snapshot
- 0/257 has rfer equal to excl: none of its 8.00GiB is shared
- 0/258 references 5.00GiB but has 0B exclusive, so deleting it alone would free essentially nothing
Setting Limits:
# Limit a subvolume's referenced space to 50 GiB (the default limit type)
$ btrfs qgroup limit 50G /mnt/@home
# Limit exclusive space instead (less common)
$ btrfs qgroup limit -e 100G /mnt/@home
# Remove limit
$ btrfs qgroup limit none /mnt/@home
# Check current limits
$ btrfs qgroup show -re /mnt
qgroupid rfer excl max_rfer max_excl
-------- ---- ---- -------- --------
0/256 10.00GiB 5.00GiB none none
0/257 8.00GiB 8.00GiB 50.00GiB none
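Output like the above is easy to post-process. A sketch (fully_exclusive is an invented name) that flags the qgroups whose space is entirely exclusive, meaning deleting that subvolume would free everything it references:

```shell
# Sketch: read `btrfs qgroup show` output on stdin and print qgroups
# whose referenced and exclusive sizes are equal, i.e. whose data is
# not shared with any snapshot. NR > 2 skips the two header lines.
fully_exclusive() {
    awk 'NR > 2 && $2 == $3 { print $1 }'
}

# Example: btrfs qgroup show /mnt | fully_exclusive
```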
Hierarchical Quota Groups:
Qgroups support hierarchy for complex quota scenarios:
# Level 0: Automatic qgroups for each subvolume (0/256, 0/257, ...)
# Level 1+: User-defined parent groups
# Create a parent qgroup at level 1
$ btrfs qgroup create 1/100 /mnt
# Assign subvolumes to the parent group
$ btrfs qgroup assign 0/256 1/100 /mnt # @ under group 1/100
$ btrfs qgroup assign 0/257 1/100 /mnt # @home under group 1/100
# Set limit on parent group (affects combined usage)
$ btrfs qgroup limit 100G 1/100 /mnt
# Now @ and @home combined cannot exceed 100G of referenced space
| Level | Format | Purpose |
|---|---|---|
| Level 0 | 0/subvolid | Automatic per-subvolume tracking |
| Level 1 | 1/N | User-defined groups (e.g., per-user) |
| Level 2+ | N/M | Higher aggregation levels |
Enabling quotas adds overhead to every write (tracking reference counts). It also requires scanning during enable/rescan. On very large file systems with many snapshots, quota operations can be slow. Consider whether you need quotas before enabling them.
Real-world Btrfs deployments follow established layout patterns. Here are the most common approaches:
Distribution Layouts:
openSUSE/Ubuntu Layout (Flat @):
═══════════════════════════════════════════════════════════════
Top-Level (ID 5)
├── @ ................ / (root)
├── @home ............ /home
├── @opt ............. /opt (optional)
├── @var ............. /var (optional, or use subvols under it)
└── @snapshots ....... /.snapshots (for snapper)

Mounts:
  /           subvol=@
  /home       subvol=@home
  /.snapshots subvol=@snapshots

Fedora Layout (Flat with named subvols):
═══════════════════════════════════════════════════════════════
Top-Level (ID 5)
├── root ............. / (root)
└── home ............. /home

Mounts:
  /     subvol=root
  /home subvol=home

Arch Linux Typical Layout:
═══════════════════════════════════════════════════════════════
Top-Level (ID 5)
├── @ ................ / (root)
├── @home ............ /home
├── @pkg ............. /var/cache/pacman/pkg (exclude from snapshots)
├── @log ............. /var/log
└── @.snapshots ...... /.snapshots

Rationale: Separate pkg cache (large, reproducible) and logs
from system snapshots to reduce snapshot size.

Server/NAS Layout:
Top-Level (ID 5)
├── @system .............. /
├── @docker-data ......... /var/lib/docker (nodatacow)
├── @databases ........... /var/lib/databases (nodatacow)
├── @share-media ......... /srv/media
├── @share-documents ..... /srv/documents
├── @backups
│ ├── hourly-001 ....... Snapshot of @share-documents
│ ├── hourly-002
│ └── daily-001
└── @tmp ................. /tmp (might use tmpfs instead)
Design Principles:
- Keep data you don't want captured in system snapshots (package caches, logs, container and VM images) in their own subvolumes
- Prefer a flat layout so a snapshot never silently excludes a nested subvolume
- Mark in-place-rewriting workloads (databases, VM images) nodatacow via chattr +C
- Give snapshots a dedicated subvolume (e.g. @snapshots) so they stay out of the mounted system tree
If your distribution didn't set up the layout you want, you can restructure by booting from a live USB, mounting the top-level (subvolid=5), and reorganizing subvolumes. Just update /etc/fstab before rebooting.
Subvolumes are one of Btrfs's most distinctive features, enabling flexible storage organization that traditional file systems cannot match. Let's consolidate the essential concepts:
- A subvolume is a separately mountable POSIX file tree with its own inode namespace, sharing one storage pool with every other subvolume
- The top-level subvolume (ID 5) is the root of the hierarchy, and the default subvolume determines what a plain mount shows
- A snapshot is simply a subvolume created as a COW copy of another: instant, initially near-zero cost, and excluding nested subvolumes
- Qgroups track referenced vs. exclusive space per subvolume and can enforce limits, at the cost of extra bookkeeping on writes
What's Next:
With subvolume fundamentals mastered, we'll dive deep into Snapshots—exploring snapshot strategies, btrfs send/receive for replication, automated snapshot tools like snapper, and disaster recovery techniques.
You now understand Btrfs subvolumes comprehensively: their architecture, creation and management, mounting strategies, relationship to snapshots, quota groups, and practical layout patterns. This knowledge enables you to design and maintain sophisticated Btrfs storage configurations.