Imagine being able to save the complete state of your file system—every file, every directory, every byte—in less than a second, regardless of whether you have 100 GB or 100 TB of data. No copying, no compression, no waiting. This is the reality of Btrfs snapshots.
Powered by the Copy-on-Write architecture we explored earlier, Btrfs snapshots provide instant point-in-time copies that initially consume zero additional space. As the original and snapshot diverge through modifications, only the changed data requires new storage. This enables backup strategies, system rollback, development workflows, and disaster recovery patterns that would be impractical or impossible with traditional methods.
By the end of this page, you will master Btrfs snapshots: the technical mechanics enabling instant creation, read-only vs writable snapshots, snapshot management strategies, btrfs send/receive for replication and backup, automated snapshot tools like snapper and btrbk, and real-world disaster recovery workflows.
We introduced snapshots in the subvolumes section, but let's deepen our understanding of exactly how they work at the implementation level.
The Instant Creation Process:
When you execute btrfs subvolume snapshot source dest, Btrfs:
1. Creates a new root item in the root tree, pointing at the source subvolume's current tree root
2. Increments the reference counts on the now-shared tree blocks (tracked through back-references)
3. Commits the transaction
No file data is read or copied at any point.
Total I/O involved: A few metadata block writes—typically under 64 KB, regardless of source size.
Why It's So Fast:
- No data is read or copied; only a handful of new metadata blocks are written
- Reference counting defers the real work, which is paid for later by ordinary COW writes as the copies diverge
- The cost is constant regardless of how much data the source holds
Post-Snapshot Divergence:
After a snapshot is created, the source and snapshot share everything. As either is modified:
T=0: Create snapshot
═══════════════════════════════════════════════════════════════
Source @                         Snapshot @snap
  │                                   │
  └──→ [Root] ←───────────────────────┘
          ↓
     [Tree nodes...] → [Data extents A, B, C, D, E]

RefCounts: A=2, B=2, C=2, D=2, E=2

T=1: Modify file in @ (touches extent B)
═══════════════════════════════════════════════════════════════
Source @                         Snapshot @snap
  │                                   │
  └──→ [Root'] (COW'd)                └──→ [Root] (original)
          ↓                                   ↓
  [Modified tree] ──→ [B']         [Original tree] ──→ [B]
          ↓                                   ↓
  Shared: [A, C, D, E]             Shared: [A, C, D, E]

RefCounts: A=2, B=1, B'=1, C=2, D=2, E=2

T=2: Delete file in @snap (extent D)
═══════════════════════════════════════════════════════════════
After deletion:
- @snap no longer references D
- But @ still references D
- RefCount of D: 2 → 1
- D is NOT freed yet

If we delete @snap entirely:
- All @snap-exclusive refs removed
- Shared data (A, C, E) now RefCount=1 (owned by @)
- D stays at RefCount=1
- Only @snap-exclusive data is freed

While snapshots protect against accidental changes, they're NOT a substitute for backups. Snapshots exist on the same storage—a disk failure loses both source and snapshots. Use 'btrfs send' to replicate snapshots to separate storage for true backup.
Btrfs snapshots can be created in two modes: writable (default) or read-only (-r flag). The choice significantly impacts usage patterns.
Creating Snapshots:
# Writable snapshot (default)
$ btrfs subvolume snapshot /mnt/@ /mnt/@-work-copy
Create a snapshot of '/mnt/@' in '/mnt/@-work-copy'
# Read-only snapshot
$ btrfs subvolume snapshot -r /mnt/@ /mnt/@-backup-20240115
Create a readonly snapshot of '/mnt/@' in '/mnt/@-backup-20240115'
# Check if read-only
$ btrfs property get /mnt/@-backup-20240115 ro
ro=true
| Aspect | Read-Only (-r) | Writable (default) |
|---|---|---|
| Creation flag | btrfs subvolume snapshot -r | btrfs subvolume snapshot |
| Modification | ❌ Cannot modify files | ✅ Full read/write access |
| Send/Receive | ✅ Required for send | ❌ Cannot send |
| Accidental changes | ✅ Protected | ⚠️ Can be modified |
| Archive/backup use | ✅ Ideal | ⚠️ May diverge |
| Rollback source | ✅ Clean reference point | ⚠️ May have changes |
| Convert to writable | ✅ btrfs property set ro false | N/A |
| Convert to read-only | N/A | ✅ btrfs property set ro true |
When to Use Each:
Read-Only Snapshots:
- Backup and archive points that must not drift from the captured state
- Sources for btrfs send (read-only is mandatory)
- Clean reference points for rollback

Writable Snapshots:
- Experimenting with changes you may discard (development branches)
- Cloning environments for testing, containers, or VMs
- Restoring from a backup and then continuing to work in the copy
Converting Between Modes:
# Convert read-only to writable
$ btrfs property set /mnt/@-backup-20240115 ro false
# Now you can modify it (useful for restoring, then modifying)
# Convert writable to read-only
$ btrfs property set /mnt/@-work-copy ro true
# Now protected from changes (useful before send)
Always use read-only snapshots for backup purposes. Even if you need a writable copy, create the read-only snapshot first, then make a writable snapshot of the read-only one. This preserves the exact backup state.
Without a strategy, snapshots can proliferate uncontrollably, consuming space and slowing operations. Effective snapshot management requires planning.
Retention Policies:
Most snapshot strategies use a tiered retention approach:
Hourly snapshots: Keep last 24 hours
Daily snapshots: Keep last 7 days
Weekly snapshots: Keep last 4 weeks
Monthly snapshots: Keep last 12 months
This provides fine-grained recovery for recent issues while maintaining historical records with decreasing granularity.
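The pruning logic behind such a tiered policy reduces to "keep the newest N of each tier". A minimal sketch, assuming snapshot names contain sortable timestamps (the function name and example names are illustrative):

```shell
#!/bin/bash
# Sketch: given newline-separated snapshot names with sortable timestamps,
# print the ones that exceed the retention count (deletion candidates).
prune_candidates() {
    local keep="$1"
    sort | head -n -"$keep"    # emit everything except the newest $keep entries
}

# Example: nine daily snapshots, keep the last seven
printf 'daily-2024-01-0%s\n' 1 2 3 4 5 6 7 8 9 | prune_candidates 7
# prints daily-2024-01-01 and daily-2024-01-02
```

Because the names sort chronologically, no date parsing is needed; the same function works for hourly, daily, or weekly tiers with a different count.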
Naming Conventions:
Consistent naming makes snapshots manageable:
# Timestamp-based (sortable)
@snapshots/@-2024-01-15T10:30:00
@snapshots/@home-2024-01-15T10:30:00
# Type-prefixed
@snapshots/hourly-2024-01-15T10:00
@snapshots/daily-2024-01-15
@snapshots/weekly-2024-W03
# Purpose-based
@snapshots/pre-upgrade-2024-01-15
@snapshots/pre-kernel-6.7.0
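Generating such sortable names in a script is a one-liner with date. A sketch (SOURCE and SNAP_DIR are illustrative paths; the actual snapshot command is left commented out so the naming logic can be tested anywhere):

```shell
#!/bin/bash
# Sketch: build a sortable, timestamped snapshot name.
SOURCE="/mnt/@"                            # illustrative source subvolume
SNAP_DIR="/mnt/btrfs-top/@snapshots"       # illustrative snapshot directory
name="@-$(date +%Y-%m-%dT%H:%M:%S)"
echo "$SNAP_DIR/$name"
# btrfs subvolume snapshot -r "$SOURCE" "$SNAP_DIR/$name"
```

ISO-8601-style timestamps mean plain lexical sorting (ls, sort) orders snapshots chronologically, which the cleanup script below relies on.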
Automated Cleanup Script:
#!/bin/bash
# Keep last 24 hourly, 7 daily, 4 weekly snapshots
SNAP_DIR="/mnt/btrfs-top/@snapshots"
# Remove old hourly snapshots (timestamped names sort chronologically)
ls -1 "$SNAP_DIR" | grep '^hourly-' | head -n -24 | \
    xargs -r -I {} btrfs subvolume delete "$SNAP_DIR/{}"
# Remove old daily snapshots
ls -1 "$SNAP_DIR" | grep '^daily-' | head -n -7 | \
    xargs -r -I {} btrfs subvolume delete "$SNAP_DIR/{}"
# Remove old weekly snapshots
ls -1 "$SNAP_DIR" | grep '^weekly-' | head -n -4 | \
    xargs -r -I {} btrfs subvolume delete "$SNAP_DIR/{}"
Space Monitoring:
Snapshots don't show accurate size with du because of sharing. Use Btrfs commands:
# Show shared vs exclusive space per subvolume
# (requires quotas: btrfs quota enable /mnt)
$ btrfs qgroup show -reF /mnt
qgroupid rfer excl max_rfer max_excl
-------- ---- ---- -------- --------
0/256 50.0GiB 25.0GiB none none # @
0/257 50.0GiB 1.0GiB none none # @-snap-1 (shares 49G)
0/258 50.0GiB 0.5GiB none none # @-snap-2 (shares 49.5G)
# File system usage overview
$ btrfs filesystem usage /mnt
Overall:
Device size: 200.00GiB
Device allocated: 100.00GiB
Device unallocated: 100.00GiB
Used: 75.00GiB
Free (estimated): 125.00GiB
Each snapshot increases metadata overhead. With many snapshots, delete operations slow down (updating back-references), and quota calculations become expensive. For most systems, 50-100 snapshots is reasonable; thousands may cause issues.
Btrfs send/receive is the mechanism for replicating snapshots to another location—another disk, another machine, or compressed archive. It's the foundation for proper Btrfs backup strategies.
The Send Stream:
btrfs send generates a stream of commands that, when played back with btrfs receive, recreates the snapshot:
# Send a snapshot to another Btrfs filesystem
$ btrfs send /mnt/source/@-snap-readonly | btrfs receive /mnt/backup/
# Send to a file (for later receive or transfer)
$ btrfs send /mnt/source/@-snap-readonly > /backup/snapshot.btrfs
$ btrfs receive /mnt/backup/ < /backup/snapshot.btrfs
# Compress during transfer
$ btrfs send /mnt/source/@-snap | zstd -c > snapshot.btrfs.zst
$ zstd -d -c snapshot.btrfs.zst | btrfs receive /mnt/backup/
Requirement: Read-Only Snapshots
Only read-only snapshots can be sent:
# Won't work:
$ btrfs send /mnt/@-writable
ERROR: subvolume /mnt/@-writable is not read-only
# Solution: make read-only first
$ btrfs property set /mnt/@-writable ro true
$ btrfs send /mnt/@-writable | btrfs receive /mnt/backup/
Incremental Send (The Killer Feature):
Full sends transfer ALL data. For 500 GB snapshots, that's 500 GB transferred every time. Incremental sends only transfer the differences:
# First send: full transfer
$ btrfs send /mnt/@-snap-day1 | btrfs receive /backup/
# Transfers ~500 GB
# Second send: incremental (only differences)
$ btrfs send -p /mnt/@-snap-day1 /mnt/@-snap-day2 | btrfs receive /backup/
# Transfers only ~5 GB of changes!
# The -p (parent) flag specifies the reference snapshot
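The bookkeeping for a recurring incremental backup amounts to remembering the last snapshot sent so it can be passed as -p next time. A sketch of such a wrapper, as a dry run that echoes the commands it would execute (SNAP_DIR, DEST, and the STATE file are hypothetical):

```shell
#!/bin/bash
# Sketch: incremental-send wrapper. Remembers the last snapshot sent so the
# next run can pass it as the parent (-p). Echoes commands instead of
# running them, so it can be tried without a Btrfs filesystem.
SNAP_DIR="/mnt/@snapshots"
DEST="/backup"
STATE="/tmp/last-sent"    # hypothetical state file tracking the last sent snapshot

send_snapshot() {
    local new="$1"
    if [ -s "$STATE" ]; then
        local prev; prev=$(cat "$STATE")
        echo "btrfs send -p $SNAP_DIR/$prev $SNAP_DIR/$new | btrfs receive $DEST"
    else
        echo "btrfs send $SNAP_DIR/$new | btrfs receive $DEST"  # first run: full send
    fi
    printf '%s\n' "$new" > "$STATE"   # record parent for the next run
}

rm -f "$STATE"
send_snapshot "@-day1"   # full send
send_snapshot "@-day2"   # incremental, parent @-day1
```

A real version would run the commands instead of echoing them and verify receive succeeded before updating the state file, since losing track of the parent forces a full resend.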
How Incremental Send Works:
Source (for incremental send from @-day1 to @-day2):
═══════════════════════════════════════════════════════════════
@-day1: [A]  [B]   [C]  [D]  [E]    (parent - already at destination)
         │    │     │    │    │
@-day2: [A]  [B']  [C]  [D]  [E']   (child - what we want to send)
         ↑    ↑     ↑    ↑    ↑
        same modified same same modified

Btrfs send compares trees:
1. Walks @-day2's tree
2. For each extent, checks if @-day1 had the same extent
3. If same: send "clone" command (reference existing data)
4. If different: send actual data

Send Stream Content:
───────────────────────────────────────────────────────────────
snapshot @-day2
clone file.txt from @-day1
write modified.txt offset=0 data=<new content of B'>
clone unchanged.txt from @-day1
...
write other_modified.txt offset=4096 data=<new content of E'>
end

Multi-Parent Incremental:
For best efficiency, you can supply additional clone sources alongside the parent:
$ btrfs send -p /mnt/@-day1 -c /mnt/@-day0 /mnt/@-day2 | btrfs receive /backup/
# -p: primary parent (where receive will look for clone sources)
# -c: additional clone sources (for finding matching extents)
Remote Replication:
# Send to remote machine over SSH
$ btrfs send /mnt/@-snapshot | ssh backup-server btrfs receive /backup/
# With compression and progress
$ btrfs send /mnt/@-snapshot | pv | zstd -c | ssh backup-server 'zstd -d | btrfs receive /backup/'
Never delete parent snapshots you might need for incremental sends! If you delete @-day1 but still need to send @-day3 incrementally, you'll need to do a full send or find another common ancestor.
Managing snapshots manually becomes tedious. Several tools automate creation, retention, and replication.
Snapper (SUSE/openSUSE):
Snapper is the default snapshot manager for openSUSE and integrates with YaST and zypper:
# Install snapper
$ sudo zypper install snapper # openSUSE
$ sudo apt install snapper # Debian/Ubuntu
# Create a configuration for a subvolume
$ sudo snapper -c home create-config /home
# List configurations
$ sudo snapper list-configs
Config | Subvolume
--------+-----------
root | /
home | /home
# Create manual snapshot
$ sudo snapper -c root create --description "Before upgrade"
# List snapshots
$ sudo snapper -c root list
# | Type | Pre # | Date | User | Description
--+--------+-------+-------------------------+------+-------------
0 | single | | | root | current
1 | single | | 2024-01-15 10:00:00 | root | Before upgrade
2 | pre | | 2024-01-15 11:00:00 | root | zypper install
3 | post | 2 | 2024-01-15 11:00:05 | root | zypper install
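Retention for snapper's timeline snapshots lives in the per-config file. A sketch of the relevant keys (the limit values here are illustrative, not defaults):

```shell
# Excerpt from /etc/snapper/configs/root (illustrative values)
TIMELINE_CREATE="yes"         # enable periodic timeline snapshots
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_MONTHLY="12"
NUMBER_CLEANUP="yes"          # prune numbered (pre/post) snapshots
```

Snapper's cleanup timer reads these limits and deletes timeline snapshots beyond each tier, implementing the tiered retention discussed earlier without custom scripts.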
Snapper capabilities:
- Timeline snapshots (hourly/daily/monthly) with configurable retention
- Automatic pre/post snapshot pairs around package operations (zypper; apt via plugins)
- File-level comparison and restore with diff, status, and undochange
- Automatic cleanup algorithms (number, timeline, empty-pre-post)
btrbk (Backup-focused):
btrbk is designed for backup scenarios with send/receive:
# /etc/btrbk/btrbk.conf
# Global settings
snapshot_preserve_min 2d
snapshot_preserve 14d
target_preserve_min no
target_preserve 20d 10w *m
# Define volumes
volume /mnt/btrfs
snapshot_dir @snapshots
subvolume @
target send-receive ssh://backup-server/mnt/backup
subvolume @home
target send-receive ssh://backup-server/mnt/backup
# Run btrbk
$ sudo btrbk run
# Show status
$ sudo btrbk list
# Dry run
$ sudo btrbk -n -v run
btrbk advantages:
- One configuration file covers snapshot creation, retention, and send/receive replication
- Incremental transfers over SSH built in
- Separate preserve policies for local snapshots and remote targets
- Dry-run mode for verifying behavior before committing
Timeshift (Desktop-focused):
Timeshift provides a GUI-friendly snapshot experience for desktops:
# Install
$ sudo apt install timeshift # Debian/Ubuntu
$ sudo pacman -S timeshift # Arch
# Launch GUI
$ timeshift-gtk
# Or CLI
$ sudo timeshift --create --comments "Before major update"
$ sudo timeshift --list
$ sudo timeshift --restore --snapshot '2024-01-15_10-00-00'
Timeshift features:
- Scheduled snapshots (boot/hourly/daily/weekly/monthly)
- GUI browsing and one-click restore
- Btrfs snapshot mode, or rsync mode for non-Btrfs filesystems
- Focuses on system files, excluding user data by default
| Feature | Snapper | btrbk | Timeshift |
|---|---|---|---|
| Primary use | System snapshots | Backup/replication | Desktop restore |
| Pre/Post snapshots | ✅ Excellent | ❌ No | ❌ No |
| Remote replication | ⚠️ Needs scripts | ✅ Built-in | ❌ No |
| GUI | YaST module | ❌ CLI only | ✅ GTK app |
| Distro integration | openSUSE, SUSE | Any | Ubuntu, Debian |
| File-level restore | ✅ Yes | ❌ No | ✅ Yes |
Many setups combine tools: Snapper for local hourly snapshots with pre/post system changes, btrbk for nightly replication to backup storage. The tools don't conflict as long as snapshot names/locations don't overlap.
Snapshots provide multiple recovery options, from individual file restores to complete system rollbacks. Let's explore practical workflows.
Scenario 1: Recover a Deleted/Modified File
# Find the file in a snapshot
$ ls /mnt/@snapshots/@-2024-01-14/home/user/important-file.txt
# Method 1: Copy from snapshot
$ cp /mnt/@snapshots/@-2024-01-14/home/user/important-file.txt ~/recovered/
# Method 2: Use snapper (if available); it takes snapshot ranges, with 0
# meaning the current state
$ snapper -c home diff 45..46   # Show what changed between snapshots 45 and 46
$ snapper -c home undochange 45..0 /home/user/important-file.txt
# Reverts the file to its state in snapshot 45
Scenario 2: System Rollback (Broken Update)
# Scenario: System update broke your install, can't boot
# You have a snapshot from before the update

# Step 1: Boot from live USB or recovery environment

# Step 2: Mount top-level Btrfs
$ mount -o subvolid=5 /dev/sda2 /mnt

# Step 3: Identify current and target snapshots
$ ls /mnt
@ @home @snapshots
$ ls /mnt/@snapshots
@-2024-01-14 @-2024-01-15 @-pre-update

# Step 4: Move broken root aside
$ mv /mnt/@ /mnt/@-broken

# Step 5: Create writable snapshot from backup
$ btrfs subvolume snapshot /mnt/@snapshots/@-pre-update /mnt/@

# Step 6: If needed, update fstab UUIDs (usually not required with subvol mount)

# Step 7: Unmount and reboot
$ umount /mnt
$ reboot

# System should now boot into pre-update state!

# Step 8: After verifying, re-mount the top level and optionally clean up
$ sudo btrfs subvolume delete /mnt/btrfs-top/@-broken

Scenario 3: Full System Restore from Remote Backup
# On recovery environment with backup disk connected
# Step 1: Format new disk with Btrfs
$ mkfs.btrfs -L root /dev/nvme0n1p2
$ mount /dev/nvme0n1p2 /mnt
# Step 2: Receive backed-up snapshots from backup disk
$ mount /dev/sdb1 /backup # Backup disk
# Receive base snapshot
$ btrfs send /backup/@-2024-01-01 | btrfs receive /mnt/
# Receive latest incrementally
$ btrfs send -p /backup/@-2024-01-01 /backup/@-2024-01-15 | btrfs receive /mnt/
# Step 3: Create writable root from latest snapshot
$ btrfs subvolume snapshot /mnt/@-2024-01-15 /mnt/@
# Step 4: Set up fstab, bootloader, etc.
$ arch-chroot /mnt # or equivalent
$ vim /etc/fstab
$ grub-install /dev/nvme0n1
$ grub-mkconfig -o /boot/grub/grub.cfg
$ exit
$ reboot
Scenario 4: Rollback with Boot Menu (openSUSE)
openSUSE/SUSE can integrate snapshots into the GRUB boot menu:
GRUB Menu:
- openSUSE Tumbleweed
- openSUSE Tumbleweed (Snapshot 47: Before upgrade) ← Boot into past state
- openSUSE Tumbleweed (Snapshot 45)
- Advanced options...
If you boot from a snapshot and want to make it permanent:
$ snapper rollback 47 # Make snapshot 47 the new default
A backup you've never tested is not a backup. Periodically test your recovery procedures in a VM or spare system. Verify that btrfs receive works and the restored system boots successfully.
Beyond basic backup and recovery, snapshots enable sophisticated workflows.
Pattern 1: Development Branches with Snapshots
# Start project
$ btrfs subvolume create /mnt/project
# ... develop main version ...
# Create "branch" for experimental feature
$ btrfs subvolume snapshot /mnt/project /mnt/project-experiment
# Work on experiment independently
$ cd /mnt/project-experiment
# ... make experimental changes ...
# If experiment succeeds, swap:
$ mv /mnt/project /mnt/project-old
$ mv /mnt/project-experiment /mnt/project
$ btrfs subvolume delete /mnt/project-old
# If experiment fails, just delete:
$ btrfs subvolume delete /mnt/project-experiment
Pattern 2: Safe Package Testing
# Before installing untrusted software
# (assumes @snapshots is a separate top-level subvolume mounted at /.snapshots,
#  so it survives deletion of @)
$ sudo btrfs subvolume snapshot -r / /.snapshots/@-pre-install
# Install and test
$ sudo apt install sketchy-software
# ... turns out it's malware ...
# Rollback (from live USB)
$ mount -o subvolid=5 /dev/sda1 /mnt
$ btrfs subvolume delete /mnt/@
$ btrfs subvolume snapshot /mnt/@snapshots/@-pre-install /mnt/@
$ reboot
Pattern 3: Database Point-in-Time Recovery
# For databases on Btrfs (ensure WAL is fsync'd)
# Before backup: enter backup mode, snapshot, exit backup mode
# (snapshot destination must be on the same Btrfs filesystem as the source)
$ psql -c "SELECT pg_backup_start('snapshot');"   # pg_start_backup() before PostgreSQL 15
$ btrfs subvolume snapshot -r /var/lib/postgresql /backup/pg-snapshot-$(date +%Y%m%d)
$ psql -c "SELECT pg_backup_stop();"              # pg_stop_backup() before PostgreSQL 15
# Postgres-specific, but the concept applies to other databases
# Alternatively, use NOCOW data files plus a traditional pg_dump
Pattern 4: Container Layer Storage
Docker and LXC can use Btrfs snapshots for their layer storage:
# Docker with btrfs storage driver
# /etc/docker/daemon.json
{
"storage-driver": "btrfs"
}
# Each container layer becomes a Btrfs snapshot
# Container creation is instant (snapshot of image layer)
# Container deletion reclaims only exclusive data
Pattern 5: Build/Test Isolation
# Create base development environment
$ btrfs subvolume create /mnt/dev-base
# ... install toolchains, dependencies ...
# For each build, snapshot the base
# (destination must be on the same Btrfs filesystem as dev-base)
$ btrfs subvolume snapshot /mnt/dev-base /mnt/build-$$
$ cd /mnt/build-$$
$ make && make test
$ RESULT=$?
$ cd /
$ btrfs subvolume delete /mnt/build-$$
$ exit $RESULT
# Each build gets a pristine environment
# All build artifacts are automatically cleaned up
For file-level (not subvolume-level) COW copies, use reflinks: cp --reflink=always source dest. This creates an instant copy that shares extents with the original, diverging on modification. Useful for duplicating large files without snapshot overhead.
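A quick demonstration of reflink copying, using --reflink=auto so the copy degrades gracefully to a regular copy on filesystems without reflink support (--reflink=always would fail there instead):

```shell
#!/bin/bash
# Create a file and make a COW copy of it. On Btrfs/XFS the clone shares
# extents with the original; elsewhere --reflink=auto falls back to a
# plain copy.
echo "large file contents" > /tmp/original.dat
cp --reflink=auto /tmp/original.dat /tmp/clone.dat

# The copies are independent: modifying the clone leaves the original intact
echo "modified" >> /tmp/clone.dat
cmp -s /tmp/original.dat /tmp/clone.dat || echo "diverged"
```

In scripts, --reflink=auto is usually the safer choice; use --reflink=always only when you specifically need to guarantee no full copy is made.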
Btrfs snapshots transform how we think about data versioning and backup. Let's consolidate the essential knowledge:
Use the -r flag when creating snapshots to prevent accidental modification and to enable send/receive.
What's Next:
With snapshots mastered, we'll explore one of Btrfs's most critical features for data integrity: Data Scrubbing. Scrubbing actively reads and verifies all data against stored checksums, detecting and (with redundancy) correcting silent corruption before it causes data loss.
You now have comprehensive knowledge of Btrfs snapshots: their mechanics, management strategies, send/receive for replication, automation tools, and disaster recovery workflows. This knowledge enables you to implement robust data protection and versioning strategies.