Imagine you need to back up a 500GB database that's actively processing transactions. Stopping the database for hours while you copy the data is unacceptable—every minute of downtime costs money. Traditional backup methods face an impossible choice: either accept inconsistent backups (data changing during the copy) or accept significant downtime.
LVM Snapshots solve this dilemma elegantly. A snapshot creates an instant, consistent point-in-time copy of a logical volume. The snapshot is created in milliseconds, regardless of volume size—500GB or 5TB, it doesn't matter. The snapshot initially consumes almost no space, growing only as the original volume changes. Meanwhile, both the original and the snapshot remain fully accessible.
This capability transforms backup procedures, testing workflows, and system administration. You can create a snapshot, perform a backup from the frozen snapshot while the live volume continues operating, then discard the snapshot when done. Or create a snapshot before a risky upgrade, test the upgrade, and instantly rollback if something goes wrong.
By the end of this page, you will understand copy-on-write snapshot mechanics, snapshot sizing strategies, snapshot creation and management, snapshot performance implications, backup workflows using snapshots, snapshot limitations and failure modes, and the differences between classic and thin snapshots.
LVM snapshots use a Copy-on-Write (CoW) mechanism to efficiently preserve original data without duplicating the entire volume. Understanding CoW is essential for proper snapshot sizing and performance tuning.
How Copy-on-Write Works:
When you create a snapshot of a logical volume (the 'origin'):
Initial state: The snapshot is essentially a pointer to the origin's data. No actual data is copied. Both origin and snapshot share the same physical extents.
First write to origin: Before the original data is overwritten on the origin, LVM copies the original block to the snapshot area. Only then does the new data get written to the origin.
Snapshot access: When reading the snapshot, LVM checks whether each block has been modified on the origin. If yes, it reads from the snapshot's CoW area. If no, it reads directly from the origin (since that data hasn't changed).
Subsequent writes: Once a block is copied to the snapshot, future writes to that same block don't need additional copies—the pre-change data is already preserved.
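These mechanics are easy to observe on a test system. The following is a minimal sketch, assuming a volume group vg_storage with an origin volume lv_data mounted at /mnt/data (the names used throughout this page) and that it is safe to write a small test file there:

```bash
#!/bin/bash
# Minimal copy-on-write demonstration (assumes vg_storage/lv_data exists,
# is mounted at /mnt/data, and can hold a throwaway test file)

# Create a small snapshot of the origin
lvcreate -s -L 1G -n cow_demo /dev/vg_storage/lv_data

# The snapshot starts nearly empty
lvs -o lv_name,origin,snap_percent vg_storage

# Overwrite ~100MB of blocks on the origin
dd if=/dev/urandom of=/mnt/data/cow_test.bin bs=1M count=100 conv=fsync

# The first write to each chunk forced a copy into the CoW area,
# so snap_percent has grown by roughly the amount written
lvs -o lv_name,origin,snap_percent vg_storage

# Rewriting the SAME file typically does not grow the snapshot much further:
# those chunks were already copied the first time
dd if=/dev/urandom of=/mnt/data/cow_test.bin bs=1M count=100 conv=fsync
lvs -o lv_name,origin,snap_percent vg_storage

# Clean up
rm -f /mnt/data/cow_test.bin
lvremove -f /dev/vg_storage/cow_demo
```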
Key Implications:
| Aspect | Implication |
|---|---|
| Initial Size | Snapshot needs only metadata space at creation (~16-32KB) |
| Growth Pattern | Snapshot grows only as origin blocks change for the first time |
| Maximum Size | Never needs to exceed the origin's size; at that point every chunk has been copied |
| Read Performance | Snapshot reads are fast (mostly from origin or sequential CoW) |
| Origin Write Performance | Slower—each first-write to a block requires a copy |
| Writability | Classic LVM2 snapshots are writable by default, though they are usually mounted read-only for backups; thin snapshots are writable as well |
Copy-on-Write adds overhead to origin writes: each block modified for the first time requires reading the old data, writing it to the snapshot area, then writing the new data to the origin. This effectively triples I/O for first-time writes. For write-heavy workloads, this can reduce performance by 30-50%. Plan snapshot lifecycles accordingly—create, use for backup, delete promptly.
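To put a number on this overhead for your own hardware, a quick before-and-after comparison is enough. This is an illustrative sketch rather than a rigorous benchmark; it assumes vg_storage/lv_data is mounted at /mnt/data and that a temporary 256MB test file is acceptable there:

```bash
#!/bin/bash
# Rough comparison of origin write performance with and without a snapshot.
# Assumptions (not from the page): vg_storage/lv_data is mounted at /mnt/data
# and a temporary test file may be created there.

run_write_test() {
    # Direct writes so the CoW penalty is visible; dd prints throughput on stderr
    dd if=/dev/zero of=/mnt/data/perf_test.bin bs=4k count=65536 oflag=direct 2>&1 | tail -1
    rm -f /mnt/data/perf_test.bin
}

echo "Baseline (no snapshot):"
run_write_test

# With a snapshot active, the first write to each chunk now costs
# read-old + write-to-CoW + write-new
lvcreate -s -L 2G -n perf_demo /dev/vg_storage/lv_data

echo "With snapshot active:"
run_write_test

lvremove -f /dev/vg_storage/perf_demo
```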
Proper snapshot sizing is critical. If a snapshot runs out of space, it becomes invalid and must be removed—you cannot resize a full classic snapshot. Understanding your workload's change rate is essential.
Sizing Strategies:
Calculating Expected Snapshot Growth:
To estimate snapshot size requirements, you need to understand the origin volume's write pattern:
Snapshot Size ≈ (Unique Chunks Changed During the Snapshot's Lifetime) × Chunk Size
If your origin is 100GB and experiences 10% unique random writes per hour, the CoW area grows by roughly 10GB for each hour the snapshot exists. A snapshot held open for a two-hour backup window should therefore be sized around 20GB plus headroom (about 25GB), as worked through below.
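The snippet below turns that estimate into simple shell arithmetic. The change rate, lifetime, and safety margin are illustrative assumptions; substitute values measured from your own workload:

```bash
#!/bin/bash
# Illustrative snapshot-size estimate based on the formula above.
# All inputs are example values - measure your own change rate.

ORIGIN_GB=100          # origin volume size
CHANGE_RATE_PCT=10     # % of origin rewritten per hour (unique chunks)
LIFETIME_HOURS=2       # how long the snapshot will exist (e.g. backup window)
SAFETY_MARGIN_PCT=25   # headroom so the snapshot never reaches 100%

CHANGED_GB=$(( ORIGIN_GB * CHANGE_RATE_PCT * LIFETIME_HOURS / 100 ))
RECOMMENDED_GB=$(( CHANGED_GB + CHANGED_GB * SAFETY_MARGIN_PCT / 100 ))

echo "Expected CoW growth:        ${CHANGED_GB}GB"
echo "Recommended snapshot size:  ${RECOMMENDED_GB}GB"
# For a 100GB origin at 10%/hour over 2 hours: 20GB expected, ~25GB recommended
```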
Monitoring Snapshot Usage:
```bash
#!/bin/bash
# Snapshot Sizing and Monitoring

# === CHECKING SNAPSHOT USAGE ===

# Display snapshot usage percentage
lvs -o lv_name,origin,snap_percent,lv_size vg_storage

# Output example:
#   LV       Origin   Snap%   LSize
#   lv_data                   100.00g
#   lv_snap  lv_data  45.23   20.00g

# The snap_percent shows how full the snapshot's CoW area is
# At 100%, the snapshot becomes invalid!

# === CONTINUOUS MONITORING ===

# Watch snapshot fill rate
watch -n 5 'lvs -o lv_name,snap_percent vg_storage | grep -v "^ LV"'

# Script to alert on high snapshot usage
#!/bin/bash
THRESHOLD=80
for snap in $(lvs --noheadings -o lv_name,snap_percent 2>/dev/null | \
              grep -v "^ LV" | awk '{print $1":"$2}' | grep ":"); do
    name=$(echo $snap | cut -d: -f1)
    pct=$(echo $snap | cut -d: -f2 | sed 's/%//')
    if [ ! -z "$pct" ] && [ $(echo "$pct > $THRESHOLD" | bc) -eq 1 ]; then
        echo "WARNING: Snapshot $name is ${pct}% full!"
        # Send alert, email, etc.
    fi
done

# === SNAPSHOT SIZE RECOMMENDATIONS ===

# For a 100GB origin with moderate write activity:
# - Quick backup (5 min): 5-10GB snapshot
# - Extended backup (1 hour): 15-25GB snapshot
# - Testing environment (8 hours): 30-50GB snapshot
# - Development clone (days): 50-100GB snapshot

# === CHUNK SIZE CONFIGURATION ===

# Chunk size affects CoW granularity and efficiency
# Classic snapshots default to 4K chunks; thin pools auto-select based on pool size
# Smaller chunks = finer granularity but more metadata overhead
# Larger chunks = less overhead but more data copied per write

# Create snapshot with specific chunk size
lvcreate -s -L 20G -c 64K -n snap_64k /dev/vg_storage/lv_data

# Check current chunk size
lvs -o lv_name,chunk_size /dev/vg_storage/snap_64k

# Common chunk sizes:
# 4K: Finest granularity, highest overhead (classic snapshot default)
# 16K: Balanced for OLTP databases
# 64K: Good general purpose (thin pool default)
# 512K: Sequential workloads, large files
```

When a classic snapshot reaches 100% capacity, it becomes permanently invalid. LVM cannot write the CoW data, so the snapshot can no longer represent a consistent point in time. The only recovery is to remove the snapshot. There is NO way to resurrect an invalid snapshot. This is why monitoring and proper sizing are critical.
LVM snapshot creation is straightforward, but effective management requires understanding the various options and their implications.
Creating Snapshots:
```bash
#!/bin/bash
# Snapshot Creation and Management

# === BASIC SNAPSHOT CREATION ===

# Create a snapshot of lv_data
# -s = snapshot
# -L = snapshot size (for CoW area)
# -n = snapshot name
lvcreate -s -L 20G -n snap_backup /dev/vg_storage/lv_data

# Create with percentage of origin size
lvcreate -s -l 20%ORIGIN -n snap_20pct /dev/vg_storage/lv_data

# Create with specific chunk size
lvcreate -s -L 20G -c 64K -n snap_tuned /dev/vg_storage/lv_data

# === SNAPSHOT WITH PERMISSIONS ===

# Create an explicitly read-only snapshot (handy for backup mounts)
lvcreate -s -L 20G -pr -n snap_ro /dev/vg_storage/lv_data

# Create a read-write snapshot (the LVM2 default; changes persist in the snapshot)
lvcreate -s -L 20G -prw -n snap_rw /dev/vg_storage/lv_data

# === SNAPSHOT INFORMATION ===

# List all snapshots with their origins
lvs -o lv_name,origin,snap_percent,lv_size

# Detailed snapshot information
lvdisplay /dev/vg_storage/snap_backup

# Shows:
#   LV snapshot status      source of /dev/vg_storage/lv_data
#   LV Status               available
#   LV Size                 100.00 GiB   (origin size)
#   Current LE              25600
#   COW-table size          20.00 GiB
#   COW-table LE            5120
#   Allocated to snapshot   12.35%
#   Snapshot chunk size     4.00 KiB

# === MOUNTING SNAPSHOTS ===

# Snapshots can be mounted like regular volumes
mkdir -p /mnt/snapshot
mount -o ro /dev/vg_storage/snap_backup /mnt/snapshot

# For XFS snapshots, need nouuid option (same UUID as origin)
mount -o ro,nouuid /dev/vg_storage/snap_xfs /mnt/snapshot

# === BACKUP FROM SNAPSHOT ===

# Create snapshot
lvcreate -s -L 20G -n snap_backup /dev/vg_storage/lv_data

# Mount read-only
mount -o ro /dev/vg_storage/snap_backup /mnt/snapshot

# Perform backup from snapshot
tar -czf /backup/data_backup.tar.gz -C /mnt/snapshot .
# Or use rsync, dd, or any backup tool

# Unmount and remove snapshot when done
umount /mnt/snapshot
lvremove /dev/vg_storage/snap_backup

# === REMOVING SNAPSHOTS ===

# Remove a snapshot (space reclaimed to VG)
lvremove /dev/vg_storage/snap_backup

# Force removal without confirmation
lvremove -f /dev/vg_storage/snap_backup

# Remove all snapshots of an origin
for snap in $(lvs --noheadings -o lv_name,origin | \
              awk '$2=="lv_data" {print $1}'); do
    lvremove -f /dev/vg_storage/$snap
done

# === SNAPSHOT EXTENSION ===

# Classic snapshots CAN be extended if not full
lvextend -L +10G /dev/vg_storage/snap_backup

# Check new size
lvs /dev/vg_storage/snap_backup
```

XFS stores a unique UUID in its superblock. When you snapshot an XFS volume, both origin and snapshot have the same UUID. XFS refuses to mount two filesystems with the same UUID simultaneously. Use 'mount -o nouuid' for the snapshot to override this check. Alternatively, use xfs_admin -U generate on an unmounted snapshot to assign a new UUID.
LVM snapshots support a powerful merge operation that restores the origin volume to the snapshot's point-in-time state. This enables instant rollback after failed upgrades, patches, or other changes.
How Merge Works:
The merge operation reverses the CoW relationship. Instead of the snapshot preserving old data as the origin changes, the snapshot's preserved data is copied back to the origin, effectively reverting all changes made since the snapshot was created.
Merge Mechanics:
```bash
#!/bin/bash
# Snapshot Merge (Rollback) Operations

# === BASIC MERGE OPERATION ===

# Scenario: Upgrade failed, need to rollback
# Snapshot 'pre_upgrade' was created before upgrade

# Step 1: Unmount the origin volume
umount /mnt/data

# Step 2: Initiate merge
lvconvert --merge /dev/vg_storage/pre_upgrade

# Step 3: Reactivate the origin
# Merge starts when origin is activated
lvchange -an /dev/vg_storage/lv_data
lvchange -ay /dev/vg_storage/lv_data

# Step 4: Remount (now contains pre-upgrade data)
mount /dev/vg_storage/lv_data /mnt/data

# === MERGE WITH ACTIVE ORIGIN ===

# If origin can't be unmounted (root filesystem, etc.)
lvconvert --merge /dev/vg_storage/pre_upgrade

# Output: "Merging of snapshot pre_upgrade will occur on
#          next activation of lv_data"

# Merge deferred - will complete on next boot
# Reboot the system to complete merge

# === MONITORING MERGE PROGRESS ===

# Merge is a background operation - check status
lvs -o lv_name,copy_percent,lv_attr /dev/vg_storage/lv_data

# 'Merging' or 'MergeTarget' appears in attributes
# copy_percent shows merge completion

# Watch merge progress
watch -n 2 'lvs -o lv_name,copy_percent vg_storage'

# === ROOT FILESYSTEM ROLLBACK ===

# Special case: Rolling back the root filesystem

# Step 1: Create pre-upgrade snapshot
lvcreate -s -L 20G -n snap_pre_upgrade /dev/vg_root/lv_root

# Step 2: Perform upgrade (which fails catastrophically)

# Step 3: Initiate merge (will defer since root is mounted)
lvconvert --merge /dev/vg_root/snap_pre_upgrade

# Step 4: Reboot - merge happens in initramfs before root mount
reboot

# After reboot, system is back to pre-upgrade state

# === CANCELING A PENDING MERGE ===

# If you changed your mind before reboot
# The snapshot must still exist (merge not started)
lvconvert --mergecancel /dev/vg_storage/pre_upgrade

# === PRACTICAL WORKFLOW: SAFE UPGRADE ===

#!/bin/bash
# safe_upgrade.sh - Upgrade with automatic rollback capability

ORIGIN="/dev/vg_storage/lv_data"
SNAP_NAME="pre_upgrade_$(date +%Y%m%d_%H%M%S)"
MOUNT_POINT="/mnt/data"

# Create snapshot
echo "Creating safety snapshot..."
lvcreate -s -L 20G -n $SNAP_NAME $ORIGIN

# Perform upgrade (replace with actual upgrade commands)
echo "Performing upgrade..."
# dnf upgrade -y
# apt upgrade -y

# Check if upgrade succeeded (replace with actual validation)
if [ $? -ne 0 ]; then
    echo "Upgrade failed! Rolling back..."
    umount $MOUNT_POINT
    lvconvert --merge /dev/vg_storage/$SNAP_NAME
    lvchange -an $ORIGIN
    lvchange -ay $ORIGIN
    mount $ORIGIN $MOUNT_POINT
    echo "Rollback complete."
else
    echo "Upgrade successful. Removing snapshot..."
    lvremove -f /dev/vg_storage/$SNAP_NAME
    echo "Snapshot removed."
fi
```

Merge replaces the origin's current data with the snapshot's old data. All changes made to the origin since the snapshot was created will be lost. If you need to preserve the current state, create another snapshot before merging. Also note: the snapshot is consumed by the merge—you cannot merge the same snapshot twice.
LVM thin provisioning introduces a fundamentally different snapshot model that addresses many limitations of classic snapshots. Thin snapshots operate within a thin pool and offer significant advantages.
Classic vs. Thin Snapshots:
| Feature | Classic Snapshot | Thin Snapshot |
|---|---|---|
| Space allocation | Pre-allocated CoW area | Shares thin pool, allocates on demand |
| Sizing | Fixed at creation | Grows automatically from pool |
| Invalidation risk | High (100% full = invalid) | Low (no per-snapshot CoW area to fill, but a full thin pool affects every volume in it) |
| Origin write penalty | High (CoW per block) | Lower (pool-level optimization) |
| Multiple snapshots | Each needs separate CoW area | Share pool efficiently |
| Snapshot of snapshot | Not directly supported | Fully supported (snapshot chains) |
| Writability | Writable by default (often mounted read-only for backups) | Writable by default |
| Merge rollback | Supported | Supported differently |
```bash
#!/bin/bash
# Thin Provisioning and Thin Snapshots

# === CREATING A THIN POOL ===

# Step 1: Create the thin pool
# This creates a pool that thin volumes and snapshots share
lvcreate --type thin-pool -L 200G -n thin_pool vg_storage

# With explicit metadata and data sizes
lvcreate --type thin-pool -L 200G \
    --poolmetadatasize 1G \
    -n thin_pool vg_storage

# === CREATING THIN VOLUMES ===

# Create a thin volume (virtual size, initially uses no space)
lvcreate --type thin -V 500G \
    --thinpool thin_pool \
    -n thin_data vg_storage

# Note: Virtual size (500G) > Pool size (200G)
# This is thin provisioning - overcommit!
# Thin volumes allocate from pool on first write

# === THIN SNAPSHOTS ===

# Create a thin snapshot (virtually instant, no size needed!)
lvcreate -s -n snap_thin /dev/vg_storage/thin_data

# That's it! No size specification required
# Snapshot shares pool with origin

# Create read-only thin snapshot
lvcreate -s -pr -n snap_ro /dev/vg_storage/thin_data

# === SNAPSHOT CHAINS ===

# Thin snapshots can be snapshotted!
lvcreate -s -n snap_level2 /dev/vg_storage/snap_thin

# And again...
lvcreate -s -n snap_level3 /dev/vg_storage/snap_level2

# This creates a chain: origin → snap → snap_level2 → snap_level3
# Each level only stores deltas

# === CHECKING THIN POOL USAGE ===

# Show pool capacity and usage
lvs -o lv_name,lv_size,data_percent,metadata_percent vg_storage/thin_pool

# Show all thin volumes and their actual data usage
lvs -o lv_name,lv_size,pool_lv,data_percent

# === WRITABLE THIN SNAPSHOTS ===

# Thin snapshots are writable by default
# Note: they carry the activation-skip flag by default; activate with
#   lvchange -ay -K /dev/vg_storage/snap_thin   (or create with -kn)
# You can mount and modify them independently
mount /dev/vg_storage/snap_thin /mnt/snapshot_work

# Changes to snapshot don't affect origin
# Changes to origin don't affect snapshot (after snapshot creation)

# === THIN SNAPSHOT FOR CLONING ===

# Create independent clone from snapshot
lvcreate -s -n vm_clone /dev/vg_storage/vm_template

# The clone can evolve independently
# Only stores differences from template

# Multiple clones efficiently share template data
lvcreate -s -n vm_clone1 /dev/vg_storage/vm_template
lvcreate -s -n vm_clone2 /dev/vg_storage/vm_template
lvcreate -s -n vm_clone3 /dev/vg_storage/vm_template

# 3 full VM clones using only delta storage!

# === THIN POOL AUTOEXTEND ===

# Configure auto-extend for thin pool (in /etc/lvm/lvm.conf)
# thin_pool_autoextend_threshold = 80
# thin_pool_autoextend_percent = 20

# Or extend manually
lvextend -L +50G vg_storage/thin_pool

# === DELETING THIN SNAPSHOTS ===

# Simply remove - space reclaimed to pool
lvremove /dev/vg_storage/snap_thin

# Removing middle of snapshot chain?
# LVM handles this - shared blocks stay in the pool as long as
# any remaining volume still references them
```

Thin snapshots excel at VM template deployment. Create a golden image thin volume, install and configure your template OS, then create thin snapshots for each VM instance. Each VM stores only its differences from the template. Deploying 100 VMs from a 50GB template might use only 55GB total instead of 5TB.
Snapshots provide powerful functionality but come with performance trade-offs that must be understood and managed.
Performance Impact by Operation:
| Operation | Impact | Cause | Mitigation |
|---|---|---|---|
| Origin write (first to block) | High (2-3x latency) | Read-before-write + CoW copy | Short snapshot lifecycles |
| Origin write (subsequent) | None | Block already in CoW | N/A |
| Origin read | None | Direct read from origin | N/A |
| Snapshot read (unchanged block) | Minimal | Redirected to origin | N/A |
| Snapshot read (changed block) | Low | Read from CoW area | N/A |
| Snapshot delete | Low | Metadata update, CoW release | N/A |
| Multiple snapshots | Higher write penalty | Multiple CoW copies | Use thin snapshots |
Write Amplification:
The Copy-on-Write mechanism causes write amplification. For each unique block first written after snapshot creation:
1. LVM reads the original data from the origin.
2. That data is written into the snapshot's CoW area.
3. Only then is the new data written to the origin.
This triples I/O for first-time writes to each block. The impact depends on how much of the origin is overwritten while the snapshot exists, how random those writes are, how long the snapshot lives, the chunk size, and how many classic snapshots are active at once.
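As a concrete illustration of what that amplification means in volume terms (example figures only, not measurements):

```bash
#!/bin/bash
# Illustrative write-amplification arithmetic (example numbers)

APP_WRITE_GB=10   # unique data the application overwrites while the snapshot exists

# Worst case per first-written chunk: 1 read (old data) + 1 write (CoW area) + 1 write (origin)
PHYSICAL_IO_GB=$(( APP_WRITE_GB * 3 ))

echo "Application writes:             ${APP_WRITE_GB}GB"
echo "Physical I/O (snapshot active): ~${PHYSICAL_IO_GB}GB"
```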
Chunk Size Effects:
```bash
#!/bin/bash
# Snapshot Performance Tuning

# === CHUNK SIZE SELECTION ===

# Smaller chunks (4K):
# + Fine granularity - less data copied per write
# - Higher metadata overhead
# - More seeks for reading CoW area
# Best for: Databases with 4K pages, OLTP workloads
lvcreate -s -L 20G -c 4K -n snap_oltp /dev/vg_storage/lv_database

# Medium chunks (32K-64K):
# + Balanced performance
# + Aligns with common filesystem block sizes
# Best for: General purpose, mixed workloads
lvcreate -s -L 20G -c 64K -n snap_general /dev/vg_storage/lv_data

# Larger chunks (256K-1M):
# + Lower metadata overhead
# + Better sequential read of CoW area
# - More data copied even for small writes
# Best for: Large file workloads, sequential I/O
lvcreate -s -L 20G -c 256K -n snap_media /dev/vg_storage/lv_video

# === PERFORMANCE MONITORING ===

# Monitor origin I/O during snapshot
iostat -x 1 /dev/mapper/vg_storage-lv_data

# Watch for:
# - Increased w_await (write latency)
# - Increased write throughput (CoW overhead)

# === MINIMIZING SNAPSHOT DURATION ===

# For backup: Create snapshot, backup immediately, delete
START=$(date +%s)
lvcreate -s -L 20G -n snap_backup /dev/vg_storage/lv_data
mount -o ro /dev/vg_storage/snap_backup /mnt/snap
tar -czf /backup/data.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg_storage/snap_backup
END=$(date +%s)
echo "Snapshot lived for $((END-START)) seconds"

# === MULTIPLE SNAPSHOTS IMPACT ===

# With multiple snapshots, each first-write might need
# to copy to ALL snapshots (worst case)

# Better: Use thin snapshots which share a pool
# and optimize CoW operations

# === THIN POOL PERFORMANCE TUNING ===

# Zero new blocks (security, but slower allocation)
lvchange --zero y vg_storage/thin_pool

# Skip zeroing (faster, for trusted environments)
lvchange --zero n vg_storage/thin_pool

# Discards (for SSD optimization)
lvchange --discards passdown vg_storage/thin_pool

# === SNAPSHOT ON DIFFERENT PHYSICAL VOLUME ===

# Place snapshot CoW area on different PV to
# separate I/O streams (the PV must already belong to vg_storage)
lvcreate -s -L 20G -n snap_separate \
    /dev/vg_storage/lv_data /dev/fast_ssd

# This puts the CoW area on /dev/fast_ssd
# Origin remains on its original PV
# Reduces I/O contention
```

The longer a snapshot exists, the more blocks have been CoW-copied, and the larger the snapshot's CoW area becomes. Old snapshots can have significant storage overhead and may impact origin write performance. Establish snapshot retention policies—delete snapshots as soon as they're no longer needed, especially for write-intensive workloads.
Snapshots enable workflows that would otherwise be difficult, risky, or impossible. Here are production-tested patterns.
```bash
#!/bin/bash
# Consistent Backup Using LVM Snapshots

# Configuration
ORIGIN="/dev/vg_storage/lv_database"
SNAP_NAME="backup_snap_$(date +%Y%m%d_%H%M%S)"
MOUNT_POINT="/mnt/backup_snapshot"
BACKUP_DEST="/backup"

# === FOR DATABASES: Quiesce before snapshot ===

# MySQL/MariaDB
# (caveat: FLUSH TABLES WITH READ LOCK only lasts for the client session,
#  so in a real script keep one mysql session open across the snapshot
#  instead of using separate -e invocations)
mysql -e "FLUSH TABLES WITH READ LOCK;"
mysql -e "FLUSH LOGS;"

# PostgreSQL
# (note: pg_start_backup/pg_stop_backup were replaced by
#  pg_backup_start/pg_backup_stop in PostgreSQL 15)
psql -c "SELECT pg_start_backup('lvm_snapshot');"

# === Create the snapshot ===
echo "Creating snapshot..."
lvcreate -s -L 20G -n $SNAP_NAME $ORIGIN

# === Release database lock immediately ===
# Snapshot is instant, minimize lock duration
mysql -e "UNLOCK TABLES;"
# or
psql -c "SELECT pg_stop_backup();"

# === Perform backup from snapshot ===
echo "Mounting snapshot..."
mkdir -p $MOUNT_POINT
mount -o ro /dev/vg_storage/$SNAP_NAME $MOUNT_POINT

echo "Backing up..."
# Use your preferred backup method
tar -czf $BACKUP_DEST/db_backup_$(date +%Y%m%d_%H%M%S).tar.gz \
    -C $MOUNT_POINT .

# Or use rsync for incremental
rsync -av --delete $MOUNT_POINT/ $BACKUP_DEST/latest/

# Or use dd for raw image
dd if=/dev/vg_storage/$SNAP_NAME of=$BACKUP_DEST/db.img bs=4M

# === Cleanup ===
echo "Cleaning up..."
umount $MOUNT_POINT
lvremove -f /dev/vg_storage/$SNAP_NAME
rmdir $MOUNT_POINT

echo "Backup complete."
```

Integrate snapshot operations into your automation pipelines. CI/CD systems can create test snapshots on demand, backup scripts can use snapshots for consistency, and deployment pipelines can create rollback points automatically. The instantaneous nature of snapshot creation makes this practical even in fast-moving environments.
LVM snapshots provide point-in-time copies of volumes using space-efficient copy-on-write mechanics. Let's consolidate the essential concepts:
- Copy-on-write means a snapshot stores only the original versions of chunks that change on the origin, so creation is instant and initial space use is negligible.
- Classic snapshots need a pre-sized CoW area; monitor snap_percent, because a snapshot that reaches 100% becomes invalid and can only be removed.
- Snapshots can be mounted (read-only for backups, with nouuid for XFS) and removed with lvremove once the work is done.
- lvconvert --merge rolls the origin back to the snapshot's point in time, consuming the snapshot in the process.
- Thin snapshots allocate from a shared pool, need no fixed size, and support chains and writable clones, but they make pool monitoring essential.
- CoW adds a write penalty on the origin, so keep snapshot lifetimes short and match chunk size to the workload.
What's Next:
The final page in this module explores Thin Provisioning—a powerful storage virtualization technique that enables over-commitment, on-demand allocation, and highly efficient snapshot management. Building on the thin snapshot concepts introduced here, we'll dive deep into thin pool design, capacity management, and the production considerations that make thin provisioning both powerful and potentially dangerous.
You now possess comprehensive knowledge of LVM Snapshots—from copy-on-write fundamentals through sizing, management, merging, thin snapshots, and production workflows. This foundation prepares you for the advanced topic of thin provisioning.