Logical Volumes (LVs) are the culmination of LVM's abstraction stack—the layer where virtualized storage becomes usable by file systems and applications. While Physical Volumes provide raw capacity and Volume Groups aggregate that capacity into pools, Logical Volumes are where storage actually gets allocated, formatted, mounted, and used.
From the perspective of a file system or application, a logical volume is indistinguishable from a physical disk partition. It appears as /dev/vg_name/lv_name or /dev/mapper/vg_name-lv_name, supports all standard block device operations, and can host any file system. The magic lies in what happens beneath this abstraction: the freedom to resize, relocate, snapshot, and stripe without the application knowing or caring.
This page provides comprehensive coverage of logical volume types, creation strategies, resizing mechanics, and the critical relationship between LVs and the file systems that inhabit them.
By the end of this page, you will understand linear, striped, and mirrored logical volumes; LV creation with all essential options; online and offline resizing operations; the critical difference between LV and file system resize; LV activation and device paths; RAID logical volumes; and best practices for production LV design.
LVM supports multiple logical volume types, each optimized for different use cases. Understanding these types and their trade-offs is essential for effective storage design.
Linear Volumes:
The simplest and most common type. A linear volume maps logical extents to physical extents sequentially. If the volume spans multiple segments (due to non-contiguous allocation or spanning PVs), LVM simply concatenates them.
Striped Volumes:
Data is distributed across multiple physical volumes in an interleaved pattern. Each "stripe" writes to a different PV in round-robin fashion. This improves I/O parallelism and throughput for workloads that can benefit from concurrent disk access.
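As a minimal sketch (the VG and LV names are illustrative), a striped volume is created by giving lvcreate a stripe count and stripe size:

```bash
# Stripe across 3 PVs with a 64 KiB stripe size; vg_storage must
# contain at least 3 PVs with free space
lvcreate --type striped -i 3 -I 64 -L 60G -n lv_stripe vg_storage
```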
Mirrored Volumes:
Data is duplicated across multiple physical volumes. Every write goes to all mirrors simultaneously, providing redundancy. If one PV fails, data remains available from surviving mirrors.
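For example (names again illustrative), modern LVM implements mirrors as raid1 volumes, where -m specifies the number of additional copies:

```bash
# Two-way mirror: one original plus one extra copy on a second PV
lvcreate --type raid1 -m 1 -L 50G -n lv_mirror vg_storage
```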
RAID Volumes:
LVM implements standard RAID levels (0, 1, 4, 5, 6, 10) natively using dm-raid. These provide various combinations of striping, mirroring, and parity for performance and redundancy.
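A sketch of creating RAID volumes directly, assuming the VG holds enough PVs to satisfy each layout (stripes plus parity or mirror copies); the names and sizes are placeholders:

```bash
# RAID5: 3 data stripes plus rotating parity (needs at least 4 PVs)
lvcreate --type raid5 -i 3 -L 90G -n lv_raid5 vg_storage

# RAID6: 3 data stripes plus two parity blocks (needs at least 5 PVs)
lvcreate --type raid6 -i 3 -L 90G -n lv_raid6 vg_storage

# RAID10: 2 stripes, each mirrored once (needs at least 4 PVs)
lvcreate --type raid10 -i 2 -m 1 -L 100G -n lv_raid10 vg_storage
```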
Thin Volumes:
Thin provisioning volumes allocate physical storage on-demand rather than upfront. Covered in detail on Page 5.
| Type | Performance | Redundancy | Space Efficiency | Use Case |
|---|---|---|---|---|
| Linear | Good (single disk) | None | 100% | General storage, databases |
| Striped | Excellent (parallel I/O) | None | 100% | High-throughput workloads |
| Mirror (RAID1) | Good read, slower write | Full redundancy | 50% | Critical data, boot volumes |
| RAID5 | Good balanced | Single disk fault tolerance | ~(n-1)/n | Balanced performance/protection |
| RAID6 | Good balanced | Dual disk fault tolerance | ~(n-2)/n | High availability requirements |
| RAID10 | Excellent | Full redundancy | 50% | High-performance with protection |
| Thin | Good with overhead | None (pool-level) | Variable | Over-committed environments |
Default to linear volumes unless you have a specific performance requirement. Striped volumes improve throughput for large sequential I/O, but they add complexity and can only be extended when free space is available on every PV in the stripe set. For most database and application workloads, the file system and I/O scheduler already optimize access patterns effectively.
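To see whether an existing volume is striped, and across how many devices, before planning an extension, a check like the following works (the VG name is illustrative):

```bash
# Segment type, stripe count, and backing devices for each LV
lvs -o lv_name,seg_type,stripes,devices vg_storage
```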
The lvcreate command creates logical volumes with extensive options for size, type, naming, and allocation. Mastering these options enables precise control over storage allocation.
Size Specification Methods:
LVM offers flexible ways to specify volume size:
- Absolute size with units (e.g., -L 10G, -L 500M, -L 1T)
- Extent count (e.g., -l 100 = 100 extents)
- Percentage of free space (e.g., -l 50%FREE = half of the remaining free space)
```bash
#!/bin/bash
# Creating Linear Logical Volumes

# === BASIC LINEAR VOLUME CREATION ===

# Create 50GB linear volume named lv_data
lvcreate -L 50G -n lv_data vg_storage

# Create the same volume using an extent count (assumes 4MB extents)
# 12800 extents × 4MB = 50GB
lvcreate -l 12800 -n lv_data vg_storage

# Create volume using all free space
lvcreate -l 100%FREE -n lv_remaining vg_storage

# Create volume using 50% of free space
lvcreate -l 50%FREE -n lv_half vg_storage

# === SPECIFYING PHYSICAL VOLUME LOCATION ===

# Create volume only on specific PV
lvcreate -L 50G -n lv_ssd vg_mixed /dev/nvme0n1p2

# Create volume spanning specific PVs (in order)
lvcreate -L 100G -n lv_large vg_storage /dev/sdb /dev/sdc

# Create volume on PVs with specific tag
lvcreate -L 50G -n lv_fast vg_mixed @tier_ssd

# === CONTIGUOUS ALLOCATION ===

# Require physically contiguous extents (best performance)
lvcreate -L 50G -n lv_contiguous --alloc contiguous vg_storage
# This will fail if 50GB of contiguous space isn't available

# === SPECIFYING PE RANGE ===

# Allocate from specific PE range on a PV
# Syntax: PV[:PE-PE]
lvcreate -L 20G -n lv_specific vg_storage /dev/sdb:0-5119

# === ZERO INITIALIZATION ===

# Zero the start of the volume to wipe old signatures
# (this is the default; it zeroes the first block, not every block)
lvcreate -L 50G -n lv_zeroed --zero y vg_storage

# Skip zeroing (faster, for trusted environments)
lvcreate -L 50G -n lv_nozero --zero n vg_storage

# === ACTIVATION CONTROL ===

# Create but don't activate (useful for scripting)
lvcreate -L 50G -n lv_inactive --activate n vg_storage

# Create with specific activation mode
lvcreate -L 50G -n lv_local --activate ay vg_storage    # auto-yes
lvcreate -L 50G -n lv_cluster --activate ey vg_storage  # exclusive

# === VERIFY CREATION ===

# Display the new volume
lvdisplay /dev/vg_storage/lv_data

# Show segment information
lvs -o lv_name,seg_count,seg_start_pe,devices /dev/vg_storage/lv_data
```

RAID protects against hardware failure, not data corruption, accidental deletion, or software bugs. Files corrupted by application errors are corrupted identically across all RAID members. Always maintain proper backups regardless of RAID configuration.
Every logical volume is accessible through multiple device paths. Understanding these paths is essential for scripting, configuration, and troubleshooting.
Device Path Formats:
```bash
#!/bin/bash
# Logical Volume Device Paths
# For a logical volume named 'lv_data' in volume group 'vg_storage':

# === TRADITIONAL LVM PATH ===
# Symbolic link maintained by LVM:
#   /dev/vg_storage/lv_data

# It is a symlink to the device-mapper device:
ls -la /dev/vg_storage/lv_data
# lrwxrwxrwx 1 root root 7 Jan 15 10:00 /dev/vg_storage/lv_data -> ../dm-3

# === DEVICE-MAPPER PATH ===
# The actual dm device created by device-mapper:
#   /dev/mapper/vg_storage-lv_data
#
# Note the naming convention: VG-LV joined with a hyphen.
# Hyphens within names are doubled: vg-name/lv-data becomes vg--name-lv--data

ls -la /dev/mapper/vg_storage-lv_data
# lrwxrwxrwx 1 root root 7 Jan 15 10:00 /dev/mapper/vg_storage-lv_data -> ../dm-3

# === RAW DM DEVICE ===
# The underlying device node (dm-N where N is a number):
#   /dev/dm-3
# Not recommended for configuration files - the number can change!

# === WHICH PATH TO USE? ===

# For /etc/fstab: use the mapper path or VG/LV path
# /dev/mapper/vg_storage-lv_data  /data  ext4  defaults  0 2
# Or use UUID (most reliable):
# UUID=a1b2c3d4-...  /data  ext4  defaults  0 2

# For scripts: use the /dev/mapper path (works consistently)
mkfs.ext4 /dev/mapper/vg_storage-lv_data

# For documentation: use the /dev/VG/LV format (most readable)
mount /dev/vg_storage/lv_data /mnt/data

# === FINDING UUID ===

blkid /dev/vg_storage/lv_data
# /dev/vg_storage/lv_data: UUID="a1b2c3d4-5e6f-7a8b-9c0d-e1f2a3b4c5d6" TYPE="ext4"

# LVM also assigns its own UUID (different from the filesystem UUID)
lvs -o lv_name,lv_uuid /dev/vg_storage/lv_data

# === DEVICE-MAPPER INTERNALS ===

# List all device-mapper devices
dmsetup ls
# vg_storage-lv_data (253:3)

# Show the mapping table for an LV
dmsetup table vg_storage-lv_data
# 0 104857600 linear 8:17 2048

# Interpretation:
# 0         = start sector of this segment
# 104857600 = number of sectors (50GB)
# linear    = target type
# 8:17      = major:minor of underlying device (/dev/sdb1)
# 2048      = offset on underlying device
```

In /etc/fstab, prefer UUID references over device paths. While /dev/mapper paths are stable, UUIDs survive VG renames and are the most portable. Use 'blkid' to find the filesystem UUID (not the LVM UUID) for fstab entries.
The ability to resize logical volumes—often online, without unmounting—is one of LVM's most valuable features. However, resizing involves two distinct operations that must be performed correctly:

1. Resizing the logical volume itself (the block device), using lvextend, lvreduce, or lvresize.
2. Resizing the file system that lives on the volume, using a filesystem-specific tool such as resize2fs or xfs_growfs.
These operations have different requirements and constraints depending on whether you're growing or shrinking.
```bash
#!/bin/bash
# Resizing Logical Volumes

# ============================================
# EXTENDING (GROWING) LOGICAL VOLUMES
# ============================================

# === EXTEND LV, THEN FS ===

# Step 1: Extend the logical volume by 20GB
lvextend -L +20G /dev/vg_storage/lv_data

# Step 2a: Extend ext4 filesystem (online capable)
resize2fs /dev/vg_storage/lv_data

# Step 2b: Extend XFS filesystem (online ONLY)
xfs_growfs /mnt/data   # XFS uses mount point, not device!

# === EXTEND LV + FS IN ONE COMMAND ===

# The -r flag automatically resizes the filesystem
lvextend -L +20G -r /dev/vg_storage/lv_data

# This works for ext4, XFS, and other supported filesystems
# Much safer - no chance of forgetting the FS resize

# === SIZE SPECIFICATION OPTIONS ===

# Extend to absolute size
lvextend -L 100G -r /dev/vg_storage/lv_data

# Extend by specific amount
lvextend -L +50G -r /dev/vg_storage/lv_data

# Extend by extent count
lvextend -l +1000 -r /dev/vg_storage/lv_data

# Use all remaining free space in VG
lvextend -l +100%FREE -r /dev/vg_storage/lv_data

# Use specific percentage of VG
lvextend -l 50%VG -r /dev/vg_storage/lv_data

# ============================================
# REDUCING (SHRINKING) LOGICAL VOLUMES
# ============================================

# WARNING: Shrinking is DANGEROUS if done incorrectly!
# You MUST shrink the filesystem BEFORE shrinking the LV!
# Otherwise you will DESTROY your filesystem!

# === SAFE SHRINK PROCEDURE FOR EXT4 ===

# Step 1: Unmount the filesystem
umount /mnt/data

# Step 2: Check filesystem integrity
e2fsck -f /dev/vg_storage/lv_data

# Step 3: Shrink the filesystem to target size
# MUST be smaller than or equal to LV target size
resize2fs /dev/vg_storage/lv_data 80G

# Step 4: Shrink the logical volume
# Use equal or slightly larger size than FS
lvreduce -L 80G /dev/vg_storage/lv_data

# Step 5: Remount
mount /dev/vg_storage/lv_data /mnt/data

# === COMBINED SHRINK COMMAND ===

# The -r flag handles FS resize automatically
# Still requires unmount for ext4 shrinking!
umount /mnt/data
lvreduce -L 80G -r /dev/vg_storage/lv_data
mount /dev/vg_storage/lv_data /mnt/data

# === XFS CANNOT BE SHRUNK ===

# XFS does not support shrinking!
# To reduce an XFS volume:
# 1. Backup data
# 2. Create new smaller volume
# 3. Format with XFS
# 4. Restore data
# 5. Delete old volume

# === FORCE REDUCE (DANGEROUS!) ===

# Only use if filesystem is already smaller than target LV size
# Or if you're intentionally destroying the filesystem
lvreduce -f -L 50G /dev/vg_storage/lv_data

# ============================================
# RESIZE (ALTERNATIVE COMMAND)
# ============================================

# lvresize can both grow and shrink
lvresize -L 100G -r /dev/vg_storage/lv_data   # Set absolute size
lvresize -L +20G -r /dev/vg_storage/lv_data   # Grow by 20G
lvresize -L -20G -r /dev/vg_storage/lv_data   # Shrink by 20G
```

When shrinking, ALWAYS shrink the filesystem FIRST, then the LV. Shrinking the LV before the filesystem truncates the filesystem, causing catastrophic data loss. The -r flag handles this automatically, but if doing manual operations, never forget this order. Growing is the opposite: extend LV first, then filesystem.
| Filesystem | Online Grow | Offline Grow | Online Shrink | Offline Shrink |
|---|---|---|---|---|
| ext4 | ✅ Yes | ✅ Yes | ❌ No | ✅ Yes |
| XFS | ✅ Yes | ❌ No | ❌ No | ❌ No |
| Btrfs | ✅ Yes | ❌ No | ✅ Yes | ❌ No |
| ReiserFS | ✅ Yes | ✅ Yes | ❌ No | ✅ Yes |
| NTFS | ❌ No | ✅ Yes | ❌ No | ✅ Yes |
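A small helper sketch tying the table to practice: grow the LV, then pick the matching filesystem-resize tool. The LV path and mount point are placeholders, and only the filesystems listed above are handled:

```bash
#!/bin/bash
# Hypothetical grow helper; LV path and mount point are illustrative
LV=/dev/vg_storage/lv_data
MNT=/mnt/data

# Grow the block device first
lvextend -L +10G "$LV"

# Then grow the filesystem with the tool that matches its type
case "$(findmnt -no FSTYPE "$MNT")" in
    ext2|ext3|ext4) resize2fs "$LV" ;;                     # takes the device
    xfs)            xfs_growfs "$MNT" ;;                   # takes the mount point
    btrfs)          btrfs filesystem resize max "$MNT" ;;  # takes the mount point
    *)              echo "unhandled filesystem" >&2; exit 1 ;;
esac
```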
Logical volumes must be activated before they can be mounted or used. Activation loads the device-mapper configuration into the kernel and creates the /dev entries. Understanding activation is crucial for troubleshooting boot issues and managing shared storage.
Activation States:
An LV can be in one of several states:

- Active: the device-mapper table is loaded into the kernel and the /dev entries exist, so the LV can be opened, formatted, and mounted.
- Inactive: the LV is defined in VG metadata but has no device node; it must be activated before use.
- Suspended: I/O to the LV is temporarily frozen, typically during internal operations such as snapshot creation or table reloads.
```bash
#!/bin/bash
# Logical Volume Activation Management

# === CHECKING ACTIVATION STATE ===

# Show all LVs with activation state
lvs -o lv_name,lv_attr,lv_active

# The 5th character in lv_attr shows activation:
# 'a' = active
# '-' = inactive
# 's' = suspended
# 'I' = invalid snapshot
# 'd' = device is present without tables

# Example output:
# LV       Attr       Active
# lv_data  -wi-a----- active

# === MANUAL ACTIVATION/DEACTIVATION ===

# Activate a single LV
lvchange -ay /dev/vg_storage/lv_data

# Deactivate a single LV (must be unmounted first!)
umount /mnt/data
lvchange -an /dev/vg_storage/lv_data

# Activate all LVs in a VG
vgchange -ay vg_storage

# Deactivate all LVs in a VG
vgchange -an vg_storage

# === ACTIVATION MODES ===

# Standard activation (local)
lvchange -ay /dev/vg_storage/lv_data

# Exclusive activation (for clustered environments)
# Prevents other nodes from activating
lvchange -aey /dev/vg_storage/lv_data

# Shared activation (for clustered environments)
# Multiple nodes can activate simultaneously
lvchange -asy /dev/vg_storage/lv_data

# === ACTIVATION SKIP CONTROL ===

# Clear the activation-skip flag (LV is activated by normal vgchange -ay)
lvchange --setactivationskip n /dev/vg_storage/lv_data

# Set the activation-skip flag (LV is skipped unless -K is used)
lvchange --setactivationskip y /dev/vg_storage/lv_data

# Check the setting (look for 'k' in attr)
lvs -o lv_name,lv_attr vg_storage
# 'k' in position 10 = activation skip enabled

# === BOOT-TIME ACTIVATION ===

# LVs are typically activated during boot by:
# 1. systemd lvm2-activation-generator
# 2. initramfs with lvm2 hook
# 3. dracut lvm module

# Force regeneration of initramfs with LVM support
# Debian/Ubuntu:
update-initramfs -u
# RHEL/CentOS/Fedora:
dracut -f

# === TROUBLESHOOTING ACTIVATION ===

# Scan for all PVs and update cache
pvscan --cache

# Activate with verbose output
vgchange -ay -vvvv vg_storage 2>&1 | tee /tmp/lvm_debug.log

# Check if device-mapper module is loaded
lsmod | grep dm_mod

# Verify dm device was created
ls -la /dev/mapper/vg_storage-lv_data
dmsetup ls | grep vg_storage

# Force activation even with partial VG
vgchange -ay --partial vg_storage

# === SUSPEND AND RESUME ===

# Suspend I/O to an LV (lvchange has no suspend option; use dmsetup)
dmsetup suspend vg_storage-lv_data

# Resume I/O
dmsetup resume vg_storage-lv_data

# Refresh the device-mapper table (after pvmove, etc.)
lvchange --refresh /dev/vg_storage/lv_data
```

If your root filesystem is on LVM, activation happens in the initramfs before the root filesystem is mounted. If you're having boot issues with LVM root volumes, ensure your initramfs includes LVM support and rebuild it with 'update-initramfs -u' or 'dracut -f' as appropriate for your distribution.
LVM allows converting logical volumes between different types—adding mirrors, converting to RAID, or changing stripe configurations. The lvconvert command handles these transformations, often online.
Common Conversions:

- Linear to mirrored (RAID1), and back to linear
- Linear or RAID1 to RAID5; RAID5 to RAID6
- Adding or removing mirror legs on an existing RAID1
- Replacing a failed device inside a RAID LV
- Converting a regular LV into a thin volume backed by a thin pool
```bash
#!/bin/bash
# Converting Between Logical Volume Types

# === LINEAR TO MIRRORED ===

# Add a mirror to a linear volume (creates RAID1)
lvconvert --type raid1 -m 1 /dev/vg_storage/lv_data

# This triggers a synchronization - check progress
lvs -o lv_name,copy_percent /dev/vg_storage/lv_data

# Specify which PV for the new mirror leg
lvconvert --type raid1 -m 1 /dev/vg_storage/lv_data /dev/sdc

# === MIRRORED TO LINEAR ===

# Remove mirror, keeping data on one leg
lvconvert -m 0 /dev/vg_storage/lv_data

# Specify which PV's data to keep
lvconvert -m 0 /dev/vg_storage/lv_data /dev/sdb

# === LINEAR TO RAID5 ===

# Convert linear to RAID5 (requires free space for parity)
lvconvert --type raid5 -i 2 /dev/vg_storage/lv_data

# This is a complex operation:
# 1. Allocates space for parity
# 2. Reshapes data layout
# 3. Calculates and writes parity

# === RAID1 TO RAID5 ===

# Convert RAID1 to RAID5 for space efficiency
lvconvert --type raid5 /dev/vg_storage/lv_mirror

# === RAID5 TO RAID6 ===

# Add extra parity for dual-failure protection
lvconvert --type raid6 /dev/vg_storage/lv_raid5

# === ADD/REMOVE MIRRORS ===

# Add another mirror to existing RAID1
lvconvert -m +1 /dev/vg_storage/lv_mirror

# Remove a mirror leg
lvconvert -m -1 /dev/vg_storage/lv_mirror

# === REPLACE FAILED DEVICE IN RAID ===

# Replace a specific PV in a RAID array
lvconvert --replace /dev/failed_disk /dev/vg_storage/lv_raid \
    /dev/replacement_disk

# === STRIPE CONVERSION ===

# Note: Stripe count changes require reshape support
# Converting between stripe counts is version-dependent

# === CONVERSION TO/FROM THIN ===

# Convert linear to thin (requires thin pool)
# First create a thin pool if one doesn't exist
lvcreate --type thin-pool -L 100G -n thin_pool vg_storage

# Convert regular LV to thin LV
lvconvert --type thin --thinpool vg_storage/thin_pool \
    /dev/vg_storage/lv_data

# === MONITORING CONVERSION ===

# Watch conversion progress
watch -n 1 'lvs -o lv_name,copy_percent,sync_percent vg_storage'

# Check RAID synchronization status
lvs -o lv_name,seg_type,data_percent,sync_percent

# === CONVERSION OPTIONS ===

# Run conversion in background
lvconvert --background --type raid1 -m 1 /dev/vg_storage/lv_data

# Set synchronization rate limits (KiB/s minimum and maximum)
lvconvert --type raid1 -m 1 \
    --minrecoveryrate 1024 \
    --maxrecoveryrate 10240 \
    /dev/vg_storage/lv_data
```

Type conversions often require significant free space in the VG (for new mirror legs or parity) and can take hours for large volumes. Plan conversions during low-activity periods. While conversions are typically online-safe, system crashes during conversion can leave volumes in intermediate states requiring recovery.
Effective logical volume design requires balancing performance, flexibility, and maintainability. These best practices derive from real-world production experience.
Filesystem Selection for LV:
| Use Case | Recommended FS | Rationale |
|---|---|---|
| Database data files | XFS or ext4 | XFS for large files, ext4 for general purpose |
| Web server content | ext4 | Good all-around performance, shrink capability |
| Large media files | XFS | Excellent large file performance |
| Container storage | XFS or Btrfs | Overlay/devicemapper backend support |
| Boot volume | ext4 | Universal bootloader support |
| Frequently resized | ext4 or Btrfs | XFS can't shrink |
Create wrapper scripts for common resize operations that include pre-checks (space available, filesystem type), use the -r flag, and log operations. This prevents human error and creates an audit trail. Example: 'extend_lv vg_storage lv_data +10G' that handles all safety checks.
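A minimal sketch of such a wrapper, with the log path and behavior as assumptions rather than a standard tool:

```bash
#!/bin/bash
# Hypothetical wrapper: extend_lv VG LV SIZE   (e.g. extend_lv vg_storage lv_data +10G)
set -euo pipefail
VG=$1; LV=$2; SIZE=$3

# Pre-check: record how much free space the VG has before extending
FREE=$(vgs --noheadings -o vg_free --units g "$VG" | tr -d ' ')

# Audit trail (log location is an assumption)
echo "$(date -Is) extend ${VG}/${LV} by ${SIZE} (VG free: ${FREE})" >> /var/log/lv_resize.log

# -r resizes the filesystem together with the LV, so the order is always correct
lvextend -L "$SIZE" -r "/dev/${VG}/${LV}"
```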
Logical volumes are the user-facing layer of LVM—the virtualized block devices that applications and file systems interact with. Let's consolidate the essential concepts:

- Linear volumes are the default; striped, mirrored, and RAID volumes trade space or complexity for performance and redundancy.
- lvcreate controls size (-L/-l), placement (PV arguments, tags, PE ranges), allocation policy, and activation.
- /dev/VG/LV and /dev/mapper/VG-LV are symlinks to the same dm-N node; prefer filesystem UUIDs in /etc/fstab.
- Resizing is two operations: when growing, extend the LV first and then the file system; when shrinking, shrink the file system first and then the LV (or let -r handle the ordering).
- File system support varies: ext4 shrinks only offline, while XFS grows online but never shrinks.
- lvconvert changes volume types, often online, but conversions need free space and time to synchronize.
What's Next:
With logical volumes mastered, we turn to one of LVM's most powerful features: Snapshots. The next page explores how LVM creates point-in-time copies of volumes using copy-on-write mechanics, enabling consistent backups, testing, and rollback capabilities without doubling storage consumption.
You now possess comprehensive knowledge of LVM Logical Volumes—from creation through resizing, activation, and type conversion. This foundation prepares you for advanced topics including snapshots and thin provisioning.