What is LVM?
Logical Volume Manager (LVM) is a versatile and powerful tool in the world of Linux storage management. It offers a flexible approach to managing disk storage, allowing users to dynamically allocate, resize, and manipulate volumes with ease. In this article, we’ll explore the theoretical foundations of LVM in Linux, followed by practical examples that demonstrate its utility in real-world scenarios.
Logical Volume Manager (LVM) is a disk management tool that abstracts physical storage devices into logical volumes. It enables administrators to manage storage resources more flexibly by providing features such as volume resizing, snapshots, and striping across multiple disks.
Components of LVM:
- Physical Volumes (PV): Physical storage devices, such as hard drives or SSDs, are initialized as physical volumes in LVM.
- Volume Groups (VG): Physical volumes are grouped into volume groups, which serve as a pool of storage.
- Logical Volumes (LV): Volume groups are subdivided into logical volumes, which act as virtual partitions that can be mounted and used like traditional disk partitions.
- Physical Extents (PE): The smallest unit of allocation within a physical volume. Logical volumes are built from extents allocated out of the volume group's pool (the commands after this list show how to inspect each of these layers).
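All of these layers can be inspected with the standard LVM reporting commands; the short sketch below assumes nothing beyond a system where LVM is already in use.
# List physical volumes, volume groups, and logical volumes.
pvs
vgs
lvs
# Show the extent size (PE Size) and the total/free extents of each volume group.
vgdisplay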
Advantages of LVM:
- Dynamic Volume Management: LVM allows logical volumes to be resized while the system is running; growing a mounted file system is typically possible online, while shrinking usually requires unmounting (and is not supported at all by XFS, as shown later in this article).
- Data Striping and Mirroring: LVM supports striping (RAID 0) and mirroring (RAID 1) across multiple physical volumes, enhancing both performance and data redundancy.
- Snapshot Creation: LVM enables the creation of snapshots, which are point-in-time copies of logical volumes. This feature is useful for backup and testing purposes (a short sketch of striping, mirroring, and snapshots follows this list).
- Ease of Management: LVM provides a convenient and centralized interface for managing storage resources, simplifying tasks such as volume creation, resizing, and migration.
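As a rough sketch of the striping, mirroring, and snapshot features mentioned above, assuming an illustrative volume group vg0 with at least two physical volumes and an existing logical volume lv-data (none of these names appear in the examples later in this article):
# Striped LV across 2 PVs with a 64 KiB stripe size (RAID 0-style).
lvcreate -i 2 -I 64 -L 1G -n lv-striped vg0
# Mirrored LV with one extra copy (RAID 1-style).
lvcreate --type raid1 -m 1 -L 1G -n lv-mirrored vg0
# Point-in-time snapshot of lv-data, with 500 MiB reserved for changed blocks.
lvcreate -s -L 500M -n lv-snap /dev/vg0/lv-data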
Conclusion:
Logical Volume Manager (LVM) offers a flexible and efficient solution for managing disk storage in Linux environments. By abstracting physical storage devices into logical volumes, LVM enables dynamic volume management, data striping, mirroring, and snapshot creation. With its rich feature set and ease of management, LVM is a valuable tool for optimizing storage resources and enhancing system flexibility and reliability.
With the theory covered, let's explore some examples:
Obtaining Disk Information with lsblk
Before performing any disk partitioning operations, it's essential to obtain information about the available disks and their partitions. The lsblk command provides a convenient way to list block devices and their attributes, including size, device type, and mount points (an example with file system details follows the default listing below).
[root@vm2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sr0 11:0 1 2K 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rl-root 253:0 0 17G 0 lvm /
└─rl-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 5G 0 disk
vdc 252:32 0 5G 0 disk
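The default listing above does not include the file system type; if that information is needed, lsblk can be asked for it explicitly (output omitted here):
# Include file system type, label, and UUID in the listing.
lsblk -f
# Or select specific columns.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINTS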
Creating a Partition with parted and Setting the LVM Flag:
# Identify the disk to partition (e.g., /dev/vdb).
[root@vm2 ~]# parted /dev/vdb print
Error: /dev/vdb: unrecognised disk label
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 5369MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
# Define the GPT partitioning scheme.
[root@vm2 ~]# parted /dev/vdb mklabel gpt
Information: You may need to update /etc/fstab.
# Create a new partition named "storage".
[root@vm2 ~]# parted /dev/vdb mkpart storage 1MiB 5G
Information: You may need to update /etc/fstab.
# Set the lvm flag on partition 1.
[root@vm2 ~]# parted /dev/vdb set 1 lvm on
Information: You may need to update /etc/fstab.
# Check results.
[root@vm2 ~]# parted /dev/vdb print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 5369MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 5368MB 5367MB storage lvm
Creating a Physical Volume (PV):
# Initialize the partition as a physical volume.
[root@vm2 ~]# pvcreate /dev/vdb1
Physical volume "/dev/vdb1" successfully created.
# Check results.
[root@vm2 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 rl lvm2 a-- <19.00g 0
/dev/vdb1 lvm2 --- <5.00g <5.00g
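As an aside, the partitioning step is optional: pvcreate can also initialize an entire, unpartitioned disk as a physical volume. A minimal sketch, with /dev/vdX standing in for a spare disk that is not used elsewhere in this article:
# Initialize a whole disk (no partition table) as a physical volume.
pvcreate /dev/vdX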
Creating a Volume Group (VG):
# Create a volume group.
[root@vm2 ~]# vgcreate vg-projects /dev/vdb1
Volume group "vg-projects" successfully created
# Check results.
[root@vm2 ~]# vgs
Devices file /dev/vdb is excluded: device is partitioned.
VG #PV #LV #SN Attr VSize VFree
rl 1 2 0 wz--n- <19.00g 0
vg-projects 1 0 0 wz--n- <5.00g <5.00g
Creating Logical Volumes (LV) with an XFS File System
# Create logical volumes within the volume group.
[root@vm2 ~]# lvcreate -n lv-project1 -L 2.5g vg-projects
Logical volume "lv-project1" created.
[root@vm2 ~]# lvcreate -n lv-project2 -l 100%FREE vg-projects
Logical volume "lv-project2" created.
# Format the logical volumes with an XFS file system.
[root@vm2 ~]# mkfs.xfs /dev/vg-projects/lv-project1
meta-data=/dev/vg-projects/lv-project1 isize=512 agcount=4, agsize=163840 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=655360, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
[root@vm2 ~]# mkfs.xfs /dev/mapper/vg--projects-lv--project2
meta-data=/dev/mapper/vg--projects-lv--project2 isize=512 agcount=4, agsize=163584 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=654336, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
# Create the mount points and mount the logical volumes.
mkdir -p /mnt/data/project{1,2}
mount /dev/vg-projects/lv-project1 /mnt/data/project1
mount /dev/vg-projects/lv-project2 /mnt/data/project2
# Check results.
[root@vm2 ~]# df -h | grep projects
/dev/mapper/vg--projects-lv--project1 2.5G 50M 2.4G 2% /mnt/data/project1
/dev/mapper/vg--projects-lv--project2 2.5G 50M 2.4G 2% /mnt/data/project2
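These mounts will not survive a reboot. To make them persistent, entries can be added to /etc/fstab; the lines below are a sketch based on the device-mapper paths shown above, and the mount options may need adjusting for your system:
# Example /etc/fstab entries for the two logical volumes.
/dev/mapper/vg--projects-lv--project1  /mnt/data/project1  xfs  defaults  0 0
/dev/mapper/vg--projects-lv--project2  /mnt/data/project2  xfs  defaults  0 0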
Expanding a Volume Group (VG) with a new disk
# Identify the free disk.
[root@vm2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sr0 11:0 1 2K 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rl-root 253:0 0 17G 0 lvm /
└─rl-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 5G 0 disk
└─vdb1 252:17 0 5G 0 part
├─vg--projects-lv--project1 253:2 0 2.5G 0 lvm /mnt/data/project1
└─vg--projects-lv--project2 253:3 0 2.5G 0 lvm /mnt/data/project2
vdc 252:32 0 5G 0 disk
# Prepare the newly attached disk.
[root@vm2 ~]# parted /dev/vdc mklabel gpt
Information: You may need to update /etc/fstab.
[root@vm2 ~]# parted /dev/vdc mkpart storage_expansion 1MiB 5G
Information: You may need to update /etc/fstab.
# Initialize the new partition as a physical volume.
[root@vm2 ~]# pvcreate /dev/vdc1
Physical volume "/dev/vdc1" successfully created.
# Display the volume group before adding the new physical volume.
[root@vm2 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rl 1 2 0 wz--n- <19.00g 0
vg-projects 1 2 0 wz--n- <5.00g 0
# Add the new physical volume to the volume group.
[root@vm2 ~]# vgextend vg-projects /dev/vdc1
Devices file /dev/vdb is excluded: device is partitioned.
Volume group "vg-projects" successfully extended
# Check results.
[root@vm2 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rl 1 2 0 wz--n- <19.00g 0
vg-projects 2 2 0 wz--n- 9.99g 5.00g
Expanding Logical Volumes (LV) with an XFS File System
# Extend the logical volume.
[root@vm2 ~]# lvextend -L 3G /dev/vg-projects/lv-project1
Size of logical volume vg-projects/lv-project1 changed from 2.50 GiB (640 extents) to 3.00 GiB (768 extents).
Logical volume vg-projects/lv-project1 successfully resized.
[root@vm2 ~]# lvextend -L 3.5G /dev/vg-projects/lv-project2
Size of logical volume vg-projects/lv-project2 changed from <2.50 GiB (639 extents) to 3.50 GiB (896 extents).
Logical volume vg-projects/lv-project2 successfully resized.
# Grow the XFS file systems into the new space.
[root@vm2 ~]# xfs_growfs /mnt/data/project1
meta-data=/dev/mapper/vg--projects-lv--project1 isize=512 agcount=4, agsize=163840 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=655360, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 655360 to 786432
[root@vm2 ~]# xfs_growfs /mnt/data/project2
meta-data=/dev/mapper/vg--projects-lv--project2 isize=512 agcount=4, agsize=163584 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=654336, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 654336 to 917504
# Check results.
[root@vm2 ~]# lvs | grep projects
lv-project1 vg-projects -wi-ao---- 3.00g
lv-project2 vg-projects -wi-ao---- 3.50g
[root@vm2 ~]# df -h | grep projects
/dev/mapper/vg--projects-lv--project1 3.0G 54M 2.9G 2% /mnt/data/project1
/dev/mapper/vg--projects-lv--project2 3.5G 57M 3.4G 2% /mnt/data/project2
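As a side note, the extend-then-grow sequence above can be collapsed into a single step with the -r (--resizefs) option, which tells lvextend to grow the file system after resizing the volume. An equivalent one-step sketch for the first volume:
# Extend the LV and grow the mounted XFS file system in one step.
lvextend -r -L 3G /dev/vg-projects/lv-project1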
Shrinking a Logical Volume (LV) with an XFS File System
# Create some dummy files.
[root@vm2 ~]# cd /mnt/data/project1/
[root@vm2 project1]# dd if=/dev/urandom bs=1024 count=5 of=poc_data
5+0 records in
5+0 records out
5120 bytes (5.1 kB, 5.0 KiB) copied, 0.000123102 s, 41.6 MB/s
[root@vm2 project1]# dd if=/dev/urandom bs=1024 count=5 of=test_data
5+0 records in
5+0 records out
5120 bytes (5.1 kB, 5.0 KiB) copied, 0.000155395 s, 32.9 MB/s
[root@vm2 project1]# dd if=/dev/urandom bs=1024 count=5 of=hotfix_data
5+0 records in
5+0 records out
5120 bytes (5.1 kB, 5.0 KiB) copied, 0.000124846 s, 41.0 MB/s
[root@vm2 project1]# dd if=/dev/urandom bs=1024 count=5 of=deploy_data
5+0 records in
5+0 records out
5120 bytes (5.1 kB, 5.0 KiB) copied, 0.000118771 s, 43.1 MB/s
# Let's create a backup logical volume for the projects.
[root@vm2 project1]# lvcreate -n lv-backups -l 100%FREE vg-projects
Logical volume "lv-backups" created.
[root@vm2 project1]# mkfs.xfs /dev/vg-projects/lv-backups
meta-data=/dev/vg-projects/lv-backups isize=512 agcount=4, agsize=228864 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=915456, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done
# Mount the backup volume.
[root@vm2 project1]# mkdir /mnt/data/projects_backups
[root@vm2 project1]# mount /dev/vg-projects/lv-backups /mnt/data/projects_backups/
[root@vm2 project1]# df -h | grep backups
/dev/mapper/vg--projects-lv--backups 3.5G 57M 3.4G 2% /mnt/data/projects_backups
# Install xfsdump
[root@vm2 project1]# dnf install xfsdump -y
Installed:
attr-2.5.1-3.el9.x86_64 xfsdump-3.1.12-4.el9_3.x86_64
Complete!
# Back up the data with xfsdump.
[root@vm2 ~]# xfsdump -l 0 -L "project1 data" -f /mnt/data/projects_backups/project1.dump /dev/vg-projects/lv-project1
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.12 (dump format 3.0) - type ^C for status and control
xfsdump: level 0 dump of vm2.lab.hackelarre.cc:/mnt/data/project1
xfsdump: dump date: Sun Feb 4 13:18:35 2024
xfsdump: session id: 03c80d5e-1cee-4160-a3a7-6ecba970fef9
xfsdump: session label: "project1 data"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 54848 bytes
--------------------------------- end dialog ---------------------------------
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 44248 bytes
xfsdump: dump size (non-dir files) : 20608 bytes
xfsdump: dump complete: 10 seconds elapsed
xfsdump: Dump Summary:
xfsdump: stream 0 /mnt/data/projects_backups/project1.dump OK (success)
xfsdump: Dump Status: SUCCESS
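Before relying on the dump, its contents can be listed without restoring anything; a quick sanity check might look like this:
# List the contents of the dump (table of contents only, nothing is restored).
xfsrestore -t -f /mnt/data/projects_backups/project1.dump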
# Unmount the volume to be reduced.
[root@vm2 ~]# umount /mnt/data/project1
# Try to reduce the logical volume; shrinking the file system is not supported on XFS.
[root@vm2 ~]# lvreduce -r -L -1G /dev/vg-projects/lv-project1
File system xfs found on vg-projects/lv-project1.
File system size (3.00 GiB) is larger than the requested size (2.00 GiB).
File system reduce is required and not supported (xfs).
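For file systems that do support shrinking, such as ext4, the -r option would resize the file system and the logical volume together. A sketch with a hypothetical ext4 volume lv-ext4 (not part of this setup):
# Shrink an ext4 file system and its LV together; lv-ext4 is hypothetical.
lvreduce -r -L -1G /dev/vg-projects/lv-ext4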
# Let's destroy and recreate the logical volume instead.
[root@vm2 ~]# lvremove /dev/vg-projects/lv-project1
Do you really want to remove active logical volume vg-projects/lv-project1? [y/n]: y
Logical volume "lv-project1" successfully removed.
[root@vm2 ~]# lvcreate -n lv-project1 -L 2g vg-projects
[root@vm2 ~]# mkfs.xfs /dev/vg-projects/lv-project1
[root@vm2 ~]# mount /dev/vg-projects/lv-project1 /mnt/data/project1
# Restore data.
[root@vm2 ~]# xfsrestore -f /mnt/data/projects_backups/project1.dump /mnt/data/project1/
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.12 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: vm2.lab.hackelarre.cc
xfsrestore: mount point: /mnt/data/project1
xfsrestore: volume: /dev/mapper/vg--projects-lv--project1
xfsrestore: session time: Sun Feb 4 13:18:35 2024
xfsrestore: level: 0
xfsrestore: session label: "project1 data"
xfsrestore: media label: "label"
xfsrestore: file system id: 6fbdacab-a9aa-4641-87cd-fc946602379a
xfsrestore: session id: 03c80d5e-1cee-4160-a3a7-6ecba970fef9
xfsrestore: media id: 62996160-c917-4cd2-9b5c-d6656e82dc3c
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 4 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore: stream 0 /mnt/data/projects_backups/project1.dump OK (success)
xfsrestore: Restore Status: SUCCESS
# Check results.
[root@vm2 ~]# lvs | grep project1
lv-project1 vg-projects -wi-ao---- 2.00g
[root@vm2 ~]# df -h | grep project1
/dev/mapper/vg--projects-lv--project1 2.0G 47M 1.9G 3% /mnt/data/project1
[root@vm2 ~]# ls /mnt/data/project1/
deploy_data hotfix_data poc_data test_data
Destroying a Logical Volume (LV) and Reclaiming the Free Space
# Check volume group space.
[root@vm2 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rl 1 2 0 wz--n- <19.00g 0
vg-projects 2 3 0 wz--n- 9.99g 1.00g
# Unmount the logical volume
[root@vm2 ~]# umount /mnt/data/projects_backups/
# Remove the logical volume
[root@vm2 ~]# lvremove /dev/vg-projects/lv-backups
Do you really want to remove active logical volume vg-projects/lv-backups? [y/n]: y
Logical volume "lv-backups" successfully removed.
# Check results.
[root@vm2 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rl 1 2 0 wz--n- <19.00g 0
vg-projects 2 2 0 wz--n- 9.99g 4.49g
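The freed extents remain inside the volume group. If the goal were to return the second disk to non-LVM use entirely, the physical volume could also be removed from the group after relocating any extents still allocated on it; a sketch:
# Move any allocated extents off the PV, remove it from the VG, and wipe the LVM label.
pvmove /dev/vdc1
vgreduce vg-projects /dev/vdc1
pvremove /dev/vdc1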
These examples cover common Logical Volume Manager (LVM) operations in Linux: creating physical volumes, volume groups, and logical volumes, and then expanding, shrinking, and destroying them, all on top of the XFS file system. Remember to proceed with caution and make sure you have proper backups before making significant changes to your disk configuration.