The Linux Logical Volume Manager (LVM) is a software layer between the physical disks and the operating system's view of them, designed to make storage easier to manage, replace, and extend. It is used in data centers both to upgrade disk hardware and to mirror data to prevent loss. There are, of course, alternatives: hardware RAID generally performs better but is more restrictive; for example, you cannot sanely replace a disk in a hardware RAID0 array. There is also mdadm, or software RAID, an OS-level implementation of RAID that comes with similar shortcomings. LVM is more flexible and allows configurations that RAID cannot offer. That said, because it is a pure software solution (made up of kernel modules and user space daemons), there is a performance cost, and you will lose some speed compared to using the disks natively.
The Debian wiki explains the core concepts well, so I will not attempt to compete with it; see https://wiki.debian.org/LVM. For a more detailed tutorial on LVM, see Red Hat's excellent documentation at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Cluster_Logical_Volume_Manager/ch_Introduction-CLVM.html.
The core concepts I will use here are:
- PV - physical volume
- VG - volume group
- LV - logical volume
All LVM commands use these initials to designate the above concepts.
Installation
If you are running any of the official Ubuntu images from Hardkernel’s repository, this is all you need to do:
$ sudo apt install lvm2
This will install the user space tools and daemons, plus everything else you need to work with LVM.
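As a quick sanity check that everything is in place, you can ask the tools to report their version; this is just my habit after an install, not a required step:
$ sudo lvm version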
Cloudshell2
Cloudshell2 offers hardware RAID with 2 disks, but your upgrade options are somewhat limited unless you have a way to clone the array each time you want to swap in larger drives. The ODROID Wiki explains how to set up the JMicron controller at https://wiki.odroid.com/accessory/add-on_boards/xu4_cloudshell2/raid_setting. If you want to use LVM, you will need the JBOD setting. You can also run LVM on top of a hardware RAID configuration such as RAID0 or RAID1, but in the context of just 2 disks I think that negates any advantage LVM would bring. After you connect your disks, you will want to partition them. The LVM documentation recommends that you do not use raw disks as PVs (physical volumes) and use partitions instead, even if it is a single partition spanning the whole disk. That is exactly what I did in my setup: 2x3TB HDDs, each containing one partition that fills the disk.
A quick way to partition your disk is with the following commands:
$ sudo fdisk /dev/sda
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-621, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-621, default 621):
Command (m for help): w
For more information on disk partitioning, please refer to https://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html.
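If you prefer a non-interactive route, the same single, disk-spanning partition can be created with parted. This is only a sketch of one possible invocation; it assumes the target is /dev/sda and that a GPT label is acceptable for your setup:
$ sudo parted -s /dev/sda mklabel gpt
$ sudo parted -s /dev/sda mkpart primary 0% 100%
$ sudo parted -s /dev/sda set 1 lvm on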
One of the cool things about LVM is that you can plan and simulate your setup with loopback devices before actually implementing it. The following section details 2 identically sized disks used for 2 striped volumes (for performance, not safety), which is the setup that I did. As a bonus step, we are also going to simulate upgrading one of the disks with a new disk twice its size, something I also did: simulated first, then implemented. In this final scenario, part of your volume will no longer be striped because the disks are no longer of equal size.
Set up loopback devices
$ dd if=/dev/zero of=/tmp/hdd1.img bs=1G count=1
$ dd if=/dev/zero of=/tmp/hdd2.img bs=1G count=1
# twice the space
$ dd if=/dev/zero of=/tmp/hdd3.img bs=1G count=2
$ losetup -f
$ losetup /dev/loop0 /tmp/hdd1.img
$ losetup /dev/loop1 /tmp/hdd2.img
$ losetup /dev/loop2 /tmp/hdd3.img
So we have created 3 disks: two 1GB disks and one 2GB disk. Feel free to use any sizes you want; it is the proportions that matter, not the actual sizes.
Create physical volumes (PV)
$ pvcreate /dev/loop0
$ pvcreate /dev/loop1
$ pvcreate /dev/loop2
What this did was tell LVM that we plan to use these disks as physical support for our future logical volumes. The cool part to remember here is that each PV is given a unique ID that is written to the disk, so even if you move the disks around in your system, LVM will find them based on their IDs. This will come in handy when we upgrade our disks in the Cloudshell2 enclosure and one of the new disks is first connected via USB 3.0 and then swapped with one of the disks in the enclosure.
We need to put our PVs in a Volume Group before using them to create logical volumes; this is a mandatory step. Also note that it is not possible to create a logical volume using PVs from different VGs.
$ vgcreate vgtest /dev/loop0 /dev/loop1
Now that we have created a VG using our 2 simulated 1GB disks, we can check the status of our setup at any moment using these commands:
$ pvs
$ vgs
$ pvdisplay
$ lvdisplay
Create logical volumes (LV)
Now we will create the volumes that the OS will see as usable drives. In this scenario, I am creating 2 striped volumes: one of 1GB and another that fills up any remaining space.
$ lvcreate -i2 -I4 -L1G -nlvdata1 vgtest
$ lvcreate -i2 -I4 -l100%FREE -nlvdata2 vgtest
The parameters are:
- -i2 : stripe this volume across 2 PVs
- -I4 : use a stripe size of 4 KiB, i.e., how much data is written to one PV before moving on to the next (not to be confused with the extent size, LVM's equivalent of a block, which defaults to 4MB and is set at the VG level)
- -n : what to name the volume
The last parameter is the volume group to operate on. The size is specified using the -L or -l option: -L requires an explicit size, while -l allows percentages and other specifiers. At the end, we will have 2 volumes that are evenly distributed across our 2 PVs in stripes, similar to a RAID0 (actually, 2 RAID0s, one for each logical volume, or LV). At this point, we also need to format our new logical volumes with the filesystem we want to use, which you do by running the following commands:
$ mkfs.ext4 /dev/mapper/vgtest-lvdata1
$ mkfs.ext4 /dev/mapper/vgtest-lvdata2
This works just like on any regular partition; notice, though, the path of the devices: these logical devices are exposed by LVM. For extra safety, mount these volumes and write some test files to them, just like you would with a regular disk. This will allow you to test integrity at the end.
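Here is one way I would do that quick test; the mount points and file name are placeholders of my own choosing, so adjust them to your layout:
$ sudo mkdir -p /mnt/data1 /mnt/data2
$ sudo mount /dev/mapper/vgtest-lvdata1 /mnt/data1
$ sudo mount /dev/mapper/vgtest-lvdata2 /mnt/data2
$ echo "lvm test" | sudo tee /mnt/data1/canary.txt
$ md5sum /mnt/data1/canary.txt
Keep a note of the checksum so you can compare it against the same file after the disk swap.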
Once you have the hang of it, you can implement the above scenario with real disks instead of loopbacks. Just replace /dev/loop0 and /dev/loop1 with /dev/sda1 and /dev/sdb1 (the partitions we created earlier) and adjust the sizes of your LVs.
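If you want the real volumes mounted automatically at boot, an /etc/fstab entry per LV does the trick. The device paths below reuse the example VG and LV names from this article, and the mount points are again placeholders:
/dev/mapper/vgtest-lvdata1  /mnt/data1  ext4  defaults  0  2
/dev/mapper/vgtest-lvdata2  /mnt/data2  ext4  defaults  0  2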
Upgrading your disks
Now, here is where LVM really shines: unlike hardware RAID, which can be quite rigid about upgrades (unless you are using a mirrored setup), LVM allows for all kinds of crazy disk arrangements. This part is based on the cool tutorial at https://www.funtoo.org/LVM_Fun.
The scenario we are going to implement is as follows: we will replace one of the disks in the setup with one that has double the original capacity; e.g., if we have 2x2GB disks, we will replace one of them with a 4GB disk. To figure out which disk is which, use these commands:
$ sudo smartctl -i /dev/sda1
$ sudo smartctl -i /dev/sdb1
Make sure to run this on the partitions and not on the disks; because of the JMicron controller sitting in front of the disks, you will not get any info from the disks themselves. The above command reports the disk product code, such as ST2000DM for a 2TB Seagate Barracuda, which will help you decide which physical disk you want to replace.
The full procedure is as follows; a command-level sketch with real device names is shown after the list:
- Connect the new spare disk via the second USB 3.0 port using an external enclosure (the Cloudshell2 only supports 2 SATA drives and both ports are occupied right now)
- Create a PV (physical volume) on the new disk
- Add the new PV to the existing VG (volume group)
- Unmount all VG volumes and/or freeze allocation on the PV to migrate
- pvmove one of the 2 existing PVs onto this new PV
- Leave it overnight, since it is going to copy a 2 TB disk sector by sector
- Reduce the VG by removing the old PV (the one moved in the previous step)
- Shut down and swap out the old disk with the new one
- Boot and check that the LVs (logical volumes) are correctly mapped to the PVs
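Translated into commands on real hardware, the sequence looks roughly like this sketch. The device names and the vgdata volume group are assumptions on my part (here /dev/sdc1 is the partition on the new disk in the USB enclosure and /dev/sdb1 is the PV being retired), so double-check yours with pvs and smartctl before running anything:
$ sudo pvcreate /dev/sdc1
$ sudo vgextend vgdata /dev/sdc1
$ sudo umount /mnt/data1 /mnt/data2
$ sudo pvmove --atomic -v -A y /dev/sdb1 /dev/sdc1
$ sudo vgreduce vgdata /dev/sdb1
$ sudo pvremove /dev/sdb1
# shut down, physically swap the disks, boot, then verify:
$ sudo pvs
$ sudo lvs -a -o +devices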
Be warned that not all external USB 3.0 enclosures are supported by the UAS driver. I used an ORICO-branded one, but your mileage may vary.
Like with all things LVM, you can (and you should!) simulate the upgrade before executing it.
Attach the new PV
First, let’s attach the 3rd loop device we created earlier (the 2GB one) to our existing VG:
$ vgextend vgtest /dev/loop2
Migrate the old PV to the new PV
In this step, we migrate the old disk to the new disk:
$ pvmove --atomic -v -A y /dev/loop1 /dev/loop2
There are 2 important parameters here:
- --atomic: the move will be done transactionally; if it fails at any point, it is simply reverted, with no data loss
- -A y: automatically back up the LVM configuration so it can be restored in case something bad happens; the tool you would then use is vgcfgrestore (a brief usage sketch follows the list)
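For reference, restoring from one of those automatic backups goes roughly like this; vgtest is our example VG, and the archive file name is purely illustrative, so pick the one that --list actually reports:
$ sudo vgcfgrestore --list vgtest
# choose an archive file from the listing above (this name is just an example)
$ sudo vgcfgrestore -f /etc/lvm/archive/vgtest_00001-123456789.vg vgtest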
If you interrupt the process or experience a power loss, you can resume it by running the following command:
$ pvmove
That said, I would suggest aborting and starting again instead:
$ pvmove --abort
Because our pvmove was atomic, this abort will restore everything to its original state (if we had not used --atomic, some PEs, or physical extents, would have been moved while others remained allocated on the old volume, and you would need to move them manually). In the real world this step takes quite a while; I usually just let it run overnight (I was moving 2 TB of data).
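If you want to keep an eye on a long-running move, the progress is also visible in the LV listing; this is just a convenience I use, not part of the procedure itself, and the exact column name may differ between LVM versions:
$ sudo watch -n 60 lvs -a
# the copy-progress column (Cpy%Sync or Copy%, depending on version) of the temporary pvmove volume reflects progress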
Resize the new PV
We now need to resize the newly moved PV to include the extra free space on the disk. This is simply done with the following command:
$ pvresize /dev/loop2
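At this point the extra space should show up as free extents in the volume group; a quick look at the reports confirms it before we extend anything:
$ sudo pvs /dev/loop2
$ sudo vgs vgtest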
Resize the Logical Volume
Now we can take advantage of that brand new disk space and extend one of our LVs to include it. Because our setup uses stripes, and because the new free space exists on only one PV, the new space cannot be striped, so we will need to use this command:
$ lvextend -i1 -l +100%FREE /dev/vgtest/lvdata2
The parameter -i1 tells LVM that we are reducing to just 1 stripe for the new segment. This will have a performance impact, as data written to this part of the volume will sit on a single disk. By running the “lvdisplay -m” command, we can inspect the resulting setup:
$ lvdisplay -m /dev/vgtest/lvdata2
  --- Logical volume ---
  LV Path                /dev/vgtest/lvdata2
  LV Name                lvdata2
  VG Name                vgtest
  LV UUID                vDefWQ-1ugy-1Sp5-T1JL-8RQo-BWqJ-Sldyr2
  LV Write Access        read/write
  LV Creation host, time odroid, 2018-03-06 17:43:44 +0000
  LV Status              available
  # open                 0
  LV Size                1.99 GiB
  Current LE             510
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2
  --- Segments ---
  Logical extents 0 to 253:
    Type                striped
    Stripes             2
    Stripe size         4.00 KiB
    Stripe 0:
      Physical volume   /dev/loop0
      Physical extents  128 to 254
    Stripe 1:
      Physical volume   /dev/loop2
      Physical extents  128 to 254
  Logical extents 254 to 509:
    Type                linear
    Physical volume     /dev/loop2
    Physical extents    255 to 510
As you can see, the second LV contains a linear segment at the end; that is the new space we just added, which could not be striped. In theory, if you replace the second disk as well, you can re-stripe it, but I have not yet found a safe way to do that. If I do, I will write another article about it.
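One more note that applies once there is a filesystem on the LV, as in our ext4 example: lvextend only grows the logical volume, not the filesystem inside it, so to actually use the space you also need to grow the filesystem, either afterwards with resize2fs or in one step by passing -r to lvextend:
$ sudo resize2fs /dev/vgtest/lvdata2
# or grow LV and filesystem together (run instead of the plain lvextend above):
$ sudo lvextend -r -i1 -l +100%FREE /dev/vgtest/lvdata2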
Recycle the spare disk
Now it’s time to remove the disk we migrated from our setup:
$ vgreduce vgtest /dev/loop1
That just removes it from the volume group. You can also use pvremove to wipe the LVM label if you want. We are also going to simulate physically removing the disk:
$ losetup -d /dev/loop1
Now, let's simulate the part where we shut down the system and put the new disk directly in the Cloudshell2 (remember that we had it in an external enclosure), effectively replacing the old one. In this step, disk 3 will go offline, then come back as a new disk:
$ losetup -d /dev/loop2
$ losetup /dev/loop1 /tmp/hdd3.img
If you run pvs, vgs, and lvs, they should indicate that your volumes are intact:
  PV         VG     Fmt  Attr PSize    PFree
  /dev/loop0 vgtest lvm2 a--  1020.00m    0
  /dev/loop1 vgtest lvm2 a--     2.00g    0
Finally, mount the volumes and check that your test files are still there. For comments, questions, and suggestions, please visit the original article at https://www.cristiansandu.ro/2018/03/06/lvm-fun-swap-out-disk-in-lvm2-stripe/.
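To close the loop on the earlier integrity check, remount the volumes and compare the checksum of the test file; the mount points and file name are the same placeholders I used above:
$ sudo mount /dev/mapper/vgtest-lvdata1 /mnt/data1
$ sudo mount /dev/mapper/vgtest-lvdata2 /mnt/data2
$ md5sum /mnt/data1/canary.txt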