A bootable USB stick made with Refractasnapshot (refractasnapshot-10.1.1 (20171213)) on Devuan Ascii is unable to use GParted to check disks on a target machine. GParted complains that the disks on the target machine are still in use. Listing the device names for the partitions shows that they are mapped as logical volumes managed by lvm2:
ls -la /dev/disk/by-id
total 0
..
..
lrwxrwxrwx 1 root root 9 Nov 27 21:26 ata-VB0250EAVER_Z2AABSSW -> ../../sda
lrwxrwxrwx 1 root root 10 Nov 27 21:26 ata-VB0250EAVER_Z2AABSSW-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Nov 27 21:26 ata-VB0250EAVER_Z2AABSSW-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Nov 27 21:26 ata-VB0250EAVER_Z2AABSSW-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Nov 27 21:26 ata-VB0250EAVER_Z2AABSSW-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 Nov 27 21:02 dm-name-VB0250EAVER_S2A0GLHE -> ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 27 21:02 dm-name-VB0250EAVER_S2A0GLHE-part1 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 27 21:02 dm-uuid-mpath-VB0250EAVER_S2A0GLHE -> ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 27 21:02 dm-uuid-part1-mpath-VB0250EAVER_S2A0GLHE -> ../../dm-1
So I stopped and removed the lvm2, lvm2-lvmetad, and lvm2-lvmpolld services and reran refractasnapshot.
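For reference, a minimal sketch of stopping and disabling those services on a sysvinit system such as Devuan Ascii (the update-rc.d disable step is one option; sysv-rc-conf would do the same job):

$ sudo service lvm2 stop
$ sudo service lvm2-lvmetad stop
$ sudo service lvm2-lvmpolld stop
$ sudo update-rc.d lvm2 disable    # keep the package installed, skip it at boot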
However, this wasn't enough: the partitions were still mapped as logical volumes, and GParted was still unable to check the disks on the target machine.
How can refractasnapshot be configured to use "plain vanilla" physical volumes, so that GParted can do its work and check and partition disks on a target machine?
This would make it much easier to deploy Devuan on existing machines at my workplace.
Thanks in advance!
I've seen the same issue with mdadm and raid. To avoid having to manually unmount and close the volume group, you could disable the lvm2 service before making a snapshot. That can be done with update-rc.d or sysv-rc-conf. If you want to get fancy, you could have lvm enabled or disabled in different runlevels, and then make boot menu entries for the different levels. (There's already one for runlevel 3 as text mode in the stock refractasnapshot menu.) That way, it would be available if you need to boot live and access files in the lvm.
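A minimal sketch of that runlevel approach, assuming sysv-rc's update-rc.d syntax (the split shown here is illustrative):

$ sudo update-rc.d lvm2 disable      # off in all standard runlevels
$ sudo update-rc.d lvm2 enable 3     # keep it available in runlevel 3 only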
Hi fsmithred,
Thanks for the tips re lvm and mdadm.
/dev/mapper persisted after the lvm2 and mdadm services were deactivated via sysv-rc-conf.
In addition, I edited /etc/mdadm/mdadm.conf to prevent auto-starting of any RAID arrays, and removed the lvm partitions with dmsetup remove <partition>, which got the system back to raw disks and enabled GParted to check the physical disks.
Still, after a reboot, /dev/mapper persisted!
What else could be keeping /dev/mapper in place? What else could be done to permanently remove /dev/mapper?
Thanks in advance!
The details:
# fdisk -l
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000672f8
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 58593279 58591232 28G 83 Linux
/dev/sda2 58595326 488396799 429801474 205G 5 Extended
/dev/sda5 58595328 478515199 419919872 200.2G 83 Linux
/dev/sda6 478517248 488396799 9879552 4.7G 82 Linux swap / Solaris
Disk /dev/sdb: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00029c92
Device Boot Start End Sectors Size Id Type
/dev/sdb1 63 488392064 488392002 232.9G 83 Linux
Disk /dev/mapper/VB0250EAVER_S2A0GLHE: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00029c92
Device Boot Start End Sectors Size Id Type
/dev/mapper/VB0250EAVER_S2A0GLHE-part1 63 488392064 488392002 232.9G 83 Linux
So then I edited /etc/mdadm/mdadm.conf to prevent assembly of the RAID arrays at boot, adding:
# ignore all hard drives
ARRAY <ignore> UUID=00000000:00000000:00000000:00000000
# don't autostart
AUTO -all
Rebooted, and /dev/mapper persisted. Then I removed the lvm partitions via dmsetup:
# dmsetup ls
VB0250EAVER_S2A0GLHE (253:0)
VB0250EAVER_S2A0GLHE-part1 (253:1)
# dmsetup remove VB0250EAVER_S2A0GLHE-part1
# dmsetup remove VB0250EAVER_S2A0GLHE
Rebooted, and /dev/mapper was still there!
I don't know how that could happen. Are you making new snapshots or doing this on a persistent partition after making changes? Either of those ways should work.
Doh! You need to rebuild the initramfs without lvm support. I'm not sure if disabling it in all runlevels before running update-initramfs -u is enough. There may be a config file with a setting like the one for mdadm. Another way to do it would be to remove lvm2 and mdadm; the initramfs will then be rebuilt automatically without those items.
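A sketch of that second route, assuming lvm and raid aren't needed on the snapshot at all:

$ sudo apt-get purge lvm2 mdadm    # removal normally triggers an initramfs rebuild
$ sudo update-initramfs -u         # or rebuild it explicitly afterwards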
Hi fsmithred,
Thanks for "update-initramfs -u"!
So I stopped the lvm and mdadm services, removed the mapped partitions with dmsetup, and then removed and purged lvm2, mdadm, and dmraid.
Ran "update-initramfs -u" and "update-grub", and rebooted the host machine.
Still, the /dev/mapper devices were there, so something else is starting the device mapper.
Removed the mapped drives with "dmsetup remove xxx" and checked that the drives and partitions were active as "raw" devices in /dev/disk/by-id,
then re-ran refractasnapshot and made a bootable USB stick.
On booting, GParted couldn't check a drive, as said drive was already in use by the device mapper, as in /dev/disk/by-id/xxxxx -> ../../dm-0.
So I located the mapped drives with "dmsetup ls", removed them with "dmsetup remove xxxx",
and GParted could then scan the drives.
Next, I looked at removing dmsetup on the host machine and found that a lot of packages depend on it, so maybe one of those is starting the device mapper?
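A quick way to list those reverse dependencies, and to narrow them down to what is actually installed (standard apt-cache options; dmsetup is the package name in question):

$ apt-cache rdepends dmsetup               # everything that depends on dmsetup
$ apt-cache --installed rdepends dmsetup   # only packages installed locally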
BTW, I installed a fresh copy of Devuan Ascii on a different machine and found that all the drives and partitions were listed as "raw" devices in /dev/disk/by-id, without any of the dm-0 etc. entries that the device mapper was assigning on the host machine above.
This tells me that it's some package I installed after the initial Devuan Ascii install.
So I'll read about the device mapper over the weekend and test to see if I can find the package(s) involved.
Thanks again!
Found the service that is mapping the drives!
/etc/init.d/multipath-tools
While this is running and mapping drives:
1) GParted can't scan or fix drives.
2) Refracta Installer will be unable to install on the selected drives.
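A quick check that the maps really come from multipath rather than lvm (a sketch using standard device-mapper tools):

$ sudo dmsetup table    # the target field reads "multipath" instead of "linear"
$ sudo multipath -ll    # shows the multipath topology, if any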
To fix, and to be able to use GParted and Refracta Installer again:
1) Stop the service manually:
$ sudo service multipath-tools stop
2) Find the mapped drives:
$ sudo dmsetup ls
ST3500630NS_5QG196JX-part4 (254:4)
ST3500630NS_5QG196JX-part3 (254:3)
ST3500630NS_5QG196JX-part2 (254:2)
ST3500630NS_5QG196JX-part1 (254:1)
ST3500630NS_5QG196JX (254:0)
3) Remove the drives:
$ sudo dmsetup remove ST3500630NS_5QG196JX-part{1,2,3,4}
$ sudo dmsetup remove ST3500630NS_5QG196JX
4) Check the drives again:
$ sudo dmsetup ls
No devices found
5) Stop this service from starting at boot with sysv-rc-conf:
$ sudo sysv-rc-conf
Scroll down to multipath-tools and delete the "X" from all runlevels.
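Equivalently, without the curses interface (a sketch, assuming sysv-rc's update-rc.d as mentioned earlier in the thread):

$ sudo update-rc.d multipath-tools disable
$ sudo update-initramfs -u    # in case multipath hooks also ended up in the initramfs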
Reboot and enjoy the raw drives once again!
Great! Now I can get busy with GParted and Refracta Installer!
Good one to know. I'd never heard of multipath-tools and don't have it installed, and I've never used the dmsetup commands either. Thanks for that!