I have a working Devuan setup on my NUC:
gene@devuan-nuc:~$ grep -i pretty /etc/os-release
PRETTY_NAME="Devuan GNU/Linux 4 (chimaera)"
I have had this running for a while on a 240 GB M.2 SSD. This week I added a second disk, a WD 2.5" 1 TB spinning-rust drive, and have been attempting to set it up with another volume group so I can move /home, /var, and swap off of the M.2 SSD and onto the spinning rust to reduce writes on the SSD. However, lvcreate is not working as expected:
[ROOT@devuan-nuc ~] # lvcreate -n var -L 20g devuan-nuc-vg2
/dev/devuan-nuc-vg2/var: not found: device not cleared
Aborting. Failed to wipe start of new LV.
Everything else is working as expected:
[ROOT@devuan-nuc ~] # pvdisplay /dev/sdb
--- Physical volume ---
PV Name /dev/sdb
VG Name devuan-nuc-vg2
PV Size 931.51 GiB / not usable 1.71 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 238467
Free PE 238467
Allocated PE 0
PV UUID vNIiio-PyV1-Efg7-iZIL-hdVE-oKF6-3RTy1a
[ROOT@devuan-nuc ~] # vgdisplay devuan-nuc-vg2
--- Volume group ---
VG Name devuan-nuc-vg2
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 931.51 GiB
PE Size 4.00 MiB
Total PE 238467
Alloc PE / Size 0 / 0
Free PE / Size 238467 / 931.51 GiB
VG UUID 1jB84x-3kAV-KHLW-sD6k-G00r-1xvX-3fWCy0
Web search has only led me to some comments about a "known Debian problem" with no solution. I have been working on this since this past Sunday and cannot find the answer on my own. How do I get past this hurdle, please?
“Do not meddle in the affairs of dragons for you are crunchy and taste good with ketchup.”
-- Suzanne McMinn
Offline
You might have forgotten to activate the volume group?
Offline
Thank you for the suggestion. That is not the problem:
[ROOT@devuan-nuc ~] # vgchange --activate y devuan-nuc-vg2
0 logical volume(s) in volume group "devuan-nuc-vg2" now active
[ROOT@devuan-nuc ~] # lvcreate -n var -L 20g devuan-nuc-vg2
/dev/devuan-nuc-vg2/var: not found: device not cleared
Aborting. Failed to wipe start of new LV.
Offline
I believe the next attempt would be to: a) deactivate that volume group, then b) manually remove spurious links for that volume group in /dev and /dev/mapper, and then c) activate it again.
That's on the assumption that the disk itself doesn't happen to contain some old LVM metadata on the partitions. As you know, LVM writes metadata onto the partitions, and some of its admin tools get confused if there are even accidental remnants of old metadata.
One step to deal with that would be to delete the physical volume from the volume group, then clear it fully, and then add it again.
Offline
Thank you for your suggestions.
This is a brand new disk, just taken out of the factory packaging this past weekend. It was completely blank when I added it as a PV, created the VG for the first time, and attempted to create the first logical volume on it, using the same LVM steps at the CLI that I have used hundreds of times in my job as a Unix/Linux administrator at a regional ISP. Since then I have removed all the LVM data and started over from a completely blank disk more than once. There was nothing on the disk to confuse LVM from the start.
That said, my company uses primarily Red Hat / CentOS (ICK!). So I suspect there is a difference between those and Devuan / Debian that I am missing.
Last edited by UnixRocks (2022-08-31 17:59:25)
Offline
Okay, this vgs output looks weird:
[ROOT@devuan-nuc ~] # vgs
VG #PV #LV #SN Attr VSize VFree
devuan-nuc-vg 1 5 0 wz--n- <222.59g 12.00m
devuan-nuc-vg2 1 0 0 wz--n- 931.51g 931.51g
Why are both showing #PV as 1? There are TWO physical disks:
[ROOT@devuan-nuc ~] # pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 devuan-nuc-vg lvm2 a-- <222.59g 12.00m
/dev/sdb devuan-nuc-vg2 lvm2 a-- 931.51g 931.51g
Or am I misreading that? At this point I am fairly confused.
Offline
Isn't the #PV just a counter?
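Right, #PV is the number of physical volumes that belong to each volume group, not a device index. A quick illustration of that reading (this just recounts the pvs rows quoted above with awk; nothing LVM-specific):

```shell
# Count PV rows per VG, using the pvs output shown earlier in the thread.
# Each VG owns exactly one PV, hence "#PV 1" on both vgs rows.
pvs_output='/dev/sda3  devuan-nuc-vg   lvm2 a--  <222.59g  12.00m
/dev/sdb   devuan-nuc-vg2  lvm2 a--   931.51g 931.51g'

echo "$pvs_output" | awk '{count[$2]++} END {for (vg in count) print vg, count[vg]}' | sort
```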
If there is a difference between Devuan/Debian and those other OS types, it might be a difference in the lvm.conf files. Perhaps you can diff those between systems?
Offline
Yeah, I'm off down the wrong rabbit hole with the #PV thing.
This is my home system. I cannot copy files from the work systems to my home system, nor vice versa, for a diff without breaking company security rules. Besides, I am on vacation from work, which is why I have time to work on my home setup. Not gonna log in to work just for this; someone would notice, and I would get pinged for a work thing.
I can take a gander at the work systems when I am back at work next week to see if I can find a difference with my eyeball-mark-I.
Offline
I wonder why there is no new volume group directory, nor dm-? file created for this attempt at making a new logical volume? I see these for the original install:
[ROOT@devuan-nuc ~] # ls -dl /dev/devuan-nuc-vg*
drwxr-xr-x 2 root root 140 Aug 30 05:28 /dev/devuan-nuc-vg
[ROOT@devuan-nuc ~] # ls -l /dev/devuan-nuc-vg*
total 0
lrwxrwxrwx 1 root root 7 Aug 30 05:28 home -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 30 05:28 root -> ../dm-0
lrwxrwxrwx 1 root root 7 Aug 30 05:28 swap_1 -> ../dm-2
lrwxrwxrwx 1 root root 7 Aug 30 05:28 tmp -> ../dm-3
lrwxrwxrwx 1 root root 7 Aug 30 05:28 var -> ../dm-1
[ROOT@devuan-nuc ~] # ls -l /dev/dm*
brw-rw---- 1 root disk 254, 0 Aug 30 13:28 /dev/dm-0
brw-rw---- 1 root disk 254, 1 Aug 30 13:28 /dev/dm-1
brw-rw---- 1 root disk 254, 2 Aug 30 13:28 /dev/dm-2
brw-rw---- 1 root disk 254, 3 Aug 30 13:28 /dev/dm-3
brw-rw---- 1 root disk 254, 4 Aug 30 13:28 /dev/dm-4
There should be a /dev/devuan-nuc-vg2 directory for the new volume group. It seems there should also be a /dev/dm-5(?) created when I make the new logical volume. I have never had to create any of that stuff by hand, but if needed, I will do it. Anyone here know the proper method to get those created?
Added: Obviously I can create the directory and symlinks "by hand" with mkdir and ln. My main concern is getting the dm-? disk files created correctly.
Last edited by UnixRocks (2022-08-31 18:05:53)
Offline
Huh, interesting. I am web searching to find out how to make the LVM stuff work on Devuan. Look at "Never use ..." under the Notes section on this page: https://www.thegeekdiary.com/how-to-cre … h-devices/
Yet those are used when logical volumes are made using the Devuan installer. Weird.
Last edited by UnixRocks (2022-08-31 18:41:26)
Offline
... aaaand I am back to it being a Debian thing: https://listman.redhat.com/archives/lin … 23207.html
Using the "-Zn" switch worked to partially create what I needed. These steps got my new var logical volume created and ready to use:
[ROOT@devuan-nuc ~] # lvcreate -Zn -n var -L 20g devuan-nuc-vg2
WARNING: Logical volume devuan-nuc-vg2/var not zeroed.
Logical volume "var" created.
Then I ran vgscan --mknodes, but that did not create the symlink in /dev/mapper as I was expecting:
[ROOT@devuan-nuc ~] # ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Aug 30 13:28 control
brw-rw---- 1 root disk 254, 5 Aug 31 11:10 devuan--nuc--vg2-var <<<--- yeah, no.
lrwxrwxrwx 1 root root 7 Aug 30 05:28 devuan--nuc--vg-home -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 30 05:28 devuan--nuc--vg-root -> ../dm-0
lrwxrwxrwx 1 root root 7 Aug 30 05:28 devuan--nuc--vg-swap_1 -> ../dm-2
lrwxrwxrwx 1 root root 7 Aug 30 05:28 devuan--nuc--vg-tmp -> ../dm-3
lrwxrwxrwx 1 root root 7 Aug 30 05:28 devuan--nuc--vg-var -> ../dm-1
I removed that file with rm /dev/mapper/devuan--nuc--vg2-var, and created the symlink by hand with:
[ROOT@devuan-nuc ~] # cd /dev/mapper/
[ROOT@devuan-nuc mapper] # ln -s ../dm-5 devuan--nuc--vg2-var
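For anyone wondering about the double dashes: device-mapper escapes names by doubling every "-" inside the VG name and the LV name, then joining the two with a single "-". A small sketch of that rule (dm_name is my own helper for illustration, not part of the LVM tools):

```shell
# Sketch of device-mapper name escaping: dashes inside the VG and LV
# names are doubled, then the two parts are joined with a single dash.
# dm_name is a hypothetical helper, not an LVM command.
dm_name() {
    vg=$(printf '%s' "$1" | sed 's/-/--/g')
    lv=$(printf '%s' "$2" | sed 's/-/--/g')
    printf '%s-%s\n' "$vg" "$lv"
}

dm_name devuan-nuc-vg2 var   # prints devuan--nuc--vg2-var
```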
Then I made the filesystem on the logical volume:
[ROOT@devuan-nuc mapper] # mkfs.ext4 /dev/devuan-nuc-vg2/var
mke2fs 1.46.2 (28-Feb-2021)
Discarding device blocks: done
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: e46a61e4-c322-4d45-8d26-31386bd47efb
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Now I will make a mount point, mount the sucker, add that to /etc/fstab, and see if this all survives a reboot or not.
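For anyone following along, the fstab entry for such a volume might look like this (the mount options and passno field here are illustrative, not copied from my actual setup):

```
# example /etc/fstab entry for the new logical volume
/dev/mapper/devuan--nuc--vg2-var  /var  ext4  defaults  0  2
```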
Offline
There is nothing special with LVM on Devuan. It works fine on my machines. Mostly created while installing the system.
Here are three links to HowTo's:
https://tldp.org/HOWTO/LVM-HOWTO/
https://linuxhandbook.com/lvm-guide/
https://www.howtoforge.com/linux_lvm
The first I have used a lot.
Online
There is nothing special with LVM on Devuan. It works fine on my machines. Mostly created while installing the system.
Here are three links to HowTo's:
https://tldp.org/HOWTO/LVM-HOWTO/
https://linuxhandbook.com/lvm-guide/
https://www.howtoforge.com/linux_lvm
The first I have used a lot.
I have already been through all the guides, mate. This problem with lvcreate AFTER INSTALLATION is not addressed in any of them. As you can see above, I found a workaround.
Offline
Yep, it survived a reboot. The steps I outlined above in https://dev1galaxy.org/viewtopic.php?pid=37275#p37275 will get one where one needs to be to add a logical volume after using pvcreate and vgcreate.
Note, this is not novice friendly. If you, dear reader, are a novice, do not blindly follow my steps. Try to figure out what I did and why first.
Offline
To complete this saga, after doing the steps I outlined above and rebooting, when I created my new home logical volume, all the bits were created as expected. I did not have to make any special symlinks. I just had to make the file system with mkfs.ext4, create the new /home2 mount point, mount it, and edit /etc/fstab to make it stick.
Offline
You don't mention whether(/that?) you also re-created the block device (major:minor = 254:5) as /dev/dm-5?
Offline
You don't mention whether(/that?) you also re-created the block device (major:minor = 254:5) as /dev/dm-5?
The block device was created "automagically" when I used lvcreate with the -Zn switch. Only the symlinks had to be created that first time. I am now happily running with /home, /var, and swap on the new WD 1 TB disk.
Added: All -Zn does is tell lvcreate to not attempt to zero the new device.
Last edited by UnixRocks (2022-08-31 22:09:00)
Offline
This is a brand new disk, just taken out of the factory packaging this past weekend. It was completely blank when I added it as a PV, created the VG for the first time, and attempted to create the first logical volume on it, using the same LVM steps at the CLI that I have used hundreds of times in my job as a Unix/Linux administrator at a regional ISP. Since then I have removed all the LVM data and started over from a completely blank disk more than once. There was nothing on the disk to confuse LVM from the start.
I once overheard one of the LAN admins in our company complaining loudly about a supposedly new disk not being blank (about half of the people in the canteen could hear him). He had spent several hours trying to add the disk to a Novell Netware server and could not get it to work. Eventually he removed all the old disks from the server and booted it with just the new supposedly blank disk in it and it came up as another company's server!
So don't assume a new disk is always blank. Or that a disk you send back as faulty (or part of a faulty system) won't end up being sold to a random customer. Always wipe anything that might be sensitive first.
Offline
I've heard similar anecdotes, and I have bought "refurbished" drives that have had data on them. I have never had that happen with new disks to date.
For the record, I did not ass-u-me the new disk was blank. I verified it was blank as soon as it was installed and I booted the system. Thanks for the heads up for anyone else reading this though.
Added: This is a bit of a tangent from the original post that we should probably take elsewhere. That said, if your "dead" or used disk has sensitive data on it, have it run through a disk shredder or otherwise destroy it to the point that it is unreadable. If you need to return a defective disk for a replacement, consider not doing that: just buy another new disk and destroy the defective one. These are the only ways to be sure your data on that disk does not escape your control.
Last edited by UnixRocks (2022-09-01 17:35:04)
Offline