The officially official Devuan Forum!


#51 2023-01-11 17:29:19

dcolburn
Member
Registered: 2022-11-02
Posts: 280  

Re: Server lost changes and partially reverted

The partitioning app offered to do that and I followed its instructions.

I just used Alt-F2 and fdisk -l /dev/sda

It's showing "Disklabel type: gpt" - even though I chose MBR.

I'll have to reverse directions and figure out why it's defaulting to gpt.

Maybe a BIOS setting?

EDIT: Or, do I really want to use gpt, instead?

From 2022 https://www.howtouselinux.com/post/mbr-vs-gpt
From 2013 https://www.linux.com/training-tutorial … cient-mbr/
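For the record, here is a quick way to double-check the label type outside of fdisk (a sketch; DISK is just a placeholder for the disk in question, with a guard so a wrong name only prints a message):

```shell
# Report the partition-table type lsblk sees: "gpt", or "dos" for MBR.
DISK=${DISK:-/dev/sda}
if [ -b "$DISK" ]; then
    lsblk -ndo PTTYPE "$DISK"
else
    echo "no block device at $DISK"
fi
```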

Last edited by dcolburn (2023-01-11 17:46:30)


#52 2023-01-11 18:45:39

Andre4freedom
Member
Registered: 2017-11-15
Posts: 78  

Re: Server lost changes and partially reverted

I don't know your hardware or motherboard.
My test machine can do both old-style BIOS mode and UEFI mode. (My experiences with software-RAID1 system installations in UEFI mode have been quite bad.)
However, I suggest that you stick to one scheme:
Either
- BIOS/MBR mode mainboard
- MBR/DOS disk labels
- Boot the stick with the USB NON_UEFI selection.
Or
- EFI/UEFI mainboard
- GPT disk labels
- Boot the stick with the USB UEFI selection.

When booting, EFI mainboard firmware offers you a UEFI-USB-boot option to load the USB stick. In that mode you do an EFI-mode Devuan install and the installer proposes GPT disk formatting. On MBR/BIOS hardware this would be an MBR-mode install with DOS labels on the disk. I wouldn't mix the modes, even though it is possible. I'm no specialist in UEFI tricks, but HoaS seems to know a lot more on that topic.
You may be able to mix these elements, but the result could be "interesting".
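One way to see which mode the currently running system (or installer shell) was actually booted in is the kernel's EFI interface (a sketch; the /sys path is standard on Linux):

```shell
# If the kernel was started via UEFI, /sys/firmware/efi exists.
if [ -d /sys/firmware/efi ]; then
    echo "booted in UEFI mode"
else
    echo "booted in BIOS (legacy) mode"
fi
```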

In MBR mode you can proceed as instructed in my post. In EFI mode you must have one active, bootable EFI partition. You can set aside a second such partition on your second disk; after a successful installation you can then dd the active EFI partition over to the inactive one. Maybe it works?


#53 2023-01-11 18:46:46

rolfie
Member
Registered: 2017-11-25
Posts: 759  

Re: Server lost changes and partially reverted

Linux can deal with both; it does not make a real difference.

Meanwhile I have adopted the habit of using gpt wherever possible, even if it's not really necessary. The advantage is that you do not have to deal with the limitations of the old MSDOS partitioning scheme: no more than 4 primary partitions. Or you use 3 plus an extended partition with logical drives inside it (i.e. sda5 as the first logical partition in the extended one), and sometimes a number in the middle is missing. gpt is linear, you just count up.


#54 2023-01-11 18:49:21

Andre4freedom
Member
Registered: 2017-11-15
Posts: 78  

Re: Server lost changes and partially reverted

Oh, BTW: when checking with fdisk -l /dev/sda (or /dev/sdb), make sure your partitioning in the installer has been committed to disk! Everything you do in the installer only becomes active after you commit that step; until then it exists in memory only. ;-)


#55 2023-01-11 18:52:34

Andre4freedom
Member
Registered: 2017-11-15
Posts: 78  

Re: Server lost changes and partially reverted

You are right, rolfie. But have you tried that with RAID1 installations?
The little boot partition plus LVM2 on the second partition overcomes all these "limitations". It's still quite usual on enterprise server hardware.


#56 2023-01-11 19:07:17

dcolburn
Member
Registered: 2022-11-02
Posts: 280  

Re: Server lost changes and partially reverted

Andre4freedom wrote:

I don't know your hardware or motherboard.
My test machine can do both old-style BIOS mode and UEFI mode. (My experiences with software-RAID1 system installations in UEFI mode have been quite bad.)

For the record, the hardware is a Dell OptiPlex 7050 SFF https://www.hardware-corner.net/desktop … -7050-SFF/

Other than the special requirements of Microsoft's Windows 11 bloatware, or gamer or video-editing apps, it seems to be fully capable of managing any of the alternative partitioning schemes.

My first priority is stability, but I'd also like to be able to upgrade to much larger drives in the future without having to re-engineer everything from scratch.


#57 2023-01-11 19:09:15

dcolburn
Member
Registered: 2022-11-02
Posts: 280  

Re: Server lost changes and partially reverted

rolfie wrote:

Linux can deal with both; it does not make a real difference.

Meanwhile I have adopted the habit of using gpt wherever possible, even if it's not really necessary. The advantage is that you do not have to deal with the limitations of the old MSDOS partitioning scheme: no more than 4 primary partitions. Or you use 3 plus an extended partition with logical drives inside it (i.e. sda5 as the first logical partition in the extended one), and sometimes a number in the middle is missing. gpt is linear, you just count up.

When you look at the two pictures I posted, one before the Disk Partitioner RAID1 setup and one after, do they look correct for gpt (though different from MBR), or should I wipe the partitions (and any other debris) and begin fresh?


#58 2023-01-11 20:55:59

Andre4freedom
Member
Registered: 2017-11-15
Posts: 78  

Re: Server lost changes and partially reverted

Okay, with this machine you'd best stick with UEFI mode and the GPT partitioning scheme.
But be aware that the RAID1 setup is different. I have pointed to an article that shows exactly that:

https://askubuntu.com/questions/1299978/install-ubuntu-20-04-desktop-with-raid-1-and-lvm-on-machine-with-uefi-bios

Unfortunately I can't help further at this point right now, since I would have to invest considerably more time to work it out under UEFI/GPT conditions.
I'm sure someone in our Devuan community has deeper knowledge. I have always had problems with grub-install in UEFI mode in RAID setups; the boot afterwards just hangs with the message "BOOT".


#59 2023-01-12 17:38:20

Andre4freedom
Member
Registered: 2017-11-15
Posts: 78  

Re: Server lost changes and partially reverted

Devuan Daedalus Installation using RAID-1 Disks and UEFI/GPT mode
*****************************************************************

After all that, I tried to install Devuan Daedalus to a single disk in pure UEFI/GPT mode (but with Secure Boot disabled).
The installation went well, but in the end the computer wouldn't boot. Again. Misery.

To deal with that problem, I had to reset the computer, including all BIOS settings, to factory defaults and completely wipe all residual configuration from the disks.
Then came a new trial: installing an Enterprise Linux (AlmaLinux 9.1) in UEFI mode.
That went well too, AND the computer would reboot afterwards.

Now, using the Devuan USB boot stick, I configured the disks as shown below.

The disks:
----------
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD1000DHTZ-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 081780F2-D244-5C4E-9623-C4200969845D

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   1050623   1048576   512M EFI System
/dev/sda2  1050624   3147775   2097152     1G Linux RAID
/dev/sda3  3147776 976773120 973625345 464.3G Linux RAID

Disk /dev/sdb: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: SAMSUNG HD502HJ
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FD95D43D-6719-4C2A-B389-299985C86967

Device       Start       End   Sectors   Size Type
/dev/sdb1     2048   1050623   1048576   512M EFI System
/dev/sdb2  1050624   3147775   2097152     1G Linux RAID
/dev/sdb3  3147776 976773119 973625344 464.3G Linux RAID

The RAID status:
----------------
linuxadmin@deadalus:~$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb3[0] sda3[1]
      486680576 blocks super 1.2 [2/2] [UU]
      [==============>......]  resync = 71.4% (347912704/486680576) finish=22.8min speed=101256K/sec
      bitmap: 2/4 pages [8KB], 65536KB chunk

md0 : active raid1 sdb2[0] sda2[1]
      1046528 blocks super 1.2 [2/2] [UU]
     
unused devices: <none>

The block devices:
------------------
linuxadmin@deadalus:~$ lsblk -f
NAME             FSTYPE            FSVER    LABEL      UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sda                                                                                                         
├─sda1           vfat              FAT32               530D-FFCA                                             
├─sda2           linux_raid_member 1.2      deadalus:0 ddc7f83e-413b-367b-5753-38ec7a4f55b3                 
│ └─md0          ext3              1.0      BOOT       c4b6ab5f-f5a6-483a-a2c2-0ef546daab37    884.2M     5% /boot
└─sda3           linux_raid_member 1.2      deadalus:1 1d92ad6e-ac89-97e5-541a-96410abb2c9c                 
  └─md1          LVM2_member       LVM2 001            FLJkFO-IZCD-kWiE-2IUI-pS59-4xMm-RcPd1K               
    ├─vg0-lvroot                                                                                30.8G    10% /
    ├─vg0-lvhome                                                                                43.2G     0% /home
    ├─vg0-lvswap                                                                                             [SWAP]
    └─vg0-lvsrv                                                                                339.9G     0% /srv
sdb                                                                                                         
├─sdb1           vfat              FAT32               0D98-84C3                               498.2M     2% /boot/efi
├─sdb2           linux_raid_member 1.2      deadalus:0 ddc7f83e-413b-367b-5753-38ec7a4f55b3                 
│ └─md0          ext3              1.0      BOOT       c4b6ab5f-f5a6-483a-a2c2-0ef546daab37    884.2M     5% /boot
└─sdb3           linux_raid_member 1.2      deadalus:1 1d92ad6e-ac89-97e5-541a-96410abb2c9c                 
  └─md1          LVM2_member       LVM2 001            FLJkFO-IZCD-kWiE-2IUI-pS59-4xMm-RcPd1K               
    ├─vg0-lvroot                                                                                30.8G    10% /
    ├─vg0-lvhome                                                                                43.2G     0% /home
    ├─vg0-lvswap                                                                                             [SWAP]
    └─vg0-lvsrv                                                                                339.9G     0% /srv
sdc                                                                                                         
sdd                                                                                                         
sde                                                                                                         
sdf                                                                                                         
sr0                     

The active mounted filesystems:
-------------------------------
linuxadmin@deadalus:~$ df -h
udev                    7.8G     0  7.8G   0% /dev
tmpfs                   1.6G  1.2M  1.6G   1% /run
/dev/mapper/vg0-lvroot   37G  3.8G   31G  11% /
tmpfs                   5.0M  8.0K  5.0M   1% /run/lock
tmpfs                   3.1G     0  3.1G   0% /dev/shm
/dev/md0                989M   54M  885M   6% /boot
/dev/sdb1               511M   13M  499M   3% /boot/efi
/dev/mapper/vg0-lvhome   46G  1.7M   44G   1% /home
/dev/mapper/vg0-lvsrv   359G   28K  340G   1% /srv
cgroup_root              10M     0   10M   0% /sys/fs/cgroup
tmpfs                   1.6G   16K  1.6G   1% /run/user/1000
tmpfs                   1.6G  4.0K  1.6G   1% /run/user/109

linuxadmin@deadalus:~$
           
The LVM2 configuration:
-----------------------
linuxadmin@deadalus:~$ sudo pvscan
[sudo] password for linuxadmin:
  PV /dev/md1   VG vg0             lvm2 [464.13 GiB / 0    free]
  Total: 1 [464.13 GiB] / in use: 1 [464.13 GiB] / in no VG: 0 [0   ]
 
linuxadmin@deadalus:~$ sudo vgscan
  Found volume group "vg0" using metadata type lvm2
 
linuxadmin@deadalus:~$ sudo lvscan
  ACTIVE            '/dev/vg0/lvroot' [37.25 GiB] inherit
  ACTIVE            '/dev/vg0/lvhome' [46.56 GiB] inherit
  ACTIVE            '/dev/vg0/lvswap' [15.36 GiB] inherit
  ACTIVE            '/dev/vg0/lvsrv' [<364.96 GiB] inherit

The installation of Devuan Daedalus then went quite well. Finally the machine was able to reboot.
The only thing still to be done is to clone the EFI partition to the second disk.
In my case:
             dd if=/dev/sdb1 of=/dev/sda1 bs=1M
             
I will do that once the RAID1 md1 is synced.
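A sketch of that step with a verification pass afterwards (SRC/DST are placeholders for /dev/sdb1 and /dev/sda1; the defaults point at scratch files so the sketch can be dry-run safely before touching real partitions):

```shell
# Clone the active EFI partition onto the spare one, then verify byte-for-byte.
SRC=${SRC:-/tmp/efi-src.img}     # replace with /dev/sdb1
DST=${DST:-/tmp/efi-dst.img}     # replace with /dev/sda1
[ -e "$SRC" ] || head -c 1M /dev/urandom > "$SRC"   # stand-in data for the dry run
dd if="$SRC" of="$DST" bs=1M conv=fsync
cmp "$SRC" "$DST" && echo "source and clone are identical"
```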

As you can see, doing all of that for UEFI's sake is a lot of pain, but it's doable.
I hope you can make use of this information and adapt your installation.
If not, UEFI-loving volunteers are welcome to help you.

dcolburn: keep in mind never to access the RAID components or LVM2 elements directly.
The devices to mount, or fsck, or whatever are:
/dev/md0                989M   54M  885M   6% /boot
/dev/mapper/vg0-lvroot   37G  3.8G   31G  11% /
/dev/mapper/vg0-lvhome   46G  1.7M   44G   1% /home
/dev/mapper/vg0-lvsrv   359G   28K  340G   1% /srv

/dev/sdb1 and /dev/sda1 are the EFI partitions. Never touch them for any reason other than to do a grub-install on each of them.
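For orientation, the matching /etc/fstab entries look roughly like this (a sketch reconstructed from the df output above; the installer writes the real file, and the filesystem types of the LVM volumes are assumed):

```
/dev/mapper/vg0-lvroot  /          ext4   defaults    0  1
/dev/md0                /boot      ext3   defaults    0  2
/dev/sdb1               /boot/efi  vfat   umask=0077  0  1
/dev/mapper/vg0-lvhome  /home      ext4   defaults    0  2
/dev/mapper/vg0-lvsrv   /srv       ext4   defaults    0  2
/dev/mapper/vg0-lvswap  none       swap   sw          0  0
```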

Greetings, Andre


#60 2023-01-12 17:50:11

dcolburn
Member
Registered: 2022-11-02
Posts: 280  

Re: Server lost changes and partially reverted

Would something like this sync them?

mdadm --create /dev/??? --level=1 --raid-devices=2 /dev/??? /dev/???

That comes from Step 9 here: https://www.golinuxcloud.com/mdadm-command-in-linux/#9_Create_RAID_1_array_with_mdadm_command


#61 2023-01-12 19:43:26

Andre4freedom
Member
Registered: 2017-11-15
Posts: 78  

Re: Server lost changes and partially reverted

This will create a RAID volume and synchronize it.
Usually the partitioner within the installer does that for you.
You can monitor it with cat /proc/mdstat

What I do with my EFI partition is just copy over the contents. The EFI devices are not in a RAID.
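A small sketch of that monitoring, waiting until the resync is done before cloning the EFI partition (MDSTAT is overridable only so the loop can be tried against a saved copy of the file):

```shell
# Poll /proc/mdstat until no resync is in progress anymore.
MDSTAT=${MDSTAT:-/proc/mdstat}
while grep -q 'resync' "$MDSTAT" 2>/dev/null; do
    grep 'resync' "$MDSTAT"      # show the progress line
    sleep 60
done
echo "all md arrays are in sync"
```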

