#1 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-09-01 18:27:31

I tested devuan_excalibur_6.0-noraid-2025-08-29_1719_amd64_desktop-live.iso on the two PCs that I had reported in the first comment. I ran the live desktop, then rebooted into the PC, and the RAID arrays were started normally with no issues. I repeated the test 2 more times on each PC, and no RAID issues, or any other issues to report.

So the solution of not including any RAID support by default on the live desktop worked.

#2 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-31 06:09:28

* Quick summary

@fsmithred - I tested the live desktop iso devuan_excalibur_6.0-noraid-2025-08-29_1719_amd64_desktop-live.iso, and I confirmed that it did NOT auto start my test RAID arrays, which is good. I also confirmed that I can install the mdadm.deb in the root directory and do some RAID stuff.

The next test is the REAL test: run the desktop live on the 2 PCs that I reported in the first comment and make sure there are no other hidden surprises. Of course, I'll have to first run backups on those two PCs, which means digging out some extra HDDs, USB to SATA dock, etc. from my closet of tech stuff. So it might take a few days before I have more test results.

* Test notes

* Checking the iso's /live/initrd.img
  - file: devuan_excalibur_6.0-noraid-2025-08-29_1719_amd64_desktop-live.iso
  - Steps: Mount the iso, copy live/initrd.img to a temp directory (a consolidated command sketch follows this list).
  - cmd: unmkinitramfs initrd.img <output directory>
  - Weird. Extracting the initrd.img produces 3 directories: early, early2, main. Since it works, this could be just the way the live desktop init ram disk is organized.
  - Inside main, the contents look like the usual init ram disk.
  - find . | grep mdadm : No files found. Nothing related to mdadm is inside this ramdisk.
  - find . | grep rules.d : None of the rules files in udev/rules.d have any relation to md or raid.
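
  A consolidated sketch of those steps (paths are examples; unmkinitramfs comes from the initramfs-tools packages):

  mkdir -p /mnt/iso /tmp/initrd-extract
  sudo mount -o loop,ro devuan_excalibur_6.0-noraid-2025-08-29_1719_amd64_desktop-live.iso /mnt/iso
  cp /mnt/iso/live/initrd.img /tmp/initrd-extract/
  cd /tmp/initrd-extract
  unmkinitramfs initrd.img .
  find . | grep -i mdadm    # expect no matches on the noraid iso
  sudo umount /mnt/iso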

Conclusion so far: If the init ram disk has no knowledge of RAID, it shouldn't auto start any of my test RAID arrays.

* Checking the live desktop
  - cat /proc/mdstat  : file not found
  - sudo which mdadm  : file not found
  - lsmod | grep raid : No raid0, raid1, etc. modules.
  - lsmod | grep md   : No md_mod module.

Conclusion so far: Now that's what I call a solution. As predicted, since the init ram disk is totally clueless about RAID, it didn't (or couldn't) auto start any of the RAID arrays.

* Live desktop: installing mdadm.deb located on the root directory
  - install cmd: sudo dpkg -i /mdadm_4.4-11devuan3_amd64.deb
    - The install worked. There were some info messages about "live system is running on read-only media", and an error with grub-probe, but I'll see how far I can get.
  - Checking lsmod, md_mod is now loaded, but none of the raid modules are loaded.
  - cat /proc/mdstat: File exists. No RAID personalities listed. No arrays started.
  - sudo mdadm --examine --scan
    - Correctly discovered two RAID arrays: /dev/md/11 and /dev/md/22.
      - I don't know what the extra / means between the "md" and the number, but I'll keep going.
  - sudo mdadm --assemble /dev/md77 --uuid=<RAID UUID>
    - Array started on the intentionally weird md number, md77.
    - Array is confirmed in /proc/mdstat.
    - /proc/mdstat also shows the "raid1" personality.
    - lsmod now shows that the raid1 module is loaded. I guess the raid modules get loaded when they're needed.
      - This is convenient. The user doesn't have to manually load specific RAID modules in order to start their RAID arrays.
  - sudo mdadm --stop /dev/md77
    - Unsurprisingly, this also works.

  + Retest with no network connection (unplug from the PC).
    - Results: all same

  + Retest with no internet connection (plug PC into LAN, but unplug router's WAN).
    - Results: all same

Conclusion: Confirmed that the user can install mdadm locally and do RAID stuff, and there's no dependency on an Internet connection or any LAN connection.

#3 Re: Installation » [SOLVED] Excalibur net install 20250823 not installing rsyslog » 2025-08-29 18:40:19

I can file the bug. But since the next netinstall iso will include rsyslog and is coming soon, I'll test that netinstall build first. If rsyslog is confirmed to be installed, I'll skip the bug report.

#4 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-29 05:21:00

Here's what I've learned tonight about blacklisting that's not been mentioned in other comments.

* What it takes to unload the md_mod module

Running "sudo modprobe -r md_mod" returns the error: "modprobe: FATAL: Module md_mod is in use.". But it doesn't tell you who's using it.

Running "sudo rmmod md_mod" returns the error: "rmmod: ERROR: Module md_mod is in use by: raid1 raid10 raid0 raid456".

So the solution is:

- Stop all RAID arrays so that the raid modules can be unloaded:
  - sudo mdadm --stop /dev/mdx /dev/mdy, etc

- Remove the 4 raid modules, then md_mod.
  - sudo rmmod raid1 raid10 raid0 raid456 md_mod

* What if you blacklisted the raid modules

I modified /etc/modprobe.d/mdadm.conf and added these lines, then rebuilt the init ram disk, then rebooted.

blacklist raid1
blacklist raid10
blacklist raid0
blacklist raid456
blacklist md_mod
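
For reference, "rebuilt the init ram disk, then rebooted" amounts to the standard initramfs-tools commands:

sudo update-initramfs -u
sudo reboot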

The result was that the RAID arrays' block devices were STILL created, but inactive. /proc/mdstat also shows no raid "personalities" available.

$ ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 Aug 28 21:45 /dev/md127
brw-rw---- 1 root disk 9,  22 Aug 28 21:45 /dev/md22

$ cat /proc/mdstat 
Personalities : 
md22 : inactive sda1[1] sdc1[0]
      200705 blocks super 1.2
       
md127 : inactive sda2[1] sdc2[0]
      197953 blocks super 1.2
       
unused devices: <none>

Looking at the loaded modules with "sudo lsmod | egrep 'raid|md' | sort", md_mod was STILL loaded, but none of the raid modules were. As mentioned by g4sra, blacklisting doesn't prevent other ways of loading the md_mod.

#5 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-29 03:04:46

@fsmithred - You asked if anyone has the file "/etc/modprobe.d/mdadm.conf".

Ans: Yep, I also have that file on my excalibur PC, and I also see it in the init ram disk.

I ran a quick test to see what effect "start_ro" has. It affects the RAID arrays I explicitly defined in /etc/mdadm/mdadm.conf, but has no effect on the auto-started arrays at md127, md126. The setting controls whether the explicitly defined arrays are started "auto-read-only" or not; the auto-started arrays at md127 are always started "auto-read-only". I didn't dig any further into what "auto-read-only" means, but the setting DOES show an effect.
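
For reference, "start_ro" is a parameter of the md_mod kernel module; in /etc/modprobe.d/mdadm.conf it's set with a line of this form:

options md_mod start_ro=1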

When the "start_ro=1" line is defined (md22 is the raid number I explicitly defined in mdadm.conf):

$ cat /proc/mdstat 
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md127 : active (auto-read-only) raid1 sdc2[1] sdb2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md22 : active (auto-read-only) raid1 sdc1[1] sdb1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

When the "start_ro=1" line is commented out:

$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md22 : active raid1 sda1[1] sdc1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sda2[1] sdc2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

Next, I'll play around with blacklisting the md module. If I find out something new that hasn't already been mentioned, I'll post my test results.

#6 Re: Installation » [SOLVED] Excalibur net install 20250823 not installing rsyslog » 2025-08-29 02:29:10

OK, thanks for the reply. I have installed rsyslog manually on my Excalibur PC, and a quick check shows no obvious problems. I wanted to make sure it wasn't an oversight in the Excalibur installer.

I'll mark this as solved, since it's a known issue.

#7 Installation » [SOLVED] Excalibur net install 20250823 not installing rsyslog » 2025-08-28 08:15:09

Eeqmcsq
Replies: 8

I mentioned this problem in the RAID auto start thread, but it may have gotten lost in the deluge of testing. Since this problem isn't related to RAID, I'm creating a new post to track this problem.

- Installer iso: devuan_excalibur_6.0-20250823_amd64_netinstall.iso
- Download directory: https://files.devuan.org/devuan_excalib … aller-iso/

* Problem

After installing from the netinstall iso to my PC, /var/log/syslog is not found. I discovered that the package rsyslog isn't installed. Is this a problem with the installer, or is rsyslog no longer installed by default? If it's the latter, what logs should I look at instead?

Note that in the Excalibur live desktop, rsyslog IS installed.

#8 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-28 05:41:37

@fsmithred: Tonight, I tested your two images:
  - devuan_excalibur_6.0-preview-2025-08-25_0923_amd64_desktop-live.iso
  - devuan_excalibur_6.0-preview-2025-08-25_1056_amd64_desktop-live.iso

First some quick replies to your comments:

- You said the 0923 iso "somehow got the default /etc/default/mdadm with START_DAEMON=true".
  - Ans: I didn't see that on the 0923 iso. It was START_DAEMON=false, the same as the "2025-08-13" that I tested a few days ago.

- You also asked if I could "still use the mdadm command without the service running?"
  - Ans: Yes, I was able to run the mdadm command in the 0923 iso. These two examples worked:
    - sudo mdadm --examine --scan
    - sudo mdadm --stop /dev/md127

* Test results for 0923 and 1056 were the same

  - /proc/mdstat showed that the RAID arrays auto started at md127, md126.
  - /etc/default/mdadm has START_DAEMON=false
  - Checking "ps -ef | grep mdadm", I didn't see any "mdadm --monitor". The mdadm monitoring/reporting daemon was NOT running.
  - Can I run the mdadm cmd? Yes, these cmds worked:
    - sudo mdadm --examine --scan
      - listed both of my test RAID arrays
    - sudo mdadm --stop /dev/md127

* Investigation

So why did the RAID arrays auto start? Here are my investigation notes. Note that I tested using legacy boot.

- /etc/mdadm/mdadm.conf does contain the lines "AUTO -all" and the dummy line "ARRAY <ignore>" with UUID of all zeroes. This looks OK.

- I extracted the contents of /boot/initrd.img-6.12.38+deb13-amd64. This init ram disk in the live desktop apparently has 4 archives concatenated together, with the last one compressed.

(cpio -id ; cpio -id ; cpio -id ; zstdcat | cpio -id) < /boot/initrd.img-6.12.38+deb13-amd64

   And ./etc/mdadm/mdadm.conf contains the two lines "AUTO -all" and "ARRAY <ignore>". So this looks OK.
   
   And yet, the live-desktop STILL auto started my test RAID arrays???

- Checking the output of dmesg for clues, the second line shows the "Command line" args, and one of those args is initrd=/live/initrd.img.

  Aha, /live looks like something on the .iso image.

- I mounted the file "devuan_excalibur_6.0-preview-2025-08-25_0923_amd64_desktop-live.iso" to the local file system, copied ./live/initrd.img to a temp directory and extracted it using the same 3* cpio + zstdcat|cpio, and examined ./etc/mdadm/mdadm.conf. The contents look like a default file. No "AUTO -all" or dummy "ARRAY <ignore>" lines.

- Conclusion: This would explain why the RAID arrays got autostarted. The live-desktop loaded THIS init ram disk that's located on the .iso.

- More info: The legacy boot menu comes from the iso image file "./isolinux/isolinux.cfg", which adds the "initrd" and other args to the boot cmd line.

label toram
    menu label devuan-live (amd64) (load to RAM)
    linux /live/vmlinuz
    append initrd=/live/initrd.img boot=live username=devuan toram  nottyautologin

* Test and investigation (UEFI live desktop)
- The test results and investigation results were the same, except that dmesg does NOT contain an "initrd" arg, so there's no clue about what init ram disk was loaded.

- Mounting the .iso again and checking "./boot/grub/grub.cfg", the menu entry contains a separate initrd line, which refers to the same "/live/initrd.img".

menuentry "devuan-live (load to RAM)" {
    set gfxpayload=keep
    linux   /live/vmlinuz boot=live username=devuan toram nottyautologin 
    initrd  /live/initrd.img
}

So both the UEFI and legacy boot must be loading the same init ram disk on the .iso file, which doesn't contain the two lines "AUTO -all" and "ARRAY <ignore>".

#9 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-27 05:25:08

@g4sra - Yep. I've tried 2 of your 3 suggestions. Adding raid=noautodetect to the boot menu's kernel options didn't disable autostarting the RAID. Adding nodmraid also had no effect; that test output is in my earlier comment in this thread.

I did NOT try blacklisting the md modules, since I had no idea how to blacklist a module. That's on my list of things to investigate.

And just to make sure I didn't screw anything up in my previous tests, I retested several combinations of raid=noautodetect and nodmraid in both the Excalibur live desktop and my Excalibur PC. In all cases, the RAID arrays auto started at md127, md126.

These were the test cases I ran tonight.

  - default: No kernel cmd line changes
  - Added: raid=noautodetect
  - Added: nodmraid
  - Added: nodmraid=1
  - Added: raid=noautodetect nodmraid
  - Added: raid=noautodetect nodmraid=1

For the live desktop, I used "devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso". For the PC installation, I set my mdadm.conf to this and updated the init ram disk.

  #AUTO -all
  ARRAY <ignore> UUID=00000000:00000000:00000000:00000000

During the tests, I confirmed the cmd line args by checking /proc/cmdline.

#10 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-26 03:57:03

@fsmithred: I haven't tested your two isos yet. Instead, I satisfied my curiosity by investigating exactly what this "START_DAEMON" setting does and how it affects mdadm.

The START_DAEMON option is used by the script /etc/init.d/mdadm to decide whether it should start mdadm with --monitor (a simplified sketch of that check follows the ps output below). According to the man pages, monitor mode periodically polls the arrays for status changes and reports these events by logging to the syslog, sending a mail alert, etc.

$ ps -ef | grep mdadm
root      1434     1  0 19:57 ?        00:00:00 /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog
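
This is roughly the kind of check involved (a simplified illustration only, NOT the actual Devuan init script):

#!/bin/sh
#Simplified sketch: start the monitor daemon only when START_DAEMON=true in /etc/default/mdadm.
[ -f /etc/default/mdadm ] && . /etc/default/mdadm

if [ "$START_DAEMON" = "true" ]; then
    start-stop-daemon --start --oknodo --exec /sbin/mdadm -- \
        --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan $DAEMON_OPTIONS
fi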

At first, I thought this option was harmless. Then I see this in the man pages:

MONITOR MODE
  <...>
  As well as reporting events, mdadm may move a spare drive from one array to another if they are in the same spare-group or domain and if the destination array has a failed drive but no spares.

That's definitely a NO for a live desktop.

Otherwise, in my tests, turning off the START_DAEMON didn't affect my mdadm cmd line commands.

* Other issues

While testing the START_DAEMON setting to see if "mdadm --monitor" really writes to the syslog, I couldn't find /var/log/syslog on my Excalibur PC! The package "rsyslog" wasn't installed. Is that an oversight in the Excalibur netinstall iso, or is rsyslog no longer installed by default? If it's the latter, what are the alternative log files?

On a side note, my tests show that mdadm --monitor DOES log to the syslog when it detects a failed drive or a drive added back to a degraded array. I used mdadm --fail, --remove, --add, and observed the syslog.
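
For reference, the fault-injection sequence was along these lines (md22 and sdc1 are just the devices from my test array):

sudo mdadm /dev/md22 --fail /dev/sdc1
sudo mdadm /dev/md22 --remove /dev/sdc1
sudo mdadm /dev/md22 --add /dev/sdc1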

#11 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-26 03:27:20

@g4sra: Here are my test results of adding nodmraid to the boot menu kernel options. Short answer: no difference. My test raid arrays are still auto started in the Excalibur live desktop at md126/md127.

Test file: devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso

At the initial menu, I edited the boot options and included "nodmraid", and "nodmraid=1". Then I ran the live desktop.

- Test 1: No cmd line change:

devuan@devuan:~$ cat /proc/cmdline 
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram  nottyautologin

devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md126 : active (auto-read-only) raid1 sda2[1] sdc2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

- Test 2, cmd line change, added "nodmraid":

devuan@devuan:~$ cat /proc/cmdline 
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram  nottyautologin nodmraid

devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md126 : active (auto-read-only) raid1 sda2[1] sdc2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

- Test 3, cmd line change, added "nodmraid=1":

devuan@devuan:~$ cat /proc/cmdline 
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram  nottyautologin nodmraid=1

devuan@devuan:~$ cat /proc/mdstat 
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md126 : active (auto-read-only) raid1 sdb1[1] sda1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

================================

@stargate-sg1-cheyenne-mtn:

Although mdadm may have been in Debian/Ubuntu since the 2000s, it occurred to me that older live desktops might not have had mdadm installed or running. To satisfy my curiosity, I tested various Devuan live desktops on my test PC with my 2 test RAID arrays to see when the live desktop started auto starting the RAID arrays.

Short answer: Beowulf, based on Debian 10 Buster.

- jessie (based on Debian 8 Jessie)
  - devuan_jessie_1.0.0_amd64_desktop-live.iso
  - /proc/mdstat      : File not found.
  - sudo which mdadm  : mdadm not found
  - lsmod | grep raid : No raid0, raid1, etc. modules.

- ascii (based on Debian 9 Stretch)
  - devuan_ascii_2.0.0_amd64_desktop-live.iso
  - /proc/mdstat      : File not found.
  - sudo which mdadm  : mdadm not found
  - lsmod | grep raid : No raid0, raid1, etc. modules.

- beowulf (based on Debian 10 Buster)
  - devuan_beowulf_3.1.1_amd64_desktop-live.iso
  - /proc/mdstat      : File found. RAID arrays auto started at md127, md126.
  - sudo which mdadm  : /sbin/mdadm
  - lsmod | grep raid : raid0, raid1, etc. modules found

So that might explain why there hasn't been a history of complaints about the live desktop ruining RAID arrays - they weren't auto started until just a few years ago.

#12 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-25 05:48:11

@g4sra - I tried the suggestion about "raid=noautodetect" as a kernel boot arg, but that didn't work. Based on my research, that's supposed to work only when the RAID code is compiled directly into the kernel, instead of being a separately loaded module.

After searching online and lots of testing of the init ram disk, including borking it a few times big_smile , I think I found the easiest solution.

* Solution

Add these two lines to /etc/mdadm/mdadm.conf:

#Disable all auto-assembly, so that only arrays explicitly listed in mdadm.conf or on the command line are assembled.
AUTO -all

#At least one ARRAY line is required for update-initramfs to copy this mdadm.conf into the init RAM disk. Otherwise,
#update-initramfs will ignore this file and auto generate its own mdadm.conf in the init RAM disk.
ARRAY <ignore> UUID=00000000:00000000:00000000:00000000

Then rebuild the init RAM disk:

sudo update-initramfs -u

* Explanation

The AUTO keyword is described in the mdadm.conf man pages as a way to allow or deny auto-assembling an array. The "all" keyword matches all metadata.

From my tests "AUTO -all" doesn't affect any ARRAY lines that you manually define in mdadm.conf. Also, your ARRAY lines can be placed before or after the "AUTO -all" line, and they'll still work.

The reason why "AUTO -all" isn't enough is because when you run "update-initramfs", the script for handling mdadm first checks if your mdadm.conf has any ARRAY lines defined. If not, the script will ignore your mdadm.conf file containing the "AUTO -all" and run its own script "mkconf" to auto generate an mdadm.conf containing any RAID arrays that it detects on the PC, even if they're not currently started, and even if you didn't want any of those arrays to be auto-started.

So we define a dummy ARRAY line to ensure that your mdadm.conf is copied into the init ramdisk.

See: /usr/share/initramfs-tools/hooks/mdadm, /usr/share/mdadm/mkconf

* How to check the mdadm.conf in the init ram disk

Create a temp dir to hold the files. Then extract the files from the init ram disk. Replace the file name with your initrd file.  The files are extracted into the current directory.

mkdir /tmp/testdir
cd /tmp/testdir

(cpio -id ; zstdcat | cpio -id) < /boot/initrd.img-6.12.41+deb13-amd64

cat ./etc/mdadm/mdadm.conf
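
If the initramfs-tools helper scripts are available, the same check works without the manual cpio steps (note that the extracted tree may be split into early*/main subdirectories):

lsinitramfs /boot/initrd.img-6.12.41+deb13-amd64 | grep mdadm
unmkinitramfs /boot/initrd.img-6.12.41+deb13-amd64 /tmp/testdir
cat /tmp/testdir/main/etc/mdadm/mdadm.conf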

* Other info

I tracked down exactly what triggers the RAID arrays to get auto started during boot, even if the init ram disk's mdadm.conf contains no ARRAY definitions. It's this udev rule file:

/usr/lib/udev/rules.d/64-md-raid-assembly.rules

I don't know how to read udev rules, so I don't know exactly what this file does, but I've confirmed that moving it out of this directory and then rebuilding the init RAM disk disables auto starting arrays on boot. Since there are other md related rule files in this directory, I don't know what the side effects of removing just this one are.
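
If you'd rather not move files around under /usr/lib, udev's standard override mechanism should give the same result (I haven't tested this variant myself): an empty file with the same name in /etc/udev/rules.d masks the packaged rule.

sudo touch /etc/udev/rules.d/64-md-raid-assembly.rules
sudo update-initramfs -u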

#13 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-24 22:07:15

I tested the Aug 13 Excalibur preview live desktop, both UEFI and legacy boot, and it's still auto starting the RAID array. I confirmed that /etc/default/mdadm shows START_DAEMON is set to false, but the output of /proc/mdstat shows my test RAID array was started at /dev/md127.

https://files.devuan.org/devuan_excalibur/desktop-live/
devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso

devuan@devuan:~$ cat /etc/default/mdadm 
# mdadm Debian configuration
#
# You can run 'dpkg-reconfigure mdadm' to modify the values in this file, if
# you want. You can also change the values here and changes will be preserved.
# Do note that only the values are preserved; the rest of the file is
# rewritten.
#

# AUTOCHECK:
#   should mdadm run periodic redundancy checks over your arrays? See
#   /etc/cron.d/mdadm.
AUTOCHECK=true

# AUTOSCAN:
#   should mdadm check once a day for degraded arrays? See
#   /etc/cron.daily/mdadm.
AUTOSCAN=true

# START_DAEMON:
#   should mdadm start the MD monitoring daemon during boot?
START_DAEMON=false

# DAEMON_OPTIONS:
#   additional options to pass to the daemon.
DAEMON_OPTIONS="--syslog"

# VERBOSE:
#   if this variable is set to true, mdadm will be a little more verbose e.g.
#   when creating the initramfs.
VERBOSE=false
devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

- /var/log/kern.log, partial output, showing that md127 was started.

2025-08-24T19:51:23.214552+00:00 localhost kernel: [drm] GART: num cpu pages 262144, num gpu pages 262144
2025-08-24T19:51:23.214553+00:00 localhost kernel: [drm] PCIE GART of 1024M enabled (table at 0x0000000000162000).
2025-08-24T19:51:23.214574+00:00 localhost kernel: radeon 0000:00:01.0: WB enabled
2025-08-24T19:51:23.214577+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000020000c00
2025-08-24T19:51:23.214578+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000020000c0c
2025-08-24T19:51:23.214579+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000072118
2025-08-24T19:51:23.214581+00:00 localhost kernel: radeon 0000:00:01.0: radeon: MSI limited to 32-bit
2025-08-24T19:51:23.214582+00:00 localhost kernel: radeon 0000:00:01.0: radeon: using MSI.
2025-08-24T19:51:23.214583+00:00 localhost kernel: [drm] radeon: irq initialized.
2025-08-24T19:51:23.214584+00:00 localhost kernel: [drm] ring test on 0 succeeded in 1 usecs
2025-08-24T19:51:23.214585+00:00 localhost kernel: [drm] ring test on 3 succeeded in 3 usecs
2025-08-24T19:51:23.214586+00:00 localhost kernel: md/raid1:md127: active with 2 out of 2 mirrors
2025-08-24T19:51:23.214588+00:00 localhost kernel: md127: detected capacity change from 0 to 200704
2025-08-24T19:51:23.214590+00:00 localhost kernel: usb 1-3.2: New USB device found, idVendor=046d, idProduct=c31c, bcdDevice=49.20
2025-08-24T19:51:23.214609+00:00 localhost kernel: usb 1-3.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
2025-08-24T19:51:23.214610+00:00 localhost kernel: usb 1-3.2: Product: USB Keyboard
2025-08-24T19:51:23.214612+00:00 localhost kernel: usb 1-3.2: Manufacturer: Logitech

* More info

I'm also testing an Excalibur installation on a PC using devuan_excalibur_6.0-20250823_amd64_netinstall.iso. I now see that the annoyance of auto starting RAID arrays on boot isn't just a problem with the live desktop environment. My Excalibur PC is also autostarting the test RAID array I created, even though I didn't specify that this RAID array should be assembled on boot.

I tried setting START_DAEMON=false, and that didn't disable the RAID auto start.

So I'll do a bunch of digging to find a solution for disabling this RAID auto start. If I figure it out, I'll post an update.

#14 Re: Documentation » apt-mirror config for a local Devuan repo » 2025-08-23 03:22:10

I ran into the same "Use of uninitialized value" error with daedalus + apt-mirror. The problem is that although the apt-mirror script's parsing logic for the Packages index (used for binary packages) has been updated so that the md5sum is an optional field, the parsing logic for the Sources index (used for source packages) has NOT been updated. So it still assumes that every package entry contains a "Files:" section with md5sums.

This means that if you're not mirroring the source code (not using deb-src), you won't see this error.

The problem isn't specific to Devuan. I tested with a Debian repository and saw the same errors.

- Devuan test, mirror.list

deb-src http://deb.devuan.org/merged daedalus-security main

  This downloads the file from: http://deb.devuan.org/merged/dists/daed … Sources.gz

- Debian test, mirror.list

deb-src http://deb.debian.org/debian bookworm-updates main

  This downloads the file from: http://deb.debian.org/debian/dists/book … Sources.xz

If you open the Sources file from each test, you'll see that the package entries contain neither "Files:" (md5sums) nor "Checksums-Sha1:", only "Checksums-Sha256:". So the problem is definitely an out-of-date apt-mirror script.
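
A quick way to confirm which checksum sections a downloaded Sources index actually contains (use xzcat for the .xz variant):

zcat Sources.gz | grep -E '^(Files|Checksums-Sha1|Checksums-Sha256):' | sort | uniq -c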

* A Solution

The only solution I have is to update the apt-mirror script. And since I know enough Perl to be dangerous big_smile , I came up with a solution.

- Copy the apt-mirror script into another directory, such as /opt, which is usually empty. That way, if anything goes wrong, the original apt-mirror script is still available.

cp /usr/bin/apt-mirror /opt

- Open /opt/apt-mirror in a text editor. Look for the function process_index(), and the "else" block with the comment "Sources index", which should be at line 919.

- The entire "else" block's code must be replaced with this code, which will handle any combination of the presence/absence of the 3 section names:

        else
        {    # Sources index

            #The sections "Files:", "Checksums-Sha1:", and "Checksums-Sha256" contain 1 or more lines with 3
            #space-separated fields: md5/sha1/sha256 checksum, file size in bytes, file name
            #There's no guarantee that all 3 sections will be defined, or that they will have the same list of files.
            #Use a hash to keep track of the file names and their sizes. Then update the ALL and NEW files afterwards.
            my %package_files;
            my %sections = (
                "Files:"            => *FILES_MD5   ,
                "Checksums-Sha1:"   => *FILES_SHA1  ,
                "Checksums-Sha256:" => *FILES_SHA256,
            );

            foreach my $section_name (keys %sections)
            {
                if (!defined($lines{$section_name})) { next; }

                foreach ( split( /\n/, $lines{$section_name} ) )
                {
                    next if $_ eq '';
                    my @file = split;
                    die("apt-mirror: invalid Sources format") if @file != 3;
                    print { $sections{$section_name} } $file[0] . "  " . remove_double_slashes( $path . "/" . $lines{"Directory:"} . "/" . $file[2] ) . "\n";
                    $package_files{$file[2]} = $file[1];
                }
            }

            my $file_size_bytes;
            my $file_path;
            foreach my $file_name (keys %package_files)
            {
                $file_size_bytes = $package_files{$file_name};
                $file_path       = remove_double_slashes( $path . "/" . $lines{"Directory:"} . "/" . $file_name );
                
                $skipclean{ $file_path } = 1;
                print FILES_ALL $file_path . "\n";

                if ( need_update( $mirror . "/" . $lines{"Directory:"} . "/" . $file_name, $file_size_bytes ) )
                {
                    print FILES_NEW remove_double_slashes( $uri . "/" . $lines{"Directory:"} . "/" . $file_name ) . "\n";
                    add_url_to_download( $uri . "/" . $lines{"Directory:"} . "/" . $file_name, $file_size_bytes );
                }
            }
        }

- To run the updated apt-mirror:

/opt/apt-mirror <your apt-mirror config file>

- To run the originally installed apt-mirror script:

apt-mirror <your apt-mirror config file>

#15 Re: Desktop and Multimedia » Calculator freezes » 2025-07-25 03:33:29

I don't know anything about KDE 3.5's Kcalc. But I just did a quick test with Devuan Daedalus + Mate Desktop, and the Mate Calculator does support your example: 50 - 20% -> 40

#16 Re: Hardware & System Configuration » Transfering an OS from HD to SSD » 2025-06-30 05:51:25

I don't know anything about the logging system. But your post reminded me of back in the early 2010s when SSDs were getting cheap enough for not-so-rich computer geeks to buy. I remember there were Linux tweaks for reducing unnecessary disk writes by specifying "relatime" or "noatime" or something like that to reduce or stop updating a file's "access time" every time it's read. Back then, those tweaks had to be done manually.
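
For reference, the tweak I'm thinking of is a mount option in /etc/fstab. An illustrative entry (the UUID is a placeholder, and the exact options depend on your setup):

UUID=<your root filesystem UUID>  /  ext4  defaults,noatime,errors=remount-ro  0  1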

If I had to speculate about today, I would suspect you'd still have to do those tweaks manually. My reasoning is that whatever the logging system an app or Linux is using would have no idea what type of storage device the log files are being written to. It could be an HDD, SSD, a RAID array that's a MIX of HDD+SSD (weird, but possible), a RAM disk, even a network shared folder on another PC. Since there's no way to know (or at least, no EASY way to know), I conclude that it's up to the sys admin to make those disk optimization tweaks manually.

Of course, a real sys admin can tell us for sure. smile

#17 Re: Hardware & System Configuration » Transfering an OS from HD to SSD » 2025-06-30 00:33:26

I don't have any experience with "partition transfer utilities", but I have done these at one time or another:

- Whole disk cloning. Something like "dd if=/dev/sdx of=/dev/sdy".
- Copy the OS files from one partition to another. I think it was something like "cp -ax /path/to/old_drive_partition/* /path/to/new_drive_partition". This required either changing the new partition's UUID to match the old one, or modifying /etc/fstab to match the new partition's UUID (a sketch of that UUID bookkeeping follows this list).
- Copy the OS files from a single disk to a RAID array. A RAID array is multiple storage drives working together to behave like a single giant storage drive. This required modifying some files so that the RAID array started first on boot, then it could be mounted to the file system.
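
The UUID bookkeeping mentioned above looks roughly like this for an ext4 partition (a sketch; device names are examples):

sudo blkid /dev/sdy1                                   # show the new partition's UUID
sudo tune2fs -U <UUID of the old partition> /dev/sdy1  # option A: give the new partition the old UUID
sudo <your text editor> /etc/fstab                     # option B: point fstab at the new UUID instead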

In all cases, Linux didn't care about the change in the storage drive. I think all Linux cares about is finding the file system's UUID, and successfully mounting it based on the file system type (example: ext4).

One reason I can think of why the OS might behave differently is if an SSD loads files so quickly during boot that it exposes a race condition, where some apps start before their dependencies have fully initialized. A slower HDD would hide these issues because the apps load more slowly, giving the dependencies more time to initialize properly.

The only time I can remember seeing a problem with cloning hard drives is on my Win7 work laptop. That's because one of the software licensing apps used the storage drive name or whatever to generate its license key. When I replaced the laptop HDD with a cloned SSD, the license app said my license was no longer valid. I just requested and installed a new license, and that was all.

#18 Documentation » [HowTo] Set up VNC server "x11vnc" to run as a service on boot » 2025-06-29 01:33:01

Eeqmcsq
Replies: 0
* Intro

By starting the VNC server "x11vnc" as a service on boot, you can control the PC remotely, even if no one has logged in yet.

Make sure you research VNC's security risks and confirm that it's safe to use VNC on your network. I'm using VNC on my home LAN with no other users and no wifi access points, so I concentrated primarily on getting the VNC server to work, with a minimum amount of security (VNC password only).

I tested these instructions on Devuan Daedalus and Devuan Jessie with a fresh installation of each release. The instructions are identical, so I assume the instructions will work on the other Devuan releases between these two.

These instructions appear lengthy because I included tests at each step to confirm the setup before going to the next step.

* What you need

- The IP address of the x11vnc server. Example of finding the IP address:

sudo ifconfig

- A remote PC with a VNC client app. I tested using Devuan Jessie's "xvncviewer" app.

* Install x11vnc

sudo apt-get install x11vnc

* Find the X authority file used by the display manager

The X authority file is required for x11vnc to connect to the X server. If a user is logged in, that file is located at $HOME/.Xauthority. But if no one is logged in, then x11vnc will need the X authority file of the login manager (also known as the display manager).
 
Devuan uses the SLiM login manager by default, so these instructions will find SLiM's X authority file. If you're using another login manager, you'll have to research that login manager.

more /etc/slim.conf

Look for the setting "authfile". The default is:

authfile  /var/run/slim.auth

  > Test x11vnc with this auth file from a console window
    - Open a new terminal window for this test.
    - Unset this environment variable so x11vnc doesn't use it to find an X authority file.

unset XAUTHORITY

    - Start the x11vnc server WITHOUT specifying the X authority file.

sudo x11vnc -display :0

      This should fail. The server exits immediately.

    - Start the x11vnc server and specify the X authority file.

sudo x11vnc -auth /var/run/slim.auth -display :0

      This should work. The server remains running. To stop the server: CTRL+C
   
    - Close the test terminal window.

* Recommended: Create a VNC password file

Choose a location to store the VNC password file. For these instructions, I chose /opt, because that directory is always empty on my PCs.
 
The max VNC password length is 8 characters. If you enter a longer password, the extra characters are ignored.

- Create a VNC password file.

sudo x11vnc -storepasswd /opt/yourx11vncpasswordfile

- View the password.

sudo x11vnc -showrfbauth /opt/yourx11vncpasswordfile

  > Test the password.
    - Open a new terminal window for this test.

sudo x11vnc -auth /var/run/slim.auth -display :0 -rfbauth /opt/yourx11vncpasswordfile

    - On a remote PC, use a vnc client app and connect to this PC.
    - Close the vnc client app. The VNC server automatically terminates.
    - Close the test terminal window.

* Create the init script

- Create an executable file in /etc/init.d. Then open it in a text editor.

cd /etc/init.d
sudo touch yourx11vnc
sudo chmod +x yourx11vnc
sudo <your text editor> yourx11vnc

- Copy this text to the script:

#!/bin/sh

### BEGIN INIT INFO
# Provides:          yourx11vnc
# Required-Start:    slim
# Required-Stop:     
# Default-Start:     1 2 3 4 5
# Default-Stop:      0 6
# Short-Description: Your x11vnc server
# Description:       This is your x11vnc server script.
### END INIT INFO

#Explanation of the args:
# -auth    : Specifies the path to the X authority file required to connect to the X server. If no one is logged
#            in, but the login screen is displayed, you can specify the X authority file of the login screen's
#            display manager.
# -display : Tell x11vnc which display to try first. Otherwise, x11vnc will first try opening the display "", which
#            fails. Then x11vnc will delay for 4 seconds, then try ":0", which finally works.
# -forever : Continue listening for more connections after the first client disconnects. By default, x11vnc exits
#            when the client disconnects.
# -loop    : Create an outer loop that restarts x11vnc whenever it terminates. Useful if the X server terminates
#            and restarts, such as when logging out.
# -rfbauth : Use this password file created by "x11vnc --storepasswd". To run x11vnc without a password (NOT
#            RECOMMENDED), remove this arg.
X11VNC_ARGS="-auth /var/run/slim.auth -display :0 -forever -loop -rfbauth /opt/yourx11vncpasswordfile"
X11VNC_BIN_PATH="/usr/bin/x11vnc"

case "$1" in
    start)
        #Any args after the "--" are passed unmodified to the program being started.
        start-stop-daemon --start --oknodo --background --exec $X11VNC_BIN_PATH -- $X11VNC_ARGS
    ;;

    stop)
        start-stop-daemon --stop --oknodo --exec $X11VNC_BIN_PATH
    ;;

    status)
        start-stop-daemon --status --exec $X11VNC_BIN_PATH
        STATUS_CODE=$?

        #Print out a human readable message.
        case $STATUS_CODE in
            0) STATUS_MSG="$X11VNC_BIN_PATH is running."; ;;
            1) STATUS_MSG="$X11VNC_BIN_PATH is not running, /var/run pid file exists."; ;;
            2) STATUS_MSG="$X11VNC_BIN_PATH is not running, /var/lock lock file exists."; ;;
            3) STATUS_MSG="$X11VNC_BIN_PATH is not running."; ;;
            *) STATUS_MSG="Unknown status code: $STATUS_CODE"; ;;
        esac

        echo "$STATUS_MSG"
        exit $STATUS_CODE
    ;;

    *)
        echo "Usage: $0 start|stop|status" >&2
    ;;

esac

  * Optional adjustments to the script
    - If you're using another display manager besides "slim":
      - Change the "-auth" arg to match the display manager's X authority file.
      - Change the "Required-Start" line to specify the name of the service that starts the display manager.
    - If a VNC password is not required, remove the "-rfbauth /opt/yourx11vncpasswordfile" arg.

  > Test the script manually

    - To test the script, manually start the service.

sudo service yourx11vnc start

    - On a remote PC, use a vnc client app and connect to this PC. This should work.
    - Manually stop the service.

sudo service yourx11vnc stop

* Set up the script to run on boot and shutdown

- Create the startup/shutdown links:

sudo update-rc.d yourx11vnc defaults

  > Test the script on bootup
    - Reboot the PC. If the PC has auto login enabled, log out to return to the login screen, so that you can confirm that x11vnc works without requiring anyone to be logged in.
    - On a remote PC, use a vnc client app and connect to this PC. The remote PC should connect successfully and see the login screen.

* Troubleshooting the script

- Manually stop the service.

sudo service yourx11vnc stop

- Edit the script:

sudo <your text editor> /etc/init.d/yourx11vnc

  - Remove the -background option so the server's output is shown in the current console.
  - Remove the -loop, so the server doesn't get stuck in an infinite loop if something's wrong.

- Manually start the service in a console window. The output will now be shown in the current console window, so you can troubleshoot the problem.

sudo service yourx11vnc start

  To stop the server: CTRL+C

* Cleanup after troubleshooting
  - Restore the -background and the -loop options.
  - Manually restart the service.

sudo service yourx11vnc start

* Uninstall the script

- Manually stop the service. Then disable the startup/shutdown links.

sudo service yourx11vnc stop
sudo update-rc.d yourx11vnc remove

- You can now move the script "yourx11vnc" from /etc/init.d to another directory, or delete it if you're SURE you don't need it.

#19 Re: DIY » Need advice, building a small server for city-library-Devuan mirror » 2025-06-10 21:20:56

That backup plan sounds similar to what I'm doing at home for my work PCs. I have an HDD + SATA-to-USB3 dock, and I connect this to my work laptop once a week when it's time to back up the laptop's SSD by cloning the SSD to a file, something like "dd if=/dev/sdx of=file.bin". I wrote my own script to do most of the dirty work, including checking the backup HDD's file system for errors, deleting old backups if there isn't enough space, and constructing a date/time for the backup file name.
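
A minimal sketch of that kind of dated image backup (the real script adds the file system check and old-backup cleanup; the device name and paths here are just examples):

BACKUP_DIR=/mnt/backup_hdd
IMG="$BACKUP_DIR/laptop-ssd-$(date +%Y%m%d-%H%M%S).bin"
sudo dd if=/dev/sdx of="$IMG" bs=4M conv=fsync status=progress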

One difference in your proposal compared to my home PCs is that my OS is also on a RAID1, so the OS can withstand a drive failure. That's saved me the time of having to reinstall and reconfigure the OS when one of the drives failed.

I think that's the limit of my home experiences that can apply to your project. From a brainstorming perspective, other issues that spring to mind are:

* Handling multiple client requests: If multiple client requests cause a lot of disk activity on the OS disk, maybe an SSD IS needed for the OS.

* Web page design: These days, web pages must be designed for mobile users, since everyone has a phone. I have no knowledge of modern web design.

#20 Re: DIY » Need advice, building a small server for city-library-Devuan mirror » 2025-06-10 18:53:15

Two thoughts, based on maintaining my own PCs at home over the years.

* Who's going to maintain the server if something goes wrong? Example: If the server gets fried by lightning, who knows how to rebuild the server and restore the data from a backup? Who will make the backups?

* I've learned the hard way that SSDs aren't great for long term data storage, especially with static data (files that don't change). Based on what I can figure out from research, an SSD's memory cells suffer from voltage drift over time. Old data gets harder to read because the SSD controller has to work harder to determine each cell's voltage level, which encodes the data bit values. This leads to slower SSD read speeds. In theory, the drift can get so bad that the data bits are misread, i.e. corrupted.

So you might want to ask a sys admin how they manage and maintain servers with SSDs in a work environment, especially with servers storing lots of static data. Are there SSDs that will silently refresh these memory cells and keep the read speeds and the data bits from degrading? Or do they go through regular hardware refreshes, so they end up replacing their whole server every few years, including the SSDs?

#22 Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-05-26 19:50:30

Eeqmcsq
Replies: 32

I assumed that I could safely boot into the live desktop on any PC, and it would not cause any changes to the PC unless I explicitly ran commands to change something on the PC. But I encountered two instances where the live desktop broke something on a PC.

* PC1

- OS: ubuntu 10.04
- RAID setup:
  - /dev/md0 - RAID1: 2xSSD for the OS files. Metadata version: 0.90
  - /dev/md1 - RAID1: 2xHDD for data files. Metadata version: 0.90

A few months back, I booted the Devuan Daedalus live desktop on this PC. After rebooting, my PC failed to boot after the initial grub menu. The error was something about not finding the disk. After some troubleshooting, I figured out that /dev/md0 failed to start, and it was caused by the Preferred Minor being changed from 0 to a large number, I think 127. The solution was to use a live desktop environment, assemble the disks in a raid using /dev/md0, while updating the Preferred Minor number. Something like this:

    mdadm --assemble /dev/mdx --update=super-minor --uuid=<RAID UUID>

After booting back into the OS, the data RAID1 also failed to start for the same reason, and I fixed it using the same solution.

* PC2

- OS: devuan jessie
- RAID setup:
  - /dev/md0 - RAID1: 2xSSD for the OS files. Metadata version: 0.90
  - /dev/md1 - RAID5: 4xSSD for data files. Metadata version: 0.90

Last week, while testing the Devuan Excalibur Preview's memtest, I decided to boot into the live desktop just to see if there were any obvious problems to report. During the boot, the console output showed that it had autostarted the arrays, but the RAID5 was started with only THREE disks instead of 4. I have no idea why, and at the time, I didn't investigate because I was doing the memtest legacy vs UEFI boot tests.

After completing the memtest boot tests, I booted back into the PC. Unlike the first PC, this PC correctly started the OS RAID /dev/md0, and I'm not sure why. Maybe jessie's RAID driver is smarter than the ubuntu 10.04 RAID driver and it doesn't depend on the preferred minor number.

However, it reported that the /dev/md1 array had totally failed. In hindsight, what might have happened was that the RAID driver tried to start the 4 SSDs (which all had the same RAID UUID), but it detected that 3 of the disks were "out of sync" with the 4th, and it somehow decided to fail the other 3 (or maybe they were "removed") and keep the 4th. So /dev/md1 was started with 1 active disk, thus a failed array.

Using dummy drive letters w-z, the RAID5 disks looked like this:
 
  /dev/sdw - Preferred Minor 1
  /dev/sdx - Preferred Minor 126
  /dev/sdy - Preferred Minor 126
  /dev/sdz - Preferred Minor 126

At first, I thought I had lost the data on the array. After thinking about it, I realized that the 3 disks x/y/z could work as a functioning (but degraded) array, and would still have the data. The solution was to stop the currently running failed md1 array containing just sdw, wipe out the RAID info on sdw, start the RAID array with the other 3 disks (while fixing the preferred minor number), and add sdw back into the RAID. Something like this:

  mdadm --stop /dev/md1
  mdadm --zero-superblock /dev/sdw1
  mdadm --assemble /dev/md1 --update=super-minor --uuid=<the RAID5's RAID UUID>
  mdadm --manage /dev/md1 --add /dev/sdw1

And that worked. As far as I can tell, I didn't lose anything on the RAID5 array.

Also, I'm pretty sure that the RAID5 wasn't already degraded, because I wrote my own RAID monitoring script that checks the status every 5 minutes. If the RAID5 was degraded, the script would have sent out a multicast, and every PC on my LAN has another script to listen for this multicast and display an error notification on the desktop (using the notify-send). I would have noticed if the RAID5 array was already degraded.
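
For illustration, the core of such a check is just parsing /proc/mdstat on a timer; a stripped-down sketch (my real script also does the multicast part, which is omitted here, and notify-send needs to run inside the user's desktop session):

#!/bin/sh
#Simplified sketch: warn if any md array looks degraded (an "_" inside the [UU]-style status).
if grep -q '\[.*_.*\]' /proc/mdstat; then
    notify-send -u critical "RAID warning" "Degraded md array detected on $(hostname)"
fi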
 
* Thoughts for discussion

I understand that metadata 0.90 might be an obsolete format today and might not be used much. Maybe that's why I couldn't find any stories about live desktop environments ruining a RAID array. But it still seems dangerous for the live desktop to blindly autostart all RAID arrays during boot. Example: What if the array was already degraded, and the owner wanted to "freeze" the array to avoid any chance of losing another disk and the whole array?

So, should the live desktop be auto starting RAID arrays?

#23 Re: Installation » Daedalus desktop live: Cannot load memtest » 2025-05-22 20:05:53

I think I figured out what's going on with the legacy boot's .bin files not running. Isolinux is a variant of the Syslinux boot loader that boots from ISO 9660 CD/DVD images.

Here's a description of the "kernel" keyword.

https://wiki.syslinux.org/wiki/index.ph … ERNEL_file

KERNEL file

Select the file SYSLINUX will boot. The "kernel" doesn't have to be a Linux kernel, it can be a boot sector or a COMBOOT file.

Chain loading requires the boot sector of the foreign operating system to be stored in a file in the root directory of the filesystem. Because neither Linux kernel boot sector images, nor COMBOOT files have reliable magic numbers, Syslinux will look at the file extension. The following extensions are recognized (case insensitive):

 none or other	Linux kernel image
 .0		PXE bootstrap program (NBP) [PXELINUX only]
 .bin		"CD boot sector" [ISOLINUX only]
 .bs		Boot sector [SYSLINUX only]
 .bss		Boot sector, DOS superblock will be patched in [SYSLINUX only]
 .c32		COM32 image (32-bit COMBOOT)
 .cbt		COMBOOT image (not runnable from DOS)
 .com		COMBOOT image (runnable from DOS)
 .img		Disk image [ISOLINUX only]

Apparently, the "kernel" keyword tries to detect the type of boot image based on the file name's extension. So isolinux must be trying to load the memtest .bin files as a "CD boot sector", instead of a "Linux kernel image".

Using the keyword "linux" forces isolinux to boot the file as a linux kernel image, regardless of the file extension.

If that's all correct, I'm now wondering: since the Excalibur's isolinux for legacy boot and grub for UEFI boot are capable of running the memtest .efi, is it really necessary to include the memtest .bin in the boot menus?

#24 Re: Installation » Daedalus desktop live: Cannot load memtest » 2025-05-22 18:17:03

I figured out how to get the memtest86+ia32.bin and memtest86+x64.bin files to boot in legacy boot. Edit the menu item and insert the word ".linux" before the file name. This also works with the .efi files. In other words, all of these work in legacy boot:

.linux /live/memtest86+ia32.bin
.linux /live/memtest86+ia32.efi
.linux /live/memtest86+x64.bin
.linux /live/memtest86+x64.efi

This suggests that isolinux.cfg menu items should be using the keyword "linux" instead of "kernel". Current menu items:

label memtest-efi
    menu label Memory test (memtest86+x64.efi)
    kernel /live/memtest86+x64.efi

label memtest-bin
        menu label Memory test (memtest86+x64.bin)
        kernel /live/memtest86+x64.bin
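
With that change, the entries would presumably look like this (untested beyond my manual edits at the boot prompt):

label memtest-efi
    menu label Memory test (memtest86+x64.efi)
    linux /live/memtest86+x64.efi

label memtest-bin
    menu label Memory test (memtest86+x64.bin)
    linux /live/memtest86+x64.bin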

I couldn't find any explanation of what "kernel" does. I took a guess that since the "linux" keyword works for loading /live/vmlinuz (a compressed kernel image), and "memtest86+x64.bin" is a bzImage (also compressed), the "linux" keyword might work for it too, and it did on my legacy-boot-only PC.

Perhaps the .bin file was created for users using an older version of isolinux that does not know how to boot an efi file.

#25 Re: Installation » Daedalus desktop live: Cannot load memtest » 2025-05-22 06:48:43

Here are my test results. In addition to the menu items, I also tested the other two ia32 memtest files included in the .iso. I found some unexpected issues, so I organized my notes into a Q/A format.

* Tested images
- devuan_excalibur_6.0-preview-2025-05-19_1437_amd64_desktop-live.iso
- devuan_excalibur_6.0-preview-2025-05-19_1919_amd64_minimal-live.iso

* Memtest file presence test
Q) Are the memtest files present in the .iso?
A) Yes, both .isos contain 4 memtest files:
  - /live/memtest86+ia32.bin
  - /live/memtest86+ia32.efi
  - /live/memtest86+x64.bin
  - /live/memtest86+x64.efi

* Menu item test
Q) Do both memtest menu items refer to the correct file name?
A)
  - desktop-live, legacy boot : Yes
  - desktop-live, UEFI boot   : Yes
  - minimal-live, legacy boot : Yes
  - minimal-live, UEFI boot   : NO!
    - error: file '/live/memtest-x64.bin' not found.
    - error: file '/live/memtest-x64.efi' not found.
    - Looks like the file name is missing the "86+". This is confirmed by looking at the .iso's /boot/grub/grub.cfg.
    - I can edit the menu item to the correct file name, and memtest86+x64.bin and .efi are then launched correctly.

* Memtest starts successfully test
Q) Does the memtest actually run in legacy boot and UEFI boot?
A)
  * Legacy boot, desktop-live and minimal-live
    - /live/memtest86+ia32.bin : The PC hangs.
    - /live/memtest86+ia32.efi : Memtest works.
    - /live/memtest86+x64.bin  : The PC hangs.
    - /live/memtest86+x64.efi  : Memtest works.

  * UEFI boot, desktop-live and minimal-live
    - /live/memtest86+ia32.bin : Memtest works.
    - /live/memtest86+ia32.efi : Memtest works.
    - /live/memtest86+x64.bin  : Memtest works.
    - /live/memtest86+x64.efi  : Memtest works.

Since the ia32 versions are not available from the menu, I edited the menu item and manually typed in the file name.

And I still have NO CLUE why in legacy boot mode, the .efi files WORK, but the .bin files HANG. Based on the file extension, I would have expected that to be reversed.

- PCs that I tested on:
  - A 2011 PC, so it's legacy boot only.
  - A 2019 PC, which has both boot options.
  - A 2022 mini PC, which is UEFI boot only.

The legacy and UEFI results are confirmed on multiple PCs.

* Bonus test: minimal-live's accessible beep
Q) I discovered that the minimal-live's menu's "access" and "access-toram" beeping is INTENTIONAL. It's described in the README_minimal-live.txt. Do both boot mode's menus trigger the accessible beep?
A)
  - minimal-live, legacy boot : Yes
  - minimal-live, UEFI boot   : NO!
    - In legacy boot's config file "isolinux/isolinux.cfg", both the "access" and "access-toram" menu labels have a 0x7 byte, the ASCII beep character.
    - But in the UEFI boot's config file "/boot/grub/grub.cfg", these menu items do NOT have the 0x7 byte. Maybe it was left out by mistake? Or maybe grub.cfg doesn't support the beep character?

* Other useful things that I learned from this test
  - At the boot menu, you can tell if the PC booted in legacy boot or UEFI boot by looking at the background.
    - In legacy boot, the desktop-live has a colorful blue background, and the minimal-live has a bright white background.
    - In UEFI boot, both have a plain black background. It looks like something you'd see in a console window.

  - The location of the boot menu config is:
    - Legacy boot: /isolinux/isolinux.cfg
    - UEFI boot: /boot/grub/grub.cfg
