I assumed that I could safely boot into a live desktop on any PC, and that it would not change anything on the PC unless I explicitly ran commands to do so. But I have encountered two instances where the live desktop broke something on a PC.
* PC1
- OS: ubuntu 10.04
- RAID setup:
- /dev/md0 - RAID1: 2xSSD for the OS files. Metadata version: 0.90
- /dev/md1 - RAID1: 2xHDD for data files. Metadata version: 0.90
A few months back, I booted the Devuan Daedalus live desktop on this PC. After rebooting, the PC failed to boot past the initial grub menu. The error was something about not finding the disk. After some troubleshooting, I figured out that /dev/md0 failed to start, and it was caused by the Preferred Minor being changed from 0 to a large number, I think 127. The solution was to boot a live desktop environment and assemble the disks into /dev/md0 while updating the Preferred Minor number. Something like this:
mdadm --assemble /dev/mdx --update=super-minor --uuid=<RAID UUID>
After booting back into the OS, the data RAID1 also failed to start for the same reason, and I fixed it using the same solution.
* PC2
- OS: devuan jessie
- RAID setup:
- /dev/md0 - RAID1: 2xSSD for the OS files. Metadata version: 0.90
- /dev/md1 - RAID5: 4xSSD for data files. Metadata version: 0.90
Last week, while testing the Devuan Excalibur Preview's memtest, I decided to boot into the live desktop just to see if there were any obvious problems to report. During the boot, the console output showed that it had autostarted the arrays, but the RAID5 was started with only THREE disks instead of 4. I have no idea why, and at the time, I didn't investigate because I was doing the memtest legacy vs UEFI boot tests.
After completing the memtest boot tests, I booted back into the PC. Unlike the first PC, this PC correctly started the OS RAID /dev/md0, and I'm not sure why. Maybe jessie's RAID driver is smarter than the ubuntu 10.04 RAID driver and it doesn't depend on the preferred minor number.
However, it reported that the /dev/md1 array had totally failed. In hindsight, what might have happened was that the RAID driver tried to start the 4 SSDs (which all had the same RAID UUID), but it detected that 3 of the disks were "out of sync" with the 4th, and it somehow decided to fail the other 3 (or maybe they were "removed") and keep the 4th. So /dev/md1 was started with 1 active disk, thus a failed array.
Using dummy drive letters w-z, the RAID5 disks looked like this:
/dev/sdw - Preferred Minor 1
/dev/sdx - Preferred Minor 126
/dev/sdy - Preferred Minor 126
/dev/sdz - Preferred Minor 126
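For reference, the mismatched minors can be read per member disk with "mdadm --examine". A hedged sketch; the helper name and the trimmed sample output below are illustrative, not a real capture from these disks:

```shell
# Pull the 0.90 superblock's "Preferred Minor" field out of
# `mdadm --examine` output.
preferred_minor() {
    awk -F: '/Preferred Minor/ { gsub(/[[:space:]]/, "", $2); print $2 }'
}

# Real use (as root), one member disk at a time:
#   mdadm --examine /dev/sdx | preferred_minor
# Illustrative trimmed sample of what --examine prints for 0.90 metadata:
sample='        Version : 0.90.00
Preferred Minor : 126'
printf '%s\n' "$sample" | preferred_minor   # -> 126
```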
At first, I thought I had lost the data on the array. After thinking about it, I realized that the 3 disks x/y/z could work as a functioning (but degraded) array, and would still have the data. The solution was to stop the currently running failed md1 array containing just sdw, wipe out the RAID info on sdw, start the RAID array with the other 3 disks (while fixing the preferred minor number), and add sdw back into the RAID. Something like this:
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sdw1
mdadm --assemble /dev/md1 --update=super-minor --uuid=<the RAID5's RAID UUID>
mdadm --manage /dev/md1 --add /dev/sdw1
And that worked. As far as I can tell, I didn't lose anything on the RAID5 array.
Also, I'm pretty sure that the RAID5 wasn't already degraded, because I wrote my own RAID monitoring script that checks the status every 5 minutes. If the RAID5 had been degraded, the script would have sent out a multicast, and every PC on my LAN has another script that listens for this multicast and displays an error notification on the desktop (using notify-send). I would have noticed if the RAID5 array was already degraded.
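My actual monitoring script isn't shown here, but the core degraded-array check can be sketched like this (the function name and the sample mdstat excerpt are illustrative):

```shell
# Flag any array in /proc/mdstat where fewer members are active than
# configured: a "[n/m]" field with m < n (e.g. "[2/1]") means degraded.
check_mdstat() {
    awk '
        /^md/ { dev = $1 }
        match($0, /\[[0-9]+\/[0-9]+\]/) {
            split(substr($0, RSTART + 1, RLENGTH - 2), a, "/")
            if (a[2] + 0 < a[1] + 0)
                print dev ": degraded, " a[2] " of " a[1] " devices active"
        }
    '
}

# Real use: check_mdstat < /proc/mdstat  (then multicast + notify-send on hits)
sample='md1 : active raid1 sdb1[1]
      100352 blocks super 0.90 [2/1] [_U]
md0 : active raid1 sda1[0] sdc1[0]
      100352 blocks super 0.90 [2/2] [UU]'
printf '%s\n' "$sample" | check_mdstat   # -> md1: degraded, 1 of 2 devices active
```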
* Thoughts for discussion
I understand that metadata 0.90 might be an obsolete format today and might not be used much. Maybe that's why I couldn't find any stories about live desktop environments ruining a RAID array. But it still seems dangerous for the live desktop to blindly autostart all RAID arrays during boot. Example: What if the array was already degraded, and the owner wanted to "freeze" the array to avoid any chance of losing another disk and the whole array?
So, should the live desktop be auto starting RAID arrays?
Last edited by Eeqmcsq (2025-05-26 20:20:21)
Offline
I think you make a good case for turning mdadm off in the live isos. Easy enough to do.
In /etc/default/mdadm
# START_DAEMON:
# should mdadm start the MD monitoring daemon during boot?
START_DAEMON=false
If you need mdadm in the live session, turn it on with /etc/init.d/mdadm start
Now all I gotta do is remember to put it in the release notes.
Might be better to make a hook script so it could be turned on or off at the boot command. (talking to myself now)
Offline
Thanks. I'll mark this as solved.
Offline
I tested the Aug 13 Excalibur preview live desktop, both UEFI and legacy boot, and it's still auto starting the RAID array. I confirmed that /etc/default/mdadm shows START_DAEMON is set to false, but the output of /proc/mdstat shows my test RAID array was started at /dev/md127.
https://files.devuan.org/devuan_excalibur/desktop-live/
devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso
devuan@devuan:~$ cat /etc/default/mdadm
# mdadm Debian configuration
#
# You can run 'dpkg-reconfigure mdadm' to modify the values in this file, if
# you want. You can also change the values here and changes will be preserved.
# Do note that only the values are preserved; the rest of the file is
# rewritten.
#
# AUTOCHECK:
# should mdadm run periodic redundancy checks over your arrays? See
# /etc/cron.d/mdadm.
AUTOCHECK=true
# AUTOSCAN:
# should mdadm check once a day for degraded arrays? See
# /etc/cron.daily/mdadm.
AUTOSCAN=true
# START_DAEMON:
# should mdadm start the MD monitoring daemon during boot?
START_DAEMON=false
# DAEMON_OPTIONS:
# additional options to pass to the daemon.
DAEMON_OPTIONS="--syslog"
# VERBOSE:
# if this variable is set to true, mdadm will be a little more verbose e.g.
# when creating the initramfs.
VERBOSE=false
devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
100352 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
- /var/log/kern.log, partial output, showing that md127 was started.
2025-08-24T19:51:23.214552+00:00 localhost kernel: [drm] GART: num cpu pages 262144, num gpu pages 262144
2025-08-24T19:51:23.214553+00:00 localhost kernel: [drm] PCIE GART of 1024M enabled (table at 0x0000000000162000).
2025-08-24T19:51:23.214574+00:00 localhost kernel: radeon 0000:00:01.0: WB enabled
2025-08-24T19:51:23.214577+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000020000c00
2025-08-24T19:51:23.214578+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000020000c0c
2025-08-24T19:51:23.214579+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000072118
2025-08-24T19:51:23.214581+00:00 localhost kernel: radeon 0000:00:01.0: radeon: MSI limited to 32-bit
2025-08-24T19:51:23.214582+00:00 localhost kernel: radeon 0000:00:01.0: radeon: using MSI.
2025-08-24T19:51:23.214583+00:00 localhost kernel: [drm] radeon: irq initialized.
2025-08-24T19:51:23.214584+00:00 localhost kernel: [drm] ring test on 0 succeeded in 1 usecs
2025-08-24T19:51:23.214585+00:00 localhost kernel: [drm] ring test on 3 succeeded in 3 usecs
2025-08-24T19:51:23.214586+00:00 localhost kernel: md/raid1:md127: active with 2 out of 2 mirrors
2025-08-24T19:51:23.214588+00:00 localhost kernel: md127: detected capacity change from 0 to 200704
2025-08-24T19:51:23.214590+00:00 localhost kernel: usb 1-3.2: New USB device found, idVendor=046d, idProduct=c31c, bcdDevice=49.20
2025-08-24T19:51:23.214609+00:00 localhost kernel: usb 1-3.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
2025-08-24T19:51:23.214610+00:00 localhost kernel: usb 1-3.2: Product: USB Keyboard
2025-08-24T19:51:23.214612+00:00 localhost kernel: usb 1-3.2: Manufacturer: Logitech
* More info
I'm also testing an Excalibur installation on a PC using devuan_excalibur_6.0-20250823_amd64_netinstall.iso. I now see that the annoyance of auto-starting RAID arrays on boot isn't just a problem with the live desktop environment. My Excalibur PC is also autostarting the test RAID array I created, even though I didn't specify for this RAID array to be assembled on boot.
I tried setting START_DAEMON=false, and that didn't disable the RAID auto start.
So I'll do a bunch of digging to find a solution for disabling this RAID auto start. If I figure it out, I'll post an update.
Offline
FWIW: My experience has been mainly with hardware raid, but if I remember correctly the kernel md modules probed for raid at boot.
That means the Live media kernel version, which may differ from the on-disk version, may come into play.
If it is still done that way, you might be able to blacklist the md? modules or pass something like raid=noautodetect.
Looking...
Running 'modinfo md-mod' gives two parameters, 'start_dirty_degraded:int' and 'create_on_open:bool', but grepping drivers/md/md-autodetect.c does give a hit on 'noautodetect', suggesting that warrants further investigation.
Offline
@g4sra - I tried the suggestion about "raid=noautodetect" as a kernel boot arg, but that didn't work. Based on my research, that's supposed to work only when the RAID code is compiled directly into the kernel, instead of being a separately loaded module.
After searching online and lots of testing of the init ram disk, including borking it a few times, I think I found the easiest solution.
* Solution
Add these two lines to /etc/mdadm/mdadm.conf:
#Disable all auto-assembly, so that only arrays explicitly listed in mdadm.conf or on the command line are assembled.
AUTO -all
#At least one ARRAY line is required for update-initramfs to copy this mdadm.conf into the init RAM disk. Otherwise,
#update-initramfs will ignore this file and auto generate its own mdadm.conf in the init RAM disk.
ARRAY <ignore> UUID=00000000:00000000:00000000:00000000
Then rebuild the init RAM disk:
sudo update-initramfs -u
* Explanation
The AUTO keyword is described in the mdadm.conf man pages as a way to allow or deny auto-assembling an array. The "all" keyword matches all metadata.
From my tests "AUTO -all" doesn't affect any ARRAY lines that you manually define in mdadm.conf. Also, your ARRAY lines can be placed before or after the "AUTO -all" line, and they'll still work.
The reason "AUTO -all" isn't enough on its own is that when you run "update-initramfs", the script for handling mdadm first checks whether your mdadm.conf has any ARRAY lines defined. If not, the script ignores your mdadm.conf containing the "AUTO -all" and runs its own script, "mkconf", to auto-generate an mdadm.conf containing any RAID arrays that it detects on the PC, even if they're not currently started, and even if you didn't want any of those arrays to be auto-started.
So we define a dummy ARRAY line to ensure that your mdadm.conf is copied into the init ramdisk.
See: /usr/share/initramfs-tools/hooks/mdadm, /usr/share/mdadm/mkconf
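To illustrate the gist of that check (this is NOT the actual hook code from /usr/share/initramfs-tools/hooks/mdadm, just a simplified sketch of the behavior described above):

```shell
# Does a given mdadm.conf define at least one array?
conf_has_arrays() {
    grep -q '^ARRAY' "$1"
}

conf=$(mktemp)
echo 'AUTO -all' > "$conf"
if conf_has_arrays "$conf"; then
    echo "conf would be copied into the initramfs"
else
    echo "conf would be ignored; mkconf would generate its own"
fi
# Adding the dummy ARRAY line flips the outcome:
echo 'ARRAY <ignore> UUID=00000000:00000000:00000000:00000000' >> "$conf"
conf_has_arrays "$conf" && echo "conf would be copied into the initramfs"
rm -f "$conf"
```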
* How to check the mdadm.conf in the init ram disk
Create a temp dir to hold the files. Then extract the files from the init ram disk. Replace the file name with your initrd file. The files are extracted into the current directory.
mkdir /tmp/testdir
cd /tmp/testdir
(cpio -id ; zstdcat | cpio -id) < /boot/initrd.img-6.12.41+deb13-amd64
cat ./etc/mdadm/mdadm.conf
* Other info
I tracked down exactly what triggers the RAID arrays to get auto started during boot, even if the init ram disk's mdadm.conf contains no ARRAY definitions. It's this udev rule file:
/usr/lib/udev/rules.d/64-md-raid-assembly.rules
I don't know how to read udev rules, so I don't know what this file does, but I've confirmed that moving this file out of this directory, then rebuilding the init RAM disk will disable auto starting arrays on boot. Since there are other md related rules in here, I don't know what the side effects are of removing just this file.
Offline
before starting a new thread(and potentially requesting it to be a frontpage sticky) i wanted to clarify something
(mostly to @fsmithred given admin/elevated-status/mod/etc. but @all as well)
if i understand this correctly there may be millions of live-cd/dvd/usb/etc distros existing out there globally.......
that.....
bork the booting of an uncertain number of machines owned/used by an unknown number of unknowing victims?
seems like the makings of a good commentary on ars technica, hacker news, reddit, slashdot, etc...
hmmm.....
i'll be the first to admit that i was unaware that many/most/who_knows? of currently circulating live-distros(cd/dvd/usb/etc), claiming to be completely safe to boot up on any machine without changing _anything_ on that physical computer, instead...
bork it?
i do not think that the vast majority of casual live-distro users(past/present/future) are aware of this potential issue.
in other words, i guess i really want to be wrong about this. hoping anywho.
edited to turn it down a couple notches...squirrels and coffee you know...
Last edited by stargate-sg1-cheyenne-mtn (2025-08-25 12:29:34)
Be Excellent to each other and Party On!
https://www.youtube.com/watch?v=rph_1DODXDU
https://en.wikipedia.org/wiki/Bill_%26_Ted%27s_Excellent_Adventure
Do unto others as you would have them do instantaneously back to you!
Offline
Eeqmcsq, I added your edits to mdadm.conf and made a new iso. Thank you!
https://files.devuan.org/devuan_excalib … p-live.iso
stargate, this is the first such report I've seen and mdadm has been active in devuan-live for 10 years and refracta isos for a few years more than that (since 2011 I think). I don't know what other live distros include mdadm, so I can only say that thousands or maybe tens of thousands of users have booted these isos.
Edit/update: That iso somehow got the default /etc/default/mdadm with START_DAEMON=true. I'm making a new iso now.
Also, I can't seem to start mdadm when running this live-iso. Can you still use the mdadm command without the service running?
Offline
This one has START_DAEMON=false in /etc/default/mdadm:
https://files.devuan.org/devuan_excalib … p-live.iso
Offline
looks like mdadm has been in debian/ubuntu since at least debian 3 and ubuntu 5.10 based on this webpage:
https://www.howtoforge.com/linux_software_raid
so it is possible that an original or a remix might cause these failures/issues
using DDG Lite and searching for ubuntu 8.04 mdadm finds numerous results, just one of which is:
https://wiki.ubuntu.com/BootDegradedRaid
which appears to have similar questions regarding out of the box operations
@fsmithred, fortunately there would have been few of these "situations", but the number is non-zero, and one must wonder how many borked systems were never accurately diagnosed as to the exact cause of failure. After all, the innocent live-distro user believes wholeheartedly that booting their liveusb thumbdrive in an unknowing/unsuspecting owner's machine doesn't change anything on the computer itself... because that is what the distros claim. A claim we now know is sometimes false.
Last edited by stargate-sg1-cheyenne-mtn (2025-08-25 12:52:48)
Offline
@Eeqmcsq, looking at my daedalus /lib/udev/rules.d/64-md-raid-assembly.rules, as an alternative to removing the rules file: 'nodmraid' on the kernel command line will cause rule processing to jump to the end, bypassing all of the 'ACTION' directives.
# "nodmraid" on kernel command line stops mdadm from handling
# "isw" or "ddf".
IMPORT{cmdline}="noiswmd"
IMPORT{cmdline}="nodmraid"
ENV{nodmraid}=="?*", GOTO="md_inc_end"
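Annotated, based on my reading of the udev man page, those directives do roughly this:

```
# IMPORT{cmdline} copies a kernel command-line flag into the udev
# environment; a bare flag with no '=value' is imported with value "1".
IMPORT{cmdline}="noiswmd"
IMPORT{cmdline}="nodmraid"
# "?*" matches any non-empty value, so if nodmraid was passed at boot,
# processing jumps to the md_inc_end label at the bottom of the file,
# skipping the assembly rules in between.
ENV{nodmraid}=="?*", GOTO="md_inc_end"
```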
What needs to be considered is whether a user is expecting to be able to install (onto md storage) from a Live image?
Offline
g4sra, thanks. Good find.
If "nodmraid" in the boot command works, then I can add that to all the boot menu entries except one and label it "with mdadm" or similar. I like that idea better.
Offline
There seem to be differing methods of activation depending on whether md raid is compiled in or built as a module, plus that udev rule that would run.
I found this type of thing a bane when performing data recovery by attaching faulty/corrupt media to a workstation for repair/recovery.
Kconfig CONFIG_MD_AUTODETECT
file drivers/md/md-autodetect.c
#ifdef CONFIG_MD_AUTODETECT
static int __initdata raid_noautodetect;
#else
static int __initdata raid_noautodetect=1;
#endif
if (raid_noautodetect)
printk(KERN_INFO "md: Skipping autodetection of RAID arrays. (raid=autodetect will force)\n");
printk(KERN_INFO "md: Waiting for all devices to be available before autodetect\n");
printk(KERN_INFO "md: If you don't use raid, use raid=noautodetect\n");
Offline
@g4sra: Here are my test results of adding nodmraid to the boot menu kernel options. Short answer: no difference. My test raid arrays are still auto started in the Excalibur live desktop at md126/md127.
Test file: devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso
At the initial menu, I edited the boot options and included "nodmraid", and "nodmraid=1". Then I ran the live desktop.
- Test 1: No cmd line change:
devuan@devuan:~$ cat /proc/cmdline
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram nottyautologin
devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active (auto-read-only) raid1 sda2[1] sdc2[0]
98944 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
100352 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
- Test 2, cmd line change, added "nodmraid":
devuan@devuan:~$ cat /proc/cmdline
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram nottyautologin nodmraid
devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active (auto-read-only) raid1 sda2[1] sdc2[0]
98944 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
100352 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
- Test 3, cmd line change, added "nodmraid=1":
devuan@devuan:~$ cat /proc/cmdline
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram nottyautologin nodmraid=1
devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active (auto-read-only) raid1 sdb1[1] sda1[0]
100352 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active (auto-read-only) raid1 sdb2[1] sda2[0]
98944 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
================================
@stargate-sg1-cheyenne-mtn:
Although mdadm may have been in Debian/Ubuntu since the 2000s, it occurred to me that the live desktops might not have mdadm installed or running. To satisfy my curiosity, I tested various Devuan live desktops on my test PC with my 2 test RAID arrays to see when the live desktop started auto starting the RAID arrays.
Short answer: Beowulf, based on Debian 10 Buster.
- jessie (based on Debian 8 Jessie)
- devuan_jessie_1.0.0_amd64_desktop-live.iso
- /proc/mdstat : File not found.
- sudo which mdadm : mdadm not found
- lsmod | grep raid : No raid0, raid1, etc. modules.
- ascii (based on Debian 9 Stretch)
- devuan_ascii_2.0.0_amd64_desktop-live.iso
- /proc/mdstat : File not found.
- sudo which mdadm : mdadm not found
- lsmod | grep raid : No raid0, raid1, etc. modules.
- beowulf (based on Debian 10 Buster)
- devuan_beowulf_3.1.1_amd64_desktop-live.iso
- /proc/mdstat : File found. RAID arrays auto started at md127, md126.
- sudo which mdadm : /sbin/mdadm
- lsmod | grep raid : raid0, raid1, etc. modules found
So that might explain why there hasn't been a history of complaints about the live desktop ruining RAID arrays - they weren't auto started until just a few years ago.
Offline
@fsmithred: I haven't tested your two isos yet. Instead, I satisfied my curiosity by investigating exactly what this "START_DAEMON" setting does and how it affects mdadm.
The START_DAEMON option is used in the script /etc/init.d/mdadm to determine whether it should start mdadm with --monitor. According to the man pages, the monitor mode periodically polls the arrays for status changes and reports these events by logging to the syslog, sending a mail alert, etc.
$ ps -ef | grep mdadm
root 1434 1 0 19:57 ? 00:00:00 /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog
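So the setting only gates this monitor process, not array assembly. A toy sketch of the effect (not the actual /etc/init.d/mdadm source, just an illustration):

```shell
# Mimic the init script sourcing /etc/default/mdadm and deciding whether
# to launch the monitor daemon.
DEFAULTS=$(mktemp)
echo 'START_DAEMON=false' > "$DEFAULTS"   # what the live iso ships
. "$DEFAULTS"
if [ "$START_DAEMON" = "true" ]; then
    echo "would start: mdadm --monitor --daemonise --scan --syslog"
else
    echo "monitor daemon stays off (array assembly is unaffected)"
fi
rm -f "$DEFAULTS"
```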
At first, I thought this option was harmless. Then I see this in the man pages:
MONITOR MODE
<...>
As well as reporting events, mdadm may move a spare drive from one array to another if they are in the same spare-group or domain and if the destination array has a
failed drive but no spares.
That's definitely a NO for a live desktop.
Otherwise, in my tests, turning off the START_DAEMON didn't affect my mdadm cmd line commands.
* Other issues
While testing the START_DAEMON setting to see if "mdadm --monitor" really writes to the syslog, I couldn't find /var/log/syslog on my Excalibur PC! The package "rsyslog" wasn't installed. Is that an oversight in the Excalibur netinstall iso, or is rsyslog no longer installed by default? If it's the latter, what are the alternative log files?
On a side note, my tests show that mdadm --monitor DOES log to the syslog when it detects a failed drive or a drive added back to a degraded array. I used mdadm --fail, --remove, --add, and observed the syslog.
Last edited by Eeqmcsq (2025-08-26 03:59:39)
Offline
@Eeqmcsq, have you blocked md autodetection at boot?
The results that you have posted suggest md raid was activated during boot, not after.
blacklist the md? modules or pass something like raid=noautodetect?.
nodmraid is an alternative to removing the udev rule.
@Eeqmcsq, looking at my daedalus /lib/udev/rules.d/64-md-raid-assembly.rules as an alternative to removing the script, 'nodmraid' on the kernel command line will cause the script to jump to the end bypassing all of the 'ACTION' directives.
Offline
@g4sra - Yep. I've tried 2 of your 3 suggestions. Adding raid=noautodetect to the boot menu's kernel options didn't disable autostarting the RAID. Adding nodmraid also had no effect. The nodmraid test output can be found in my earlier comment above.
I did NOT try blacklisting the md modules, since I had no idea how to blacklist a module. That's on my list of things to investigate.
And just to make sure I didn't screw anything up in my previous tests, I retested several combinations of raid=noautodetect and nodmraid in both the Excalibur live desktop and my Excalibur PC. In all cases, the RAID arrays auto started at md127, md126.
These were the test cases I ran tonight.
- default: No kernel cmd line changes
- Added: raid=noautodetect
- Added: nodmraid
- Added: nodmraid=1
- Added: raid=noautodetect nodmraid
- Added: raid=noautodetect nodmraid=1
For the live desktop, I used "devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso". For the PC installation, I set my mdadm.conf to this and updated the init ram disk.
#AUTO -all
ARRAY <ignore> UUID=00000000:00000000:00000000:00000000
During the tests, I confirmed the cmd line args by checking /proc/cmdline.
Offline
I did NOT try blacklisting the md modules, since I had no idea how to blacklist a module. That's on my list of things to investigate.
Simply create a file in /etc/modprobe.d/ with the module to be blacklisted. (And now that I see/think about it, shouldn't the politically correct police have had that term changed to not be offensive in their minds....)
An example of the system doing it.
root@9600k:~# cat /etc/modprobe.d/intel-microcode-blacklist.conf
# The microcode module attempts to apply a microcode update when
# it autoloads. This is not always safe, so we block it by default.
blacklist microcode
Offline
lsmod | grep raid or lsmod | grep md to see what modules are loaded.
I did that and decided that md_mod was probably the one to blacklist. When I went to /etc/modprobe.d to create a file, I saw that mdadm.conf was already there. Does anyone else have this file?
# mdadm module configuration file
# set start_ro=1 to make newly assembled arrays read-only initially,
# to prevent metadata writes. This is needed in order to allow
# resume-from-disk to work - new boot should not perform writes
# because it will be done behind the back of the system being
# resumed. See http://bugs.debian.org/415441 for details.
options md_mod start_ro=1
Offline
I tried adding "blacklist md_mod" to /etc/modprobe.d/mdadm.conf and rebuilding the initramfs. Did it with and without commenting out the existing line (...ro=1) and it didn't help. All the raid modules are still being loaded.
Also noted that 'modprobe -r md_mod' fails with message that md_mod is in use. There's no raid on my test box, so I don't know who is using it. (mdadm is not running)
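If it helps, the "in use" refusal is usually because the raid personality modules (raid1, raid0, ...) hold references on md_mod, so md_mod can't be unloaded until they are removed first. A small sketch that pulls the holders out of lsmod's "Used by" column (the sample lsmod output is inlined here for illustration):

```shell
# Print what is holding md_mod, per lsmod's 4th column.
md_holders() {
    awk '$1 == "md_mod" { print ($4 == "" ? "(none)" : $4) }'
}

# Real use: lsmod | md_holders
sample='Module                  Size  Used by
raid10                 65536  0
raid1                  57344  0
md_mod                212992  2 raid10,raid1'
printf '%s\n' "$sample" | md_holders   # -> raid10,raid1
```

If that prints e.g. raid10,raid1, then something like 'modprobe -r raid10 raid1 md_mod' might be needed to get them all out.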
Another option I thought of would be to build the iso without mdadm installed but with the deb package in the iso in case someone needed it. I think that could work, but I'm not sure.
One other thought came up after booting an older kernel whose initrd was made before installing mdadm. That would be to have two initrds in the iso, one of them made without mdadm installed and one with. That's more complicated to automate.
Offline
@fsmithred
Re /etc/modprobe.d/mdadm.conf: mine comes from the installed package:
ii mdadm 4.2-5 amd64 Tool to administer Linux MD arrays (software RAID)
With the execute mode removed from /usr/share/initramfs-tools/hooks/mdadm, mdadm is not invoked (not installed) in the initramfs.
That, in conjunction with the sysvinit 'mdadm' service being disabled, means md_mod is not automatically loaded at boot.
NB: If dm_raid has loaded that would use md_mod.
Offline
@fsmithred: Tonight, I tested your two images:
- devuan_excalibur_6.0-preview-2025-08-25_0923_amd64_desktop-live.iso
- devuan_excalibur_6.0-preview-2025-08-25_1056_amd64_desktop-live.iso
First some quick replies to your comments:
- You said the 0923 iso "somehow got the default /etc/default/mdadm with START_DAEMON=true".
- Ans: I didn't see that on the 0923 iso. It was START_DAEMON=false, the same as the 2025-08-13 iso that I tested a few days ago.
- You also asked if I could "still use the mdadm command without the service running?"
- Ans: Yes, I was able to run the mdadm command in the 0923 iso. These two examples worked:
- sudo mdadm --examine --scan
- sudo mdadm --stop /dev/md127
* Test results for 0923 and 1056 were the same
- /proc/mdstat showed that the RAID arrays auto started at md127, md126.
- /etc/default/mdadm has START_DAEMON=false
- Checking "ps -ef | grep mdadm", I didn't see any "mdadm --monitor". The mdadm monitoring/reporting daemon was NOT running.
- Can I run the mdadm cmd? Yes, these cmds worked:
- sudo mdadm --examine --scan
- listed both of my test RAID arrays
- sudo mdadm --stop /dev/md127
* Investigation
So why did the RAID arrays auto start? Here are my investigation notes. Note that I tested using legacy boot.
- /etc/mdadm/mdadm.conf does contain the lines "AUTO -all" and the dummy line "ARRAY <ignore>" with UUID of all zeroes. This looks OK.
- I extracted the contents of /boot/initrd.img-6.12.38+deb13-amd64. This init ram disk in the live desktop apparently has 4 total archives concatenated together, with the last one being compressed.
(cpio -id ; cpio -id ; cpio -id ; zstdcat | cpio -id) < /boot/initrd.img-6.12.38+deb13-amd64
And the contents of ./etc/mdadm/mdadm.conf contains the two lines "AUTO -all" and "ARRAY <ignore>". So this looks OK.
And yet, the live-desktop STILL auto started my test RAID arrays???
- Checking the output of dmesg for clues, the second line shows the "Command line" args. And one of the lines show initrd=/live/initrd.img.
Aha, /live looks like something on the .iso image.
- I mounted the file "devuan_excalibur_6.0-preview-2025-08-25_0923_amd64_desktop-live.iso" to the local file system, copied ./live/initrd.img to a temp directory and extracted it using the same 3* cpio + zstdcat|cpio, and examined ./etc/mdadm/mdadm.conf. The contents look like a default file. No "AUTO -all" or dummy "ARRAY <ignore>" lines.
- Conclusion: This would explain why the RAID arrays got autostarted. The live-desktop loaded THIS init ram disk that's located on the .iso.
- More info: The legacy boot menu comes from the iso image file "./isolinux/isolinux.cfg", which adds the "initrd" and other args to the boot cmd line.
label toram
menu label devuan-live (amd64) (load to RAM)
linux /live/vmlinuz
append initrd=/live/initrd.img boot=live username=devuan toram nottyautologin
* Test and investigation (UEFI live desktop)
- The test results and investigation results were the same, except that dmesg does NOT contain an "initrd" arg, so there's no clue about what init ram disk was loaded.
- Mounting the .iso again and checking "./boot/grub/grub.cfg", the menu entry contains a separate initrd line, which refers to the same "/live/initrd.img".
menuentry "devuan-live (load to RAM)" {
set gfxpayload=keep
linux /live/vmlinuz boot=live username=devuan toram nottyautologin
initrd /live/initrd.img
}
So both the UEFI and legacy boot must be loading the same init ram disk on the .iso file, which doesn't contain the two lines "AUTO -all" and "ARRAY <ignore>".
Offline
- I mounted the file "devuan_excalibur_6.0-preview-2025-08-25_0923_amd64_desktop-live.iso" to the local file system, copied ./live/initrd.img to a temp directory and extracted it using the same 3* cpio + zstdcat|cpio, and examined ./etc/mdadm/mdadm.conf. The contents look like a default file. No "AUTO -all" or dummy "ARRAY <ignore>" lines.
I think you nailed it. The build script copies the kernel and initramfs to the /live directory from the chroot system that's being built. I think it's doing it too early, so it's not getting the final changes. I have to look at the code to confirm this (and correct it).
FWIW - unmkinitramfs will unpack the initrd easily with just one command.
Offline
@fsmithred, been playing.
Blacklisting md_mod is not sufficient although it does prevent passive loading.
When 'mdadm' is invoked with certain parameters it will then in turn trigger the kernel to load md_mod.
That is why 'chmod ugo-x /usr/share/initramfs-tools/hooks/mdadm' and thus knobbling 'mdadm' fixed it for me.
NB, and knobbling the UDEV rules as well...
Last edited by g4sra (2025-08-28 10:58:49)
Offline
@fsmithred - You asked if anyone has the file "/etc/modprobe.d/mdadm.conf".
Ans: Yep, I also have that file on my excalibur PC, and I also see it in the init ram disk.
I ran a quick test to see what effect "start_ro" has. "start_ro" affects the RAID arrays I defined in /etc/mdadm/mdadm.conf, but it has no effect on the auto-started arrays at md127, md126. The setting controls whether the explicitly defined arrays are started "auto-read-only" or not; the auto-started arrays at md127 are always started "auto-read-only". I didn't dig any further to see what "auto-read-only" means, but the setting DOES show an effect.
When the "start_ro=1" line is defined (md22 is the raid number I explicitly defined in mdadm.conf):
$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active (auto-read-only) raid1 sdc2[1] sdb2[0]
98944 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md22 : active (auto-read-only) raid1 sdc1[1] sdb1[0]
100352 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
When the "start_ro=1" line is commented out:
$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md22 : active raid1 sda1[1] sdc1[0]
100352 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active (auto-read-only) raid1 sda2[1] sdc2[0]
98944 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
Next, I'll play around with blacklisting the md module. If I find out something new that hasn't already been mentioned, I'll post my test results.
Offline