The officially official Devuan Forum!

#26 Desktop and Multimedia » Excalibur RC1 + Xfce + slim + auto login shows Untitled window » 2025-10-12 20:22:19

Eeqmcsq
Replies: 31

Has anyone tried using Excalibur + slim display manager and turned on auto login? I'm seeing a black window covering the desktop with an "Untitled window" in the top panel. My investigation suggests that slim is launching this window. I can click on it and close it with Alt+F4, and the normal Xfce desktop is now shown.

  https://imgur.com/a/71EY4uv

I've seen this on two PCs now, (AMD A6-3620, AMD FX-8120), so I don't think it's something weird with one specific PC.

I don't see this problem in Excalibur + MATE, because choosing MATE installs lightdm, not slim. I also don't see this problem in Daedalus + Xfce + slim. So something must have changed in slim between Daedalus and Excalibur.

If no one has any solutions or workarounds, I'll have to download the slim source code in Daedalus and Excalibur, and find out what's going on - yuck sad

* Test details

- Image file: devuan_excalibur_6.0.0-rc1_amd64_netinstall.iso
  - Downloaded from: https://files.devuan.org/devuan_excalib … aller-iso/
- UEFI or Legacy boot doesn't matter.
- In the installer's "Software selection" screen, I chose these:
  - Devuan desktop environment
  - Xfce
  - SSH server
  - standard system utilities

- After the installation, manually log in to the Xfce desktop.
- Turn on slim's auto login:
  - sudo mousepad /etc/slim.conf
  - Uncomment "default_user", and specify <your user name>.
  - Uncomment "auto_login", and specify "yes".
- Reboot to trigger the auto login to the Xfce desktop
- In the Xfce desktop, the main desktop pane is blocked by a black window. Xfce's top and bottom panels appear on top of the window in their correct positions at the top and bottom of the screen. In the top panel, I see a window "button" titled "Untitled window".
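The two slim.conf edits above can also be scripted. This is a sketch that rehearses the sed edit on a throwaway copy first; the sample lines are my assumption of the stock commented-out defaults, and "myuser" is a placeholder:

```shell
# Rehearse the edit on a copy before touching the real /etc/slim.conf.
# The sample content mimics stock commented-out defaults (an assumption).
printf '#default_user        simone\n#auto_login          no\n' > /tmp/slim.conf.test
sed -i -e 's/^#\?default_user.*/default_user        myuser/' \
       -e 's/^#\?auto_login.*/auto_login          yes/' /tmp/slim.conf.test
cat /tmp/slim.conf.test
# If the copy looks right, run the same sed with sudo against /etc/slim.conf.
```

The `#\?` makes the sed work whether or not the lines were already uncommented (GNU sed syntax).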

If I right click on this "Untitled window" -> Launch New Instance, I see this error: "Failed to execute child process "/proc/1415/exe" (Permission denied)".
 
Using ps -ef | grep 1415, the 1415 process is "/usr/bin/slim -d".
 
If I run "sudo /proc/1415/exe", I see "slim: Another instance of the program is already running with the PID 1415", confirming that 1415 is slim.
 
My current conclusion is that this window was launched by slim, intended for the login screen. But after auto logging in, slim didn't clean up the unneeded window.
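For the record, the /proc/<pid>/exe trick used above generalizes to any mystery process. A self-contained sketch, using the shell's own PID ($$) instead of slim's 1415 so it runs anywhere (ps -o comm= -p <pid> gives the same short name as the first line):

```shell
# Resolve a PID to its program, as done above with slim's PID 1415.
pid=$$
cat "/proc/$pid/comm"        # short command name of the process
readlink "/proc/$pid/exe"    # full path of the running binary (Linux-only)
```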

#27 Re: Documentation » [HowTo] MATE Desktop: Change the Applications icon to the Devuan icon » 2025-09-23 03:50:31

Sad to hear about your being burned out. And yes, I did see something about "death metal" in the Excalibur directory, which is why I thought Excalibur would have a new icon file name. It's also why I spent this evening testing the other 3 releases - ascii, beowulf, chimaera - so I can complete the list of icon files in my original post and make sure the list is accurate.

#28 Re: Documentation » [HowTo] MATE Desktop: Change the Applications icon to the Devuan icon » 2025-09-23 01:33:13

Oh wow. Nope, I had no clue. I'm pretty much jumping from Jessie to Excalibur, with just a bit of Daedalus testing, so I had no clue that there have been themes for each release. Also, it just dawned on me that Excalibur hasn't actually been released yet. I've been so busy writing my own setup notes that I've been treating Excalibur like it's been released. That would mean that Excalibur will likely have a different icon file, right?

Should I just delete this post?

#29 Documentation » [HowTo] MATE Desktop: Change the Applications icon to the Devuan icon » 2025-09-23 00:17:58

Eeqmcsq
Replies: 4

When you install Devuan + Xfce, the Applications menu icon is the Devuan icon. But for MATE desktop, it uses the icon from your current theme. Here's how to change the icon.

- Install dconf-editor if it's not installed: sudo apt-get install dconf-editor
- Run dconf-editor.
- Go to: org -> mate -> panel -> menubar
- Change the icon setting's value to the string that matches your release:

Excalibur : sapphire-round-32x32
Daedalus  : sapphire-round-32x32
Chimaera  : deepsea-round-32x32
Beowulf   : gdo-icon
Ascii     : gdo-icon
Jessie    : gdo-icon

The setting name refers to a file located in /usr/share/pixmaps. Example: /usr/share/pixmaps/sapphire-round-32x32.png.

- Click the Apply button to save the change. The icon should change immediately.

- To undo the change, set the "menubar" value back to its default string: start-here
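If you prefer the command line, the same setting can presumably be written with dconf directly; the key name icon-name is my assumption based on dconf-editor's layout, so verify it in the GUI first. A tiny helper to sanity-check that an icon name resolves to a pixmap before applying it:

```shell
# icon_path NAME [DIR]: print the pixmap path a given icon name refers to.
icon_path() {
    printf '%s/%s.png\n' "${2:-/usr/share/pixmaps}" "$1"
}
icon_path sapphire-round-32x32
# → /usr/share/pixmaps/sapphire-round-32x32.png

# Then, assuming the key under menubar is icon-name (verify in dconf-editor):
#   dconf write /org/mate/panel/menubar/icon-name "'sapphire-round-32x32'"
```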

#30 Re: Installation » [SOLVED] Excalibur net install 20250823 not installing rsyslog » 2025-09-08 17:55:19

I just tested devuan_excalibur_6.0-20250830_amd64_netinstall.iso, and I confirmed that rsyslog IS installed. Thanks for making the new iso with this change.

#31 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-09-03 03:45:18

Sure, and thanks for listening to my request, AND for fixing the memtest.

#32 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-09-01 18:27:31

I tested devuan_excalibur_6.0-noraid-2025-08-29_1719_amd64_desktop-live.iso on the two PCs that I had reported in the first comment. I ran the live desktop, then rebooted into the PC, and the RAID arrays were started normally with no issues. I repeated the test 2 more times on each PC, and no RAID issues, or any other issues to report.

So the solution of not having any RAID installed by default on the live-desktop worked.

#33 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-31 06:09:28

* Quick summary

@fsmithred - I tested the live desktop iso devuan_excalibur_6.0-noraid-2025-08-29_1719_amd64_desktop-live.iso, and I confirmed that it did NOT auto start my test RAID arrays, which is good. I also confirmed that I can install the mdadm.deb in the root directory and do some RAID stuff.

The next test is the REAL test: run the desktop live on the 2 PCs that I reported in the first comment and make sure there are no other hidden surprises. Of course, I'll have to first run backups on those two PCs, which means digging out some extra HDDs, USB to SATA dock, etc. from my closet of tech stuff. So it might take a few days before I have more test results.

* Test notes

* Checking the iso's /live/initrd.img
  - file: devuan_excalibur_6.0-noraid-2025-08-29_1719_amd64_desktop-live.iso
  - Steps: Mount the iso, copy live/initrd.img to a temp directory.
  - cmd: unmkinitramfs initrd.img <output directory>
  - Weird. Extracting the initrd.img produces 3 directories: early, early2, main. Since everything works, this is probably just how the live desktop init ram disk is organized (the early archives typically hold CPU microcode).
  - Inside main, the contents look like the usual init ram disk.
  - find . | grep mdadm : No files found. Nothing related to mdadm is inside this ramdisk.
  - find . | grep rules.d : None of the rules files in udev/rules.d have any relation to md or raid.

Conclusion so far: If the init ram disk has no knowledge of RAID, it shouldn't auto start any of my test RAID arrays.
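The find/grep scan above can be bundled into a small helper. The demo below runs it over a synthetic tree standing in for the extracted main/ directory (directory and file names are illustrative):

```shell
# check_tree DIR: list RAID-related files under DIR, or print "clean".
check_tree() {
    hits=$(find "$1" | grep -i -e mdadm -e raid || true)
    if [ -n "$hits" ]; then printf '%s\n' "$hits"; else echo clean; fi
}

# Synthetic stand-in for an extracted initrd with no RAID support:
mkdir -p /tmp/demo-tree/usr/lib/udev/rules.d
touch /tmp/demo-tree/usr/lib/udev/rules.d/60-persistent-storage.rules
check_tree /tmp/demo-tree
# → clean
```

Against a real extraction, point it at the output directory of unmkinitramfs.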

* Checking the live desktop
  - cat /proc/mdstat  : file not found
  - sudo which mdadm  : file not found
  - lsmod | grep raid : No raid0, raid1, etc. modules.
  - lsmod | grep md   : No md_mod module.

Conclusion so far: Now that's what I call a solution. As predicted, since the init ram disk is totally clueless about RAID, it didn't (or couldn't) auto start any of the RAID arrays.

* Live desktop: installing mdadm.deb located on the root directory
  - install cmd: sudo dpkg -i /mdadm_4.4-11devuan3_amd64.deb
    - The install worked. There were some info messages about "live system is running on read-only media", and an error with grub-probe, but I'll see how far I can get.
  - Checking lsmod, md_mod is now loaded, but none of the raid modules are loaded.
  - cat /proc/mdstat: File exists. No RAID personalities listed. No arrays started.
  - sudo mdadm --examine --scan
    - Correctly discovered two RAID arrays, /dev/md/11 and /dev/md/22.
      - I don't know what the extra / means between the "md" and the number, but I'll keep going.
  - sudo mdadm --assemble /dev/md77 --uuid=<RAID UUID>
    - Array started on the intentionally weird md number, md77.
    - Array is confirmed in /proc/mdstat.
    - /proc/mdstat also shows the "raid1" personality.
    - lsmod now shows that the raid1 module is loaded. I guess the raid modules get loaded when they're needed.
      - This is convenient. The user doesn't have to manually load specific RAID modules in order to start their RAID arrays.
  - sudo mdadm --stop /dev/md77
    - Unsurprisingly, this also works.

  + Retest with no network connection (unplug from the PC).
    - Results: all same

  + Retest with no internet connection (plug PC into LAN, but unplug router's WAN).
    - Results: all same

Conclusion: Confirmed that the user can install mdadm locally and do RAID stuff, and there's no dependency on an Internet connection or any LAN connection.

#34 Re: Installation » [SOLVED] Excalibur net install 20250823 not installing rsyslog » 2025-08-29 18:40:19

I can file the bug. But since the next netinstall iso will include rsyslog and is coming soon, I'll test that netinstall build first. If rsyslog is confirmed to be installed, I'll skip the bug report.

#35 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-29 05:21:00

Here's what I've learned tonight about blacklisting that's not been mentioned in other comments.

* What it takes to unload the md_mod module

Running "sudo modprobe -r md_mod" returns the error: "modprobe: FATAL: Module md_mod is in use.". But it doesn't tell you who's using it.

Running "sudo rmmod md_mod" returns the error: "rmmod: ERROR: Module md_mod is in use by: raid1 raid10 raid0 raid456".

So the solution is:

- Stop all RAID arrays so that the raid modules can be unloaded:
  - sudo mdadm --stop /dev/mdx /dev/mdy, etc

- Remove the 4 raid modules, then md_mod.
  - sudo rmmod raid1 raid10 raid0 raid456 md_mod
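The whole sequence, sketched. The mdadm/rmmod lines are commented out because they need root and live arrays; the runnable part just demonstrates reading the "used by" field from /proc/modules-style data (the sample line is made up to match the rmmod error above):

```shell
# Full unload sequence (needs root; array names are examples):
#   sudo mdadm --stop /dev/md22 /dev/md127
#   sudo rmmod raid1 raid10 raid0 raid456 md_mod

# To see who holds md_mod before trying rmmod, read field 4 of /proc/modules
# (the same data lsmod prints). Sample line for illustration:
line='md_mod 200704 4 raid1,raid10,raid0,raid456, Live 0xffffffffc0000000'
echo "$line" | awk '{sub(/,$/, "", $4); print $4}'
# → raid1,raid10,raid0,raid456
```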

* What if you blacklisted the raid modules

I modified /etc/modprobe.d/mdadm.conf and added these lines, then rebuilt the init ram disk, then rebooted.

blacklist raid1
blacklist raid10
blacklist raid0
blacklist raid456
blacklist md_mod

The result was that the RAID arrays' block devices were STILL created, but inactive. /proc/mdstat also shows no raid "personalities" available.

$ ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 Aug 28 21:45 /dev/md127
brw-rw---- 1 root disk 9,  22 Aug 28 21:45 /dev/md22

$ cat /proc/mdstat 
Personalities : 
md22 : inactive sda1[1] sdc1[0]
      200705 blocks super 1.2
       
md127 : inactive sda2[1] sdc2[0]
      197953 blocks super 1.2
       
unused devices: <none>

Looking at the loaded modules "sudo lsmod | egrep 'raid|md' | sort", the md_mod is STILL loaded, but none of the raid modules were loaded. As mentioned by g4sra, blacklisting doesn't prevent other ways of loading the md_mod.

#36 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-29 03:04:46

@fsmithred - You asked if anyone has the file "/etc/modprobe.d/mdadm.conf".

Ans: Yep, I also have that file on my excalibur PC, and I also see it in the init ram disk.

I ran a quick test to see what effect "start_ro" has. It affects the RAID arrays I defined in /etc/mdadm/mdadm.conf, but has no effect on the auto-started arrays at md127, md126. The setting controls whether the explicitly defined arrays are started "auto-read-only" or not; the auto-started arrays at md127 are always started "auto-read-only". I didn't dig any further into what "auto-read-only" means, but the setting DOES show an effect.

When the "start_ro=1" line is defined (md22 is the raid number I explicitly defined in mdadm.conf):

$ cat /proc/mdstat 
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md127 : active (auto-read-only) raid1 sdc2[1] sdb2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md22 : active (auto-read-only) raid1 sdc1[1] sdb1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

When the "start_ro=1" line is commented out:

$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md22 : active raid1 sda1[1] sdc1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sda2[1] sdc2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

Next, I'll play around with blacklisting the md module. If I find out something new that hasn't already been mentioned, I'll post my test results.

#37 Re: Installation » [SOLVED] Excalibur net install 20250823 not installing rsyslog » 2025-08-29 02:29:10

OK, thanks for the reply. I have installed rsyslog manually on my Excalibur PC, and a quick check shows no obvious problems. I wanted to make sure it wasn't an oversight in the Excalibur installer.

I'll mark this as solved, since it's a known issue.

#38 Installation » [SOLVED] Excalibur net install 20250823 not installing rsyslog » 2025-08-28 08:15:09

Eeqmcsq
Replies: 9

I mentioned this problem in the RAID auto start thread, but it may have gotten lost in the deluge of testing. Since this problem isn't related to RAID, I'm creating a new post to track this problem.

- Installer iso: devuan_excalibur_6.0-20250823_amd64_netinstall.iso
- Download directory: https://files.devuan.org/devuan_excalib … aller-iso/

* Problem

After installing the netinstall to my PC, /var/log/syslog is not found. I discovered that the package rsyslog isn't installed. Is this a problem with the installer, or is rsyslog no longer installed by default? If it's the latter, what logs should I look at instead?

Note that in the Excalibur live desktop, rsyslog IS installed.

#39 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-28 05:41:37

@fsmithred: Tonight, I tested your two images:
  - devuan_excalibur_6.0-preview-2025-08-25_0923_amd64_desktop-live.iso
  - devuan_excalibur_6.0-preview-2025-08-25_1056_amd64_desktop-live.iso

First some quick replies to your comments:

- You said the 0923 iso "somehow got the default /etc/default/mdadm with START_DAEMON=true".
  - Ans: I didn't see that on the 0923 iso. It was START_DAEMON=false, the same as the "2025-08-13" that I tested a few days ago.

- You also asked if I could "still use the mdadm command without the service running?"
  - Ans: Yes, I was able to run the mdadm command in the 0923 iso. These two examples worked:
    - sudo mdadm --examine --scan
    - sudo mdadm --stop /dev/md127

* Test results for 0923 and 1056 were the same

  - /proc/mdstat showed that the RAID arrays auto started at md127, md126.
  - /etc/default/mdadm has START_DAEMON=false
  - Checking "ps -ef | grep mdadm", I didn't see any "mdadm --monitor". The mdadm monitoring/reporting daemon was NOT running.
  - Can I run the mdadm cmd? Yes, these cmds worked:
    - sudo mdadm --examine --scan
      - listed both of my test RAID arrays
    - sudo mdadm --stop /dev/md127

* Investigation

So why did the RAID arrays auto start? Here are my investigation notes. Note that I tested using legacy boot.

- /etc/mdadm/mdadm.conf does contain the lines "AUTO -all" and the dummy line "ARRAY <ignore>" with UUID of all zeroes. This looks OK.

- I extracted the contents of /boot/initrd.img-6.12.38+deb13-amd64. This init ram disk in the live desktop apparently has 4 archives concatenated together, with the last one compressed.

(cpio -id ; cpio -id ; cpio -id ; zstdcat | cpio -id) < /boot/initrd.img-6.12.38+deb13-amd64

   And the contents of ./etc/mdadm/mdadm.conf contains the two lines "AUTO -all" and "ARRAY <ignore>". So this looks OK.
   
   And yet, the live-desktop STILL auto started my test RAID arrays???

- Checking the output of dmesg for clues, the second line shows the "Command line" args. And one of the lines show initrd=/live/initrd.img.

  Aha, /live looks like something on the .iso image.

- I mounted the file "devuan_excalibur_6.0-preview-2025-08-25_0923_amd64_desktop-live.iso" to the local file system, copied ./live/initrd.img to a temp directory and extracted it using the same 3* cpio + zstdcat|cpio, and examined ./etc/mdadm/mdadm.conf. The contents look like a default file. No "AUTO -all" or dummy "ARRAY <ignore>" lines.

- Conclusion: This would explain why the RAID arrays got autostarted. The live-desktop loaded THIS init ram disk that's located on the .iso.

- More info: The legacy boot menu comes from the iso image file "./isolinux/isolinux.cfg", which adds the "initrd" and other args to the boot cmd line.

label toram
    menu label devuan-live (amd64) (load to RAM)
    linux /live/vmlinuz
    append initrd=/live/initrd.img boot=live username=devuan toram  nottyautologin

* Test and investigation (UEFI live desktop)
- The test results and investigation results were the same, except that dmesg does NOT contain an "initrd" arg, so there's no clue about what init ram disk was loaded.

- Mounting the .iso again and checking "./boot/grub/grub.cfg", the menu entry contains a separate initrd line, which refers to the same "/live/initrd.img".

menuentry "devuan-live (load to RAM)" {
    set gfxpayload=keep
    linux   /live/vmlinuz boot=live username=devuan toram nottyautologin 
    initrd  /live/initrd.img
}

So both the UEFI and legacy boot must be loading the same init ram disk on the .iso file, which doesn't contain the two lines "AUTO -all" and "ARRAY <ignore>".
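A quick way to pull the initrd= argument (when present) out of a kernel command line; the sample string is the legacy-boot line quoted above, and on a live system the same loop works over the contents of /proc/cmdline:

```shell
# Extract the initrd= argument from a kernel command line string.
cmdline='BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram nottyautologin'
for arg in $cmdline; do
    case $arg in
        initrd=*) echo "${arg#initrd=}" ;;
    esac
done
# → /live/initrd.img
```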

#40 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-27 05:25:08

@g4sra - Yep. I've tried 2 of your 3 suggestions. Adding raid=noautodetect to the boot menu's kernel options didn't disable autostarting the RAID. Adding nodmraid also had no effect; the nodmraid test output is in my earlier comment in this thread.

I did NOT try blacklisting the md modules, since I had no idea how to blacklist a module. That's on my list of things to investigate.

And just to make sure I didn't screw anything up in my previous tests, I retested several combinations of raid=noautodetect and nodmraid in both the Excalibur live desktop and my Excalibur PC. In all cases, the RAID arrays auto started at md127, md126.

These were the test cases I ran tonight.

  - default: No kernel cmd line changes
  - Added: raid=noautodetect
  - Added: nodmraid
  - Added: nodmraid=1
  - Added: raid=noautodetect nodmraid
  - Added: raid=noautodetect nodmraid=1

For the live desktop, I used "devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso". For the PC installation, I set my mdadm.conf to this and updated the init ram disk.

  #AUTO -all
  ARRAY <ignore> UUID=00000000:00000000:00000000:00000000

During the tests, I confirmed the cmd line args by checking /proc/cmdline.

#41 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-26 03:57:03

@fsmithred: I haven't tested your two isos yet. Instead, I satisfied my curiosity by investigating exactly what this "START_DAEMON" setting does and how it affects mdadm.

The START_DAEMON option is used in the script /etc/init.d/mdadm to determine if it should start mdadm with --monitor. According to the man pages, the monitor mode periodically checks the arrays to check for status changes and reports these events by logging to the syslog, sending a mail alert, etc.

$ ps -ef | grep mdadm
root      1434     1  0 19:57 ?        00:00:00 /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog

At first, I thought this option was harmless. Then I see this in the man pages:

MONITOR MODE
  <...>
  As well as reporting events, mdadm may move a spare drive from one array to another if they are in the same spare-group or domain and if the  destination  array  has  a
  failed drive but no spares.

That's definitely a NO for a live desktop.

Otherwise, in my tests, turning off the START_DAEMON didn't affect my mdadm cmd line commands.

* Other issues

While testing the START_DAEMON setting to see if "mdadm --monitor" really writes to the syslog, I couldn't find /var/log/syslog on my Excalibur PC! The package "rsyslog" wasn't installed. Is that an oversight in the Excalibur netinstall iso, or is rsyslog no longer installed by default? If not, what are the alternative log files?

On a side note, my tests show that mdadm --monitor DOES log to the syslog when it detects a failed drive or a drive added back to a degraded array. I used mdadm --fail, --remove, --add, and observed the syslog.

#42 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-26 03:27:20

@g4sra: Here are my test results of adding nodmraid to the boot menu kernel options. Short answer: no difference. My test raid arrays are still auto started in the Excalibur live desktop at md126/md127.

Test file: devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso

At the initial menu, I edited the boot options and included "nodmraid", and "nodmraid=1". Then I ran the live desktop.

- Test 1: No cmd line change:

devuan@devuan:~$ cat /proc/cmdline 
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram  nottyautologin

devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md126 : active (auto-read-only) raid1 sda2[1] sdc2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

- Test 2, cmd line change, added "nodmraid":

devuan@devuan:~$ cat /proc/cmdline 
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram  nottyautologin nodmraid

devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md126 : active (auto-read-only) raid1 sda2[1] sdc2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

- Test 3, cmd line change, added "nodmraid=1":

devuan@devuan:~$ cat /proc/cmdline 
BOOT_IMAGE=/live/vmlinuz initrd=/live/initrd.img boot=live username=devuan toram  nottyautologin nodmraid=1

devuan@devuan:~$ cat /proc/mdstat 
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md126 : active (auto-read-only) raid1 sdb1[1] sda1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      98944 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

================================

@stargate-sg1-cheyenne-mtn:

Although mdadm may have been in Debian/Ubuntu since the 2000s, it occurred to me that the live desktops might not have mdadm installed or running. To satisfy my curiosity, I tested various Devuan live desktops on my test PC with my 2 test RAID arrays to see when the live desktop started auto starting the RAID arrays.

Short answer: Beowulf, based on Debian 10 Buster.

- jessie (based on Debian 8 Jessie)
  - devuan_jessie_1.0.0_amd64_desktop-live.iso
  - /proc/mdstat      : File not found.
  - sudo which mdadm  : mdadm not found
  - lsmod | grep raid : No raid0, raid1, etc. modules.

- ascii (based on Debian 9 Stretch)
  - devuan_ascii_2.0.0_amd64_desktop-live.iso
  - /proc/mdstat      : File not found.
  - sudo which mdadm  : mdadm not found
  - lsmod | grep raid : No raid0, raid1, etc. modules.

- beowulf (based on Debian 10 Buster)
  - devuan_beowulf_3.1.1_amd64_desktop-live.iso
  - /proc/mdstat      : File found. RAID arrays auto started at md127, md126.
  - sudo which mdadm  : /sbin/mdadm
  - lsmod | grep raid : raid0, raid1, etc. modules found

So that might explain why there hasn't been a history of complaints about the live desktop ruining RAID arrays - they weren't auto started until just a few years ago.

#43 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-25 05:48:11

@g4sra - I tried the suggestion about "raid=noautodetect" as a kernel boot arg, but that didn't work. Based on my research, that's supposed to work only when the RAID code is compiled directly into the kernel, instead of being a separately loaded module.

After searching online and lots of testing of the init ram disk, including borking it a few times big_smile , I think I found the easiest solution.

* Solution

Add these two lines to /etc/mdadm/mdadm.conf:

#Disable all auto-assembly, so that only arrays explicitly listed in mdadm.conf or on the command line are assembled.
AUTO -all

#At least one ARRAY line is required for update-initramfs to copy this mdadm.conf into the init RAM disk. Otherwise,
#update-initramfs will ignore this file and auto generate its own mdadm.conf in the init RAM disk.
ARRAY <ignore> UUID=00000000:00000000:00000000:00000000

Then rebuild the init RAM disk:

sudo update-initramfs -u

* Explanation

The AUTO keyword is described in the mdadm.conf man pages as a way to allow or deny auto-assembling an array. The "all" keyword matches all metadata.

From my tests "AUTO -all" doesn't affect any ARRAY lines that you manually define in mdadm.conf. Also, your ARRAY lines can be placed before or after the "AUTO -all" line, and they'll still work.

The reason "AUTO -all" alone isn't enough: when you run "update-initramfs", the script that handles mdadm first checks whether your mdadm.conf defines any ARRAY lines. If not, the script ignores your mdadm.conf (including the "AUTO -all") and runs its own script "mkconf" to auto generate an mdadm.conf containing any RAID arrays it detects on the PC, even if they're not currently started, and even if you didn't want any of those arrays to be auto-started.

So we define a dummy ARRAY line to ensure that your mdadm.conf is copied into the init ramdisk.

See: /usr/share/initramfs-tools/hooks/mdadm, /usr/share/mdadm/mkconf
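The hook's decision can be mimicked with a one-line check. This is a sketch of the behavior described above, not the literal hook code:

```shell
# has_array_line CONF: report whether update-initramfs would keep this conf
# (per the behavior above, it keeps a conf only if an ARRAY line is present).
has_array_line() {
    if grep -q '^ARRAY' "$1"; then echo copied; else echo "ignored, mkconf used"; fi
}

printf 'AUTO -all\n' > /tmp/mdadm-a.conf
has_array_line /tmp/mdadm-a.conf
# → ignored, mkconf used

printf 'AUTO -all\nARRAY <ignore> UUID=00000000:00000000:00000000:00000000\n' > /tmp/mdadm-b.conf
has_array_line /tmp/mdadm-b.conf
# → copied
```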

* How to check the mdadm.conf in the init ram disk

Create a temp dir to hold the files. Then extract the files from the init ram disk. Replace the file name with your initrd file.  The files are extracted into the current directory.

mkdir /tmp/testdir
cd /tmp/testdir

(cpio -id ; zstdcat | cpio -id) < /boot/initrd.img-6.12.41+deb13-amd64

cat ./etc/mdadm/mdadm.conf

* Other info

I tracked down exactly what triggers the RAID arrays to get auto started during boot, even if the init ram disk's mdadm.conf contains no ARRAY definitions. It's this udev rule file:

/usr/lib/udev/rules.d/64-md-raid-assembly.rules

I don't know how to read udev rules, so I don't know what this file does, but I've confirmed that moving this file out of this directory, then rebuilding the init RAM disk will disable auto starting arrays on boot. Since there are other md related rules in here, I don't know what the side effects are of removing just this file.
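For anyone who wants a less invasive variant than moving the file: udev treats a same-named rule file in /etc/udev/rules.d as overriding the copy under /usr/lib/udev/rules.d, and a symlink to /dev/null masks the rule entirely (and survives package upgrades). The sketch below only prints the commands, since they need root; the same unknown side effects noted above would apply:

```shell
# Mask the assembly rule instead of moving it (udev override convention).
rule=64-md-raid-assembly.rules
echo "sudo ln -s /dev/null /etc/udev/rules.d/$rule"
echo "sudo update-initramfs -u"
```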

#44 Re: Installation » [SOLVED] Should the live desktop auto start RAID arrays? » 2025-08-24 22:07:15

I tested the Aug 13 Excalibur preview live desktop, both UEFI and legacy boot, and it's still auto starting the RAID array. I confirmed that /etc/default/mdadm shows START_DAEMON is set to false, but the output of /proc/mdstat shows my test RAID array was started at /dev/md127.

https://files.devuan.org/devuan_excalibur/desktop-live/
devuan_excalibur_6.0-preview-2025-08-13_0014_amd64_desktop-live.iso

devuan@devuan:~$ cat /etc/default/mdadm 
# mdadm Debian configuration
#
# You can run 'dpkg-reconfigure mdadm' to modify the values in this file, if
# you want. You can also change the values here and changes will be preserved.
# Do note that only the values are preserved; the rest of the file is
# rewritten.
#

# AUTOCHECK:
#   should mdadm run periodic redundancy checks over your arrays? See
#   /etc/cron.d/mdadm.
AUTOCHECK=true

# AUTOSCAN:
#   should mdadm check once a day for degraded arrays? See
#   /etc/cron.daily/mdadm.
AUTOSCAN=true

# START_DAEMON:
#   should mdadm start the MD monitoring daemon during boot?
START_DAEMON=false

# DAEMON_OPTIONS:
#   additional options to pass to the daemon.
DAEMON_OPTIONS="--syslog"

# VERBOSE:
#   if this variable is set to true, mdadm will be a little more verbose e.g.
#   when creating the initramfs.
VERBOSE=false
devuan@devuan:~$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md127 : active (auto-read-only) raid1 sda1[1] sdc1[0]
      100352 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

- /var/log/kern.log, partial output, showing that md127 was started.

2025-08-24T19:51:23.214552+00:00 localhost kernel: [drm] GART: num cpu pages 262144, num gpu pages 262144
2025-08-24T19:51:23.214553+00:00 localhost kernel: [drm] PCIE GART of 1024M enabled (table at 0x0000000000162000).
2025-08-24T19:51:23.214574+00:00 localhost kernel: radeon 0000:00:01.0: WB enabled
2025-08-24T19:51:23.214577+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000020000c00
2025-08-24T19:51:23.214578+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000020000c0c
2025-08-24T19:51:23.214579+00:00 localhost kernel: radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000072118
2025-08-24T19:51:23.214581+00:00 localhost kernel: radeon 0000:00:01.0: radeon: MSI limited to 32-bit
2025-08-24T19:51:23.214582+00:00 localhost kernel: radeon 0000:00:01.0: radeon: using MSI.
2025-08-24T19:51:23.214583+00:00 localhost kernel: [drm] radeon: irq initialized.
2025-08-24T19:51:23.214584+00:00 localhost kernel: [drm] ring test on 0 succeeded in 1 usecs
2025-08-24T19:51:23.214585+00:00 localhost kernel: [drm] ring test on 3 succeeded in 3 usecs
2025-08-24T19:51:23.214586+00:00 localhost kernel: md/raid1:md127: active with 2 out of 2 mirrors
2025-08-24T19:51:23.214588+00:00 localhost kernel: md127: detected capacity change from 0 to 200704
2025-08-24T19:51:23.214590+00:00 localhost kernel: usb 1-3.2: New USB device found, idVendor=046d, idProduct=c31c, bcdDevice=49.20
2025-08-24T19:51:23.214609+00:00 localhost kernel: usb 1-3.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
2025-08-24T19:51:23.214610+00:00 localhost kernel: usb 1-3.2: Product: USB Keyboard
2025-08-24T19:51:23.214612+00:00 localhost kernel: usb 1-3.2: Manufacturer: Logitech

* More info

I'm also testing an Excalibur installation on a PC using devuan_excalibur_6.0-20250823_amd64_netinstall.iso. I now see that the annoyance of auto starting RAID arrays on boot isn't just a problem with the live desktop environment. My Excalibur PC is also autostarting the test RAID array I created, even though I didn't specify for this RAID array to be assembled on boot.

I tried setting START_DAEMON=false, and that didn't disable the RAID auto start.

So I'll do a bunch of digging to find a solution for disabling this RAID auto start. If I figure it out, I'll post an update.

#45 Re: Documentation » apt-mirror config for a local Devuan repo » 2025-08-23 03:22:10

I ran into the same "Use of uninitialized value" error in daedalus + apt-mirror. The problem is that although the apt-mirror script's parsing logic for the Packages file (found in the binary packages) has been updated so that the md5sum is an optional field, the parsing logic for the Sources file (found in the source code packages) has NOT been updated. So it assumes that every package must contain the "Files:" section containing md5sums.

This means that if you're not mirroring the source code (not using deb-src), you won't see this error.

The problem isn't specific to Devuan. I tested with a Debian repository and saw the same errors.

- Devuan test, mirror.list

deb-src http://deb.devuan.org/merged daedalus-security main

  This downloads the file from: http://deb.devuan.org/merged/dists/daed … Sources.gz

- Debian test, mirror.list

deb-src http://deb.debian.org/debian bookworm-updates main

  This downloads the file from: http://deb.debian.org/debian/dists/book … Sources.xz

If you open the Sources file from each test, you'll see that none of the package stanzas contain "Files:" (md5sums) or "Checksums-Sha1:", only "Checksums-Sha256:". So the problem is definitely an out-of-date apt-mirror script.
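
You can see the shape of the problem with a minimal stand-in for a modern Sources stanza (the package data below is made up):

```shell
# Build a tiny fake Sources stanza like the ones in current repos:
# it has a Checksums-Sha256 section but NO Files or Checksums-Sha1 section.
printf '%s\n' \
  'Package: example' \
  'Directory: pool/main/e/example' \
  'Checksums-Sha256:' \
  ' 0123abc 1234 example_1.0.tar.xz' \
  '' > /tmp/Sources.sample

# Count which checksum section headers are present:
grep -cE '^(Files|Checksums-Sha1|Checksums-Sha256):' /tmp/Sources.sample
# -> 1  (only the Checksums-Sha256 section exists)
```

Run the same grep against a real downloaded Sources file and you'll see the same pattern: the "Files:" section that the unpatched script insists on is simply absent.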

* A Solution

The only solution I have is to update the apt-mirror script. And since I know enough Perl to be dangerous big_smile , I came up with a solution.

- Copy the apt-mirror script into another directory, such as /opt, which is usually empty. That way, if anything goes wrong, the original apt-mirror script is still available.

cp /usr/bin/apt-mirror /opt

- Open /opt/apt-mirror in a text editor. Look for the function process_index(), and the "else" block with the comment "Sources index", which should be at line 919.

- The entire "else" block's code must be replaced with this code, which will handle any combination of the presence/absence of the 3 section names:

        else
        {    # Sources index

            #The sections "Files:", "Checksums-Sha1:", and "Checksums-Sha256:" contain 1 or more lines with 3
            #space-separated fields: md5/sha1/sha256 checksum, file size in bytes, file name
            #There's no guarantee that all 3 sections will be defined, or that they will have the same list of files.
            #Use a hash to keep track of the file names and their sizes. Then update the ALL and NEW files afterwards.
            my %package_files;
            my %sections = (
                "Files:"            => *FILES_MD5   ,
                "Checksums-Sha1:"   => *FILES_SHA1  ,
                "Checksums-Sha256:" => *FILES_SHA256,
            );

            foreach my $section_name (keys %sections)
            {
                if (!defined($lines{$section_name})) { next; }

                foreach ( split( /\n/, $lines{$section_name} ) )
                {
                    next if $_ eq '';
                    my @file = split;
                    die("apt-mirror: invalid Sources format") if @file != 3;
                    print { $sections{$section_name} } $file[0] . "  " . remove_double_slashes( $path . "/" . $lines{"Directory:"} . "/" . $file[2] ) . "\n";
                    $package_files{$file[2]} = $file[1];
                }
            }

            my $file_size_bytes;
            my $file_path;
            foreach my $file_name (keys %package_files)
            {
                $file_size_bytes = $package_files{$file_name};
                $file_path       = remove_double_slashes( $path . "/" . $lines{"Directory:"} . "/" . $file_name );
                
                $skipclean{ $file_path } = 1;
                print FILES_ALL $file_path . "\n";

                if ( need_update( $mirror . "/" . $lines{"Directory:"} . "/" . $file_name, $file_size_bytes ) )
                {
                    print FILES_NEW remove_double_slashes( $uri . "/" . $lines{"Directory:"} . "/" . $file_name ) . "\n";
                    add_url_to_download( $uri . "/" . $lines{"Directory:"} . "/" . $file_name, $file_size_bytes );
                }
            }
        }

- To run the updated apt-mirror:

/opt/apt-mirror <your apt-mirror config file>

- To run the originally installed apt-mirror script:

apt-mirror <your apt-mirror config file>

#46 Re: Desktop and Multimedia » Calculator freezes » 2025-07-25 03:33:29

I don't know anything about KDE 3.5's KCalc. But I just did a quick test with Devuan Daedalus + MATE desktop, and the MATE Calculator does support your example: 50 - 20% -> 40

#47 Re: Hardware & System Configuration » Transfering an OS from HD to SSD » 2025-06-30 05:51:25

I don't know anything about the logging system. But your post reminded me of back in the early 2010s, when SSDs were getting cheap enough for not-so-rich computer geeks to buy. I remember there were Linux tweaks for reducing unnecessary disk writes, such as specifying the "relatime" or "noatime" mount options to reduce or stop updating a file's "access time" every time it's read. Back then, those tweaks had to be done manually.

If I had to speculate about today, I would suspect you'd still have to do those tweaks manually. My reasoning is that whatever the logging system an app or Linux is using would have no idea what type of storage device the log files are being written to. It could be an HDD, SSD, a RAID array that's a MIX of HDD+SSD (weird, but possible), a RAM disk, even a network shared folder on another PC. Since there's no way to know (or at least, no EASY way to know), I conclude that it's up to the sys admin to make those disk optimization tweaks manually.
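
For reference, the tweak I remember was a mount option in /etc/fstab. A sketch, with a made-up UUID:

```shell
# /etc/fstab fragment - "noatime" stops access-time updates entirely;
# "relatime" (the default on modern kernels) only updates them occasionally.
# The UUID below is a placeholder - use your own partition's UUID.
UUID=01234567-89ab-cdef-0123-456789abcdef  /  ext4  defaults,noatime,errors=remount-ro  0  1
```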

Of course, a real sys admin can tell us for sure. smile

#48 Re: Hardware & System Configuration » Transfering an OS from HD to SSD » 2025-06-30 00:33:26

I don't have any experience with "partition transfer utilities", but I have done these at one time or another:

- Whole disk cloning. Something like "dd if=/dev/sdx of=/dev/sdy".
- Copy the OS files from one partition to another. I think it was something like "cp -ax /path/to/old_drive_partition/* /path/to/new_drive_partition". This required either changing the new partition's UUID to match the old one, or modifying /etc/fstab to match the new partition UUID.
- Copy the OS files from a single disk to a RAID array. A RAID array is multiple storage drives working together to behave like a single giant storage drive. This required modifying some files so that the RAID array started first on boot, then it could be mounted to the file system.

In all cases, Linux didn't care about the change in the storage drive. I think all Linux cares about is finding the file system's UUID, and successfully mounting it based on the file system type (example: ext4).
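
For the second option (copying files between partitions), here's a sketch of the /etc/fstab fix, using a sample file and made-up UUIDs so nothing real gets touched:

```shell
# After copying the OS to a new partition, /etc/fstab on the copy still
# names the OLD partition's UUID. Simulate that with a sample file:
printf 'UUID=1111-OLD / ext4 defaults 0 1\n' > /tmp/fstab.sample

OLD_UUID="1111-OLD"
NEW_UUID="2222-NEW"   # in real life, get this from "blkid /dev/sdY1"
sed -i "s/UUID=$OLD_UUID/UUID=$NEW_UUID/" /tmp/fstab.sample

cat /tmp/fstab.sample
# -> UUID=2222-NEW / ext4 defaults 0 1
```

The alternative (changing the new partition's UUID to match the old one) avoids editing fstab at all, but I'd only do one or the other, not both.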

One reason I can think of why the OS might behave differently is if an SSD loaded files so quickly during boot that it exposed some race condition, where apps start before their dependencies have fully initialized. A slower HDD would hide these issues because the apps load more slowly, giving the dependencies more time to initialize properly.

The only time I can remember seeing a problem with cloning hard drives is on my Win7 work laptop. That's because one of the software licensing apps used the storage drive name or whatever to generate its license key. When I replaced the laptop HDD with a cloned SSD, the license app said my license was no longer valid. I just requested and installed a new license, and that was all.

#49 Documentation » [HowTo] Set up VNC server "x11vnc" to run as a service on boot » 2025-06-29 01:33:01

Eeqmcsq
Replies: 0
* Intro

By starting the VNC server "x11vnc" as a service on boot, you can control the PC remotely, even if no one has logged in yet.

Make sure you research VNC's security risks and confirm that it's safe to use VNC on your network. I'm using VNC on my home LAN with no other users and no wifi access points, so I concentrated primarily on getting the VNC server to work, with a minimum amount of security (VNC password only).

I tested these instructions on Devuan Daedalus and Devuan Jessie with a fresh installation of each release. The instructions are identical, so I assume the instructions will work on the other Devuan releases between these two.

These instructions appear lengthy because I included tests at each step to confirm the setup before going to the next step.

* What you need

- The IP address of the x11vnc server. Example of finding the IP address:

sudo ifconfig

- A remote PC with a VNC client app. I tested using Devuan Jessie's "xvncviewer" app.

* Install x11vnc
sudo apt-get install x11vnc
* Find the X authority file used by the display manager

The X authority file is required for x11vnc to connect to the X server. If a user is logged in, that file is located at $HOME/.Xauthority. But if no one is logged in, then x11vnc needs the X authority file of the login manager (also known as the display manager).

Devuan uses the SLiM login manager by default, so these instructions find SLiM's X authority file. If you're using another login manager, you'll have to research its equivalent.

more /etc/slim.conf

Look for the setting "authfile". The default is:

authfile  /var/run/slim.auth

  > Test x11vnc with this auth file from a console window
    - Open a new terminal window for this test.
    - Unset this environment variable so x11vnc doesn't use it to find an X authority file.

unset XAUTHORITY

    - Start the x11vnc server WITHOUT specifying the X authority file.

sudo x11vnc -display :0

      This should fail. The server exits immediately.

    - Start the x11vnc server and specify the X authority file.

sudo x11vnc -auth /var/run/slim.auth -display :0

      This should work. The server remains running. To stop the server: CTRL+C
   
    - Close the test terminal window.

* Recommended: Create a VNC password file

Choose a location to store the VNC password file. For these instructions, I chose /opt, because that directory is always empty on my PCs.
 
The max VNC password length is 8 characters. If you enter a longer password, the extra characters are ignored.

- Create a VNC password file.

sudo x11vnc -storepasswd /opt/yourx11vncpasswordfile

- View the password.

sudo x11vnc -showrfbauth /opt/yourx11vncpasswordfile

  > Test the password.
    - Open a new terminal window for this test.

sudo x11vnc -auth /var/run/slim.auth -display :0 -rfbauth /opt/yourx11vncpasswordfile

    - On a remote PC, use a vnc client app and connect to this PC.
    - Close the vnc client app. The VNC server automatically terminates.
    - Close the test terminal window.

* Create the init script

- Create an executable file in /etc/init.d. Then open it in a text editor.

cd /etc/init.d
sudo touch yourx11vnc
sudo chmod +x yourx11vnc
sudo <your text editor> yourx11vnc

- Copy this text to the script:

#!/bin/sh

### BEGIN INIT INFO
# Provides:          yourx11vnc
# Required-Start:    slim
# Required-Stop:     
# Default-Start:     1 2 3 4 5
# Default-Stop:      0 6
# Short-Description: Your x11vnc server
# Description:       This is your x11vnc server script.
### END INIT INFO

#Explanation of the args:
# -auth    : Specifies the path to the X authority file required to connect to the X server. If no one is logged
#            in, but the login screen is displayed, you can specify the X authority file of the login screen's
#            display manager.
# -display : Tell x11vnc which display to try first. Otherwise, x11vnc will first try opening the display "", which
#            fails. Then x11vnc will delay for 4 seconds, then try ":0", which finally works.
# -forever : Continue listening for more connections after the first client disconnects. By default, x11vnc exits
#            when the client disconnects.
# -loop    : Create an outer loop that restarts x11vnc whenever it terminates. Useful if the X server terminates
#            and restarts, such as when logging out.
# -rfbauth : Use this password file created by "x11vnc --storepasswd". To run x11vnc without a password (NOT
#            RECOMMENDED), remove this arg.
X11VNC_ARGS="-auth /var/run/slim.auth -display :0 -forever -loop -rfbauth /opt/yourx11vncpasswordfile"
X11VNC_BIN_PATH="/usr/bin/x11vnc"

case "$1" in
    start)
        #Any args after the "--" are passed unmodified to the program being started.
        start-stop-daemon --start --oknodo --background --exec $X11VNC_BIN_PATH -- $X11VNC_ARGS
    ;;

    stop)
        start-stop-daemon --stop --oknodo --exec $X11VNC_BIN_PATH
    ;;

    status)
        start-stop-daemon --status --exec $X11VNC_BIN_PATH
        STATUS_CODE=$?

        #Print out a human readable message.
        case $STATUS_CODE in
            0) STATUS_MSG="$X11VNC_BIN_PATH is running." ;;
            1) STATUS_MSG="$X11VNC_BIN_PATH is not running, /var/run pid file exists." ;;
            2) STATUS_MSG="$X11VNC_BIN_PATH is not running, /var/lock lock file exists." ;;
            3) STATUS_MSG="$X11VNC_BIN_PATH is not running." ;;
            *) STATUS_MSG="Unknown status code: $STATUS_CODE" ;;
        esac

        echo "$STATUS_MSG"
        exit $STATUS_CODE
    ;;

    *)
        echo "Usage: $0 start|stop|status" >&2
    ;;

esac

  * Optional adjustments to the script
    - If you're using another display manager besides "slim":
      - Change the "-auth" arg to match the display manager's X authority file.
      - Change the "Required-Start" line to specify the name of the service that starts the display manager.
    - If a VNC password is not required, remove the "-rfbauth /opt/yourx11vncpasswordfile" arg.

  > Test the script manually

    - To test the script, manually start the service.

sudo service yourx11vnc start

    - On a remote PC, use a vnc client app and connect to this PC. This should work.
    - Manually stop the service.

sudo service yourx11vnc stop
* Set up the script to run on boot and shutdown

- Create the startup/shutdown links:

sudo update-rc.d yourx11vnc defaults

  > Test the script on bootup
    - Reboot the PC. If the PC has auto login enabled, log out to return to the login screen, so that you can confirm that x11vnc works without requiring anyone to be logged in.
    - On a remote PC, use a vnc client app and connect to this PC. The remote PC should connect successfully and see the login screen.

* Troubleshooting the script

- Manually stop the service.

sudo service yourx11vnc stop

- Edit the script:

sudo <your text editor> /etc/init.d/yourx11vnc

  - Remove the --background option so the server's output is shown in the current console.
  - Remove the -loop option, so the server doesn't get stuck in an infinite loop if something's wrong.

- Manually start the service in a console window. The output will now be shown in the current console window, so you can troubleshoot the problem.

sudo service yourx11vnc start

  To stop the server: CTRL+C

* Cleanup after troubleshooting
  - Restore the --background and the -loop options.
  - Manually restart the service.

sudo service yourx11vnc start
* Uninstall the script

- Manually stop the service. Then disable the startup/shutdown links.

sudo service yourx11vnc stop
sudo update-rc.d yourx11vnc remove

- You can now move the script "yourx11vnc" from /etc/init.d to another directory, or delete it if you're SURE you don't need it.

#50 Re: DIY » Need advice, building a small server for city-library-Devuan mirror » 2025-06-10 21:20:56

That backup plan sounds similar to what I'm doing at home for my work PCs. I have an HDD + SATA-to-USB3 dock, and I connect this to my work laptop once a week when it's time to back up the laptop's SSD by cloning the SSD to a file, something like "dd if=/dev/sdx of=file.bin". I wrote my own script to do most of the dirty work, including checking the backup HDD's file system for errors, deleting old backups if there isn't enough space, and constructing a date/time for the backup file name.
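
A sketch of the date/time file naming part of that script (the device path and destination below are placeholders, and the dd line is commented out because it's destructive and needs root):

```shell
# Construct a dated backup file name, e.g. laptop_2025-06-10_2120.bin
BACKUP_NAME="laptop_$(date +%Y-%m-%d_%H%M).bin"
echo "$BACKUP_NAME"

# The actual clone-to-file step would then be something like:
#   sudo dd if=/dev/sdX of=/mnt/backup/"$BACKUP_NAME" bs=4M status=progress
```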

One difference in your proposal compared to my home PCs is that my OS is also on a RAID1, so the OS can withstand a drive failure. That's saved me the time of having to reinstall and reconfigure the OS when one of the drives failed.

I think that's the limit of my home experiences that can apply to your project. From a brainstorming perspective, other issues that spring to mind are:

* Handling multiple client requests: If multiple client requests cause a lot of disk activity on the OS disk, maybe an SSD IS needed for the OS.

* Web page design: These days, web pages must be designed for mobile users, since everyone has a phone. I have no knowledge of modern web design.
