It supports TPM since it has a connector for the module.
Do you have a module plugged in ?
The installation media will install itself in the same mode that it is booted in.
Turning off Legacy is one way to ensure it boots and installs in UEFI mode.
In computing terms that motherboard is ancient and likely does not support TPM.
Check the motherboard manual and check your firmware is the latest.
@steve_v
the same aRts
Oh we are, Trinity Desktop Environment ships with a (probably updated for compatibility) KDE 3 implementation.
the operative word there is "video"
Ok gotcha, thanks for the clarification.
I always think of video with audio as two separate streams ...
Been using ALSA with RTP for a while, but only on the local net.
Do not use card indexes; use the designated device names consistently and exclusively...
# PlayStation®VR USB Streaming used in 3D Mode Only (HDMI used in 2D Cine Mode)
pcm.psvr_hw {
    type hw
    card 'PlayStation®VR'
    device 0
    channels 2
}
This is an example from my config of a sound device which comes and goes without breaking anything.
@steve_v, nice to read that history of ALSA sound, I can relate to your comments.
I currently use aRts, and have not had one issue with it in the last eight years.
There is nothing that I have wanted to do that ALSA has not been able to do, once I have learnt how.
However I am confused by your statement that ALSA is not capable of 'streaming', could you please elaborate on exactly what you mean ?
Being a dinosaur 'streaming' may have a different meaning for me than the context you are using it in.
There seems to be a lot of confusion in this thread about what a 'machine id' is and what a 'dbus session id' is. They are not the same thing, despite dbus using the machine id as a unique but deterministic temporary filename.
The fact that '~/.dbus/session-bus/' fills up is evidence that the machine id is changing. Under systemd the machine id is never supposed to change, so the filename never changes and the contents are simply overwritten.
NB This is also why you should avoid changing the machine id once logged in, it can have side-effects on dbus.
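The naming scheme is easy to see with a short sketch (illustration only; the machine id value below is invented, the real one comes from /var/lib/dbus/machine-id):

```shell
# Sketch: how a session-bus file name is composed from the machine id plus
# the X display number. The id below is an invented example value.
machine_id="0123456789abcdef0123456789abcdef"   # normally: cat /var/lib/dbus/machine-id
display=":0.0"                                  # normally: "$DISPLAY"
d="${display#:}"                                # drop the leading ':'
session_file="$HOME/.dbus/session-bus/${machine_id}-${d%%.*}"
echo "$session_file"
```

If the machine id never changes, every login computes the same path, so the directory never fills up.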
root@localhost:~# cat ~/.bash_logout
rm -f ~/.dbus/session-bus/*
root@localhost:~# ls -l /var/lib/dbus/machine-id
-rw-r--r-- 1 root root 33 Sep 18 09:09 /var/lib/dbus/machine-id

@fsmithred, been playing.
Blacklisting md_mod is not sufficient although it does prevent passive loading.
When 'mdadm' is invoked with certain parameters it will then in turn trigger the kernel to load md_mod.
That is why 'chmod ugo-x /usr/share/initramfs-tools/hooks/mdadm' and thus knobbling 'mdadm' fixed it for me.
NB: the udev rules need knobbling as well...
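For reference, a blacklist fragment of the kind discussed above looks like this (the filename is my assumption; any name under /etc/modprobe.d/ works). As noted, this only stops passive loading:

```
# /etc/modprobe.d/md-blacklist.conf (assumed name)
# Prevents passive loading of md_mod only; invoking mdadm can still
# cause the kernel to pull the module in.
blacklist md_mod
```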
@delgado, do you have multiple /boot/grub/grub.cfg files, that is, multiple /boot partition mounts ?
Which EFI/<directory>/grubx64.efi is being loaded by your PC's UEFI, and so which EFI/<directory>/grub.cfg is chain-loading which /boot/grub/grub.cfg ?
Hit F1,F2,F11,<add whatever key here> during boot to access your PC UEFI boot priority to see which one it is going for.
Installing UEFI bootloaders can rewrite your PC UEFI variables without telling you it has done so.
N.B. UEFI has been forced on you to make your PC more secure and straightforward to configure, because as the owner of the PC you are not capable of making those decisions for yourself.
@fsmithred
Re /etc/modprobe.d/mdadm.conf, mine comes from the installed package:

ii  mdadm  4.2-5  amd64  Tool to administer Linux MD arrays (software RAID)

With the execute mode removed from /usr/share/initramfs-tools/hooks/mdadm, mdadm is not invoked (not installed) in the initramfs.
That, in conjunction with the sysvinit 'mdadm' service being disabled, means md_mod is not automatically loaded at boot.
NB: If dm_raid has loaded that would use md_mod.
@Eeqmcsq, have you blocked md autodetection at boot ?
The results that you have posted suggest md raid was activated during boot, not after.
Blacklist the md* modules or pass something like 'raid=noautodetect'.
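As a sketch of the kernel command line route (assuming GRUB; run update-grub afterwards):

```
# /etc/default/grub -- append the parameter to the existing line
GRUB_CMDLINE_LINUX_DEFAULT="quiet raid=noautodetect"
```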
'nodmraid' is an alternative to removing the udev rule.
@Eeqmcsq, looking at my daedalus /lib/udev/rules.d/64-md-raid-assembly.rules as an alternative to removing the script, 'nodmraid' on the kernel command line will cause the script to jump to the end bypassing all of the 'ACTION' directives.
There seem to be differing methods of activation depending on whether md raid is compiled in or a module, plus that udev rule that would run.
I found this type of thing a bane when performing data recovery by attaching faulty/corrupt media to a workstation for repair/recovery.
Kconfig: CONFIG_MD_AUTODETECT
File: drivers/md/md-autodetect.c

#ifdef CONFIG_MD_AUTODETECT
static int __initdata raid_noautodetect;
#else
static int __initdata raid_noautodetect = 1;
#endif

	if (raid_noautodetect)
		printk(KERN_INFO "md: Skipping autodetection of RAID arrays. (raid=autodetect will force)\n");

	printk(KERN_INFO "md: Waiting for all devices to be available before autodetect\n");
	printk(KERN_INFO "md: If you don't use raid, use raid=noautodetect\n");
# "nodmraid" on kernel command line stops mdadm from handling
# "isw" or "ddf".
IMPORT{cmdline}="noiswmd"
IMPORT{cmdline}="nodmraid"
ENV{nodmraid}=="?*", GOTO="md_inc_end"
What needs to be considered is whether a user is expecting to be able to install (onto md storage) from a Live image ?
Devuan follows Debian, so you need to know the Debian release timetable.
FWIW: My experience has been mainly with hardware raid, but if I remember correctly the kernel md modules probed for raid at boot.
That means that the Live media kernel version which may be different from the on disk version may come into play.
If it is still done that way you might be able to blacklist the md* modules or pass something like 'raid=noautodetect'.
Looking...
Running 'modinfo md-mod' gives two parameters, 'start_dirty_degraded:int' and 'create_on_open:bool', but grepping drivers/md/md-autodetect.c does give a hit on 'noautodetect', suggesting that warrants further investigation.
it was a Debian 10, 11, 12
Right! I believe your main issue (esp. the HAL stuff) is that there are leftover files from systemd.
Examine your system carefully for systemd packages that have not been purged.
I have never had to do this so someone else may be able to advise you a good way to do so.
LABEL=home /home ext4 rw,suid,dev,exec,auto,nouser,nofail 0 2
Adding 'nofail' *may* enable booting without recovery; it depends on your LVM configuration, which I have not seen.
Either way it won't harm (other than /home not getting mounted on an error).
Do you mean /etc/init.d/cryptdisks
My mistake, yes.
I searched my personal historic docs and found that I actually modified the shell library that /etc/init.d/cryptdisks pulls in
Check /lib/cryptsetup/cryptdisks-functions and the do_stop() function
--- cryptdisks-functions.orig 2023-10-10 14:51:52.654685939 +0100
+++ cryptdisks-functions 2023-10-10 14:55:10.184788099 +0100
@@ -191,6 +191,19 @@
devno_rootfs="$(get_mnt_devno /)" || devno_rootfs=""
devno_usr="$(get_mnt_devno /usr)" || devno_usr=""
+ if [ "$INITSTATE" = "remaining" ]; then
+ if [ -x /sbin/lvm ]; then
+ vgs="$(/sbin/lvm vgscan | sed -n '/"/s/^.*"\([^'\'']*\)".*$/\1/p')"
+ if [ -n "${vgs}" ]; then
+ log_action_cont_msg "\nDeactivating volume groups:"
+ for vg in ${vgs}; do
+ log_action_cont_msg " \"${vg}\""
+ /sbin/lvm vgchange -a n ${vg} >/dev/null 2>&1
+ done
+ fi
+ fi
+ fi
+
crypttab_foreach_entry _do_stop_callback
log_action_end_msg 0
}

That was how I dealt with the issue of Debian not supporting LVM on LUKS in 2023; I do not know whether that is still required today.
Updating cryptsetup in the future will obviously clobber any changes you make.
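For anyone wondering what the sed line in that hunk extracts, here is a stand-alone run (the sample lines are my assumption of typical vgscan output; the sed expression is the one from the patch):

```shell
# Feed sample 'vgscan' output through the sed expression from the patch above;
# it prints just the quoted volume group names, one per line.
sample='  Found volume group "vg0" using metadata type lvm2
  Found volume group "vg-home" using metadata type lvm2'
vgs="$(printf '%s\n' "$sample" | sed -n '/"/s/^.*"\([^'\'']*\)".*$/\1/p')"
printf '%s\n' "$vgs"
```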
A very quick quickie, I have just had my ear bent by the boss this morning for being here when I should be working on other things.
That all looks very fishy to me, what is the history of this system ?
Was it a fresh Devuan install or did you migrate from something else ?
I haven't seen your /etc/fstab so this is an assumption...
Appending 'nofail' as an option to the mount line for '/home' may permit the boot to complete without having to reboot into rescue
The historic errors shutting down your crypt device are most probably because of 'still active' lvm on it.
The fix is to call 'lvchange -an' before cryptsetup tries to close it, possibly in /etc/init.d/cryptsetup.
I have many (partly self-created) packages.
First, have you kept these up to date ?
Second, why was this necessary ?
Anybody else feel free to jump in, my time is up.
Assuming you are using sysvinit...
Just (sudo if not root) for now to prevent apparmor startup at boot
sudo update-rc.d apparmor disable

Once your system is booting properly, turn it back on:

sudo update-rc.d apparmor enable

The directory '/etc/udev/rules.d/' is for custom rules created by the administrator (not from an installed package).
I have no idea what that rule is for; if it is short, post its contents to this thread.
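For comparison, a syntactically valid custom rule line looks something like this (a sketch only; the match values are placeholders, not the real IDs for whatever device 41-odvr.rules targets):

```
# /etc/udev/rules.d/41-odvr.rules -- example of valid syntax only
SUBSYSTEM=="usb", ATTR{idVendor}=="0000", ATTR{idProduct}=="0000", MODE="0664", GROUP="plugdev"
```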
But probably I first should fix this problem
Always fix what is broken first before making 'enhancements'.
@zapper, API = Application Programming Interface, the resources that a python module (in this case) provides for your program to call on.
What is coming down the line....
https://docs.python.org/3/deprecations/index.html
Someone needs to port fail2ban to the newer python version forced on it by trixie.
For now...you are being presented with the prompt
Please unlock disk md-crypt:
Ignore all further udev errors and enter nothing but the passphrase.
Start by dealing with
apparmor="DENIED" operation="change_onexec"
Temporarily suppress apparmor while you trace the errors.
Then fix
invalid rule '/etc/udev/rules.d/41-odvr.rules:1'
before rebooting.
NB: I never set up my systems to automount encrypted mdraid (/home in your case) because of the nightmare when things go wrong.
My preference for sysvinit makes it trivial to boot to runlevel 2 and, providing that is error free, automatically trigger runlevel 3, which then mounts the md and also starts any daemons dependent on it.
When an error does occur the system still boots and root can login (over the network if headless) in runlevel 2 and repair the raid.
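A minimal sketch of that runlevel hand-off, assuming sysvinit (the script name and the error check are invented for illustration; my real setup will differ):

```
#!/bin/sh
# /etc/rc2.d/S99multiuser-handoff (hypothetical): last script in runlevel 2.
# If the boot was clean, switch to runlevel 3, which mounts the md and starts
# the daemons dependent on it; on error stay in runlevel 2 for repair.
[ -e /var/run/boot-failed ] || telinit 3
```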
OK, thanks for the explanation @RedGreen925, if it works don't fix it!
You used three of my pet hates there all together (pulse, wireplumber, pipewire) and I was wondering whether there was some black magic incantation with 'exec'ing the environment that you were leveraging that I didn't know about.
Trixie has forced a dependency on newer versions of python modules which have dropped deprecated APIs.
Of greater concern is the list of faults in trixie, prohibiting its use in any mission critical role.
https://www.debian.org/releases/trixie/ … ssues.html
Credit to golinux for first posting of the link
@igorzwx it's obvious you are a chatbot, and because you are a chatbot there is nothing you can say that will prove that you are not.
@RedGreen925
I'm curious, why do you 'exec' when you are also '&' backgrounding ?
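For context, a short demonstration of why the combination is redundant (a sketch; the message strings are mine): with 'exec cmd &' the '&' forks a subshell first, so exec replaces only that forked child, never the invoking shell.

```shell
# 'exec' normally replaces the current shell, but '&' forks a subshell first,
# so only the forked child is replaced; the parent shell carries on regardless.
tmp="$(mktemp)"
exec echo "child was replaced by echo" > "$tmp" &
wait                                   # parent is still alive to wait on the job
child_out="$(cat "$tmp")"; rm -f "$tmp"
echo "$child_out"
echo "parent: still running, exec did not touch it"
```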
igorzwx is obviously still stuck on ChatGPT4, they attempted to fix hallucinations in the ChatGPT5 release