2022-11-27
08:39:42 <rwp> Hi Juest. Is that a question? Are you asking how to recover and set a new password?
08:41:21 <rwp> The safest and easiest recovery is to download the installer image (I suggest the "netinstall" image), flash it to a USB device, and boot it in Rescue Mode.
08:41:24 <rwp> https://www.devuan.org/get-devuan
08:41:55 <rwp> In Rescue mode it will say "Rescue" in the corner. Important not to be installing but to be Rescue-ing.
08:42:20 <rwp> It will guide you through mounting your system and obtaining a shell on your system. Then changing your password.
08:50:15 <fluffywolf> I usually just init=/bin/sh and passwd...
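[A sketch of the init=/bin/sh route fluffywolf mentions; the 'e'-to-edit step assumes a standard GRUB boot menu:]

```
# At the boot menu, press 'e' (GRUB) and append to the kernel line:
#   init=/bin/sh
# Boot that entry, then at the emergency shell:
mount -o remount,rw /    # root is typically mounted read-only at this point
passwd root              # set the new password
sync                     # flush changes to disk before leaving
exec /sbin/init          # resume normal boot (or just power-cycle)
```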
12:45:17 <sfox> WTF is a systemd timer and how can I install sanoid without one?
12:45:18 <sfox> https://github.com/jimsalterjrs/sanoid/blob/v2.1.0/INSTALL.md
12:45:41 <sfox> trying to follow this guide using the debian instructions to backup my zroot pool within my workstation to a second pool I just setup
12:45:58 <sfox> is it like a proprietary version of cron?
14:28:06 <gnarface> sfox: probably
14:36:58 <lts> Yup. Just set up a cronjob for "sanoid --cron".
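[For reference, that cronjob could look like the entry below; the binary path and the 15-minute interval are assumptions -- check `command -v sanoid` and sanoid's INSTALL.md for the recommended schedule:]

```
# /etc/cron.d/sanoid (hypothetical): take/prune snapshots on a schedule
*/15 * * * * root /usr/sbin/sanoid --cron
```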
19:54:32 <onefang> Anyone know a simple way to get a MIDI drum controller to actually produce drum sounds on my Devuan Chimaera desktop, using ALSA? They all seem to want JACK.
19:56:21 <onefang> I rescued one of these https://alesis.com/products/view2/v25 from garbage, where it had been soaking in dirty water overnight. Got the drum pad half working after cleaning it. Still have to clean the keyboard part. MIDI monitors tell me the drum pad events are being sent.
20:12:32 <Juest> thanks rwp!
20:39:50 <Juest> I'm using ascii for parity and I'm getting a black screen in Xorg in VMware
22:01:53 <onefang> Ah hydrogen is the answer. It just defaulted to low volume, so I didn't hear my drums the first times I tried it. lol
22:05:53 <onefang> Also I have to thump the drum pads very hard to get loud sounds out of them.
22:44:15 <fluffywolf> alas, I lack any musical abilities, and thus do not know anything about midi drum pads.
22:51:14 <gnarface> that might be the reason it was in the trash, they do wear out
22:51:26 <gnarface> but maybe with some bending and gluing you can refurb it
22:51:55 <gnarface> they're supposed to be pressure-sensitive so some variation in volume is expected, it's just a question of whether it's appropriately calibrated
22:52:38 <gnarface> maybe if the sensors are clearly dislodged or deformed on the inside they can just be fixed by hand
22:52:53 <gnarface> most people wouldn't open them up to check
23:26:56 <Nrml> I'm about to install a new server (headless machine on basic x64 server hardware), and I'm just about fed up with systemd and all its issues, so of course I will be going with Devuan. I checked https://mirror.leaseweb.com/devuan/ and see there's already a "Daedalus Preview 20221121" available. Is it safe to install from it, or should I just install Chimaera and later do a `dist-upgrade`?
23:29:54 <nemo> IMO if you're just starting anything you should stick with stable
23:30:00 <nemo> I mean, that's the debian way, right? :)
23:30:11 <nemo> but gnarface here would know the status of the dev version
23:30:27 <nemo> BTW, for anyone who was trying to help with the mystery of my VM issues
23:30:36 <gnarface> no, i don't know about the status of the preview, but i do know that if you do a minimal install of chimaera you can upgrade very quickly
23:30:53 <nemo> the problem turns out to be something called "desktop central agent" which is installed on all the VMs to monitor things
23:31:12 <nemo> one of the things "desktop central agent" does is execute a system status of all services. for no useful reason on these machines
23:32:01 <gnarface> i think fsmithred would know the status of the preview though
23:32:14 <nemo> oh... right fsmithred aaagh. sorry. bad memory
23:32:24 <nemo> you're right. fsmithred is the go-to-guy for those images
23:32:37 * nemo needs more coffee and more hanging out in #devuan again
23:32:46 <Nrml> gnarface, fsmithred: it would be a minimal install plus a couple of extras I can't live without: openssh-server, screen, vim-nox, etc plus docker. All my 'fancy' stuff would go inside docker containers and so would be isolated from any changes in the basic machine
23:33:13 <Nrml> nemo: I already run a couple of Chimaera machines with total success and no issues
23:33:27 <nemo> Nrml: do you have desktopcentralagent installed? :)
23:33:31 <nemo> if not. unrelated ;)
23:33:42 <Nrml> nemo: no, all my machines are headless
23:34:03 <Nrml> haven't even heard about desktopcentralagent TBH :-)
23:34:19 <nemo> yes. it has nothing to do with headless either
23:34:33 <Juest> umm hey, any clue why ascii on vmware gets stuck in a black screen with Xorg?
23:35:31 <nemo> Nrml: https://www.manageengine.com/products/desktop-central/agent-installation.html since you seemed interested it's this thing
23:35:55 <nemo> so anyway. back to my issue... it seems what they are doing is essentially... /bin/ls /etc/init.d/* | while read f; do $f status;done
23:36:12 <nemo> the result of that is when it hits rcS which does not handle a "status" param at all, they shut everything down
23:36:44 <nemo> I'm trying to figure out (1) why does rcS exist - is it needed and (2) would it hurt anything if I added a check for a status param so this doesn't happen.
23:37:06 <nemo> my guess is they wrote their code for systemctl then put in a lazy legacy init workaround
23:38:19 <nemo> for now I ran mv /etc/init.d/rcS /root
23:38:23 <nemo> (it's just a symlink though)
23:38:47 <nemo> now I'm wondering what other stuff they screwed up
23:39:55 <gnarface> i'm almost 100% sure that you should just check for it calling rcS and skip it
23:40:15 <gnarface> i think you do actually need it
23:40:35 <gnarface> i think it's basically just the one that calls all the other scripts
23:41:12 <gnarface> i think actually you should skip rc, rcS, and rc.local
23:41:51 <gnarface> it probably won't hurt anything if it calls "/etc/init.d/README status" but i'd probably skip README too
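[The skip list gnarface describes could be sketched like this; poll_status is a hypothetical helper, not anything the agent actually ships, and the directory is a parameter rather than hard-wired to /etc/init.d:]

```shell
#!/bin/sh
# Poll "status" on every script in an init directory, skipping the
# runlevel helpers (rc, rcS, rc.local) and README, which don't take
# a "status" argument.
poll_status() {
    dir="$1"
    for f in "$dir"/*; do
        case "${f##*/}" in            # basename of the entry
            rc|rcS|rc.local|README) continue ;;
        esac
        [ -x "$f" ] && "$f" status    # only run executables
    done
}
# e.g. poll_status /etc/init.d
```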
23:43:07 <Nrml> nemo: thanks for the link
23:46:08 <Juest> i guess im not getting help because im running an unsupported version and also older vmware workstation 14.1.8
23:48:21 <gnarface> Juest: i can only guess it has something to do with guest extensions
23:48:29 <nemo> gnarface: yeah. it was funny. I saw it erroring on README which pretty much confirmed my * suspicions
23:48:45 <nemo> gnarface: I can't unfortunately control the agent. I'm going to file a bug with manage engine though
23:48:52 <nemo> gnarface: other folks besides me put this on our instances
23:49:22 <nemo> gnarface: so I need to edit the init.d scripts to handle a status keyword for now
23:49:29 <nemo> even if it's just to exit on calling it
23:49:41 <nemo> gnarface: really this sort of thing is a bad sign. increasingly poor and lazy support for traditional init
23:49:50 <nemo> I might be forced to abandon devuan in this new world
23:50:13 <gnarface> many of the properly behaved ones do just define a status function that does nothing
23:50:14 <nemo> I mean. at least systemd has had a decade to shake out bugs, even if their scope creep has not gone down at all
23:50:45 <nemo> gnarface: would it hurt anything if I added those to the rc scripts?
23:50:50 <nemo> I could just copy and paste it
23:51:05 <nemo> I would have to replicate it to all the devuan machines here though. kind of annoying. I'd installed a fair # of them. about a dozen
23:51:21 <gnarface> so crazy it might just work
23:51:44 <nemo> gnarface: would it be reasonable to ask devuan to support this upstream as a safety measure?
23:51:50 <nemo> I'm also going to report it to manageengine though
23:52:00 <nemo> even if odds are thin of them being sympathetic.
23:53:12 <gnarface> hmm, i'm the wrong person to ask about it
23:53:21 <nemo> actually only scripts that don't handle any keywords at all should be a problem
23:53:28 <nemo> most scripts that do handle one would just exit if no match
23:53:43 <nemo> like /etc/init.d/networking handles start/stop/reload/restart/force-reload
23:53:52 <nemo> giving it no params echoes usage, giving it "status" does nothing
23:54:08 <nemo> well. echoes usage :)
23:54:45 <nemo> I think rcS is the only bad one then
23:54:56 <nemo> all the others actually check params
23:56:45 <nemo> I don't see why rcS is even a thing
23:57:01 <nemo> would it break things if I just moved it out of the way on the dozen machines?
23:57:13 <gnarface> like i said, i think it would actually break things to remove it
23:57:17 <nemo> yeah. every other script in init.d checks input params which is proper behaviour
23:58:07 <nemo> gnarface: ok. sorry. I'm typing on a very very small screen. I just spotted the line where you said it calls all the others
23:58:24 <nemo> ok... I will just have it check for input params then
23:58:50 <gnarface> nemo: my instinct is that these people are doing it entirely wrong and they're supposed to just call telinit once, and as such this is not an actionable bug, but fsmithred might have a different opinion (i agree it also doesn't seem like it'd hurt anything to trap status on rcS)
23:59:11 <gnarface> well, it doesn't seem like it'd hurt anything except that it might encourage bad behavior
23:59:28 <gnarface> and i don't think that's really what we're about here
---------- 2022-11-28 ----------
00:00:53 <Nrml> So re: using Chimaera stable vs Daedalus Preview 20221121, the consensus is that I should install Chimaera and later do a `dist-upgrade`?
00:01:30 <gnarface> Nrml: that's unquestionably the safest approach, but those images are probably there to solicit public feedback, so it kinda depends on what your priorities are
00:04:14 <Nrml> gnarface: well, I have some leisure re:issues with this particular install (a couple of days) but not much... eg can't afford to stay a week with issues
00:04:44 <Nrml> OTOH I would like to contribute to Devuan if I have any issues, or with a positive report if I don't have any
00:06:00 <Nrml> So I think I will try Daedalus Preview and if some heavy sh!t hits the fan, reinstall with Chimaera stable
00:06:01 <gnarface> Nrml: i would just do the upgrade to be safe, but i would make sure not to install anything except "standard system utilities" in the initial install, to avoid complications with desktop migrations
00:06:32 <gnarface> the preview images really might be fine though, i dunno
00:06:38 <Nrml> re: "standard system utilities", are docker and docker-compose standard?
00:07:08 <gnarface> no, but the point is that if you install them after upgrading to daedalus then you don't have to download everything twice
00:07:14 <djph> Nrml: I wouldn't consider them a "standard system utility"
00:08:42 <Nrml> re: "download everything twice", that's not an issue, I have plenty of bandwidth for this server
00:08:50 <gnarface> by "standard system utilities" i literally just mean that option on the tasksel panel of the installer
00:09:21 <gnarface> it contains basic shell tools like "less" and other simple conveniences that aren't strictly required, but it's safe because it doesn't contain any graphical programs and it is a trivial amount of extra download
00:09:52 <Nrml> thanks for the clarification re: "standard system utilities"
00:10:22 <gnarface> what i would definitely NOT check is any of the desktop environment checkboxes, or server type checkboxes
00:11:22 <Nrml> My idea is to check just "standard system utilities" in the initial install and then add openssh-server, screen, vim-nox, docker and docker-compose via `apt-get install` after first boot
00:11:33 <gnarface> if you have the time and bandwidth to afford the risk of one wasted try though, please don't let me stop you from trying out the daedalus preview
00:11:47 <gnarface> and let us know how it works
00:11:59 <gnarface> i can only actually recommend methods that have worked for me
00:12:07 <gnarface> i mean that i've actually tried
00:12:21 <Nrml> I will do that. Thanks for the guidance!
00:12:39 <Nrml> bye, and back later with a positive report I hope
00:13:29 <gnarface> nemo: about this calling everything in /etc/init.d/* thing... the more i'm thinking about it the more i'm thinking they should actually be checking the runlevel and then calling everything in the appropriate /etc/rc*.d/ directory
00:14:44 <gnarface> either that or maybe, just maybe they should be calling rc directly
00:15:04 <gnarface> but that might require modification of rc and then you'd be back to square 1
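[gnarface's runlevel idea could be sketched as below; enabled_services is a hypothetical helper, and the base directory is a parameter so the rc layout isn't hard-wired:]

```shell
#!/bin/sh
# List services enabled for a runlevel by reading the S* start links
# in the matching /etc/rcN.d directory.
enabled_services() {
    rl="$1"; base="${2:-/etc}"
    for link in "$base"/rc"$rl".d/S*; do
        [ -e "$link" ] || continue    # glob may match nothing
        name=${link##*/S}             # strip path and leading "S"
        echo "${name#[0-9][0-9]}"     # strip the two-digit sequence number
    done
}
# On a live sysvinit system: enabled_services "$(runlevel | awk '{print $2}')"
```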
00:33:38 <Sompi> http://paste.dy.fi/H6u
00:36:06 <gnarface> use paste.debian.net if you want me to actually look at that
00:36:42 <Sompi> it is just cpuinfo of my server
00:44:34 <Juest> this is the rescue live image gnarface
00:46:15 <nemo> gnarface: hm... what do you think... exit on $1 != "" or just on "status"
00:46:34 <nemo> it's a shame it doesn't take an explicit stop :) I guess it's an old old path
00:50:46 <nemo> will just go with status for now
00:51:24 <gnarface> status seems safer
00:51:32 <gnarface> though i can't be sure it'll matter in practical terms
00:53:18 <gnarface> Juest: i think for vmware you need a vmware specific guest driver package from non-free
00:53:28 <nemo> if [ "$1" = "status" ];then echo "bad invocation";exit;fi
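[Expanded slightly: LSB init-script conventions reserve exit code 4 for "status unknown", so a guard near the top of a script could answer "status" harmlessly like this (the message text is arbitrary):]

```shell
#!/bin/sh
# Guard at the top of an init script: answer "status" without falling
# through to the start/stop logic. Exit code 4 = status unknown (LSB).
case "$1" in
    status)
        echo "status not supported by this script" >&2
        exit 4
        ;;
esac
echo started    # placeholder for the script's real start logic
```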
01:14:04 <Juest> for xorg to work?
01:14:10 <Juest> it used to work out of the box before
02:55:42 <nemo> it should work better with vmware additions but it definitely shouldn't be necessary
05:24:18 <onefang> gnarface: I was paying particular attention to the drum pads while cleaning, for exactly that reason, they might have worn out AND be more susceptible to water damage.
05:27:16 <onefang> I've already downloaded the configuration software for setting things like the velocity curves, which will likely help. Alas they are Mac and Windows only. I haven't tried them yet. Might be able to do that with SysEx or something from Linux if they stuck with the standards. But for now I'm happy it seems to work fine; the MIDI data coming out of it looked good to these old MIDI developer eyes. Next is cleaning up the piano keys.
05:27:29 <onefang> But first, time to wake up.
06:42:35 <psionic> Debian has no hope :(
07:18:02 <onefang> I just checked, it does have the microhope package. So there is hope, just a tiny amount.
07:19:54 <onefang> Oh wait, that wasn't a psionic support question, more a topic for #devuan-offtopic.
07:45:31 <onefang> BTW the "dd a partition image into a qemu-nbd connected qcow2 image's partition device" did work, but it was horribly slow. Took days.
07:48:03 <onefang> 187 GB on spinning rust before you ask.
07:49:03 <rrq> I tend to use raw images for qemu, and on occasion raw partitions
07:49:45 <onefang> I like the features of qcow2 images.
07:50:49 <onefang> Like not storing empty space, which I could see as I watched it grow slowly while I was dding to it.
07:52:14 <onefang> And "store changes to this file" so I can easily rollback after an experiment gone bad.
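[That rollback feature is a qcow2 backing file; a hypothetical qemu-img workflow, with filenames invented for illustration (-F names the backing format, which newer qemu-img versions require you to state explicitly):]

```
# All writes go to the overlay; deleting it rolls back to the base image.
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
# ... run the experiment against overlay.qcow2 ...
# Roll back: remove the overlay; base.qcow2 is untouched.
rm overlay.qcow2
```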