Thank you for helping.
In contrast to OBSD, Linux can list loaded modules and then show a given module's dependencies:
lsmod | grep lp
modinfo -F depends lp
I'm aware of all that - OpenBSD doesn't need to list lkms as there aren't any (historically the functionality was there but lkm support was removed years ago). FreeBSD and DragonFly have similar tools for loading and listing lkms (kldload, kldstat, etc).
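For reference, the FreeBSD equivalents look something like this (from memory, so check the man pages - the linux compat module is just an example):
# kldstat
# kldload linux
# kldunload linux
kldstat lists what's loaded, kldload/kldunload insert and remove modules by name.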
What is interesting is that other distros just let go of whatever I remove from the kernel (aside from vital parts, of course). Devuan seems really attached to the parallel port.
The Linux kernel is the same, no matter the Linux distribution in use.
I assume the parallel port is completely disabled in the system BIOS / UEFI? With it disabled, I would guess that the modules should not be loaded regardless?
Disclaimer: it's been a few years since I touched Linux...
grep your kernel config for those drivers. If any are built in rather than compiled as a kernel module (.ko file) then blacklisting won't have any effect.
If some or all are built as modules, as I recall another module which isn't blacklisted can still cause a blacklisted module to be loaded - and of course a blacklisted module can still be inserted manually. In the case of parport, etc, I'm not sure what other modules might cause them to load.
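A rough sketch of what I mean (paths and names are Debian-ish, adjust for your setup):
$ grep -i parport /boot/config-$(uname -r)
If that shows =m rather than =y, the drivers are modules and blacklisting can work:
# echo "blacklist lp" >> /etc/modprobe.d/no-parport.conf
# echo "blacklist parport" >> /etc/modprobe.d/no-parport.conf
# echo "blacklist parport_pc" >> /etc/modprobe.d/no-parport.conf
And to stop dependency resolution pulling one in anyway, the modprobe.d "install" trick should do it - modprobe runs the command instead of loading the module:
# echo "install parport /bin/false" >> /etc/modprobe.d/no-parport.conf
If the module gets loaded early from the initramfs, you'd also need to regenerate it (update-initramfs -u on Debian-based systems).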
Despite the fact that I've hosted 10 different Linux OSes in VM on this machine perfectly fine.
(All the working ones have one thing in common: they either minimise or don't have systemd on them on default install.)
I hear this argument often. A certain type of user goes to a forum, posts a problem and claims that "it worked perfectly on Linux distro xyz". It really doesn't prove anything. You can't isolate the fault to one particular component unless you have actually found the solution to the problem with that particular Linux distribution - just removing it and installing a completely different distribution with a different kernel release doesn't really prove anything. The many more things they don't have in common could be the source of your problems.
Lubuntu's L is also of course for 'Lightweight', which means graphics shouldn't be an issue, and any graphics failure should drop down to text mode, not this weird-ass clown face.
This does show that you have little understanding of the Linux kms/drm driver stack and X.org. The "lightweight" tag applies to the window manager or desktop environment in the case of these Ubuntu respins (I believe the L is for LXDE); the base system is usually the same. As I recall it was possible to install/remove the other full blown desktops by installing/removing the meta packages - that may no longer be the case for all I know. None of this should have any bearing on the X.org implementation - that should be the same.

The X server dropping to a tty also has no bearing on this. If the display is configured or detected incorrectly (again, nothing to do with systemd), then the result can be what you posted. I would have to guess, but perhaps your "hardware" (virtualised in your case) wasn't supported and X.org tried to initialise a frame buffer device (in glorious 8 bit colour). If you'd examined the log files for X.org you might have found out why, or at least found something to point you in the right direction. As you've not stated what VM you're using, nor the emulated architecture or how it's configured (e.g. whether vt-x or amd-v are enabled), there's no way of knowing.
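If it happens again, the X log is the place to start; something like this usually surfaces the relevant messages (the log path is the common default, it can vary):
$ grep -e '(EE)' -e '(WW)' /var/log/Xorg.0.log
(EE) lines are errors and (WW) lines are warnings; anything about falling back to the fbdev or vesa driver would support the frame buffer guess above.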
Even putting aside the blatant blame-shifting game (what bug hunting or diagnosis did you use to reach the conclusion on what's occurring on my machine?), it's a valid question - but directed at the wrong person, and should be directed at the systemd developers themselves; one wonders why desktop environments (such as say, LXDE, KDE and XFCE) are being integrated into systemd.
The X server renders graphics - your low res, low colour depth, disproportioned image can only originate from the X server (which I can assure you is in no way "integrated" into systemd). It's an easy conclusion to come to. But if you can provide factual evidence that systemd is in fact responsible, I'll accept it's my error.
You could also ask the Virtualbox, LXDE developers or maybe the X.org or drm/kms developers on their mailing lists. The answer will probably be that they haven't integrated their projects with systemd.
And don't bother with the 'init'/'umbrella project' switcheroo, someone already tried it and from what I saw they didn't come back (unless it was under a sock account).
What?
Well before you start piling on with any further chastisement... the OP was apparently issued with a 1 week ban on 08/10/17.
http://dev1galaxy.org/viewtopic.php?id=1637&p=2
If those were the terms, then this should have expired, but as yet they haven't returned.
Sadly "anti" type movements - and in this case - projects relating to exclusion/avoidance of a particular piece of software are often prone to hijacking by agenda driven types. Or they can simply attract misinformed zealots. We had a few at the debianuserforums.org a few years back. When challenged on logic or facts their arguments usually fall away and their technical ability is almost always found to be lacking. It usually amounts to "I read it somewhere" (usually a blog or forum) combined with lots of FUD.
While systemd is highly contentious software, non fact based attacks on systemd and its developers aren't helpful to anyone.
And of course the only difference between the releases was systemd...? Not ~ three years of development, in the kernel and video driver stack to name just a few...?
Setting aside what I might think of systemd, this kind of thread only fuels the argument that many systemd opponents are just ill-informed fanboi types. I'm not sure what Linux KMS/DRM video driver stack and X.org server have got to do with systemd. This looks more like a driver bug (or a configuration error)?
Despite the claims, the *buntu family has always used "bleeding edge" technology from Debian testing/unstable/upstream rather than basing off the stable release. It's not unusual for there to be a regression or several in a particular release's kernel.
As we don't know what kind of virtualisation was used in this case, nor how it was set up, it's hard to say what the cause might be.
And that's before we even get onto the subject of "testing in VMs"...
It can be tempting to install the various Debian respins and the Ubuntus and Mints, etc to 'help' relatives/friends... but if things go wrong - and they do, you will want a system you can support/administer and fix quickly and easily.
My father is over 80 and has been running FreeBSD on his old laptop for about 2 years. He can only manage basic web browsing, reading the papers, etc - and it more than does the job. If I were to install some Ubuntu or whatever and just leave him with that, when it came to upgrade time I'd be fighting with an unfamiliar system and wasting huge amounts of time. I recently upgraded him from 11.0-release to 11.1-release (a bit late) and now all is well for another few months.
All in all though, he's the exception; I tend to avoid proselytising about operating systems as 99% of people just aren't interested and want a working tool/entertainment system. I've found that while many are initially impressed by free *nix operating systems, when it dawns on them that they can't run the crap they used to run or do what they used to do, the novelty wears off very quickly - and you could be put in an awkward spot (even being asked to reinstall e.g. MS Windows).
Flash needs to die fast. I'm not sure why someone would run a free *nix like OS and install a horrible obfuscated piece of (proprietary) crap on top of it, which comes loaded with numerous security and privacy issues. Youtube and 'catch up TV' sites have mostly moved to HTML5 already.
Firefox and Seamonkey are based on XUL and don't depend on GTK+ directly; as far as I know, they use (emulate) the themes.
If you want finer grained control over dependencies, you'll have to build from source. Upstream packagers whether Mozilla or the various distributions will not feel obliged to exclude certain software...
Building modern browsers from source can be time consuming however and rebuilding every time there's an update will be tedious.
That's one of the downsides of the static compiled binaries - the pulseaudio dependency...
Seamonkey is a good alternative.
You can download the pre-built binaries from mozilla.org: https://ftp.mozilla.org/pub/firefox/releases/56.0.1/
It's better and easier to just download the binary (statically compiled) Firefox from Mozilla, stick the whole directory in /usr/local/ or similar and symlink it into /usr/local/bin. Makes updates simpler as well.
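Something along these lines (version taken from the link above; the exact tarball name depends on your platform and locale):
$ tar xjf firefox-56.0.1.tar.bz2
# mv firefox /usr/local/
# ln -s /usr/local/firefox/firefox /usr/local/bin/firefox
Updating is then just a matter of replacing /usr/local/firefox with the contents of the new tarball.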
What would happen if I were to remove these dot files?
Dot files usually store state information, user customisations and other data such as e.g. shell history or a web browser's cache. They're not installed from a package, but created by the user.
Deleting the lot would usually be a bad idea.
Deleting seemingly redundant ones may also be a futile exercise as the programme will simply regenerate them when it starts up again.
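If you want to see what's actually taking up space before deciding, something like this gives a quick overview (assuming a bourne-style shell):
$ du -sh ~/.[!.]* 2>/dev/null
Caches (browser profiles in particular) are usually the only dot files worth pruning.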
Well yeah, the 130-line browser can do most if not all of that, not just forums; it works almost everywhere I've tried it, even YouTube.
"130" lines of python script to run massive bloated webkit-gtk doesn't seem "light" to me.
And yes it was consumerist propaganda, what I heard was "Throw away yer old hardware because Mozilla, that's why".
You probably "heard" what you wanted to hear.
It's quantitatively no different than when the spoiled child whines that it's ALL the other softwarez that's to blame, not my precious cause it's perfect. (systemd).
It's completely different in fact.
Ridiculous, zero chance I'll let a browser make my computer buying decisions for me.
Good for you, I wasn't making that case...
But anybody who buys into this and wants to throw away their old hardware because it's a little slow running Chromium, please send them to me; I'll happily refurbish them and pass them along to families in need that can't afford to pay 1000 bucks for a new browser-support-system.
You seem to find differing opinions threatening, to say the least? In fact you react in a judgmental, kneejerk, defensive fashion to my expressing an opinion different to your own. I'm happy that you want to use older hardware - I use older hardware as well (my main system is 10 years old - I do this mainly to save £££s and to avoid questionable "technologies" such as UEFI and IME/PSP). But I don't think it's realistic to raise people's expectations in general of older hardware running the latest desktop environments, browsers and other applications. This is because your use case is probably not everyone else's use case (and neither is mine).
I was kinda worried even saying baloney, I don't want to seem antagonistic or whatnot; I'm just passionate about the things I do and believe in, so I hope nobody ever takes offense at my yowling. I don't mean anything personal by it and I'm not angry.
No offence taken.
Except dconf, and I ******* hate that ****** ******* ****.
The wonders of binary configuration. I don't "hate" it, I simply find it quite loathsome and cumbersome.
Unfortunately this stuff finds its way into everything:
$ pkg_info -Q dconf
dconf-0.26.0 (installed)
dconf-editor-3.22.3
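If memory serves, pkg_info can also show what on the system actually requires it (reverse dependencies):
$ pkg_info -R dconf
Which usually turns out to be half the GTK/GNOME stack.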
Complex sites are slow to load though, all that bling bling trying to load. And occasionally video causes it to crash.
That almost sounds like webkit running on old hardware...
And lest ye forget:
https://en.wikipedia.org/wiki/Wirth%27s_law
https://en.wikipedia.org/wiki/Moore%27s_law
Well pardon me but that's baloney[etc]
I prefer the phrase "what a load of old bollocks" for such occasions...
I hand-built the system that runs it like I build 'em now.
I "hand built" most of my x86 based systems from the various intel/AMD i686 based machines in the mid 90s to the present date. It didn't make them any quicker than an off the shelf OEM system, but it did make them cheaper (back then it was) and more flexible in terms of hardware.
Admittedly browsers have gotten cumbersome, bloated and just a PITA to use, but just because Chromium and Mozilla don't have their **** together doesn't mean I should start tossing perfectly good hardware,
Sadly these bloated masses of C++ (or "rust") are what most people need to access the web. If you've ever built firefox or worse still chromium from source, you'll know what I mean. Yes I can use a basic browser with no javascript support to access a few forums, I can even use elinks or lynx or whatever, but can you pay your bills, shop online or do your online banking...?
that's some silly consumerist propaganda right there, it's what I would expect from Redmond or Cupertino folks, but not on a linux forum.
It's neither "consumerist" nor "propaganda"... Linux itself has also become "consumerist" - you essentially have free software, released under free copyleft licences - but all financed and coordinated by fortune 500 companies... the webkit and blink layout engines also happen to be corporate sponsored. But not being of "Cupertino" or "Redmond" origin no doubt that doesn't count...
Good to see you again.
Still using OpenBSD here yes. I've not done much with Debian since the Wheezy release. Did play with the jessie release a few years back, just to see what all the fuss was about, but didn't hang about for long.
I don't think "reviews" of this kind, what I'd term "tech press" reviews, are worth a lot (In fact it's arguable if distrowatch has any real value...). The reviewer talks about what he can install, how easy things are, the environment, applications, etc - all from the perspective of a consumer...
Surely when discussing a so called "lightweight" Linux derivative distribution, it's necessary to look at why it claims to be "lightweight" and to see if the claims have any basis at all?
When it comes to Debian and so called derivatives, it's usually down to sticking one's own customised desktop experience on a livecd - the earliest derivatives, such as Ubuntu, started out as this kind of thing.
"Lightweight" then usually means, much the same thing with a window manager and "less" installed". The overall memory footprint, kernel, resource usage, etc is is never examined in any real depth in these kind of reviews. In fact once the end user installs the packages they need, "light" is pretty much over and done with at that point. Firefox, for example consumes huge amounts of memory and CPU. You could install any lightweight Linux distribution with a basic window manager and still find your creaking hardware is not up to the job of running a modern browser.
And as the comments pointed out, the reviewer doesn't even test on old hardware or discuss performance...
And then it comes back to - just how useful is very old hardware anyway? The answer is: not very. If you have something with around a 2GHz clock and a few GB of RAM you have usable "old" hardware. At this point what Linux distribution you install doesn't really matter so long as you don't expect the latest KDE or gnome to perform (at all).
The ~ 1GHz and 1GB of RAM era hardware and older, is sadly just not up to the job.
For an old PC from this era you could try installing something like NetBSD or OpenBSD and using as a router, access point, firewall, etc...
Grub + JFS has been known to be problematic. There may be workarounds (try searching the web), but it's probably best avoided - just use lilo (which doesn't understand file systems and doesn't attempt to). As you're using a DOS MBR, there shouldn't be any issues.
lilo is actually very dependable, simple and solid, always has been. It requires one simple plain text configuration file rather than several incomprehensible ones, spanning multiple directories. It can be configured to just boot the kernel, without fuss, or set up as a text based or simple graphical menu.
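For illustration, a minimal /etc/lilo.conf might look something like this (device names and paths here are examples, not gospel):
boot=/dev/sda
prompt
timeout=50
default=linux
image=/vmlinuz
  label=linux
  root=/dev/sda1
  read-only
Remember to run lilo (as root) after every change to the config or the kernel, since it stores block offsets rather than reading the file system at boot time.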
I don't think grep'ing for the string "systemd" is going to yield anything useful.
/etc/systemd, /lib/systemd and /var/lib/systemd are directories.
Anything in /usr/share/man/ is irrelevant as are the lintian files
/lib/systemd/system/* seem to be files (services?) relating to systemd/udev?
None of the above are binary as far as I know.
The only binary is libsystemd0, which is essentially useless cruft without systemd installed.
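If you want to see whether a particular installed binary actually links against it, ldd will tell you (the path below is just an example - substitute whatever binary you're curious about):
$ ldd /usr/bin/pulseaudio | grep libsystemd
No output means that binary doesn't link libsystemd0 directly.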
I suppose it's the same with any package; you get cruft which relates to software you don't use and don't install.
udev is, of course, part of the systemd source.
It's not that simple and there aren't really levels. In Debian based systems there isn't a "base system" as such, everything is a package and only 'essential' packages could be regarded as a "core" of sorts (and not even a kernel is considered essential - though as I recall you're warned if your current system doesn't have one). systemd transcends what you would regard as "core system", desktop stuff / userland automation and networking, etc, etc.
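If you're curious which packages on an installed Debian-based system carry the essential flag, something like this should list them (field names from memory, so double check dpkg-query(1)):
$ dpkg-query -W -f='${Package} ${Essential}\n' | grep -w yes
It's a surprisingly short list.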
In the case of 'qupzilla', at a glance, it seems to pull in the whole Qt kitchen sink. libsystemd0 is also part of that dependency chain. I didn't go in depth, but there may be other systemd dependencies for the package, as packaged for Debian unstable.
Firefox on the other hand isn't (or wasn't) toolkit dependent and is more monolithic.
I think I was wrong in my assumptions about the independence of Qt.
That's often the case with assumptions. Qt, as with many so called "open Source" projects, has corporate entanglement and allows commercial licensing. However...
So I tried the installation of qupzilla from the sid repository to see what it depends on (I knew of some libwebkit5 stuff) and WOW!!!
It wanted to get rid of eudev, sysvinit, and a whole bunch of core Devuan stuff and install, among 30 other things, systemd!
This is a browser based on Qt that is supposed to be not just system independent but even desktop independent.
So this whole tied-up chain is based on Qt stuff that is linked to systemd.
It's important to understand how dependency resolving package managers work. Debian's in particular is a complex mass of dependency chains, with fragmented packages (usually one source package builds many smaller packages, rather than one big binary package), meta packages, dummy packages, etc. It's designed to pull in and resolve all dependencies for all use cases, it's not designed to make ideological decisions - for example it's not going to cater to people who don't like gnome or don't like software from a certain developer - and as you're mixing repositories it's not surprising that things don't work as intended. And as Debian is to all intents and purposes a systemd (+gnome) distribution, it's no surprise at all that it pulls in systemd.
So, I think qt needs a close up look and maybe it is not as innocent. Or is it that debian links all this together as a chain?
It's the second one - a Debian (or distribution specific) issue. The dependency chain will "link back" to, mostly, freedesktop.org userland components, which will link to systemd userland components, which in turn pull in systemd itself. This is because the packages were built with systemd flags enabled, and so in order to produce a working system for everyone, those systemd dependencies have to be listed in the package's control file.
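You can trace such a chain yourself with apt-cache (qupzilla used here simply because it's the package under discussion):
$ apt-cache depends qupzilla
$ apt-cache rdepends libsystemd0
The first shows what qupzilla pulls in; the second shows everything that declares a dependency on libsystemd0.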
The Debian approach to leaving out systemd is to run it with systemd-shim as far as I know? This means it's acceptable to install systemd in all cases, but not run it as the init daemon ("PID 1") if the user has installed a different init system (this is probably what equates to 'init choice' for some). This is why your mixture of Devuan and Debian unstable, while it works for now, will ultimately fail. You will - at some point - hit a nasty broken dependency, where the only solution is some in depth 'package management foo'.
Rather than installing stuff from Debian's unstable branch, I suggest that you might be better off porting the newer upstream releases of your top favourite software over to Devuan (building from source and packaging them). This allows better control over dependencies - especially with respect to systemd - installing binary packages from another distribution gives you no control at all.
The same exact package on Devuan is now installable and it only brings in a couple of other files, webkit5 and some libwebkit...
Of course.
It's called "backporting". It's generally what you have to do if something you want isn't available. It's actually more or less how the Debian backports started out. Originally they weren't official repositories but as time went on, got absorbed by the project.
It's far better and more rewarding to learn to do it yourself, than hanging on in the vain hope that someone else will do it for you...
Good luck and stick with it.
Well generally, one would:
# apt-get build-dep tint2
Make a "build" directory in your /home/your_user
$ apt-get source tint2
Then build the deb package from the source in the usual way (dpkg-buildpackage in a fakeroot environment).
Once the package is built, the build dependencies (dev packages) can be removed.
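In concrete terms it's roughly this (tint2 as the example package from above, with the usual dpkg-buildpackage options - adjust to taste):
$ cd tint2-*/
$ dpkg-buildpackage -rfakeroot -us -uc -b
$ cd ..
# dpkg -i tint2_*.deb
-us -uc skips signing, -b builds the binary package only.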
You could build it from source... i.e. create your own backport from the package source in the unstable repository.
gNewSense is an FSF sponsored project which predates systemd and has very specific goals and is very much driven by ideology. It's easy to see why the Devuan people didn't decide on just "joining" that one.
Also gNewSense (or indeed FSF/Stallman) doesn't seem to have any specific goals or opinions regarding systemd. It's not a given that future releases of that Linux distro won't just include systemd once it becomes the expedient choice. (the focus there is more on removing device firmware from the Linux kernel so as to make hardware, which should run out of the box, unusable).
That surprises me about OpenBSD - it's Canadian based, and when I use it, it is with Fluxbox and Firefox. It must be the programs that are added bringing it in.
There is no systemd in OpenBSD (or any of the other *BSD derived projects). You might find some cruft in the way of redundant directories or configs in certain ports (some ports may even spit out redundant dot files), but none of it is functional or of any use.