Profit, that's why.
Processed foods that are quickly digested leave you hungry again soon after = more profitable.
Sugar (or corn syrup) is incredibly cheap, and makes everything more palatable = more profitable.
Homogeneous ingredients that are low in fibre (goto 1) are easier to process, and keep and freeze better = more profitable.
In general, if cheaper ingredients can be substituted and palatability recovered by adding sugar, fat, or flavourings... That's exactly what will be done.
I've worked around enough food production to see this in action, and it's usually a "slippery slope" process - one little recipe change, manager gets his bonus for cutting costs, repeat until your product is corn syrup and lard, with artificial flavour and colour.
There are no incentives to make "over-the-counter" goods better for you, only cheaper, tastier, and more addictive.
The "health food" market isn't much better, and has it's own brand of deceptive profiteering - usually leveraging public ignorance to inflate prices for things nobody actually needs.
Either way, outside of old-school "farmers market" type independents, honest food always loses to profitable food.
Regulators are primarily concerned with safety, and none of this bullshit will make anyone sick... at least in the short term.
The long-term impact nobody wants to tackle, because it's difficult and time-consuming to prove (and food megacorps will just buy some counter "studies" or tie things up with lawyers and lobbying anyway), and it boils down to a collective-good vs. corporate-profit fight, which nobody wants to start.
Welcome to capital-consumerism, the shining land where everyone (scientists and regulators included) has their price, and we allow corporations to become more powerful than governments.
Am I a "Conspiracy Theorist"?
Only when you take garden-variety (pun not intended) stupidity and greed and dress it up as some super-secret plot to take over the world.
The problems with the world's food supply are old, pervasive, and readily apparent to anyone with a brain...
Also nowhere more obvious than in the USA, and other "Americanised" western countries where large food corporations have taken root.
That's not a conspiracy, it's just an observation.
While I agree it's a problem, the truth is it still comes back to individual choices. Just because you live in the US doesn't mean your diet has to consist of Oreos and pizza rolls. Quality food is available in the fresh produce section. You can also find quality meat, but you have to skim through the junk.
Indeed. The problem is that you're essentially fighting a psychological war with corporations far better resourced than you are, and for many, that's a fight they're not going to win.
How can we expect people to make good choices when that's made artificially difficult (yes, fresh produce is at the back for a reason, etc.) at every turn, and they are constantly bombarded with advertising telling them to do the opposite?
having worked with all of the main-stream linux distros,
and working with different configurations for years
Then I suppose you also have years worth of benchmarks to back up your claims, right? Please, post them so we can all see exactly how much faster your "Hyper-Gamer" install is.
using ROOT is a taboo subject
No, it's just a stupid idea to run a GUI as root. It's insecure, it will likely provoke permissions-related bugs because developers don't usually test this scenario, and it provides no benefit whatsoever (besides making it easier to trash your install, which seems to be what you're aiming for anyway).
improved access to system resources
For which you have still not provided any evidence whatsoever, or even presented a plausible mechanism.
based on my personal experience,
and the development of enhanced interaction between Linux,
computer hardware, computer programs, and computer games
are a daily work in progress.
Sounds like a long-winded way of saying "I have not properly tested any of this, but I want to sound like I know what I'm talking about" to me.
You can claim experience all you like, without proof all you are doing is blowing smoke.
if you have any suggestions related to improving the performance
of Linux, and gaming on Linux, by all means share them
Disable side-channel mitigations, extraneous background tasks, and power saving / clock modulation features (and/or consider installing and configuring tuned). Run a light DE or window manager and turn off compositing.
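For the concrete bits, a rough sketch (Debian/Devuan-flavoured; the cups example and the use of sysvinit tools are assumptions about your setup):

# /etc/default/grub -- disable side-channel mitigations (security trade-off!)
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
update-grub    # regenerate grub.cfg, then reboot

# stop and disable an extraneous background service (cups is just an example)
service cups stop
update-rc.d cups disable

# optional: let tuned handle the power-saving / clock knobs for you
apt install tuned
tuned-adm profile latency-performance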
On an otherwise properly configured system, expect a couple of percentage points at best.
The end.
Oh, and refrain from making ridiculous claims without providing any data to back them up.
You do whatever you want to your own system, nobody cares. Advising others to engage in this nonsense is another matter entirely.
Not to put too fine a point on it: Post some (properly controlled) benchmarks or I call shenanigans.
Without data to back it up (with the notable exception of side-channel mitigations, which have a well-documented performance impact), this is nothing but pointless ricer rambling.
This in particular:
What happens when tweaking out a computer for hyper-performance, is that a ton of bugs will appear,
and programs will behave in ways that are totally bizarre, and unheard of to others. Most people,
sit around waiting for innumerable loading times, or intervals, instead of skipping past the red tape,
and taking full control of their computer, and the programs on it. And this is going to cause some errors,
but that is normal because generally most programs are designed to run at slow speeds, and not hyper-speed.
Is complete and utter hogwash.
Computers are deterministic systems, and applications haven't been written to rely on system performance for correct operation (e.g. delay loop timing) since the 1980s. If your "tweaks" are causing bugs or unintended behaviour then your system is grossly misconfigured, your compiler is producing broken code, or your hardware is unstable (i.e. overclocked).
Programs being "designed to run at slow speeds" is pure fantasy, running as root to "reduce delays" is questionable at best (again, benchmarks or baloney) and gratuitously dangerous advice at worst, and the rest is a lightly-mangled manual install guide with a garnish of hyperbole and a dash of blatant misinformation.
If you're going to advise people to ignore decades of security best-practice and run a GUI as root, at least provide some benchmarks so they can make an informed decision WRT the supposed benefits.
Which is a file system check, that runs automatically, based on some hidden parameters,
and which does not work on an f2fs drive preventing me from booting into the desktop environment at all.
The conditions which cause a fsck on boot are not hidden, but f2fs is prone to triggering a full (and extremely slow) fsck on kernel version changes. Its fsck is also arguably quite broken, and both of those are good reasons it's not a supported root filesystem in the installer.
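If you want to see those "hidden parameters" for yourself, here's a sketch for ext4 (f2fs lacks equivalent tunables, which is part of the problem; device names are examples):

# the sixth /etc/fstab field (fs_passno) controls whether fsck runs at boot:
# /dev/sda2  /  ext4  defaults  0  1    <- 1 = check at boot, 0 = never
# on ext4 you can inspect (and change) the mount-count / interval triggers:
tune2fs -l /dev/sda2 | grep -iE 'mount count|check'
tune2fs -c 50 -i 6m /dev/sda2   # check every 50 mounts or 6 months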
Forcibly skipping filesystem checks is a real good way to end up with a hosed filesystem, and comments like this do not improve the credibility of your "mini-guide" one bit...
Nor does:
I chose mbr, which is the first little 8mb partition I made initially.
Because the MBR is not a partition and can never be one, so this is misleading bollocks. What you are presumably referring to is reserving space for GRUB at the start of the disk, and 8MB for that is ridiculous.
FWIW, I run Gentoo as a daily-driver, and have benchmarked things like LTO and some (not completely stupid) experimental compiler optimisations fairly comprehensively. I too can make an unstable system with weird bugs and bizarre behaviour... That doesn't make it any faster under real-world workloads, only more broken.
Even the most rabid -fbroken-math -funroll-brain -fomg-speed gentoo ricers admit that objective gains are minimal for anything but very specific (usually scientific data analysis or synthetic benchmark) workloads, and that's going deep into the weeds at the compiler level, not just futzing about with "gaming" kernels and unfsckable filesystems on a binary distro.
About the only useful advice here is 'mitigations=off' (which should also have a warning WRT the security implications), and the use of lazytime over noatime, which is not widely known but will actually reduce disk access.
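i.e. something like this in /etc/fstab (a sketch; device, filesystem type and mount point are examples):

# lazytime keeps atime/mtime updates in memory and only flushes them
# periodically, instead of hitting the disk on every file access:
UUID=xxxx-xxxx  /  ext4  defaults,lazytime  0  1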
For the rest:
Where is your proof that f2fs is faster than ext4 or xfs?
Where is your proof that running as root improves latency?
Where is your proof that the xanmod kernel improves game performance?
And last but not least, what is:
Systemctl tweaks
doing in a "guide" for a distribution that doesn't package systemd or systemctl to begin with?
Let me put it this way: When I upgrade my desktop (hardware), I build the new machine, swap the drive(s) in from the old box, and just boot it up.
99% of the drivers you will need are already present in the kernel / modules, it's just a question of autodetection, and that's pretty good IME.
Last time I wanted Devuan on a new-old machine, I actually just grabbed a backup of the whole disk from another completely different machine... Which worked without issue, and saved a bunch of time.
I know elogind is not systemD but comes to fill a gap systemD made.
Elogind is a part of systemd, forked and split out into a standalone daemon. Nothing more, nothing less, and nothing at all to do with "gaps".
After getting acclimated to all the Wind'ohs rigmarole this is confusing.
The windows nvidia drivers also support multiple generations, at least technically. In practice windows keeps a record of hardware IDs and the associated driver, which causes update/reinstall rigmarole (found new hardware wizardidiot spam anyone?) if anything changes.
Linux OTOH just loads the drivers you (or udev rules) tell it to, and each driver probes for supported hardware. Dumb, but 99% of the time it just works (and makes things like livecds and moving installs to new hardware much easier).
The disadvantage is that if you want different drivers for two or more compatible devices, you need to set that up manually. Pretty rare thing in practice though.
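The classic example being forcing the proprietary nvidia driver over nouveau (a sketch; exact module options vary by driver version):

# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
# then rebuild the initramfs so the blacklist applies at early boot:
update-initramfs -u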
fsmithred wrote: we got to the fifth post and nobody has yet asked why network-manager got removed?
If the first post had been an attempt at a bug report, I'm sure more would have been done to encourage diagnosis and sending of the relevant information to bugs.devuan.org
But since it was essentially just a rant about dropped balls and low bars, I'd say the responses were just right.
Indeed.
Had the OP phrased the post as a bug report or request for help, help they likely would have received. Whining about how terrible the distro is because an update on unstable "Removed internet access" gets... Some ROFL at best.
I'll tolerate plenty of complaints if a routine update to a stable release causes borkage (particularly if the post includes technical details, as opposed to "$gui_thing got broke because I didn't read apt output"), hell, sometimes I'll even join in...
But unstable and testing are subject to this kind of dependency screwiness at any time, and that's kinda the point of having unstable and testing branches to begin with.
i.e.
One thing that I learned many years ago when running unstable...
Look at the terminal output before agreeing to an upgrade. If it's going to remove something that you consider detrimental to your system, tell it n (no). You can then upgrade the other packages individually that aren't part of the packages that will/would be removed.
Under normal circumstances, it will be fixed within a few days. Other times, it may take a week or two.
This^ The answer is: fix it yourself (and/or file a bug report), or wait.
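In apt terms, something like this (a sketch; substitute whichever package the upgrade is threatening to remove):

# refuse the problematic upgrade, then pin the package until the dust settles
apt-mark hold network-manager
apt upgrade                      # proceeds without touching the held package
# ...a few days later, once the dependency mess is fixed:
apt-mark unhold network-manager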
this is basically Debian Testing
And right there in the name is its intended purpose - testing. If you want to run it as a daily driver that's up to you, but sooner or later you will encounter bugs, the productive response to which is filing a bug report.
internet access just being removed????
Since when is networkmanager == "internet access"?
There are plenty of other ways to configure networking, and networkmanager is not even remotely a critical package.
If this occurred in a stable release it might be problematic, but you should be comfortable with manual network setup (among other troubleshooting / testing tasks) if you're going to run testing or unstable.
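For reference, a minimal static setup via ifupdown looks something like this (a sketch; interface name and addresses are examples):

# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
# then bring it up with:
ifup eth0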
why is wayland bad
It's not that it's "bad" per se, it's that adoption is being aggressively pushed while it's still rather buggy, lacks a number of features people are used to from X11, and only really works properly at all with GNOME sessions.
Much the same as we saw with udev, dbus and systemd, and as we're now seeing with pipewire... which are (totally coincidentally I'm sure) products of the same group of interconnected organisations.
what is the problem with gtk3?
What used to be the preferred flexible, DE-agnostic FOSS widget toolkit is now the GNOME toolkit, where stable ABIs and non-GNOME use cases are given precious little consideration (and in some cases actively discouraged; see the somewhat infamous "Decide if you are a GNOME app or not" quote and the obnoxious attitude given to KDE devs trying to make GTK apps fit in on Plasma), and in spite of solid user demand, customisation and theming options are removed to protect the "GNOME brand" and the holy HIG.
On top of that, it's big and it's slow.
The package libdbus-glib is dependency of firefox-esr, according to apt-get.
No, libdbus-glib-1-2 is a dependency of firefox-esr.
So how does firefox successfully pull in a dependency that does not exist?
It doesn't. libdbus-glib-1-2 is a perfectly valid package:
$ apt show libdbus-glib-1-2
Package: libdbus-glib-1-2
Version: 0.110-6
Priority: optional
Section: oldlibs
Source: dbus-glib
Maintainer: Utopia Maintenance Team <pkg-utopia-maintainers@lists.alioth.debian.org>
Installed-Size: 216 kB
Depends: libc6 (>= 2.14), libdbus-1-3 (>= 1.9.14), libglib2.0-0 (>= 2.40)
Homepage: https://www.freedesktop.org/wiki/Software/DBusBindings
Tag: role::shared-lib
Download-Size: 73.0 kB
APT-Manual-Installed: no
APT-Sources: http://deb.devuan.org/merged chimaera/main amd64 Packages
IOW,
a question on hidden dependencies
s/hidden/mistyped/g
FTFY.
Logs should be disabled by default, and the user should be able to enable a specific one when setting up.
I disagree, logs are extremely useful and shouldn't use much disk space... So long as they're not full of almost-always-irrelevant warning and debug spam, which release builds of well written software shouldn't be generating to begin with.
IME most of that comes from GUI toolkits and DE related components, because for some reason leaving debug messages on and not actually fixing warnings that appear on pretty much every system is what you do when writing GUI applications. Out of sight, out of mind.
The session manager / .xsession-errors is really just doing what it's supposed to do, catching stdout/stderr that would otherwise go into the void. Not X's fault if your apps won't STFU.
Imagine if CLI apps barfed all over stderr like that, you'd never get anything done.
why do I need the default "man" pages in 37 languages?
You don't, which is why localepurge has been a thing for decades.
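i.e. (a sketch):

apt install localepurge
dpkg-reconfigure localepurge   # pick the locales you actually want to keep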
aluma wrote: This is Trinity's log management tool, a legacy of KDE.
Very nice tool.
FWIW, kdebugsettings still exists in current KDE/Plasma, and has a bunch of new features to boot (custom rules & groups, load/save settings etc.).
IIRC there's no obvious menu entry for it though, so you need to call it from a terminal or krunner.
You don't need X or a working GUI of any kind to install GPU drivers. X also has a generic VESA driver that will work with just about anything, though IMO it's more hassle than just installing the right driver from the CLI.
Worst case if the existing driver doesn't work with a new card is you disable KMS (i.e. nomodeset and co.) and/or blacklist the module and reinstall drivers from the console.
I've been through many generations of nvidia hardware on GNU/Linux (at least as far back as NV40/GT6xxx), and I don't think I've ever bothered to "prepare" anything. Just swap it in, reboot, and if X doesn't load then see about fixing the drivers.
Does it look like there will be a push by major players in the Linux community to force it to be implemented as we have seen with systemd?
There is already talk of disabling X11 session support (at least by default) in GNOME... But then that's to be expected, it's GNOME. IMO Redhat/systemd/freedesktop/GNOME might as well be considered the same entity at this point, and the NIH attitude is strong over there.
There doesn't seem to be any justification for it, again, like systemd, so I hope the "if it ain't broke" principle will hold here, as well. X works fine for any ordinary use cases.
The justification is that X is old, complicated, contains a lot of functionality nobody uses any more, and as usual writing shiny new code is more interesting than fixing and/or maintaining old code.
Personally I'll consider wayland when wayland reaches feature-parity with X, and the majority of applications I need work properly with it. Right now it's still full of bugs and several important features aren't even standardised in the protocol yet... So I expect everyone except GNOME to maintain X support for at least a while longer.
Given that it is becoming a standard like Internet Exploder used to be, I do sometimes have to use Chrome to access certain sites.
And the only reason it's become (past tense BTW) a "standard" is because people just accept they have to have it for "certain sites" that only work properly with it, instead of complaining to the webmaster concerned that their site is broken... Much the same as in the bad old days of IE.
As of now we really only have one independent (two if you count opera, but that's not FOSS) modern browser engine left that isn't based on (and largely slave to the whims of) google chrome. If you don't like what google is doing, I suggest you use Firefox instead.
Will the Chromium 'unbranded' version also be forced to follow this new standard? Or could they possibly keep hooks for both types of extensions?
Chromium is really just the open-source build of chrome, without google's proprietary bits. Little else is changed, and I expect they'll follow very close behind everything chrome does.
Some of the more extensive chrome modifications (brave etc.) might hold out a bit longer, but IMO it'll only be a matter of time before everything based on chrome/chromium is manifest v3 only.
Firefox might go down a similar path of course, but as of now that's still a "might", as opposed to chrome/chromium's "will, early next year". Pick your poison.
It concentrates power at the top which is the whole point of infantilizing users.
I know. See edit rant above.
On BSOD in particular, among the many things that drove me to switch to GNU/Linux in the first place were the powerful always-available CLI and the verbose, informative error handling. Both of which are apparently in the process of being deprecated, by people whose motivations are suspect to say the least.
Of bigger concern to me is the implied attitude - VTs and text interfaces are "archaic" (and need to be explained to contributors to an init system), and the way to present system errors is with QR codes.
The dumbing down of interfaces and infantilisation of users is a trend that needs to stop. It's systemic in everything coming out of redhat / freedesktop these days, and IMO it's as much an attack on software freedom as their concurrent push toward weak(or non)-copyleft open-source licences is.
With software there are only two possibilities: either the users control the program or the program controls the users. If the program controls the users, and the developer controls the program, then the program is an instrument of unjust power.
-- Richard M Stallman
I see little functional difference between the developer controlling the software because the source is withheld, and the developer controlling the software because it's intentionally designed to be difficult for the user to understand, and crucial aspects of its functioning are hidden behind abstraction layers and "user friendly" interfaces.
Likewise, if the user doesn't understand the software (or more to the point, isn't meant to), the user cannot control it.
Learned helplessness is where we're headed with our current trajectory of big tech monopolies, disposable subscription gadgets, toddler-oriented interfaces and inscrutable AI "assistants", and helpless users are at the mercy of big technology corporations and the developers they employ.
Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
-- Frank Herbert, Dune
The problem is that when you restart the session the symbolic link will be replaced back by a regular file and will start to grow again. To avoid this you must add the following lines to the .bashrc
Or just use a bind mount as already suggested above, as it won't have this problem to begin with.
OTOH, if you don't want anything written to it, why not just 'chattr +i .xsession-errors'? That's kind of what the immutable attribute is for.
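Either approach is a one-liner (a sketch; note the bind mount needs root and re-applying each boot, e.g. from rc.local, while chattr persists):

# option 1: make the file immutable so nothing can write to (or replace) it
chattr +i ~/.xsession-errors
# option 2: bind-mount /dev/null over it so writes vanish instead of failing
mount --bind /dev/null ~/.xsession-errors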
Figured I had better crosspost this gem from an also somewhat interesting thread on the Gentoo board, for a good laugh if nothing else.
Couple of choice hot-takes:
Following Poettering's guidance, I created this bsod tool.
I have to modify it to take over the entire screen, turn it blue, and display the QR code.
...
in case you wonder what a VT is, it's this archaic textual display logic that the linux kernel uses to do early boot logging before wayland/x11 take over
I wish it were parody... But apparently this really is the level some systemd contributors are operating on, and aping even the most inane and idiotic windows "features" is not only given consideration, this stinky floater has actually been merged.
If it were me reviewing that pull, I would have laughed my arse off, and followed up with a flat "no".
Long may Devuan (and other sane distributions of note) continue to not package or otherwise encourage this amateur-hour circus.
we still have an unadulterated command line available
We did, until somebody decided to remove console scrollback... A change which royally pisses me off, because I use the console TTYs on a daily basis, and now I have to add screen or tmux to the mix just to get a usable interface.
On top of that nonsense, the systemd crowd is currently pushing to have no console TTYs at all by default, because "the gui should be the primary interface"
Thankfully the latter madness has not (yet) infected the few remaining "old school" distros that actually offer meaningful choice beyond "systemd + GNOME + wayland + pipewire, or GTFO".
On the abomination that is GTK3 (and CSDs), this patchset goes a fair way to making it at least somewhat usable and removing the worst of the mobile-UI junk.
It's still bad mind, but if the choice is between sticking to unmaintained GTK2 stuff or eviscerating GTK3 with mushrooms and an ever-growing list of my own "revert $idiotic_change" patches, I guess I'll take the latter. Grudgingly.
here's a small selection in various price ranges
Also: Literally anything that can run GNU/Linux (or BSD) and has 2 or more ethernet ports, in combination with a switch one already has.
Depending on what junk you have laying about (I'm currently running a PCEngines APU2 as a router, but I used an old Pentium III SFF desktop for many years prior), that could mean price ~ zero with features and flexibility superior to an off the shelf router.
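The core of a "PC as router" is surprisingly small (a minimal IPv4 NAT sketch; wan0/lan0 are placeholder interface names, and you'd want a DHCP server and proper firewall rules on top):

# enable packet forwarding
sysctl -w net.ipv4.ip_forward=1
# masquerade LAN traffic out the WAN interface (iptables flavour)
iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE
iptables -A FORWARD -i lan0 -o wan0 -j ACCEPT
iptables -A FORWARD -i wan0 -o lan0 -m state --state ESTABLISHED,RELATED -j ACCEPT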
More generally, details on the connection probably matter here. Pretty much any fibre connection will require a router of some kind, and some require VLAN or PPPoE support on the router as well.
The "sort of works for one machine" bit inclines me to think it's straight IPv4 DHCP in this case, though it may also be using VLANs for traffic shaping.
I have to let Devuan lock me out of Win7 and then hope I can fix it later?
Could be much worse, the Windows installer will simply overwrite any other bootloader without prompting, and you'll need to fix it manually from a live distro.
In other news, so long as you have appropriate bootable recovery media (once known as a floppy disk with a linux bootloader on it, now a live distro on CD or USB drive), you're not "locked out" of anything.
The x86 boot process is complicated and somewhat fragile, and so (IMO anyway) is its successor UEFI. This is why most proprietary operating systems offer the user no opportunity whatsoever to screw it up, by simply assuming there is only one OS installed.
Grub is at least nice enough to give you some control over its installation and options to boot multiple operating systems, though in the absence of os-prober by default you will need to explicitly enable / configure the latter.
In any case, if you want multiboot, you'd do well to do at least some basic reading on how the boot process works and have a live USB handy to fix things if you need to.
FWIW I think the decision to disable os-prober by default is kinda silly, but it's not a big deal to re-enable it or otherwise add entries to the bootloader after the fact. There are comments in the relevant configuration files, and the documentation (either online or via man and info) is extensive.
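Concretely, re-enabling it looks like this (a sketch for a stock grub2 install):

# /etc/default/grub
GRUB_DISABLE_OS_PROBER=false
# then regenerate grub.cfg:
update-grub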
ESP? Extrasensory perception?
Somewhat obviously, no. EFI System Partition, only relevant if you are using (U)EFI boot.
Let us also not overlook the fact that systemd and pretty much everything else coming out of redhat/freedesktop et al. these days is under weak-copyleft or non-copyleft licences (e.g. LGPL or MIT).
If you want to speculate on plausible, non-technical motivation for systemd aggressively absorbing so much functionality previously provided by independent projects, look no further than circumventing the inconvenient (for redhat and their corporate overlords) restrictions in the GPL (especially GPL3) regarding linking non-free code against GPL code and its distribution as part of commercial products.
IOW, is it really "Embrace, Extend, Extinguish", or more "Embrace, Corrupt, Sell"? I expect only time will tell.
@czeekaj Any particular reason for necrobumping a 1.5 year old solved question with something almost entirely unrelated?
@torquebar As HoaS already commented, this is almost certainly due to os-prober using the unstable /dev/sdx names rather than UUIDs.
There are bug reports about it going back over a decade, with the conclusion that UUID support in the kernel can't be assumed, so os-prober shouldn't use it.
Then again os-prober is really just a bunch of shell scripts, so it shouldn't be particularly hard to change if it annoys you enough to do so.
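Alternatively you can sidestep os-prober entirely with a hand-written entry pinned to a filesystem UUID, which is immune to device-name reshuffling (a sketch for a Windows EFI chainload; the UUID and paths are examples):

# /etc/grub.d/40_custom -- boot entry pinned to a UUID instead of /dev/sdX
menuentry "Windows (UUID-pinned)" {
    insmod part_gpt
    insmod fat
    insmod chain
    search --no-floppy --fs-uuid --set=root XXXX-XXXX
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
# then regenerate grub.cfg with update-grub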
TOR is not a "MitM" defence, it's an anonymous routing network. Thrashing said network with generic bulk traffic that has no need for anonymisation achieves nothing but making the network slower for everyone.
Since I'm running a TOR node, that means your "good idea" is potentially wasting my bandwidth.
APT already has release signing and package checksums, specifically to combat MitM attacks. If you want in-transit encryption as well, use an HTTPS mirror, that's what they're for.
If you're extra paranoid you can always verify package certificates and signing keys manually, but unless you're inside a network that blocks normal access to the repository mirrors or have a pressing need to hide the fact that you are running Devuan, using TOR is just stupid.
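i.e. just point sources.list at an https mirror (a sketch; substitute your preferred mirror and release):

# /etc/apt/sources.list
deb https://deb.devuan.org/merged chimaera main
# apt needs ca-certificates installed (and apt-transport-https on old releases)
apt update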
Seriously, the amount of ridiculous tinfoil-hat "security" misadvice floating about these days is just tiring. Stop already.
if I don't block IPv6 I can't get an IPv4 address to ssh into the SBC, and I wonder which packages I might have missing.
The only thing you really "need" for a static-ip ethernet connection is a working NIC driver and ifconfig or ip (from net-tools and iproute2 respectively).
When you say "get an IP" I assume you mean over DHCP? If so, I suggest trying with a hand-configured or ifupdown managed static IP (i.e. dump networkmanagermangler) first, then moving on to manually invoking whatever DHCP client you're using with some appropriate --verbose or --debug flags to see what's going on.
Frankly, other than for laptops or tablets that move between wireless networks constantly (and even then only if you must have a shiny GUI) I find networkmangler more aggravation than it's worth. IME ditching it is step one in any network troubleshooting.
packages like wpa_supplicant, iw
Are relevant only for wireless connections, and the former only for WPA encrypted wireless connections specifically.
which are the mandatory ones?
Mandatory? This is GNU/Linux we're talking about here, there is no "mandatory" beyond a kernel, init and shell.
That said you'll almost certainly want net-tools, netbase and ifupdown. Probably a dhcp client of some kind as well.
The rest is up for debate, and heavily influenced by your choice of desktop and the services you intend to run and/or connect to.
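As a starting point, something like (a sketch; isc-dhcp-client is just one common choice of DHCP client):

apt install net-tools netbase ifupdown isc-dhcp-client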