At a guess (haven't used XFCE in a while), thunar uses GVFS/FUSE user mounts for FTP or copies the file to a temporary directory, so there's a real filesystem path MPV understands. Dolphin (and the rest of KDE) uses KIO for transparent FTP, which only works with applications that understand KIO URIs.
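If that guess is right, the same trick works by hand: GVFS exposes its mounts through a FUSE bridge, so anything can open them as an ordinary path. A rough sketch, assuming gvfs-backends is installed and a hypothetical host called nas.local:

gio mount ftp://nas.local/
# the FUSE view lives under /run/user/<uid>/gvfs/, one directory per mount
mpv "/run/user/$(id -u)/gvfs/ftp:host=nas.local/video.mkv"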
With so little detail provided, a guess is the best you're likely to get.
The bigger question might be why you're using FTP to stream from a local server to begin with, when things like NFS and SMB (or even SSHFS) exist... Or if you must use FTP, why it's not set up as an FTPFS mount (curlftpfs was removed as of bookworm, reasons unknown).
FTP is designed for contiguous file transfers over the internet (or a dialup BBS), not the kind of random access one expects when directly accessing files (i.e. NAS-style) on a local fileserver.
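If the box also runs SSH, an SSHFS mount sidesteps the whole mess; a minimal sketch (user, host, and paths are placeholders):

sshfs user@fileserver:/srv/media /mnt/media
mpv /mnt/media/video.mkv
fusermount -u /mnt/media    # unmount when done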
Eventually this led me to use Ratpoison as my WM: a completely keyboard-driven tiling WM that has no overlapping windows (don't need 'em) and no need to take your hands off your keyboard.
Haven't used it in a while, but Ratpoison is almost certainly still awesome. Also, the post that started it all is hilarious (and kinda true IME).
Personally I have 2 modes of operation:
Screwing around, feet up on desk. One hand for beer, one for rodent, all the overlapping windows and flashy animation nonsense is fine.
Not screwing around, both hands on keyboard and rodent well out of the way. Usually that means CLI, in an old-school tty or a console window either fullscreen or tiled 50/50 with some other (usually a manual or another console) window.
Fortunately KDE can do both, though admittedly it's not as efficient as a dedicated tiling WM at the latter. Ratpoison would be my go-to if kwin ever loses its configurable keyboard shortcuts or stops doing custom tiling modes (which, at the rate KDE is stripping features for "modern" aesthetics, wouldn't surprise me).
Video Acceleration API is... A video acceleration API. It shouldn't have anything to do with sound. Nor should mesa... Unless of course there's something totally broken regarding the fallback to unaccelerated decoding, but I doubt it. Firefox error messages kinda suck though.
The Cubeb / OnMediaSinkAudioError lines are likely the relevant ones, I suggest a general websearch on those for a start. I don't have Debian / Devuan on any desktops right now to investigate myself.
What more does running with MOZ_LOG=cubeb:5 (or a log level of your choosing) have to say?
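For reference, something like this from a terminal will capture it (the log file path is just an example):

MOZ_LOG="cubeb:5" MOZ_LOG_FILE=/tmp/firefox-cubeb.log firefox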
Do you have any custom ALSA configuration (.asoundrc, /etc/asound.conf)?
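Even something as small as a default-card override changes how applications open the device; a hypothetical ~/.asoundrc might be no more than:

defaults.pcm.card 1
defaults.ctl.card 1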
Since nobody has mentioned it yet:
Firefox with ALSA may require exceptions to the browser sandbox to access sound devices, I don't know if Devuan sets this configuration by default but assuming the packages come from Debian I doubt it.
You'll probably need to visit about:config and set
security.sandbox.content.write_path_whitelist /dev/snd/
And if you get e.g.
Sandbox: seccomp sandbox violation: pid xxxx, tid xxxx, syscall 16
in a terminal when you try to play audio, you'll need
security.sandbox.content.syscall_whitelist 16
Note this refers to native ALSA support, I don't know if it applies also to apulse but I wouldn't be at all surprised if it does.
On that: Devuan packagers, please build firefox with native ALSA support. It's literally a single configure flag at compile time, and this apulse shenanigans is stupid and annoying.
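If memory serves it's just this one line in the mozconfig, though worth double-checking against the current build system:

ac_add_options --enable-alsa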
The first thing I was thinking about was a new version of Proxomitron. It simply died due to losing support.
If it hadn't, https-everywhere would have killed it anyway. You can't really have local proxies and SSL at the same time, at least not how modern browsers are designed and modern users are trained.
All it takes is for someone to notice that faceborg is suddenly signed by proxomitron instead of DigiCert Inc, and the "muh securitee" wailing starts. Users are of course properly groomed to run around with their hair on fire if they see any kind of browser "security warning", rather than reading details and making informed decisions.
All this is really just the same old war. Commercial entities want total control over how their sites (and apps, in the mobile space) display and how you interact with them, and that means locking down browser endpoints and preventing tampering with DNS and network traffic.
Pervasive use of obfuscated javascript where HTML would do.
Ever-expanding list of web "standards" and extensions that make it near-impossible for independent browsers to stay compatible.
HTTPS everywhere (even when no sensitive data is being exchanged).
DOH, and the "your ISP could spy on you, better trust google or cloudflare instead" non-argument.
Manifest v3 and the wider attack on content filters.
WEI (shot down for now, but it'll be back).
"SafetyNet" and other mobile OS attestation systems (which will come to PCs in time, we have "anti-cheat" rootkits and the like already).
All of this is touted as being for your security... It's not, it's for theirs. Google is of course the arch-villain here, because nobody wants to control what you see so much as a corporation funded by and founded on advertising.
IMO the only real solution is wholesale rejection of the current "web v2 (and supposed v3)" and its centralisation of power. It's corrupt beyond recovery at this point, and is only getting worse.
I'd love to see the likes of gnunet, gemini, and other decentralised solutions gain popularity, but so long as all the frogs want is online shopping and social media, we're kinda screwed. If the water only gets a little bit warmer with each new "improvement" it's fine, right?
I have (learned years ago) a monitor and keyboard attached to it.
.....
Remind yourself that, if full disc encryption in place, you "might" have to type your passphrase to get the system running.
Which is why real server motherboards come with a serial console that works right from POST and/or IPMI as standard.
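On commodity hardware you can at least get the bootloader and kernel (and with them the cryptsetup passphrase prompt) onto a serial port with something along these lines in /etc/default/grub; port and baud rate are examples, and it obviously won't cover POST/BIOS the way real IPMI does:

GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8"
# then run update-grub and reboot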
Squid doesn't have anything to do with DNS, it's a caching/filtering HTTP proxy. Such proxies aren't seen much anymore, because they don't really work on encrypted (i.e. HTTPS) connections.
Anything that does caching/inspection/filtering of HTTPS traffic needs to decrypt it, and that means installing a root cert on the client, which pretty much breaks the SSL trust model... Not that it stops corporate networks from doing exactly that, for "safety".
The browser has the SSL keys so it can do all this. That's kinda how most decent ad/content blockers work already, once you go beyond host lists and into selective element blocking (that's also part of why manifest v3 is bad, as it hobbles this ability to modify content before it is displayed or executed).
DNS is another matter altogether, and there's nothing stopping you running your own resolver (e.g. BIND) on your own network, with your own rules. Screwing around with cache expiry/TTL and locally resolved "whitelists" will likely just land you in a world of "my DNS is broken" hurt though.
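As a sketch of the "your own rules" part: blocking a domain in BIND is just a locally-authoritative empty zone (domain name is hypothetical; db.empty ships with Debian's bind9 package):

zone "ads.example.com" {
    type master;
    file "/etc/bind/db.empty";
};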
instantaneous performance
Hold this beer a moment, just need to land my pig, then I can totally prove that one.
simple hack to allow vlc to run as root to play background music while you run circles around people in online gaming, and make them look silly
Benchmarks or it didn't happen.
Generally the older kernels (i can't remember the versioning schemes, but basically the point upgrades stable distributions like devuan/debian use are actually faster
Okay then, why (technical explanation, not pseudo-jargon and hand-waving)... Also, benchmarks or it didn't happen.
after installing xfce, logging into it, the system is only using around 686-700 mb of ram, which is extremely impressive.
That's an extremely normal memory consumption for a fresh-boot minimal XFCE desktop.
This equates to faster system response time which is necessary for competitive gaming
Unless your system has some weird caching limitation (which AFAIK hasn't been a thing since L2 moved on-die in ~1999), you have severe memory fragmentation, or you are running into memory pressure, how much RAM the desktop uses is of no relevance to system latency.
Again, benchmarks please.
Using the root account for everything literally doubles my download speed, and system response time across the board. Everything is faster, more efficient, and super speedy.
...
this is essentially how using the root account is going to benefit all of the system operations compared to a regular user account. Everything will be twice as fast, including beating people up on online gaming
Sigh. Benchmarks or it didn't happen.
"Everything will be twice as fast" is an extraordinary claim, and extraordinary claims require extraordinary evidence.
This setup is literally like hyper-drive, file system operations are instantaneous and everything else is close behind instantaneous performance
Subjective performance. Where benchmarks?
it seemed to run startup programs out of sync with each other creating a lockup scenario, and failing to get to the desktop envrionment
Either you have found a bug (in which case you should report it, with all relevant technical information and logs), or this is just more hand-wavy pseudo-technical "explanations" for "I broke it because I don't know what I'm doing", like the last post... Guess which one I'm betting on.
...
Or perhaps it's asking too much for someone with "extensive experience, having tested and worked with basically 80% of all the linux distros" to exercise any scientific rigour whatsoever when doing performance tuning and writing "guides"?
“Are you for the computer or is it for you?”
Or, "Is this a hobby or a tool?". Either is a completly reasonable approach, and there are far worse things one could spend free time on than tinkering.
Most of my computers are workhorses, others are, well, more like pets. Same goes for my vehicles TBH, and electronic test equipment...
Well it is Motörhead, I mean that's kinda the point.
the channels mentaloutlaw and lukesmith on youtube.
If you want to find someone who is trustworthy for linux info, never trust a far right nutcase like him.
Except Luke actually includes technical, educational content in some of his videos, where the aforementioned Brodie has silly faces, hype, and clickbait.
I'll take a 'tuber with intense (but ultimately irrelevant) political leanings who knows what they're talking about over a politically "correct" one who doesn't any day of the week.
Why you 'muricans think your political infighting is everyone else's problem (and feel the need to inject it into Every. Single. Discussion.) I will never understand. I watch videos to learn something, not get outraged about how left/right/upside-down the presenter is.
This obsession with putting people in little "political affiliation" boxes and discarding information from outgroups is extremely tiring, and I'll have no part of it.
Using computers is like being in a relationship with someone else[snip]
Please, smash that ChatGPT prompt some more, we all love irrelevant nonsense.
TBH I generally find his channel to be more overexcited SNS clickbait than I'll tolerate, but this is the first time I've managed to trigger the "desktop linux" fanbois hard enough to get a post read on youtube... At least there's plenty of "discussion" in the comments, so win? I guess?
Colour of the error screen? Who cares.
That it's clearly copied from Windows? Cringe and worth a laugh, but ultimately irrelevant.
Replacing a full page (or more, while console scrollback was still a thing) of information with "the last error" and a QR code? Dumb. Just dumb. Shall we add a sad face emoticon as well?
Overlaying the GUI or dropping to a console? Sure, now do it while using the full screen for debug output and without the QR code nonsense.
Systemd contributors needing an explanation of "archaic" VT code and text interfaces before X/Wayland "takes over"... Deeply concerning. This is an init system and low-level utilities we're talking about here, anyone hacking on it should be well familiar with those things.
Sounds like a pretty comfy rock you have there
Looks like I made it on the 'tube, LOL. Needless to say, he largely misses the point.
I.e. in what way this fullscreen QR code with "the last error message" (i.e. 99% of the time not the whole story) is better than a kernel panic and/or stack trace, both of which we already have, beyond being supposedly "more accessible" for non-technical users (AKA the aforementioned dumbing-down of interfaces).
I don't entirely disagree with you, I just extend "Say no to GMO" with "until rigorous testing and effective regulation is in place".
Any new technology has pitfalls and dangers, and biology and ecology are particularly problematic due to the complexity of the interactions involved. That doesn't mean we shouldn't pursue it, just that it needs to be done slowly and carefully. In light of the track record concerning corporations funding their own "safety" studies, IMO that means considerable expansion of the FDA, or formation of a new agency entirely.
IOW, I'm not against GM in principle, rather reckless application of technology in general. If done properly, GM could be a game changer for a great many fields... If not, it's a disaster just waiting for a place to happen.
Much the same could be said for the current AI craze, or any number of other technologies. As ever, the fundamental problem is rampant under-regulated commercialisation of technologies that should still be confined to the lab.
We have been purchasing as many "organic" items as we can; however, how can we actually know those items are organic?
You can't. Like most product labelling standards, the requirements for putting "organic" on packaging are far too lax to put much trust in.
We saw it with "Fat free".
We saw it with "No added sugar".
We're seeing it all over again, ten times worse, with the plethora of "environmentally friendly" / "low emissions" etc. etc. greenwashing.
It's all lies, it's always been lies, the only real questions are "how big a lie can we get away with" and "will increased sales cover any potential fines".
There are of course ways around this, but I'm intentionally avoiding mention of alternatives that are largely unavailable to many people.
Not everyone has the option to grow their own, buy at a local market / farm gate or speciality organic store, or even know where their food comes from beyond what's on the label.
For many the options are even worse, as they simply can't afford to be choosy. The default option should of course be "reasonably safe and healthy", but often it's closer to "what's cheap today, I just need to eat".
That all said, IMO "organic" isn't really the whole answer anyway, at least not when it comes to feeding billions (doubly so with the looming impacts of climate change). What we need is responsible food production (and marketing / labelling).
Mechanised farming, synthetic fertilisers, and even pesticides, antibiotics and genetic engineering have massively increased our ability to produce high quality food, but as with anything there is plenty of room for abuse - especially in an environment where producers are constantly pressured on price (often with no recourse, as so much purchasing-power is concentrated in a few large corporations) and regulatory bodies are effectively toothless.
On genetic engineering in particular, we've been doing that (the slow way) for thousands of years. We now have faster and more powerful tools, so we need correspondingly faster and more powerful research and regulation.
If that means "stop doing [x] until we have more data", so be it, but knee-jerk blanket GM == bad is a short-sighted and regressive approach.
The FDA wins...
The FDA is supposed to be on your side in this, but is horribly ineffective due to conflicts-of-interest, political and economic pushback, a lack of meaningful enforcement options, and chronic over-reliance on a "safe until proven otherwise" attitude and manufacturer-provided safety studies.
If the FDA (and other regulatory bodies) had the power and will to block new products until independent research is completed and actually crack down on corporate misbehaviour (as opposed to slap-on-the-wrist fines), we wouldn't be in this situation to begin with.
SUGAR BLUES by William Dufty
That is a pretty interesting read, and I dare say the burying of inconvenient truths and defanging of regulation it details parallels organisations beyond the FDA as well. Corporate interests vs. public good is a war as old as civilisation.
Applying "innocent until proven guilty" to chemical engineering, the health of the population, and the behaviour of organisations that will do literally anything they can get away with to turn a profit is... Not particularly bright, but that's where we're at.
Profit, that's why.
Processed foods that are quickly digested leave you hungry again soon after = more profitable.
Sugar (or corn syrup) is incredibly cheap, and makes everything more palatable = more profitable.
Homogenous ingredients that are low in fibre (goto 1) are easier to process, and keep and freeze better = more profitable.
In general, if cheaper ingredients can be substituted and palatability recovered by adding sugar, fat, or flavourings... That's exactly what will be done.
I've worked around enough food production to see this in action, and it's usually a "slippery slope" process - one little recipe change, manager gets his bonus for cutting costs, repeat until your product is corn syrup and lard, with artificial flavour and colour.
There are no incentives to make "over-the-counter" goods better for you, only cheaper, tastier, and more addictive.
The "health food" market isn't much better, and has it's own brand of deceptive profiteering - usually leveraging public ignorance to inflate prices for things nobody actually needs.
Either way, outside of old-school "farmers market" type independents, honest food always loses to profitable food.
Regulators are primarily concerned with safety, and none of this bullshit will make anyone sick... at least in the short term.
The long-term impact nobody wants to tackle, because it's difficult and time-consuming to prove (and food megacorps will just buy some counter "studies" or tie things up with lawyers and lobbying anyway), and it boils down to a collective-good vs. corporate-profit fight, which nobody wants to start.
Welcome to capitalconsumerism, the shining land where everyone (scientists and regulators included) has their price, and we allow corporations to become more powerful than governments.
Am I a "Conspiracy Theorist"?
Only when you take garden-variety (pun not intended) stupidity and greed and dress it up as some super-secret plot to take over the world.
The problems with the world's food supply are old, pervasive, and readily apparent to anyone with a brain...
Also nowhere more obvious than in the USA, and other "Americanised" western countries where large food corporations have taken root.
That's not a conspiracy, it's just an observation.
While I agree it's a problem the truth is it still comes back to individual choices. Just because you live in the US doesn't mean your diet has to consist of Oreos, and pizza rolls. Quality food is available in the fresh produce section. You can also find quality meat, but you have to skim through the junk.
Indeed. The problem is that you're essentially fighting a psychological war with corporations far better resourced than you are, and for many, that's a fight they're not going to win.
How can we expect people to make good choices when that's made artificially difficult (yes, fresh produce is at the back for a reason, etc.) at every turn, and they are constantly bombarded with advertising telling them to do the opposite?
having worked with all of the main-stream linux distros,
and working with different configurations for years
Then I suppose you also have years' worth of benchmarks to back up your claims, right? Please, post them so we can all see exactly how much faster your "Hyper-Gamer" install is.
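For the record, a controlled and repeatable test is a one-liner; fio is right there in the repos, and the parameters below are purely illustrative, not a recommended workload:

fio --name=randread --filename=$HOME/fio.test --size=1G --rw=randread --bs=4k --runtime=60 --time_based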
using ROOT is a taboo subject
No, it's just a stupid idea to run a GUI as root. It's insecure, it will likely provoke permissions-related bugs because developers don't usually test this scenario, and it provides no benefit whatsoever (besides making it easier to trash your install, which seems to be what you're aiming for anyway).
improved access to system resources
For which you have still not provided any evidence whatsoever, or even presented a plausible mechanism.
based on my personal experience,
and the development of enhanced interaction between Linux,
computer hardware, computer programs, and computer games
are a daily work in progress.
Sounds like a long-winded way of saying "I have not properly tested any of this, but I want to sound like I know what I'm talking about" to me.
You can claim experience all you like, without proof all you are doing is blowing smoke.
if you have any suggestions related to improving the performance
of Linux, and gaming on Linux, by all means share them
Disable side-channel mitigations, extraneous background tasks, and power saving / clock modulation features (and/or consider installing and configuring tuned). Run a light DE or window manager and turn off compositing.
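For the tuned part, the gist is this (the profile name is just one of the stock latency-oriented ones; how the daemon gets started depends on your init setup):

apt install tuned
tuned-adm profile latency-performance
tuned-adm active    # confirm the profile took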
On an otherwise properly configured system, expect a couple of percentage points at best.
The end.
Oh, and refrain from making ridiculous claims without providing any data to back them up.
You do whatever you want to your own system, nobody cares. Advising others to engage in this nonsense is another matter entirely.
Not to put too fine a point on it: Post some (properly controlled) benchmarks or I call shenanigans.
Without data to back it up (with the notable exception of side-channel mitigations, which have a well-documented performance impact), this is nothing but pointless ricer rambling.
This in particular:
What happens when tweaking out a computer for hyper-performance, is that a ton of bugs will appear,
and programs will behave in ways that are totally bizarre, and unheard of to others. Most people,
sit around waiting for innumerable loading times, or intervals, instead of skipping past the red tape,
and taking full control of their computer, and the programs on it. And this is going to cause some errors,
but that is normal because generally most programs are designed to run at slow speeds, and not hyper-speed.
Is complete and utter hogwash.
Computers are deterministic systems, and applications haven't been written to rely on system performance for correct operation (e.g. delay loop timing) since the 1980s. If your "tweaks" are causing bugs or unintended behaviour then your system is grossly misconfigured, your compiler is producing broken code, or your hardware is unstable (i.e. overclocked).
Programs being "designed to run at slow speeds" is pure fantasy, running as root to "reduce delays" is questionable at best (again, benchmarks or baloney) and gratuitously dangerous advice at worst, and the rest is a lightly-mangled manual install guide with a garnish of hyperbole and a dash of blatant misinformation.
If you're going to advise people to ignore decades of security best-practice and run a GUI as root, at least provide some benchmarks so they can make an informed decision WRT the supposed benefits.
Which is a file system check, that runs automatically, based on some hidden parameters,
and which does not work on an f2fs drive preventing me from booting into the desktop environment at all.
The conditions which cause a fsck on boot are not hidden, but f2fs is prone to triggering a full (and extremely slow) fsck on kernel version changes. Its fsck is also arguably quite broken, and both of those are good reasons it's not a supported root filesystem in the installer.
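For reference, the relevant knobs are in plain sight: the sixth ("pass") field in /etc/fstab decides whether fsck runs at boot, and for ext filesystems tune2fs -l will show the mount-count / interval triggers. An /etc/fstab example (UUIDs are placeholders, and pass 0 means "never check" - not a suggestion):

UUID=xxxx-xxxx  /      f2fs  defaults  0  1
UUID=yyyy-yyyy  /home  ext4  defaults  0  2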
Forcibly skipping filesystem checks is a real good way to end up with a hosed filesystem, and comments like this do not improve the credibility of your "mini-guide" one bit...
Nor does:
I chose mbr, which is the first little 8mb partition I made initially.
Because the MBR is not a partition and can never be one, so this is misleading bollocks. What you are presumably referring to is reserving space for GRUB at the start of the disk, and 8MB for that is ridiculous.
FWIW, I run Gentoo as a daily-driver, and have benchmarked things like LTO and some (not completely stupid) experimental compiler optimisations fairly comprehensively. I too can make an unstable system with weird bugs and bizarre behaviour... That doesn't make it any faster under real-world workloads, only more broken.
Even the most rabid -fbroken-math -funroll-brain -fomg-speed gentoo ricers admit that objective gains are minimal for anything but very specific (usually scientific data analysis or synthetic benchmark) workloads, and that's going deep into the weeds at the compiler level, not just futzing about with "gaming" kernels and unfsckable filesystems on a binary distro.
About the only useful advice here is 'mitigations=off' (which should also have a warning WRT the security implications), and the use of lazytime over noatime, which is not widely known but will actually reduce disk access.
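For anyone who wants those two without the rest of the circus, it's roughly this (UUID and surrounding options are examples; understand the security trade-off of mitigations=off first):

# /etc/default/grub, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
# /etc/fstab
UUID=xxxx-xxxx  /  ext4  defaults,lazytime,errors=remount-ro  0  1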
For the rest:
Where is your proof that f2fs is faster than ext4 or xfs?
Where is your proof that running as root improves latency?
Where is your proof that the xanmod kernel improves game performance?
And last but not least, what is:
Systemctl tweaks
doing in a "guide" for a distribution that doesn't package systemd or systemctl to begin with?
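For completeness, the sysvinit-flavoured equivalents on Devuan would be along these lines (service name is just an example):

service bluetooth status          # instead of: systemctl status bluetooth
update-rc.d bluetooth disable     # instead of: systemctl disable bluetooth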
Let me put it this way: When I upgrade my desktop (hardware), I build the new machine, swap the drive(s) in from the old box, and just boot it up.
99% of the drivers you will need are already present in the kernel / modules, it's just a question of autodetection, and that's pretty good IME.
Last time I wanted Devuan on a new-old machine, I actually just grabbed a backup of the whole disk from another completely different machine... Which worked without issue, and saved a bunch of time.
I know elogind is not systemD but comes to fill a gap systemD made.
Elogind is a part of systemd, forked and split out into a standalone daemon. Nothing more, nothing less, and nothing at all to do with "gaps".
After getting acclimated to all the Wind'ohs rigmarole this is confusing.
The windows nvidia drivers also support multiple generations, at least technically. In practice windows keeps a record of hardware IDs and the associated driver, which causes update/reinstall rigmarole ("found new hardware wizard" idiot spam, anyone?) if anything changes.
Linux OTOH just loads the drivers you (or udev rules) tell it to, and each driver probes for supported hardware. Dumb, but 99% of the time it just works (and makes things like livecds and moving installs to new hardware much easier).
The disadvantage is that if you want different drivers for two or more compatible devices, you need to set that up manually. Pretty rare thing in practice though.
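Where it does come up it's usually a one-file fix, e.g. keeping the in-kernel driver off hardware you want the proprietary one to grab (module names below are just the classic nvidia/nouveau example):

# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
# then update-initramfs -u so the blacklist applies at boot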
fsmithred wrote:we got to the fifth post and nobody has yet asked why network-manager got removed?
If the first post had been an attempt at a bug report, I'm sure more would have been done to encourage diagnosis and sending of the relevant information to bugs.devuan.org
But since it was essentially just a rant about dropped balls and low bars, I'd say the responses were just right.
Indeed.
Had the OP phrased the post as a bug report or request for help, help they likely would have received. Whining about how terrible the distro is because an update on unstable "Removed internet access" gets... Some ROFL at best.
I'll tolerate plenty of complaints if a routine update to a stable release causes borkage (particularly if the post includes technical details, as opposed to "$gui_thing got broke because I didn't read apt output"), hell, sometimes I'll even join in...
But unstable and testing are subject to this kind of dependency screwiness at any time, and that's kinda the point of having unstable and testing branches to begin with.
i.e.
One thing that I learned many years ago when running unstable...
Look at the terminal output before agreeing to an upgrade. If it's going to remove something that you consider detrimental to your system, tell it n (no). You can then upgrade the other packages individually that aren't part of the packages that will/would be removed.
Under normal circumstances, it will be fixed within a few days. Other times, it may take a week or two.
This^ The answer is: fix it yourself (and/or file a bug report), or wait.
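Or, if you'd rather not babysit every upgrade, park the problem package until the dust settles (using the package from this thread as the example):

apt-mark hold network-manager
apt upgrade
apt-mark unhold network-manager    # once the archive has sorted itself out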