Not very much... At least in theory. Primarily the ability to have /usr be a separate partition (or network mount) not required to boot the system, which hasn't been a common configuration for several decades now.
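For the unfamiliar: the merge boils down to the top-level binary and library directories becoming symlinks into /usr. On an already-merged system you'd see something like this (illustrative output, dates and sizes will obviously differ):
$ ls -ld /bin /sbin /lib
lrwxrwxrwx 1 root root 7 Jan  1  2023 /bin -> usr/bin
lrwxrwxrwx 1 root root 8 Jan  1  2023 /sbin -> usr/sbin
lrwxrwxrwx 1 root root 7 Jan  1  2023 /lib -> usr/lib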
In practice, it's a bunch of needless disruption and a whole lot of packaging and testing work to ensure there are no file collisions or other jank, for the sake of some RedHat "improvements" and the usual "systemd doesn't support that, so let's make sure nobody else can either".
Personally I'm going with "IDRGAF unless it breaks my system, but I'll switch when I'm good and ready and I'll be proper annoyed if anyone tries to force it down my throat."
For the record, that doesn't mean I don't think it's stupid. It's just the kind of stupid that (probably) won't set anything on fire if handled carefully.
Standard-issue plug for Gentoo's "Unavoidable if you're running systemd, otherwise here's a profile option and a migration script, do whatever you like with them" approach here.
Pretty much every bit of the highly visible "progress" WRT "desktop" GNU/Linux (and OSS/FOSS in general) these days is to please:
a: RedHat IBM
b: IBM's customers
c: Commercial software vendors
d: Users who care more about running commercial software than software freedom
Hence the ongoing campaign to ~~standardise~~ enshittify or replace core components (under weak licences of course), reduce distro ~~fragmentation~~ diversity, aggressively break compatibility with ~~deprecated~~ perfectly serviceable but agenda-inconvenient software and systems, simplify packaging of closed-source ~~software~~ spyware and other garbage, and hide the system behind inscrutable abstraction layers and "user friendly" UIs so ~~windows refugees feel more comfortable~~ IBM can sell more support contracts.
I still don't understand what specific issue OP is facing
You, me, and probably everyone else as well.
With all the hyperbole and shouting, I'd be surprised if anyone has the slightest clue what the OP's actual problem is... Some ill-defined hatred of 64bit (20-ish years too late)? Generalised railing against the inevitability of Andy and Bill's law?
Originally it sounded like some problem with xfce, but now? At least the thread is a mildly entertaining read. vOv
On gentoo (not that that makes much difference):
$ git clone --recurse-submodules https://github.com/transmission/transmission Transmission
---snip---
$ cd Transmission
$ cmake -B build -DCMAKE_BUILD_TYPE=Release
---snip---
$ cd build
$ time cmake --build . -- -j 20
---snip---
[ 88%] Built target transmission-gtk
---snip---
[100%] Built target transmission-qt
real 1m10.308s
user 18m8.638s
sys 0m48.282s
5 commands, less than 2 minutes, no additional packages required, and no issues whatsoever to report.
Transmission is a perfectly ordinary cmake project, and building it is exactly as easy as it might seem... Provided, as the OP has discovered, you actually have the complete source tree to begin with.
If I was motivated enough (and had a debian/devuan desktop handy) I could likely build a Debian package almost as easily, but I'm still waiting to hear what's so great about this shiny new version.
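For the record, the usual rebuild dance would look roughly like this (sketch only; assumes 4.x packaging actually exists somewhere to pull from, e.g. a newer suite or the packaging repo):
$ apt source transmission    # or clone the packaging repo
$ cd transmission-*/
$ sudo apt build-dep .
$ dpkg-buildpackage -us -uc -b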
3.x works just fine for my (headless server + webui) needs, and I see no reason to disturb a working system for the sake of "newer always better".
OTOH, if we keep up the "I don't know the answer, so just use appimage" and "download from official website" (AKA "I have 'ex-windows user' tattooed on my forehead") type nonsense, this place can be as devoid of useful technical information as FDN in no time.
Wouldn't it be easier to download qbittorrent appimage
While it might be easier, I really have no idea what qbittorrent or appimages have to do with compiling transmission.
How is this even remotely relevant or helpful?
@jemadux: You are extremely unlikely to find all the third-party sources (and correct versions) transmission needs as Debian/Devuan packages, at least not for a bleeding-edge version that isn't in any repository.
Closer investigation would suggest that only the "Source code" tarballs are incomplete (likely autogenerated), and the .tar.xz archive (i.e. this) is in fact what you want. Or just pull from github directly as I already suggested.
Also, less gratuitous full-quoting would be nice.
and there's no log file
Why would there be a log file, when the cmake output already makes the problem blindingly obvious?
-- Could NOT find DHT (missing: DHT_LIBRARY DHT_INCLUDE_DIR)
CMake Error at /usr/share/cmake-3.25/Modules/ExternalProject.cmake:3115 (message):
No download info given for 'dht' and its source directory:
/home/jemadux/projects/source/transmission-4.0.5/third-party/dht
is not an existing non-empty directory.
Have you looked at the directory structure mentioned? Does it contain sources for the DHT library?
Presumably you are trying to compile from the release tarball... Which for reasons unknown does not appear to include any of the required third-party sources.
Those you will likely need to retrieve via git, either manually or by cloning the project with --recurse-submodules as per the "Building Transmission from Git" instructions.
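Something like the following should do it (the second form assumes you're in an existing git checkout with empty third-party/ directories, not an unpacked tarball):
$ git clone --recurse-submodules https://github.com/transmission/transmission
$ # or, from inside an existing clone:
$ git submodule update --init --recursive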
Or, as already suggested, you could simply forgo the SNS and use a stable, tested release from the repositories. What "must have" feature does 4.0.5 include anyway?
Apt has been able to handle local files on the command line for some time now, dependency resolution and all.
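E.g. (filename hypothetical; note the ./ prefix, which tells apt it's a local path rather than a package name):
$ sudo apt install ./transmission-gtk_4.0.5-1_amd64.deb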
The problem at hand (after fixing missing sources) however has nothing to do with apt, Debian, or Devuan, and everything to do with poor quality packaging of commercial software... As usual.
At a guess (haven't used XFCE in a while), thunar uses GVFS/FUSE user mounts for FTP or copies the file to a temporary directory, so there's a real filesystem path MPV understands. Dolphin (and the rest of KDE) uses KIO for transparent FTP, which only works with applications that understand KIO URIs.
With so little detail provided, a guess is the best you're likely to get.
The bigger question might be why you're using FTP to stream from a local server to begin with, when things like NFS and SMB (or even SSHFS) exist... Or, if you must use FTP, why it's not set up as an FTPFS mount (curlftpfs removed as of bookworm, reasons unknown).
FTP is designed for contiguous file transfers over the internet (or dialup BBS), not the kind of random access one expects when directly accessing files (i.e. NAS-style) on a local fileserver.
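For illustration (hostname and paths hypothetical), either of these gives ordinary applications a real filesystem path to work with:
$ gio mount ftp://server.local/
$ ls /run/user/$(id -u)/gvfs/    # GVFS exposes the mount here via FUSE
$ curlftpfs ftp://user:pass@server.local/ /mnt/ftp    # explicit FUSE mount, where still packaged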
Eventually this led me to use Ratpoison as my WM: a completely keyboard-driven tiling WM that has no overlapping windows (don't need 'em) and no need to take your hands off your keyboard.
Haven't used it in a while, but Ratpoison is almost certainly still awesome. Also, the post that started it all is hilarious (and kinda true IME).
Personally I have 2 modes of operation:
Screwing around, feet up on desk. One hand for beer, one for rodent, all the overlapping windows and flashy animation nonsense is fine.
Not screwing around, both hands on keyboard and rodent well out of the way. Usually that means CLI, in an old-school tty or a console window either fullscreen or tiled 50/50 with some other (usually a manual or another console) window.
Fortunately KDE can do both, though admittedly it's not as efficient as a dedicated tiling WM at the latter. Ratpoison would be my go-to if kwin ever lost its configurable keyboard shortcuts or custom tiling modes (which, at the rate KDE is stripping features for "modern" aesthetics, wouldn't surprise me).
Video Acceleration API is... A video acceleration API. It shouldn't have anything to do with sound. Nor should mesa... Unless of course there's something totally broken regarding the fallback to unaccelerated decoding, but I doubt it. Firefox error messages kinda suck though.
The Cubeb / OnMediaSinkAudioError lines are likely the relevant ones; I suggest a general websearch on those for a start. I don't have Debian / Devuan on any desktops right now to investigate myself.
What more does running with MOZ_LOG=cubeb:5 (or a log level of your choosing) have to say?
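I.e. something along these lines (MOZ_LOG_FILE is optional, otherwise the output lands on stderr):
$ MOZ_LOG="cubeb:5" MOZ_LOG_FILE=/tmp/cubeb.log firefox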
Do you have any custom ALSA configuration (.asoundrc, /etc/asound.conf)?
Since nobody has mentioned it yet:
Firefox with ALSA may require exceptions to the browser sandbox to access sound devices, I don't know if Devuan sets this configuration by default but assuming the packages come from Debian I doubt it.
You'll probably need to visit about:config and set
security.sandbox.content.write_path_whitelist    /dev/snd/
And if you get e.g.
Sandbox: seccomp sandbox violation: pid xxxx, tid xxxx, syscall 16
in a terminal when you try to play audio, you'll need
security.sandbox.content.syscall_whitelist    16
Note this refers to native ALSA support; I don't know if it also applies to apulse, but I wouldn't be at all surprised if it does.
On that: Devuan packagers, please build firefox with native ALSA support. It's literally as simple as a configure option at compile time, and the apulse shenanigans are stupid and annoying.
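From memory it's just this in the mozconfig (untested against current Debian/Devuan packaging, so treat as a sketch):
ac_add_options --enable-alsa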
The first thing I was thinking about was a new version of Proxomitron. It simply died due to losing support.
If it hadn't, https-everywhere would have killed it anyway. You can't really have local proxies and SSL at the same time, at least not how modern browsers are designed and modern users are trained.
All it takes is for someone to notice that faceborg is suddenly signed by proxomitron instead of DigiCert Inc, and the "muh securitee" wailing starts. Users are of course properly groomed to run around with their hair on fire if they see any kind of browser "security warning", rather than reading details and making informed decisions.
All this is really just the same old war. Commercial entities want total control over how their sites (and apps, in the mobile space) display and how you interact with them, and that means locking down browser endpoints and preventing tampering with DNS and network traffic.
Pervasive use of obfuscated javascript where HTML would do.
Ever-expanding list of web "standards" and extensions that make it near-impossible for independent browsers to stay compatible.
HTTPS everywhere (even when no sensitive data is being exchanged).
DOH, and the "your ISP could spy on you, better trust google or cloudflare instead" non-argument.
Manifest v3 and the wider attack on content filters.
WEI (shot down for now, but it'll be back).
"SafetyNet" and other mobile OS attestation systems (which will come to PCs in time, we have "anti-cheat" rootkits and the like already).
All of this is touted as being for your security... It's not, it's for theirs. Google is of course the arch-villain here, because nobody wants to control what you see so much as a corporation funded by and founded on advertising.
IMO the only real solution is wholesale rejection of the current "web v2 (and supposed v3)" and its centralisation of power. It's corrupt beyond recovery at this point, and is only getting worse.
I'd love to see the likes of gnunet, gemini, and other decentralised solutions gain popularity, but so long as all the frogs want is online shopping and social media, we're kinda screwed. If the water only gets a little bit warmer with each new "improvement" it's fine, right?
I have (learned years ago) a monitor and keyboard attached to it.
.....
Remind yourself that, if full disc encryption in place, you "might" have to type your passphrase to get the system running.
Which is why real server motherboards come with a serial console that works right from POST and/or IPMI as standard.
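For the GRUB-on-PC-hardware case, something along these lines puts the bootloader and kernel output (including the initramfs passphrase prompt) on the first serial port (sketch; adjust unit and speed to your hardware):
# /etc/default/grub excerpt
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
$ sudo update-grub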
Squid doesn't have anything to do with DNS, it's a caching/filtering HTTP proxy. Such proxies aren't seen much anymore, because they don't really work on encrypted (i.e. HTTPS) connections.
Anything that does caching/inspection/filtering of HTTPS traffic needs to decrypt it, and that means installing a root cert on the client, which pretty much breaks the SSL trust model... Not that it stops corporate networks from doing exactly that, for "safety".
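For illustration, the client-side half of that dance looks something like this (names hypothetical; the proxy then uses the CA key to mint a certificate for every site it intercepts):
$ openssl req -new -x509 -days 365 -nodes -newkey rsa:2048 \
      -keyout proxyCA.key -out proxyCA.crt -subj "/CN=Example Intercepting CA"
$ sudo cp proxyCA.crt /usr/local/share/ca-certificates/
$ sudo update-ca-certificates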
The browser has the SSL keys so it can do all this. That's kinda how most decent ad/content blockers work already, once you go beyond host lists and into selective element blocking (that's also part of why manifest v3 is bad, as it hobbles this ability to modify content before it is displayed or executed).
DNS is another matter altogether, and there's nothing stopping you running your own resolver (e.g. BIND) on your own network, with your own rules. Screwing around with cache expiry/TTL and locally resolved "whitelists" will likely just land you in a world of "my DNS is broken" hurt though.
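If you do go down that road, BIND's response-policy zones are the sane mechanism for local overrides; a minimal sketch (zone name arbitrary, CNAME . means "answer NXDOMAIN"):
// named.conf excerpt
options { response-policy { zone "rpz.local"; }; };
zone "rpz.local" { type master; file "/etc/bind/db.rpz.local"; };

; /etc/bind/db.rpz.local
$TTL 60
@                  SOA localhost. root.localhost. (1 12h 15m 3w 2h)
                   NS  localhost.
ads.example.com    CNAME .
*.ads.example.com  CNAME .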
instantaneous performance
Hold this beer a moment, just need to land my pig, then I can totally prove that one.
simple hack to allow vlc to run as root to play background music while you run circles around people in online gaming, and make them look silly
Benchmarks or it didn't happen.
Generally the older kernels (i can't remember the versioning schemes, but basically the point upgrades stable distributions like devuan/debian use are actually faster
Okay then, why (technical explanation, not pseudo-jargon and hand-waving)... Also, benchmarks or it didn't happen.
after installing xfce, logging into it, the system is only using around 686-700 mb of ram, which is extremely impressive.
That's an extremely normal memory consumption for a fresh-boot minimal XFCE desktop.
This equates to faster system response time which is necessary for competitive gaming
Unless your system has some weird caching limitation (which AFAIK hasn't been a thing since L2 moved on-die in ~1999), you have severe memory fragmentation, or you are running into memory pressure, how much RAM the desktop uses is of no relevance to system latency.
Again, benchmarks please.
Using the root account for everything literally doubles my download speed, and system response time across the board. Everything is faster, more efficient, and super speedy.
...
this is essentially how using the root account is going to benefit all of the system operations compared to a regular user account. Everything will be twice as fast, including beating people up on online gaming
Sigh. Benchmarks or it didn't happen.
"Everything will be twice as fast" is an extraordinary claim, and extraordinary claims require extraordinary evidence.
This setup is literally like hyper-drive, file system operations are instantaneous and everything else is close behind instantaneous performance
Subjective performance. Where benchmarks?
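Not that it would be hard; hyperfine is right there in the repos. Run something like this as root and as a normal user, then compare (sketch only):
$ mkdir -p /tmp/bench
$ hyperfine --warmup 2 'dd if=/dev/zero of=/tmp/bench/f bs=1M count=512 conv=fsync'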
it seemed to run startup programs out of sync with each other creating a lockup scenario, and failing to get to the desktop envrionment
Either you have found a bug (in which case you should report it, with all relevant technical information and logs), or this is just more hand-wavy pseudo-technical "explanations" for "I broke it because I don't know what I'm doing", like the last post... Guess which one I'm betting on.
...
Or perhaps it's asking too much for someone with "extensive experience, having tested and worked with basically 80% of all the linux distros" to exercise any scientific rigour whatsoever when doing performance tuning and writing "guides"?
“Are you for the computer or is it for you?”
Or, "Is this a hobby or a tool?". Either is a completly reasonable approach, and there are far worse things one could spend free time on than tinkering.
Most of my computers are workhorses, others are, well, more like pets. Same goes for my vehicles TBH, and electronic test equipment...
Well it is Motörhead, I mean that's kinda the point.
the channels mentaloutlaw and lukesmith on youtube.
If you want to find someone who is trustworthy for linux info, never trust a far right nutcase like him.
Except Luke actually includes technical, educational content in some of his videos, where the aforementioned Brodie has silly faces, hype, and clickbait.
I'll take a 'tuber with intense (but ultimately irrelevant) political leanings who knows what they're talking about over a politically "correct" one who doesn't any day of the week.
Why you 'muricans think your political infighting is everyone else's problem (and feel the need to inject it into Every. Single. Discussion.) I will never understand. I watch videos to learn something, not get outraged about how left/right/upside-down the presenter is.
This obsession with putting people in little "political affiliation" boxes and discarding information from outgroups is extremely tiring, and I'll have no part of it.
Using computers is like being in a relationship with someone else[snip]
Please, smash that ChatGPT prompt some more, we all love irrelevant nonsense.
TBH I generally find his channel to be more overexcited SNS clickbait than I'll tolerate, but this is the first time I've managed to trigger the "desktop linux" fanbois hard enough to get a post read on youtube... At least there's plenty of "discussion" in the comments, so win? I guess?
Colour of the error screen? Who cares.
That it's clearly copied from Windows? Cringe and worth a laugh, but ultimately irrelevant.
Replacing a full page (or more, while console scrollback was still a thing) of information with "the last error" and a QR code? Dumb. Just dumb. Shall we add a sad face emoticon as well?
Overlaying the GUI or dropping to a console? Sure, now do it while using the full screen for debug output and without the QR code nonsense.
Systemd contributors needing an explanation of "archaic" VT code and text interfaces before X/Wayland "takes over"... Deeply concerning. This is an init system and low-level utilities we're talking about here, anyone hacking on it should be well familiar with those things.
Sounds like a pretty comfy rock you have there
Looks like I made it on the 'tube, LOL. Needless to say, he largely misses the point.
I.e. in what way this fullscreen QR code with "the last error message" (i.e. 99% of the time not the whole story) is better than a kernel panic and/or stack trace, both of which we already have, beyond being supposedly "more accessible" for non technical users. (AKA the aforementioned dumbing-down of interfaces).
I don't entirely disagree with you, I just extend "Say no to GMO" with "until rigorous testing and effective regulation is in place".
Any new technology has pitfalls and dangers, and biology and ecology are particularly problematic due to the complexity of the interactions involved. That doesn't mean we shouldn't pursue it, just that it needs to be done slowly and carefully. In light of the track record concerning corporations funding their own "safety" studies, IMO that means considerable expansion of the FDA, or formation of a new agency entirely.
IOW, I'm not against GM in principle, rather reckless application of technology in general. If done properly, GM could be a game changer for a great many fields... If not, it's a disaster just waiting for a place to happen.
Much the same could be said for the current AI craze, or any number of other technologies. As ever, the fundamental problem is rampant under-regulated commercialisation of technologies that should still be confined to the lab.
We have been purchasing as many "organic" items as we can; however, how can we actually know those items are organic?
You can't. Like most product labelling standards, the requirements for putting "organic" on packaging are far too lax to put much trust in.
We saw it with "Fat free".
We saw it with "No added sugar".
We're seeing it all over again, ten times worse, with the plethora of "environmentally friendly" / "low emissions" etc. etc. greenwashing.
It's all lies, it's always been lies, the only real questions are "how big a lie can we get away with" and "will increased sales cover any potential fines".
There are of course ways around this, but I'm intentionally avoiding mention of alternatives that are largely unavailable to many people.
Not everyone has the option to grow their own, buy at a local market / farm gate or speciality organic store, or even know where their food comes from beyond what's on the label.
For many the options are even worse, as they simply can't afford to be choosy. The default option should of course be "reasonably safe and healthy", but often it's closer to "what's cheap today, I just need to eat".
That all said, IMO "organic" isn't really the whole answer anyway, at least not when it comes to feeding billions (doubly so with the looming impacts of climate change). What we need is responsible food production (and marketing / labelling).
Mechanised farming, synthetic fertilisers, and even pesticides, antibiotics and genetic engineering have massively increased our ability to produce high quality food, but as with anything there is plenty of room for abuse - especially in an environment where producers are constantly pressured on price (often with no recourse, as so much purchasing-power is concentrated in a few large corporations) and regulatory bodies are effectively toothless.
On genetic engineering in particular, we've been doing that (the slow way) for thousands of years. We now have faster and more powerful tools, so we need correspondingly faster and more powerful research and regulation.
If that means "stop doing [x] until we have more data", so be it, but knee-jerk blanket GM == bad is a short-sighted and regressive approach.
The FDA wins...
The FDA is supposed to be on your side in this, but is horribly ineffective due to conflicts-of-interest, political and economic pushback, a lack of meaningful enforcement options, and chronic over-reliance on a "safe until proven otherwise" attitude and manufacturer-provided safety studies.
If the FDA (and other regulatory bodies) had the power and will to block new products until independent research is completed and actually crack down on corporate misbehaviour (as opposed to slap-on-the-wrist fines), we wouldn't be in this situation to begin with.
SUGAR BLUES by William Dufty
That is a pretty interesting read, and I dare say the burying of inconvenient truths and defanging of regulation it details parallels organisations beyond the FDA as well. Corporate interests vs. public good is a war as old as civilisation.
Applying "innocent until proven guilty" to chemical engineering, the health of the population, and the behaviour of organisations that will do literally anything they can get away with to turn a profit is... Not particularly bright, but that's where we're at.