The Microsoft/CrowdStrike outage is a prime example of this. A push update went wrong, and servers everywhere became unavailable. So when your data exists only in the cloud, good luck: it might as well not exist.
What has this got to do with Devuan? This outage doesn't affect Linux, thankfully. But the systemd crowd is headed in the same direction: push updates (e.g. Ubuntu) and centralization on upstream servers.
Don't be fooled, kids. "Cloud" computing isn't the panacea people are making it out to be. We need to be very careful what we force users to depend on. Anything that requires internet access for something that can be done equally well on the local machine is suspect. Anything that allows remote servers to push updates without the user's consent is also suspect. (Just imagine what would've happened if the CrowdStrike outage had been malicious, i.e., malicious code pushed to all these servers. Worldwide chaos would ensue.)
Offline
Never liked the idea of The Cloud, nor pushed updates, & this outage just proves my point...
Online
Just imagine what would've happened if the CrowdStrike outage had been malicious, i.e., malicious code pushed to all these servers. Worldwide chaos would ensue.
If it can happen, it will happen sooner or later.
Offline
Never liked the idea of The Cloud, nor pushed updates, & this outage just proves my point...
Yep, me too. Been skeptical of it since day one. In fact, one of my prime reasons to ditch the Windoze world and go all-out Linux was to get away from the MS Big Brother mentality that says "trust us, we're the good guys, we'll manage your system for you, we'll handle your data for you, we'll run your services for you". It would be one thing if they could actually fulfill that promise. Their track record, however, says otherwise. Besides, the problem is that they become a single point of failure.
Same philosophy goes behind the whole cloud hype: "trust company XYZ, their job is to handle the network, you don't have to do it yourself, they can do it better than you, leave it to them". Maybe they can, but the problem? Single point of failure.
Same philosophy with mail servers that store all your messages on the network. Problem? Single point of failure. When the server is down, your messages are inaccessible. If the server gets compromised, or if the provider has ulterior motives, your data is at their disposal. I much prefer downloading my messages to my personal machine, thank you very much, rather than keeping them on some publicly-accessible server out there, where it's only a matter of time before somebody breaks in and accesses data they aren't supposed to.
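To make that concrete, here's a minimal sketch of the "download your mail and own it" workflow, using Python's stdlib poplib. The host and credentials are placeholders, obviously; substitute your own provider's POP3 server:

    # Fetch all messages from a POP3 mailbox to local .eml files,
    # then delete them from the server. Placeholder credentials.
    import poplib
    from pathlib import Path

    HOST = "pop.example.com"          # hypothetical provider
    USER, PASSWORD = "me", "secret"   # placeholders

    box = poplib.POP3_SSL(HOST)
    box.user(USER)
    box.pass_(PASSWORD)

    outdir = Path("mail")
    outdir.mkdir(exist_ok=True)
    count, _size = box.stat()                 # (message count, mailbox size)
    for i in range(1, count + 1):
        _resp, lines, _octets = box.retr(i)   # message i as raw byte lines
        (outdir / f"{i:05}.eml").write_bytes(b"\n".join(lines))
        box.dele(i)                           # mark for deletion on the server
    box.quit()                                # deletions commit on QUIT

Once the messages sit on my own disk, the server can burn down for all I care.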
Same philosophy with source code that has remote dependencies. When the internet connection goes down, you can't compile your program anymore. Or the remote server decides to go offline for a vacation. How does that even make any sense?! You're entrusting the buildability of your program to the competence of some anonymous stranger somewhere out in the wild west of the internet. How is that a reasonable solution at all?!
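The boring defensive fix is to vendor your dependencies locally so the build never has to touch the network. A sketch of what that might look like in Python land, assuming a ./vendor directory was populated earlier (e.g. with "pip download -d vendor -r requirements.txt") while you still had connectivity:

    # Install everything from a local wheelhouse, never from a remote index.
    # Assumes ./vendor was populated beforehand with:
    #   pip download -d vendor -r requirements.txt
    import subprocess
    import sys

    result = subprocess.run([
        sys.executable, "-m", "pip", "install",
        "--no-index",               # forbid contacting PyPI or any mirror
        "--find-links", "vendor",   # resolve packages from ./vendor instead
        "-r", "requirements.txt",
    ])
    sys.exit(result.returncode)

If that command fails, you know your build depends on some stranger's server; if it succeeds, your program builds on a desert island.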
...
Now it seems certain personae in the Linux world want to go back to this same philosophy: the MS, centralized, bureaucratic model. Systemd is one of the symptoms of this attitude. "Trust us, we're the good guys, there's nothing wrong with systemd having massive scope creep and doing way too many things that should have been left to other applications." Push updates are another symptom, one that the Ubuntu folks can't seem to wait to force down our collective throats. Again, single point of failure. If the push server for any reason gets compromised, you're looking at the entire flaming customer base being at the disposal of the attacker. That's way too much power to centralize in one entity. Too attractive a target for attackers. Too big a consequence in the case of failure. And again: single point of failure.
Sometimes it really makes one wonder, when will people finally get the point??!
Last edited by quickfur (2024-07-19 19:00:54)
Offline
If it can happen, it will happen sooner or later.
Best case scenario: the entire world crashes, outage everywhere, and people get a nasty wakeup call.
Worst case scenario: the attackers install a backdoor across the entire world, and nobody notices. Five years later, long after it's far too late, people suddenly realize that all their data has been compromised, and has been for years.
...
Wait, the best case scenario already happened this morning.
Next time, it will probably be the worst case scenario.
Maybe it has already happened, and we just haven't noticed yet!
Offline
Yep, me too. Been skeptical of it since day one. In fact, one of my prime reasons to ditch the Windoze world and go all-out Linux . . .
Me too, not long after Y2K and the first announcements that windoze was going to the cloud . . .
Offline
"Cloud" computing isn't the panacea people are making it out to be.
cloud = someone else's drive
Brings to mind the Kim Dotcom raid when so many legitimate users around the world lost their life's work. Don't know how many bought into that hype and currently use big tech's drives.
Wonder how many cloud users, and those update devs, see the relevance of Franklin's quote on safety.
Offline
cloud = someone else's drive
Exactly!! For certain things, it might make sense (e.g., publish a folder of files to a remote webserver for a website so that it can be served to customers -- distribute it across multiple servers to reduce server-to-customer roundtrip times).
For other things, it totally does not make any sense. Like having system-critical functions depend on connectivity to the cloud. Like part of your OS doesn't exist on the local drive and must be downloaded before your system is usable. Or backing up personal data "to the cloud", usually with weak or non-existent encryption. Or encryption where the provider holds the keys (wow, really? you'd trust your private data to someone who can access it whenever they want without you knowing?). Or having push updates that update your entire clientele's worth of computers only to turn out faulty, causing your entire clientele to go down in flames.
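If you absolutely must park personal data on somebody else's drive, at least encrypt it yourself before it leaves your machine, with a key that stays on your machine. A minimal sketch using the third-party Python cryptography package (the file names are placeholders):

    # Encrypt locally before upload; the provider only ever sees ciphertext.
    # Requires the third-party "cryptography" package.
    from cryptography.fernet import Fernet
    from pathlib import Path

    key_file = Path("backup.key")
    if not key_file.exists():
        key_file.write_bytes(Fernet.generate_key())  # key stays on YOUR disk

    f = Fernet(key_file.read_bytes())
    plaintext = Path("documents.tar").read_bytes()   # placeholder archive
    Path("documents.tar.enc").write_bytes(f.encrypt(plaintext))
    # Upload documents.tar.enc wherever you like; without backup.key it's
    # just noise to the provider (and to anyone who breaks into them).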
Now multiply such a failure by 100,000 or 1,000,000 customers, all depending on the same 3 companies to do their jobs, and you're talking about 1/3 of the world's internet infrastructure going down when one of these companies makes a boo-boo. Or worse yet, these 3 companies depend on each other's services in order to work. (Remember the AWS outage? IIRC, MS was also affected. So apparently MS depends on AWS, which probably in turn depends on MS. When either of them fails, both automatically fail. Just great.)
I know somebody who used to work in the commercial aircraft design industry, where, through the painful lessons of actual airplane crashes and losses of life, they learned to design their aircraft without a single point of failure. That is, every subsystem has a backup, and the backup is not a mere clone of the primary; it's a completely separate design by a completely different, independent team that has no access to the first team's design apart from the functional specifications. Each subsystem also has its own independent circuitry, with its own power sources not linked to the other systems in any way.
The reason is this: if the primary system and its backup run the same code, a bug in that code affects both systems. When the first system fails, the backup will likely fail for the same reason. Result: plane crash, lives lost. If the primary and secondary systems are independent, it's far more likely the second system won't fail under the same circumstances that took out the first. Unless, that is, the two share a power source; then a failure of the power source still means a plane crash and lives lost. So for the backup subsystem to be a backup in the true sense of the word, it must share no component whatsoever with the primary, not even the power circuitry. The secondary monitors the primary using its own independent sensors, and if it detects a problem, it automatically takes over control of the aircraft.
In a nutshell, this is the design principle of no single point of failure.
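Translated into software terms, the pattern looks something like a hot standby that watches the primary over its own independent channel and shares nothing else with it. A toy sketch in Python, emphatically not avionics code (the address and timeout are made up):

    # Toy hot-standby: take over when the primary's heartbeat goes silent.
    # The standby shares nothing with the primary except the heartbeat
    # protocol itself -- the "functional specification".
    import socket

    HEARTBEAT_ADDR = ("127.0.0.1", 9999)  # made up; primary sends UDP beats here
    TIMEOUT_S = 2.0                       # silence longer than this => failure

    def take_over():
        print("primary silent; standby assuming control")

    def standby_loop():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(HEARTBEAT_ADDR)
        sock.settimeout(TIMEOUT_S)
        while True:
            try:
                sock.recv(64)             # any datagram means "primary alive"
            except socket.timeout:
                take_over()               # no beat in time: assume the worst
                return

    if __name__ == "__main__":
        standby_loop()

Note what's missing: the standby never calls into the primary's code, never shares its state, never trusts its self-reporting. That's the whole point.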
Today's "cloud" infrastructure, unfortunately, violates this principle almost everywhere. See how many systems went down today because of Crowdstrike. It's because it's a single component that has no redundant backup, or the redundant backup depends on the same faulty component as the primary system, so when this component fails, it takes down both systems. Now it's already bad enough if one critical piece of infrastructure (say banks) depended on this single component. But here we're talking about all kinds of systems that, at first glance, ought to be independent of each other, but turn out to depend on the same single point of failure: i.e., MS's wonderful OS and the single point of failure within it: the Crowdstrike component. Just look at the scale of the outage today. How many systems, companies, industries, depend on this single faulty component? If the world were an aircraft, we'd all be dead right now.
You'd think people would've clued up since the infamous AWS outage a couple o' years ago. But nooo, it happened again. And will continue to happen again as long as people keep barging ahead like there's no tomorrow without learning the lessons they should be learning. (Well, maybe there truly isn't any tomorrow, if the current trends continue.)
Offline
Of course, there's also the more sinister side of things: the reason for centralization is so that upstream gets to control you.
In the old days, you could just copy Windows bit-for-bit and you'd have a second, working installation. MS, obviously, was quite unhappy with this, because they were losing lots of money to illegal copies of Windows. So they went the route of copy protection, which, unfortunately, since the days of the good ole Apple II has proven time and again to be ineffective. As long as the bits exist on the user's computer, there will always be a way to copy them. This is impossible to prevent, due to the nature of how bits are stored and how computers work. So the logical next step is: force the user not to have all the bits on their computer. Instead, the bits live on MS-controlled servers and are only served piecemeal to the user's computer as they are needed.
Thus the whole cloud computing philosophy was born, as sheep's clothing for the centralized-control wolf. And now, in 2024, you can see very clearly where this is headed: back in the day, applications, just like the OS itself, could be trivially copied, again hurting MS's pockets. So what's the solution? Copy protection was tried and proved worthless. (It has always been worthless; but for some reason people keep repeating the same failures.) The solution is: make the application live in the cloud! It doesn't exist on the user's computer, therefore it's uncopyable. The front-end code may (temporarily) exist in the browser, but the business logic sits securely on MS's servers and never leaves. Genius. Better yet, MS can now charge subscription fees for the use of their servers.
Sounds totally reasonable, until you realize that the counterexample has always existed, provided in fact by MS itself: said application logic does not need to live in the cloud! 99% of this logic works perfectly fine on the user's PC. Back in the 90's, entire applications lived on the user's PC, and they worked just fine. But of course, MS ain't gonna entertain that; not now that they've fought long and hard to brainwash users into thinking that "the cloud" is somehow superior, and that application logic therefore magically works better "there" than here on the local PC. Oh no, you better use the cloud version, because that's superior! And by the way, the non-cloud version is deprecated, no longer maintained, and soon to be phased out, and we will force all newer files to be gratuitously incompatible with it so that you'll have no choice but to "upgrade" to the cloud.
But they didn't stop there. Why stop at applications living in the cloud? Have the user's files live there too! Then they can really tighten their iron grip on users. The user has no more recourse: you cannot return to the cloudless clear sky of the 90's, because your files no longer live on your PC. So you'd better subscribe to the cloud, otherwise your data will vanish like the vapor that the cloud is. In this way, they've vendor-locked their users into a world where users' data is enslaved to upstream, thereby giving upstream the leverage they need to force users to run applications in the cloud. This hasn't fully happened yet, but it's very obvious by now that this is where things are headed. And once it does, upstream has total control over you: your PC is merely their instrument for enforcing it. Goodbye user privacy, goodbye user empowerment, goodbye user ownership of the hardware they purchased and the data they created themselves.
Everything belongs to upstream, and upstream controls everything.
I left that world behind when I decisively ditched the Windoze world back in the 90's. And I'm not planning to have anything to do with that world ever again. Debian started down the same road with the systemd fiasco, that's why now I'm here on Devuan. And I'm ready to jump ship again, if any trend in that direction rears its ugly head again. I do not wish my life to be controlled by some impersonal corporation whose ultimate, and real, motivation is only money.
Offline
@quickfur, thanks for sharing your precious irreplaceable time with this thread. Unfortunately, 99.99 percent of humans turn a deaf ear until it happens to them.
also see for reference (and present-day correlation):
_https://en.wikipedia.org/wiki/Nineteen_Eighty-Four
_https://en.wikipedia.org/wiki/The_Iron_Heel
Be Excellent to each other and Party On!
https://www.youtube.com/watch?v=rph_1DODXDU
https://en.wikipedia.org/wiki/Bill_%26_Ted%27s_Excellent_Adventure
Do unto others as you would have them do instantaneously back to you!
Offline
Not only am I against The Cloud & push updates, I have always been against systemd & the way it is creeping into everything it can. We still have a few distros that understand how bad systemd is, but if they ever cease to exist, I'm prepared & ready to transfer everything over to one of the BSD systems.
Online
IME most of the time this kind of thing isn't really about control, it's about liability (at least in the corporate world).
If a company contracts an external provider to handle endpoint protection, veeps who know nothing about IT get to tick their "all practicable steps to protect customer data" and "certified industry standard solution" boxes. If anything goes wrong they can just point at (and potentially sue) $external provider, with no risk of blowback.
The bigger and more widely known the provider, the harder they can lean on the "industry standard" "we're doing what everyone else is doing" angle.
This is how we end up here, not because of some grand conspiracy (not to say they're not out to get you, mind) but because big IT knows they can exploit human laziness and risk-aversion for profit, and their corporate customers often like dealing with a monopoly because of the "safety in numbers" effect.
As for "cloud" more generally, my answer (both personally and professionally) is "go away"... Unless the rep brought biscuits, in which case I'll pretend to listen while I eat them*. It simply does not and can not meet my reliability, serviceability and redundancy requirements.
I don't like surprises, and if I really do need to call somebody at 3am to figure out why a critical system isn't working, I expect the answer to be "I'll be on site, with parts, in 30 minutes" not some phone droid with "looks like $cloud whatever is down, better file a ticket".
'tis the same reason I prefer hardware solutions over inscrutable software "fixes", and local contractors over megacorps with 15 layers of bumbling bureaucracy between their technicians and reality.
*the biscuits, not the rep... Though the reverse is often tempting.
Last edited by steve_v (2024-07-20 12:07:03)
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.
Offline
Yes, it is something like this, or even worse.
It has never been like this and now it is exactly the same again.
I am waiting for Steve Gibson to tell the story in detail.
EDIT:
Security Now: CrowdStruck
_https://youtu.be/eLkfKizz6NU
Last edited by igorzwx (2024-07-24 22:28:25)
Offline
It's not so much a grand conspiracy, as the natural consequence of seeking a business model which is safe against cheaters and ensures continual income. In general they're not out to control you in the sense of a dictatorship, but they do want to control how you use their software so that you only use it in a way that ensures their continued income. All this is done, ultimately, for the bottom line, as they say.
But the inevitable outcome of such a design motivation is always that the end users suffer: in suboptimal software design and performance, and in lost privacy and lost freedom to use the software as a tool in the way you see fit. Arbitrary restrictions are placed on what would otherwise be normal, reasonable use, because of the bottom line. Interoperability suffers because it defeats vendor lock-in, which hurts the bottom line. Software and devices that, logically speaking, ought to work together, don't, or do so only poorly, because of the bottom line.
You see these symptoms everywhere in modern software. Apple apps don't work on Windows, Windows programs don't work on iPhones, and nothing runs on Linux except Linux programs because nobody else wants to bother with interoperability. It's easy to copy files on Android, but on iPhone it's next to impossible except for specialized purposes. There are no technical reasons for this; it's all about restricting what the user can do in order to maximize the bottom line. I could go on all day about arbitrary restrictions and crippled functionality that arose because of the bottom line. Things that ought to be easy are needlessly hard because of the bottom line.
This is why I can no longer trust designs controlled by corporations whose sole underlying motivation is money.
Offline
and nothing runs on Linux except Linux programs
The problem is that Linux programs, which always worked on Linux without any problems, may not work anymore.
For example, on Fedora, both maxima and wxMaxima do not work, and all sorts of maxima's flatpaks also fail.
On Devuan, maxima works, and wxMaxima is so buggy that it is impossible to use. But you can compile it yourself.
However, you can install pulseaudio on a Mac (if you want) with Homebrew:

pulseaudio: "Sound system for POSIX OSes"
Install command: brew install pulseaudio
_https://formulae.brew.sh/formula/pulseaudio
systemd is not yet available for macOS.
Offline
I'm skeptical of flatpaks. It just seems like a needless additional layer on top of the OS. If something doesn't work as expected, best to just compile from source.
Lately I'm leaning more and more towards source distros rather than binary distros, just because with binary distros you have to deal with ABI compatibility issues and conflicting binaries. These days I prefer installing 3rd-party software from source rather than as binary blobs of uncertain provenance.
Offline
Actually, I am very happy that the Debian devs managed to compile a maxima that works.
It seems that it was a real problem. Without maxima, wxMaxima is useless.
$ maxima
Maxima 5.46.0 https://maxima.sourceforge.io
using Lisp GNU Common Lisp (GCL) GCL 2.6.14 git tag Version_2_6_15pre3
The Debian devs used a certain version of GNU Common Lisp from git. It works.
The Fedora devs used a different Lisp, not GCL. The result is "segmentation fault".
On Fedora, you open wxMaxima, type a command, execute it, and wxMaxima does not react.
The Fedora users cannot understand what is going on.
You can compile the same version of wxMaxima on Devuan. It works, because maxima works.
It seems that flatpak is a sort of simple solution to all problems, a sort of cargo cult ritual, perhaps.
Last edited by igorzwx (2024-07-22 07:55:16)
Offline
cloud = someone else's drive
Brings to mind the Kim Dotcom raid when so many legitimate users around the world lost their life's work. Don't know how many bought into that hype and currently use big tech's drives.
Wonder how many cloud users, and those update devs, see the relevance of Franklin's quote on safety.
And now it's all over: billions are using big tech clouds, and somebody was even able to boot Linux from Google Drive lol
Offline
And on a lighter note, this incident serves as conclusive proof that Windows has true preemptive multitasking: it can boot and crash at the same time!
Offline
@quickfur . . . I bet you could make an excellent contribution to the joke thread with that info!
Offline
The entire post was somewhat intended for the joke thread.
Offline
Sadly not in 25 words or less . . .
Offline
While visiting Bruce Schneier's website for _other_ reasons, I came across his CrowdStrike commentary:
Offline
He hit the nail right on the head. Current incentives are completely bass-ackwards, and the brittle tower of cards that is (most of) Big Tech today is the result. All this for what? To make a quick profit in the short term; who cares about the long term anyway.
That attitude is prevalent in today's IT sector, where people are highly incentivized to appreciate (and build) the latest and greatest, and to do so as quickly as possible. MS was, ironically, one of the early pioneers of this approach (remember "release early, fix bugs later", back in the days of Windows 95 and 98?). Today almost all of Big Tech is run this way. Get the product out the door as fast as possible, we'll sort out the bugs later. Let the customers find the problems for us -- we don't have the time & resources to do that ourselves anyway -- and we'll fix them in the next patch release, where we get to charge the customers even more for our efforts! Win-win!
Now we see what this "win-win" strategy is actually worth, when the tower of cards collapses. CrowdStrike was only a partial collapse. Can't wait to see what chaos ensues when there's a full collapse.
Offline