Google's war on ad-blocking is no doubt just starting. They will keep escalating and adding new tactics as the quest for ever greater profits at the user's expense continues. Other giants like MS and Facebook no doubt have similar plans in the works, based on their own unique leverage, since they don't own the most popular browser. It's sort of amusing that Google has started trying to cut some of the other giants out with new restrictions on things like third-party cookies.
But I'm now wondering if the centralization of the network has gone too far. We also depend on those huge corporations for a basic access system of the network: DNS. Aside from Google, again, there is Cloudflare (one of the 'backbone' giants), Quad9 (IBM), OpenDNS (Cisco!), and then a few specialty services like AdGuard or DNS Advantage. But those are much smaller, and I wonder if they could handle the increased load from a large number of people switching over. And if they do, they will then become targets of the ad-pushers. Google will surely do something or other if it perceives such services to be in the way.
Yes, nearly every ISP you can join provides 'their own' DNS servers. But what is this? Just linking into the pool provided by the giant services, isn't it? So your local ISP (if it's really local, and not a hefty player that doesn't quite match the giants, e.g. Frontier, Spectrum Cable, etc.) can then spy on you and sell your internet activity to someone.
There is some degree of filtering of known criminal sites in most of these servers, and that's good. But there's no protection from snooping and advertising by the very giants providing the 'service'... (except with the specialty services, and how long will they last?)
The first thing I expect to do when the ad-blockalypse really hits is start using custom hosts files full of nulled sites again. It used to be a pretty standard thing to do, but the ad-block extensions just got so good it didn't seem necessary anymore.
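For anyone who never did it: entries in /etc/hosts are consulted before any DNS lookup, so pointing known ad and tracker domains at an unroutable address kills them at the source. The shape of it (these domain names are placeholders; the real ones come from the maintained lists):

# /etc/hosts additions (example entries only)
# 0.0.0.0 beats 127.0.0.1 here: nothing answers, so requests fail fast
0.0.0.0  ads.example.com
0.0.0.0  tracker.example.net
0.0.0.0  telemetry.example.org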
It occurs to me, though, that a lot of work is already being done researching how advertising operates, to keep the rules for the ad-blocking extensions up-to-date, and to keep the custom hosts lists up-to-date. There are clearly many teams of people with developed methods and systems.
Maybe it's time for another approach?
One thought I had was to stop expecting to go anywhere instantly. We really have been spoiled with the universal DNS system. And now we're so used to it that most people just put up with how it is exploited against them for spying and ads.
Hardly anyone listens to the serious security experts; it's too much fun to have the illusion of 'freedom'. But they have been recommending whitelisting for a long time. This is the opposite of the current approach of both the ad-block extensions and the hosts files. Instead of letting your browser go out to some (corporate) DNS source, get info instantly, and then apply a quick local check against the work done by the independent teams, how about keeping a list of sites you think are safe to visit, with a way to re-check them against 'authoritative' DNS servers occasionally, since addresses can and do change? But do most of your browsing against a static list (whitelist) of good sites. Isn't this what good corporate security does? (Or is 'good corporate security' now an oxymoron? Has everyone but the government gone 'WeWork' and 'BYOD', with no organizational control anymore? Outside of the tech giants who exist for the purpose of that control, of course....)
This would have multiple effects. First, if DNS lookup is NOT automatic, huge amounts of drive-by ads, spyware, malware, and malvertising just wouldn't work anymore. It's the mindless automatic activity our browsing triggers that allows these things. Second, of course, a major route of surveillance, DNS itself, would become vastly less useful. All a DNS server could learn, when you tell your DNS 'utility' to re-check your list, is that certain sites are in your list of interesting sites. Not every single time you access them, in real time.
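To make the idea concrete: a local resolver could probably fake the core behavior today. In dnsmasq syntax it might look something like this (a rough sketch only; the whitelisted domains and the upstream are examples):

# dnsmasq.conf sketch: answer only for an explicit whitelist
no-resolv                      # ignore /etc/resolv.conf, no default upstream
address=/#/                    # anything not matched below gets NXDOMAIN
server=/debian.org/9.9.9.9     # whitelisted: forward to an upstream (Quad9 here)
server=/kernel.org/9.9.9.9

The occasional re-check against 'authoritative' servers would still need some scripting around this, but the automatic lookup firehose is shut off.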
This is a very rough idea, so I'm not yet sure how to execute it fully. Modify browsers themselves to have a new 'mode' of usage? Complicated, but certainly not any more complicated than many other parts of modern browsers. Another possibility is a local proxy. This is probably much more doable on Linux than Windows. Way back in the late stone ages, not the days of bang paths, but in the early days of the 'web', there was a nifty utility for Windows called Proxomitron. I used it for the better part of a decade, before there even were 'browser extensions'. It was a local proxy you could set up and program as an HTML filter, and it was the best ad-blocker available until the dedicated ad-block extensions. I got to be pretty good at writing my own filters and learned a lot about HTML that way. But Proxomitron had to detect site names directly to block sites; it didn't affect DNS itself. That's what the custom hosts files were for. Unfortunately, support ended with the death of the author.
So, maybe something like what I'm describing can be implemented on Linux without too much trouble? I've heard of Squid but never learned much about it. What I'd heard was that it's a local caching tool to save bandwidth, and that it can act as a filter. But can you whitelist DNS with Squid?
And yet, even if so, what does that do for the common browser user? This is all to say, we need a new approach. I could have posted a question about Squid in a couple of lines.
Last edited by Micronaut (2023-12-06 01:57:24)
Offline
This was implemented a long time ago, but then forgotten.
A firewall was used for such purposes.
Today the method is implemented, for example, in bionicpup32-8.0+26-uefi.
Offline
I used the squid proxy server with iptables, for reducing noise and therefore bandwidth ($/month).
"Uber Linux Project" by Ashton Mills, (atomic magazine Australia)A Firewalled Gateway.
Mandrake was the model; I set it up with Mandriva. You might still find it someplace (as a guide; dated now).
Anyhow, I don't remember using squid for blocking DNS, but I wouldn't be surprised if it has the ability.
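From what I remember of squid ACLs, allowing only a fixed list of destination domains went roughly like this (a sketch from memory; the domains are examples):

# squid.conf fragment: permit listed domains, deny the rest
acl goodsites dstdomain .debian.org .kernel.org
http_access allow goodsites
http_access deny all

# and the classic rule to push LAN web traffic through it transparently
# (interface and ports are whatever your setup uses):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128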
Offline
Squid doesn't have anything to do with DNS, it's a caching/filtering HTTP proxy. Such proxies aren't seen much anymore, because they don't really work on encrypted (i.e. HTTPS) connections.
Anything that does caching/inspection/filtering of HTTPS traffic needs to decrypt it, and that means installing a root cert on the client, which pretty much breaks the SSL trust model... Not that it stops corporate networks from doing exactly that, for "safety".
The browser has the SSL keys so it can do all this. That's kinda how most decent ad/content blockers work already, once you go beyond host lists and into selective element blocking (that's also part of why manifest v3 is bad, as it hobbles this ability to modify content before it is displayed or executed).
DNS is another matter altogether, and there's nothing stopping you running your own resolver (e.g. BIND) on your own network, with your own rules. Screwing around with cache expiry/TTL and locally resolved "whitelists" will likely just land you in a world of "my DNS is broken" hurt though.
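If you do run your own, response-policy zones are the sane way to express those rules in BIND, rather than faking authority for other people's domains. A minimal sketch (zone name and file path are examples):

// named.conf fragment: attach a local policy zone
options {
        response-policy { zone "rpz.local"; };
};
zone "rpz.local" {
        type master;
        file "/etc/bind/db.rpz.local";
};

; /etc/bind/db.rpz.local: a CNAME to the root means NXDOMAIN for that name
$TTL 2h
@                 IN SOA localhost. root.localhost. (1 6h 1h 1w 2h)
                  IN NS  localhost.
ads.example.com   CNAME .
*.ads.example.com CNAME .

Blocking that way doesn't involve lying about TTLs, so it won't give you the "my DNS is broken" experience that hand-edited zones will.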
Last edited by steve_v (2023-12-08 02:47:21)
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.
Offline
The first thing I was thinking about was a new version of Proxomitron. It simply died due to losing support. The existing ad-block filters could work fine as separate programs, and not have to worry about 'browser security' restrictions. But I had forgotten about the change to HTTPS over the years. There is a very large infrastructure lock-in with that. It seems a disproportionate effort would be needed to get any sort of 'proxy' ready to handle the certificates; may as well just write a new browser.
But that gets back to the problem with everything now being bound to Chrome's engine. Firefox is the last independent engine with any presence, and that is shrinking. I guess the coming controversy when ad-blockers get shut down will be the last chance for a resurgence of some sort.
Offline
The first thing I was thinking about was a new version of Proxomitron. It simply died due to losing support.
If it hadn't, https-everywhere would have killed it anyway. You can't really have local proxies and SSL at the same time, at least not how modern browsers are designed and modern users are trained.
All it takes is for someone to notice that faceborg is suddenly signed by proxomitron instead of DigiCert Inc, and the "muh securitee" wailing starts. Users are of course properly groomed to run around with their hair on fire if they see any kind of browser "security warning", rather than reading the details and making an informed decision.
All this is really just the same old war. Commercial entities want total control over how their sites (and apps, in the mobile space) display and how you interact with them, and that means locking down browser endpoints and preventing tampering with DNS and network traffic:
Pervasive use of obfuscated javascript where HTML would do.
Ever-expanding list of web "standards" and extensions that make it near-impossible for independent browsers to stay compatible.
HTTPS everywhere (even when no sensitive data is being exchanged).
DoH, and the "your ISP could spy on you, better trust google or cloudflare instead" non-argument.
Manifest v3 and the wider attack on content filters.
WEI (shot down for now, but it'll be back).
"SafetyNet" and other mobile OS attestation systems (which will come to PCs in time, we have "anti-cheat" rootkits and the like already).
All of this is touted as being for your security... It's not, it's for theirs. Google is of course the arch-villain here, because nobody wants to control what you see so much as a corporation funded by and founded on advertising.
IMO the only real solution is wholesale rejection of the current "web v2 (and supposed v3)" and its centralisation of power. It's corrupt beyond recovery at this point, and is only getting worse.
I'd love to see the likes of gnunet, gemini, and other decentralised solutions gain popularity, but so long as all the frogs want is online shopping and social media, we're kinda screwed. If the water only gets a little bit warmer with each new "improvement" it's fine, right?
Offline
Sony's CD copy protection was hacked by a 16-year-old.
Let's wait and see...
Offline
I use dnsmasq and these blacklists/blocklists
https://github.com/notracking/hosts-blocklists
and my own blacklist, mostly FAGAM.
Also, all sites which refuse ad-blockers get added and are never accessed again.
This avoids a lot of traffic, starting with the outgoing DNS queries. Saves time.
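For reference, hooking those lists into dnsmasq is only a couple of lines (paths and file names here are from memory of that repo's instructions, so check them):

# /etc/dnsmasq.conf additions
conf-file=/etc/dnsmasq.d/notracking/domains.txt      # dnsmasq-format domain blocks
addn-hosts=/etc/dnsmasq.d/notracking/hostnames.txt   # hosts-file-format entries
address=/some-adblock-hating-site.example/           # own additions: bare address= returns NXDOMAIN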
Offline