I second NoScript. Instead of whitelisting whole sites, you can whitelist links to JavaScript imports across all sites, temporarily or permanently.
So for example, you can whitelist URLs for all the major JavaScript frontend frameworks' CDNs, like Bootstrap's, etc., while leaving known trackers and spyware blacklisted by default.
Anecdotally, it seems most websites still work with their trackers disabled, as long as they have their frontend frameworks loaded.
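It's also not NoScript-specific. The same policy (third-party scripts blocked by default, a handful of framework CDNs allowed) can be sketched as uBlock Origin dynamic filtering rules, roughly like this; the CDN and tracker hostnames here are just examples:

    * * 3p-script block
    * cdnjs.cloudflare.com * noop
    * cdn.jsdelivr.net * noop
    * google-analytics.com * block

The noop'd hosts still go through your static filter lists; they're just exempt from the blanket third-party-script block.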
I disagree. There are way too many sites that require JavaScript; you'll eventually get into the habit of blindly enabling scripts when a site breaks, negating any security benefits.
I disagree. I'm not just pulling this out of my ass, I've been doing exactly this for years, I can't remember how long. It works fine.
>you'll eventually get into the habit of blindly enabling scripts when a site breaks, negating any security benefits.
The key here is that when you're deciding whether to whitelist a JS import and you don't know what it is and don't want to take the time to look it up, whitelist it temporarily, not permanently. It will be moved back to the blacklist the next time you restart the browser.
Only permanently whitelist JS that you know for sure isn't a tracker or malware or sketchy.
> Only permanently whitelist JS that you know for sure isn't a tracker or malware or sketchy.
What’s the whitelist based on? URI? Or file content hash? Because today’s “criticalsitefunctionality.js” is tomorrow’s “upstream got p0wned and there’s a Bitcoin miner in there too now”.
Sites churn so often that “permanently” whitelisting hashes is probably a never-ending chore, and you’re unlikely to want to constantly re-inspect minified JS, so this eventually turns into semi-blind faith.
And permanently whitelisting URIs is pure security theatre; that file could contain anything, next request.
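(Hash-based pinning does exist, but on the site operator's side, as Subresource Integrity; something like the sketch below, with a placeholder hash and hostname. If the file changes upstream, the browser refuses to run it. A URL-based whitelist in the visitor's browser can't make that guarantee, which is exactly the problem.)

    <script src="https://cdn.example.com/criticalsitefunctionality.js"
            integrity="sha384-<base64 hash of the exact file that was audited>"
            crossorigin="anonymous"></script>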
I'm aware of all that, but it's not theater, it's just part of a defense in depth strategy. Reduces attack surface area, doesn't eliminate it, while maintaining usability of the web.
If you have a better approach that accomplishes both of those objectives, do tell.
I'm sure it adds some amount of security. I'm just skeptical it adds enough security to be worth the hassle. I discussed the threat model here: https://news.ycombinator.com/item?id=27564457 and came to the conclusion that it wouldn't prevent many attacks in practice.
> If you have a better approach that accomplishes both of those objectives, do tell.
Use a browser that isolates the JS engine in its own process and leave spectre mitigations enabled rather than try to play kid-plugging-holes-in-dike-with-finger by auditing all the world’s constantly-changing JS for spectre/meltdown gadgets?
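Concretely, and assuming reasonably current builds (site isolation is on by default in both these days), that looks something like:

    Firefox:  about:config -> fission.autostart = true
    Chromium: chromium --site-per-process

i.e. one renderer process per site, so JS from one site can't use a Spectre gadget to read another site's data out of the same process.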
>Use a browser that isolates the JS engine in its own process
Definitely. All for that.
>and leave spectre mitigations enabled
I do that anyway. The performance cost is unnoticeable to my normal workloads.
>rather than try to play kid-plugging-holes-in-dike-with-finger by auditing all the world’s constantly-changing JS for spectre/meltdown gadgets?
I'll continue doing this too, largely because I want to see what's going on behind the scenes on all the websites I visit. Useful for me to see it all, especially as it changes over time as you observe.
That said, EasyList and EasyPrivacy are also great if you'd rather crowd-source the finger-in-dike-hole-plugging.
I used to do this. It broke too often when doing credit card purchases though... it would take multiple attempts to complete a purchase and figure out which domains needed to be enabled. Sometimes the status would be left ambiguous. Once I double-spent, but fortunately it was a cancellable reservation. I suppose you can do better if you just spend at a few key sites.
I do it with uMatrix. I usually go up to the "all sites" level and enable most everything before going through a credit card payment flow, for this reason.
Security is only part of my motivation, though, and not the main part -- I mostly do it because it protects me by default from all the pop-up type crap that so many websites foist on you. Yes, it's a pain to un-break sites sometimes. But I resent it less than going through the equivalent pain in "privacy settings" popups, wriggling chat widgets, "are you sure you don't want to sign up for our newsletter?" nags, etc. Websites are already broken; as long as that's true, I'd rather be in control of why.
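For the curious, the uMatrix setup being described comes down to a few lines in its "My rules" pane, roughly the stock defaults plus whatever you've allowed; checkout.example.com below is a made-up payment host:

    * * * block
    * * css allow
    * * image allow
    * 1st-party * allow
    * checkout.example.com script allow

Switching the popup's scope up to the global "*" level and temporarily allowing a column is effectively adding a temporary version of a rule like the last one.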
>it would take multiple attempts to complete a purchase and figure out which domains needed to be enabled.
Yeah I went through this too, figuring out all the CC purchase redirects. Some are just idiotic, to the point I wish govts would pass a law mandating zero redirects for online purchases. Stripe, PayPal, Square, Braintree and a few others do payments just fine without the redirects, so it's clearly possible.
But eventually even that gets solved and the redirects get whitelisted. Haven't encountered this problem for a long time.
I have used NoScript for over a decade and I've been bitten by this too, but I've noticed that it has gotten better. CC processors seem to have encountered enough crappy browsers and broken JS implementations that they've improved their services in the last half dozen years or so.
That said, when there's something old, important, and/or dumb looking, I usually spawn a new Firefox container (using Multi-Account Container plugin) and use NoScript's temporary bypass function.
You probably misunderstood: almost all websites require JavaScript, yes, but you can selectively allow only the JavaScript of that site, its framework, etc., and block all the tracker/ads JavaScript with NoScript/uBlock, and then it works and is probably quite safe. But to get around this, more and more websites find ways to sneak the tracker/ads/analytics into the main site's JS. So it is not that easy, either.
Which is why I just use basic uBlock Origin and regularly wipe the browser cache.
>but you can selectively allow only the JavaScript of that site, its framework, etc., and block all the tracker/ads JavaScript with NoScript/uBlock
What's the difference between that and just using the standard EasyList/EasyPrivacy filters? I suppose there's a small chance that a third-party site went rogue and isn't on the default lists, but I'm skeptical how many attacks that would thwart in reality. The attacks I've heard of tend to be first-party/supply-chain (which would be whitelisted by you), or delivered through an ad network (probably already on a blacklist).
EasyList and EasyPrivacy are great. I suppose the main reasons for doing it manually are seeing firsthand what all the sites you visit are doing behind the scenes, and getting a sense of what is legitimately needed functionality, what isn't, and what is just downright sketchy.
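Those lists, for anyone following along, are just long files of filter rules in Adblock Plus syntax (which uBlock Origin also reads), along the lines of the following, with hypothetical domains:

    ||tracker.example.com^$third-party
    ||analytics.example.net^
    example.com##.newsletter-popup

Doing it by hand with NoScript/uMatrix gets you visibility into each request; the lists get you coverage without the effort.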
I feel that is a bit like driving blindfolded because you might get distracted at some point anyway. Sure that one script you have to enable might be the one to exploit your system, but it might also be one of the dozens that didn't do anything useful.
So what happens if you go to a site and see a blank/broken page? Do you just go back and abandon the page? Do you do a full risk assessment of each of the domains? What does that assessment entail?
First I curse JavaScript developers (sorry). Then I use a heuristic: is this a real website for a real thing that I'd heard of before today? If so, temp trust; if it's clickbaity or new, don't trust, or try adding scripts in one at a time, or mostly just give up. There's very little shortage of content.
Depends. Sometimes I leave immediately, other times that blank is just a cover on top of content. And finally, I sometimes have to enable a domain/subdomain using common sense.
It's not that hard, nor time consuming. Again, my wife can do it and she's not a developer.
Still, though. There are sites that would not work at all until everything is enabled, including ads. Imagine not being able to buy a plane ticket because wizzair wants to serve you ads.
I use a browser called Qutebrowser, which doesn't have a NoScript addon, but I can disable JavaScript loading at the domain level.
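For anyone curious what that looks like, here's a minimal sketch in qutebrowser's config.py (Python), with example.com standing in for whatever domain you actually want to allow:

    # config.py
    config.load_autoconfig(False)
    # Block JavaScript everywhere by default...
    config.set('content.javascript.enabled', False)
    # ...then allow it per URL pattern as needed.
    config.set('content.javascript.enabled', True, '*://example.com/*')

You can also toggle it per site from inside the browser with a config-cycle command bound to a key, which is closer to NoScript's temporary-allow workflow.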
However, overall I can tell you with absolute certainty: if you have JS partially disabled, things break in non-obvious ways, and I find myself playing whack-a-mole, allowing various domains to load JavaScript to get the page working.
I'm pretty certain you do also, because it's basically impossible to tell why certain damned sites are broken and the most obvious thing to do is just enable JS temporarily to see if it works at all.
This is especially annoying on some parts of a site, such as checkout, where reloading the page causes a form resubmission.
If the actual payment is via PayPal, I think it usually works without JavaScript on the merchant's side. And as with content, there is no shortage of places to buy stuff.
For banking I use their phone app or else visit them in person. But I use a credit union, not a bank, as I want to trust the people holding my money.