This is an outrageously dumb thing to say. Big Tobacco knowingly sold a product that physically addicted its users (the only real form of addiction) and killed them.
Facebook ran experiments on unknowing teenage girls to study how being shown negative content leads to negative mental health outcomes, which has led to suicide.
> Contrary to the earlier notion that addiction is predominantly a substance dependency, research now suggests that any source or experience capable of stimulating an individual has addictive potential. This has led to a paradigm shift in the psychiatric understanding of behavioural addictions.
dopamine, the little “hit” you get on social media sites or when you get a “ping”, has a massive role to play in behavioural addictions. and with behavioural addiction it basically causes the same stuff in the brain that cocaine etc does (very simplified explanation).
also, i’m a recovering drug addict. and i can tell you for sure from my lived experience that addiction is definitely not limited to physical stuff like drugs. xD
> Problem gambling (PG), also known as pathological gambling, gambling disorder, gambling addiction or ludomania, is repetitive gambling behavior despite harm and negative consequences. [0]
Addiction isn't just [chemical in blood stream] -> [addiction]. Addiction involves many steps, many of them in the brain, and many of those reactive to non-physical events.
Gambling is conventionally considered addictive, but the user isn't ingesting chemicals. I don't think a physical/non-physical binary really stands up under scrutiny. I mean, aren't all addictions physical insofar as they stimulate the body to produce neurotransmitters?
Plus, smoking doesn't kill people; its pathological outcomes do. Similarly, looking at a phone screen might hurt a user's eyes, but it won't kill them; however, the decisions that user makes over time due to the effects of the subject matter they interact with may well put them at risk. And if aspects of that subject matter are deliberately amplified for their addictive properties, should platforms be regulated to control this?
Many users abuse subscriptions in violation of the TOS to run tools like OpenClaw in automated ways. It's an anti-abuse measure. Makes perfect sense. Anthropic's business model is the API business. The $200 subs are a paid demo of the API. Go slam the API with OpenClaw all you want, if you can afford it.
Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all. Be glad a patch was available prior to release.
If they want to be seen as responsible rather than opportunistic, then yeah, they should do a proper coordinated disclosure.
Sure, they have no legal obligation to disclose, but we all also have no legal obligation to buy their services. Blacklisting bad actors like this is the right move to discourage this kind of behavior.
they did a proper coordinated disclosure, following the industry standard 90+30 process. that is why the exploit dropped 30 days after the patch landed.
the kernel team should have communicated with their downstream about the importance of the patch. that is the kernel security team's responsibility -- and they are much better positioned to do that than crossing your fingers and hoping every reporter will contact every distro every single time there is a vulnerability.
there are very good reasons disclosure works this way, backed by a couple of decades of debate about it.
how many times does it have to be said that it is impossible for the linux kernel to communicate with anything but a minuscule portion of its downstream, and _that_ has been done?
Who cares about how you are seen when you are selling 0day for big bucks? The bad actor makes more money than the 'legitimate' one without breaking any law. Punishing someone who didn't alert distros despite a patch being available encourages the company to simply find flaws and sell them for profit - it pays more to begin with.
If they want to take advantage of disclosure for marketing, they're either going to need to accept the norms around responsible disclosure, or they're going to need to accept how shirking those norms will come off. That's life in society. Sometimes it's annoying and sometimes it doesn't feel rational, but these norms have been negotiated throughout the history of our industry and are the way they are for reasons good and bad.
I just don't see the point in complaining about how shirking the norms of your industry will make you look irresponsible. I don't really care that they could have decided to sell the vulnerability instead. It isn't material.
It is absolutely not true that viable commercial vulnerability labs need to "accept the norms around responsible disclosure". There are no such norms. "Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules. It was fantastically successful at that, and it's worth pushing back on at every opportunity.
Tavis Ormandy dropped Zenbleed right onto Twitter. He's doing fine. You can blacklist him if you want; I imagine he's not going to notice.
Microsoft's policy is: "if you contact us with a vulnerability, you automatically agree to the terms of our responsible disclosure policy", which includes waiting 30 days after a patch was created, and says nothing about how long that process takes.
There is actually no way to give them a friendly heads up, and then do your own thing. The only way not to be bound is by not sending them any notification at all...
You can email without agreeing to anything. But for a serious issue Microsoft would obviously try and track down who you are and what jurisdiction you are in.
> The Microsoft Bug Bounty Programs Terms and Conditions ("Terms") cover your participation in the Microsoft Bug Bounty Program (the "Program"). These Terms are between you and Microsoft Corporation ("Microsoft," "us" or "we"). By submitting any vulnerabilities to Microsoft or otherwise participating in the Program in any manner, you accept these Terms.
You said "There is actually no way to give them a friendly heads up, and then do your own thing. The only way not to be bound is by not sending them any notification at all..."
Maybe you're right. I just find it confusing. The language is all-encompassing; it doesn't read as opt-in to me if taken literally: "By submitting any vulnerabilities to Microsoft". And I found no other pages describing "report in such and such way to have these terms apply instead". But I always have problems with this stuff, perhaps taking it too seriously.
Obviously they can write whatever they want in their policy documents. The thing is, sometimes this is about larger sums of money, or someone's reputation, which may or may not actually lead to steps. In contrast with whatever TOS/EULA you accept in account signups for some service, this feels more serious. I've seen some people getting harried after publishing something that fell _outside_ the servicing boundaries. Getting tangled up in whatever is already a loss in my book, even if you "win" in the end.
Note that that policy is also where they set out the safe-harbor conditions, which, on my reading, are tied to the bounty policy and not the RD/CVD policy. The RD/CVD page itself specifies no such thing, which is why I relate them.
I do not speak for MSFT, but last time I spoke with MSRC indeed they would be happy to receive your vulnerability report even if you did not wish to participate in any particular bug bounty program.
Those norms do not exist. Those are people asking companies to do stuff to benefit the person complaining for free, and many companies will not do that.
It seems to me you're unaware of them, but there are strong norms around disclosure. They've been discussed for decades. It is the expectation that vendors would be notified in a scenario like this.
No, there are users who want those to be norms. Qualified researchers happily sell substantive vulns to people who pay (Governments/Cellebrite and companies like that) enough to quell any complaint.
Which is, again, irrelevant to the question of how disclosure works and what expectations there are around it, because that is not disclosure and is not what was being discussed.
How does someone being incentivized to sell a vulnerability to a private organization over disclosing it publicly preserve a "high trust society"? Do you mean in the context of a "deceptively high-trust society"?
Those private actors aren't planning to sit around and hold onto these exploits they've hoarded forevermore; they're obviously paying for them so they can one day use them.
Unfortunately this is correct. As a security researcher, I have set millions in profit on fire by reporting vulns to projects that offer no bounties instead of selling to the highest bidder. I keep doing it because it is the right thing to do, but I would not blame someone who needs to feed their family for making a different choice.
We must get public funds to reward ethical disclosure of big impact vulns like this.
Harder and harder to get good policy like what you describe when tech-adjacent people loudly argue for criminal penalties for anything other than coordinated disclosure :(
Are you claiming that if I sell 0day through a broker to the national Government of a given jurisdiction, that the national Government of that jurisdiction is going to criminally penalize me?
If so, that's a bit naive. In the actual world, that buyer wants to buy more stuff from me, not penalize me.
I'm pretty sure they have a legal obligation in most jurisdictions not to sell 0days for profit.
And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems. (I'm not saying "responsible disclosure" is the correct way to do that, but hoarding vulnerabilities and exploits and selling them to the highest bidder certainly isn't.)
It is categorically false that there's a legal obligation not to sell vulnerabilities. There's an obligation not to knowingly sell them directly to ongoing criminal enterprises. That's it. Plenty of people make fuckloads of money selling vulnerabilities for exploitation rather than repair.
(The buyers are the NSA, the IDF, Cellebrite, NSO and its successor corporation and that kind of thing. Depends on what you are offering)
You'll learn who the buyers are if you routinely have the really good stuff to sell! If you are offering iOS zero click on a semi-regular basis, the buyer is going to want to try to deal with you directly and preferably offer you a more regular form of employment, if you are interested. Some national governments may offer certain benefits to you, depending on your situation.
All depends on what you have to offer. If you were able to offer this https://arstechnica.com/security/2025/09/microsofts-entra-id... or something of that magnitude, a lot of problems in your life would just go away. The buyers would all be Five Eyes and the intelligence gain of having that kind of access even briefly is priceless.
In a more Western-centric context, imagine if you had a flaw like that, same 'no logs are generated' and 'every single customer account is accessible' but the impacted vendor was Alibaba Cloud. The researcher would get to name their price. That's the real world, that's the world we share. We shouldn't be blind to that.
> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit.
Uh... no? If you mean legally, some researchers might be, depending on jurisdiction. But also, ethically? Yes, researchers are ethically obligated to disclose responsibly.
> Just fyi.
...
> Be glad it was disclosed at all. Be glad a patch was available prior to release.
I am glad that a patch was available. Equally I can be glad that the linux community is strong enough to respond quickly, while also being angry that this person behaves unethically.
Likewise, when people in my industry behave poorly or unethically, I'm the one ethically obligated to both point it out and condemn it. Not to become an apologist demanding I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.
> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all.
I'm so glad these so called "researchers" aren't totally evil, I'm so grateful they're only half evil, give them a lollipop.
Whatever, the way they disclosed it isn't much different from no disclosure at all - the exploit would have been identified in the wild and fixed soon thereafter.
the way they disclosed it is the industry standard. think of the biggest security research teams you know (e.g. google), and they follow the same process.
non-security people always seem to get up in arms about it, but there are very good reasons why the industry has landed on the process it has, which has been hashed out over a few decades.
1. Status quo. Researchers are free to disclose to a vendor, free to sell vulns to legitimate companies, free to do full disclosure if they want. This situation benefits security. Researchers are able to pay their bills while also doing meaningful research into OSS projects that are unable to fund the kind of security audit they need. Harm reduction, of sorts.
2. Everyone is a bad actor. No one is going to do this work for free/for a bounty. Horrible flaws will be found and shared with ransomware gangs and the like. 0day will sell for a percentage of the ransom winnings. Researchers will live like kings, everyone else will suffer.
They should have a legal obligation to engage in coordinated/responsible disclosure, and it should be a crime to sell or disclose a 0day to anyone other than a state-designated security organization or the vendor/provider.
If it won’t be handled through criminal law then it’ll be handled through civil litigation: Anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.
Social media is not a thing at all. Social media is a website. Websites are not healthy or unhealthy. Food is healthy or unhealthy. Websites are light and potentially sound, not something with health effects.
Go look directly at the sun without any protection or go listen to sounds of 120dB if you want to test your hypothesis that light and sound can't be unhealthy.
Or maybe you aren't being literal and are just saying that what children see and hear has no influence on their development. Either way, total bullshit.
This is simply false -- the literature is full of discussion about the health effects of social media.
More generally you're committing I believe two separate fallacies of ambiguity? Like one in going from the institution of social media to its reification in the form of specific websites, and then a second fallacy when you go from the specific websites to all websites in general? Like if you said "Gun ownership is not a thing at all. Gun ownership is a piece of metal. Pieces of metal cannot be healthy or unhealthy." OK but, you owning a gun is known in the scientific literature to be significantly correlated with a bunch of very adverse health effects for you, such as you dying by suicide, you dying from spousal violence, or your protracted grief and wasting away because your child accidentally killed themselves. Like to say that it's impossible for the institution to have adverse health effects because we can situate the objects of that institution into a broader category which doesn't sound so harmful, is frankly messed up.
[1]: Bernadette & Headley-Johnson, "The Impact of Social Media on Health Behaviors, a Systematic Review" (2025) https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/ - the content you consume can promote healthy or unhealthy behaviors
[2]: Lledo & Alvarez-Galvez, "Prevalence of Health Misinformation on Social Media: Systematic Review" (2021) https://www.jmir.org/2021/1/E17187/ is notable not just for its content but also like a thousand papers that cite it getting into all of the weeds of health influencers sharing misinformation to make a buck
[3]: Sun & Chao, "Exploring the influence of excessive social media use on academic performance through media multitasking and attention problems" (2024) https://link.springer.com/article/10.1007/s10639-024-12811-y was a study of a reasonably large cohort showing correlations between social media usage and particular forms of multitasking that inhibit academic performance -- more generally there's broad anecdata that the current "endless scrolling constant dopamine hits" model that social media gravitates to, produces kids that are "out of control" with aggressive and attentional difficulties -- see Kazmi et al. "Effects of Excessive Social Media Use on Neurotransmitter Levels and Mental Health" (2025) (PDF warning - https://www.researchgate.net/profile/Sharique-Ahmad-2/public...) for more on the actual literature that has probed those questions
[4]: The APA has a whole "Health advisory on social media use in adolescence" https://www.apa.org/topics/social-media-internet/health-advi... which is pretty even-handed about "these parts of social media are acceptable, those parts can maybe even be downright good -- but here are the papers that say that for adolescents, it can mess with their sleep, it can expose them to cyberhate content that measurably promotes anxiety and depression, it has been measured to promote disordered eating if they use it for social comparison..."
And people should be free to pick and choose whether they want to use sites that do that or not. Whatever hacker news does seems to be fine for me, and I did not need to verify my ID in any way (even though it's very easy to figure out who I am from this profile)
Anonymous in terms of it not being possible to derive the real world identity of the human from the value, sure. Anonymous in terms of providing no durable way to ban that human from the platform? No.
It's not about paying Google. People can buy gift cards with cash and do that; that's not the problem, especially not for commercial use. It's everything else that they're imposing or could impose on a whim and whose device it is they're putting restrictions on.
Last I heard, Google doesn't employ law enforcement. I can auth to the people we vote for and any laws they make, such as bank KYC against specific criminal activity. Nobody gets scammed via an apk when it's infinitely easier to put up a webpage or social media profile.
I should not have to enter into a business relationship with google just to hand my non-technical friend an APK any more than I have to enter into a business relationship with the Linux Foundation to hand my friend an AppImage.
This is HN. This isn't YouTube. Rossman is beneath this place.