
It looks like the expected thing happened.

The kernel devs patched the kernel. The kernel devs have a pretty well-known, straightforward stance on how they ship fixes for anything, because anything in the kernel can be a security problem.

Distro maintainers can see kernel changes. Some distros aggressively track new changes. Others backport the changes they feel are relevant. Others don’t do either.

Users pick what distro they use, and how they set up their infra.

Maybe if I were paying for RHEL licenses I’d be eyeballing the money I pay and RHEL’s response time.

But the ownership here lies with system operators, who pick their infrastructure, who design their security model, and who build their operational workflows. This vuln is a great example: people who looked at shared untrusted workloads on a single kernel and said “Hell no” had a much calmer day than teams who thought that was a good idea.


The fact that you had to take a whole paragraph to explain the contortionist arrival at something that isn't even really clear after you explained it (you kinda pointed the finger both at end users and at distro maintainers simultaneously), and which essentially boils down to "well, you as the end user need to be following kernel CVEs and can't trust distro maintainers to do it", does in fact indicate that there is a deeper issue at play here. You might say "well, there's no implicit chain of trust here". You might be right, but is that really the most effective way of doing things? Of course Linux is use-it-at-your-own-risk, but is there not a concept of "we as a collective community should get together and try not to drop the ball on some serious shit"?

In terms of something actionable (and maybe someone better versed in how the distros work can tell me why this is a bad idea): shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers, who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.


To be more blunt: if you’re paying for a product, the vendor owes you whatever they committed to. If you’re a Red Hat customer and Red Hat blew past your agreed SLA for this kind of security fix, go be mad at Red Hat. (I don’t think Red Hat is bad here; they’re just the vendor from the lists here best known for a commercial offering. I would say the same thing about Ubuntu Pro.)

Otherwise, it’s on the end user. Distro volunteers don’t owe you anything. Kernel devs don’t owe you anything.

I don’t care about what would be the most effective way of doing things. I care about what folks involved actually owe to each other, and distro volunteers don’t owe users any kind of active chasing of remediation due to the user’s threat model.

The problem with making some kind of streamlined process that solves what you didn’t like about this vulnerability’s remediation is that it ignores basically all the complexity. Like “what about distros that don’t abide by embargoes”, or “which distros count as ones that matter”, or “what about all the vulns that aren’t in Linux, but in software that’s packaged across many operating systems”.


Right, you’re saying “system is working as designed”, and I’m agreeing, but I’m saying “the system as designed kind of sucks, how can we make it better”?

I disagree that it sucks. It leverages a ton of people putting in their time and resources, and relies on system operators being active participants.

This vulnerability is, for some threat models, a really big deal. A security group found the vulnerability. They disclosed it. It was patched.

Folks here have gotten all kinds of bent out of shape that the groups involved didn’t do things in the way each internet commenter would have liked. But this is the system working.


Start a distro with your preferred upstream tracking policy.

Is that the only option here? It’s certainly being framed as such.

Just as a purely intellectual exercise, what changes about this if we leave aside ideas of "owe," "deserve," and "earn"?

There's not really an enforcement mechanism in FOSS like there is in capitalism world, it just comes down to what we want our part of the world to look like. So I think we'd think more clearly if we leave aside the ideas like "who owes who what." I think it's fun to imagine what sort of motivations and incentives there are if we put away the money ones.


The real advantage of Microsoft is that there is someone you can sue!

Linux, like every open source project, is just a bunch of people who are YOLOing it. Not something you use for your Fortune 500 mission-critical infrastructure.


I thought this was why Red Hat exists?

What the heck is up with people today.

Using quotes around something where you’re actually doing a strawman paraphrase of another commenter you disagree with is bad form.


It was clear that the original comment didn't say that, since we can see it right above. It was clear to me that the GP was using quotes as a way to use direct speech, not to imply that the GP literally said those words.

> maybe even criminal

What’s your theory here? What crime?


Exploits are sold and used as weapons, sometimes even weapons of war. Which in many places is criminal, except under very restrictive circumstances.

Also, all kinds of aiding and abetting.


What does that have to do with this comment thread?

Copying from the comment I was replying to:

> But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal


If it's not a crime, I see no reason not to work with partner nations to build responsible disclosure into a legal framework everywhere, because it pretty obviously should be one.

If you wanted to somehow make coordinated disclosure into a legal framework, that would be an interesting and complex project.

But it’s not the law anywhere I’m aware of today, and I’d not support it becoming a law.


This is kind of a thing already in the EU. Under NIS 2, vulnerabilities should be notified to a CSIRT as well as upstream, and the CSIRT shall identify downstream vendors and negotiate a disclosure timeline. I don't know whether they're any good at it or not, though.

You know companies are allowed to pay people to find vulns, and pay people bug bounties?

Instead of that, you’d rather make the law compel free individuals to limit their speech, or to hand over their work to big companies privately, so big companies can save money?

That doesn’t sound like a nice future, if it’s even enforceable at all.


Who knows how many attackers had found this vulnerability and had already been using it prior to this research finding it?

Argument from uncertainty is not a good way to reason about this.

I could equally ask: "Who knows how many attackers learned about this vulnerability from this disclosure, and used it before the distributions fixed it?"


Yes, you could. That’s the core of my point: there is no Right Way to handle vulnerability disclosure. There are many competing factors, and most of them have major elements of uncertainty, because you can’t know who knows what or how various projects or stakeholders will react.

So maybe folks should take a break from the kind of armchair quarterbacking that this was “incredibly irresponsible”, as was done upthread, or that the researchers should be blacklisted for life, as a parallel commenter stated.


Well, now everyone does, so the irresponsible disclosure makes it significantly worse.

It’s your opinion that it’s irresponsible and that it makes something worse.

And it’s your opinion that it doesn’t. Shall we continue stating the obvious? We are communicating using glyphs. This language is English. We are on Hacker News. This branch of the conversation is extremely unproductive.

I asked a question and you replied with a statement. Your statement didn’t frame itself as an opinion but as fact.

The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.

There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.


You seemed to suggest with your initial statement that any disclosure was acceptable, as people would have been using the exploit prior to the disclosure. I don't think that's a strong argument, given that the people who were using the exploit prior to disclosure have now been joined by people who learned of it because the disclosure happened before all the distributions were ready.

So I feel like the argument reduces to "why is it a problem that anyone can now exploit it, if some people were exploiting it already?" Which imho isn't a sensible argument, because the issue is clearly the number of people capable of using the exploit for nefarious purposes, which has increased.


Idk why you felt the need to use quotes to wrap something I didn’t say, and that is a pretty uncharitable attempt at reframing my question. If you wanted a quote, here’s what I’d say:

“Because we can’t know if there was exploitation by existing parties who had discovered the vulnerability on their own, there are upsides to disclosing earlier so that affected users can take mitigating steps and review their systems for indicators of compromise. Additionally, the more projects the researchers pull into the loop for coordinated disclosure, the higher the likelihood that they further leak the vulnerability to more attackers.”


Idk why you felt the need to use quotes to wrap something I didn’t say. Despite the fact that I didn't say that, it's a much more interesting argument than your original statement implies, and it is unfortunate we didn't start there.

However, the issue is that we cannot know whether the attack space has been broadened or lessened as a consequence of this disclosure, because of how eager it was. If it weren't so eager, then we could be much more comfortable in suggesting that the attack space has probably been reduced.

Given that the vulnerability had been living in the Linux code base undetected for so long, and given that the distributions are the principal attack vector for the exploit, I think it's fair to state that by disclosing the exploit before the distributions were ready, the researcher made the situation worse and should reflect on their actions.


… I used quotes to wrap something that I was saying. I even called out that it was something I was saying, as a more accurate variant of what you’d claimed I meant.

And I prefaced my quotes with the statement "So I feel like the argument reduces into". I mean, idk what punctuation I'm supposed to use there that doesn't offend you, but I figured we can all read words and it was clear that I wasn't saying you said that, but rather that, as I read the argument, it was reducible to that, and I took issue with that potential reduction.

The idea about the available exploit space and how the actors within it might, or might not move is a much more interesting avenue of conversation and I thank you for elaborating on your initial comment. <3

I do, however, feel that it's hard to be confident about whether the attack space has been increased or reduced as a consequence of the eager disclosure. I feel we could make the case either way.


You could try to make that case either way, but as has been pointed out by others all over this thread, the system we've landed on (90+30) is the industry standard after over two and a half decades of experimentation.

Anything else is inevitably worse for the public good.

Having spent that entire time and then some on both offensive and defensive teams, I assure you longer delays after notification do NOT decrease the overall risk to the public.

There's a reason we've landed where we have as a security community.


The public disclosure page has a big blue "Get the exploit" button.

It's an advertisement for an unpatched critical exploit and apparently some kind of infosec company.


I don’t know if “cool” is the word I’d use, but there isn’t an established “right” way to disclose a vulnerability that you found outside of a contracted security review or other employment/contracting arrangement.

I only refer to my kids by their social security numbers until they do something suitably remarkable.

I guess it’s a good thing I’m not a SovCit or I’d just have to call them Traveller Three and Traveller Four


> Their valuation is backed by actual commerce.

Is it?


I think their annualized revenue is $25 billion, growing 3.4x year over year, with 1 billion weekly active users

Now do costs.

You're also ignoring the fact that these companies have been shifting things around to make their books look better than they actually are. Here's a good example explaining how META has been keeping debt and lease obligations off its books to fuel growth (and who's at risk if META doesn't pay up):

https://www.reddit.com/r/economy/comments/1soent7/if_the_ai_...


Many tech companies operate at a loss initially; that is the point of venture markets. In firms that invest heavily in R&D, the initial investment will pay off once the technology matures.

As for Meta’s shady accounting, I’d also note that most tech companies leverage whatever they can to remain competitive in a high-growth market. They certainly have the money to get away with it, though, for now.


> the initial investment will pay off once the technology matures

That's not guaranteed. Just look at the Metaverse. It did not pay off.


Hasn't it cost them $100s of billions to earn that money? Don't they need $100s of billions more to keep the ball rolling?

You were talking about actual commerce, though.

Is that revenue actually tied to something in the market, or is it just all of these companies and investors blowing air into the bubble?


Ah yes, cryptocurrency: built on an immutable ledger so that your money is safely secured by your private key until the project devs decide it’s not.

The article states it's just a fork, AKA a separate coin that copy/pastes the current Bitcoin ledger. Even if every dev wants this, that is a "hard fork" (not backwards compatible), and creates a new version of the coin. Ex: if you want to add a smiley face to some Bitcoin log output, you 1) make the change, and 2) nothing happens until 3) miners agree to use that new version.

Look into "Bitcoin Cash", a near identical coin except it has a larger block size. Completely different token and therefore has 0 effect on Bitcoin.
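The split mechanics can be sketched in a few lines. This is a toy illustration, not real Bitcoin consensus code: the block-size numbers match the 1 MB legacy limit and Bitcoin Cash's initial 8 MB limit, but everything else is simplified. The point is just that nodes only accept blocks valid under their own rules, so relaxing a rule makes the two populations stop agreeing on a chain tip.

```python
# Toy model of a hard fork: two node populations with different validity rules.
OLD_MAX_BLOCK_SIZE = 1_000_000   # legacy rule (1 MB, pre-fork Bitcoin)
NEW_MAX_BLOCK_SIZE = 8_000_000   # relaxed rule (Bitcoin Cash's initial limit)

def old_node_accepts(block_size: int) -> bool:
    """A legacy node validates blocks against the old limit."""
    return block_size <= OLD_MAX_BLOCK_SIZE

def new_node_accepts(block_size: int) -> bool:
    """A forked node validates blocks against the relaxed limit."""
    return block_size <= NEW_MAX_BLOCK_SIZE

# A 2 MB block is valid under the new rules but rejected by legacy nodes,
# so the two groups diverge onto separate chains: a hard fork.
big_block = 2_000_000
print(old_node_accepts(big_block))  # False
print(new_node_accepts(big_block))  # True
```

Note the asymmetry: a 500 KB block is valid under both rule sets, which is why the two chains share all history up to the fork point and only diverge once a rule-breaking block is mined.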


Might be more illuminating to look at Ethereum Classic.

How’s that doing after the fork led by the central owners of a decentralized blockchain, initiated to reallocate a big pile of money that the devs didn’t think was in the right spot?


Everything is safe until it isn’t

Well, yeah. But so much of the bitcoin/crypto pitch was specifically built on the idea that technology and “decentralized” blockchains were the right and necessary solution to protect us from this kind of centralized human manipulation.

How does an experienced teacher or pilot differ from an experienced game designer?

It’s not unreasonable to ask, but they can say “no”, and they are.
