
> and how many people would fall prey to it?

How exactly? I could be wrong, but I believe it won't make things any worse. I see three scenarios:

First, let's say someone is an antivaxer, already subscribed to that belief. If they're reading some posting, the fact that some other antivaxer vouched for that post's author doesn't change much.

If someone hasn't formed an opinion yet, they won't have any trust connections on that matter, so if they're reading an anti-vaccine posting, the vouches won't have any impact. Their query to the web of trust would come back with "no idea, you don't have any trust anchors set on this matter, go figure out whom you trust first".

It is only if they have a friend they trust on the subject of vaccines, and that friend happens to be an antivaxer, that they may get a "your friend vouches for this" result. This is the only scenario I see where someone could be swayed towards a certain opinion, and I'd be damned if this doesn't already happen without any technology, completely organically. It's their peer group that sways them, so the technology in question here doesn't make it worse or better, it just accelerates the process.

Again, I could be wrong about any or all of those. I readily recognize all of this stems from my belief in humanity (in general) as something that can work its way through any amount of dissenting opinions and find out the truths. I mean, historically, society has worked this way. As for the controversial belief that accelerating this is a good idea - I admit I don't have any rational arguments for or against this part; it's just a belief. I guess it comes from hating to watch all those slow-simmering, almost-perpetual memetic wars and being unhappy about their fallout, yet not believing that forcibly silencing opposing forces is a viable approach.

Oh, and just to clarify: it's extremely important that trust must not be absolute (exceptions apply for the closest peers, e.g. partners or parents), but limited to a subject. E.g. I may trust someone on subject X while knowing they're avid subscribers to what I think is a false belief on subject Y, so I explicitly distrust them on that matter.
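To make the subject-scoped idea concrete, here's a minimal sketch in Python. All names and structure here are invented for illustration: trust is recorded per (peer, subject) pair, explicit distrust is a separate stance, and the absence of an entry means "no opinion".

```python
from enum import Enum

class Stance(Enum):
    TRUST = 1
    DISTRUST = -1

class TrustStore:
    """Hypothetical per-subject trust store; trust is never global."""

    def __init__(self):
        # (peer, subject) -> Stance; a missing key means "no opinion yet"
        self.edges = {}

    def set_stance(self, peer, subject, stance):
        self.edges[(peer, subject)] = stance

    def stance(self, peer, subject):
        return self.edges.get((peer, subject))

store = TrustStore()
store.set_stance("alice", "metallurgy", Stance.TRUST)
store.set_stance("alice", "nutrition", Stance.DISTRUST)

# Trusting Alice on metallurgy says nothing about her on vaccines:
assert store.stance("alice", "metallurgy") is Stance.TRUST
assert store.stance("alice", "nutrition") is Stance.DISTRUST
assert store.stance("alice", "vaccines") is None
```

The key design choice is that the subject is part of the key, so there's no way to query (or leak) a person's "overall" trustworthiness.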



"If someone had not made any opinion yet - they won't have any trust connections on that matter, so if they're reading an anti-vaccine posting,..."

They see that it is vouched for by 1000 other people. They see a countering article that is also vouched for by 1000 people, but those people are all so-called "experts", and they've read HN and know that "experts" are biased and have perverse incentives and such. Or they just flip a coin. Either way, they believe the anti-vax article.

Because the feedback loop is very open, they see no consequences for their choice. So they also vouch for the anti-vax article.


> They see that it is vouched for by 1000 other people.

Oh, no, no - if I understood your idea correctly, that's something I really want to avoid. This sort of query must be explicitly against the recommended/designed use. It cannot be a popular vote; it must be peer-to-peer. A web rather than a cesspool. Such a query must result in "no paths to any trusted sources found".

Just like in the real world - my opinion on, say, some process in metallurgy (something I have no clue about) cannot depend on the opinion of someone I have no connections with, even if they're a renowned expert in that field. I may trust them through some institution, certification program or something like that - say, they graduated from a university that I consider trustworthy. But if there are no connections at all, I can't say anything about that person's credibility. Upon reading their opinion, the best I can do is treat it as self-appraised.

This web must be designed so that people one doesn't know - people who aren't connected to one's node in the web through peers one knows and trusts - don't have any significant weight, no matter how many there are. One should be free to make arbitrary queries, but this online popular-vote thing is already known to be a very flawed metric.

Not to mention it would require solving the very hard one-true-identity problem. If all your connections are people you know, and their connections are people they know, identity becomes a non-issue. As long as the small-world hypothesis is close enough to reality, this should work. And it would remain a non-issue even if someone invents a fake identity - it'll either be a recognized, legitimate alter ego (a pseudonymous account whose opinions people still listen to), or a meaningless fake person that no one knows (who may vouch for whatever they like, since their only connection is through their creator).
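The "paths, not popularity" query described above can be sketched as a breadth-first search over subject-scoped trust edges. Everything here is hypothetical illustration, not a real protocol: a vouch counts only if a chain of trusted peers connects the reader to a voucher, so 1000 disconnected strangers vouching yields the same answer as zero.

```python
from collections import deque

def trusted_path(trust_edges, reader, vouchers, subject, max_hops=4):
    """Return a chain of trusted peers from reader to some voucher, or None.

    trust_edges: dict mapping (person, subject) -> set of peers that
    person trusts on that subject.  Popularity has no weight: only
    reachability through the reader's own trust edges matters.
    """
    frontier = deque([(reader, [reader])])
    seen = {reader}
    while frontier:
        person, path = frontier.popleft()
        if person in vouchers and person != reader:
            return path
        if len(path) > max_hops:
            continue
        for peer in trust_edges.get((person, subject), set()):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, path + [peer]))
    return None  # "no paths to any trusted sources found"

edges = {
    ("me", "vaccines"): {"friend"},
    ("friend", "vaccines"): {"doctor"},
}

# A voucher reachable through peers I trust produces a path:
assert trusted_path(edges, "me", {"doctor"}, "vaccines") == ["me", "friend", "doctor"]

# 1000 strangers vouching carry no weight without a path:
strangers = {f"stranger{i}" for i in range(1000)}
assert trusted_path(edges, "me", strangers, "vaccines") is None
```

This also illustrates why fake identities are harmless under the small-world assumption: an invented account has no inbound trust edges from anyone but its creator, so it can never appear on a path unless the reader already trusts the creator.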



