crazy how we're all just pretending that there aren't certain topics concerning current events that are absolutely taboo or heavily disincentivized to discuss, and will result in a dogpiling by certain special interest groups. we all know who they are and yet we all tacitly accept it.
It means they have the same levers somewhere in the training process. Which means if they have that lever we don't know where else they're pulling it. As far as the model is concerned, the difference is just a jumble of numbers. Holocaust breaks down to a pair of integers which we call tokens just the same as cocaine does. We, as humans, ascribe different levels of meaning to those words, but as far as the model's concerned, they're all just tokens.
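The point above about words reducing to token IDs can be illustrated with a toy sketch. This is a hypothetical minimal vocabulary, not any real tokenizer; real models use learned BPE vocabularies with tens of thousands of entries, but the principle is the same: the model only ever sees integers.

```python
# Toy illustration: to a language model, every word is just an integer ID.
# This vocab and its IDs are made up for demonstration purposes.
vocab = {"the": 3, "holocaust": 17, "cocaine": 42}

def encode(text):
    """Map each whitespace-separated word to its integer token ID."""
    return [vocab[word] for word in text.lower().split()]

print(encode("the holocaust"))  # [3, 17]
print(encode("the cocaine"))    # [3, 42]
```

Nothing in the integer sequences distinguishes a "sensitive" topic from any other; any such distinction has to be imposed during training or filtering, which is the commenter's point about levers.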
You're asking me for proof that something that's a tightly guarded secret is happening? I don't work at OpenAI or anything, so I don't know why you think I'd have that. As for doing it for fun: no, this is a serious matter to me. Is it not for you?
Still, if you ask ChatGPT or Claude for details on what's going on in the West Bank, Israel, and Gaza, there's a specific viewpoint being pushed. I am not remotely qualified to know what is actually going on, but I know not to believe what ChatGPT says about it.
I was able to pull up an example of a Chinese model doing censorship in 2 seconds. So there is clearly a difference in the type of censorship happening if it’s harder than that for you to prove.
Your example is already under dispute by actual humans. Expecting non-AGI to get it right is not realistic.
Please point to an example where the information (or more importantly its practical application) is censored but is not legitimately harmful and/or illegal.
At this point, I still don't see a reason to use Opus. I'm happy with Sonnet's performance at a third of the price. I've tried Opus several times without seeing a big gain.
Wonder how Bambu can prevent this kind of fork, where there's no code, just instructions telling an AI how to build a network plugin from scratch.