> If you walk a model like ChatGPT through that reasoning, you’ll often wind up in a spot where the model readily admits that a clear conclusion is logically entailed but it is absolutely forbidden from uttering it.

Do you have an example of this? I want to try it.