Hacker News

During training they apply a lot of guardrails to the format of the reasoning-token output. They don't just reward getting the correct answer; they also reward human-readable output. If they didn't, the reasoning tokens that most efficiently reach the correct final answer would most likely look like gibberish.
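A minimal sketch of that idea: score a rollout on answer correctness plus a weighted readability term on the reasoning. Everything here (function names, the crude ASCII-word readability proxy, the 0.3 weight) is illustrative, not any lab's actual training code.

```python
# Hypothetical combined reward: correctness plus a guardrail term that
# rewards human-readable reasoning tokens. All names are illustrative.

def readability_score(reasoning: str) -> float:
    """Crude proxy: fraction of tokens that are plain ASCII words."""
    tokens = reasoning.split()
    if not tokens:
        return 0.0
    readable = sum(t.isascii() and any(c.isalpha() for c in t) for t in tokens)
    return readable / len(tokens)

def combined_reward(answer: str, gold: str, reasoning: str,
                    readability_weight: float = 0.3) -> float:
    """Binary correctness reward plus a weighted readability bonus."""
    correctness = 1.0 if answer.strip() == gold.strip() else 0.0
    return correctness + readability_weight * readability_score(reasoning)

# Same correct answer, but gibberish reasoning earns less total reward.
print(combined_reward("42", "42", "Add 40 and 2 to get 42."))
print(combined_reward("42", "42", "∮Ω†† 42 ⊕⊗∅"))
```

Without the readability term, the optimizer is free to drift toward whatever token patterns reach the answer fastest, readable or not.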

The relationship that matters most is the one between the output tokens in the model's vector space, and that is something hidden we will never see.



