During training they put heavy guardrails on the format of the reasoning tokens. The reward isn't just for reaching the correct final answer; they also reward human-readable output. Without that, the reasoning tokens that most efficiently lead to the correct answer would most likely look like gibberish.
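A minimal sketch of the idea, assuming a weighted combination of a correctness reward and a readability reward. All names and weights here are illustrative, not any lab's actual reward model; the readability score is assumed to come from some judge scoring how legible the chain of thought is.

```python
def combined_reward(answer_correct: bool, readability_score: float,
                    w_correct: float = 1.0, w_readable: float = 0.3) -> float:
    """Reward a correct final answer, plus a bonus for readable reasoning.

    readability_score is assumed to lie in [0, 1] (hypothetical judge output).
    """
    return w_correct * float(answer_correct) + w_readable * readability_score

# With w_readable > 0, legible reasoning is rewarded alongside correctness.
# With w_readable = 0, only the answer matters, and the policy is free to
# drift toward efficient but illegible reasoning tokens.
```

The point of the extra term is that it shapes which reasoning traces the policy converges to, not just which answers it gives.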
What matters most is the relationship between the output tokens in the model's vector space, and that is something hidden we will never see.