
I've been using the latest z.ai and codex models since last September. Each release has been an improvement.

codex handles longer sessions, but the quality seems to decline: it tends to over-engineer and lose focus. It will happily add slop on top of slop...which may pass the immediate test of "code works" but doesn't pass my criterion of "code as craft".

I'm using z.ai's GLM with opencode. It's obvious when GLM loses its mind once the session gets too long.

I've been using AI to support programming for around three years now. The models have gotten amazing. However, unless there is a significant breakthrough, I've determined that it's best for me to stick to short sessions.

I a) organize my work, b) improve my AGENTS.md and ensure the source has appropriate comments to guide the models toward the right patterns and separation of concerns, c) use shorter sessions, and d) review and test without AI. This approach means I still own my code. The AI is just an assistant.
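For context, a minimal AGENTS.md along these lines might look like the sketch below. The section names and rules are illustrative, not a standard; the point is just to pin down patterns and separation of concerns before a session starts:

```markdown
# AGENTS.md (illustrative sketch)

## Architecture
- Keep HTTP handlers thin; business logic lives in the service layer.
- One module per domain concept; no cross-imports between domains.

## Conventions
- Follow the error-handling pattern in existing code before adding new ones.
- Match the naming and comment style of the surrounding file.

## Session rules
- Work only on the task stated at the start of this session.
- Do not refactor unrelated code; flag it instead.
```

A file like this travels with the repo, so each short session starts from the same guardrails instead of re-deriving them from a long conversation.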

With this approach, GLM-5.1 is an excellent model, and I never run out of my token allotment on the z.ai or codex plans. At this point, I only keep my OpenAI subscription because the ChatGPT desktop app is excellent at long web research tasks, and I get codex with it.
