Hacker News | tommy_axle's comments

If you're self-hosting the runners too, it's doable. Not 100% sure about Forgejo, but with Gitea and act_runner it's possible and pretty economical if you have a spare Mac mini.
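A sketch of the registration step, assuming Gitea's act_runner binary; the instance URL here is illustrative:

```shell
# Register the runner against your Gitea instance; the token comes
# from the Gitea admin UI (Site Administration > Actions > Runners).
./act_runner register --no-interactive \
  --instance https://gitea.example.com \
  --token <registration-token>

# Then start the runner so it begins picking up jobs.
./act_runner daemon
```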

Map to "jj" and call it a day, since your fingers are already on the home row.

Also, Ctrl + [ is the standard terminal/ASCII sequence for Esc, so it might be a bit more ergonomic than reaching for the Esc key.
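For anyone wanting the same thing in Vim itself, the usual mapping is a one-liner (a minimal sketch for your vimrc):

```vim
" Leave insert mode by typing jj instead of reaching for Esc.
inoremap jj <Esc>
```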


Yes, but then you get used to jj (or jk), which might not be available in other vi modes (shells' vi modes, gdb, the Glide browser?), and it's overall quite nice to be able to quickly escape any situation with a closer key.

Ctrl + [ would be acceptable if escaping weren't, imo, the most important function of the editor.

EDIT: My bad, you can do it with Glide apparently


I've yet to come across something with vim bindings that lacks a .vimrc equivalent where you can map 'jk'. Either way, falling back to Esc is no more annoying than it was in the first place.

Claude's vim bindings don't support Esc remapping, unfortunately for muscle memory: https://github.com/anthropics/claude-code/issues/25306

Well, I have given at least one example. Do you not use bash/zsh/fish/nushell vi modes?

Do you not use web search to verify assumptions?

I should have. I don't know why I assumed the line editors couldn't handle two keys in a row.
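For reference, zsh's line editor handles the two-key mapping fine (a sketch, assuming zsh with vi mode enabled):

```shell
# ~/.zshrc: enable vi mode, then map jj to escape to command mode
bindkey -v
bindkey -M viins 'jj' vi-cmd-mode
```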

"jk" is even faster (you get to "roll" your fingers)

And, if you were already in normal mode, it keeps you on the same line (unless you were on the last one).

The Go version came in handy (https://github.com/badsectorlabs/copyfail-go), especially for systems without the very latest Python (os.slice).

Slightly more readable Python version at https://gist.github.com/grenkoca/b82281a4706e936072979acf54b...


It could be worse (we'll see), as this could be a wild ride along the lines of react2shell or some of the compromised packages of late.

I wouldn't go that far. Right tool for the job, as always. Axios offers a lot over fetch for all but the simplest use cases, plus you get to take advantage of the ecosystem: need offline support, and axios-cache-interceptor already exists. Sure, you can do all of those things with fetch, but you need more to go with it, taking you right back to just using axios. Also, is no one annoyed that you can't replay a fetch the way you can an XHR? Same with Express: it solves a problem reliably.
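The "more to go with it" part for bare fetch tends to look something like this hand-rolled wrapper (a minimal sketch; the names are illustrative, not from any library):

```javascript
// Replicates a slice of what axios gives you for free:
// a shared base URL, default headers, status checking, JSON parsing.
function makeClient(baseURL, defaultHeaders = {}) {
  return async function request(path, options = {}) {
    const res = await fetch(baseURL + path, {
      ...options,
      headers: { ...defaultHeaders, ...options.headers },
    });
    // fetch does not reject on HTTP errors, so check manually.
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  };
}
```

Once you've written the wrapper, error handling, and retry/cache logic yourself, you've effectively rebuilt a worse axios.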


OpenClaw acquisition at work.


Any particular evidence for this other than the conjecture that it might be related?

To me it seems like just a natural evolution of Codex and a direct response to Claude Cowork, rather than something fully claw-like.


Wrong acquisition.


Pick a decent quant (Q4 to Q6 K_M), then use llama-fit-params and try it yourself to see if it's giving you what you need.


I have found llama-fit sometimes just selects a way too conservative load, with VRAM to spare.


Nah, prepending will lead to a messier diff than the parent example.


I'm guessing this is also calculating based on the full context size the model supports, which can be misleading depending on your use case. Even on a small consumer card with Qwen 3 30B-A3B, you probably don't need 128K context depending on what you're doing, so a smaller context and some tensor overrides will help. llama.cpp's llama-fit-params is helpful in those cases.
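A sketch of what that looks like with llama.cpp's llama-server; the model filename and the tensor-override regex are illustrative, and you'd tune them to your card:

```shell
# Run with a 16K context instead of the full 128K, offload all layers,
# and keep the MoE expert tensors on CPU to fit the rest in VRAM.
llama-server -m qwen3-30b-a3b-q4_k_m.gguf \
  -c 16384 \
  -ngl 99 \
  -ot "exps=CPU"
```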


More like Redux vs Zustand. Picking Zustand was one of the standout choices for me.

