Hacker News | nh2's comments

This community wiki page is somewhere between 10 and 20 years out-of-date.

https://wiki.haskell.org/index.php?title=Debugging&action=hi...

In particular, it makes no mention of the new actual ... debugger:

https://well-typed.github.io/haskell-debugger/

https://discourse.haskell.org/t/the-haskell-debugger-for-ghc...


If you want to use the suggested mitigation (disabling the kernel module `algif_aead` with a modprobe config), and you only want to check whether the module can be loaded, without running that whole obfuscated shell script that gets an actual root shell, here is a readable version of its first few lines:

    python3 -c 'import socket; s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0); s.bind(("aead","authencesn(hmac(sha256),cbc(aes))")); print("algif_aead probably successfully loaded, mitigation not effective; remove again with: rmmod algif_aead")'
Similarly, when the mitigation is in place,

    modprobe algif_aead
should fail with an error.

    modprobe algif_aead
    modprobe: FATAL: Module algif_aead not found in directory /lib/modules/6.14.3-x86_64-linode168
Yet this kernel is vulnerable.

That would suggest that CRYPTO_USER_API_AEAD=y is set in your kernel config. In that case you can disable it by setting it to "n", recompiling your kernel, and putting the new kernel in place.

Indeed, no modprobe.d config will help when the feature is compiled into the kernel ("=y") instead of being built as a runtime-loadable module.
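To check which case applies to your running kernel, you can grep the kernel config (a sketch; config file locations vary by distro, and not every kernel exposes /proc/config.gz):

```shell
# Look up CRYPTO_USER_API_AEAD in the running kernel's config.
# "=y" means built-in (modprobe.d cannot help), "=m" means modular.
cfg=""
if [ -r /proc/config.gz ]; then
    cfg=$(zgrep CRYPTO_USER_API_AEAD /proc/config.gz)
elif [ -r "/boot/config-$(uname -r)" ]; then
    cfg=$(grep CRYPTO_USER_API_AEAD "/boot/config-$(uname -r)")
fi
echo "${cfg:-no readable kernel config found}"
```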

On a git repo that has these remotes:

    https://github.com/torvalds/linux.git
    https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
running a search for commit a664bf3d603d's commit message:

    git log --all --grep 'crypto: algif_aead - Revert to operating out-of-place' '--format=%H' | xargs -I '{}' git tag --contains '{}' | sort -u
outputs these tags as having the fix:

    v6.18.22
    v6.18.23
    v6.18.24
    v6.18.25
    v6.19.12
    v6.19.13
    v6.19.14
    v7.0
    v7.0.1
    v7.0.2
    v7.0-rc7
    v7.1-rc1

Here's the diff if you wanna play in your source (Gentoo, looking at you):

https://github.com/torvalds/linux/commit/a664bf3d603d

6.18.25-gentoo-x86_64 has the patch for Gentoo.


Thanks a lot!!!

I was running Gentoo "6.18.18" (amd64) and the exploit worked (and all other shells which I PREVIOUSLY opened could then just execute "su -" without a password to become "root"). Temporarily doing a "modprobe -r algif_aead" on-the-fly did not fix it: I was still able to switch to "root" from the unprivileged user by executing just "su -".

"6.18.25" fixed it (module "algif_aead" still running).

- Maybe older Kernel versions that don't contain the fix should be blacklisted?

- FYI in Gentoo I had to recompile "sys-fs/zfs-kmod" after the minor kernel upgrade (I initially skipped it, but after rebooting with the new kernel I could not mount my raidz1) -> the same might be needed for other external modules.


Yeah, in theory genkernel should handle ZFS, but since I'm zfs_on_root because I like living dangerously, I have a one-liner that genkernels, then re-emerges zfs, then rebuilds the initramfs.

distros might also apply patches to their own packages, so this isn't a perfect signal (i.e. if you have one of those versions, you almost certainly have the fix, but if you don't, it might still be fixed but you'll need to check the distro's package information to know for sure).

Just curious.. do they list all those kernel versions because there is a regression in versions after 6.18.22?

i.e. does v6.19.0 have the flaw in it?


No, it was fixed initially in 7.0, and the patch then applied to the 6.18 and 6.19 branches, fixing the existing bug in versions 6.18.22 and 6.19.12. The bug exists in 6.19.0 to 6.19.11, but not as a regression - those were all released before the bug was fixed.

I tried Zed last month but found that it has high CPU usage even when idle (up to 50% of one core of my i7-7500U).

This is even higher CPU usage than my VSCode causes.

Sublime does not do that; in fact it has 0% CPU usage when idle:

    sudo strace -fyp "$(pidof sublime_text)"
shows that Sublime issues no syscalls when idle, as it should.

(Note, you need to either unfocus it so that the caret stops flashing, or switch from fading caret to fixed / non-fading caret, otherwise it necessarily has to do syscalls to draw itself.)

Zed spams syscalls even when its screen is entirely still:

    strace -fyp "$(pidof zed-editor)"
In fact Zed makes 800 syscalls per second when completely idle and unfocused.

Syscall spamming is one of the main reasons why computers get slow when many apps are running.

Good software does not do that; when idle, it should only consume RAM, not CPU.
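If you want a number without strace, one way is to sample the process's CPU time from /proc (a Linux-only sketch; substitute the editor's pid for `os.getpid()`):

```python
import os, time

def cpu_jiffies(pid):
    # In /proc/<pid>/stat, utime and stime are the 14th and 15th fields;
    # split after the ")" so process names containing spaces don't break parsing.
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[11]) + int(fields[12])

pid = os.getpid()  # replace with the editor's pid, e.g. from pidof
before = cpu_jiffies(pid)
time.sleep(1)
used = cpu_jiffies(pid) - before
print(f"jiffies of CPU used while 'idle': {used}")  # near 0 for a sleeping process
```

A truly idle program should show (close to) zero here over any sampling window.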

Aside: Browsers, and Electron, seem to always syscall-spam no matter what, which is probably a key reason why people feel that all Electron apps bog down their computers. When your computer gets faster, the software just does more syscall loops per second, for unchanged misery.


From what I recall, they generally avoid caching anything and just try to repaint the whole UI really, really fast on every frame, so I think that's the design.

It's like how a video game renders, which is their stated goal from the beginning.

I always thought their stated design goals were a bit... wonky.


If you look into strace of something like IMGUI demo on say sdl2+opengl backend, you'll see about same syscall/sec number at 60 fps, but it'll all be sequences of writev, recvmsg, poll, clock_gettime and DRM_IOCTL_SYNCOBJ_ ioctls. Which is basically just polling for input and submitting gpu command buffers, nothing expensive, and nothing a cache can help with.

That does not really matter, though: even immediate-mode programs should simply not draw new frames when the program logic says there are no new inputs that could result in different pixels (e.g. no user input, animations, notifications, or text content changes). One does not need to "poll" for input.

> Even immediate-mode approach programs should just not be drawing new frames

They can't not. When the backend asks for a frame you give it a frame, or the result is not defined (a black rectangle instead of window content usually).

Even if you don't redraw anything at all, it's still a 2D blit from GPU memory, which is two triangles, a texture, and a sync object. Or you need to tell the window manager what the window content is this frame some other way, which also inevitably crosses process boundaries and thus is a bunch of syscalls. Plus you need to poll for input anyway.

edit: by "poll for input" I mean the literal poll() syscall, which is of course the basis of async and all. How else do you get to know there was any input?
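To be fair to poll() itself: a blocking poll() costs nothing while waiting; it is only per-frame polling with a zero timeout that burns CPU. A minimal stdlib sketch:

```python
import os, select

# A process blocked in poll() sleeps in the kernel at 0% CPU until the
# fd becomes readable; polling with timeout=0 every frame instead wakes
# the process constantly, even when there is nothing to do.
r, w = os.pipe()
p = select.poll()
p.register(r, select.POLLIN)

os.write(w, b"input")   # simulate input arriving
events = p.poll()       # would block indefinitely (using no CPU) with no input
print(bool(events[0][1] & select.POLLIN))  # -> True
```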


But a video game does!

You see why I think their logic that an IDE should work exactly like a video game does is not as strong as it sounds at first.

They claim browser engines just lack the raw speed to build an experience like Zed, but I know of no reason that it should be true.


I've found that some of the language servers can really grind up a storm but Zed itself is usually pretty lightweight.

Can you repro my finding?

I'm running Zed with only one empty text file open, so language servers should not be in use.

How do you measure "pretty lightweight"?


I've always thought of it as lightweight, but checking it now, wow.

> Software update deletes this memory.

Are you sure? I believe Sublime preserves all your unsaved tabs even on update.


Last time I updated (half a year ago), it deleted tabs. Since then I haven't been brave enough to update again, as I have too many unsaved tabs :)

i lost all the open tabs last time i upgraded sublime.

burned once, twice shy; i wouldn't update without spending an hour making up names for random junk files


I have not lost any Sublime tab in 15 years (I have tabs this old).

Sublime also saves a backup of its state files next to the state files in your home dir, so you can restore in case anything ever goes wrong (e.g. bugs in the new version).

The .sublime_session state files are JSON, easy to read for a human.

> spending an hour making up names for random junk files

That is completely unnecessary. You can just back up the '.sublime_session' file that contains all of that before an upgrade, if you are worried. Sublime already stores all its state in 1 file; manually spreading that across N files is unfun busywork. A quick web search reveals where the file lives, by the way.

(I perpetually have 40 Sublime windows open, each one with tens to hundreds of tabs. My 'Auto Save Session.sublime_session' is 70 MB.)
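If you do want a manual safety net before an upgrade, copying the session file is enough (a sketch; the path below is the usual Linux location and may differ by OS and Sublime version):

```shell
# Back up Sublime's auto-saved session before upgrading.
session="$HOME/.config/sublime-text/Local/Auto Save Session.sublime_session"
if [ -f "$session" ]; then
    cp -a "$session" "$session.bak-$(date +%F)"
    echo "backed up session"
else
    echo "no session file at: $session"
fi
```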


It's a shame I didn't know that. Thanks.

But those are all unsafe, taking raw strings.

Why can I easily use "*at" functions from Python's stdlib, but not Rust's?

They are much safer against path traversal and symlink attacks.

Working safely with files should not require *const c_char.

This should be fixed.
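For comparison, Python's stdlib exposes the *at pattern through `dir_fd` (a POSIX-only sketch; `dir_fd` is not supported on Windows, see `os.supports_dir_fd`):

```python
import os, tempfile

# Open a directory once, then open files relative to its fd; this avoids
# re-resolving the directory path (and thus some symlink-swap races).
d = tempfile.mkdtemp()
dfd = os.open(d, os.O_RDONLY | os.O_DIRECTORY)
fd = os.open("note.txt", os.O_WRONLY | os.O_CREAT, 0o600, dir_fd=dfd)
os.write(fd, b"hello")
os.close(fd)
os.close(dfd)
print(open(os.path.join(d, "note.txt")).read())  # -> hello
```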


> But those are all unsafe, taking raw strings.

The parent was asking for access to the C syscall, and C syscalls are unsafe, including in C. You can wrap that syscall in a safe interface if you like, and many have. And to reiterate, I'm all for supporting this pattern in Rust's stdlib itself. But openat itself is a questionable API (I have not yet seen anyone mention that openat2 exists), and if Rust wanted to provide this, it would want to design something distinct.

> Why can I easily use "*at" functions from Python's stdlib, but not Rust's?

I'm not sure you can. The supported pattern appears to involve passing the optional `opener` parameter to `os.open`, but while the example of this shown in the official documentation works on Linux, I just tried it on Windows and it throws a PermissionError exception because AFAIK you can't open directories on Windows.


I took parent's message to be asking why the standard library fs primitives don't use `at` functions under the hood, not that they wanted the `at` functions directly exposed.

> which Rust's stdlib chose not to expose

i.e. expose through things like `File::open()`.


> why the standard library fs primitives don't use `at` functions under the hood

In this case it wouldn't seem to make sense to use `at` functions to back the standard file opening interface that Rust presents, because it requires different parameters, so a different API would need to be designed. Someone above mentioned that such an API is being considered for inclusion in libstd in this issue: https://github.com/rust-lang/rust/issues/120426


> AFAIK you can't open directories on Windows.

You can, but you have to go through the lower-level API: NtCreateFile can open a directory, and you can pass in a RootDirectory handle to subsequent calls to make them handle-relative.


You can open directories using high level win32 APIs. What you need NtCreateFile for is opening files relative to an open directory.

The nix crate provides the safe wrappers. https://docs.rs/nix/latest/nix/fcntl/fn.openat2.html

> The agent cannot learn from its mistakes. The agent will never produce any output which will help you invoke future agents more safely

That is not entirely true:

Given that more and more LLM providers are sneaking in "we'll train on your prompts now" opt-outs, you deleting your database (and the agent producing repenting output) can reduce the chance that it'll delete my database in the future.


Actually no, it will increase it. Because it’ll be trained with the deletion command as a valid output.

Exactly. It’s just giving the LLM a token pattern, and it’s designed to reproduce token patterns. That’s all it does. At some point, generating a token pattern like that again is literally its job.

Why would one set up reinforcement learning like that?

The point of creating samples from user data should surely be to label them good or bad, based on the whole conversation.

You look at what happened eventually, judge the outcome as bad, and thus train the "rm" token in the middle to be less likely.


It is possible, but it requires specifically labelling the data. You have to craft question response pairs to label. But even then the result is only probabilistic.

The LLM in this case had been very thoroughly trained and instructed quite specifically not to do many of the things it then actually went off and did.

It may be that there's a kind of cascade effect going on here. Possibly once the LLM breaks one rule it's supposed to follow, this sets it off on a pattern of rule violations. After all what constitutes a rule violation is there in the training set, it is a type of token stream the LLM has been trained on. It could be the LLM switches into a kind of black hat mode once it's violated a protocol that leads it down a path of persistently violating protocols, and given the statistical model some violations of protocol are always possible.

My mother was a primary school teacher. She used to say that the worst thing you can say to a bunch of kids leaving class down the hall is "don't run in the hall". It puts it in their minds. You need to say "Please walk in the hall"; then they'll do it.


While I generally agree, this is an exaggeration:

> This level of production grade fail over and simplicity was point and click, 10 years ago.

While some of the tools are _designed_ for point and click, they don't always work. Mostly because of bugs.

We run Ceph clusters under our product, and have seen a fair share of non-recoveries after temporary connection loss [1], kernel crashes [2], performance degradations on many small files, and so on.

Similarly, we run HA Postgres (Stolon), and found bugs in its Go error checking that cause failure to recover from crashes and full-disk conditions [3] [4]. This week, we found that full-disk situations will not necessarily trigger failovers. We also found that if DB connections are exhausted, the daemon that's supposed to trigger Postgres failover cannot connect to do that (we are currently testing the fix).

I believe that most of these things will be more figured out with hosted cloud solutions.

I agree that self-hosting HA with open-source software is the way to go. These programs are good, and the more people use them, the fewer bugs they will have.

But I wouldn't call it "trivial".

If you have large data, it is also brutally cheaper: for the cost of hosting on AWS we could hire 10 full-time sysadmins, versus doing our own Hetzner HA with Free Software, for which we only need ~0.2 sysadmins. And it still has higher uptime than AWS.

It is true that Proxmox is easy to set up and operate. For many people it will probably work well for a long time. But when things aren't working, it's not so easy anymore.

[1]: "Ceph does not recover from 5 minute network outage because OSDs exit with code 0" - https://tracker.ceph.com/issues/73136

[2]: "Kernel null pointer dereference during kernel mount fsync on Linux 5.15" - https://tracker.ceph.com/issues/53819

[3]: https://github.com/sorintlab/stolon/issues/359#issuecomment-...

[4]: https://github.com/sorintlab/stolon/issues/247


Can't you use Codex (which is open source, unlike Claude Code) with Claude, even via Amazon Bedrock?


Codex with Anthropic's models is not as good as using the models with the harness they were trained for. The same goes vice versa, too.


My friend recommended adding a small-percentage late payment fee, stated in the contract and on each invoice.

Haven't really used it yet because we don't have a problem with late payments, but I do think it would work: our B2B customers are usually very appreciative of saving small percentages when we offer it, and unlikely to just give up that money by paying late.


it doesn’t work if they are insolvent, and it can also backfire if they see this clause as a way to get a cheap cash loan. you should still have the clause, but i think of it as a tool for the collections attorney to use if the customer defaults.


You set a rate that is punitive.

It doesn’t work when solvency is an issue, but you should know your customers and mitigate that risk accordingly.


The rates aren't cheap. The standard late payments I've seen work out to approximately 19.5-20% APR.
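For scale (my own arithmetic, not a figure from the thread): a fee of 1.5% per month, a common contract number, compounds to just under 20% APR:

```python
# A monthly late fee, compounded over 12 months, expressed as APR.
monthly = 0.015
apr = (1 + monthly) ** 12 - 1
print(f"{apr:.2%}")  # -> 19.56%
```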


Start doing it now, before it becomes a problem!

