If you want to use the suggested mitigation (disabling the kernel module `algif_aead` with a modprobe config), and you only want to check whether the module can still be loaded, without running the whole obfuscated shell code that yields an actual root shell, here is a readable version of its first few lines:
python3 -c 'import socket; s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0); s.bind(("aead","authencesn(hmac(sha256),cbc(aes))")); print("algif_aead probably successfully loaded, mitigation not effective; remove again with: rmmod algif_aead")'
That would suggest that CRYPTO_USER_API_AEAD=y in your kernel config. You can disable it in that case by setting that to "n", recompiling your kernel, and putting the new kernel in place.
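For reference, the modprobe-config mitigation is usually a one-line file; the filename below is a made-up example, and note that a bare `blacklist` line only stops alias-based autoloading, while an `install` override blocks explicit loads too:

```
# /etc/modprobe.d/disable-algif-aead.conf (example filename)
install algif_aead /bin/false
```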
I was running Gentoo "6.18.18" (amd64) and the exploit worked (and all other shells which I PREVIOUSLY opened could then just execute "su -" without a password to become "root"). Temporarily doing a "modprobe -r algif_aead" on-the-fly did not fix it: I was still able to switch to "root" from the unprivileged user by executing just "su -".
"6.18.25" fixed it (with module "algif_aead" still loaded).
- Maybe older Kernel versions that don't contain the fix should be blacklisted?
- FYI, in Gentoo I had to recompile "sys-fs/zfs-kmod" after the minor kernel upgrade (I initially skipped it, but after rebooting into the new kernel I could not mount my raidz1); the same might be needed for other external modules.
Yeah, in theory genkernel should handle ZFS, but since I'm zfs_on_root because I like living dangerously, I have a one-liner that runs genkernel, then re-emerges zfs, and then rebuilds the initramfs.
distros might also apply patches to their own packages, so this isn't a perfect signal (i.e. if you have one of those versions, you almost certainly have the fix, but if you don't, it might still be fixed but you'll need to check the distro's package information to know for sure).
No, it was fixed initially in 7.0, and the patch then applied to the 6.18 and 6.19 branches, fixing the existing bug in versions 6.18.22 and 6.19.12. The bug exists in 6.19.0 to 6.19.11, but not as a regression - those were all released before the bug was fixed.
I tried Zed last month but found that it causes high CPU usage even when idle (up to 50% of one core of my i7-7500U).
That is even higher CPU usage than my VS Code causes.
Sublime does not do that; in fact it has 0% CPU usage when idle:
sudo strace -fyp "$(pidof sublime_text)"
shows that Sublime issues no syscalls when idle, as it should.
(Note: you need to either unfocus it so that the caret stops flashing, or switch from the fading caret to a fixed/non-fading caret; otherwise it necessarily has to make syscalls to draw itself.)
Zed spams syscalls even when its screen is entirely still:
strace -fyp "$(pidof zed-editor)"
In fact Zed makes 800 syscalls per second when completely idle and unfocused.
Syscall spamming is one of the main reasons why computers get slow when many apps are running.
Good software does not do that; when idle, it should only consume RAM, not CPU.
Aside: Browsers, and Electron, seem to always syscall-spam no matter what, which is probably a key reason why people feel that all Electron apps bog down their computers. When your computer gets faster, the software just does more syscall loops per second, for unchanged misery.
From what I recall they generally avoid caching anything and just try to repaint the whole UI really, really fast on every frame so I think that's the design.
It's like how a video game renders, which is their stated goal from the beginning.
I always thought their stated design goals were a bit... wonky.
If you look at an strace of something like the IMGUI demo on, say, the SDL2+OpenGL backend, you'll see about the same syscalls/sec number at 60 fps, but it'll all be sequences of writev, recvmsg, poll, clock_gettime and DRM_IOCTL_SYNCOBJ_ ioctls. That is basically just polling for input and submitting GPU command buffers: nothing expensive, and nothing a cache can help with.
That does not really matter, though: even immediate-mode programs should simply not draw new frames when the program logic says there are no new inputs that could result in different pixels (e.g. no user input, animations, notifications, text content changes, etc.). One does not need to "poll" for input.
> Even immediate-mode approach programs should just not be drawing new frames
They can't not. When the backend asks for a frame, you give it a frame, or the result is undefined (usually a black rectangle instead of window content).
Even if you don't redraw anything at all, it's still a 2D blit from GPU memory, which is two triangles, a texture, and a sync object. Or you need to tell the window manager what the window content is this frame by some other means, which also inevitably crosses process boundaries and is thus a bunch of syscalls. Plus you need to poll for input anyway.
edit: by "poll for input" I mean the literal poll() syscall, which is of course the basis of async and all. How else do you get to know there was any input?
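To illustrate the distinction, a minimal Python sketch (a stand-in for a GUI event loop; the pipe stands in for the display/input socket a real app would watch): calling poll() with no timeout blocks in the kernel until an event arrives, so a program waiting this way makes no further syscalls and uses no CPU while idle:

```python
import os
import select

# A pipe stands in for the display/input fd a real GUI app would watch.
read_fd, write_fd = os.pipe()

poller = select.poll()
poller.register(read_fd, select.POLLIN)

os.write(write_fd, b"x")  # simulate an input event arriving

# With no timeout argument, poll() blocks until an event is ready;
# while blocked, the process consumes no CPU at all.
events = poller.poll()
print(len(events))  # 1: the read end became readable
```

Waking only on events like this is what makes a process show 0% CPU when idle, as opposed to waking every frame to re-check.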
Last time I updated (half a year ago) it deleted tabs. Since then I haven't been brave enough to update again, as I have too many unsaved tabs :)
I have not lost any Sublime tab in 15 years (I have tabs this old).
Sublime also saves a backup of its state files next to the state files in your home dir, so you can restore in case anything ever goes wrong (e.g. bugs in the new version).
The .sublime_session state files are JSON, easy to read for a human.
> spending an hour making up names for random junk files
That is completely unnecessary. You can just back up the '.sublime_session' file that contains all that before an upgrade if you are worried. Sublime already stores all its state in one file; manually spreading that across N files seems like unfun busywork. A quick web search reveals that, by the way.
(I perpetually have 40 Sublime windows open, each one with tens to hundreds of tabs. My 'Auto Save Session.sublime_session' is 70 MB.)
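The pre-upgrade backup is just a one-file copy. A sketch, with a made-up temp directory and minimal file contents (the real file lives somewhere like `~/.config/sublime-text/Local/`, and its actual internal structure is much richer than shown here):

```python
import json
import pathlib
import shutil
import tempfile

# Stand-in for the real 'Auto Save Session.sublime_session' file.
tmp = pathlib.Path(tempfile.mkdtemp())
session = tmp / "Auto Save Session.sublime_session"
session.write_text(json.dumps({"windows": [{"buffers": []}]}))

# A plain file copy is a sufficient backup: all state is in this one JSON file.
backup = session.parent / (session.name + ".bak")
shutil.copy2(session, backup)

# The backup stays human-readable JSON, so it can be inspected or restored by hand.
print("windows" in json.loads(backup.read_text()))  # True
```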
The parent was asking for access to the C syscall, and C syscalls are unsafe, including in C. You can wrap that syscall in a safe interface if you like, and many have. And to reiterate, I'm all for supporting this pattern in Rust's stdlib itself. But openat itself is a questionable API (I have not yet seen anyone mention that openat2 exists), and if Rust wanted to provide this, it would want to design something distinct.
> Why can I easily use "*at" functions from Python's stdlib, but not Rust's?
I'm not sure you can. The supported pattern appears to involve passing the optional `opener` parameter to the built-in `open()` (with `os.open(..., dir_fd=...)` inside the opener), but while the example of this shown in the official documentation works on Linux, I just tried it on Windows and it throws a PermissionError, because AFAIK you can't open directories on Windows.
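For reference, the documented pattern looks roughly like this (a Linux-only sketch; the temp directory and filename are made up for the example):

```python
import os
import tempfile

target_dir = tempfile.mkdtemp()

# Opening the directory itself is the step that throws PermissionError on Windows.
dir_fd = os.open(target_dir, os.O_RDONLY)

def opener(path, flags):
    # Resolves 'path' relative to the directory fd: openat(2) under the hood.
    return os.open(path, flags, dir_fd=dir_fd)

with open("example.txt", "w", opener=opener) as f:
    f.write("hello")

os.close(dir_fd)
print(os.path.exists(os.path.join(target_dir, "example.txt")))  # True on Linux
```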
I took parent's message to be asking why the standard library fs primitives don't use `at` functions under the hood, not that they wanted the `at` functions directly exposed.
> why the standard library fs primitives don't use `at` functions under the hood
In this case it wouldn't seem to make sense to use `at` functions to back the standard file opening interface that Rust presents, because it requires different parameters, so a different API would need to be designed. Someone above mentioned that such an API is being considered for inclusion in libstd in this issue: https://github.com/rust-lang/rust/issues/120426
You can but you have to go through the lower level API: NtCreateFile can open a directory, and you can pass in a RootDirectory handle to following calls to make them handle-relative.
> The agent cannot learn from its mistakes. The agent will never produce any output which will help you invoke future agents more safely
That is not entirely true:
Given that more and more LLM providers are sneaking in "we'll train on your prompts now" opt-outs, you deleting your database (and the agent producing repenting output) can reduce the chance that it'll delete my database in the future.
Exactly. It's just giving the LLM a token pattern, and it's designed to reproduce token patterns. That's all it does. At some point, generating a token pattern like that again is literally its job.
It is possible, but it requires specifically labelling the data: you have to craft question-response pairs to label. But even then the result is only probabilistic.
The LLM in this case had been very thoroughly trained and instructed quite specifically not to do many of the things it then actually went off and did.
It may be that there's a kind of cascade effect going on here. Possibly, once the LLM breaks one rule it's supposed to follow, this sets it off on a pattern of rule violations. After all, what constitutes a rule violation is there in the training set; it is a type of token stream the LLM has been trained on. It could be that the LLM switches into a kind of black-hat mode once it's violated a protocol, which leads it down a path of persistently violating protocols, and given the statistical model, some violations of protocol are always possible.
My mother was a primary school teacher. She used to say that the worst thing you can say to a bunch of kids leaving class down the hall is "don't run in the hall". It puts it in their minds. You need to say "Please walk in the hall"; then they'll do it.
> This level of production grade fail over and simplicity was point and click, 10 years ago.
While some of the tools are _designed_ for point and click, they don't always work. Mostly because of bugs.
We run Ceph clusters under our product, and have seen a fair share of non-recoveries after temporary connection loss [1], kernel crashes [2], performance degradations on many small files, and so on.
Similarly, we run HA Postgres (Stolon), and found bugs in its Go error handling that cause it to fail to recover from crashes and full-disk conditions [3] [4]. This week, we found that full-disk situations will not necessarily trigger failovers. We also found that if DB connections are exhausted, the daemon that's supposed to trigger Postgres failover cannot connect to do so (we are currently testing the fix).
I believe that most of these things will be better ironed out in hosted cloud solutions.
I agree that self-hosting HA with open-source software is the way to go. This software is good, and the more people use it, the fewer bugs it will have.
But I wouldn't call it "trivial".
If you have large data, it is also brutally cheaper: for the cost of hosting on AWS we could hire 10 full-time sysadmins, versus doing our own Hetzner HA with Free Software, where we only need ~0.2 sysadmins. And it still has higher uptime than AWS.
It is true that Proxmox is easy to set up and operate. For many people it will probably work well for a long time. But when things aren't working, it's not so easy anymore.
My friend recommended putting in a small percentage late-payment fee, stated in the contract and on each invoice.
Haven't really used it yet because we don't have a problem with late payments, but I do think it would work: our B2B customers are usually very appreciative of saving small percentages when we offer them, and unlikely to just give up that money by paying late.
It doesn't work if they are insolvent, and it can also backfire if they see the clause as a way to get a cheap cash loan. You should still have the clause, but I think of it as a tool for the collections attorney to use if the customer defaults.
https://wiki.haskell.org/index.php?title=Debugging&action=hi...
In particular, it has no mention of the new actual ... debugger:
https://well-typed.github.io/haskell-debugger/
https://discourse.haskell.org/t/the-haskell-debugger-for-ghc...