Atomic looks quite interesting, the "wiki synthesis" especially:
I've been working on a suite of skills and a tiny MCP (also SQLite + SQLite-vec based) where the focus is on making it easy to produce "atoms" from quick brain dumps.
The chunking problem is "bypassed" by declaring each section a chunk, and having the LLMs rewrite drafts into sections that chunk well. That means lots of redundancy, and no "As explained above".
The intended reader isn't a human, but rather agents that generate human-friendlier prose, for different target audiences. By assuming the reader is an "expert", the idea is that it's much cheaper to mass-produce reviewed "atoms".
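To make the "each section is a chunk" idea concrete, here's a toy sketch of such a splitter. All names are illustrative, not from any real project; it just assumes markdown drafts where every heading starts a new self-contained atom:

```python
import re

def split_into_atoms(markdown: str) -> list[str]:
    """Split a markdown draft into one chunk per heading-led section."""
    atoms: list[str] = []
    current: list[str] = []
    for line in markdown.splitlines():
        # A new heading closes the previous section, if any.
        if re.match(r"^#{1,6} ", line) and current:
            atoms.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        atoms.append("\n".join(current).strip())
    return atoms

doc = "# Topic A\nFact one.\n\n# Topic B\nFact two, restated without referring back."
print(split_into_atoms(doc))
```

Because the LLM is instructed to restate context in every section, each returned string can be embedded and retrieved on its own.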
Itching to try that workflow with Atomic or Tolaria.
Or, seen from the other side of the trade-off: one caveat with MSSQL is that ALL concurrent transactions must pay the overhead if only _some_ of them need serializable guarantees?
Nice job, eugene-khyst. Looks very comprehensive from an initial skim.
I've worked on something in the same space, with a focus on reliable but flexible synchronization to many consumers, where logical replication gets impractical.
> A long-running transaction in the same database will effectively "pause" all event handlers.
… as the approach is based on the xmin-horizon.
My linked code also takes the MVCC snapshot's xip_list into account, to avoid this gotcha.
Also, note that after a logical restore of a database you end up with different physical txids, which complicates recovery. (So my approach relies on offsetting the txid and making sure the offset is properly maintained.)
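A pure-Python toy of the visibility rule, to show why the xip_list matters (real snapshots would come from `pg_current_snapshot()` in Postgres; this just models the logic):

```python
def processable(event_txids, xmin, xmax, xip_list):
    """Return txids whose writing transaction is known to be finished.

    A txid is safe if it is below xmax (started before the snapshot was
    taken) and not listed as still in progress. With only the
    xmin-horizon, everything >= xmin would have to wait.
    """
    in_progress = set(xip_list)
    return [t for t in event_txids if t < xmax and t not in in_progress]

# txid 100 is a long-running open transaction, so xmin is stuck at 100,
# but events from 101 and 102 (already committed) can still be handled.
print(processable([99, 100, 101, 102], xmin=100, xmax=103, xip_list=[100]))
```

With a plain xmin-horizon, one long-running transaction pins xmin and stalls all handlers; consulting the in-progress list holds back only the still-open txids.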
Reading that thread, it doesn't seem like the official image shipped with any cryptominer at any point; more likely the container got compromised in other ways. (A compromised [superuser connection to Postgres can execute shell commands](https://medium.com/r3d-buck3t/command-execution-with-postgre...), so that seems more likely than the image shipping with a miner.)
Advisory locks are purely in-memory locks, while row locks might ultimately hit disk.
The memory space reserved for locks is finite, so if you were to have workers claim too many queue items simultaneously, you might get "out of memory for locks" errors all over the place.
> Both advisory locks and regular locks are stored in a shared memory pool whose size is defined by the configuration variables max_locks_per_transaction and max_connections. Care must be taken not to exhaust this memory or the server will be unable to grant any locks at all. This imposes an upper limit on the number of advisory locks grantable by the server, typically in the tens to hundreds of thousands depending on how the server is configured.