Hacker News | ajbd's comments

The “types as lenses, not schemas” principle and the focus on structure + relationships really stand out. How do systems like this handle temporal data? (facts that change over time, decisions that get revisited, outcomes that didn't exist when the note was created) Do those live as relationships between notes, or is there a different pattern for them?
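One way to picture the question: model change as new notes linked back to old ones, rather than edits to the original. A minimal sketch, assuming a hypothetical note/link model (the names Note, Link, and "supersedes" are illustrative, not from any real system):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Note:
    id: str
    text: str
    created: date

@dataclass
class Link:
    src: str   # newer note
    dst: str   # older note it relates to
    kind: str  # e.g. "supersedes", "revisits", "outcome_of"
    created: date

notes = {
    "n1": Note("n1", "Decision: ship v1 without auth", date(2024, 1, 5)),
    "n2": Note("n2", "Revisited: v1 must include auth", date(2024, 3, 2)),
}
links = [Link("n2", "n1", "supersedes", date(2024, 3, 2))]

def current_view(note_id: str) -> Note:
    """Follow 'supersedes' links forward to the latest note."""
    for link in links:
        if link.dst == note_id and link.kind == "supersedes":
            return current_view(link.src)
    return notes[note_id]

print(current_view("n1").text)  # the superseding note wins
```

Under this pattern the history stays queryable: the original decision, the revisit, and the timestamps are all still there as first-class records.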


This is interesting and solves a real problem. The contradiction resolution approach makes a lot of sense. It feels like a step towards letting models decide when they're out of their depth.

I do have a couple of questions though:

1. How does consolidation handle partial updates vs. full contradictions? (e.g., "budget is $50k" -> "$50k but can flex to $60k")

2. What's the overhead of the nightly consolidation pass at scale?

3. Does the system ever surface uncertainty when two facts are recent but contradictory, or does it always prefer the newer one?
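To make question 3 concrete, here is a hedged sketch of an alternative to pure recency preference: keep the newer value, but flag the conflict when both facts are fresh. The 7-day window and the (value, timestamp) format are made up for illustration:

```python
from datetime import datetime, timedelta

# Illustrative threshold: conflicts between facts this close together
# get surfaced instead of silently resolved.
RECENCY_WINDOW = timedelta(days=7)

def consolidate(old, new):
    """old/new: (value, timestamp) pairs for the same key.
    Returns (kept_value, uncertainty_flag)."""
    newer = max(old, new, key=lambda f: f[1])
    older = min(old, new, key=lambda f: f[1])
    if newer[1] - older[1] < RECENCY_WINDOW:
        # Both facts are recent: keep the newer one but flag the conflict.
        return newer[0], True
    return newer[0], False

value, uncertain = consolidate(
    ("budget is $50k", datetime(2024, 5, 1)),
    ("budget can flex to $60k", datetime(2024, 5, 3)),
)
```

Here the two facts are two days apart, so the consolidation keeps the newer value but marks it uncertain rather than treating the older one as simply superseded.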

The 99.2% -> 49.2% seems like a dramatic gap, so I'm interested to see how other memory systems perform when they submit to the benchmark.


Is the bottleneck actually retrieval/search, or just that the underlying data is unstructured to begin with? Have you seen a setup where context and decisions don't get lost in threads over time, without relying on constant summarization?


The way we build it, the content doesn't really matter. The AI and the Typesense embedding model handle most of the relationships, so the AI tools don't need to do much work themselves; mostly they just call "search_knowledge".
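A minimal in-memory stand-in for what such a "search_knowledge" tool might look like. In the real setup this would presumably be a Typesense query (with embeddings doing the semantic matching); plain keyword overlap keeps the sketch self-contained and runnable, and the entries are invented examples:

```python
# Toy knowledge base; in practice these would live in a Typesense collection.
KNOWLEDGE = [
    {"id": "k1", "text": "budget approved at 50k for Q3"},
    {"id": "k2", "text": "auth decision: use OAuth for the mobile app"},
]

def search_knowledge(query: str, limit: int = 3):
    """Rank entries by how many query terms they contain."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(doc["text"].lower().split())), doc)
        for doc in KNOWLEDGE
    ]
    scored = [(score, doc) for score, doc in scored if score > 0]
    scored.sort(key=lambda pair: -pair[0])
    return [doc for _, doc in scored[:limit]]

print(search_knowledge("what budget was approved")[0]["id"])  # k1
```

The point of the design is that the calling tool stays dumb: it issues one search call and the retrieval layer is responsible for relationships and ranking.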

Most teams do not suffer from a lack of information. They suffer from context being scattered across notes, chats, docs, URLs, tickets, and AI sessions, with no reliable way to retrieve the right piece at the right time. Even a good search is not enough if the underlying memory layer is fragmented or opaque.

We have not seen many setups that preserve context and decisions well over time without ongoing cleanup. Usually, people fall back on some combination of repeated prompting, summaries, manual docs, or isolated memory within a single tool. That works for a while, but it decays.

Our view is that the better approach is not endless summarization but a shared memory layer, where notes, decisions, URLs, and reusable context remain inspectable, editable, and portable, so retrieval has something durable to work with in the first place.
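One hedged reading of "inspectable, editable, and portable" in concrete terms: memory entries are plain structured records that round-trip through JSON, so any tool (or person) can read and edit them. The field names below are illustrative, not a real schema:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class MemoryEntry:
    id: str
    kind: str              # e.g. "note", "decision", "url"
    text: str
    sources: list = field(default_factory=list)

entry = MemoryEntry("m1", "decision", "Use Typesense for retrieval",
                    ["https://typesense.org"])

blob = json.dumps(asdict(entry))            # portable: plain JSON on the wire
restored = MemoryEntry(**json.loads(blob))  # inspectable/editable: just fields
```

Because the record is an ordinary dict under the hood, "editing memory" is editing data, not re-prompting a model to forget something.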

