I got tired of re-explaining myself to AI tools every session. Claude forgets me. Cursor forgets me. Switch from one to another and you're back to zero.
Existing solutions (Nowledge Mem, Mem0) want you to install apps, configure MCP servers, run local LLMs, and set up plugins per tool. For what? To remember that I prefer tabs over spaces?
Mem-Forever is a GitHub template repo. You click "Use this template", set it to private, and open it with any AI tool. The repo contains instruction files that every major tool auto-reads:
Claude Code reads CLAUDE.md, Cursor reads .cursorrules, Codex reads AGENTS.md, Copilot reads .github/copilot-instructions.md, Gemini CLI reads GEMINI.md.
On first use, the AI asks you a few questions and builds your profile. After that, it saves decisions, lessons, and preferences -- committed and pushed after every update, not batched until the end of the session.
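There's nothing exotic under the hood: a memory update is just a file edit followed by an immediate commit and push. A rough Python sketch of the equivalent operations (the file path, commit message format, and use of plain git via subprocess are illustrative assumptions, not the repo's actual instructions):

```python
# Sketch of the per-update memory write the instruction files ask the
# AI tool to perform. File name and commit message are assumptions.
import subprocess
from pathlib import Path

def remember(note: str, memory_file: str = "memory/decisions.md") -> None:
    """Append one memory entry, then commit and push immediately."""
    path = Path(memory_file)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    subprocess.run(["git", "add", str(path)], check=True)
    subprocess.run(["git", "commit", "-m", f"memory: {note[:50]}"], check=True)
    subprocess.run(["git", "push"], check=True)  # pushed per update, not batched
```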
Switch tools? Give the new one your repo URL and a PAT. One sentence, full context.
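Any tool that can run git can fetch the repo by putting the PAT in the HTTPS URL; a minimal sketch, assuming the token is in an environment variable (the repo path and variable name are placeholders):

```python
# Sketch of cloning the private memory repo with a PAT over HTTPS.
import os
import subprocess

def clone_memory_repo(repo: str = "github.com/you/mem-forever-private",
                      dest: str = "memory-repo") -> None:
    token = os.environ["GITHUB_PAT"]  # PAT with read access to the repo
    url = f"https://{token}@{repo}.git"
    subprocess.run(["git", "clone", url, dest], check=True)
```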
No server. No app. No account. No vendor lock-in. Your data lives in your private GitHub repo.
Free, MIT licensed.
https://github.com/ilang-ai/Mem-Forever
I've seen many users who need a wider breadth of memory across more topics, where the structure and organization of that memory play a big part in the LLM's performance.
My response to that was a local system that I ended up turning into Sig <https://sig-ai.app/>.
It has some overlap with how you've approached it, but differs in other obvious ways.
Having said all that, I'm just highlighting another use case for memory. I think your approach is very valid for a lot of people. I appreciate the simplicity and the lack of lock-in.