Hacker News | abi's comments

https://github.com/abi/lilo I’m working on Lilo, a Telegram AI agent that can remember things, store files, track your TODOs, manage your calendar, conduct research, build apps, send you reminders and monitor things for you.

I’ve found it super useful in my personal life, and it’s pretty much my #1 app.


What benefit would it truly provide? Companies would simply say they need to cut costs to maximize shareholder value, which is no different than what happened here.

Well, generally speaking I think it’s a better world if corporations are forced to not lie to people.

Presumably investors and those shorting the company would benefit from more accurate information about a company. So the market as a whole would be healthier and less prone to inflationary claims.

I also don’t think that excuse would really hold up under scrutiny: “we fired 14% of our workforce to maximize shareholder value” isn’t exactly a straightforward answer. Right now the answer seems to be latching onto whatever’s trendy and blaming the layoffs on that.

If there is an expectation that reasons will be investigated, then I think you’d just get more accurate information in the market, tldr.


the company is hemorrhaging money, and consistently missing earnings. idk what else you need to know brother

No, we mostly spent our time on data structures and algorithms.


Ugh, good point.


Usually, those get released a few weeks later.


I'm quite confused by this article. If you persist conversation history in a database, and have all agentic turns run on the server, and merely listen to the streaming events/history via a websocket on the client, this is easily achieved. You can have as many clients as you want.

The HTTP layer is fine. Websockets work great. This is how the Codex app server works, I believe: https://openai.com/index/unlocking-the-codex-harness/ Same pattern I've used in my agentic OS/personal assistant project: https://github.com/abi/lilo Works great!
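A minimal sketch of the pattern described above, with asyncio queues standing in for websocket connections and an in-memory list standing in for the database. All names here are illustrative, not taken from Codex or Lilo: the key idea is that the agentic turn runs only on the server, every event is persisted before fan-out, and any number of clients can subscribe (late joiners replay history first, then stream live).

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Conversation:
    # Persisted event log (stand-in for a database table).
    history: list = field(default_factory=list)
    # One queue per connected client (stand-in for websockets).
    subscribers: list = field(default_factory=list)

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue()
        # New clients first replay persisted history, then receive live events.
        for event in self.history:
            q.put_nowait(event)
        self.subscribers.append(q)
        return q

    def publish(self, event: dict) -> None:
        self.history.append(event)  # persist before fanning out
        for q in self.subscribers:
            q.put_nowait(event)

async def agent_turn(convo: Conversation, user_message: str) -> None:
    # The agentic turn runs entirely on the server; clients only observe.
    convo.publish({"role": "user", "content": user_message})
    for token in ["Hello", " ", "world"]:  # stand-in for a streaming LLM call
        convo.publish({"role": "assistant", "delta": token})

async def main() -> None:
    convo = Conversation()
    client_a = convo.subscribe()          # connected from the start
    await agent_turn(convo, "hi")
    client_b = convo.subscribe()          # late joiner replays full history
    assert client_a.qsize() == client_b.qsize() == 4
    print("both clients see", client_b.qsize(), "events")

asyncio.run(main())
```

Because the server's event log is the single source of truth, reconnecting or opening a second device is just another `subscribe()` call; the HTTP/websocket layer never has to hold agent state.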


Codex is great.


Exactly.


I’ve experienced that issue as well. Clearing the cache and redownloading seemed to fix it for me. It’s an issue with the upstream library tvmjs that I need to dig deeper into. You should be totally fine on a 32 GB system.


FWIW mine fails on the same file. I've tried a few times, including different Incognito sessions, but I repeatedly get the same error with the Llama 3 model:

"Could not load the model because Error: ArtifactIndexedDBCache failed to fetch: https://huggingface.co/mlc-ai/Llama-3-8B-Instruct-q4f16_1-ML..."


Use Secret Llama in an incognito window. Turn off the Internet and close the window when done.

