I have an open source project that is basically that (https://naisys.org/). From my testing it feels like AI is pretty close as it is to acting autonomously. Opus is noticeably more capable than GPT-4, and I don't see how next gen models won't be even more so.
These AIs are incredible when it comes to question/answer, but with even simple planning they fall apart. I feel like it's something that could be trained for more specifically, but yeah, you quickly end up in a situation where you're nervous to go to sleep with an AI working unsupervised on some task.
They tend to go off on tangents very easily. Like one time it was building a web page, it tried testing the wrong URL, thought the web server was down, ripped through the server settings, then installed a new web server, before I shut it down. AI, like any computer program, works fast, screws up fast, and compounds its errors fast.
> They tend to go off on tangents very easily. Like one time it was building a web page, it tried testing the wrong URL, thought the web server was down, ripped through the server settings, then installed a new web server, before I shut it down.
At least it just decided to replace the web server, not itself. We could end up in a sorcerer’s apprentice scenario if an AI ever decides to train more AI.
> it feels like AI is pretty close as it is to acting autonomously
> with simple planning they fall apart
They are not remotely close to acting autonomously. Most don't perform well at much of anything beyond gimmicky text generation. This hype is so overblown.
The step changes in autonomy from GPT-3 to GPT-4 to Opus are very obvious and significant. From my point of view, given the kinds of dumb mistakes it makes, it's really just a matter of training and scaling. If I had access to fine-tune or scale these models I would love to, but it's going to happen anyway.
Do you think these step changes in autonomy have stopped? Why?
> Do you think these step changes in autonomy have stopped? Why?
They feel like they are asymptotically approaching just a bit better quality than GPT-4.
Given that every major lab except Meta is saying "this might be dangerous, can we all agree to go slow, with enforcement, to get around the prisoner's dilemma?", this may be intentional.
On the other hand, because nobody really knows what "intelligence" is yet, we're only making architectural improvements by luck, and then scaling them up as far as possible before the money runs out.
But training just allows it to replicate what it's seen. It can't reason so I'm not surprised it goes down a rabbit hole.
It's the same when I have a conversation with it, then tell it to ignore something I said and it keeps referring to it. That part of the conversation seems to affect its probabilities somehow, throwing it off course.
Right, that this can happen should be obvious from the transformer architecture.
The fact that these things work at all is amazing, and the fact that they can be RLHF'ed and prompt-engineered to current state of the art is even more amazing. But we will probably need more sophisticated systems to be able to build agents that resemble thinking creatures.
In particular, humans seem to have a much wider variety of "memory banks" than the current generation of LLMs, which have only "learned parameters" and a "context window".
Humans are also trained on what they’ve ‘seen’. What else is there? Idk if humans actually come up with ‘new’ ideas or just hallucinate on what they’ve experienced in combination with observation and experimental evidence. Humans also don’t do well ‘ignoring what’s been said’ either. Why is a human ‘predicting’ called reasoning, but an AI doing it is not?
Because a human can understand from first principles, while current AIs are lazy and don't unless pressed. See for example, suggesting creating bleach smoothies, etc.
> But training just allows it to replicate what it's seen.
Two steps deeper: even a mere Markov chain recombines the patterns rather than being limited to pure quotation of the source material, and attention mechanisms do something more, something which at least superficially seems like reason.
Not, I'm told, actually Turing complete, but still much more than mere replication.
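To make the "more than quotation" point concrete, here's a toy first-order Markov chain over a two-sentence corpus (the corpus and words are made up for illustration). Because the transition table merges contexts, it can emit word sequences that never appear in the training data:

```python
from collections import defaultdict

corpus = ["the cat sat down", "the dog ran off"]

# Build a first-order Markov chain: map each word to the set of words
# that have been observed to follow it anywhere in the corpus.
chain = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        chain[a].add(b)

# "the" was followed by both "cat" and "dog", so a walk through this
# chain can produce "the dog sat down" -- a sentence that never
# appears in the corpus. Recombination, not quotation.
print(sorted(chain["the"]))  # ['cat', 'dog']
```

Even this trivial model generalizes (badly); transformers do so with vastly richer context, which is where the appearance of reasoning comes from.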
> It's the same when I have a conversation with it, then tell it to ignore something I said and it keeps referring to it. That part of the conversation seems to affect its probabilities somehow, throwing it off course.
Yeah, but I see that a lot in real humans, too. I've been noticing people do that since I was a kid.
Not that this makes the LLMs any better or less annoying when it happens :P
This might be a dumb question, but did you ever try having it introspect into its own execution log, or perhaps a summary of its log?
I also have a tendency to get side tracked and the only remedy was to force myself to occasionally pause what I'm doing and then reflect, usually during a long walk.
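That "pause and reflect" remedy translates pretty directly into agent scaffolding. A minimal sketch of the idea, where `llm` stands in for any chat-completion call (hypothetical, not the naisys implementation): every few steps, feed the agent its own log and ask whether it's still serving the goal.

```python
# Hypothetical sketch: interrupt the action loop periodically and have the
# model reflect on its own execution log against the original goal.
def run_agent(goal, llm, max_steps=20, reflect_every=5):
    log = []
    for step in range(max_steps):
        if step > 0 and step % reflect_every == 0:
            # Reflection step: show the model its own log and ask for a verdict.
            verdict = llm(
                f"Goal: {goal}\nLog so far: {log}\n"
                "Are these actions still serving the goal? "
                "Answer ON_TRACK, or OFF_TRACK with a correction."
            )
            if verdict.startswith("OFF_TRACK"):
                # Record the correction and skip acting this step.
                log.append(("correction", verdict))
                continue
        action = llm(f"Goal: {goal}\nLog: {log}\nNext shell command?")
        log.append(("action", action))
    return log
```

The interesting knob is `reflect_every`: too frequent and the agent dithers, too rare and it has already reinstalled your web server by the time it looks up.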
Inter-agent tasks are a fun one. Sometimes they work out, but a lot of the time the agents just end up going back and forth talking, expanding the scope endlessly, scheduling 'meetings' that will never happen, etc.
A lot of AI 'agent systems' right now add a ton of scaffolding to corral the AI towards success. The scaffolding is inversely proportional to the sophistication of the model. GPT-3 needs a ton, Opus needs a lot less.
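By "scaffolding" I mean things like validating the model's output before executing it and retrying on garbage, rather than trusting it blindly. A minimal sketch (the whitelist, prompts, and `llm` stand-in are all made up for illustration):

```python
# Sketch of typical agent scaffolding: constrain the model's proposed shell
# command to a whitelist and retry with feedback when it's rejected.
ALLOWED = {"ls", "cat", "grep", "curl"}

def next_command(task, llm, retries=3):
    for _ in range(retries):
        proposal = llm(f"Task: {task}\nReply with ONE shell command.").strip()
        if proposal.split()[0] in ALLOWED:
            return proposal
        # Feed the rejection back so the next attempt can self-correct.
        task += f"\n(Reply '{proposal}' was rejected; use only {sorted(ALLOWED)}.)"
    raise RuntimeError("model never produced an allowed command")
```

The claim above is that stronger models need fewer of these guardrails: GPT-3 needed the whitelist and the retries, Opus mostly gets it right on the first pass.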
With real autonomous AI, you should be able to just give it a command prompt and a task, and it can do the rest: managing its own notes, tasks, goals, reports, etc., just like any of us would if given a command shell and a task to complete.
Personally I think it's just a matter of the right training. I'm not sure if any of these AI benchmarks focus on autonomy, but if they did maybe the models would be better at autonomous tasks.
> Inter-agent tasks is a fun one. Sometimes it works out, but a lot of the time they just end up going back and forth talking, expanding the scope endlessly, scheduling 'meetings' that will never happen, etc..
sounds like "a straight shooter with upper management written all over it"
Sometimes I'll tell two agents very explicitly to share the work, "you work on this, the other should work on that." And one of the agents ends up delegating all their work to the other, constantly asking for updates, coming up with more dumb ideas to pile on to the other agent who doesn't have time to do anything productive given the flood of requests.
What we should do is train AI on self-help books like 'The 7 Habits of Highly Effective People'. Let's see how many paperclips we get out of that.
I suspect it's a matter of context: one or both agents forget that they're supposed to be delegating. ChatGPT's "memory" system for example is a workaround, but even then it loses track of details in long chats.
Opus seems to be much better at that. Probably why it’s so much more expensive. AI companies have to balance costs. I wonder if the public has even seen the most powerful, full fidelity models, or if they are too expensive to run.
Right, but this is also a core limitation in the transformer architecture. You only have very short-term memory (context) and very long-term memory (fixed parameters). Real minds have a lot more flexibility in how they store and connect pieces of information. I suspect that further progress towards something AGI-like might require more "layers" of knowledge than just those two.
When I read a book, for example, I do not keep all of it in my short-term working memory, but I also don't entirely forget what I read at the beginning by the time I get to the end: it's something in between. More layered forms of memory would probably allow us to return to smaller context windows.
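That book-reading intuition can be sketched as a toy middle memory layer: keep a short verbatim window, and compress everything that falls out of it into a running gist. Everything here is illustrative; a real system would use an LLM to write the summary, where my `summarize` placeholder just keeps capitalized words.

```python
# Toy sketch of a memory layer between "context window" and "fixed weights":
# recent turns are kept verbatim, older turns are compressed into a summary.
def summarize(text):
    # Placeholder gist extractor: keep only capitalized words.
    # (Stand-in for an LLM-written summary.)
    return " ".join(w for w in text.split() if w[:1].isupper())

class LayeredMemory:
    def __init__(self, window=3):
        self.window = window
        self.recent = []   # short-term: verbatim turns
        self.summary = ""  # medium-term: compressed gist of evicted turns

    def add(self, utterance):
        self.recent.append(utterance)
        if len(self.recent) > self.window:
            # Evict the oldest verbatim turn into the summary layer.
            oldest = self.recent.pop(0)
            self.summary = (self.summary + " " + summarize(oldest)).strip()

    def context(self):
        return self.summary, list(self.recent)
```

The point is the shape, not the mechanism: the end of the book is verbatim, the beginning survives only as gist, and the window can stay small.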