Yes, AI should be used as a tool for very specific things. Once it's trained on everything, it's completely useless. Anyone who is trying to use it for everything will fail. I predict that by 2030 (if not much sooner) the AI bubble will burst. The only good outcome will be all this hardware getting liquidated for pennies. Mark this prediction, it will happen ;-)
It's literally how they work. I think the magic that none of us really expected is that our languages, human and computer, are absurdly redundant. But I think it makes sense, in hindsight at least. When we say things, it's usually not to add novel or unexpected information that comes out of nowhere, but to elaborate or illustrate a point that could often be summed up in 5 words. This response is a perfect example of that.
A lot of people have suspected for a very long time that most programs are absurdly redundant. The real issue is that the languages do not really allow for producing code that can be easily shared. Some of the functional languages do, but mostly in ways that are completely irrelevant and useless in practice for such a goal.
Programmers writing their fiftieth mostly identical CRUD handler may not have noticed but a lot of other people did.
Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed to those you still choose to be stuck on "it's just text prediction", then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed to those you still choose to be stuck on "it's just text prediction", then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever actually implemented a system around an LLM, or do you just practice ~~voodoo~~ "prompt engineering"?
> I can point you to ReAct loops and tool-calling and agent-based systems.
Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.
Tool calling? The model emits JSON as it autocompletes the prompt; the JSON is then parsed out and transformed into an HTTP call. The response is appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.
"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.
If you're going to make such strong assertions, you should understand the technology underneath, or you'll come off looking like an idiot.
> Tool calling? The model emits JSON as it autocompletes the prompt, and the json is then parsed out and transformed into an HTTP call.
No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute it (meaning: they build command-line arguments, run the command-line app, analyze the output, and assess the outcome) as subtasks.
And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.
> Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute it (meaning: they build command-line arguments, run the command-line app, analyze the output, and assess the outcome) as subtasks.
And they do this - wait for it - by emitting tokens. Which are then parsed into a function call.
You’re just mistaking a harness around an LLM for something more. At the core, the LLM takes input tokens and outputs the most likely next tokens. Those tokens might be interpreted into a tool call or anything else, but it’s still just token prediction.
If you disagree, explain what the actual difference is. I claim that LLMs “use” tools by emitting tokens which are taken and passed to a tool call. If you disagree, how?
> And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.
Yeah, but fundamentally all of this is implemented as next-token prediction given the context (which is where the tool results end up).
Honestly, it's pretty amazing how much we can do with next token prediction, but that's essentially all that's happening here.
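The "troubleshoot and adapt" behaviour fits in the same frame. Here's a toy ReAct-style loop where the model is a scripted stub (everything here, `scripted_model` included, is hypothetical): each turn it emits either an action or a final answer, the harness executes the action, and the observation (including errors) is appended to the transcript for the next prediction step. The "adaptation" is just the next prediction being conditioned on the error.

```python
import json

def scripted_model(transcript: list[str]) -> str:
    """Stand-in for an LLM: picks the next message given the transcript so far."""
    if any("observation: 5.0" in line for line in transcript):
        return "final: 10 / 2 = 5.0"                      # goal reached, answer
    if any("error" in line for line in transcript):
        return json.dumps({"action": "divide", "args": [10, 2]})  # retry, adapted
    return json.dumps({"action": "divide", "args": [10, 0]})      # first (bad) attempt

def divide(a, b):
    return a / b

def react(question: str) -> str:
    transcript = [question]
    for _ in range(5):                       # safety cap on iterations
        out = scripted_model(transcript)
        if out.startswith("final:"):
            return out
        step = json.loads(out)               # parse the emitted action
        try:
            obs = f"observation: {divide(*step['args'])}"
        except ZeroDivisionError as e:
            obs = f"error: {e}"              # failures go back into the context
        transcript += [out, obs]
    return "gave up"

print(react("what is 10 / 2?"))  # → final: 10 / 2 = 5.0
```

The loop, the parsing, and the error feedback all live in ordinary harness code; the "reasoning" is whatever the generator does with the growing transcript.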
There’s a massive survivorship bias in the historical record that heavily weights the perspectives of wealthy and literate classes. We also just have much richer records of population centers in complex empires that keep detailed tax and judicial records than populations in more loosely governed areas.
The archaeological record is also heavily biased towards things made out of non-perishable materials (e.g. ceramics and stone last while wood, textiles, and paper don’t).
So basically, we can create a simulacrum of the parts of the past that have survived through to today but it would probably lack verisimilitude for anyone who was actually there.
The TV Series Devs explores this concept as well. It is decently executed, but it is a bit too cringe for my liking (supposedly world-class "devs" working on those keyboards you often see in museums, the protagonist having a fibonacci-off to establish engineering creds). Anyway, might be fun!
I meant the ones you see on those infotainment systems in museums that are super durable, but have terrible ergonomics. The show props those up way too much.
Yeah, AI scrapers are one of the reasons why I closed my public website https://tvnfo.com and only left the donors' site online. It's not only the AI scrapers; I grew tired of people trying to scrape the site, eating up resources this small project doesn't have. Very sad, really, it had been publicly online since 2016. Now it's only available to donors. I run this tiny project on just $60 a month. If this wasn't my hobby I would have closed it completely a long time ago :-) Who knows, if there is more support in the future I might reopen the public site with something like Anubis bot protection. I thought it was only small sites like mine that got hit hard, but it looks like many have similar issues. Soon nothing will be open or useful online. I wonder if this was the plan all along for whoever is pushing AI at massive scale.
I took a look at https://tvnfo.com/ and I have no idea what's behind the donation wall. Can I suggest you add a single page which explains or demonstrates the content? Otherwise there's no reason for "new" people to want to donate to get access.
“In fact, so rare it is to find someone who knows what I mean that it feels like a magic moment.”
That's lack of interest, either from the person you're talking to or from you when listening. It's because you have different interests. This is a human feature, not a flaw. But it's interesting to think that LLMs might show similar behavior :-)
“I’ll never again ask a human to write a computer program shorter than about a thousand lines, since an LLM will do it better.”
From my personal experience with ChatGPT, it can't even correctly write a few lines of code. But I don't use AI often; I just don't find it that useful. From what I see, it's mostly a hype bubble that will burst.
But this is my personal opinion and my own observation. I could be wrong :-)
This also happens a lot on AliExpress: most storage devices there are fake. Some 64 GB flash drives, SD cards, or 320 GB 2.5" mobile HDDs are okay, but you should run an f3 test on any new drive you buy.
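For anyone unfamiliar with it, f3 ("Fight Flash Fraud") is an open-source tool for exactly this. A typical check looks something like the following; the mount point and device node are placeholders you'd replace with your own:

```shell
# Fill the mounted drive with test files, then read them back and verify
# that every byte written is actually there (fake drives wrap around or
# silently drop data past their real capacity).
f3write /mnt/usb
f3read /mnt/usb

# Alternative: probe the raw device directly. This is destructive (wipes
# data on the drive), so only use it on a new or empty device.
sudo f3probe --destructive --time-ops /dev/sdX
```

f3read reports sectors as ok/corrupted/changed; on a genuine drive everything should come back ok.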