It has very good throughput since it targets the JVM. JEP 514 and JEP 515 are also making AOT a real thing, reducing warmup times. This means users won't even have to use the awesome Babashka project for scripts or drop in GraalVM.
Anything an LLM does on your computer should happen in its own account. No sudo config, of course, or at most one that is strictly limited to what you want to allow it to do (risky, since many programs have non-obvious paths to general command execution).
It should have zero access to your private home directory or your system configs. Your account can have access to its files, of course. That's the beauty of separate accounts and permissions.
How many devs really do run damned near everything from a single account that also has sudo/runas/various_osx_methods access? This threat model has a decidedly non-zero target market.
Even those folks who are cautious enough to require passwords (sudo or plain su) to elevate are still at risk of having their account thoroughly brought under control of an attacker. Just imagine what a baddie could inject into your .bashrc if your editor can change it.
If you run your clanker-controlled emacs in console mode under a restricted user account, best case scenario, system compromise is only one unpatched privesc vuln away from Shai-Hulud completely pwning you.
Doing it in a locked-down VM is much better, but even then you're only better off by a matter of degrees than if you had done a yolo curl | bash, because VM host attacks and even escapes are very much a thing.
These HNers expressing concern about giving an LLM control of an editor are 100% thinking rightly.
This is a textbook slippery slope. You're telescoping threat escalation, chaining together "what if" steps until everything sounds equally catastrophic:
"Your editor can write to .bashrc. Therefore an attacker controls your shell. You probably have sudo. Therefore full system compromise. Even a VM does not help because VM escapes exist. Therefore this is basically curl|bash."
By this reasoning, every program you run under your user account is equally dangerous. Your shell, your file manager, git, make, pip install, npm install, docker, any program that writes files. The argument proves too much, therefore proves nothing.
This is all unhinged poetry - philosophical argy-bargy without any concrete, well-grounded argumentation. I'm just baffled that none of you crying wolf even tried to ask me reasonably productive questions about what I actually do in my setup.
- My LLM use is mostly not about code generation. In particular, it is not about autonomous code generation and execution.
- Why is nobody asking about the scope of the LLM's file access, audit logs, tool-use confirmation, allowlists/denylists, rate limiting/circuit breakers, pre-tool hooks, scoped tool sets per context, etc.?
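To make that concrete, here is a minimal sketch of the kind of pre-tool hook I mean - every tool call passes through an allowlist and a path-scope check before anything executes. The tool names, sandbox path, and call shape below are hypothetical illustrations, not my actual setup:

```python
# Hypothetical pre-tool hook: a tool call runs only if it is in the
# allowlist AND its target path stays inside the sandbox directory.
from pathlib import Path

ALLOWED_TOOLS = {"read_file", "grep", "list_dir"}  # no write/exec by default
SCOPE = Path("/home/llm/sandbox").resolve()

def pre_tool_hook(tool: str, args: dict) -> bool:
    """Return True only if the call is in the allowed tool set and in scope."""
    if tool not in ALLOWED_TOOLS:
        return False
    target = Path(args.get("path", SCOPE)).resolve()
    # Reject anything that escapes the sandbox directory (e.g. via ..)
    return target == SCOPE or SCOPE in target.parents

print(pre_tool_hook("read_file", {"path": "/home/llm/sandbox/notes.txt"}))   # True
print(pre_tool_hook("read_file", {"path": "/home/llm/../root/.ssh/id_rsa"})) # False
print(pre_tool_hook("shell_exec", {"cmd": "rm -rf /"}))                      # False
```

The point isn't this particular code; it's that these controls are a dozen lines each, and nobody asked whether they exist.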
Whatever. If you think it's unsafe - just don't do what I'm doing. Just please spare me from security-as-ritual. I don't believe in prayers; I preach security-as-engineering. None of you proposed a threat model. None of you started with: "here is the specific attack, here is the attack vector, here is the probability, here is the blast radius". It's all just: "imagine what a baddie could do", followed by an escalation chain that terminates in total system compromise. By that reasoning you should not run any software.
In the interconnected, online world, you can do more damage without root access
"they can read my email, take my money, and impersonate me to my friends, but at least they can't install drivers without my permission"
https://xkcd.com/1200/
So? My terminal has the same full system access. If I didn't use Emacs, I'd be using Claude Code in it. It's contained locally on my computer; I don't see any problem here. I use Emacs as my OS layer. Why would I complain that my OS has access to something? It would be weird and annoying if it were the opposite.
Yeah, that's incredibly unsafe. You made a footgun machine and you're firing it with no shoes on. Don't run that on any machine with credentials you care about.
At the very least, run it in Docker. It's not a security tool, but it's at least some kind of guardrail against data loss and exfiltration.
Having a browser on your machines is unsafe. The browser is a massively more dangerous attack surface than an Emacs-based LLM tool. What I have is a curated set of Lisp functions exposed to an LLM through a protocol I control, running in a single-user process, on my machine, behind my firewall. The attack surface is comically small by comparison.
Any browser that I trust to not instantly[1] eat my face has sandboxing features to at least pretend it wants to be secure. I'm not aware of any text editor that has anything of the sort built in.
It's a nice habit to get into if you can bring yourself to firejail your editor to $HOME/jail and keep all your r/w files in $HOME/jail/Documents and such. But only the most socially unacceptable of paranoid sysadmins do that. Ahem.
[1] FF/Chrome/javascriptless ones. The others are put in prison with no chance of parole and strict visitation policies.
A browser's sandbox exists because it routinely executes arbitrary code from untrusted remote origins. Emacs (or any other editor) with an LLM integration does not fetch and auto-execute code from random origins. Your firejail point proves too much, even though the idea sure is riveting. By that logic, my shell is also catastrophically insecure - it can rm -rf /, read my ssh keys, send some files anywhere. Yet nobody seriously argues shells need browser-style sandboxing. The implicit trust model is different: these are tools where you control what runs.
Yes, there are prompt injection risks, they are legit but that's the property of the LLM, not Emacs. A browser sandbox protects you from code you never chose to run. An editor integration runs code you asked for. These are different problems requiring different mitigations.
You guys keep patronizing me on this, you think I'm some truck driver/florist/butcher by day, and I put on my amateur coder suit at night? Just so you know, I spent years working on security.cisco.com team and went through SANS training and certification. Ever occurred to you that just maybe, perhaps, potentially, theoretically, hypothetically - I'm not completely, utterly ignorant about all this shit?
I don't think it's very reasonable to use Claude Code on a computer that has credentials without some kind of sandboxing or without validating every command it runs, at which point I'd rather do things manually.
Ah come on, guys, let's talk pragmatically. "Malleable editor as an OS layer" has benefits beyond subjective reasoning. Emacs has had M-x shell-command and arbitrary elisp eval forever. A metacircular MCP isn't some new capability class. Even if I didn't use Emacs - my shell, my editor, my browser extensions, my npm install, my VSCode plugins, my curl | bash from yesterday - they all have the same access. Singling out the LLM in this context is selection bias.
Of course, reasonable mitigations are a must, just like for any other tool: narrowing MCP scope, tool routing rules, read-only git defaults, etc. "Docker or nothing" is a lazy answer; Docker-for-everything has real costs: friction, broken integrations, worse ergonomics.
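For instance, one of those mitigations - a circuit breaker on tool calls - is a few lines. This is a hypothetical sketch, not any particular MCP implementation: after N failures in a time window, trip and refuse everything until a human resets it.

```python
# Hypothetical circuit breaker for LLM tool calls: trips after repeated
# failures within a sliding window and stays tripped until reset().
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, window_s: float = 60.0):
        self.max_failures = max_failures
        self.window_s = window_s
        self.failures: list[float] = []
        self.tripped = False

    def record_failure(self) -> None:
        now = time.monotonic()
        # Keep only failures that are still inside the sliding window.
        self.failures = [t for t in self.failures if now - t < self.window_s]
        self.failures.append(now)
        if len(self.failures) >= self.max_failures:
            self.tripped = True  # stays tripped until a human calls reset()

    def allow(self) -> bool:
        return not self.tripped

    def reset(self) -> None:
        self.failures.clear()
        self.tripped = False

cb = CircuitBreaker(max_failures=2)
cb.record_failure()
print(cb.allow())  # True: one failure is tolerated
cb.record_failure()
print(cb.allow())  # False: breaker tripped after the second failure
```

That's the kind of control I mean by security-as-engineering: cheap, inspectable, and matched to an actual failure mode.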
Practical security is all about staying in the goldilocks zone. You shouldn't get relaxed about the basics - sandboxing, 2FA, password managers are worth doing - but you can get paranoid about endlessly many things, and yet against a targeted, well-resourced attacker your sandboxing posture is mostly irrelevant. The interesting attacks bypass the threat model entirely. Read Ben Nassi's team's research¹ for a pretty cool example. There are multitudes of other ways in, and your Docker container won't stop them. Defend against the boring 99%, and accept that the 1% is someone else's problem (or a much bigger problem than your dev environment).
TLDR LLM Summary: Researchers showed that a device's power LED subtly flickers in brightness and color while the CPU performs cryptographic work, and these flickers leak information about the secret key. By pointing an ordinary video camera (an iPhone or an internet-connected security camera) at the LED and exploiting the camera's rolling shutter, they boosted the effective sampling rate from 60 to 60,000 measurements per second, enough to do cryptanalysis. Using only this video footage, they recovered full ECDSA and SIKE keys from a smartcard reader and a Samsung Galaxy S8, with no malware on the target devices.
It's your computer and you can do whatever yolo nonsense you want, my dude, but put those goalposts back where they were.
"Don't run that shit on a credentialed box with data you care about" is addressing real threats, not some goofy nation state thing or abstract security research.
If you let the footgun machine constantly generate new code and run it on your computer, you're just asking for data loss and bad shit to happen.
Docker isn't a great solution but it at least doesn't let yolo code delete files or access env vars or read the contents of .ssh/
> my browser extensions, my npm install, my VSCode plugins, my curl | bash
Yeah, and you shouldn't yolo those, either lol. If they didn't come from a trusted source, you need to read through them. If you don't want to, don't use them. That's not paranoia, that's, like, normal.
> If you let the footgun machine constantly generate new code
Are you talking about autonomous LLM projects that automatically write code? Yeah, no shit, I wouldn't run anything like that directly on any machine without sandboxing. My typical LLM use inside my editor is never in self-driving mode, there's not even cruise-control - I tell it exactly when to write, where to write and how to do it. Automated scripts never get run by LLM and don't get to run at all without prior precise and meticulous inspection. I'm not moving goalposts - at worst we're in disagreement on the level of pragmatics vs. paranoia, that's all.
I don't even get why people are so crazy about LLMs generating code - on both sides. LLMs for me personally are such a great tool for investigating things, for finding things, for bridging the gaps - the stuff that happens 10K feet above code writing. By the time I'm done gathering the details, code generation becomes an almost insignificant touch of the whole endeavor.
That's exactly what it is. People's reaction is default pattern-matching on "AI executes code on your machine" - oh, the horrors! They have no idea of my cybersec posture or my network perimeter - VPN, firewall, malware protection, etc.
It's not like I'm giving the LLM a root shell. It's as if I said: "I learned how to juggle three chainsaws - so fun..." and people reacted as if I'd suggested doing it in a school bus full of children going 140 km/h down the highway.
It's culturally fitting for HN - signaling caution is always socially safe. Nobody ever got criticized for saying "that sounds risky". But "I evaluated the risks and accepted the tradeoffs for my situation" is the actual, pragmatic engineering. Security is risk management, not risk elimination.
We've lost some classic names in keyboards. It's not mechanical but Keytronic made amazing rubber dome keyboards, and they left the business. I don't know what I'm going to do if I ever need to replace my current one.
As much as it's funny to dunk on Meta, this type of surveillance is becoming the norm. Failed startups are selling all their emails, chats, commits, etc. for companies to train on. Most job offers now come with statements about how you don't have rights to your likeness or your personal network. I think most people assume that's for photo ops, but... yeah. I expect more and more of this: products and product features rolling out with this as a focus.
Companies have shown us that IP going to AI providers is acceptable. Once you cross that line your thought workers are assets not people.
I don't know about the US, but in France you are allowed to have personal data on your work computer.
Though you have to label it as personal (like creating a « Personal » folder or label), and your employer can still access it in case of suspicion, but they must do it in your physical presence, accompanied by a witness, generally a representative of the employees.
So you theoretically don’t have full privacy on this computer but you can’t be sanctioned for this usage.
I don't think we have sweeping regulations about it, at least in California.
Most companies I've worked at have a policy of some "reasonable personal use" being permitted. The concern is usually focused on the other way around: Companies do not want their IP on your personal machines.
They can certainly look at whatever is on their own machines, however, regardless if it is your personal data or not.
One large caveat: If you do any work on your company's equipment, they may possibly own it, no matter how relevant it is to the company. It's one of the legal tests used to judge the ownership of your work.
It is even worse in France: if you write open source "on the side" of your work, at home, the company that employs you may claim the copyright. I had to add an explicit exclusion of this copyright claim to my job contracts to protect my personal work.
That was a few years back, dunno if that was fixed.
That is not correct, assuming you are not using the employer's equipment on the employer's time, working on what the employer pays you to do for them, or working on something that is competing, plus a few other reasonable caveats.
As far as I was told, this is not enough; you have to add extra legal care, even more if you are on an 'executive'-type job contract, and you have to double that if there is "too much" connection/"look-alike" between the software at work and the open source software you contribute to at home.
On a French executive-type contract, the boundary between "at home" and "at work" is very, very blurry.
AFAIK it's the same in the USA, that's why one of the first questions when interviewing with a company is to ask them about their moonlighting policy if you do want to work on a side project.
It varies by state in the USA. Some states have strong protections for work you do "on your own time, on your own equipment, that isn't connected to your work." Others, not so much...
This is common in North America too. In Canada, people really should be going through their personal projects and getting a moonlighting clause added before they sign any employment agreements. Employment has gotten tough so a lot of juniors aren’t doing this with their first jobs and we’ll start to see the ramifications of that in about five or ten years.
Can depend on the field too. I work in drug discovery, and if the FDA were to request data that requires my computer, they would have access to everything I had on it... including my texts, if I happened to log in to my personal Apple account, since it's a Mac.
Same in Germany, although the employer can forbid this but needs to do this explicitly. Most employers don't forbid personal data on work machines or using your work email for personal things.
Reasonable personal use does not in any universe imply privacy from the personal perspective.
It's the same reason they have to say "reasonable".
It's best to have separate devices so they just don't have that intelligence about you. That data can be permanent, left behind, and then increasingly available to AI models forever.
Not having AI companies is a reasonable trade-off for not having all of my data, including my full DNA sequence, recorded 24/7 with absolutely zero care for privacy or protection and shared with everyone who has some marginal amount of money to buy it.
That's... poorly crafted mumbo jumbo without any underlying sense, even ignoring the insults. Can't you handle the existence of a society where quality of life is a higher priority (and you see it on the ground very well) than some sum in an account, meaningless titles, rat-race achievements, or office zero-sum games?
It's obviously an unwitting parody account. Calling yourself "Der Einzige" while reciting an incoherent script of internet clichés is indistinguishable from satire - hilariously unintentional parody.
The workers have always been assets though. They turn JIRA tickets into money. Any notion a company would treat a person as a human being and not a means to an end is unfounded, full-stop. The company is a machine that makes money. Machines do not have feelings.
Machines don't have feelings. But if a human is subjected to machine treatment, there should be safeguards. Otherwise we all may as well live in goo-filled tubes like in The Matrix. At some point we have to decide what is fair treatment for human beings, similar to how we decide fair treatment for lab rats and lab puppies.
Would it benefit Neuralink to dogfood their employees? What if there was a 5% chance of death? What if the employees signed on the dotted line anyway? Someone might say, sure, that's fair play. Others might say that as a society we shouldn't allow people to be treated as assets.
Is it reasonable to change someone's job description to having every action they take be subjected to company ownership? Depends on who you ask I guess.
"Companies have shown us that IP going to AI providers is acceptable" - this is where I'm expecting a future collision; you can't both value IP for its training value and devalue it for the actual sources of that IP (people owning their own likenesses, or orgs collecting data from their own activity).
It's going to cause a major break at some point, probably sooner than later.
Already 10 years ago, I got an email from a webshop I had used once, informing me they were closing down. They'd happily sell me the customer database, if I were interested. Mind you, they were so desperate that they made this offer to all their customers. It's anecdotal, and only tangentially related, but my point is: companies blatantly selling your data isn't exactly a new thing, and not really AI-related either. They've been doing this for a long time, just usually with less publicity.
This goes back to 1995 when I was just finishing up grade twelve but it left quite the taste in my mouth. The web industry was just starting to kick off in 1995 and people were opening up web design firms. At the time, young people had part time jobs and while my attempts to pump gas had all ended in rejection, I managed to get a job doing ‘web design’ which at the time meant typing things like <tr> and <td> hundreds of times a page.
There were issues. One of the biggest was that it was 1994-1995, I lived in Regina and that city was not an early adopter. But the guy who ran the company had us doing all kinds of stuff for him.
Then he ran out of money. Since he couldn’t pay his staff he tried to sell his almost non existent client list to a competitor. I got a little lost on the details because they didn’t really make sense but apparently I was supposed to work for free for six months so he could sell his client list and then pay me.
I was 17 and really badly wanted to buy a Pentium processor before I started university so I was tempted but my parents had to explain that that was the single dumbest thing they had ever heard. I didn’t get a Pentium processor until 1997 because of that dude and I’m still a little bitter.
Moral is, buy the client list so the nerds can get to 90mhz. :)
I know right, so much pain and horror has been unleashed in the world by Meta… I have zero sympathy for their employees. Someone should’ve said no to developing this tech in the first place but here we are.
It's not like people have an unlimited number of places to work, even if they have Meta on their resume. Many of my colleagues (myself included) had struggled in the job market in the past before landing at Meta. If the choice is work for Meta or suffer more tumult in the hiring market, it's easy to understand why many might take the offer even with the moral implications. I used to bring up politics in the office with coworkers, and many people are simply unaware of the consequences of the company's products. There are a few different categories these people fall into, but the main ones I saw in the office:
1) Chinese H1B holders who are happy to be working in the US at all, and generally apolitical (or view anything as better than the status quo of where they come from)
2) Just normal people who are interested in their own lives and have never been trained to think about the world in a big picture way (some overlap between 1&2 exist of course)
It's very Western of us to always be tracking the consequentiality of our actions even when we're just a cog in a wheel at BigCo. I think it's the right thing to do, but this sort of reasoning is largely absent in Eastern cultures, and even for some in the West, including the well educated. It's kind of hard to blame individuals when they either are rightfully consumed by worrying about their own welfare or are for whatever reason not as hyperaware or woke as we can be in the West. Growing up, I liked imposing my political philosophies onto everyone; maturity is understanding that even objectively righteous values are only useful for the right types of minds.
On the contrary, once someone has truly been made aware of the ramifications of their actions, it's more difficult for me to extend my sympathy to them. I consider Mark and Priscilla to be fully implicated based on their exposure to the harm that they're actively, willingly, knowingly causing. Other employees may never get that memo, though; people obviously avoid political talk in the workplace.
What Meta does (and here I want to be clear that you can replace Meta with Apple, Microsoft, Google, Palantir...) is eventually public knowledge, profusely discussed even on HN. This means substantial amount of people have been aware, for decades.
And even if "just quit" is not an option - why not push for policy to regulate these corps? Why is it that after all this time, these same corps now also own at least 1 branch of the US government?
And when the EU/Australia/China... tries to regulate or punish those corps, suddenly everyone comes out on HN to explain protectionism, overreach, some -ism, "actually we need to give them the benefit of the doubt", etc. Why not support that momentum?
> And when the EU/Australia/China.. tries to regulate or punish those corps, suddenly everyone comes out on HN to explain protectionism, overreach, some -ism, and "actually we need to give them the benefit of the doubt" etc... why not support that momentum?
I really, really want to believe it's bot warfare. But there is this running theme of HN posters who think because something is _legal_, or because you can point at it historically and go "acktually it's always been like this", it's therefore _moral_ and we should not ever push back on the excesses of these awful fucking companies.
> And even if "just quit" is not an option - why not push for policy to regulate these corps? Why is it that after all this time, these same corps now also own at least 1 branch of the US government?
Because money is the current representation and approximation of power. It used to be "the yams," but now it's money.
You remind me of my former, younger self and I applaud the appeal you are making to our better selves. All I'm stating is simply that many people don't care, or can't be made to care. But further, there is a pontificating nature about the way you reason about these workers. In the case of my colleagues at Meta, many feel that they are so fortunate to be able to work in the US at all. Even if they did care, it would be rational for them to continue working there against their moral qualms anyways. Because no one would choose to go back to their home country and do the same work for a paltry fraction of the pay.
Not speaking philosophically. I'm just talking about my experience on the ground working with Chinese colleagues (as a fellow Chinese). Some of them are interested in global affairs, certainly, but I find that to be more common among people raised in the West.
> It's kind of hard to blame individuals when they either are rightfully consumed by worrying about their own welfare or are for whatever reason not as seminally hyperaware or woke as we can be in the west.
If you care that your employer is being unethical (such as storing your keystrokes), that's being hyperaware, woke?
I know the definition of woke can stretch like taffy, but it now seems dislodged from its origins concerning race and gender and is now just a vague disparagement of any speaking up to injustice.
Was quite tired when I wrote this; just want to be on the record saying that I don't necessarily think people in the East haphazardly do whatever they're told. There's more nuance to it than that. But I observe generally that in the East there isn't a culture of political motivation or organizing, or of democracy at all. So it's not at all surprising when people don't assign any political meaning to their work, even in cases where one so overtly exists.
Feels good to read the "ex-" part of your sentence. It'd be analogous to my supervisor sitting right behind me and keeping a super-dense protocol - no fucking way, ever.
No. It would be best if it included the higher-ups too. I think we all just assume that the C-suite, anyone who might talk to the legal department, and HR (medical info) are exempted. Or maybe Meta is just that stupid that they haven't.
This is a naive take. Do you think it stops with just metamates (lmao, that's what they call themselves) being surveilled? Nope. This is the exact type of thing that software ICs should reject in solidarity. Being happy about BadCompanyX trampling employee expectations directly allows GoodCompanyY to enact the same policies.
I'm happy to see the metamates (lol) receiving the same pain they inflict on others. Maybe it will teach them a lesson in solidarity.
You can't have solidarity about a bad thing with the people who are doing the bad thing! They have to stop doing the bad thing first! That's how solidarity works!
Don't expect any solidarity to come from such people, they literally sold out humanity for slightly higher salaries. They made their beds, least they can do is feel bad.
Why do you think they don't fully know what they are doing? They are smart folks. Now, we all know how everybody needs to be the hero of their own story, but self-deception only gets you so far in life; your subconscious will give you shit.
Don't invoke some mystery where simple greed is a perfectly sufficient explanation, with little worry about others - some could use the word 'selfish' too. US society at large seems to me structured that way: there is no social net for the unlucky, healthcare varies a lot based on disposable cash and job, and good education is only for the rich.
I've lived long enough to know that "smart" folks can be extremely dumb.
There are people who are naturally gifted and intelligent. These people can just pick up and learn different things on their own.
There are also people who do well at certain tasks and their life allowed them to obtain a higher education.
For example, I used to assume doctors were smart. However, the reality is that a small number of doctors are smart/intelligent people. Others are just going by the book, you throw them a curve ball and they fold. This applies to people at Meta, Google, NASA, and any organization.
I would argue smart/intelligent people see the negative impact of things BEFORE it affects THEM directly; there were people blowing the whistle on the real impact of smoking and fossil fuels before the information became public and well known.
I thought mass quitting in solidarity would happen when programmers realize how their work is used to train AI and replace them. How many quit because of that? Doesn't seem like many.
Apparently, money wins over principles for 99% of us. How is this different and how are we better than Meta employees?
I don't think the two things are comparable. While it would be inconvenient for me personally if I was replaced by AI, it would be an enormous social good as the resources saved could go somewhere else. The same could not be said about everyone under constant surveillance by some megacorp or the government.
Are you so sure that replacing humans is "enormous social good"? For whom is it good, exactly?
Also, capturing keystrokes and mouse movements only at work and on a work computer isn't really constant surveillance. Capturing all our code, text, photos and video (made at work or at home) seems worse, and we don't bat an eye.
I work in a non-profit sector, if they could save money by replacing me they could use the money elsewhere where they desperately need money. So lots of people would benefit. That same principle wouldn't apply if I worked for some mega corp of course.
But the discussion was about Meta employees in general. They're heavily involved in the second type of surveillance that you allude to.
Maybe in 2010 or 2015, but in 2026? Nobody is quitting their high paying job when the job market is this rough. A bubble has burst and there just are not the tech jobs out there that there used to be.
And employers know this, so they are enacting all kinds of draconian policies because they know employees know that they can't just leave the job and also keep their families fed.
The job market is at 2019 levels, so this rhetoric is nice but doesn't stack up. Yes, it's not at 2021 levels, which is when they overhired and brought in a bunch of people they would not have hired before then.
If only there were some way workers in this profession could form some type of JOIN (but, like, a vertical version?) between different sets of workers, even crossing company boundaries, so that workers could coordinate to ensure that everyone would quit at once, and therefore have any power at all to block anti-worker edicts.
It always happens to the most deserving group of people before it happens to you, and then there's no one to voice any concerns about your own fate, because they all got what you supposed they deserved.
TL;DR: The history of fascism and Nazism in the 1930s Europe.
There are large organizations at Meta focused on basic research & design (FAIR, Open Compute, PyTorch, etc) and giving back to the community. Not everyone is maximizing revenue.
Like all of us these people make a cost-benefit analysis when it comes to their choice of employer and how much it suits their purposes and personal priorities like giving back to the community.
This is just another factor they’ll have to grapple with in their analysis.
I’m sure some of them will find it a bridge too far but not enough to really matter. The work will continue as will the expansion of Meta and the negative externalities that it produces.
My favorite is, fortunately, a lot less depressing. Sinead Lohan, right on the cusp of making it big, touring with some of the biggest names in folk at the time. Realized she didn't like the music industry so she stopped and retired right then and there. I have no doubt she'd be a legendary folk name if she continued. Whatever It Takes is my favorite song by her.
Oh man. I always wondered what happened to her. She absolutely would've been huge if she had kept with it. And it probably would've destroyed her spirit.
Interesting. Unless we have different standards for what constitutes a cooked oat, maybe we're talking about slightly different things? The full-size rolled oats (sometimes called 'robust') here in Germany are nowhere close to soft (and are still distinctly floating in the milk) after simmering for 20+ minutes. The alternative is also described as rolled oats (sometimes called 'tender') but are visually smaller; that's what cooks in 5-6 minutes.
This must be different, the "old fashioned rolled oats" sold in America would be more than done after 20 minutes of simmering.
Going by Bob's Red Mill, which is an excellent brand, we've got:
* Old Fashioned rolled oats, 10 minutes: https://www.bobsredmill.com/product/regular-rolled-oats [the store brand I always see, on the other hand, is 5 minutes]
* Steel cut oats, 15-20 minutes (this is a lie, it takes longer than 20 minutes for them to get sufficiently soft in my experience, for any steel cut oat brand): https://www.bobsredmill.com/product/steel-cut-oats
They also have a second species of oats that are significantly higher in protein, and they take 15+ minutes to cook in "rolled oat" form, which from personal experience is accurate: https://www.bobsredmill.com/product/protein-oats
One thing data scientists brought to the table was statistical rigor in the models, but that seems to have left the building at this point with LLM-based solutions.