> My point is that I have an issue with his tone and rhetoric, not with the thing he’s advocating.
This is often a hand-wavy excuse by people who simply don’t agree with a cause and/or think it’s not important, but won’t admit it. If you don’t think what he is advocating for is important, just say so. If you do, support him. You can’t possibly believe he’s so out of line that it’s worth prioritizing that opinion over the cause itself.
To be clear: I really do agree with his message, and I appreciate the tremendous impact he has had on right to repair, DRM, and all these things. So in that sense, I support him.
It’s just that his style is really appalling to me. Am I not allowed to criticize his style while at the same time supporting his stance?
I think it’s unfair to then imply I must not be admitting that I’m against his cause and am using it as an excuse, because nothing I have said indicates this.
> can wrangle into solving tedious, but straightforward, problems correctly. It still makes a ton of mistakes and needs to be very rigidly guided,
I don’t know about the rest of y’all, but I find “rigidly guiding” LLMs incredibly tedious and frustrating, in the same way that seeing the same error thrown for the 40th time while troubleshooting something on my computer for two hours is frustrating. It also feels somewhat like micromanaging a direct report. I don’t find that process fun or enjoyable in the slightest, and it teaches me little in the process. It’s just trading styles of work, and I guess the response to that is “some people prefer that style of work.” I just don’t like being told by the world that we all have to work that way now, I guess.
I agree. I find it endlessly frustrating and kind of hate what programming has become. But at least for me it now meets the minimum bar of “it works if you push things.” For past models, under no circumstances could I get them to semi-reliably solve these kinds of problems correctly without giving them so many “hints” that they weren’t actually saving me time. The kind of reasoning I’m talking about is stuff like “can you actually construct a trace from program start for this condition that looks locally reachable?” Past models simply couldn’t reliably answer such questions as soon as the control flow involved enough hops or required tracing through enough function calls.
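Concretely, here’s a toy, hypothetical sketch (all names made up) of the kind of question I mean: the “debug” branch below looks reachable if you only read handle(), but tracing from program start shows no caller can ever produce that value.

```python
def read_mode(raw: str) -> str:
    # Normalizes anything unexpected to "safe".
    return raw if raw in ("fast", "safe") else "safe"

def handle(mode: str) -> str:
    if mode == "debug":  # looks locally reachable...
        return "debug hooks enabled"  # ...but no caller can ever pass "debug"
    return f"running in {mode} mode"

def main(user_input: str) -> str:
    # Every path to handle() goes through read_mode(), which filters "debug" out.
    return handle(read_mode(user_input))

if __name__ == "__main__":
    print(main("debug"))  # prints: running in safe mode
```

Answering “is that debug branch actually reachable?” correctly requires tracing the whole chain from main(), which is exactly where older models fell apart once the hops piled up.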
As someone with ADHD, it’s really a problem. I have so many random documents of random outputs from prompts I didn’t track. It’s honestly accelerated some of my worst habits, because it feels like I actually completed a task. The reality is I just have folders of half-finished projects, which anyone with ADHD can relate to.
I feel kind of lucky, in a way, that I hate working with AI so much. I'd rather hammer nails through my fingers than spend my time prompting.
So my ADHD isn't being satisfied by those little dopamine hits from LLMs. Any time I'm forced to use them I'm mad about it, and can't wait to be done with it.
I still have that folder of half-finished things just like you, though. It's just not AI-generated.
My current bar is “if you know I’m expecting to hear from a person, don’t paste unedited ChatGPT outputs and hit send.” Everybody wants to send out the efforts of their corner-cutting, but nobody wants to receive them.
Most people know when they are doing it. If you feel the need to obscure your LLM usage, it means you didn’t put enough of your own voice and work into the final draft and you need to do something about that.
I’d go a step further and say there is never a good reason to share unedited AI output.
The closest acceptable thing to share is the full chat, including your prompts. If the output is useful enough to share, then the human thought process that led to the AI output is almost always more useful than the output itself.
The asymmetry is that lots of people want to use LLMs to produce things, and nobody wants to consume the things LLMs produce.
The Nash equilibrium here is that the market has to find a way for the people producing things with LLMs to pay people to consume them, and the market always finds a way.
Not quite. Ultimately the lion’s share of model producers’ income is coming from firms.
Firms are only going to pay out to model producers if they are getting returns in excess of the cost of financing projects over time. If a firm does not see this happen, it reduces its spend on tokens. Simple.
It’s a whole lot more nuanced than some shitty game theory.
That may be the case, but every day LLMs feel less like the next big thing and more like 3D printing. Here to stay, but not nearly as ubiquitous and earth-shattering as people made it out to be.
If I had to guess right now, I would say LLMs are more significant than 3D printers, but less significant than the Internet.
I've thought the 3D-printing analogy is pretty apt for about a year now. It had a lot of promise at first, but it never quite had the impact people thought it would. There are still 3D printers for sale, and people still prototype with them, but nobody's printing out a dustpan when they need one.
I think you’re missing the thrust of my comment and the responses. Nobody is saying 3D printers are worthless, but if you remember what it was like when they were first emerging into the mainstream, you would think we would all have one in our living rooms by now, just spitting out everything we need constantly. We would all be building our own furniture and repairing every niche thing in our house with them. We’d all be on some magical network sharing files with each other. We’d have a massive surge in printed guns.
Everything was theorized and it all was a variation of “nothing will be the same for anyone ever again,” not “some specific areas will be really different.”
I'd say that's a pretty accurate analysis. Something that is easily generated by an LLM obviously has low value and there is no moat.
Agentic coding is a bit different, particularly if a great deal of effort and intelligence goes into it, but that's quite a different thing than just cranking out slop apps.
Yeah, there is no doubt that some companies are going to radically change their operations because of agentic coding in particular. But the revolution that is being promised, and the investment that has gone along with it, is going to smash against some pretty nasty shoals of reality sooner rather than later.
Some are going to radically change their operations, but we have yet to actually see if the ROI on that comes through for them. It will be an interesting thing to watch.
Fair point. My implication (though I completely failed to indicate it, lol) is that for some companies it will be a huge, mostly positive change, I imagine. But it won’t be the majority of the companies trying to make that happen right now, that’s for sure. Unless we want to consider every company deploying a chatbot for user support, I guess…though I wouldn’t exactly say that is the massive leap in technology AI is promising.
A lot of the time I will just say “Gemini/Claude is telling me…”, just like I would for a Google search result. It’s sometimes helpful to use the common wisdom embedded in the LLMs as a starting point for the discussion.
I also don’t get why people keep saying “who cares so long as it’s correct?”
That’s a huge assumption! And I care a lot, because I want to know a person looked at the result and decided it was correct. If you don’t do that, you’re dumping that work on to me and ignoring that I asked you for a reason.
Anyone can open up ChatGPT and ask for a quick answer. What on earth makes people think I want them to just do that for me when I ask them a question?
> until we collectively redefine and enforce a value system that benefits us all
Tons of us called for common sense guard rails and a little bit of actual intention as we rolled out LLMs, but we were all shouted down as “luddites” who were “obstructing progress.”
We all knew this was coming. It’s been incredibly frustrating knowing how preventable so much of it has been and will continue to be.
Edit: these responses are absurd. Banning GPUs…? What are you on about? Who said anything about stopping or banning LLMs? Did none of you see “guard rails”? “A little bit of actual intention”? Where are you getting these extreme interpretations?
I’m talking basic regulatory framework stuff. Regulations around disclosure, usage, access, etc. You know, all the stuff we neglected and are now paying for with social media in droves? We have done this song and dance so many times. No one is going to take away your precious robot helper; we’re just saying “maybe we should think about this for more than two seconds and not be completely blinded by dollar signs.” I mean, people have literally died in my state because Zuckerberg wants to save a few bucks building his data center.
It feels like AI evangelists come out of the woodwork, seething, if anybody even implies you shouldn’t be allowed to do literally whatever you want at all times.
Sigh, and what guardrails are common sense? Are those the same level of common sense as those advocated for guns (and narrowed down at every possible opportunity)? Some of us see this tech as possibly revolutionary, and thanks to useful individuals calling for muzzling that tech, we now have the worst of both worlds: centrally controlled, not really open (weights are just weights -- though Meta actually deserves some credit here), and heavily muzzled.
Clearly, the powers that be learned all too well from the internet rollout.
“Sigh” is “I don’t respect you and will now talk down to you.”
“Common sense” at least invites the question, “what do you consider common sense solutions?” and if I were to balk at that then clearly I’m not discussing the topic in good faith.
But it's just argument bait, isn't it? What are the odds that the guardrails you consider common sense will agree with my own?
It's like how 90% of people might be in favor of "common sense gun control," but when you drill down and propose specific gun-control measures, you find that it didn't help a bit to start the conversation by invoking "common sense." We're seeing the same with AI.
Except that it's not preventable. Technology is always an arms race. If you don't create it, someone else will, and then they'll have the advantage and subjugate you, so you might as well be the one to do it first. Whatever it is that you're trying to prevent, someone is going to do it if it gives them power.
It wasn't "preventable" though. How would you prevent what's been happening? Pass a law making GPUs illegal? Just… "convince" everyone that the machine that can write working software and business letters, and render good-enough banner and print advertising for nearly free, is evil, and just don't use it (ask Emily Bender how that's going)? There is no realistic way of stopping any of this from happening. We need a different approach.
> common sense guard rails and a little bit of actual intention
The issue is that you seem to be proposing nothing but platitudes, and when called on it, you did not elaborate but high-tailed it to the cloak of the misunderstood defender of sense and sensibility.
I understand what you mean; Claude is a tool and does not have feelings, that's clear to me. But how else can I describe what I did? "Wrote to Claude" has the same issue. Posted, typed, inputted?
“I used Claude to…” “I tried to X using Claude” etc
Anyway, doesn’t matter. I’m just kind of whining; I probably should’ve never written that comment in the first place. I think it just sticks out to me, unlike a lot of common parlance in other industries (which can definitely steer into anthropomorphizing), because we’re seeing all kinds of issues with people attributing actual intelligence to these things or just experiencing general psychological distress because of them. Using language that ascribes human characteristics to describe using LLMs just feels weird in that context.
Given these machines are the product of massive intentional and increasingly successful efforts to humanize computers, increased anthropomorphization is appropriate.
The behavior/attribute overlap isn't a coincidence or misunderstanding, it is by design.
In the case of "ask", that describes our behavior, not the machine's.
But if a machine is able to recall and use some fact fluently then it makes sense to say it "knows" it. We routinely use words like "know", without any confusion, when talking about simpler lifeforms that are far less human-like than these models.
None of the above means the machine feels pain, is conscious, has a continuous identity, etc. Yet.
> I think there are lots of other things going on there over and above the moderation issue, but one is that the early Internet culture was very self-selected for people who thought that the ability to talk to people and the ability to access information were morally virtuous.
Honestly, I think it mostly self-selected based on who had the technical ability to participate, especially at that time.
Also early internet access was gated by institutions. Most people were using their work or school internet access to be online, and so behavior was naturally more controlled. When I was first online (circa 1990), I could have been "kicked off the internet" by my college's IT department.
I was talking about this in a thread yesterday. It’s why I don’t like blogs that are just LLM-generated. I don’t care how good you think it is; I don’t care that you consider a facsimile of you good enough. If I want a rote, boring LLM response, I will prompt it myself. I do not appreciate reading blogs and other content assumed to be human-generated and having somebody attempt to trick me into reading their prompt results like some annoying middleman.
I came to your blog to read what you had to say. Why are you writing a blog if you aren’t even going to write it?