Interestingly, that kind of parallels the real world too: if you want a quick and high level answer, talk to someone in person; if you want something detailed and info-dense, get them to write it down.
The AI attached to their voice chat is running a completely different model. Ask any question and you quickly realize it is completely, unapologetically lobotomized. If you want to talk to it about how you feel after your gf/bf broke up with you, it is fine. If you want to ask it something about tunneling machines and how tunneling through different types of rock impacts engineering decisions, it is going to skim the first four sentences of some blog article and then defend whatever hill it has chosen until it dies, regardless of what the larger body of work on the topic says. OpenAI's voice chat being so bad and so totally divorced from their SOTA models is largely why I cancelled my subscription. I am tempted to wire up piper/whisper and the OAI API to get back what I actually want/need. But today you cannot have a conversation about engineering questions and get anything close to factually reliable answers out of it.
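For what it's worth, the piper/whisper/API wiring mentioned above is a fairly small amount of glue code. Here is a minimal sketch of one round trip (audio in, transcribe with whisper, answer with a chat model, speak with piper). The model names, the piper CLI flags, and the voice file path are assumptions — check them against your installed versions:

```python
# Sketch of a push-to-talk loop: local STT (whisper) -> LLM -> local TTS (piper).
# Assumptions: `pip install openai-whisper openai`, a piper binary on PATH,
# an OPENAI_API_KEY in the environment, and a downloaded piper voice model.
import subprocess


def transcribe(wav_path: str) -> str:
    """Transcribe a WAV file with a local whisper model."""
    import whisper  # lazy import: only needed for the real pipeline
    model = whisper.load_model("base")
    return model.transcribe(wav_path)["text"]


def ask_llm(prompt: str) -> str:
    """Send the transcribed question to a chat model (model name is an assumption)."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def speak(text: str, out_wav: str = "reply.wav") -> None:
    """Synthesize speech with piper; it reads text on stdin (flags are assumptions)."""
    subprocess.run(
        ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", out_wav],
        input=text.encode(),
        check=True,
    )


def voice_turn(wav_path: str, stt=transcribe, llm=ask_llm, tts=speak) -> str:
    """One round trip: audio in -> text -> answer -> audio out."""
    question = stt(wav_path)
    answer = llm(question)
    tts(answer)
    return answer
```

The three stages are injectable so you can swap in a different STT/TTS (or stub them out for testing) without touching the loop itself.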
Turning off advanced voice still leaves "regular" voice interaction, which is actually (for me) much, much better: it's just the regular response, verbalized :). Voice quality isn't worse; it just doesn't try to summarize everything in one casual sentence.
(I still hate that the voice is getting more and more "natural" - the umms and ahhs and weird pauses)
The entire city shuts down and loses its mind with just a millimeter or two of snow here. Last time we got 0.25 of an inch, there were ~9 accidents within a 2-mile span on the highway in the morning, and we just ended up shutting the highway down for the day.
I love Waymo in other cities, but it'd be especially helpful here during the 1 day every other year that we actually get any snow ... if we ever get snow here again.
Stars for me are basically "this might be interesting but I don't have time to look at it now, hopefully I'll think about it later and give it a second look".
> Remote Work Protections: Leadership is instituting a mandatory RTO, forcing numerous remote employees across the US to work from a physical office or be forced to resign.
This alone is enough reason for anyone to be up in arms and unionize. Kudos to them for banding together for better working conditions. I hope more big companies follow suit.
> It's literally higher leverage for me to go for a walk if Claude goes down than to write code because if I come back refreshed and Claude is working an hour later then I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.
Taking more breaks and "not working" during the work day sounds like something we should probably be striving for more as a society.
This was always the undelivered promise of "tech" in my opinion. I remember seeing the Apple advertisement from the 80s (??) when a guy gets a computer and then basically spends his afternoon chilling.
Somehow I've found myself living in a fairly rural place, and while farming can be hard (I don't want to downplay the effort of it), the type of farming people do around me is fairly chill and carefree. They work hard, but they finish at 3pm, log off, and don't think about work. Much of my career has just been getting crushed by long hours, tight deadlines, and missing out on events, because even though my job has always been automation focused, there is just so much to automate.
This is absolutely something to potentially be worried about, but one thing I never see highlighted in critiques of AI-assisted cognition is that some elements of physiology may not actually be biologically necessary if they can be fully supplanted by some replacement (in this case, new tools). I can't traverse as much land on foot as my ancestors did (my muscles are weaker, my endurance is less, etc), but I can travel even further than they could by car/plane/etc.
Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.
Comparing LLMs to movement aids is an interesting analogy. Imagine a world in which they are similarly prescribed to people who legitimately need them to function, and withheld from the general public to keep people healthy and public spaces pleasant to use.
Setting aside medical movement aids for a moment, I am reminded of places where people commonly ride various kinds of scooters on sidewalks.
There is a particular feeling of unfairness when you are pitted against what is essentially a small vehicle zipping past you with little warning, easily going double your speed without any physical effort from the rider. I remember seeing people in Seoul, especially older people, being startled by, and occasionally having to almost jump out of the way of, this sort of traffic, which has the right of way. I won't lie, I like that riding those things is illegal where I am now.
Let’s talk about medical movement aids, though.
The analogy gets interesting here. Unlike the various scooters, these aids are normally restricted to average walking speed, though I imagine “jailbreaking” them is probably a thing, too.
On the flip side, I know for a fact that there are places where perfectly able people are known to ride purported medical movement aids (just for kicks, or in protest). Is this a bad thing? Who is to say whether one is disabled or not anyway? If one is physically able but buys this machine, should one have the freedom to drive around on the sidewalk? Why don't we just do it by default? What about a flipped world where everybody drives a movement aid everywhere and only special people (Olympian athletes, weirdos, etc.) ever walk?
Only either indirectly or extremely partially. Everybody was driving by the 40s in the US, but the obesity epidemic didn't start until the late 70s. Really, obesity is an example of exactly what the GP was talking about: our evolved relationship with food is not inherently good, and it's better for us to change our behaviors than to abandon our advancements and return to the food-scarce world we're adapted to.
>our evolved relationship with food is not inherently good, and it's better for us to change our behaviors than to abandon our advancements and return to the food-scarce world we're adapted to.
So are you arguing we should change our relationship with human intelligence? What does that even mean?
My intent was to argue that the obesity epidemic is not a supporting analogy for their point. I didn't mean to imply that the problems with it analogize back to AI.