I think they worked that out long ago: segmenting users has no downside for them, and IPv6 offers only minor upside. It's only mobile devices that help us, but I'm sure there will be kinks in the chain that never get fixed.
That would indeed be a disaster. There is already a lot of IPv6 usage, and a lot of operating-system and hardware support that had to exist first. Starting again with something else would just introduce a third unsuccessful attempt, with... what benefits?
I think we've been shunted into an alternate universe by NAT - one which reinforces the power of large companies because we essentially cannot communicate computer to computer without going through some service.
As for security... are we really that secure running code in our browsers that we downloaded from who knows where? Is NAT really saving us?
And now here we are with IPv6 and the real age of the network could begin.
True. But I mean these are photos (from strangers that you aren’t even willing to exchange phone numbers with?). It is a really non-essential feature anyway, so most likely everybody who doesn’t have an Apple device skips it.
The real choice, though, is between (a) buying an Apple gizmo and not having to set up local networks; and (b) buying a non-Apple gizmo and having to do that.
To send files locally, why not set up a wifi hotspot?
Then you can transfer files to and from uncool people with Android or Linux phones/computers using LocalSend.
I've never found this difficult and often use hotspots when I'm overseas - it's cheaper to get internet for one phone and share it with the others for example.
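The underlying idea is just "get two devices on the same network, then move files over it." As a minimal illustration (this is not how LocalSend works internally, just a stdlib-only sketch of the same principle), one phone can run the hotspot and a laptop can serve a folder over plain HTTP:

```python
# Minimal sketch: share a directory read-only over the local network
# (e.g. a phone hotspot) using only the Python standard library.
# Any device on the same network can fetch files from
# http://<your-ip>:<port>/. There is no authentication, so use it
# only on networks you trust. serve_directory() is a name invented
# here for illustration, not part of any existing tool.
import functools
import http.server
import socketserver
import threading

def serve_directory(directory):
    """Serve `directory` over HTTP on an ephemeral port; return the server."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    server = socketserver.TCPServer(("", 0), handler)  # port 0 = pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    import pathlib, tempfile, urllib.request
    with tempfile.TemporaryDirectory() as tmp:
        (pathlib.Path(tmp) / "photo.jpg").write_bytes(b"fake image data")
        server = serve_directory(tmp)
        port = server.server_address[1]
        # Simulate a second device on the hotspot downloading the file.
        data = urllib.request.urlopen(f"http://127.0.0.1:{port}/photo.jpg").read()
        print(data == b"fake image data")  # True
        server.shutdown()
```

Tools like LocalSend add discovery and encryption on top, but the hotspot is what makes any of it possible.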
Bluetooth is also very slow. AirDrop and LocalSend achieve speed by using local wifi networks. The problem with LocalSend is that the user needs to manage the creation of the local network themselves.
Who really truly enjoys that and doesn't see it as a chore?
I find the real way to review other people's code is to program with it; then I start seeing where the problems are much more clearly. I would do a review and spot nothing important, then start working on my own follow-on change and immediately run into issues.
I usually don't mind, but I tend to split reviews into two types: either I understand the context and can quickly do an in-depth review, or I have to take some time to actually learn the code by reviewing the surrounding systems, experimenting with it, etc. But in both cases I would at least run the code and verify correctness.
I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.
> Who really truly enjoys that and doesn't see it as a chore?
This is a whole different discussion, but I just see it as part of the job that I'm getting paid for; I don't need to enjoy it to do it.
Functional testing is a must now that writing tests is also automated away by LLMs: it gives you a better sense of whether the code does what it says on the box. But there will still be a lot of hidden gotchas if you're not even looking at the code.
Plenty of LLM-written code runs fine until it doesn't, though we see this with human-written code too. So it's more about investing extra time in the hope of spotting problems before they become problems.
> Functional testing is a must now that writing tests is also automated away by LLMs: it gives you a better sense of whether the code does what it says on the box. But there will still be a lot of hidden gotchas if you're not even looking at the code.
Well, there you go. Letting AI write the tests is a mistake, IMO. When I'm working with other people I write tests too, and when I see their tests I know what they're missing, because I know the system and the existing tests. Sometimes I spot the problem in their tests while I'm working on some of my own. If you absent yourself from that process, then...
Tesler's Law of Conservation of Complexity seems immediately insightful to me just as a sentence:
"Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated."
But then the explanation seems to me to devolve into a trite suggestion not to burden your users. This doesn't interest me, because users need the level of complexity they need and no more, whatever you're doing; making it less turns your application into an inflexible toy. So this is all, to a degree, obvious.
I think it's more useful to remember, when you're refactoring, that if you try to make one part of a system simpler, you often just make another part more complex. Why write something twice only to end up with it just as bad the other way round?
I think professionals are almost always doing things that are at least 30% new... otherwise they've had a long time in one job, which is a fortunate thing nowadays.
My last job started with "here's a book about Go programming." Two years later I was learning FastAPI. Now I'm programming in C again, but I have spent most of my time learning about GitHub Actions and writing SCCS->git conversion software. I've never used SCCS before.