Hacker News | new | past | comments | ask | show | jobs | submit | gopalv's comments | login

The AI-psychosis take is not the counter-opinion to the use of AI.

I use AI coding tools every day, but AI tools have no concept of the future.

We've relied on the selfish thinking an engineer has, the "If this breaks in prod, I won't be able to fix it, and they'll page me at 3AM" instinct, to build stable systems.

The general laziness of looking for a perfect library on CPAN so that I don't have to do the work myself (often taking longer to fail to find a library than it would take to write one by hand).

I have written thousands of lines of code with AI tools which ended up in prod, and mostly it feels natural, because since 2017 I've been telling people to write code instead of typing it all myself, and setting up pitfalls to catch bad code in testing.

But one thing it doesn't do is "write less code"[1].

[1] - https://xcancel.com/t3rmin4t0r/status/2019277780517781522/


> I use AI coding tools every day, but AI tools have no concept of the future. The selfish thinking that an engineer has when they think "If this breaks in prod, I won't be able to fix it. And they'll page me at 3AM" we've relied on to build stable systems.

Maybe it's just my prompt or something but my coding agent (Opus 4.7 based) says things like "this is the kind of thing that will blow up at 2am six months from now" all the time.


It's really inconsistent though: it takes shortcuts and leaves TODOs all the time without explicitly calling them out, so you have to pay close attention.

Sonnet is also throwing overloaded errors.

My systems are hitting exponential-backoff retries, so this might not get better; the retries overload things again.

> {'type': 'error', 'error': {'details': None, 'type': 'overloaded_error', 'message': 'Overloaded'}, 'request_id': 'req_ ...

I can see a weird spike in my cache hit-rate a few minutes before, so this might actually be some extra caching they have thrown in.
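The usual client-side mitigation for this kind of retry storm is exponential backoff with full jitter, so synchronized clients don't re-overload the service in lockstep. A minimal sketch (the exception type and function names here are hypothetical, not the actual SDK API):

```python
import random
import time

class OverloadedError(Exception):
    """Stand-in for the SDK's overloaded_error."""

def with_backoff(call, max_tries=6, base=0.5, cap=30.0):
    """Retry `call` with full-jitter exponential backoff."""
    for attempt in range(max_tries):
        try:
            return call()
        except OverloadedError:
            # Sleep a random amount in [0, min(cap, base * 2^attempt)),
            # so retries from many clients spread out instead of syncing up.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
    raise RuntimeError("still overloaded after retries")
```

The jitter is the important part: plain exponential delays keep all clients retrying at the same instants, which is exactly the re-overload failure mode described above.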


> when BigTechCos buy SmallCos and then unceremoniously kill them off fairly shortly after

There are many reasons, but in general they come down to incompetence, malice, and the small-crumbs problem.

I've done my small share of M&A DD work as an engineer, which was a lot of fun, but the effect on my sanity and my outlook was bad.

On one hand, you get to go talk to a core founder of a company and they're entirely open to you picking their brain with "Why this?" / "Did it pay off?" on the pure EV math they did in their heads.

On the other, you see what happens after your recommendation and it is not within your control to change any of it.

Incompetence is generally "Please rewrite this software to our practices" devops hell, or "Let's look for better customers for this product, ignore the old ones" in ICP land. Google and Dodgeball come to mind.

Malice is more clear-cut: "Let's buy it and shut it down, so that we don't have a threat to our business." I'm eagerly waiting to see what happens with Groq and Nvidia, for example; AWS buying Groq would've been massively different. The classic case in point is Apple buying FingerWorks and shutting it down, but launching the iPhone.

Lastly, there's the small crumbs problem (or as it has been famously said "Do not anthropomorphize the lawn mower").

A company can get bought when the product doesn't really add great value to the buyer, beyond bringing in a few people who really know the space. That small number of people then gets redistributed into a neat set of existing reqs, where they either accelerate the existing company's products based on that knowledge or, in general, fail to resurface and make a significant ripple in the future.

For example, I am wondering what will happen to Promptfoo after OpenAI.


> The reality is different. Most modern Text User Interfaces (TUIs) are often more hostile to accessibility than poorly coded graphical interfaces.

The Claude Code rendering UI is the first place where I realized the TUI is more like a DOS or Borland UI than a command-line interface.

I was poking at the CLAUDE_CODE_NO_FLICKER=1 setting when I realized what exactly this TUI is: layers of stuff drawn on top of each other with terminal codes.

I ended up reading Ink, the terminal renderer for React:

https://github.com/vadimdemedes/ink

Fascinating how it ends up looking like WordPerfect or WordStar from the past instead of pixel-based graphics.
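The "layers drawn with terminal codes" idea can be sketched with raw ANSI escapes: a TUI repaints a rectangle in place by emitting cursor-move sequences, instead of letting output scroll like a plain CLI (toy example, not how Ink is actually structured):

```python
ESC = "\x1b["

def move_to(row, col):
    # CSI "H" (CUP): move the cursor to a 1-based row/column.
    return f"{ESC}{row};{col}H"

def draw_box(row, col, lines):
    # Paint each line of a rectangle at an absolute position,
    # overdrawing whatever was on screen there before.
    return "".join(move_to(row + i, col) + line
                   for i, line in enumerate(lines))

frame = draw_box(5, 10, ["+------+", "| done |", "+------+"])
```

Re-emitting a frame like this on every state change is what produces the flicker the CLAUDE_CODE_NO_FLICKER knob is presumably there to tame.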

The usability for a vision-impaired user is about the same, though I remember braille pads for DOS tools (80x25) which worked better than all the screen readers that came later.


I tried everything to customize the colors of some elements in CC’s TUI, like the dispatched prompt bg/fg, and found it impossible.

A lot of issues that surface for people with colorblindness or adjacent sight issues also arise for eInk monitors.


The better part of this is having a local-first AI, particularly because it has tool-calling built in and structured output.

I haven't pushed out the full version[1], which uses ducklake-wasm plus this to make a completely local SQL-answering machine; for now all it does is retype prompts in the browser.

[1] - https://notmysock.org/code/voice-gemini-prompt.html
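Independent of any particular browser API, tool-calling via structured output boils down to constraining the model's reply to machine-parseable JSON and routing it to a function. A toy dispatcher (tool name and shape are made up for illustration):

```python
import json

# "Structured output" means the model replies with JSON naming a tool
# and its arguments, instead of free text we'd have to scrape.
TOOLS = {
    "run_sql": lambda query: f"rows for: {query}",  # hypothetical local tool
}

def dispatch(model_reply: str):
    call = json.loads(model_reply)   # fails loudly if the output isn't structured
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

result = dispatch('{"tool": "run_sql", "args": {"query": "SELECT 1"}}')
```

With a local model, both the parse and the tool execution stay on-device, which is what makes the fully local SQL-answering setup possible.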


Flickr was the coolest thing Yahoo had when I worked there (Brickhouse was a close second).

I really loved all the places where they snuck in "Game Never Ending" in the product, because they didn't set out to make a photo sharing product, but steered hard into that.

Flickr was the only property which was allowed their own version of PHP and despite having PHP inside, every single URL said ".gne" (Game Never Ending). I worked for the PHP team and that was my only excuse to show up to work in the SF office instead of being stuck in Sunnyvale when visiting the US.

They had all the right bits of architecture built out - the rest of Yahoo had great code (like Vespa or the graph behind Yahoo 360), but everything was more complex than it should be.

Flickr had the simplest possible approach that worked, and they tried it before building anything more complex - the image URLs, the resize queues, the way albums were stored, machine tags, GPS coordinates.

I also took a lot of photos to put up on Flickr, trying to get featured on the Explore page - it was like getting published in a magazine.

Every presentation I made had CC images sourced from Flickr; it was a true commons to share and take from.

And then Instagram happened.


I have been going back to Flickr from time to time and dropped Insta, since it's a crap place these days (like most of the big socials).

The elegance of Flickr is just nice and browsing is fun.

I wonder if there are more sites like it.


Saw some other folks starting to use https://glass.photo


That looks really interesting! Ty for that link.


+1 on Flickr being the best acquisition and product Yahoo! had.

I still have my account and old photos there. And because I licensed most of them as CC, a couple of them landed on Wikipedia because of that - felt nice.


I had everything set as CC until I noticed a photo of my very pregnant wife was getting many more views than anything else, and I found it cited in a paper on training AI. That was somehow less endearing than someone getting good use out of my images (which also happened at least once with one of my images).


> a couple of them landed on Wikipedia because of that - felt nice.

as someone who goes down many rabbit holes on wikipedia, i appreciate this comment and all of those CC photos


When I was doing more graphics-rich presentations, the CC photo resource on Flickr was really useful. (In case someone asks, I usually wasn't being paid directly for giving presentations so I convinced myself I could feel comfortable using CC content in general even with strings like non-commercial attached.)


I think the thing was that Instagram was snappy, and Flickr (to my strong recollection) was really slow.


I also loved Flickr, and Pipes was a really cool technology too.

It's cool that they used PHP; I always thought it was an RoR platform.

https://en.wikipedia.org/wiki/Yahoo_Pipes


I thought Flickr predated RoR, went to check Wikipedia: it says both were launched in 2004 but Flickr a few months earlier.


I never understood the appeal of Instagram over Flickr.


It's dumbed down. In today's world, dumb always wins.


I was trying to think when I stopped browsing and using Flickr. You just reminded me.


> Flickr was the coolest thing Yahoo had

From my point of view Yahoo destroyed Flickr. I was a happy user for many years and lost access to my photos due to authentication changes. At least Google had the decency to just shut down Reader as opposed to Yahoo's enshittification of a product that sparked joy.


Strong agree that Flickr went downhill rapidly when acquired by Yahoo - but also happy to report that it has since bounced back.

The community isn’t the same of course, but the platform itself is a joy to use again - especially as someone who got tired of Instagram when it stopped being about photography.


The linked clang PR is also very readable.

https://github.com/llvm/llvm-project/pull/181288/files

As the PR clearly points out, you can do this in a register but not inside vectors.

I don't think fastdiv has had an update in years; it's what I've used, because compilers can't do "this divisor is a constant for the next loop of 1024" the way columnar SQL needs.
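The fastdiv trick being referred to can be sketched as: precompute a "magic" multiplier for the divisor once per batch, then replace each division with a multiply and a shift (a toy version of the technique, not the fastdiv library's actual API):

```python
def magic(d, bits=32):
    """Return (m, s) such that n // d == (n * m) >> s for all 0 <= n < 2**bits."""
    s = 2 * bits
    m = -((-(1 << s)) // d)   # ceil(2**s / d)
    return m, s

def divide_batch(values, d):
    # Columnar-SQL style: d is fixed for the whole batch, so the
    # precomputation cost is amortized over every row.
    m, s = magic(d)
    return [(n * m) >> s for n in values]
```

A compiler can only do this rewrite when the divisor is a compile-time constant; doing it at runtime, per batch, is exactly the case the comment says compilers can't help with.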


> Multiplication alone requires depth-8 trees with 41+ leaves i.e. minimal operator vocabulary trades off against expression length.

That is sort of comparable to how NAND gates simplified scaling.

Division is hell on gates.

That single component was the reason scaling went the way it did.

There was only one gate structure which had to improve to make chips smaller - if a chip used 3 different kinds, then scaling would've required more than one parallel innovation to land (sort of like how LED lighting had to wait for blue).

If you need two or more components, then you have to keep switching tools instead of hammer, hammer, hammer.


I'm not sure what you mean by this? It's true that any Boolean operation can be expressed in terms of two-input NAND gates, but that's almost never how real IC designers work. A typical standard cell library has lots of primitives, including all common gates and up to entire flip-flops and RAMs, each individually optimized at a transistor level. Realization with NAND2 and nothing else would be possible, but much less efficient.

Efficient numerical libraries likewise contain lots of redundancy. For example, sqrt(x) is mathematically equivalent to pow(x, 0.5), but sqrt(x) is still typically provided separately and faster. Anyone who thinks that eml() function is supposed to lead directly to more efficient computation has missed the point of this (interesting) work.


Yeah, what you're going to get is more efficient proofs: you can do induction on one case to get results about elementary functions. Not sure where anyone's getting computational efficiency thoughts from this.


Are you under the impression that CPUs are made exclusively from NAND gates? You can't be serious.


Might’ve gotten mixed up with CMOS dominance, or I’m ignorant.


https://en.wikipedia.org/wiki/Mead%E2%80%93Conway_VLSI_chip_...

I'm guessing that's what they're really talking about. Which is not about NAND gates.


Just to add a bit, but modern digital circuits are almost exclusively MOS, but even the "complementary" bit isn't universal in a large IC.


I believe you're not ignorant. But many folks probably lack the process knowledge (CMOS) required to understand why :-)


The first part of the parabellum quote matters - we have to let the people who want peace prepare for war.

The Smedley Butler book was eye-opening for me to read.

Diplomacy and trade works wonders when the enemy still wants you to buy things.

Sanctions work when they've got things to sell (and raw materials to buy), not bombed out craters where their factories were.

Si vis pacem ...


The aposiopesis is presumably followed by some Latin phrasing of "prepare for war"?

[edit, found the real version https://en.wikipedia.org/wiki/Si_vis_pacem%2C_para_bellum ]

adapted from a statement found in Roman author Publius Flavius Vegetius Renatus's tract De Re Militari (fourth or fifth century AD), in which the actual phrasing is Igitur qui desiderat pacem, præparet bellum ("Therefore let him who desires peace prepare for war").


>> It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules

This is the sort of thing which was done before in a world where there was NUMA, but that is easy. Just taskset and mbind your way around it to keep your copies in both places.

The crazy part of what she's done is determining that the two copies don't get hit by refresh cycles at the same time.

Particularly by experimenting on something proprietary like Graviton.


She determines that by having three copies. Or four. Or eight.

Tis just probabilities and unlikelihood of hitting a refresh cycle across that many memory channels all at once.
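The back-of-the-envelope math here: if each channel is "dark" (mid-refresh) for some small fraction of the time, the chance that k independent channels are all dark simultaneously falls off as that fraction to the k-th power. A sketch with hypothetical DDR4-ish timings (these numbers are assumptions, not from her work):

```python
# Assumed timings: refresh commands ~every 7.8 us (tREFI), each
# blocking the rank for ~350 ns (tRFC).
T_REFI_NS = 7800.0
T_RFC_NS = 350.0

# Fraction of time a single channel is unavailable due to refresh.
p = T_RFC_NS / T_REFI_NS   # ~4.5%

def all_dark(k):
    # Probability all k independent, uncorrelated channels refresh at once.
    return p ** k

for k in (1, 2, 3, 4, 8):
    print(f"{k} copies: {all_dark(k):.2e}")
```

With eight uncorrelated copies the simultaneous-refresh probability drops below one in ten billion, which is why "just add more replicas" works.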


Right, but the impressive part is finding addresses that are actually on different memory channels.


Surprising to me that two memory channels are separated by as little as 256 bytes. The short distance makes it easier to find, surely?


Access optimization, or interleaving at a lower level than linearly mapping DIMMs and channels. The x86 cache line size is 64 bytes, so it must be a multiple - probably 64*2^n bytes.
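A toy model of what a 256-byte interleave means, assuming a simple linear stripe (real memory controllers often hash higher address bits into the channel selector instead):

```python
STRIPE = 256        # hypothetical interleave granularity, a 4x multiple of the 64-byte line
N_CHANNELS = 2

def channel_of(addr):
    # Consecutive 256-byte stripes alternate between channels.
    return (addr // STRIPE) % N_CHANNELS
```

Under this model, two addresses just 256 bytes apart already sit on different channels, which is what makes replica placement findable with small probes.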


"This is the sort of thing which was done before in a world where there was NUMA"

You sound like NUMA is dead - is this a bit of hyperbole, or would you really say there is no NUMA anymore? Honest question, because I am out of touch.


EPYC chips have multiple levels of NUMA - one across CCDs on the one chip, and another between chips in different motherboard sockets. As a user under Linux you can treat it as if it were simple SMP, but you'll get quite a bit less performance.

Home PCs don’t do NUMA as much anymore because of the number of cores and threads you can get on one core complex. The technology certainly still exists and is still relevant.

