zer00eyz's comments | Hacker News

> The cost of AI isn't.

This is why there are a ton of corps running the open source models in house... Known costs, known performance, upgrade as you see fit. The consumer backlash against 4o was noted by a few orgs, and they saw the writing on the wall... they didn't want to develop against a platform built on quicksand (see the open web, apps on Facebook, and a host of other examples).

There are people out there making smart AI business decisions, to have control over performance and costs.


Someone once described agile as this: it's just pantomime and Post-it notes... implying that the process (from the outside) was more performative than anything else.

From "scrum masters" to "planning poker" it's all very silly.


That’s Scrum you are thinking of. Not agile.


2024: Industry group invalidates 2,600 official Intel CPU benchmarks — SPEC says the company's compiler used unfair optimizations to boost performance https://www.tomshardware.com/pc-components/cpus/spec-invalid...

2003: Nvidia accused of cheating in 3DMark 03 https://www.gamespot.com/articles/nvidia-accused-of-cheating...

It's almost like the benchmarks were designed with zero understanding of the history of benchmark manipulation.

I like what LLM's are doing and providing. But the industry as a whole seems to live in a vacuum that ignores so much of the hard lessons that have been learned over the last 50 years of computing. It is doing itself a disservice.


What was the cheat in the 2024 Intel situation? The TomsHardware article and the Phoronix article they linked were quite vague. (Not to say I have any doubts, just curious, hadn’t heard of this one).


Intel basically benchmaxxed their compiler optimizations. They used detailed knowledge of the benchmark to make their compiler generate machine code to do better on the benchmark in a way that was not beneficial for non-benchmark scenarios.


I assumed as much, I’m just wondering what exactly they did. For example IIRC some phone company would detect that a benchmark was running by checking for the program name, and then allow the clock to boost higher (increase thermal limits) if it was a benchmark (like you could literally avoid the cheating behavior by changing the name of the program being run).
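The name-based cheat described here is easy to sketch. This is a hypothetical illustration (the benchmark names and wattages are invented, not the actual vendor code):

```python
# Hypothetical sketch of a name-based benchmark cheat: the device
# grants a higher power limit only when the running process name
# matches a known benchmark, so renaming the binary defeats it.
KNOWN_BENCHMARKS = {"3dmark", "antutu", "geekbench"}

def power_limit_watts(process_name: str) -> float:
    base, boosted = 15.0, 25.0  # invented numbers for illustration
    return boosted if process_name.lower() in KNOWN_BENCHMARKS else base
```

Renaming `geekbench` to, say, `geekbench2` drops it back to the honest limit, which is exactly the rename-the-program behavior described above.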


> It's almost like the benchmarks were designed with zero understanding of the history of benchmark manipulation.

I wonder if this is common? We should call it Goodhart's law while someone does the research on how common it is.

For real, I’ve assumed from the jump these things were all gamed, with the amount of money on the line.


> I see people choose Kafka and SQS

SQS is dead simple, and if you're in AWS (forever) it is "in the stack" with some easy-to-use features that may make sense for you (the delay queue is a great one).
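For readers who haven't used it, the delay-queue semantics mentioned above can be modeled in a few lines. This is a toy in-memory sketch (the class and method names are mine, not AWS's API):

```python
import heapq
import time

class DelayQueue:
    """Toy model of SQS's DelaySeconds: a sent message stays
    invisible until its delay elapses. Illustrative only -- real
    SQS is a distributed service with visibility timeouts,
    redrive policies, and so on."""

    def __init__(self):
        self._heap = []  # (visible_at, body) pairs, soonest first

    def send(self, body, delay_seconds=0, now=None):
        now = time.time() if now is None else now
        heapq.heappush(self._heap, (now + delay_seconds, body))

    def receive(self, now=None):
        """Return the earliest visible message, or None."""
        now = time.time() if now is None else now
        if self._heap and self._heap[0][0] <= now:
            return heapq.heappop(self._heap)[1]
        return None
```

With the real service, the equivalent knob is the `DelaySeconds` parameter on the `SendMessage` call.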

Kafka is... a lot. If you need what it provides, then it's great. You just have to be able to support it, and that's non-trivial.

I can point to more than a handful of Kafka projects that exist because it was clear that someone wanted it on their resume. I don't think anyone is doing that with SQS; it is just a fairly good utility. However, if you want to leave (or branch out from) AWS and you're reliant on it, good luck.


CrowdStrike: no P/E, because it just had its first profitable quarter ($38 million)

Zscaler: no P/E

Palo Alto Networks (PANW): 86 P/E

Fortinet (FTNT): 31.63 P/E

That last one didn't get hit at all by the Mythos announcement, because at some level it has at least some grounding in fiscal reality.


SABRE is a reminder that things that are well designed just work.

How many banks, ERPs, and accounting systems are still running COBOL programs? (A lot.)

Think about modern web infrastructure and how we deploy...

cpu -> hypervisor -> vm -> container -> run time -> library code -> your code

Do we really need to stack all these turtles (abstractions) just to get instructions to a CPU?

Every one of those layers has offshoots to other abstractions, tools, and functionality that only add to the complexity and convolution. Languages like Rust and Go compiling down to a single executable are a step; revisiting how we deploy (the container layer) is probably on the table next... The use case for "serverless" is there (and edge compute), but the costs are still backwards because the software hasn't caught up yet.


I used to work on a project that interfaced with both SABRE and Amadeus, and "just works" isn't how I would describe it. The thing is also quite slow and annoying, as its interface is optimized for a trained operator using it in a terminal setting, not for us poor schmucks calling it through some weird API bolted on top.

Also, try to retrieve a PNR on an airline website or do like anything on the airline's own website -- the UX is usually pretty bad and the data loading takes forever. For that too the GDS is to blame.


Library code - This is necessary because some things are best done correctly, just once, and then reused. I am not going to write my own date/time handling code. Or crypto. Or image codecs.

Run time - This makes development faster. Python, Lua, and Node.js projects can typically test out small changes locally faster than Rust and C++ can recompile. (I say this as a pro Rust user - The link step is so damned slow.)

Container - This gives you a virtual instance of "apt-get". System package managers can't change, so we abstract over them and reuse working code to fit a new need. I am this very second building something in Docker that would trash my host system if I tried to install the dependencies. It's software that worked great on Ubuntu 22.04, but now I'm on Debian from 2026. Here I am reusing code that works, right?

VM - Containers aren't a security sandbox. VMs allow multiple tenants to share hardware with relative safety. I didn't panic when the Spectre hacks came out - The cloud hosts handled it at their level. Without VMs, everyone would have to run their own dedicated hardware? Would I be buying a dedicated CPU core for my proof-of-concept app? VMs are the software equivalent of the electrical grid - Instead of everyone over-provisioning with the biggest generator they might ever need, everyone shares every power station. When a transmission line drops, the lights flicker and stay on. It's awe-inspiring once you realize how much work goes into, and how much convenience comes out of, that half-second blip when you _almost_ lose power but don't.

Hypervisor - A hypervisor just manages the VMs, right?

Come on. Don't walk gaily up to fences. Most of it's here for a reason.


> Most of it's here for a reason.

Your argument for host OS, virtual OS, container is the very point I'm making. Rather than solve for security and installability, we built more tooling, more layers of abstraction. Each adds overhead, security surface, and complexity.

Rather than solve Rust's build-time performance, we switch to a language that builds faster but has more overhead, more security surface, more complexity.

You have broken down the stack of turtles that we have built to avoid solving the problem at the base level...

SABRE, the system the article is discussing, is the polar opposite of this; it gives us a hint that more layers of abstraction aren't always the path to solutions.


There are shops I know of that run a Java emulator of a GE mainframe running... Multics. Someone told me that, and I was floored. Multics.


If you poke it a little you will eventually get Java exceptions, because the AI article is lying. It is not 60-year-old code running on unchanged bare metal. Things got reimplemented over time.


Nowhere does TFA claim that SABRE or Amadeus or other similar systems are using 60-year-old code.


while we can learn from the past, we probably shouldn't look at it through sunglasses that are rose-tinted ;)

---

sabre, the company that owns and builds the current version of the SABRE system used by major companies today, uses all of those things you and the parent mentioned

> Google Cloud-native infrastructure that is scalable and secure. Microservice-enabled architecture that supports modularity. API-first approach for an open platform. [0]

> We rebuilt Sabre from the ground up: cloud-native technology, AI baked into the foundation, one goal in mind. Your success. [1]

yeah ... it's 'ai powered' now.

[0]: https://www.sabre.com/resources/viewpoints/offer-order-strat... (skip to the 'different by design' heading)

[1]: https://www.sabre.com/about/

---

> Do we really need to stack all these turtles (abstractions) just to get instructions to a CPU?

no. but those abstractions are there for things like scaling, reliability, redundancy, flexibility, ... and a bunch of other things not related to solely getting some instructions to a CPU. the number of turtles has increased because customers have more requirements for software today than they used to have in the 1960s.

sometimes we need the simplest solution with fewest dependencies. sometimes we need lots of turtles... it really depends on the problem in front of us.


Everything that is old is new again.

Payment processing is better than it was in 2000, but still not good.

Micropayments: this is obnoxiously expensive to do.

Discovery and discoverability: again, here we have better but not good solutions (and many of the ones that were once good are enshittified).

Pricing: this is a problem everywhere, and frankly we need the law to change in a way that is pro-consumer: published prices, disclosure of fees, in both services and payment processing (that 3 percent back from Visa looks a lot less attractive when it's part of a 5 percent markup).
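The parenthetical about rewards versus markup checks out with toy numbers (the prices here are invented for illustration):

```python
# If a 5% merchant markup funds a 3% card reward, the cardholder
# still pays more than the unmarked cash price.
cash_price = 100.00
card_price = cash_price * 1.05   # 5% markup baked into the sticker
reward = card_price * 0.03       # 3% back on what you actually paid
net_paid = card_price - reward
print(round(net_paid, 2))        # 101.85 -- worse than 100.00 cash
```

The reward is computed on the marked-up price, so it claws back less than the markup added.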

Customer service: well there are already companies promoting models where they cut you off and send you into a black hole (google is a prime example). Good customer service will become a differentiator, and maybe a "paid for" service as well.


> Good customer service will become a differentiator

This does not matter without antitrust, which is why customer service became bad in the first place. 30 years ago, the low quality of customer service we complain about now simply didn't exist, at any size or professional level of business, and never had.

If a company back then had the customer service of the average company now, or even the average government agency now, people would have suspected that it was a covert front for criminals or spies.

If a company doesn't have to compete, it can cut everything until it only has the ghost of a product and a billing department. You don't boycott monopolies, monopolies boycott you. If three companies put you on a list to not have internet, phone service, a bank account or a credit card, etc., you just can't have them. You've become a European human rights judge.


From their docs:

> We are creating not only a new kind of Git client,

Nope, not going to be the tool of the future.

The fundamental problem is it is still based on git.

Till this addresses submodules and makes them a first-class citizen, it's just tooling on top of a VCS that still ONLY supports single-project thinking.


It never started that way.

Time, feature changes, bugs, emergent needs of the system all drive these sorts of changes.

No amount of "clean code" is going to eliminate these problems in the long term.

All AI is doing is speed running your code base into a legacy system (like the one you describe).


> All AI is doing is speed running your code base into a legacy system

Are you implying legacy systems stop growing? Because I didn't mean to imply those companies stop growing.


Not at all.

I'm saying that in the before times, complexity emerged over time (staff changes, feature creep). AI coding (and its volume) is just speed running this issue.


> complexity emerged over time

So complexity is an issue? I don't get it. SFDC is an incredibly complex system that makes billions of dollars. Tell me why I would NOT want to be able to create a system like that with an automated tool?


> Disagree with the overall argument.

It's leaning in a good direction, but the author clearly lacks the language and understanding to articulate the actual problem, or a solution. They simply don't know what they don't know.

> Human effort is still a moat.

Also slightly off the mark. If I sat any of you down with all the equipment and supplies to make a pair of pants, the majority of you (by a massive margin) would produce a terrible pair of pants.

That's not due to lack of effort, rather lack of skill.

> judgement is as important as ever,

Not just important: critical. And it is a product of skill and experience.

Usability (a word often unused), cost, and utility are all things that people want in a product. Reliability is a requirement: to quote The Social Network, "we don't crash". And if you want to keep pace, maintainability.

> issue devs would run into before AI - the codebase becomes an incoherent mess

The big ball of mud (https://www.laputan.org/mud/) is 27 years old and still applies. But all code bases have a tendency to acquire cruft (from edge cases) that lacks good inline explanations and durable artifacts. Find me an old code base and I bet you we can find a comment referencing a bug number in a system that no longer exists.

We might as an industry need to be honest that we need to be better librarians and archivists as well.

That having been said, the article should get credit, it is at least trying to start to have the conversations that we should be having and are not.

