Hacker News | dpe82's comments

In a proper 2-loop cooling system, the primary loop (with direct electronics contact) and secondary loop (with seawater/external cooling source) are hydraulically isolated by a heat exchanger. The salt water or whatever never gets anywhere near the electronics.

The problem is, it's still in contact with something, even if that's just the secondary loop. And saltwater isn't just incredibly aggressive toward metal; the bigger problem with using it for cooling is fouling. Fish, mussels, algae, debris: plenty of things can clog up your entire setup.

Salt also comes in through the air; just being near seawater corrodes everything. Both stainless steel and bronze are very expensive, and even if some parts were made of corrosion-proof materials, not everything can be, for strength reasons.

I recently moved all my projects to a self-hosted forgejo instance and have found it quite satisfactory so far. And it's fast! If you're in the market for a github alternative, take a look - there are options.

It’s not fashionable anymore, but I feel that Phabricator deserves an honorable mention as a self-hostable GH alternative too. Actually its “dated” UI is kind of a plus considering how bad everything is now.

It's not "unfashionable". Phabricator has been unmaintained since 2021.

There is Phorge which is a community fork.

It seems that Phorge is the community-run fork of it that's still worked on.

Looks good!


I unironically love the aesthetics of Phabricator.

I also like stacked PRs (which are Mercurial's default). Maybe it's worth a shot, tbh.


I always balked at GitHub, but was impressed with git soon after I was first introduced to it. I migrated from an old Gitea instance to Forgejo for my personal projects and have been very happy with it.

What about Gitea?

Forgejo is just the one I landed on, but like I said - there are options!

A year ago I would have agreed with you, but now anyone can build a perfectly reasonable native app.

What is native on Linux?

What is native on Windows, too? These days the term "native app" is so confused that it's hard to come up with a definition that doesn't include Electron.

There are a few options on Windows, all of which are native: WPF, WinForms, and WinUI.

> These days the term native app is so confused it's hard to come up with a definition that doesn't include electron.

Electron is _not_ native.



TUIs, apparently. :)

I don't really understand the controversy; there are plenty of licenses an author can choose that restrict commercial use of a project. It feels a bit dishonest to release something under a permissive license and then be upset when someone uses your stuff well within the ways you said are perfectly OK.

So many proprietary companies are built on the back of open-source software. Yes, Warp has no legal responsibility to donate to Alacritty. But there is a moral obligation. It's not hard to see why open-source maintainers and enthusiasts look at Warp with skepticism. I didn't know that and will be uninstalling Warp, though I stopped using it months ago.

If someone expects to be compensated for their work they should be upfront about it. IMHO it's dishonest/immoral to freely give something away with no expressed expectation of reciprocity and then get upset when someone doesn't reciprocate.

>> If someone expects to be compensated for their work they should be upfront about it.

Definitely, and the Alacritty devs have never asked for anything in return for using their software and code. It's mainly others in the community looking at a commercial company forking the project, raising $50M, and not even contributing back. I've seen huge companies, or their higher-ups, sponsor developers on GitHub who are building code they use. It's not unheard of.


  YOU> hey
  C64> HELLO! RE SOUNDS ME. MEFUL!
60 seconds per token for that doesn't strike me as genuinely useful.

Very, very cool project though!


not useful in a disaster scenario:

YOU> HELP I'M DROWNING

C64> YOU' HERE!

YOU> OH NO I'M ON FIRE

C64> IGLAY!

YOU> IM BEING SWALLOWED BY A SNAKE

C64>

YOU> BIRDS ARE NIPPING ON ME

C64> YOU


Reminds me of Terry Davis' random word generator :')

Maybe there is deeper wisdom in there that we have yet to unearth


Power is not the most expensive part of data center lifetime cost; especially these days when you're filling them with several billion dollars of nvidia chips. It's still an important consideration of course, but not the only one.


I don't know if that's really true. Given realistic life cycles of equipment (~10 years, not 3 as commonly believed) the operating power is going to be 75-80% of the TCO, or more.


I don't see how that number could possibly be realistic.

An H100 cost $30k when new, and uses 500W of power.

500W for a year is about 4,400 kWh, which at $0.10/kWh is roughly $440/year if run at full utilization (unrealistic).

TCO of an AI data center should be entirely dominated by capex depreciation.


In fairness, your calculation looks at the most expensive element of the DC but ignores all of the associated parts required to utilize the H100: CPU, memory, cooling, etc. Not to say that flips the calculation (I don't have the answer), but it does leave a lot of power out.


Let's be generous and pretend the rest of the hardware is free but double the energy budget of the H100 to account for all of it along with cooling. You're still at only $1k/yr; $10k over 10 years, or 25% of the TCO (ignoring all other costs).
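The thread's back-of-envelope can be checked with a quick script. All the inputs below are the commenters' own assumptions (card price, $0.10/kWh, 10-year life, energy doubled to stand in for host hardware and cooling), not measured figures:

```python
# TCO split for a single H100, per the thread's assumptions.
capex = 30_000            # card purchase price, USD
power_w = 500             # draw at full utilization
price_per_kwh = 0.10
years = 10

kwh_per_year = power_w / 1000 * 24 * 365            # ~4,380 kWh
energy_cost = kwh_per_year * price_per_kwh * 2 * years  # 2x for host + cooling

total = capex + energy_cost
print(f"energy: ${energy_cost:,.0f} of ${total:,.0f} total ({energy_cost / total:.0%})")
# -> energy: $8,760 of $38,760 total (23%)
```

That lands close to the ~25% quoted above (the small gap is from rounding the yearly energy bill up to $1k), and well short of the 75-80% claimed earlier in the thread.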


This seems cool but man it's hard to get over the very, very obvious AI writing.


I agree that this triggered my AI writing senses. Points in favor:

- "It’s not an accident — it’s driven by the same physics." The classic "it's not x, it's y", with an em-dash thrown in for good measure

- "Typhon brings these into the component storage model — not as bolted-on workarounds, but as first-class citizens." More "not x, but y", this time with a leading clause joined by an em-dash

- "Blittable, unmanaged, fixed-size, stored contiguously per type — that’s the ECS side." Short, punchy list of examples, em-dashed to a stinger, again typical of LLM writing

- "Schema in code, not SQL. Components are C# structs with attributes, not DDL statements. Natural for game developers, unfamiliar territory for database administrators. If your team thinks in SQL, this is a paradigm shift." This whole mini-paragraph is the x/y style, combined with the triplet / rule-of-three, just at the sentence scale. And then of course, the stinger at the end.

Definitive, no, but it certainly has a particular flavor that reads as LLM output to me.


Points against: “Two Fields, One Problem” :)


I occasionally read a geopolitics blog that is one of the top search results on Google. I honestly couldn't do it anymore. Every other subheading was something along the lines of, "What the experts are saying about Ukraine--Without the Fluff".


Every top comment that I've read today is the same shit complaining about AI writing.

At some point this is not positive for the community.


Sorry — this community immune response is the future. Quit posting slop or get left behind.


Are you really sorry?


I'm not really seeing it, tbh. Maybe they used a chatbot to help them write it, but I don't immediately feel like I'm reading padded slop without actual content; it's fairly to the point. I just clicked around the blog to see if anything else feels like it, but it's mainly just very "prefab". That did teach me that the author apparently also worked on DOTS at Unity, so they at least have actual hands-on experience with game engines.


If anything, this confirms it for me. On his about page, there's this:

"Hi there, I am Loïc Baumann, I’m from Paris area, France I develop, since early 90s, first assembly, then C++ and nowadays mostly .net.

My area of interest are 3D programming, low-latency/highly-scalable/performant solutions and many other things."

Compare that style to what's in this most recent blog - mildly ungrammatical constructions typical of an ESL writer, straightforward and plain style vs breathless, feed-optimized "not x, but y", triplet/rule of three constructions, perfect native speaker grammar but an oddly hollow tone. Or look at this post from 2018: https://nockawa.github.io/microservice-or-not-microservice/ It's just radically different (at a concrete syntactic level, no emdashes). I'm sure he has technical chops and it's cool that he worked on DOTS, but I would bet a very large amount of money he wrote the bullet points describing this project and then prompted GPT 5.3 to expand them to a blog post to "save time".


"very, very obvious" and yet so could be your comment or mine. Can we stop this kind of farming comment already?


When the AI-written articles stop, the comments calling it out will stop, too.


Nitpicking: Once articles which are _obviously_ AI-written stop, the comments calling it out will (should) stop.

It is far more likely that AI-written articles will become harder to spot, not that they will stop being written.


> calling it out

Calling what out? Did we suddenly invent a durable Turing test that will last more than six months? (We didn't, but some people "just know")

The only durable metric is if the article is good, if the ideas are good. Everything else is complaining about Bob Dylan's electric guitar.


vacuous falsity isn't an interesting case to examine


Meaning the crying will never stop.


"just accept the AI slop, pleb"


are you claiming that you can't recognize default style ai writing after a paragraph or two?


Yeah the "more than a paragraph or two" is key here. Indicators of AI writing work both ways; the more text you write, the more likely you are to "slip" and use some phrasing or syntactical constructions uncommon in LLM output. (This is why AI detectors perform worse on shorter excerpts.)

I posted this elsewhere, but convincingly, consistently "writing like AI" and never slipping once takes an amount of knowledge and skill analogous to art forgery. Except that with art forgery you can at least make millions of dollars off it.


> very, very obvious" and yet so could be your comment or mine. Can we stop this kind of farming comment already?

If you want to read chatbot output, why are you coming here? There's a ton of free chatbots for you to read.

After all, the audience here knows where to go to get chatbot output, but they're coming here instead. What does that tell you?


> What does that tell you?

That HN was a neat community fifteen years ago, but like all things cool made by early adopters, it will eventually attract a following hoping to be somewhere, to exist among people doing things, but the tragedy of such followings is that they bring with them their toxicity, their immunity to their own poison, and drown out what they depend on until the early adopters early adopt away.

The real slop is all this lazy concern farming from an ant mill that is powerless to do anything except validate its own hand wringing.


> The real slop is all this lazy concern farming from an ant mill that is powerless to do anything except validate its own hand wringing.

Which circles back to the question of why, if you want to read AI output, are you still here?

You can read that sort of thing just about anywhere else.


It's also a bit odd they don't mention column-oriented databases at all.


Yeah that was my second thought. ECS' favoring of structs-of-arrays over traditional arrays-of-structs for game entities boils down to the same motivations and resulting physical layout as column-stores vs row-stores.
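The parallel the comment draws can be sketched with NumPy as a stand-in for raw memory layout. The component fields (`x`, `y`, `hp`) are made up for illustration:

```python
import numpy as np

n = 4

# Array-of-structs: one packed record per entity, fields interleaved
# in memory (the row-store layout).
aos = np.zeros(n, dtype=[("x", "f4"), ("y", "f4"), ("hp", "i4")])

# Struct-of-arrays: one dense, contiguous array per field
# (the column-store layout an ECS favors).
soa = {"x": np.zeros(n, "f4"), "y": np.zeros(n, "f4"), "hp": np.zeros(n, "i4")}

# A system that only touches positions scans two dense arrays in the
# SoA layout; in the AoS layout it strides past "hp" in every record
# (each "x" sits 12 bytes apart).
soa["x"] += 1.0
aos["x"] += 1.0
```

Same data either way; the win is purely in what the cache sees when a query (or a system) reads one column across many rows.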


Why would column-oriented databases be mentioned? My understanding is that these are typically used for OLAP, but the article seems to talk only about OLTP.


Modern database engines tend to use PAX-style storage layouts, which are column structured, regardless of use case. There is a new type of row-oriented analytic storage layout that would be even better for OLTP but it is not widely known yet so I wouldn't expect to see it mentioned.


Because there is a whole section that describes column based storage without mentioning that some databases have column based storage as an option.


This is one of the main problems I have with LLMs. It finds patterns in words but not content. I see this in code reviews and eventually outages. Something looks reasonable at the micro scale but clearly didn’t understand something important (because they don’t understand) and it causes a major issue.


Get over it


None. The magic is still in the model.


How about we legalize construction of new power sources and let the market figure it out?


> How about we legalize construction of new power sources and let the market figure it out?

Who's going to pay for the new plants, that's the issue, nothing else.

"The market" can't figure that out and gets it wrong without additional regulation. If all ratepayers pay for new capacity used only by a few corporations, for their new power needs and their own profits, these corporations get to socialize their capital expenditures while privatizing their profits - that's a form of theft, without any exaggeration.


There was a huge nuclear deregulation bill passed in 2023(?). Hopefully we'll get some reliable power out of that in 10 years.


The Trump administration seems to have tried to cancel a bunch of power sources (e.g., offshore wind).


My very fuzzy back-of-the-envelope says easily tens of thousands per day.

