Hacker News | adam_arthur's comments

Strong typing has clearly won.

However, verbose typing is likely a negative for LLMs.

Algorithms written in "pseudo-code", aka a higher level language without type information, are far more readable to a human, and thus likely an LLM too.

With regard to control flow and the general sense of what code is doing, types provide very little information over well-named variables. In fact, they often impair understanding by breaking up logic with implementation details.

I'd be curious to see some experiments around this, but I'd guess strongly typed languages where the type information is mostly hidden/inferred would have better generation accuracy from a semantics perspective (and likely worse from a type safety perspective, but can be corrected on compile/retry)


> Algorithms written in "pseudo-code", aka a higher level language without type information, are far more readable to a human, and thus likely an LLM too.

What's the basis of this claim? LLMs are trained on many, many more lines of real code than of pseudo-code.

Also, I agree: anecdotally, the self-correction is a key benefit of static types. If there is a mistake, it is caught at compile time rather than at runtime.


It seems clear to me from first principles.

Humans are trained on human language. LLMs are trained on human language.

Thus something that is easier for a human to understand is likely easier for an LLM to understand.

That higher level language with well named variables reads more comprehensibly than code:VERB with:PREPOSITION types:NOUN, intermixed:ADJECTIVE, stems:VERB from:PREPOSITION first:ADJECTIVE principles:NOUN too:ADVERB


For models as complex as these I'm not confident we can apply arguments from first principles; we could just as easily argue that type information is helpful, from first principles. What is much more useful is empirical evidence, and AutoCodeBench [1] found that LLMs are most proficient in Elixir (dynamic) followed by Kotlin (static), with Rust and PHP at the bottom. So it would seem like, as of publication, typing style doesn't really matter!

[1] https://autocodebench.github.io/


As far as the AI is concerned, it's more like

Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo

versus

Buffalo:PN buffalo:N Buffalo:PN buffalo:N buffalo:V buffalo:V Buffalo:PN buffalo:N

I think the second one makes much more sense.


In the rare case that all your concepts use the exact same descriptive word, you are probably right!

The majority of the time you can infer the type from reading well written code (to the extent that the shape of the type matters in the context of that piece of code)


You are basically rehashing the false beliefs of the codeless programming camp. Human language that is 99% correct earns a standing ovation for a speechwriter; software that is 99% correct means paying a cyber ransom.

If the type can be inferred by the reader it should be inferred by the type system and at least be available to the LLM as a query. But we're also talking about dynamic languages in which type cannot be inferred until runtime. What's the type of x?

x = y + z

Well, that depends on the types of y and z, which themselves may depend on the types of other operands, which may not be known until the program actually runs. All that inference takes a lot of thinking, which takes tokens, which cost money. Why not just write the types down? Although we call these things "inference engines", they're really pattern-matching explicit tokens, so it's better to actually write down the types so they can be pattern-matched than to figure them out at inference time.
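To make that concrete, here's a minimal Python sketch (illustrative function and names) showing that the type of `x = y + z` is genuinely unknowable before the program runs:

```python
# In a dynamic language, the type of x = y + z depends entirely
# on what y and z happen to hold at runtime.
def combine(y, z):
    return y + z

print(type(combine(1, 2)))       # <class 'int'>
print(type(combine("a", "b")))   # <class 'str'>
print(type(combine([1], [2])))   # <class 'list'>
```

The same expression yields an int, a str, or a list depending on its inputs, so any reader (human or LLM) must trace the operands back to infer the type.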


Strong typing is not a synonym for static typing, it refers to a different aspect of type safety.

Static typing is, roughly, where variables and expressions have fixed types that can be determined ahead of execution. Strong typing means the language doesn't offer implicit type conversions. Python is dynamically typed, i.e. not statically typed, and strongly typed. (Ignoring its type annotations feature, of course.)
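A quick Python sketch of the distinction (illustrative, not exhaustive):

```python
# Python is dynamically typed: a name can be rebound to any type.
x = 1
x = "now a string"  # no static type to violate

# But it is strongly typed: no implicit conversions between types.
try:
    result = "1" + 1  # JavaScript would happily coerce this to "11"
except TypeError:
    result = "rejected at runtime"

print(result)  # rejected at runtime
```

So Python's types are checked, just at runtime rather than ahead of execution.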


I think Java shows that too much naming is a horrible idea even if there is a good type layer. It's exceptionally easy to create homonyms and other problems that feed errors into LLMs, and if every mix of nonsense runs, that's all the worse. Add to that attempts to string many English words together and there are no waypoints left. Everything is an exhausting essay by Dickens where two very long sentences are subtly different (to look at, but not in results).

> types provide very little info over well named variables

Types guarantee invariants at compile time, adding type info to a variable name is just a prayer that the next human or robot will enforce the invariants with respect to that type when it matters. This is like saying you don't need a saw stop because you should just avoid sticking your hand in the saw blade.
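A minimal sketch of that difference, assuming a static checker like mypy is run over the code; the `Meters`/`Feet` names are hypothetical:

```python
from typing import NewType

# Distinct types for values that share a runtime representation.
Meters = NewType("Meters", float)
Feet = NewType("Feet", float)

def stopping_distance(speed_mps: float, reaction_s: float) -> Meters:
    return Meters(speed_mps * reaction_s)

runway_feet = Feet(8000.0)

# A naming convention alone can't stop a unit mix-up; a static
# checker rejects it before the code ever runs:
#   needed: Meters = runway_feet   # mypy: incompatible types
needed: Meters = stopping_distance(70.0, 2.0)
print(needed)  # 140.0
```

The variable name `runway_feet` is a prayer; the `Feet` type is an enforced invariant.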


This is a lot of speculation with absolutely nothing backing it.

I recently found Gemma 3n E4B surprisingly effective for small "classification" style tasks for something I'm doing at work.

In this case, picking out "semantic" css classes on single dom nodes.

Was able to run it on my 4(?) year old M2 mbp with 16GB of ram and it runs in only 100ms or so per query. Probably it can run much faster, but haven't experimented with batching etc

With tight and targeted context control, you can use extremely small models for useful things. Ideally with problems where the harness can be mostly deterministic and you have known bounds on what you're trying to do


The blackout started back in January before the US even got involved.

Due to widespread protests and an attempt to crack down on coordination. This chain of events was widely reported.

https://en.wikipedia.org/wiki/2026_Internet_blackout_in_Iran


It definitely ramped up with the invasion. I watched the webcam streams go dark.

That was before the US started an open war. The US has been involved in a relentless anti-Iranian campaign since before I was born (I'm 55).

Fear can create a self-fulfilling problem, even if the fundamentals don't warrant that problem.

I don't know in this case, but I'd guess

1) Cyber security risk (straightforward)

2) Software stocks have been declining materially on AI competition fears, which can potentially impair loans to those companies, which cascades if there are enough defaults.

The second is mostly just driven by misinformed investors IMO, but the valuation haircuts as a result are real and can impair existing loans.

(Not to say these software companies weren't overvalued to begin with, just the reasoning for the decline is misplaced imo)


On the other hand, I never understood the focus on computer use.

While more general and perhaps the "ideal" end state once models run cheaply enough, you're always going to suffer much higher latency and reduced cognitive performance versus API/programmatically driven workflows. And it's strictly more expensive for the same result.

Why not update software to use API first workflows instead?


I don't understand the position that learning through inference/example is somehow inferior to a top down/rules based learning.

Humans learn many, and perhaps even the majority, of things through observed examples and inference of the "rules". Not from primers and top down explanation.

E.g. Observing language as a baby. Suddenly you can speak grammatically correctly even if you can't explain the grammar rules.

Or: Observing a game being played to form an understanding of the rules, rather than reading the rulebook

Further: the majority of "novel" insights are simply the combination of existing ideas.

Look at any new invention, music, art etc and you can almost always reasonably explain how the creator reached that endpoint. Even if it is a particularly novel combination of existing concepts.


Which is exactly how humans learn many things too.

E.g. observing a game being played to form an understanding of the rules, rather than reading the rulebook

Or: Observing language as a baby. Suddenly you can speak grammatically correctly even if you can't explain the grammar rules.


The 9%-of-borrowers-defaulting stat cited in the title is not the same as 9% of the loan book defaulting.

As stated in the article, 9% is the number of borrowers that defaulted, which was concentrated in smaller borrowers (thus smaller loans).

And then, again, you can say probably half of the dollar amount of those defaults are recoverable.

Bond defaults spiked to around 6% in aggregate in 2008, to use a worst case example.
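For rough intuition, here's the back-of-the-envelope loss arithmetic using the figures from this thread (treating 9% as the dollar share in default, which overstates it, since defaults were concentrated in smaller loans):

```python
# Expected portfolio loss = share of dollars in default * (1 - recovery rate).
# Figures are illustrative, taken from the discussion above.
default_share = 0.09   # upper bound: 9% of *borrowers*, not of dollars
recovery_rate = 0.50   # ~half of defaulted dollars assumed recoverable

expected_loss = default_share * (1 - recovery_rate)
print(f"{expected_loss:.1%}")  # 4.5%
```

Even with the overstated input, the implied portfolio loss is ~4.5%, not 9%.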


There is so much misinformed fear-mongering about private credit right now.

Important Facts:

1) The majority of private credit funds are classed as "permanent capital". When you put money into these vehicles, you give the Asset Manager discretion over when to give the money back. Redemptions are often gated at ~5% per quarter.

(So there cannot, by definition, be a run on the bank)

2) Credit is senior to equity, so if you expect mass defaults in private credit, it means the majority of private equity is effectively wiped out. Private equity has to be effectively a 0 before private credit takes any losses.

3) The average "recovery rate" for senior secured loans is 80%. Even if private equity gets wiped to 0, the loss that private credit incurs is cushioned significantly by the collateral backing the loan. These are not unsecured loans the borrower can just walk away from.

(The price of senior secured loans dropped by ~30% in 2008, as a worst case datapoint)

4) Default rates at many of the major private credit managers are under ~1% in recent years. Other estimates state higher default rates, but those often classify PIK income as a default. A loan modified and extended with added PIK that ultimately gets repaid is not a "true" default.

5) Finally, it's true that NAVs are likely overstated, but generally it's by a modest amount. Every Asset Manager today could go out tomorrow, mark NAVs down by 20% and suddenly there is no crisis.

(The stocks of Asset Managers have already traded down such that this seems expected and priced in anyway)
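As a rough sketch of point 1, here's how slowly a gated fund pays out even if an investor demands a full redemption (gate mechanics vary by fund; 5% of the remaining balance per quarter is an assumption for illustration):

```python
# Sketch: how long does a full exit take if redemptions are gated
# at 5% of the investor's remaining balance per quarter?
balance = 100.0
quarters = 0
while balance > 1.0:           # until <1% of the stake remains
    balance -= balance * 0.05  # at most 5% paid out each quarter
    quarters += 1

print(quarters)  # 90 quarters, i.e. over two decades
```

With outflows capped like this, a bank-run dynamic is structurally impossible: there is no mechanism by which everyone can exit at once.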


> Private equity has to be effectively a 0 before private credit takes any losses

Technically yes. But the overlap between private equity as it's commonly described and private credit is slim.

> average "recovery rate" for senior secured loans is 80%

Oooh, source? (I'm curious for when this was measured.)

> A loan modified and extended with added PIK that ultimately gets repaid is not a "true" default

True. It's a red flag, nonetheless.

> Every Asset Manager today could go out tomorrow, mark NAVs down by 20% and suddenly there is no crisis

Correct. The question is if 20% is enough, and if a 20% markdown creates a vicious cycle as funding for e.g. re- or follow-on financing dries up.

You seem knowledgeable about this. I'm coming in as an equities man. Would you have some good sources you'd recommend that make the dovish case for private credit today?


> Oooh, source? (I'm curious for when this was measured.)

It depends when you measure, but you can Google around and find figures in the 60-80% range. 80% may have been a bit on the optimistic end of the range. But it's important to note that a "default" doesn't imply a 0.

Of course this will depend on the covenants, underwriting standards, type of collateral.

I would guess software equity collateral recovery rates are lower than hard assets like a building. (Which is why I personally don't like Software loans, nothing to do with AI)

> Correct. The question is if 20% is enough, and if a 20% markdown creates a vicious cycle as funding for e.g. re- or follow-on financing dries up.

I think it's almost certain that new fundraising for private credit will be materially hindered going forward. But this just limits the growth rate of these firms, does not introduce any "collapse" risk.

They may move from net inflows to net outflows and bleed AUM over a period of some years.

If NAVs were inflated previously, they may be forced to mark down the NAV to meet redemptions rather than using inflows to pay off older investors.

In the world of credit, 20% is an enormous haircut. Again, senior secured loans fell by around 30% peak to trough in 2008.

We have the public BDC market as a comparison point where the average price/book is around 0.80x. So the public market is willing to buy credit strategies at a 20% discount to stated NAV.

The real systemic risk here, if we were to reach for one, is that these fears become self-fulfilling.

If investors pull funds out of credit strategies en masse, there is no first-order systemic issue, but it means borrowers of many outstanding loans may not be able to secure refinancing as money dries up.

This could lead to a self-fulfilling default cycle. But it would be a fear-driven default cycle; there is no fundamental issue with the cash flows of borrowers or otherwise (in aggregate, currently).

Finally, in regards to the asset managers themselves, many are quite diversified.

Yes, they have private credit funds, but many have real estate funds, buyout funds etc. OWL is one of the biggest managers of data center funds, for example (which they also got hammered for on AI bubble fears)

Given how depressed pricing is in public REITs, for example, I expect a lot of asset managers to pivot towards more real asset funds.


So, if I hold a bunch of Private Equity, and my holdings need a continuity of business loan, would I:

(a) have the holding take out the debt, exposing 100% of my stake

or,

(b) have the holding divest a piece of itself, giving me control of the existing and new entities, then have that piece take out the debt, exposing 0% of my stake?

I imagine any PE firm worth its salt would go with option (b).

Presumably regulators would sometimes try to block such deals, but I cannot imagine that happening during the current administration. (Do the regulators even still work for the US government? I thought they were mostly fired.)

Similarly, I can imagine the banks refusing to lend in scenario (b), but I cannot imagine bank leadership being allowed to make such a decision if the PE firm is politically connected to the current administration.


It sounds like you're effectively describing some fraud scheme.

A smart lender will not issue loans without real collateral. If you create a subsidiary, that subsidiary has to have sufficient collateral and cashflow to secure a loan.


The current governor is proposing cutting property taxes in ~half by eliminating the school district portion and instead funding schools directly via the state's budget surplus.

Remains to be seen, as the next legislative session isn't until 2027.


I mean, from my experience living there, most property developers in Texas are playing shell games to avoid the requirement to build school districts anyway. Build small developments up to just short of the line where it's required, then continue development as a different legal fiction with what turns out to be ultimately the same beneficial owners. The Texas education system leaves much to be desired.


I'm not familiar with that specific example, but I do know that independent players in any economic system will follow the incentives.

Expecting companies, people etc to do the "right thing" when it's financially disincentivized usually doesn't work out.

Same will happen in regards to all these new taxes reinforcing existing population migration trends.


The system is simple. Your development hits a certain size, you have to build and fund a school for the community through fees if you're renting. So they go just short of the line, and crap out two developments and no schools, and leave the populace to figure out the rest. That isn't following incentives. That's being an asshat.


No, it's bad policy.

Cliffs in policies will always lead to players working around the cliffs.

E.g. in NYC there is an additional 1% sales tax on home sales above 1 million dollars.

So nobody in the market would ever sell a home between $1m and $1.01m, as the tax increase is greater than the price increase.

These are failed policy implementations (in the above example the tax should be marginal, not thresholded)
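A quick sketch of the cliff arithmetic versus the marginal alternative (numbers illustrative of the NYC example above):

```python
# Cliff tax (NYC-style): 1% of the *entire* price once it exceeds $1m.
def net_cliff(price):
    tax = 0.01 * price if price > 1_000_000 else 0.0
    return price - tax

# Marginal alternative: 1% only on the portion above $1m.
def net_marginal(price):
    tax = 0.01 * max(price - 1_000_000, 0)
    return price - tax

print(net_cliff(1_000_000))     # 1000000.0
print(net_cliff(1_005_000))     # 994950.0  -- nets *less* than selling at $1m flat
print(net_marginal(1_005_000))  # 1004950.0 -- marginal tax removes the dead zone
```

Under the cliff, raising the price by $5,000 costs the seller $5,050 net, so listings cluster just below the threshold; the marginal version has no such dead zone.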

Any policy which does not account for individual actors optimizing financially is a badly designed policy.

There are numerous similar examples re: CRE when requiring subsidized housing units for certain sizes of development. Often it's more lucrative to build smaller and get around subsidized unit requirements.

You can call them "asshats", but I'd rather live and discuss policy in reality.

Many of these new, clearly strictly punitively intended, taxes aimed at the wealthy will have the same logical outcome.

Show me the incentive and I'll show you the result


>Show me the incentive and I'll show you the result

Ah, you're one of those.

See, this clever little aphorism of yours is the constantly reached for salve of the "wiseguy". "Everyone would do it if they were in my position; so I'm not going to bother myself about it. Let's work around it."

Problem is, in reality, that isn't the case. Most people will sit there, look at the regulation, realize the development is likely going to attract families or soon-to-be families, and conclude: yeah, okay, need to accommodate that. They approach it in good faith. Then you come along and start acting in bad faith. Your bad-faith implementation for maximized extraction creates knock-on problems, which create knock-on problems, which are now everyone else's problem to solve. Eventually, with a high enough concentration or frequency of such agents, we enter game theory territory, and escalation tends to happen quickly from there.

Historically, this comes with a brand of solutions for people like that. It would stew to a point, then generally an entire community wouldn't see a damn thing while someone came to physical harm in a tragic accident. Or just straight-up wildcat demonstrations.

Communities/planners don't want that. So they make regulations that are a good-faith attempt at curtailing spirals of reasonably foreseeable problems. A wiseguy comes along and creates reasonably foreseen problems through non-compliance.

Are you noticing a pattern yet? You being a bad faith asshat isn't the policy's fault.

That's your fault for being a garbage human being, and maybe just a bit our collective fault for making the world such a comfortable and safe place for humans with garbage mindsets drawn to bad faith in all things business. Nevertheless, the gradient is clear. Do good faith business. Everyone wins. Do bad faith, and you win til it's worth someone's time to ensure you lose.

Too damn smart to learn the virtue of self-restraint, too damn stupid to recognize the threat too many of you pose to everyone else. Or how quickly things go bad once people start catching onto the games you seem to delight in playing.

