
>Also, when did we stop liking to learn? Why is it a bad thing to know all the ins and outs of a programming language?

I do not know the ins and outs of the assembly layer my high-level code ends up as. It's not because I don't like to learn; it's because I genuinely don't need to. At a certain level of AI performance, how will this be any different?


However, curious programmers who develop in high-level languages will dabble in assembly, maybe for fun, and will be much better off for it than those who treat parts of the stack like a black box never to be opened.

Because you may not know the specifics of the assembly being generated, but you’ve likely learned a language built on top of assembly. And compilers do some great tricks behind the scenes to generate efficient assembly, but those tricks are specifically coupled to the semantics of the source language.

An LLM is not coupled to anything and can generate output that simply does not relate to the input. This doesn’t happen with compilers, and if it does, then it’s a specific bug to be addressed. An LLM can never guarantee certain output based on the input.

If I write x < 100, I know exactly how the compiler will treat that code every single time, and I know what < means and how it differs from <=.

If I tell an LLM “I want numbers up to 100,” will that give me < or <=? And will it be consistent every single time, even for the ten-thousandth program that I write?

The language is ambiguous where the code is specific.
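To make the determinism point concrete, here is a small sketch (Python, purely for illustration; the numbers and variable names are made up for this example). The comparison operators have fixed semantics, so the same source always selects the same boundary, run after run:

```python
# "< 100" and "<= 100" are unambiguous: each always selects the same
# boundary, on every run, on every machine.
up_to_exclusive = [n for n in range(1000) if n < 100]   # 0..99
up_to_inclusive = [n for n in range(1000) if n <= 100]  # 0..100

assert len(up_to_exclusive) == 100 and up_to_exclusive[-1] == 99
assert len(up_to_inclusive) == 101 and up_to_inclusive[-1] == 100

# A natural-language spec like "numbers up to 100" has no such fixed
# meaning; it can be read as either boundary.
```

The compiler's answer is pinned by the language specification; the LLM's answer is a guess about which reading you intended.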


To me this is semantics, as far as it relates to "why don't you want to learn?"

I have a co-worker on another team who writes the Java endpoints we consume. I can tell him what I need and I trust the output. I don't need to know Java to trust him, and it doesn't mean I don't want to learn.

There are thousands of examples like this across every stack and abstraction level, from SSH handshakes to GPS.

Sure my co-worker is fundamentally different from a compiler which is fundamentally different from an LLM.

My argument is that the chain-of-trust where you offload knowledge to an external source is identical. We do it all the time but somehow doing it with an LLM means we no longer want to learn?


One difference is: to use a top-notch compiler/assembler you don’t need to pay. They are open source and have a lot of support. To use the latest and greatest models (because no one likes to use non-SOTA ones) you need to pay a premium price.

Multibillion-dollar companies are now the gateway for every line of code you need to write. That’s dystopian. It sucks.


Yes, but that's a completely different argument (which I agree with). Essentially, yes, they are conceptually similar, but one is bad because you have to pay rent to use it.

Local models are increasingly capable of taking on serious coding tasks that I would previously have sent to a frontier lab.

>The difference between a puppy and a cockroach is that we can relate better to the puppy.

I suppose the difference between a human and a cockroach is that we can relate better to the human as well in this reductive way of thinking?


It's not novel in the sense that nobody knew about img2img. It's novel in the sense that nobody thought of using img2img to solve this problem in this way.

It's novel if you never played with img2img, including especially several forms of (text+img)2img. Or, if you never tried editing images by text prompt in recent multimodal LLMs.

That said, I spent plenty of time doing both, and yet it would probably take me a while to arrive at this approach. For some reason, the "draw a sketch, have a model flesh it out" approach got bucketed with Stable Diffusion in my mind, and multimodal LLMs with "take detailed content, make targeted edits to it". So I'm glad the OP posted it.


They’re actually quite good at it. I’ve had a number of situations where I’ve wanted to re-render some of my older comics. You can basically tell any SOTA multimodal model (NB, GPT-Image-X) to treat them as storyboards and prompt for a specific style: newsprint, crosshatching, monochromatic ink sketch, etc.

Another thing I’ve gotten very used to doing is avoiding the “one-shot” approach. If I generate something and don’t like the results, I bring it into Krita, move things around, redraw some elements, and then send it back in with instructions to just clean it up (remove any smudges or imperfections). The state-of-the-art models can do an astonishing job with that workflow.

https://imgpb.com/eGDJIb


Ok, it might just be me then. I view Nvidia's DLSS as a similar thing. There was even this meme that in the future video games will only output basic geometry and an AI layer will transform it into stunning graphics.

There's a certain irony in the fact that whoever you're responding to got their message removed.


Flagged, not removed. Subtle difference, not saying it's huge, but you can still see their comments if you enable showdead in your settings.


Censored by a different name is still censored.


Agreed. I was just pointing out it's not actually removed, and you can still read it (if you go out of your way to do so).


It's not that conservative opinions are censored. It's that opinions with zero merit to any reasonable person, such as insults, racism, sexual harassment, etc., are censored.

Unfortunately that means that most conservative opinions are censored.

Or, at least, the ones that matter: the ones said by our most popular politicians.

Rephrased, think of it this way: if I talk like Barack Obama at work, I'm fine. If I talk like President Donald Trump, I'm getting sent to HR on my first day. And that has nothing to do with their political leanings.


As though HR are suddenly The Arbiters of Truth, and as though declining birth rates and increasing isolation are helped by people at work fearing being sent to HR if they make a mistake or say something non-approved.

I mean, yeah, those stats are being helped by HR, but not in the direction any sane person would favour.


You don't have to be an "Arbiter of Truth" to say, "Hey, you're making women uncomfortable, three women have complained about your language, you're fired."

The only people who consistently have issues with HR are pieces of shit.

What I'm trying to say is that Donald Trump says things like "grab her by the pussy" and "[Haitians] are eating dogs and cats" and that's why talking like him would get you censored.

You can be conservative and not racist, or not sexist, or not a piece of shit in general. Most conservatives cannot manage that, no matter how hard they try. At least - most conservatives currently in power in the US.

So, if that's your baseline or your inspiration, then yes, you will PREDICTABLY be censored. And I guarantee nobody gives a single fuck.


Of course you're not one of "us" if you're one of "them".


Sorry, I'm from Sweden, and our banks have a service called Swish where we can send money by phone. Paying in cash is extremely uncommon nowadays. Every time I've bought or sold something on FB Marketplace in the last decade I've used Swish. I thought you had something similar called Venmo in the US?


The problem in the US is too many options. In the US, Venmo, PayPal, Zelle and CashApp are all pretty popular. And there are others.

They’re easy if the one you use is the same one the other person is using. If you’re a Venmo person and you want to transact with a CashApp person, well, one of you has to download and set up a new payment app, or pay cash.


Wow, I never thought about the fact that the system deteriorates from having more than one option.


Yes, we have phone-based payments in the US, too.

Some people will want cash for in-person transactions, but it's more rare. In the US you run into a lot of people who don't trust phones, technology, tech companies, or the government, or who for any number of other reasons demand physical payment.


> Some people will want cash for in-person transactions, but it's more rare. In the US you run into a lot of people who don't trust phones, technology, tech companies, or the government

No, it's because the majority of digital payment systems can be abused. Stolen accounts, payment disputes, and more can cause a seller to lose both the item and the money.

Cash is very, very hard to counterfeit, and there are inexpensive devices[1] that virtually guarantee a bill is genuine. There's no post-transaction fraud scheme that works once cash has exchanged hands.

[1] https://www.walmart.com/ip/PG-MONEY-TESTER-PEN/5487005062


> There's no post-transaction fraud scheme that works once cash has exchanged hands.

Yes, but it is vulnerable to other fraud schemes, like misrepresentation or theft.

But yeah, when faced with the possibility of fraud many people instinctively retreat from the unknown (technology) to the easily understood realities of cold hard cash. Its biggest advantage is ease of understanding.


I assert it's more than that. Even Zelle can be susceptible to post-transaction fraud schemes.

Yes, someone can steal your cash - but they can also steal your item.

Setting aside theft - cash is simply the most secure way to ensure you keep your money post-transaction. There is no fraud mechanism to abuse, and no way to reclaim cash once in-hand.

For anything of value, the "old school" rules of meeting in a very public place and only accepting cash are still really sound.


Of course there is fraud risk with cash; it is just all on the buyer's end of the transaction.

People are still getting scammed with cash every day with fake/locked/misrepresented/stolen items being sold on marketplace sites.

Every legitimate reason to reverse a reversible transaction is a fraud vector that cash is vulnerable to. That’s why reversible transactions exist.


> fake/locked/misrepresented/stolen items being sold on marketplace sites

100% of the risks you mention are still true with digital transactions. The difference is with cash, you close the door on literal fraudulent transaction claims or stolen accounts. It's vastly safer than digital transactions for in-person sales.

To be blunt - with cash, the buyer can't go home and file an unauthorized/fraud complaint with anyone - the seller has cash-in-hand, is anonymous, and the transaction is non-reversible. That's a benefit for these types of transactions, and one you seem to be overlooking.

If you're selling your couch on Facebook Marketplace - cash is king.


Yes, I understand that buyers can fraudulently file chargebacks with some forms of digital payment. I never said otherwise.

I am disputing your repeated and false claim that there are no fraud vectors with cash.

No payment method prevents fraud.


The US sits in a strange incentive landscape.

Since the government and corporations aggressively spy on everyone, and since government programs are often incompetent or overfunded or underfunded or corrupted or evil, there is (justly) little faith in the government.

Cash works fine. It can't be censored easily, it can't be tracked easily. ATMs have it.

When I trust the phones, I'll use phone payments.


Venmo isn't really something I'd consider a "bank service"; it was its own company for a bit, and I think now it's owned by PayPal.

The closest thing here is probably Zelle, but at least with my bank's app, the interface is a bit of a pain. This basically is just another form of what the parent commenter said; how much do I value my own time and convenience compared to what I'd be getting?


Yes but "the search space is too large" is something that has been said about innumerable AI-problems that were then solved. So it's not unreasonable that one doubts the merit of the statement when it's said for the umpteenth time.


I should have been more specific, then. The problem isn't that the search space is too large to explore. The problem is that the search space is so large that the training procedure actively prefers to restrict the search space to maximise short-term rewards, regardless of hyperparameter selection. There is a tradeoff here that could be ignored in the case of chess, but not for general math problems.

This is far from unsolvable. It just means that the "apply RL like AlphaGo" attitude is laughably naive. We need at least one more trick.
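A toy illustration of that tradeoff (a made-up two-armed bandit in Python, not any actual training setup): a purely greedy learner that maximises its observed short-term reward locks onto the reliable arm and never revisits the rare-but-larger payoff, even though the neglected arm has a higher expected value (2.5 vs 1.0 here).

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Two "strategies": A pays off small but always; B pays off big but rarely.
def reward(arm):
    if arm == "A":
        return 1.0                                       # reliable short-term reward
    return 50.0 if random.random() < 0.05 else 0.0       # rare large payoff, E = 2.5

totals = {"A": 0.0, "B": 0.0}
counts = {"A": 1, "B": 1}          # one free sample of each to initialize
for arm in ("A", "B"):
    totals[arm] = reward(arm)      # with this seed, B's single sample is 0.0

# Greedy learner: always pick the arm with the best observed average.
picks = []
for _ in range(500):
    arm = max(totals, key=lambda a: totals[a] / counts[a])
    totals[arm] += reward(arm)
    counts[arm] += 1
    picks.append(arm)

# The learner restricts itself to A forever; B is never explored again.
assert set(picks) == {"A"}
```

The "one more trick" the parent asks for is whatever keeps arm B in play without wasting the budget; pure reward maximisation alone collapses the search.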


The other trick could be bootstrapping through mathlib.

As you said, brute-forcing the search space as the starting procedure would take way too long for the AI to build intuition.

But if we could give it a million or so lemmas of human math, that would be a great starting point.


The whole fight with Anthropic was because they wanted to use it for mass surveillance (and autonomous weapon systems).

How is mass surveillance not Orwellian?


I wrote: it's not particularly orwellian. like all the other us administrations have had mass surveillance boners too. and the us is not nearly as surveillancey as other fascistic regimes, or even contemporary social democracies.

finally, orwellian means a lot more too, especially "controlling how people think by controlling their language". again, the trump administration doesn't do that much of those things.

this administration has a lot of problems, but it's pretty straightforward.


>Hey that's a weird thing in the result that hints at some other vector for this thing we should look at

Kinda funny because that looked _very_ close to what my Opus 4.6 said yesterday when it was debugging compile errors for me. It did proceed to explore the other vector.


> Especially if that "thing" has never been analyzed before and there's no LLM-trained data on it.

This is the crucial part of the comment. LLMs are not able to solve problems that haven't been solved in that exact or a very similar way already, because they are prediction machines trained on existing data. They are, however, very good at spotting outliers where they have been found by humans before, which is important, and is what you've been seeing.


That's silly; they don't want to manage people, they prefer to build actually useful things. I've recently learned how many programmers actually don't care about building things.

They love the craft, for all they care they could be working in a black box in a void as long as it fed them interesting problems to solve.

They don't see any actual benefit in AI increasing the velocity of how fast they build useful things. That was never of value to them; all they see is the problems becoming more boring to solve.

