Hacker News

Code quality is indeed a virtue that I would like signalled and partly assured, by having the author feel social pressure to explain their use of LLMs and to not appear careless. In my direct experience, the way you use these tools has a big effect on the outcome, and that's the information I'd like conveyed here. Clearly it doesn't cover people who would have produced rubbish anyway, but it isn't intended to be complete.



Fair enough, we all have our preferences and opinions. What signals would you be looking for to make your judgement call?

> it doesn’t cover people who would have produced rubbish anyway

How do you determine this though?

If it becomes common for people to include an AI usage profile, what does that really accomplish? I can say I followed best practices, steered the design, supervised the LLM, reviewed the code, and so on, but those are just a stranger's words with little value, and my code might still be garbage.


If you keep saying things that don't hold up, that tends to harm your reputation. Do I really need to explain basic social dynamics? Sure, maybe I'm wrong and this totally wouldn't work. Maybe (probably) not enough people think it's a good idea and it never happens. Maybe everyone just decides to lie through their teeth. Maybe there's no significant relationship between how carefully you use an LLM and the quality of the product. But I thought it was worth suggesting.

I get the social dynamics, and I agree that we need more signals to build trust, even if I disagree on this particular signal.

I think enhancing visibility into the software development lifecycle (SDLC) could be valuable, which is somewhat related to your original suggestion, though I'm not sure how to validate something like that, since workflows and tests tend to be unique to each project. I suppose we have badges like 'build:passing' that aren't standardized but are somewhat widely adopted, and they show that some extra effort went into setting them up. It would be nice to have something more standardized and verifiable, though.

Anyway, I appreciate your engaged responses, and even if we don't see eye to eye on the LLM signal, I think we both want the same thing, which is increased trust in third-party software. I hope we get there someday, even if it seems we're moving in the opposite direction right now.



