
What happens when that 'weeks of work' is just shifted into the future, as you find out the LLM made things up and you have to figure out what went wrong?


Humans make mistakes too.

I find this "LLMs can be wrong" argument a bit tiresome, and also a bit lazy.

I feel like we have been here before. With Wikipedia. With Stack Overflow. Or with the whole debate about C/assembler vs. garbage-collected languages.


> Humans make mistakes too.

Well, yes, but fortunately, we build computers to automate things using simple algorithms to remove the risk of such mistakes.

Except when we use LLMs, in which case we increase the risk of mistakes.
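The distinction being drawn here can be sketched concretely: even when an LLM sits in a pipeline, a deterministic validation step can either accept well-formed output or fail loudly, rather than silently propagating a hallucination. A minimal sketch (the field names and schema are hypothetical, purely for illustration):

```python
import json

def parse_llm_response(raw: str) -> dict:
    """Deterministic guard around LLM output: the model may make
    things up, but this check either accepts well-formed data or
    raises immediately, so the mistake surfaces now, not weeks later."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    required = {"name", "amount"}  # hypothetical schema
    missing = required - data.keys()
    if missing:
        raise ValueError(f"LLM response missing fields: {missing}")
    if not isinstance(data["amount"], (int, float)):
        raise TypeError("'amount' must be numeric")
    return data

# A well-formed response passes; a made-up shape is rejected at once.
record = parse_llm_response('{"name": "invoice-7", "amount": 42.5}')
```

The point is that the simple algorithm (the validator) carries the reliability, not the stochastic component in front of it.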

> I feel like we have been here before. With Wikipedia. With Stack Overflow. Or with the whole debate about C/assembler vs. garbage-collected languages.

Well, Wikipedia is a great tool, but it is permanently weaponized.

C/Assembler vs. garbage-collected languages was about decreasing the risk (at the cost of increasing the resource requirement), so, unless I misunderstand what you write, it kinda feels like you're arguing against your side?


Funny you mention Wikipedia, since in most professional settings (particularly research roles) you can't just cite Wikipedia. Maybe that's okay in high school, but when there are actual stakes on the table, putting some effort into your research beyond reading the Wikipedia article is probably necessary.


For my ETL pipelines I have not had this issue.



