Hacker News

I don't think a language model is that stupid. This smacks of pure human stupidity and/or offshoring.


But LLMs are that stupid. Do you remember the guy who vibe-coded a cheating tool for interviews and literally leaked all his API keys/secrets to GitHub, because neither he nor the LLM knew better?


Fair enough. Since it's trained on human stupidity, I suppose it would reflect that stupidity as well.


Is that the same guy who had his degree revoked for creating a cheating tool for interviews and is now a millionaire for creating a cheating tool for interviews?



