nullc's comments | Hacker News

oh man, flashbacks to feeling slightly squicked at the sentient plastic sex toys. lol

If you look at how Quality Assurance works everywhere outside of software, it is 99.9999% about having a process that produces quality by construction.

What value does being snotty and dismissive have? They're just going to copy and paste your reply to their chatbot. The toaster doesn't have feelings you can hurt.

> maybe going forward we will be forced to come up with real solutions to the general problem of vetting people

The monkey's paw closes a finger, and now you need a formal certification, professional license, and liability insurance to publish software or source code.


> You can just reject the PR.

And with a better and more useful response. Instead of wasting time on the technical details, you can give feedback like "this isn't the sort of change that AI is likely to be helpful with, though if you want to keep trying, at least make sure your PRs pass the tests." or "If you'd like to share your prompts we might be able to make some suggestions; we've found on this project it's useful to include <X>".


That's like saying "sorry, source code is too personal. In my 'open' project you get only binaries".

... and then I think about all the weights-only "open" AI projects and walk off in disgust.


Keep in mind that in industries where people code but aren't really programmers, this literally does happen; sometimes very "big" people will be basically scared to share their code because it won't be very good.

But anyway, what I mean is that code is us speaking like a computer; LLMs are the other way around. You can see a lot from how someone interacts with the machine.


Not just non-programmers. It's a common problem with junior programmers too, and good programming internships (for example) make it a point of forcing the interns to expose their work ASAP to get over it.

I think if everyone goes into it knowing that it'll be part of what they publish it would be less of an issue.

I mean, unless you're all a bunch of freaks who have instructed your LLM to cosplay as Slave Leia and can't work otherwise, in which case your issues are beyond my pay grade. :P


FWIW, I can say from direct experience that other people are watching and noting when someone submits AI slop as their own work, and taking note never to hire them. Beyond the general professional ethics, it makes you harder to distinguish from malicious parties and other incompetent people LARPing as having knowledge they don't.

So fail to disclose at your own peril.


The person operating the LLM is not a meaningfully teachable human when they're not disclosing that they're using an LLM.

If they disclose what they've done, provide the prompts, etc., then other contributors can help them get better results from the tools. But that feedback is very different from the feedback you'd give a human who actually wrote the code in question; the latter feedback is unlikely to be of much value (and even less likely to persist).


Yep, true.

I've actually done things like share a ChatGPT account with a junior dev to steer them toward better prompts, and that had some merit.


Better to think in terms of distrust rather than trust.

Presumably if a contributor repeatedly made bad PRs that didn't do what they said, introduced bugs, scribbled pointlessly on the codebase, and when you tried to coach or clarify at best later forgot everything you said and at worst outright gaslit and lied to you about their PRs... you would reject or decline to review their PRs, right? You'd presumably ban them outright.

Well that's exactly what commercial LLM products, with the aid of less sophisticated users, have already done to the maintainers of many large open source projects. It's not that they're not trusted-- they should be distrusted with ample cause.

So what if the above banned contributor kept getting other people to mindlessly submit their work and even proxy communication through them -- evading your well-earned distrust and bans? Asking people to at least disclose that they were acting on behalf of the distrusted contributor would be the least you would do, I hope? Or even asking them to disclose if and to what extent their work was a collaboration with a distrusted contributor?


I don't think it has anything to do with being a reactionary movement or counterculture. If it were, I would expect, among other things, that it would absolutely prohibit the use of AI entirely rather than just require disclosure.

The background is that many higher-profile open source projects are getting deluged by low-quality AI slop "contributions": not just crappy code, but when you ask questions about it you sometimes get an argumentative chatbot lying to you about what the PR does.

And this latest turn has happened on top of other trends in 'social' open source development that already had many developers considering adopting far less inclusive practices. RETURN TO CATHEDRAL, if you will.

The problem isn't limited to open source, it's also inundating discussion forums.


It's reactionary only in that this is the new world we live in. Right now is the least amount of AI assistance that will ever exist in PRs.

Yeah, but this is once again doing the thing where you read someone replying to situation #2 (in my comment) and then act like they're denying situation #1.

I think we all grant that LLM slop is inundating everything. But a checkbox that says "I am human" is more of a performative stunt (which I think they are referring to when they say it's reactionary) than anything practical.

Cloudflare's "I am human" checkbox doesn't just take your word for it, and imagine if it did.
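
To make that concrete, here's a minimal sketch of what "just taking your word for it" would amount to (hypothetical endpoint and field name, obviously; the real checkbox does server-side analysis precisely to avoid this):

    # A purely self-attested "I am human" check -- any script passes it.
    import urllib.request, urllib.parse

    data = urllib.parse.urlencode({"i_am_human": "true"}).encode()
    urllib.request.urlopen("https://example.com/submit", data)  # bot "proves" humanity

A disclosure checkbox with no verification behind it is the same thing: it only filters out people honest enough not to need filtering.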

---

People who write good, thoughtful code and also use an LLM have no reason to disclose it just because the idea offends someone, just like I don't disclose when I do an amphetamine bender to catch up on work; I don't want to deal with any prejudices someone might have, but I know I do good work and that's what matters. I pressed tab so the LLM could autocomplete the unit test for me because it's similar to the other 20 unit tests, and then I vetted the completion. I'm not going to roleplay with anyone that I did something bad or dishonest or sloppy here; I'm too experienced for that.

People who write slop that they can't even bother to read themselves aren't going to bother to read your rules either. Yet that's the group OP is pretending he's going to stop. Once you get past the "rah-rah LLM sux0rs amirite fellow HNers?" commentary, there's nothing here.

