> any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed
Note the word "clearly". Weirdly, as a native English speaker I find that this wording makes the policy less strict. What about submarine LLM submissions?
I have no beef with Redox OS. I wish them well. This feels like the newest form of OSS virtue signaling.
That would constitute an attempt to circumvent their policy, with the consequence of being banned from the project. In other words, it makes not clearly labeling any LLM use a bannable offense.
As a native English speaker I read this as two parts. If it's obvious, the response is immediate and not up for debate. If it's not obvious then it falls in the second part - "any attempt to bypass this policy will result in a ban from the project".
A submarine submission, if discovered, will result in a ban.
The phrase "virtue signalling" long ago became meaningless except as an indicator of one's views in a culture war. Ten years ago David Shariatmadari wrote "The very act of accusing someone of virtue signalling is an act of virtue signalling in itself", https://www.theguardian.com/commentisfree/2016/jan/20/virtue... .
Somewhat off topic, but I can’t believe someone got paid to write that article, what a load of crap. It’s like saying that fallacies don’t exist because sometimes people incorrectly claim the other side is arguing fallaciously.
If you go by the literal definition in the article, it’s very clear what OP meant when he said the AI policy is virtue-signaling, and it has absolutely nothing to do with the culture war.
It's not a useful phrase, because a "we accept AI-generated contributions" policy is also virtue signalling.
You have no doubt heard claims that AI "democratizes" software development. This is an argument that AI use for that case is virtuous.
You have no doubt heard claims that AI "decreases cognitive ability." This is an argument that not using AI for software development is virtuous.
Which is correct depends strongly on your cultural views. If both are correct then the term has little or no weight.
From what I've seen, the term "virtue signalling" is almost always used by someone in camp A to disparage the public views of someone in camp B as being dishonest and ulterior to the actual hidden reason, which is to improve in-group social standing.
I therefore regard it as conspiracy theory couched as a sociological observation, unless strong evidence is given to the contrary. As a strawman exaggeration meant only to clarify my point, "all right-thinking people use AI to write code, so these are really just gatekeepers fighting to see who has the longest neckbeard."
Further, I agree with the observation at https://en.wikipedia.org/wiki/Virtue_signalling that "The concept of virtue signalling is most often used by those on the political right to denigrate the behaviour of those on the political left". I see that term as part of "culture war" framing, which makes it hard to use that term in other frames without careful clarification.
Calling something "performative", like "virtue signalling" or the older "politically correct", is also a claim that the other party is making the argument under false pretenses.
In all cases, the implication is that it's worthless to discuss the stated issue (in this case, the rejection of LLM-generated contributions) because the real issue is something else.
I've seen LLM-generated software contain code which was clearly derived from an MIT-licensed code base, and where the generated code did not include proper attribution.
The USL v. BSDi lawsuit teaches us that operating system developers must be cautious about copyright attribution.
I see no need to conjecture the existence of some hidden reason, as you seemingly have. In addition, the performative game can go both ways. E.g., "Your comment is performative cover for the slap in the face you feel as a coder who uses a lot of LLM support." But that would be malicious conjecture. IMO, any claim of "performative" without support is just bog-boring flaming.
"Don't ask, don't tell" looks like a reasonable policy. If no one can tell that your code was written by an LLM and you claim authorship, then whether you have actually written it is a matter of your conscience.