The Lutris maintainer has been using AI-generated code for some time now. He also removed Claude's co-authorship from the commits, so no one knows which code was generated by AI.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
He might have had a leg to stand on here if this were an AI that he had trained himself on ethically sourced data, but personally I don't want to be lectured about "our current capitalist culture" by anyone who is intentionally playing right into it by financially supporting the companies at the center of the AI bubble. The very corporations that are known to have scraped countless terabytes of unlicensed data for their own for-profit exploitation, by the way.
If you discard your self-proclaimed values the second that it becomes convenient or “valuable”, you never had any values to begin with.
Practice what you preach, or don’t preach at all.
Ethically sourced data is a hilarious phrase.
Why? You really don’t see any difference between training an AI model off of public domain, creative commons and licensed data, and corporations like Meta and Anthropic pirating millions of books without even so much as consent from the original authors?
I wouldn’t have a problem with AI if it was trained legitimately, but sadly working people are being ripped off by massive corporations on an unprecedented scale.
I think that, considering the goal of ensuring the LLM doesn’t directly reproduce the training data, it really doesn’t matter. I don’t think trillions of characters arranged into words so something can spit out the most likely combination of those words back at me really has anything to do with how those words are sourced.
I also have no issue with piracy, and I think IP laws are currently tilted way too strongly in favor of IP holders. Maybe my moral compass is off or something, idk.