The Lutris maintainer has been using AI-generated code for some time now. He also removed Claude's co-authorship from the commits, so no one knows which code was generated by AI.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn't have been implemented in a worse way. But it wasn't AI that bought up all the RAM, it was OpenAI. It wasn't AI that stole copyrighted content, it was Facebook. It wasn't AI that laid off thousands of employees, it was deluded executives who don't understand that this tool is an augmentation, not a replacement for humans.
I'm not a big fan of having to pay a monthly subscription to Anthropic, and I don't like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was genuinely valuable. And at least I'm not paying Google, Facebook, OpenAI or some company that cooperates with the US Army.
He might have had a leg to stand on here if this was an AI that he had trained himself on ethically-sourced data, but personally I don’t want to be lectured by anyone about “our current capitalist culture” who is intentionally playing right into it by financially supporting the companies at the center of the AI bubble. The very corporations that are known to have scraped countless terabytes of unlicensed data for their own for-profit exploitation, by the way.
If you discard your self-proclaimed values the second that it becomes convenient or “valuable”, you never had any values to begin with.
Practice what you preach, or don’t preach at all.
"Ethically sourced data" is a hilarious phrase.
Why? You really don't see any difference between training an AI model on public domain, Creative Commons, and licensed data, and corporations like Meta and Anthropic pirating millions of books without so much as consent from the original authors?
I wouldn't have a problem with AI if it were trained legitimately, but sadly working people are being ripped off by massive corporations on an unprecedented scale.
I think that, as long as the LLM doesn't directly reproduce its training data, it really doesn't matter. Trillions of characters arranged into words, so something can spit the most likely combination of those words back at me, doesn't have much to do with how those words were sourced.
I also have no issue with piracy and think IP laws are currently way too strongly in favor of IP holders. Maybe my moral compass is off or something, idk.