After a long hiatus in which only security fixes were released, development of Etherpad has recently picked up speed again. But it appears to be almost entirely vibe-coded with the help of Anthropic's Claude, and the volume of new features being added makes it very unlikely that there is sufficient human code review.
I bring this up because, as you probably know, we host an instance of it over at https://pads.slrpnk.net/
Aside from the general issues with AI-assisted code development and corporate capture through closed AI models like Claude, I also consider it a security risk: the NodeJS ecosystem (which Etherpad uses) is especially vulnerable to supply-chain attacks, and AI hallucinations (e.g. of plausible-looking package names) make this problem significantly worse.
So this basically means we will probably have to shut down this service soon and look for an alternative (freezing it on the current version is not a good option either, given how frequently security issues are found in NodeJS packages).
If you have any important collaborative documents on our Etherpad instance, it would be good to export them in the near future. The best option is probably to export them as Markdown, since most of the alternatives seem to use that syntax.
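For anyone who would rather script the export than click through the UI, here is a rough sketch. It assumes the instance still exposes Etherpad's standard per-pad export endpoints (`/p/<pad>/export/txt` and `/p/<pad>/export/html`) and that you already know your pad names, since Etherpad does not list pads publicly; the pad names below are placeholders.

```python
# Minimal sketch for bulk-exporting pads before the instance goes away.
# Assumptions: standard Etherpad export endpoints are enabled, and you
# supply your own pad names (these are hypothetical examples).
import pathlib
import urllib.request

BASE = "https://pads.slrpnk.net"
PADS = ["my-project-notes", "meeting-minutes"]  # replace with your pads

out = pathlib.Path("pad-exports")
out.mkdir(exist_ok=True)

for pad in PADS:
    for fmt in ("txt", "html"):
        url = f"{BASE}/p/{pad}/export/{fmt}"
        with urllib.request.urlopen(url) as resp:
            (out / f"{pad}.{fmt}").write_bytes(resp.read())
        print(f"saved {pad}.{fmt}")
```

The HTML export can then be converted to Markdown with a tool like Pandoc (e.g. `pandoc pad.html -o pad.md`) if whatever replacement we pick wants Markdown input.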
As for alternatives, suggestions are welcome, but I have looked into other options before (HedgeDoc, CryptPad etc.) and was not entirely convinced by them either.


This list seems to have a lot of false positives, or at least an unhelpfully broad definition of what is considered “slopware”.
For example, curl (stylised as "cURL") is listed on that page as "slopware" because it has a "permissive AI policy", i.e. the developer decided against instituting a "strict non-AI policy" dictating which development tools submitters could use.
The developer of curl (Daniel Stenberg) is, generally speaking, one of the most vocal anti-AI voices in open source on Mastodon, and has banned AI-generated code and bug reports from curl. He has given talks, including conference keynotes, about AI slop, and news articles have been written about his stance against AI.
If a project with a well-publicised anti-slop stance and an explicit no-AI policy is considered “slopware”, the list seems questionable.
I agree that some of these are not really a concern. I didn’t write the list.
It’s only “false positives” if you take the list as gospel truth. Since they cite their sources, people can and should judge for themselves whether the sources meet their personal standards.
Because of this, it's far better for them to cast a wide net and let people greenlight entries from the list themselves than to impose their own personal definition of AI slop by excluding anything they personally think is okay.
But yeah, it would be nice if they included evidence of good behavior too, both for the alternatives they suggest (so they aren't incentivizing devs to keep quiet) and for the problematic ones, to clarify where those projects actually stand.