- cross-posted to:
- [email protected]
On the kernel security list we’ve seen a huge bump in reports. We were at maybe 2 to 3 per week two years ago, then reached roughly 10 a week over the last year, with the only difference being AI slop, and since the beginning of the year we’re at 5-10 per day depending on the day (Fridays and Tuesdays seem the worst). Most of these reports are now correct, to the point that we had to bring in more maintainers to help us.
One thing I’m predicting is that this will at least change the approach to security fixes: [ … ] software that used to follow the “release-then-go-back-to-cave” model will have to change and start dealing with maintenance for real, or just stop being proposed to the world as the ultimate-tool-for-this-and-that, because every piece of software becomes a target.
[ … ]
Overall I think we’re going to see much higher-quality software, ironically around the same level as before 2000, when the net became usable for everyone to download fixes. When software had to be pressed to CDs or written to millions of floppies, it had to survive an amazing quantity of tests that are mostly neglected nowadays, since updates are easy to distribute. But before this happens, we have to get through a huge mess that might last for a few years to come! Interesting times…
kinda scary when ai slop becomes successful ai analysis
That’s the thing, this isn’t AI slop.
This is using the tools for their intended purpose, rather than trying to use them to replace human-written code.
Exactly. AI slop is just that. Slop.
If it’s just an AI doing something useful, we don’t call it slop, we just call it AI.
When Google’s AlphaFold predicted the structures of over 200 million proteins, and its creators won a Nobel Prize for it, I don’t think anyone would call the research using it to develop cures for diseases slop.