

No, the actual claims here, that describe specific bugs in specific software, can be evaluated. Even without whipping out a test environment to try to reproduce the results with your own proof of concept, you can read the text and evaluate whether the claims make sense on their face.
Again, why would I bother? I do not work for Anthropic or any of the open source projects they claim to be helping, so I have no stake in this fight. I am content to ignore whatever wins Anthropic claims and wait until those open source projects, which I do trust more, let me know whether these claims are real or not.
I don’t give a shit about AI and I’m generally a skeptic of the future of any of these AI companies. But if someone uses AI tools to discover something new in the subjects that I do care about, like cybersecurity, then I’ll pay attention to the results and what they publish in that field.
Ok, you do you bud… I am happy ignoring AI until these claims are proven by third parties… not sure what’s so challenging about that notion.
drop every Amazon service you use today and advocate for others to do the same