• Jhex@lemmy.world
    19 days ago

    The statements can stand for themselves, evaluated on the merits of the claims, regardless of authorship.

Sure, but where is the practicality of that? According to your POV here, companies can claim whatever and it’s my job now to figure out if they are lying or to what extent. I have already lived through that and decided their output is completely untrustworthy, so I’d rather wait for a trustworthy source before giving them any credit. I am not claiming 100% of what Anthropic says is a complete lie; I am saying I cannot trust it at face value.

    On the flip side, the corollary to the adage that a broken clock is still right twice a day is that you can’t just say “oh the broken clock said this so I can ignore it.”

Funny you use this saying, because a broken clock is never right; reality momentarily aligns with it, which is a completely different thing… and even then, a broken clock is still wrong for 1,438 of the 1,440 minutes in a day… I would rather not use such a broken clock as a reference AT ALL

    The blog post literally describes exactly that, for ffmpeg. And several of the other described vulnerabilities sound like they’re in that category of “here’s a bug but we didn’t find an exploit.”

    Case in point, they do not claim that in the title or intro. Their entire intro (in the blog you posted) is all about how amazing Mythos is

    …who cares?

People like me, who would rather not keep feeding the AI hype. Assuming these vulnerabilities are real and could have been exploited, yes, I am happy they get fixed. But I am never giving credit to “AI” unless it is an absolute certainty AI did it and did it better than humans would

    • GamingChairModel@lemmy.world
      19 days ago

      According to your POV here, companies can claim whatever and it’s my job now to figure out if they are lying or to what extent.

      No, the actual claims here, that describe specific bugs in specific software, can be evaluated. Even without whipping out a test environment to try to reproduce the results with your own proof of concept, you can read the text and evaluate whether the claims make sense on their face.

      a broken clock is never right, reality momentarily aligns with it, which is a completely different thing

      And that’s why the substance of a statement matters. I don’t believe in the supernatural, so if someone says “I’m a psychic and the missing girl on the news is in a shed near the water,” that doesn’t register with me at all. But if that person says “I’m a psychic and the missing girl is in a shed at 1234 Main Street” that raises eyebrows because it is easily falsifiable. And if the person says “I’m a psychic and the missing girl is in a shed, so I looked and found her and reported it to the cops, and here’s a cryptographic hash of my description of how I found her, which I’ll publish once the cops confirm she’s safe” that’s gonna be a much more serious statement. Even if I don’t believe that the person actually is a psychic, I can pay attention to how the whole thing played out because the person claims serious non-psychic validation of the results, and the results themselves are important entirely externally from the claim of whether psychics have powers.

      This is a story about several cybersecurity vulnerabilities, some of which sound medium or high severity in very commonly used software. That’s important in itself, outside of AI mattering at all. And if they claim to have the receipts in a falsifiable way, that’s the kind of thing that shows a high degree of confidence in the genuineness of what was found.

      I don’t give a shit about AI and I’m generally a skeptic of the future of any of these AI companies. But if someone uses AI tools to discover something new in the subjects that I do care about, like cybersecurity, then I’ll pay attention to the results and what they publish in that field.

      • Jhex@lemmy.world
        19 days ago

        No, the actual claims here, that describe specific bugs in specific software, can be evaluated. Even without whipping out a test environment to try to reproduce the results with your own proof of concept, you can read the text and evaluate whether the claims make sense on their face.

Again, why would I bother? I do not work for Anthropic, nor for any of the open source projects they are claiming to help, so I have no stake in this fight. I am content to ignore whatever wins Anthropic claims and wait until those open source projects, which I do trust more, let me know if these claims are real or not.

        I don’t give a shit about AI and I’m generally a skeptic of the future of any of these AI companies. But if someone uses AI tools to discover something new in the subjects that I do care about, like cybersecurity, then I’ll pay attention to the results and what they publish in that field.

Ok, you do you bud… I am happy ignoring AI until its claims are proven by third parties… not sure what’s so challenging about that notion