Some thoughts on how useful Anubis really is. Combined with comments I’ve read elsewhere about scrapers starting to solve the challenges, I’m afraid Anubis will soon be outdated and we’ll need something else.

  • CrackedLinuxISO@lemmy.dbzer0.com · 8 months ago

    There are some sites where Anubis won’t let me through. Like, I just get immediately bounced.

    So RIP dwarf fortress forums. I liked you.

  • rtxn@lemmy.world · 8 months ago

    New developments: just a few hours before I posted this comment, The Register published an article about AI crawler traffic. https://www.theregister.com/2025/08/21/ai_crawler_traffic/

    Anubis’ developer was interviewed and they posted the responses on their website: https://xeiaso.net/notes/2025/el-reg-responses/

    In particular:

    Fastly’s claims that 80% of bot traffic is now AI crawlers

    In some cases for open source projects, we’ve seen upwards of 95% of traffic being AI crawlers. For one, deploying Anubis almost instantly caused server load to crater by so much that it made them think they accidentally took their site offline. One of my customers had their power bills drop by a significant fraction after deploying Anubis. It’s nuts.

    So, yeah. If we believe Xe, OOP’s article is complete hogwash.

  • unexposedhazard@discuss.tchncs.de · 8 months ago

    This… makes no sense to me. Almost by definition, an AI vendor will have a datacenter full of compute capacity.

    Well it doesn’t fucking matter what “makes sense to you”, because it is working…
    It’s being deployed by people who had their sites DDoS’d to shit by crawlers, and they are very happy with the results, so what even is the point of trying to argue here?

    • daniskarma@lemmy.dbzer0.com · edited · 8 months ago

      It’s working because it’s not widely used yet. It’s sort of a “pirate seagull” theory: as long as only a few people use it, it works, because scrapers don’t expect Anubis and so don’t implement systems to get past it.

      If it were to become more common, it would be really easy to implement systems that defeat the purpose.

      Right now sites are OK because scrapers just send HTTPS requests and expect a full response. Anyone who wants to bypass Anubis’ protection has to account for receiving a cryptographic challenge and solving it.

      The thing is that those cryptographic challenges can be heavily optimized. They are designed to run in a very inefficient environment, namely a browser. If someone took the challenge and solved it in a better environment, using CUDA or something like that, it would take a fraction of the energy, defeating the purpose of “being so costly that it’s not worth scraping” (see the sketch below).

      At this point it’s only a matter of time before we start seeing scrapers like that, especially if more and more sites start using Anubis.
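
      For illustration, a minimal sketch of the kind of optimized native solver described above, assuming the scheme the thread references (SHA-256 over challenge-plus-nonce, difficulty counted in leading zero hex digits; `challenge` stands in for whatever string the server sends):

      ```python
      import hashlib
      import itertools

      def solve(challenge: str, difficulty: int = 4) -> int:
          """Brute-force a nonce so sha256(challenge + nonce) starts with
          `difficulty` zero hex digits. Expected cost is ~16**difficulty
          hashes, i.e. about 65k at the default difficulty of 4."""
          target = "0" * difficulty
          for nonce in itertools.count():
              digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
              if digest.startswith(target):
                  return nonce  # the value a client would submit back

      print(solve("example-challenge"))
      ```

      Even this plain-Python loop clears the default difficulty in well under a second on one core; a CUDA or hand-tuned native solver would be orders of magnitude faster still, which is the commenter’s point.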

  • Klear@quokk.au · 8 months ago

    If that sounds familiar, it’s because it’s similar to how bitcoin mining works. Anubis is not literally mining cryptocurrency, but it is similar in concept to other projects that do exactly that

    Did the author only now discover cryptography? It’s like a cryptocurrency, just without currency, what a concept!

  • @[email protected]

    So (11508 websites × 2^16 SHA-256 operations) / 2^21, that’s about 6 minutes to mine enough tokens for every single Anubis deployment in the world.

    IIRC Anubis does scope tokens to a specific IP, so a real distributed adversary would need more than that, but yeah, the math holds (checked below).

    My solver cranks out more than that in ~500ms…, faster than Anubis can verify at diff=4 :blobcatdied:
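
    For reference, the arithmetic above checks out if 2^21 is read as a solver’s throughput in hashes per second (an assumption; the comment gives no unit):

    ```python
    websites = 11_508            # Anubis deployments cited above
    hashes_per_token = 2 ** 16   # expected SHA-256 ops per token at difficulty 4
    hash_rate = 2 ** 21          # assumed solver throughput, hashes per second

    seconds = websites * hashes_per_token / hash_rate
    print(f"{seconds / 60:.1f} minutes")  # ≈ 6.0 minutes
    ```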

  • rtxn@lemmy.world · 8 months ago

    The current version of Anubis was made as a quick “good enough” solution to an emergency. The article is very enthusiastic about explaining why it shouldn’t work, but completely glosses over the fact that it has worked, at least to an extent where deploying it and maybe inconveniencing some users is preferable to having the entire web server choked out by a flood of indiscriminate scraper requests.

    The purpose is to reduce the flood to a manageable level, not to block every single scraper request.

    • poVoq@slrpnk.net · 8 months ago

      And it was/is for sure the lesser evil compared to what most others did: put the site behind Cloudflare.

      I feel like people who complain about Anubis have never had their server overheat and shut down on an almost daily basis because of AI scrapers 🤦

    • seralth@lemmy.world · 8 months ago

      Yeah, but at least Anubis is cute.

      I’ll take “sucks but cute” over a dead internet and endless swarms of zergling crawlers.

  • daniskarma@lemmy.dbzer0.com · 8 months ago

    Sometimes I think: imagine if a company like Google or Facebook implemented something like Anubis, and suddenly most people’s browsers started constantly solving CPU-intensive cryptographic challenges. People would be outraged by the wasted energy. But somehow a “cool small company” does it and it’s fine.

    I don’t think Anubis’ system is sustainable for everyone to use; it’s just too wasteful energy-wise.

      • daniskarma@lemmy.dbzer0.com · edited · 8 months ago

        Captcha.

        It does everything Anubis does. If a scraper wants to solve it automatically, it’s compute-intensive, since they have to run AI inference, but for the user it’s just a little time-consuming.

        With captchas you don’t run aggressive software unauthorized on anyone’s computer.

        A solution already existed. But Anubis is “trendy”, and its makers are masters of PR within certain circles of people who always want the latest, trendiest thing.

        But a good old captcha would achieve the same result as Anubis, in a more sustainable way.

        Or at least give users the option of either running the challenge or leaving the page, and make it clear that their hardware is about to run an intensive task. It really feels very aggressive to have a webpage run what is basically a cryptominer, unauthorized, on your computer. And for me, having a catgirl as a mascot does not excuse the rudeness of it.

        • tofu@lemmy.nocturnal.garden (OP) · 8 months ago

          “Good old captcha” is the most annoying thing ever for people, and it’s basically universally hated. Speaking of leaving the page: what do you think will cause more people to leave, a captcha that’s often broken, or something where people don’t have to do anything but wait a little?

          • daniskarma@lemmy.dbzer0.com · 8 months ago

            They don’t have to do anything but let an unknown program max out their CPU, unauthorized.

            Imagine if Google implemented that: billions of computers constantly running PoW. What could go wrong?

            • tofu@lemmy.nocturnal.garden (OP) · 8 months ago

              They don’t have to do anything but let an unknown program max out their CPU, unauthorized.

              But they currently can’t and that’s the point.