I’ve noticed an uptick in the number of pro-AI posts on this platform.

Various posts with titles similar to “When will people stop being afraid of AI?” or “Can we please acknowledge AI was very needed for X”.

Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

  • trackball_fetish@lemmy.wtf
    link
    fedilink
    arrow-up
    2
    ·
    2 hours ago

    Zoomers and Gen X that drank the Kool-Aid. What’s worse is they’re saying yes to high-paying jobs to fuck us all in the ass.

  • FosterMolasses@leminal.space
    link
    fedilink
    English
    arrow-up
    9
    ·
    7 hours ago

    Same. I noticed in my moderation history that I’d finally been banned from a few random instances I’d never visited before, and they were all by the same guy, who claimed I was an “anti-AI troll” lmao

    The most hilarious part of this is that I feel so dispassionate about the subject I can seldom remember what I might have commented; it was probably something like “yeah this looks like slop” hahaha

  • bss03@infosec.pub
    link
    fedilink
    English
    arrow-up
    12
    ·
    9 hours ago

    If you ignore or are blissfully unaware of the negatives – and all the companies behind all the major product lines do their best to hide and minimize them – then it’s easy to find utility. Basically everyone I know IRL actively chooses to use AI for something. Both CRAP (Computer-Rendered Artificial Pictures) and code generation are very common.

    When I point out the ethical issues, I am generally dismissed entirely (“they’ll fix that” or “my impact is small”) or countered with something about quality (“it works now” and “it’s getting better”), which I find beside the point.

  • DJKJuicy@sh.itjust.works
    link
    fedilink
    arrow-up
    28
    arrow-down
    10
    ·
    12 hours ago

    AI (LLMs) is/are a fantastic tool.

    But that’s what it is, a tool that can make some tasks easier.

    It’s not world-changing like some tech bros and CEOs think it is because they don’t actually understand the technology.

    It’s also not the apocalypse or The Matrix or Skynet coming to end civilization. It’s just a tool.

    After the AI bubble bursts, AI will still be there, as a tool for humans to use.

    I think it’s possible that some of the people you see on Lemmy may have started using AI a little more in their lives and see it for what it is.

    • petrol_sniff_king@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      1
      arrow-down
      1
      ·
      2 hours ago

      It’s also not the apocalypse […] It’s just a tool.

      So, the problem with tools is that their existence still affects the systems they’re a part of.

      For instance, war between the US and Russia is much more dangerous now (yes, it used to be dangerous before as well) because now we have nuclear bombs. We did a whole cold war thing about it. Nuclear bombs change the world even when they’re not being used.

      Similarly, meth is just a tool. It is entirely possible to smoke meth, not become addicted, have a great time, vacuum your entire house I guess, come down, chill, and move on with the rest of your life. But, that’s not what we would say meth’s effect on society is, is it?

      I am so happy that you are capable of using AI without becoming a psychopath. I am concerned about the psychopaths.

    • FosterMolasses@leminal.space
      link
      fedilink
      English
      arrow-up
      12
      ·
      6 hours ago

      You know what’s crazy is that everyone has begun rebranding things that existed before AI as AI.

      The algorithmic summary of a common question in Google results? Now it’s AI.

      Trello’s automation tasks moving items marked as “Done” to archive? Now it’s AI✨

      It’s idiotic lol
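
      For what it’s worth, the Trello behavior described above is a plain if-then rule; here is a minimal sketch of that kind of “automation” (hypothetical card shape, nothing from Trello’s actual API):

```python
# Rule-based "automation" of the kind described: a fixed condition
# and a fixed action, with no model or training data anywhere.
# (Hypothetical card shape; nothing from Trello's actual API.)
def archive_done(cards):
    """Split cards into (active, archived) by moving 'Done' ones out."""
    active = [c for c in cards if c["status"] != "Done"]
    archived = [c for c in cards if c["status"] == "Done"]
    return active, archived

board = [
    {"title": "write post", "status": "Done"},
    {"title": "fix typo", "status": "Todo"},
]
active, archived = archive_done(board)
# active   -> [{"title": "fix typo", "status": "Todo"}]
# archived -> [{"title": "write post", "status": "Done"}]
```

      That’s the entire feature: a filter over a list, now sold with a ✨ sparkle emoji.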

    • SparroHawc@lemmy.zip
      link
      fedilink
      arrow-up
      4
      ·
      5 hours ago

      LLMs are neat, and useful for some things - but as with practically everything in modern society, capitalism is ruining it.

    • imetators@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      10 hours ago

      Google was also a great tool at some point. Wikipedia joins the rankings too. LLM chatbots are great, but certainly not a primary source of information.

      What annoys me is that people have begun using them to avoid doing simple things, like writing their own posts about their own lives. They generate content instead of making it. It’s obvious that anything that takes time to produce will eventually be automated once the tools exist. But this annoys the hell out of me.

      Seeing posts, comments, and content generated by LLMs, I feel that I am being robbed of artistry, curiosity, and interactions with real people. I could automate chats with my family, friends, colleagues, children. But that won’t be me. That will be a perfect-grammar sentence generator, not me - the real, typo-ridden, mistake-prone, mostly-ranting-about-everything, passionate, bored, funny, witty, dull me.

      It saddens me that LLMs are executing an (almost?) final blow to a society already sustaining terminal damage from social media.

      • DJKJuicy@sh.itjust.works
        link
        fedilink
        arrow-up
        4
        arrow-down
        2
        ·
        9 hours ago

        Unfortunately we will always have problems explaining to people how to use the right tool for the right job.

        The old “if all you have is a hammer, everything looks like a nail” saying still applies.

        Using LLMs to automate your social media is dumb as shit and I don’t understand why people are doing that. It is actively destroying social media. Which may be the natural end-state of a social media platform. Isn’t that why most of us are on Lemmy right now? Because of the state of Reddit and Xitter?

        Also, generative AI making art and music and literature is dumb as shit too. Why would you make an AI that does the fun stuff that humans actually want to do? I can’t wait to have AI finish playing BioShock for me…

    • III@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      11 hours ago

      To be fair, given the power consumption it requires, it definitely leans toward civilization-ending.

      • DJKJuicy@sh.itjust.works
        link
        fedilink
        arrow-up
        4
        arrow-down
        5
        ·
        10 hours ago

        We also have “the Internet” slurping up massive amounts of energy.

        Current Global Electricity Breakdown:

        • Total Data Center/Infrastructure Demand: Approximately 2.0% of global electricity.
        • AI-Specific Share: Roughly 0.5% of global electricity.
        • “Traditional” Internet/Cloud: Roughly 1.5% of global electricity.
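
        Taking those rough figures at face value (they vary a lot by source), the sub-shares are at least internally consistent, and AI works out to about a third of the “traditional” internet’s draw; a quick sanity check:

```python
# The rough shares quoted above, as fractions of global electricity.
# (The thread's numbers, not authoritative measurements.)
total_datacenter = 0.020  # all data centers / infrastructure
ai_share = 0.005          # AI-specific
internet_share = 0.015    # "traditional" internet / cloud

# The two sub-shares add up to the data-center total...
assert abs(ai_share + internet_share - total_datacenter) < 1e-12

# ...and AI's draw relative to the traditional internet is one third.
ratio = ai_share / internet_share
print(round(ratio, 3))  # 0.333
```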

        The Internet is also a tool that humanity uses. Should we shut that down too? (I would argue yes considering how the “Information Superhighway” somehow made the average person dumber, but that’s a different discussion.)

        • SpacetimeMachine@lemmy.world
          link
          fedilink
          arrow-up
          4
          arrow-down
          1
          ·
          4 hours ago

          Except the Internet is actually useful. AI has not shown that it deserves to use that insane amount of energy. It’s actually insane that you think AI isn’t an issue when it’s using 1/3rd as much energy as the ENTIRE INTERNET

  • zeroConnection@programming.dev
    link
    fedilink
    arrow-up
    15
    arrow-down
    2
    ·
    13 hours ago

    Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

    They’re both “annoying teenage tech-bros who are detached from reality” and they are spreading propaganda they picked up elsewhere.

  • WorldsDumbestMan@lemmy.today
    link
    fedilink
    English
    arrow-up
    1
    ·
    8 hours ago

    Current AI is unsuitable, but automation of some kind (maybe not AI) will be necessary for a nearly workless future. Life is kind of dumb as is; it’d be better if we spent time in the gym, doing yoga, or learning something, instead of spending life in the pesticide factory and then dying of a horrific disease after 3 years of retirement.

    • bss03@infosec.pub
      link
      fedilink
      English
      arrow-up
      3
      ·
      5 hours ago

      We already had (pre-2020) all the automation we needed to work less than 20 hr/wk and produce all the necessary calories, fresh water, and housing for everyone. But instead we chose to turn a few people into decabillionaires and continue to bicker over the scraps like we weren’t in a post-scarcity society.

      LLMs, transformers, convolution layers, characteristic tensors, etc. all have some legitimately novel uses, but all the big “AI” product lines are unethically developed, irresponsibly deployed, and dishonestly marketed.

      If you want an ethical chatbot, I recommend https://en.wikipedia.org/wiki/Apertus_(LLM) .

      I don’t know of an ethical model that’s good for images or code yet, but I know people are working on them. The IBM Granite models are getting close, but I don’t know if IBM will ever get the training data completely “clean” / open / free.

      I’ve been told that StarCoder is an ethically-trained free software model, but some of my research ( https://mot.isitopen.ai/model/StarCoder ) contradicts that assertion, and I’ve not looked into it deeply enough to resolve that conflict myself. (IMO, we don’t actually need automated code generation; we need to write less code in better languages with better tests and more reuse. But you may not agree.)

  • Tiral@lemmy.world
    link
    fedilink
    arrow-up
    16
    arrow-down
    2
    ·
    16 hours ago

    I think AI has positives that help people; that being said, I think it’s out of control currently. I hope the bubble bursts soon and we can actually get to a reasonable balance.

    • deadymouse@lemmy.world
      link
      fedilink
      arrow-up
      3
      arrow-down
      3
      ·
      15 hours ago

      I hope the bubble bursts soon and we can actually get to a reasonable balance.

      In fictional stories, yes; in reality, no. The only application AI will find is replacing all employees, and people will be thrown out into the street.

  • GarboDog@lemmy.world
    link
    fedilink
    arrow-up
    7
    ·
    16 hours ago

    Humans are social animals, and in the United States especially, where people are severely isolated, they’ll look for and find any kind of easy access to social interaction, including but not limited to chatbots. It’s a sad reality that they dismiss the negative effects it has on our social brains, dismiss the environmental effects it has on our planet, dismiss the social warnings, because they’re too involved with LLM “AI”.

    That’s right, it’s not even AI; it’s only large language models or some agentic systems. Way smaller ones existed in the past: think Dr. Sbaitso (1992) or A.L.I.C.E. (1995). It’s actually not hard to make a chatbot; just have it echo what the user says with some key phrases. That’s the whole existence of chatbots, and today’s current “AI” works the same way, only with a LOT more variables, fit to huge data sets (both free open sources and stolen data), and that’s what causes it to hallucinate: it’s randomness that humans don’t have the ability to change or update, simply because it’s such a huge list of variables. It’s so massive people think it’s real intelligence! PEOPLE WERE FOOLED BY 1990s CHAT BOTS TOO! 😭 😂
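
    For the curious: a keyword-echo bot in the spirit of those 1990s programs fits in a few lines. This is a minimal sketch with made-up rules, not any real bot’s actual rule set:

```python
# A minimal ELIZA-style chatbot: keyword rules that echo the user's
# own words back. The rules here are invented for illustration.
RULES = [
    ("i feel", "Why do you feel {rest}?"),
    ("i am", "How long have you been {rest}?"),
    ("my", "Tell me more about your {rest}."),
]

def reply(user_input):
    text = user_input.lower().strip(".!?")
    for keyword, template in RULES:
        if keyword in text:
            # Echo back whatever followed the keyword.
            rest = text.split(keyword, 1)[1].strip()
            return template.format(rest=rest)
    return "Please go on."  # generic fallback, just like the originals

print(reply("I am tired of AI hype"))  # How long have you been tired of ai hype?
```

    No understanding anywhere, yet in 1966 ELIZA’s users were already convinced otherwise.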

    Anywho, we recommend the movies Desk Set, 2001: A Space Odyssey, Pi, and even Alphaville. They’re related to the subject and pretty good at pointing out the problems.

      • GarboDog@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        4 hours ago

        If your point was to say “LLMs are good because they can hack into people’s PCs and make the world worse,” I think you gotta start setting priorities towards finding some empathy.

        Besides, it was not discovered by an LLM or AI. It was discovered by Taeyang Lee, a researcher at Theori, and later refined into an exploit chain by the Xint Code Research Team; both used “AI”-assisted analysis. So no, LLMs didn’t magically find a decade-old exploit; an LLM was simply used as a search function over a model trained on past code, pointed at the logic bug in the Linux kernel.

        So yeah it’s basically a glorified search function at that point and if you can find peace fucking a search bar- hey man that’s your thing 🤷🏻‍♀️

        Our sources:

        • mirshafie@europe.pub
          link
          fedilink
          arrow-up
          1
          arrow-down
          1
          ·
          2 hours ago

          Holy shit, are you a professional strawman builder? Because you’re really good.

          An LLM helped fix a bug. That’s all we need to know. It’s useful. Saying so has nothing to do with empathy, lack thereof, or robosexuality or whatever the shit kids are into these days.

      • bss03@infosec.pub
        link
        fedilink
        English
        arrow-up
        2
        ·
        5 hours ago

        Not the person you asked, but I’d guess it’s multi-factorial. First, LLM-based summaries ARE generally higher quality than what the pre-LLM summary tools output. Second, LLMs are being given away free at point of prompt and are easily found; while summarizing tools have existed at least since 2000 (MS Word contained one), they were not easily found and usually involved purchasing some larger software collection, or an onerous install process. Third, everyone* hates** reading: if you’ve ever had user support as part of your job, you’ve probably had at least one user where the message they read to you off their screen tells them exactly what to do, but they chose to call you before really reading it.

        Also, I’m not sure what “long” is. It can be really hard to keep enough attention on something through 100s of pages, especially when it’s not trying to be engaging and is rather dry.

        To OP, I would say that you might want to rethink using an LLM summary for any decision process. The LLM architecture makes “hallucinations” inevitable so eventually you are going to read an LLM summary that says the document includes something that it does NOT.

  • RoddyStiggs@lemmy.blahaj.zone
    link
    fedilink
    arrow-up
    47
    arrow-down
    10
    ·
    1 day ago

    If people weren’t fucking stupid, these scams would eventually stop working.

    What’s it been, 4 years since NFTs? And AI morons are already falling for this shit.

    • bbb@sh.itjust.works
      link
      fedilink
      arrow-up
      10
      arrow-down
      1
      ·
      12 hours ago

      I lean anti-AI, but comparing generative AI to NFTs is very strange to me. Even if you didn’t intend to imply any similarity beyond both being scams, surely generative AI is at least a much more compelling scam.

      LLMs can now understand, to some extent, almost any text humans can. They might not be able to reason about it well, but they can at least translate it, summarize it, etc. If you had asked me 10 years ago, I’d have told you there was a near-zero chance of that happening within our lifetimes. NFTs were just “if we put baseball cards on the blockchain, people might buy them because of that same quirk of psychology.”

      • GamingChairModel@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        7 hours ago

        Transformers are like blockchain: an interesting use of mathematical principles to solve certain problems in a novel way, where the hype around that core attracts charlatans and scammers and combinations of the two traits who claim that it will go on to solve totally different problems in such a way as to revolutionize the world we live in.

        NFTs were the end of that line for blockchain where the machine started to eat itself. I can see a future, stable use of blockchain in some limited contexts, but cryptobros have always overstated the contexts in which that particular type of digital ledger can be more useful than other types of digital ledgers.

        We’ll see where the end of the road is for transformers, and what’s left at the end. I believe that computer inference will always be useful in some contexts, and that the advances in huge models with absurdly large numbers of parameters have unlocked some previously impractical tasks, but I could also see that settling into a general background existence as just another technological tool for doing things in a world that still looks pretty similar to the world today.

  • Bazoogle@lemmy.world
    link
    fedilink
    arrow-up
    22
    arrow-down
    4
    ·
    1 day ago

    Honestly, the problem when talking about “AI” is how many different things that can mean.

    • General AI chats
    • Coding agents
    • Automated pentesting/vulnerability discovery
    • Image/video/music generation
    • Grammar checking
    • Automated support agents (phone or chat)
    • Autonomous weaponry

    and so many more. Being pro-AI could mean you like one or two applications of AI but are against the others. I know very few people that like it for media generation. However, a lot of long-standing vulnerabilities in very popular open source projects were only just discovered with its help. That seems like a pretty undeniable use case demonstrating its usefulness.

    Then of course there are governments that want to get their greedy, bloodthirsty hands on it to create autonomous weaponry. So now if you try to defend AI for a use case like defensively finding program vulnerabilities, you somehow also have to defend AI weaponry?

    A generic AI model is very powerful and can either be used to grow yourself or abused so your brain doesn’t have to work at all. You can use AI to do the hard work for you, or use it as a personal tutor to guide you toward what to learn. People will of course mention hallucinations as a reason it can’t be used to learn, but you don’t have to take AI at its word. If you ask it to create a lesson plan covering what to study for a subject, in what order, and with what resources, you can do all of the actual learning using content the AI has no control over. So what you do with it is going to be up to the person, and opinions on it are going to vary wildly.

    Some people argue no use case is okay, given the concerns over energy and water usage and where the models sourced their training data. Not to mention that if you support AI, you must be supporting the AI companies. I agree there are concerns about the environmental impact, and the training-data discussion is a long one on its own. However, I do think you can support AI as a technology without being okay with how it is currently being done with regard to environmental impact. And given that AI can run on a local machine, I don’t think it has to be tied to big tech at all.

    “AI” is such a wide and immense topic. And what we talk about with AI today will not be relevant come next year with how quickly it is developing. We shall see if some form of Moore’s law applies to the growth of AI as far as efficiency and quality go.

    • clif@lemmy.world
      link
      fedilink
      arrow-up
      13
      arrow-down
      1
      ·
      1 day ago

      One of the first things I say when non-tech people ask me about ““AI”” is:

      “The term AI here is just marketing wank”

  • Lasherz@lemmy.world
    link
    fedilink
    arrow-up
    124
    arrow-down
    12
    ·
    2 days ago

    It’s usually bots. Unfortunately they’re not easy to moderate, but if a bot is reported, doesn’t have a bot flag, and says a bunch of pro-AI stuff in addition to the reported activity, that’s usually enough evidence to ban. It’s just one of their current tells; I wouldn’t base a ban on that alone, though. Do report them when you suspect them.

  • mlg@lemmy.world
    link
    fedilink
    English
    arrow-up
    34
    arrow-down
    7
    ·
    2 days ago

    This is nothing new, actually; the same thing happened during the crypto boom.

    There’s slop users (autoclankers) and then there’s researchers or developers actually doing the same stuff they’ve been doing for 5+ years.

    I think it just seems that way because there’s always a clash on practically every post.

    Some people don’t see the inherent flaw in outsourcing their thinking to a cloud model, or the massive economic bubble they are helping to create.

    But some people are doing some genuinely interesting things that would have otherwise been impossible several years ago just because AI and model training research got a huge boost for everyone the past few years.

    My personal favorite is a drone that rapidly assesses produce plants’ quality, output, issues, etc. for large farms with some brand-spanking-new image models, and it costs about as much as maybe a new toolbox. No one wants to manually weed through hundreds of acres to count buds and try to catch problems before it’s too late. It’s a great upgrade from doing random samples that miss a lot of data.

    On the other hand, those opposed to AI also have a subgroup that wants anything and everything with AI in the name dead, without any regard to what it is or what it does.

    It’s like when you throw .world and .ml users into one post. They both think the other is louder, and also the big dumb lol.

    • audaxdreik@pawb.social
      link
      fedilink
      English
      arrow-up
      19
      arrow-down
      2
      ·
      2 days ago

      On the other hand, those opposed to AI also have a subgroup that wants anything and everything with AI in the name dead, without any regard to what it is or what it does.

      This might be a bit of a hot take, but I don’t really see anything inherently wrong with this. The scientists and engineers will continue doing their serious work regardless of public opinion, and while some of them may have tangentially benefited from increased interest and funding in the field, most of it is going to these corporate LLM models, which are taking up all the oxygen in the room.

      That’s a bubble that needs to burst. I think it’s more important to keep public sentiment rightfully focused in that direction. Let’s face it, you’re really not going to be able to educate the general public on these nuances. The field at large will persist regardless.

      • benjirenji@slrpnk.net
        link
        fedilink
        arrow-up
        9
        arrow-down
        3
        ·
        1 day ago

        If you don’t differentiate and keep the two in the same pot, you won’t be able to fund research into the useful stuff. It’s true that consumer hype and research funding decisions are not the same, but they may be indirectly linked. A public fund may fear public outrage if it continues funding X millions in AI projects, even ones that aren’t LLM-related.

        So the reputation damage may affect viable, net-positive applications.

  • lovingisliving@anarchist.nexus
    link
    fedilink
    English
    arrow-up
    77
    arrow-down
    18
    ·
    2 days ago

    People have different opinions on AI; not everyone is vehemently opposed, and some view it as useful when used in the appropriate context.