What’s a common “fact” that’s spread around that’s actually not true and pisses you off that too many people believe it?

  • Iconoclast@feddit.uk · +48 / −8 · 5 days ago

    “LLMs are not AI”

    Artificial intelligence is a term used in computer science to describe a system capable of performing cognitive tasks that would normally require human intelligence - like generating natural-sounding language. The issue isn’t that the term is being used incorrectly, but rather that most people think it means more than it actually does. It’s a broad term that covers everything from old Atari chess engines to artificial superintelligence.

    • TachyonTele@piefed.social · +38 · 5 days ago

      The problem is that people think “LLM AI” means it’s thinking, when it obviously isn’t. That’s why people say “LLMs are not AI”: in the hope that others will stop believing that LLMs think.

        • TachyonTele@piefed.social · +21 / −1 · 5 days ago

          That’s not thinking. That’s calculating. It doesn’t have any thoughts about your math problems.

          • gdog05@lemmy.world · +18 · 5 days ago

            It doesn’t have any thoughts about your math problems.

            You say that but I feel judged sometimes.

              • Grail@multiverse.soulism.net · +3 · 4 days ago

                If you’ll allow Me to drop the quips and get more philosophical, I believe that thinking is just a word for processing data. It’s obvious to Me that you disagree, but I don’t understand why. Your idea of thought seems a little more metaphysical or perhaps even spiritual than Mine.

                The obvious assumption I could make is that you believe thinking has internality and data processing doesn’t. But if that’s the case, then you don’t really have any proof for your beliefs, because we can’t ask calculators if their data processing is accompanied by an internal experience. And that’s why it seems to Me that your assertions are unprovable and thus essentially religious in character.

                • TachyonTele@piefed.social · +2 · edited · 4 days ago

                  I like it!
                  I apologize for the spelling; I’m still on my first cup of coffee. Yes, I associate thinking with inwardness. Consciousness is a completely unknown state. No one knows how it works, why it works, or what it works in. It’s a black box.

                  All we know is that we have consciousness. I believe most animals have consciousness, and thus can think. Insects and amoebas, small life forms, have sentience. Sentience is the ability to react to the environment and to stimuli, without the capacity to think and be conscious the way humans are.

                  Inorganic objects have neither. You can’t imagine what it’s like to “be” a rock. They are simply matter. Computers fall into this category. Computers follow the 1s and 0s and execute those instructions. They don’t consider what they’re doing. They don’t ponder why you’re asking or try things on their own. They are as sentient as a screwdriver.

                  • Grail@multiverse.soulism.net · +2 · 3 days ago

                    Yeah, that’s what I kinda guessed. You’re just assuming they don’t have internality based on vibes. Your beliefs aren’t falsifiable, they can’t be empirically tested. This is religion, not science.

    • Strider@lemmy.world · +2 · 3 days ago

      As a guy working in tech for decades I disagree.

      We coined the term wrong: the literal words do not match the technology, “intelligence” in particular.

      That ‘we’ collectively agreed that LLMs are AI sadly does not make things better.

      Anyhow, here we are, with neither you nor me able to leave this hype train.

      • Iconoclast@feddit.uk · +2 / −1 · 3 days ago

        But we don’t have an agreed-upon definition of intelligence either:

        • The ability to acquire, understand, and use knowledge.
        • The ability to learn or understand, or to deal with new or trying situations.
        • The ability to apply knowledge to manipulate one’s environment, or to think abstractly as measured by objective criteria (such as tests).
        • The act of understanding.
        • The ability to learn, understand, and make judgments or have opinions that are based on reason.
        • The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

        I see AI as a term similar to “plants.” When I hear this complaint it sounds to me like someone asking how strawberries and sequoia trees can both be plants when they couldn’t be further apart. Well yeah, but that’s why we have more specific terms when we’re referring to a particular plant - just like with AI. Plants and AI are both parent categories that cover a wide range of subcategories.

        • Strider@lemmy.world · +2 · 3 days ago

          Respect for you, good sir! A good point well made.

          It’s just my interpretation, or current understanding, of intelligence. I think I am accidentally adding sentience and motivation to it.

          So your original point stands.

          • Iconoclast@feddit.uk · +2 · 3 days ago

            Thank you.

            I think the issue is that when people hear “AI,” their minds immediately jump to the sci-fi AI systems depicted as being as smart as or smarter than humans. They then see the stupid mistakes LLMs make and reasonably conclude these systems are nothing alike, so LLMs don’t count as AI in their minds.

            However, the AI systems in sci-fi aren’t just intelligent - they’re generally intelligent. That’s what LLMs lack.

            The way I see it, there are levels to intelligence. A chess bot is a narrowly intelligent system. It’s great at one thing but can’t do anything else. Then there’s Artificial General Intelligence (AGI), which is basically human-level intelligence. The next step up is Artificial Superintelligence (ASI) - a generally intelligent system that’s superhuman across the entire field of intelligence, unlike a chess bot that’s only “superhuman” at chess.

            I’d say LLMs are somewhere between narrow intelligence and AGI. They can clearly do more than just generate language, but not to the extent humans can, so I wouldn’t call them generally intelligent. At least not yet.

            And yeah, I don’t think sentience necessarily needs to come along for the ride. It might, but it’s not obvious to me that one couldn’t exist without the other. One can imagine a system that’s superintelligent even though it doesn’t feel like anything to be that system.

    • vrek@programming.dev · +5 · 5 days ago

      Marketing and PR pressure to be able to use the term “AI” because it’s the current hype. Everything is now AI, so it’s become a meaningless term. Image processing, data calculations, language interpretation, language generation: all claim to be AI. If your product has AI, that now tells me nothing about what it does.

      • Iconoclast@feddit.uk · +1 · 4 days ago

        Marketing only calls everything AI because that’s the only term people recognize. ChatGPT is AI, yes, but to be specific it’s a Large Language Model. DALL-E is also AI, but the more accurate term is Diffusion Model. There’s just no point in using these terms in marketing, because 90% of people would have no idea what you’re talking about.

        When people say that LLMs are not AI, they usually mean that LLMs are not generally intelligent (AGI), which is true, but they still count as AI.

        • vrek@programming.dev · +2 · 4 days ago

          Exactly, but so many people form strong opinions and expectations when they say “AI,” even though it could mean so many things.

    • Zacryon@feddit.org · +1 · 3 days ago

      Minor corrections: AI does not just comprise methods for tasks that require ‘cognition’; let’s rather use the more general “information processing”. Nor is it restricted to what “normally requires humans”. Think of swarm-intelligence methods, for example, like ant colony optimization.

      There is an inherent issue in the definition of the word “intelligence”, though. For labelling a bunch of methods that’s not so problematic; we could call all of it ‘banana milkshake’ as long as we agree upon what we put into that category.

      But we do not even have a good definition of “intelligence” itself. As soon as this issue is solved, we might start rethinking the label ‘artificial intelligence’.

      My proposed “information processing” is also insufficient, as it would make a fancy pocket calculator indistinguishable from what we usually call “AI”.

      Thinking about that: if we applied some AI methods, e.g. from the field of machine learning, to perform operations that a pocket calculator already solves (which is kind of ridiculous, because we would be using a computer to train an AI model to mimic a computer), would that make the calculator AI? Or the AI a calculator? What would that make us humans?
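      That thought experiment can be sketched in a few lines of Python: a toy machine-learning fit (ordinary least squares here; all names and numbers are illustrative, not from the thread) that “learns” the addition a pocket calculator performs exactly:

```python
# Toy sketch: use a machine-learning method (plain least-squares regression)
# to "learn" what a pocket calculator already does, namely addition.
import numpy as np

rng = np.random.default_rng(0)

# Training data: random pairs (a, b) and their exact sums.
X = rng.uniform(-100, 100, size=(1000, 2))
y = X.sum(axis=1)

# Fit y ≈ w1*a + w2*b; since the data is exactly linear,
# the fit should recover w ≈ [1, 1].
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w)              # weights close to [1. 1.]
print(X[0] @ w, y[0]) # the "learned calculator" vs. the real sum
```

      The punchline of the thread’s question survives the sketch: the fitted model and the calculator compute the same function, so any line drawn between them is about how the behavior was obtained, not what it does.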

    • givesomefucks@lemmy.world · +5 / −13 · 5 days ago

      Artificial intelligence is a term used in computer science

      That nerds appropriated an original term does not mean that we have to change the meaning of the original term…

      I don’t look out my “transparent glass”, I look out my windows, even though that’s the name of an operating system. If I say I grok something, it means I understand it the way Heinlein intended, not that I asked a racist AI about it.

      “Artificial Intelligence” and all sorts of things computer nerds are trying to claim they invented have existed in theory at least as far back as Rome.

      So “the problem” is that you first heard about it in the context of chatbots, so now you want to insist that is the only meaning the phrase has ever represented, and everyone else needs to change to accommodate you.

      The problem isn’t that people are using the phrase wrong; the problem is that you don’t know what it means except in a very narrow context.

      None of any of this shit is new, people are just ignorant.

      It’s like when I was a kid and watched pro wrestling: I thought it was cool and original, because I didn’t know the media it was blatantly ripping off.

      That’s where you are at right now with Artificial Intelligence, you only know the version the grifters have appropriated.

      Pre-emptive edit:

      I’m not saying chatbots are AI; I’m saying the definition that calls them AI is incorrect, because grifters just changed it to fit what they were doing, for money.

      • Grail@multiverse.soulism.net · +8 · 5 days ago

        You say that like computer scientists in the 1950s who invented the concept of AI stole it from science fiction writers instead of the other way around.

      • Zacryon@feddit.org · +1 · 3 days ago

        So “the problem” is that you first heard about it in the context of chatbots, so now you want to insist that is the only meaning the phrase has ever represented, and everyone else needs to change to accommodate you.

        No, it’s a term used in science and engineering to categorize a set of algorithms, methods, and models; it is widely misunderstood and existed well before the first chatbots.
        Such misconceptions are not unusual; they often result from reading scientific terminology from a colloquial point of view. Think of the term “theory” for another example.

        I’m not saying chatbots are AI, I’m saying the definition that calls them AI is incorrect because grifters just changed it to fit what they were doing, for money.

        I disagree with the money part. You are throwing scientists and engineers into one pot with those who exploit the term for marketing purposes alone.
        But I agree that the “intelligence” part is difficult to justify.

        I understand that it is an intuitive choice for labelling methods that can mimic or outperform “natural intelligence” (people, birds, ants, fungi, bacteria, …) on tasks that involve some form of information processing. The “artificial” part underlines that these methods are, well… not usually found in nature (although often inspired by it) but manufactured, man-made.

        From my point of view the issue really begins with the “intelligence” part. We throw this word around as if it were something unique to humans, yet there exists no solid definition of what the fuck ‘intelligence’ even is. I challenge you to think of an airtight definition of ‘intelligence’. If we had a solid definition for that, we could think about how to carry it over to what we currently call artificial intelligence, and consider relabeling if necessary.

        Currently, I lack an alternative. And for that reason I stick with AI as a commonly accepted working label.

      • theherk@lemmy.world · +2 · 5 days ago

        First, that actually is how language works: meaning is given to words by consensus, and consensus alone. Since the term came into widespread usage in the modern lexicon, it has meant exactly what they described.

        Second, you say it was appropriated. Okay, from what?