What’s a common “fact” that’s spread around that’s actually not true and pisses you off that too many people believe it?

  • AnarchoEngineer@lemmy.dbzer0.com
    10 hours ago

    Okay, yes, we aren’t doing single words at a time (technically, some models chunk long words anyway, so even near the beginning we weren’t working with “singular” words).

    However, you are both right and wrong in your assertion that LLMs are equivalent to human responses.

    The translation of thoughts to words, and of words to motor functions, can both be approximated by ANNs. And yes, the work done to select the words we use is a probabilistic process like you describe. We hear patterns in language, and that makes us more likely to use that phrasing. The more you hear a phrase, the more likely you are to use it over another, and when two or more phrases would communicate what you want to say, your brain basically just picks one.
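    A toy sketch of that frequency-weighted selection (the phrases and exposure counts here are made up purely for illustration, not taken from any model or dataset):

```python
import random

# Hypothetical exposure counts: how often each phrase has been "heard".
# More exposure -> proportionally higher odds of being picked.
exposure_counts = {
    "no worries": 7,
    "no problem": 3,
    "don't mention it": 1,
}

def pick_phrase(counts, rng=random):
    """Sample one phrase, weighted by its exposure count."""
    phrases = list(counts)
    weights = [counts[p] for p in phrases]
    return rng.choices(phrases, weights=weights, k=1)[0]

print(pick_phrase(exposure_counts))
```

    Over many draws the most-heard phrase dominates, but the rarer ones still surface occasionally, which is the point being made above.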

    So, if speech production (or sentence construction) was what you meant by saying “it’s the same for human responses,” then yes, we agree. Both are probabilistic word generators and likely work in similar ways. (In fact, I think place cells were found in Wernicke’s area (?) or one of the other speech cortices, which means some of our word selection is likely similar to the results from transformer architectures.)

    However, if you meant that the entirety of human response—from hearing/reading a comment, to thinking about it, to responding—is the same as current LLMs generating text, then I strongly disagree.

    The actual process of “thinking” is not something an ANN (especially a non-recurrent one) can do. The ability to ruminate on thoughts and make changes or learn new things simply by trying to formulate ideas, before even deciding to comment, cannot be accomplished with a pre-trained static net, not even one with memory or the illusion of memory like current LLMs. (Not to mention that identity also plays a large role in our responses, and it too cannot arise from current deterministic architectures.)

    As for my claim that asserting “human response is chemistry” is more like asserting “AI is electromagnetism,” there are many reasons why, but the simplest illustration would be this:

    • I think it is entirely possible to build an inorganic but still functional human brain on electrical hardware (in other words, full-blown transhumanism, or at the very least “AGI”). If human response is chemistry in organics, it would be electromagnetism in silicon.
    • dream_weasel@sh.itjust.works
      8 hours ago

      So, if speech production (or sentence construction) was what you meant by saying “it’s the same for human responses,” then yes, we agree. Both are probabilistic word generators and likely work in similar ways.

      This is what I meant.

      However, if you meant that the entirety of human response—from hearing/reading a comment, to thinking about it, to responding—is the same as current LLMs generating text, then I strongly disagree.

      That would be crazy, I’m glad you would disagree with that.

      As for my claim that asserting “human response is chemistry” is more like asserting “AI is electromagnetism,” there are many reasons why, but the simplest illustration would be this:

      I think it is entirely possible to build an inorganic but still functional human brain on electrical hardware (in other words, full-blown transhumanism, or at the very least “AGI”). If human response is chemistry in organics, it would be electromagnetism in silicon.

      This is moving into a funny gray area, but what you’re talking about is, I think, only possible if you take a route like the one covered in Jeff Hawkins’ “A Thousand Brains.” It’s not the most fun read if you’re not into neuroscience, but the second half is pretty relevant regardless.

      • AnarchoEngineer@lemmy.dbzer0.com
        2 hours ago

        I haven’t read the book, but I am familiar with the Thousand Brains hypothesis. The real problem, as far as I can tell, is the variation in the morphology and connectivity of different neurons.

        The brain might make every column the same to begin with, but if that’s the case, the diversity of the initial columns is immense. There are so many different genes involved even for just pyramidal neurons, not to mention the interneurons and the glia.

        Plus, the functions of many cell types, like chandelier cells, are still unknown. They’re everywhere and they regulate firing, but we’re not sure how. They can be inhibitory or excitatory, and sometimes they fire in response to both inhibitory and excitatory input, etc.

        And don’t even get me started on how no one actually seems to agree on the function of the layers of the neocortex. Every paper I read on the topic poses almost entirely different hypotheses for the function of each layer, and the few connection maps you can find show many connections that violate the idea that each layer takes specific inputs.

        Sure, spiking networks are much more biologically plausible (and fun to work with, so I recommend you try one out if you’re interested in this field), but the connections and learning rules seem to matter more.
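        If you want to try a spiking neuron, the leaky integrate-and-fire (LIF) model is the usual starting point. A minimal sketch, with illustrative parameters rather than any biological fit:

```python
# Minimal leaky integrate-and-fire (LIF) neuron. Membrane potential leaks
# toward rest, integrates input current, and spikes/resets at threshold.
# All parameters here are made up for illustration.
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0,
                 leak=0.1, dt=1.0):
    """Return the list of timesteps at which the neuron spiked."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-leak * (v - v_rest) + i_in)  # leaky integration
        if v >= v_thresh:                        # threshold crossing
            spikes.append(t)
            v = v_rest                           # reset after spike
    return spikes

print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

        With constant input the neuron fires at a regular rate; the interesting behavior in full spiking networks comes from the connectivity and learning rules layered on top of units like this, which is the point above.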
