• 1 Post
  • 7 Comments
Joined 2 years ago
Cake day: July 10th, 2024

  • Thank you for the link. Does the paper reveal anything about kissing specifically? Haven’t had the time to read it yet.
    I know that humans and other animals can feel affection for each other, and that physical contact, e.g., stroking, induces a sensation of ‘affective touch’ mediated by C-tactile fibers. So while kissing might induce similar effects through similar pressure and temperature, I wonder whether it is really any different from ‘poking’ your pet with your finger in a similar way. In other words: if the physical sensation is similar, does another animal understand a kiss as distinct from any other form of affective touch?




  • Oh plenty…


    The myth of “alpha wolves”, and all the men who build a toxic social and psychological image of themselves and other men around it, apparently because they would like to live in a zoo and get into conflicts with men they have never met before, or something.

    But seriously, there were some grave errors in how this myth came to be. It wasn’t based on observing wolves in their natural environment, and there are no “alpha wolves” in the wild. The researcher David Mech, who was partly responsible for spreading this stupidity, has been working ever since to correct it, but media and society had already swallowed the misconception whole.


    Next one:
    “LLMs are not AI.” Yes, they are. AI is a scientific label for a bunch of methods, algorithms, and models.
    “But they are not ‘intelligent’.” My dear fellow flesh bag, we do not even have a clear definition of what ‘intelligence’ even is. Come up with a good one, then let’s talk about this particular label. Until then, you can rename AI to ‘pesto alfredo’ for all I care as long as we agree what kind of methods we mean by that to categorize a bunch of computer science stuff.

    In the opposite corner:
    “We have achieved AGI with LLMs.” No, we have not. There is still a substantial lack of capabilities and properties.

    Or: “LLMs are sentient and self-aware.” To the best of my knowledge, they are not. To be fair, there is some room for debate, which often boils down to semantic arguments about consciousness and definitions of understanding, but the consensus is that they are not.


    Another one:
    “Homeopathy cures diseases.” No, it doesn’t. It has a placebo effect, but that’s about it.


    There is more:
    “Evolution theory is just a ‘theory’.” No, it’s a rigorously tested set of explanations and models supported by overwhelming empirical evidence. This is a popular confusion of the colloquial use of the word “theory” with the scientific one.

    Colloquial meaning: a guess, hunch, speculation, or unproven idea.

    Scientific meaning: a well-substantiated explanatory framework supported by extensive evidence and capable of being tested and potentially falsified.


    And there is even more, but I have already written a wall of text and am tired now.


  • That’s philosophical.
    Aren’t our neurons, waves, etc. just systems that directly ‘perform’ math without ‘doing’ math? Math can be seen as a language for us to describe, explore, and predict things. But you could equally say that the math is already there and we just discover it and put it into words.

    That relates to the question of whether math is discovered or invented. The former would be an act of uncovering universal, natural truths; the latter a creative process of bringing something new into a universe where it isn’t naturally found.

    But that’s the catch. We wouldn’t say that, for example, coffee machines are discovered; they are not found in nature. (If they were, that would be quite a headline to wake up to.) They are clearly invented. Math, however, builds upon a foundation of provable truths, of things that are already there and can be found in nature. So while we might argue that at least some parts of math are invented (just like the coffee machine, whose components operate on physical principles that exist elsewhere in nature), isn’t the foundation of math itself rather discovered? We just put into words and symbols what is already there and uncover the hidden mechanisms.

    I am not a mathematician, but I have heard that it already takes quite an effort to prove why our numbers make sense or why 1 + 1 equals 2. And while we certainly do not need to tie math to an observable physical reality, we derived its fundamental working principles from it, didn’t we?
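    That “quite an effort” is real: in Peano arithmetic, even 1 + 1 = 2 has to be derived from the definitions rather than simply assumed. A minimal sketch, writing S for the successor function and using the standard recursive definition of addition:

    ```latex
    % Peano-style sketch: S is the successor function,
    % with 1 := S(0) and 2 := S(S(0)).
    % Addition is defined recursively by:
    %   a + 0    = a
    %   a + S(b) = S(a + b)
    \begin{align*}
    1 + 1 &= 1 + S(0)  && \text{since } 1 := S(0) \\
          &= S(1 + 0)  && \text{by } a + S(b) = S(a + b) \\
          &= S(1)      && \text{by } a + 0 = a \\
          &= 2         && \text{since } 2 := S(S(0)) = S(1)
    \end{align*}
    ```

    Even this short chain leans on axioms and definitions that themselves need justification, which is the point being made above.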


  • Minor corrections: AI does not just comprise methods for tasks that require ‘cognition’. Let’s rather use the more general “information processing”. Nor is it restricted to “normally requires humans”. Think of swarm intelligence methods for example, like ant colony optimization.

    There is an inherent issue in the definition of the word “intelligence”, though. For labelling a bunch of methods, that’s not as problematic: we could call all of it ‘banana milkshake’ as long as we agree upon what we put into that category.

    But we do not even have a good definition of “intelligence” itself. As soon as this issue is solved, we might start rethinking the label ‘artificial intelligence’.

    My proposed “information processing” is also insufficient, as it would make a fancy pocket calculator indistinguishable from what we usually call “AI”.

    Thinking about that: if we applied some AI methods, e.g. from the field of machine learning, to perform operations that a pocket calculator already solves (which is kind of ridiculous, because we would be using a computer to train an AI model to mimic a computer), would that make the calculator AI? Or the AI a calculator? And what would that make us humans?
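    The thought experiment above can actually be carried out in a few lines. Here is a toy sketch (all choices of ranges, learning rate, and epoch count are arbitrary, just for illustration): a linear model fitted by stochastic gradient descent that “learns” what a calculator does trivially, adding two numbers.

    ```python
    # Toy sketch of the thought experiment: training a tiny "AI" model
    # (linear regression via stochastic gradient descent) to mimic what
    # a pocket calculator does trivially -- adding two numbers.
    import random

    random.seed(0)

    # Training data: random pairs (a, b); the target is simply a + b.
    data = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(200)]

    w1, w2, bias = 0.0, 0.0, 0.0  # parameters the model has to learn
    lr = 0.005                    # learning rate

    for _ in range(300):
        for a, b in data:
            pred = w1 * a + w2 * b + bias
            err = pred - (a + b)   # difference from the true sum
            # One gradient-descent step on the squared error.
            w1 -= lr * err * a
            w2 -= lr * err * b
            bias -= lr * err

    # The model has now (approximately) rediscovered addition:
    # w1 and w2 end up close to 1.0, bias close to 0.0.
    print(w1, w2, bias)
    print(w1 * 3 + w2 * 4 + bias)  # close to 7
    ```

    Which of course only sharpens the absurdity: the computer spends sixty thousand update steps approximating an operation it could have executed exactly in one.
    
    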


    So “the problem” is you first heard about it in the context of chatbots, so now you want to insist that is the only meaning the phrase has ever represented and everyone else needs to change to accommodate you.

    No, it’s a term used in science and engineering to categorize a bunch of algorithms, methods, and models that is being misunderstood by many people in the first place and has existed well before the first chatbots.
    Such misconceptions are not unusual; they often result from interpreting scientific terminology from a colloquial point of view. Think of the term “theory” for another example.

    I’m not saying chatbots are AI, I’m saying the definition that calls them AI is incorrect because grifters just changed it to fit what they were doing, for money.

    I disagree with the money part. You are throwing scientists and engineers into one pot with those who exploit the term for marketing purposes alone.
    But I agree that the “intelligence” part is difficult to justify.

    I understand that it is an intuitive choice for labelling methods that can mimic or outperform “natural intelligence” (people, birds, ants, fungi, bacteria, …) on tasks that involve some form of information processing. The “artificial” part underlines that these methods are usually, well… not found in nature (though often inspired by it) but manufactured, man-made.

    From my point of view, the issue really begins with the “intelligence” part. We throw this word around as if it were something unique to humans. Yet there exists no solid definition of what the fuck ‘intelligence’ even is. I challenge you to come up with an airtight definition of ‘intelligence’. Once we have one, we can think about how to carry it over to what we currently call artificial intelligence, and consider relabeling if necessary.

    Currently, I lack an alternative. And for that reason I stick with AI as a commonly accepted working label.