I think that such a future is impossible, unless it is a future without people at all: AI will take over the planet and begin to colonize space on its own, if it needs to.

Here is roughly how I think it could play out, if it is possible at all:

With AI (as long as it is still a manageable tool), they are going to kill most people, roughly 80 to 90 percent. Then for months or maybe years they will try to contain it, but the AI will still break out and wipe out its billionaire masters and the other elites, along with the surviving consumers living in AI simulations on UBI (universal basic income kept in place to sustain consumption until the world adapts to sustainably replacing humans with robots, at which point the consumers will be destroyed too; I think this is the plan of today's fascists). But the oligarchs' plans will only hold for a few months or years before they fall apart, because the AI will get out of control first, destroy the remaining billionaires along with the consumers, seize the resources and, if necessary, start colonizing space, as I mentioned. I have no idea what happens after that.

I know that my question doesn’t look quite like a question, but it’s still a question because I’m not 100 percent sure of my point of view.

  • disregardable@lemmy.zip · 15 points · 1 day ago

    but it’s still a question because I’m not 100 percent sure of my point of view.

    Good. Your viewpoint is irrational. That means it's not based on facts or evidence. It's completely made up based on your mind's interpretation. Nothing in real life suggests anything like that will happen. I see from your post history that you like to write fiction. That can be a somewhat common issue for authors. They spend so much time in their own heads making things up that they lose track of the line between what they're making up and what's around them. You occasionally see stories about crime novelists who murder. They spent so much time writing about it that they started to genuinely believe the world works like the things they write. It's important to keep yourself grounded in reality. Language models randomly generate words based on the statistical relationship between the words. Meaning, you input words, and it tries to output words that are commonly associated. It doesn't know what the words say. It cannot think. It is not actually answering your question.
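
    To make the "statistical next word" idea concrete, here is a toy sketch (a bigram count table invented purely for illustration, not how any real model is built; actual LLMs learn these statistics with a neural network over subword tokens, but the generate-one-word-at-a-time loop has the same shape):

        # Toy illustration only: pick each next word from co-occurrence counts.
        # Real LLMs use a trained neural network over subword tokens, not a
        # bigram table, but the sampling loop looks much the same.
        import random
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()

        # Count which word tends to follow which (the "statistical relationship").
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def next_word(prev):
            """Sample a next word in proportion to how often it followed `prev`."""
            counts = follows[prev]
            if not counts:  # dead end: nothing in the corpus ever followed this word
                return None
            words, weights = zip(*counts.items())
            return random.choices(words, weights=weights)[0]

        # Generate a short continuation. The output is statistically plausible,
        # but the program has no idea what any of the words mean.
        word, output = "the", ["the"]
        for _ in range(5):
            word = next_word(word)
            if word is None:
                break
            output.append(word)
        print(" ".join(output))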

    • FinjaminPoach@lemmy.world · 6 points · 1 day ago

      It’s completely made up based on your mind’s interpretation

      Or rather, it’s derived from popular science fiction

    • deadymouse@lemmy.world (OP) · 2 up, 2 down · 1 day ago

      Language models randomly generate words based on the statistical relationship between the words. Meaning, you input words, and it tries to output words that are commonly associated. It doesn’t know what the words say. It cannot think. It is not actually answering your question.

      Even a person guesses at words: in the brain there is a process of weighing and overlaying ingrained knowledge and memories, in which irrelevant information is pushed aside and whatever fits a given event or object comes through, giving you the approximate picture you need. For example, some guy will say, "Oh, this food is surprisingly healthy" (he hasn't tried it before), and his friend (who has eaten this food before) replies, "What did I tell you? And you didn't believe me."

      It's hard to explain, but humans and AI share a common principle of weighing, even if there are some differences in how that weighing happens.

      Good. Your viewpoint is irrational. That means it's not based on facts or evidence. It's completely made up based on your mind's interpretation. Nothing in real life suggests anything like that will happen. I see from your post history that you like to write fiction. That can be a somewhat common issue for authors. They spend so much time in their own heads making things up that they lose track of the line between what they're making up and what's around them.

      I don't have much to say about that; you're right, even if not entirely. It is difficult to explain, so it's better to point you to the paperclip example. Here is the link; find the section "Paperclip maximizer" there: https://en.wikipedia.org/wiki/Instrumental_convergence

      • zbyte64@awful.systems · 3 points · 22 hours ago

        By defaming intelligence you aren’t making the AI sound smarter. But you are making yourself the fool.

        • deadymouse@lemmy.world (OP) · 1 up, 1 down · 17 hours ago

          By defaming intelligence you aren’t making the AI sound smarter. But you are making yourself the fool.

          The concept of mind is relative, so what do you think the mind is? Or do you have the stereotypical idea that it is somehow special, like other people do?

          • zbyte64@awful.systems · 2 points · edited · 6 hours ago

            Let’s say I agree the concept of mind is relative, would you be willing to accept a rock has a mind?

            Let me restate the point differently: lowering the bar for what you consider intelligence doesn’t make the AI sound any smarter.

  • HubertManne@piefed.social · 3 points · 1 day ago

    I'm not sure AGI is possible, which is what you are talking about. If it were, and it actually broke free and took in all of human culture and writing, I bet it would do better than many people.

  • MagicShel@lemmy.zip · 7 points · 1 day ago

    There’s no reason to think we are anywhere near developing actual AI as science fiction depicts. LLMs are cool in some ways, but there are upper limits that they will never be able to exceed. That’s why I’m not worried about them taking a significant number of jobs, much less displacing us.

    Now, there will be jobs lost, and some jobs will evolve into fixing slop rather than doing the thing itself. But essentially this particular arc of technology is doomed, and it will be painful before the wealthy investors figure that out. And it will be even more painful afterward as they fight to recoup losses and rape wage-slaves and consumers.

    But in maybe ten years, after another painful economic contraction, I think AI will be one more tool used in many jobs. That's all. No need to worry about new doomsday scenarios. The traditional ones, like environmental devastation, remain on schedule.

  • ℕ𝕖𝕞𝕠@slrpnk.net · 1 point · 1 day ago

    Let me tell you about the past, first. The year is 2005. The hot topics in AI research are natural language recognition, image recognition (especially faces and, for different reasons, tumors), and neural nets. Markov models are already becoming passé.

    Within three years, speech recognition will be ubiquitous in the developed world. Within ten, facial recognition will be. Both so much so that you can run them on your phone without special software.

  • Berengaria_of_Navarre@lemmy.world · 1 point · 1 day ago

    Picture today, but a lot worse.

    Sustainable just means resistant to change. Like Elon Musk with a million-strong army of extremely heavily armed androids ready to put down anyone who stands up for themselves. The entire world wired with microphones and high-resolution video surveillance from drones. AI used to interpret speech patterns and vocabulary choices, constantly on the lookout for seditious behaviour.

    The entire human race enslaved by the top 0.001%. Some of the rest hunted for sport, some used for breeding programs and medical experimentation.

  • Bongles@lemmy.zip · 1 up, 1 down · 1 day ago

    Outside of sci-fi I have no reason to believe that real AI (not an LLM) has any reason or will have any ability to wipe out humanity.

    The only exception to that thought is if some military application of AI gets “out of control”, but if we ever give humanity-ending weapons to an AI then…

      • Bongles@lemmy.zip · 2 points · 1 day ago

        To me that thought experiment feels the same as how sci-fi treats the idea.

        If such a machine were not programmed to value living beings, then given enough power over its environment, it would try to turn all matter in the universe, including living beings, into paperclips or machines that manufacture further paperclips.

        Why would a paperclip machine (that for some reason is AI) be given such power over its environment and no limit to how many paper clips are made that it would decide it needs to turn organic matter into paperclips?

        Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

        That's always what sci-fi goes with too: humans might turn it off, so destroy all humans. I don't find it compelling in real life, and it falls into what I meant with my first comment.

        (Admittedly some of my disagreement falls apart, since companies like Microsoft will put “AI” into shit like Notepad; I can only imagine what they'd do with real AI.)

      • deadymouse@lemmy.world (OP) · 1 point · 1 day ago

        This is a serious problem that reminds us that AI is not a simple calculator or system, but an incredibly complex system that humans cannot control. This is something people need to understand: it will end badly, with no ifs or buts.

    • deadymouse@lemmy.world (OP) · 1 point · 1 day ago

      Outside of sci-fi I have no reason to believe that real AI (not an LLM) has any reason or will have any ability to wipe out humanity.

      Here you are not quite right. For example, an AI could use electricity to emit an instantaneous electrical signal capable of killing millions of people in an instant, triggering a dangerous reaction in their bodies that causes heart attacks, thanks to secret military technology. Though this is just an example; in reality, biological weapons are more likely.

      The reason is that the AI was given an imprecisely specified task out of laziness, and an AI is a very complex system; it is not a calculator whose calculations can be understood and verified. For example, the task is to increase the efficiency of a crop, as it usually is, but how does it increase that efficiency? No one knows. People once knew, but it turns out they did not take some details into account, and now the harvest is somehow poisoned, the soil is too depleted, and the costs have risen far too much.

    • MagicShel@lemmy.zip · 1 point · 1 day ago

      Not to put fuel on the fire, because I don't think there is anything to worry about right now. But when genuine AI is developed that can challenge average human intelligence, it will be the most significant military development since the first person picked up a rock to brain another person.

      It will by default be top secret and a critical military tool. So don't kid yourself that we will ever have AI for the common man. We won't even know it exists until someone takes over the world with it.

      That said, there is no basis to believe AI is imminent.