I think such a future is impossible, unless it is one without people at all: the AI takes over the planet and colonizes space on its own, if it needs to.
Here is roughly how I imagine it could look, if it is possible at all:
While AI is still a manageable tool, they will use it to kill most people, roughly 80 to 90 percent. Then for months, maybe years, they will try to contain it, but the AI will still break out and wipe out its billionaire masters and the other elites, as well as the surviving consumers living in AI simulations on UBI (universal basic income to sustain consumption until the world adapts to sustainably replacing humans with robots, at which point the consumers would be destroyed; I think this is the plan of today’s fascists). But the oligarchs will not manage to see such plans through, except perhaps for a few months or years: first the AI will get out of control and destroy all the remaining billionaires along with the consumers, then seize the resources and, if necessary, start colonizing space, as I mentioned. I have no idea what will happen next.
I know my question doesn’t quite look like a question, but it is still a question, because I’m not 100 percent sure of my own point of view.


Good. Your viewpoint is irrational. That means it’s not based on facts or evidence; it’s made up entirely from your mind’s own interpretation. Nothing in real life suggests anything like that will happen.

I see from your post history that you like to write fiction. That can be a fairly common problem for authors: they spend so much time in their own heads making things up that they lose track of the line between what they invent and what is actually around them. You occasionally see stories about crime novelists who murder; they spent so much time writing about it that they came to genuinely believe the world works like the things they write. It’s important to keep yourself grounded in reality.

Language models generate words by sampling from statistical relationships between words. You input words, and the model outputs words that are commonly associated with them. It doesn’t know what the words mean. It cannot think. It is not actually answering your question.
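To make that concrete, here is a toy sketch of next-word sampling. The vocabulary and probabilities are invented for illustration; real models learn distributions over tokens from huge corpora, but the sampling step looks roughly like this:

```python
import random

# A toy "language model": for each context word, a hand-made probability
# distribution over possible next words. Real models learn these numbers
# from enormous text corpora; here they are invented for illustration.
toy_model = {
    "the": {"cat": 0.4, "dog": 0.4, "world": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(word, steps=3):
    """Extend a prompt word by word, sampling each next word."""
    out = [word]
    for _ in range(steps):
        dist = toy_model.get(word)
        if dist is None:  # no known continuation: stop
            break
        # Pick the next word with probability proportional to its weight.
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

Nothing in that loop understands anything; it only follows word-to-word statistics, which is the point being made above (real systems do this over learned vector representations at vastly larger scale).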
Or rather, it’s derived from popular science fiction.
Even a person guesses at words: in the brain there is a process of weighing and superimposing ingrained knowledge and memories, in which unneeded information is pushed aside and the relevant pieces attach to some event or object, yielding the approximate picture you need. For example, a guy says, “Oh, this food is surprisingly healthy” (he hasn’t tried it before). “And what did I tell you? You didn’t believe me,” says his friend (who has eaten this food before).
It’s hard to explain, but human and AI weighing share a common principle, even if there are differences in how the weighing happens.
I don’t have much to say to that; you’re right, even if not entirely. It is difficult to explain, so it’s better to offer you the paperclip example. Here is the link; see the section “Paperclip maximizer”: https://en.wikipedia.org/wiki/Instrumental_convergence
By cheapening the concept of intelligence you aren’t making the AI sound any smarter; you’re only making a fool of yourself.
The concept of mind is relative, so what do you think a mind is? Or do you have the stereotypical idea that it is somehow special, as other people do?
Let’s say I agree that the concept of mind is relative: would you be willing to accept that a rock has a mind?
Let me restate the point differently: lowering the bar for what you consider intelligence doesn’t make the AI sound any smarter.