

Language models generate words probabilistically, based on statistical relationships between words. Meaning: you input words, and the model tries to output words that are commonly associated with them. It doesn't know what the words mean. It cannot think. It is not actually answering your question.
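To make that concrete, here is a toy sketch of the mechanism (the vocabulary and probabilities are invented for illustration; a real model learns billions of such relationships from text). The point is that generation is nothing but weighted random choice of the next word:

```python
import random

# Toy "model": for each context word, a distribution over likely next words.
# These numbers are made up for illustration only.
next_word_probs = {
    "the": {"cat": 0.4, "dog": 0.4, "answer": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
}

def sample_next(word: str) -> str:
    """Pick the next word by weighted random choice, nothing more."""
    dist = next_word_probs.get(word, {"the": 1.0})
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a few words starting from "the": statistically plausible,
# but no understanding is involved at any step.
word = "the"
for _ in range(3):
    word = sample_next(word)
    print(word)
```

Real models condition on far longer contexts and use learned neural networks instead of a lookup table, but the final step is still sampling from a probability distribution.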
Even when a person guesses at words, there is a process in the brain of weighing and layering ingrained knowledge and memories: unnecessary information gets pushed aside, and a suitable piece attaches itself to some event or object, giving you an approximate but workable picture. For example, some guy says, "Oh, this food is surprisingly healthy" (he has never tried it before). "See, what did I tell you, and you didn't believe me," replies his friend (who has eaten this food before).
It's hard to explain, but humans and AI share a common principle of weighing, even if the weighing happens differently in each.
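Here is a loose illustration of what I mean by "weighing" on the AI side (my own toy example with invented scores, not a claim about how brains work): a softmax turns raw relevance scores into weights, so irrelevant items get pushed toward zero, much like the "pushed aside" information above.

```python
import math

def softmax(scores):
    """Turn raw relevance scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented relevance scores for three memories when judging "is this food healthy?"
memories = ["friend said it's healthy", "looks like a vegetable", "random song lyrics"]
scores = [3.0, 1.5, -2.0]

for memory, weight in zip(memories, softmax(scores)):
    # The irrelevant memory ends up with a weight near zero.
    print(f"{weight:.2f}  {memory}")
```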
Good. Your viewpoint is irrational. That means it's not based on facts or evidence; it's made up entirely from your mind's interpretation. Nothing in real life suggests anything like that will happen. I see from your post history that you like to write fiction. That can be a fairly common issue for authors: they spend so much time in their own heads making things up that they lose track of the line between what they're making up and what's around them.
I don't have much to say to that; you're right, though not entirely. It's difficult to explain, so it's better to offer you the paperclip example. Here's the link; look for the section "Paperclip maximizer": https://en.wikipedia.org/wiki/Instrumental_convergence
The concept of mind is relative, so what do you think a mind is? Or do you hold the stereotypical idea that it is somehow special, the way other people do?