But we don’t have an agreed-upon definition for intelligence either:
- The ability to acquire, understand, and use knowledge.
- The ability to learn or understand or to deal with new or trying situations.
- The ability to apply knowledge to manipulate one’s environment or to think abstractly, as measured by objective criteria (such as tests).
- The act of understanding.
- The ability to learn, understand, and make judgments or have opinions that are based on reason.
- The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
I see AI as a term similar to “plants.” When I hear this complaint it sounds to me like someone asking how strawberries and sequoia trees can both be plants when they couldn’t be further apart. Well yeah, but that’s why we have more specific terms when we’re referring to a particular plant - just like with AI. Plants and AI are both parent categories that cover a wide range of subcategories.
I think the issue is that when people hear “AI,” their minds immediately jump to the sci-fi AI systems depicted as smart or smarter than humans. They then see the stupid mistakes LLMs make and reasonably conclude these systems are nothing alike, so LLMs don’t count as AI in their minds.
However, the AI systems in sci-fi aren’t just intelligent - they’re generally intelligent. That’s what LLMs lack.
The way I see it, there are levels to intelligence. A chess bot is a narrowly intelligent system. It’s great at one thing but can’t do anything else. Then there’s Artificial General Intelligence (AGI), which is basically human-level intelligence. The next step up is Artificial Superintelligence (ASI) - a generally intelligent system that’s superhuman across the whole range of cognitive tasks, unlike a chess bot that’s only “superhuman” at chess.
I’d say LLMs are somewhere between narrow intelligence and AGI. They can clearly do more than just generate language, but not to the extent humans can, so I wouldn’t call them generally intelligent. At least not yet.
And yeah, I don’t think sentience necessarily needs to come along for the ride. It might, but it’s not obvious to me that one couldn’t exist without the other. It’s conceivable that a system could be superintelligent even though it doesn’t feel like anything to be that system.
> But we don’t have an agreed-upon definition for intelligence either:

> I see AI as a term similar to “plants.” When I hear this complaint it sounds to me like someone asking how strawberries and sequoia trees can both be plants when they couldn’t be further apart. Well yeah, but that’s why we have more specific terms when we’re referring to a particular plant - just like with AI. Plants and AI are both parent categories that cover a wide range of subcategories.
Respect for you, good sir! A good point well made.
It’s just my interpretation or current understanding of intelligence. I think I was accidentally adding sentience and motivation to it.
So your original point stands.
Thank you.