agree, hence the need to not anthropomorphise and remember there will be no AGI, just useful tools: https://medium.com/@fsndzomga/there-will-be-no-agi-d9be9af44...
I don't know for sure whether there will be AGI eventually. But broadly speaking I agree with the sentiment that there will be "useful tools". And the AIs we have today are well into the "useful tool" category and display behavior that we would classify as "intelligent" if we knew nothing about the entity generating it. I don't see how anybody can claim that isn't intelligence (while acknowledging that these systems still fall short of fully matching human intelligence in many ways).
It's like some people are committing a sort of "fallacy of the excluded middle" and making this overly binary: "it's either fully intelligent and completely equivalent to a human, or it's not intelligent at all". That ignores all the middle ground between those extremes.
To take your analogy of the plane, we say the plane flies. We don't say the plane flies like a bird because that wouldn't be accurate. Similarly, we should say, for example, that LLMs summarize, generate text, retrieve potential answers to questions, and generate code—without adding the "like human" part, which only adds confusion. A calculator performs calculations, but it doesn't process calculations like humans do. We should focus on the utility and remember that these are just tools, not sentient beings.
How do you know AGI will never exist? Given that we humans exist, general intelligence is clearly not theoretically impossible.