Intelligence Hasn't Been Solved: The Unresolved Questions Behind AGI Promises
May 15, 2025

In public discourse, intelligence is often talked about as if it were a solved problem. Claims by the big players in the space that their chatbots have Ph.D.-level intelligence, or that AGI is within reach, obviously only further this idea.

But what should we believe?

Do we have an agreed-upon definition that we could simply hold any candidate AI up against, or that could be made the foundation of a test of its ability? And even supposing we had one and could, should we? Turing famously introduced his imitation game precisely to circumvent the problems that come with defining intelligence, specifically the fear that any definition might ultimately just reflect our own conceptual limitations and anthropocentric biases about what constitutes genuine understanding.

In any case, to someone who has studied the mind extensively, and criteria for the presence of mind in particular, this confidence is startling. More than anything, it appears to reflect the poverty of historical awareness symptomatic of the modern mind.

Just consider this catalogue of “basic” questions posed by the late Peter Lanz:

Does ‘intelligence’ name some entity which underlies and explains certain classes of performances, or is the word ‘intelligence’ only sort of a shorthand-description for ‘being good at a couple of tasks or tests’ (typically those used in IQ tests)? In other words: Is ‘intelligence’ primarily a descriptive or also an explanatorily useful term? Is there really something like intelligence or are there only different individual abilities . . . ? Or should we turn our backs on the noun ‘intelligence’ and focus on the adverb ‘intelligently’, used to characterize certain classes of behaviors? But when is behavior intelligent and when is something done intelligently? Should we primarily look at how something is done (the adverbial use dominates) or should we primarily look at what is done: If the system plays chess, it is intelligent, because the ability to play chess is a manifestation of intelligence? How is intelligence related to successful performance? What is the proper range of application of the concept of intelligence? Only human beings or are animals or machines or cells or assemblies of cells or even species (phyla) also to be included? . . . How does the intraspecies comparative use of the notion of intelligence (student A is more intelligent than student B) relate to the interspecies comparative use of the notion: Velvet monkeys are more intelligent than wildebeests? . . . Are individuals the proper locus of intelligence or are human beings more intelligent than other species because we have language, writing, books and other aids? Do these serve as aids for increasing our abilities without increasing intelligence? Compare: The microscope helps us seeing more and more details without increasing our visual acuity. Hans Moravec suggests in his book Mind Children. The Future of Robots and Human Intelligence . . . : “The edge humans have over other large-brained animals such as elephants and whales may depend less on our individual intelligence than on how effectively that intelligence is coupled to our rapidly evolving, immensely powerful, tool-using industry”. (Lanz 2000, 20)

Which definition should we use as a guide, and why choose that specific one rather than the others that, as presented by their originators, appear to be at least plausible accounts of how the story about intelligence might go?

To me it seems that the chief way in which current discourse goes wrong is in assuming that a behavioral, specifically adverbial, understanding of intelligence could be adequate for ascriptions of intelligence.

That belief is implicit in the idea that something like the chatbots of today could be intelligent. If they act intelligently, they are intelligent, the logic goes. But Ned Block has shown that it is at least theoretically possible for something, such as a machine driven by nothing but a giant lookup table, to act intelligently by processing information in a way that most of us would hesitate to describe, indeed recoil at describing, as a legitimate route to intelligence.
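Block's thought experiment can be caricatured in a few lines of code. The sketch below is illustrative only; the table and names are my own, standing in for the astronomically large table Block imagines, one that maps every possible conversation history to a sensible reply:

```python
# A toy caricature of Block's lookup-table machine: "intelligent" replies
# produced by nothing but retrieval from a pre-stored table. The entries
# here are hypothetical placeholders for a table covering every possible
# conversation history.

RESPONSES = {
    (): "Hello! How can I help you?",
    ("What is 2 + 2?",): "2 + 2 is 4.",
    ("What is 2 + 2?", "Why?"): "Because that is how addition is defined.",
}

def blockhead_reply(history):
    """Return the canned reply for a given conversation history.

    No reasoning, no understanding: purely a table lookup keyed on
    everything the interlocutor has said so far.
    """
    return RESPONSES.get(tuple(history), "I'm not sure what to say.")

print(blockhead_reply([]))
print(blockhead_reply(["What is 2 + 2?"]))
```

Such a system could, in principle, pass any purely behavioral test, which is exactly why behavior alone seems insufficient for ascriptions of intelligence.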

The fundamental issue is that behavioral mimicry is conflated with genuine cognition. Current popular AI systems, however sophisticated their responses may appear, are fundamentally just pattern-matching engines operating on statistical correlations derived from vast datasets. They lack the intentionality, phenomenal consciousness, and genuine comprehension that we intuitively associate with intelligence proper.

Yet the proclamations of their creators suggest we have somehow transcended these deep philosophical puzzles through engineering prowess alone. ’Tis not so.

🔸