The Myth of Artificial Intelligence and Why It Persists

A still frame from “I, Robot” (2004, dir. Alex Proyas)



No digital computing systems or digital technology, no matter how sophisticated or tricked-out with high-tech bells and whistles, will ever be able to equal or exceed the essentially embodied innate mental capacities or powers of rational human animals[i]—including, of course, the readers of this very sentence. Not even in principle can such systems or technology match or surpass our capacities or powers. To be sure, digital computing systems or digital technology can carry out certain operations much faster and more accurately than we can. But that’s not a fact about the nature, scope, and limits of our mental capacities or powers, but rather only a fact about the applications of those powers to certain mechanical tasks, and no more philosophically exciting or significant than the quotidian fact that we can build machines that move faster than we do, lift heavier weights than we do, or make more accurate measurements than we do. In other words, digital computing systems and digital technology are artificial, but not intelligent in the sense in which we’re intelligent; and to that extent, the term “artificial intelligence” is simply an oxymoron.

The widely-held yet false contrary belief—namely, that digital computing systems or digital technology can equal or exceed the essentially embodied innate mental capacities or powers of rational human animals—is what I call the myth of artificial intelligence, aka the myth of AI. Digital computing systems or digital technology that can supposedly equal or exceed rational human animal intelligence are sometimes also called “Artificial General Intelligence” or AGI. Nevertheless, AGI is no more really possible than is Sonny the NS-5 prototype robot in the 2004 movie, I, Robot, pictured in the image at the top of this essay. Science fiction and futuristic fantasy are just fine as artistic genres—provided that we don’t also start to believe them and act on that belief. But the myth of AI is pernicious, since it leads us not only seriously to depreciate and underestimate our own mental capacities or powers, but also, by means of our excessive use of and reliance on digital technology, seriously to neglect and even impair our own mental capacities or powers—this is particularly true in the case of the recent roll-out of Large Language Models (LLMs) or chatbots like ChatGPT: the invasion of the mind-snatchers (Hanna, 2023a, 2023b)—and, to the extent that we knowingly or unknowingly disseminate and perpetuate the myth of AI, also seriously to misapply and misuse our own mental capacities or powers. That all being so, the only remaining really important philosophical question is: why does the myth of artificial intelligence persist? I’ll offer an answer to that hard question at the end of this essay. But before we get there, let me now rehearse, briefly but in a step-by-step way, precisely why the myth of AI is a myth, that is, why it’s a widely-held yet false belief.

The myth of AI can be explicitly formulated as a two-part philosophical thesis, the strong thesis of artificial intelligence, aka strong AI, which says

(i) that rational human intelligence can be explanatorily and ontologically reduced to Turing-computable algorithms and the operations of digital computers or digital technology (aka the thesis of formal mechanism, as it’s applied to rational human intelligence), and

(ii) that it’s really possible to build digital computing systems or digital technology that are counterpart models of rational human intelligence, such that these systems or technology not only exactly reproduce (aka simulate) all the actual performances of rational human intelligence, but also outperform or surpass it (aka the counterpart thesis) (see, e.g., Block, 1980: part 3; Kim, 2011: ch. 6).

I’ll now describe nine distinct arguments against strong AI.

The first two arguments belong to what I’ll call the old-school, phenomenological critique of strong AI—as originally developed by, for example, Hubert Dreyfus and John Searle in the 1960s, 70s, and 80s.

First, according to Searle’s chinese room argument, even assuming that a digital computing system or digital technology passes The Turing Test (Turing, 1950), nevertheless it cannot have consciousness (or subjective experience) and intentionality (or mental directedness); but all human thinking of any kind is conscious and intentional and/or self-conscious (i.e., involving a second-order consciousness of first-order consciousness), including all of its characteristic rational achievements; hence digital computing systems or digital technology cannot, even in principle, equal or exceed the characteristic achievements of conscious intentional human thinking (Searle, 1980a, 1980b, 1984).

Second, according to Dreyfus’s heideggerian argument, digital computing systems and digital technology cannot engage in unconscious, essentially non-conceptual, non-rule-based, non-inferential, pre-logical, context-sensitive, know-how-driven, skillful, intuitional, sensible activities; but at least some human thinking is of this specific kind, including all the characteristic achievements of unconscious affective/emotional, perceptual, and practical, pre-rational human thinking; hence AI cannot, even in principle, equal or exceed the characteristic achievements of unconscious pre-rational human thinking (Dreyfus, 1972, 1979, 1992; Dreyfus and Dreyfus, 1986).

Clearly, however, there’s an inconsistency between these two phenomenological lines of argument, since those who use the first line of argument—the “Searlians”—hold that all human thinking, including its characteristic rational achievements, is conscious and intentional, whereas those who use the second line of argument—the “Dreyfusards” (yes, it’s a pun)—hold that at least some of the characteristic achievements of human thinking, including its characteristic pre-rational achievements, are unconscious. This inconsistency, in turn, has allowed defenders of strong AI to drive a critical wedge into this unfortunate gap between the Searlians and the Dreyfusards (i) by endlessly postponing solving the problem of consciousness, insofar as the strong AI defenders treat it as a “hard” (Chalmers, 1996) and perhaps even insoluble and “mysterian” (McGinn, 1989) problem, but in any case an open problem, and (ii) by also, at the very same time, using the method of reverse-engineering, industriously and industrially designing, building, marketing, and rolling out new and extremely lucrative forms of digital technology, especially those based on “machine learning” systems—neural networks—and robotics, that more and more closely behaviorally mimic the characteristic achievements of unconscious human thinking.

My new-school and neo-organicist (Hanna and Paans, 2020; Torday, Miller Jr, and Hanna, 2020) critique of strong AI, however, closes this unfortunate gap in the old, phenomenological Searlian-Dreyfusard critique of AI, by holding, first, that consciousness and intentionality are necessarily and completely (i.e., essentially) embodied, yet neither logically nor naturally/nomologically dependent on or reducible to fundamentally material or physical properties (I call this The Essential Embodiment Theory), and, second, by rejecting the very idea of unconscious human thinking, and by asserting that all of the characteristic pre-rational achievements of human thinking, and indeed all mental activities of any kind, are also inherently conscious, even if only pre-reflectively, non-self-consciously, and essentially non-conceptually conscious (aka “first-order consciousness”): I call this The Deep Consciousness Thesis (Hanna and Maiese, 2009; Hanna, 2011, 2015: esp. section 2.8). If The Essential Embodiment Theory and The Deep Consciousness Thesis are both true, then defenders of strong AI cannot justifiably endlessly postpone solving the problem of consciousness by treating it as a hard or mysterian and in any case open problem, while in the meantime using reverse engineering to create and sell—thereby reaping immense profits—machine-learning-based or robot-based simulations of the characteristic achievements of rational human minded animal thinking, marketing and disseminating them as if they somehow equalled or surpassed the characteristic achievements of human thinking. For essentially embodied deep consciousness constitutes a necessary and indeed essential component of the characteristic pre-rational achievements of human thinking, as well as being a necessary and indeed essential component of its characteristic rational achievements.

For these reasons, I think that contemporary and future critics of strong AI should rely instead on the following seven new-school and neo-organicist arguments against it.

First, according to the inadequacy of the turing test argument, passing The Turing Test (which relies on human judgments about whether a series of written outputs from a digital computing system inside one room manifests intelligence or thinking comparable to what’s manifested by the written outputs of a rational human interlocutor inside another room) is clearly insufficient to show that a digital computing system is intelligent in the sense in which we’re intelligent, since human judges in general are very gullible and therefore easily fooled on all sorts of recognition tests having nothing whatsoever to do with AI—for example, supposedly contacting spirits at séances and other faked mystical experiences—and this gullibility smoothly transfers to The Turing Test; hence the criterion of intelligence presupposed by strong AI is inadequate to capture the presence of intelligence in the sense in which we’re intelligent (Hanna, 2023a).

Second, according to the organicity argument, rational human mindedness in all its modes, including intelligence and free agency, requires essential embodiment in organic and indeed organismic processes (Hanna and Maiese, 2009; Hanna, 2015, 2018), so, since all digital computing systems and digital technology are mechanical and not organic or organismic, they can’t be intelligent in the sense in which we’re intelligent.[ii]

Third, according to the readability argument, there are some seemingly illegible, meaningless, or nonsensical texts that no digital computing system or digital technology can parse or read, yet that rational human animals can indeed parse and read, hence digital computing systems and digital technology cannot be intelligent in the sense in which we’re intelligent (Hanna, 2023c).

Fourth, according to the it’s-all-done-with-mirrors argument, there are some well-specified sets of circumstances in which digital computing systems or digital technology cannot discriminate between left-handed and right-handed but otherwise identical (i.e., enantiomorphic) counterparts, yet rational human animals can indeed discriminate between them, hence digital computing systems and digital technology cannot be intelligent in the sense in which we’re intelligent (Hanna, 2023d).
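
To illustrate one aspect of this point (a supplementary sketch, not Hanna’s own argument): reflections are isometries, so purely intrinsic, relational data underdetermine handedness. The minimal Python sketch below assumes a hypothetical machine that receives only the pairwise distances between labeled points; a chiral configuration and its mirror-image counterpart then generate exactly the same data:

```python
# A toy illustration (not Hanna's argument itself): reflections preserve
# all pairwise distances, so a system given only intrinsic metric data
# cannot distinguish a chiral configuration from its mirror image.
import itertools
import math

def pairwise_distances(points):
    """Return the distances between all labeled point pairs, in label order."""
    return [math.dist(p, q) for p, q in itertools.combinations(points, 2)]

# A chiral configuration of four labeled points (an irregular tetrahedron).
right_handed = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (0, 0, 3)]
# Its enantiomorph: reflect through the plane x = 0.
left_handed = [(-x, y, z) for (x, y, z) in right_handed]

print(pairwise_distances(right_handed) == pairwise_distances(left_handed))
# True: the intrinsic distance data are identical, although no rigid
# motion in 3-space maps one labeled configuration onto the other.
```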

Fifth, according to the uncomputable functions argument, digital computing systems or digital technology can’t carry out functions or operations in the logico-mathematical sense over domains containing objects or other items that are non-denumerably (i.e., uncountably) infinite, vague, holistic, or entangled, or for which the rule-following problem holds, including the halting problem, yet rational human animals can indeed perform these very functions or operations, hence digital computing systems and digital technology cannot be intelligent in the sense in which we’re intelligent (Hanna, 2023e).
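
Among the items on that list, the halting problem is the most classical, and the reason it’s uncomputable can be exhibited in a few lines. Here is a minimal Python sketch of Turing’s diagonal argument, in which the function halts is a hypothetical decider assumed only for the reductio:

```python
# A minimal sketch of Turing's diagonal argument for the halting problem.
# `halts` is HYPOTHETICAL: assumed, for reductio, to be a total,
# always-correct decision procedure; no such procedure can be written.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("assumed for reductio only")

def diagonal(program):
    """Halt iff `program`, run on its own source, would not halt."""
    if halts(program, program):
        while True:  # loop forever
            pass
    return  # halt immediately

# The reductio, applied to diagonal(diagonal):
# - if halts(diagonal, diagonal) is True, then diagonal(diagonal) loops;
# - if it is False, then diagonal(diagonal) halts.
# Either way the hypothetical decider is wrong about diagonal(diagonal),
# so the halting function, though well-defined, is not Turing-computable.
```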

Sixth, according to the incompleteness argument, as a specific case of the uncomputable functions argument, no digital computing system or digital technology can carry out functions or operations in the logico-mathematical sense beyond the formal limitations determined by Kurt Gödel’s two incompleteness theorems, yet rational human animals can indeed perform these very functions or operations beyond the limits of incompleteness, hence digital computing systems and digital technology cannot be intelligent in the sense in which we’re intelligent (Gödel, 1931/1967; Keller, 2023).
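
For reference, here is one standard modern formulation of the two theorems, given as a brief LaTeX sketch, for any formal system F that is consistent, effectively axiomatized, and strong enough to interpret elementary arithmetic:

```latex
% One standard modern statement of Gödel's two incompleteness theorems
% (requires amsmath and amssymb), for any formal system F that is
% consistent, effectively axiomatized, and interprets elementary arithmetic.

% First incompleteness theorem: there is a sentence G_F of F such that
\[
  F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F ,
\]
% i.e., F is incomplete. Second incompleteness theorem: where Cons(F)
% is the arithmetized statement that F is consistent,
\[
  F \nvdash \mathrm{Cons}(F).
\]
```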

Seventh, and finally, according to the babbage’s principle argument, which is a maximally wide-scope generalization of the classical garbage-in, garbage-out principle, aka GIGO, no digital computing system or digital technology can perform categorical improvements or upgrades of the intrinsic specific character or quality of the informational inputs, premises, or materials with which it’s supplied—for example, from meaninglessness to meaningfulness, or from falsity to truth—yet rational human animals can creatively transform these informational inputs, premises, or other materials into categorically improved or upgraded informational outputs, conclusions, or other products, in ten different ways, hence digital computing systems and digital technology cannot be intelligent in the sense in which we’re intelligent, especially as regards our authentic human creativity (Hanna, 2023f, 2023g).
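
As a toy numerical illustration of the underlying GIGO principle (the premise that babbage’s principle generalizes, not Hanna’s full argument), here is a short Python sketch with hypothetical example data: however careful the downstream processing, a false input yields a false output:

```python
# A toy illustration of garbage-in, garbage-out (GIGO): processing
# preserves, at best, the quality of its inputs, and cannot by itself
# upgrade falsity into truth. The readings below are hypothetical.

def mean(readings):
    """Arbitrarily careful processing: still only as good as its inputs."""
    return sum(readings) / len(readings)

accurate_readings = [19.8, 20.1, 20.0]                      # true measurements (deg C)
garbage_readings = [r + 100.0 for r in accurate_readings]   # miscalibrated sensor

print(mean(accurate_readings))  # ~19.97: a true conclusion
print(mean(garbage_readings))   # ~119.97: garbage in, garbage out
```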

So, to summarize, there are at least nine old-school or new-school arguments against strong AI: 1. the chinese room argument, 2. the heideggerian argument, 3. the inadequacy of the turing test argument, 4. the organicity argument, 5. the readability argument, 6. the it’s-all-done-with-mirrors argument, 7. the uncomputable functions argument, 8. the incompleteness argument, and 9. the babbage’s principle argument. Moreover, at least seven of these—i.e., arguments 3. through 9.—are not subject to the problems faced by the old-school chinese room argument and heideggerian argument, hence we can confidently conclude that strong AI is false.

By way of concluding, I’m now in a position to re-raise the question I posed at the outset of this essay: given the seven new-school and neo-organicist arguments against strong AI that I just rehearsed—which, when considered as a single collective package, surely have knockdown persuasive rational force—and in view of the further fact that the myth of AI has the pernicious consequences I also mentioned at the outset, why does it persist?

I think that the persistence of the myth of AI can be explained by a combination of six essentially ideological causal factors:

(i) a dogmatic or at least irresponsibly uncritical commitment to the false doctrine of Cartesian dualism, whether substance dualism or property dualism (Hanna and Maiese, 2009),

(ii) a dogmatic or at least irresponsibly uncritical commitment to the fantasy of post-humanist or trans-humanist spiritualism (Hanna, 2022, 2023h),

(iii) a dogmatic or at least irresponsibly uncritical commitment to the false doctrine of intellectualism about the nature of rational human animals and rational human cognition (Hanna, 2015),

(iv) a dogmatic or at least irresponsibly uncritical commitment to false (reductive or non-reductive) materialist/physicalist views about the rational human mind, especially including computational functionalism (Hanna and Maiese, 2009),

(v) more generally, a dogmatic or at least irresponsibly uncritical commitment to the false mechanistic worldview (Hanna and Paans, 2020), and finally

(vi) the hegemony of what I call the military-industrial-digital complex, which amasses and reaps immense wealth and political power precisely by means of effectively and relentlessly disseminating and perpetuating the myth of AI, while at the same time hypocritically issuing public warnings about the dangers of runaway AI and calls to “pause giant AI experiments” for six months (Hanna, 2023b).

The first five factors are classically philosophical factors, so they’re at least in principle open to critical rational argumentation, refutation, and correction. But the same isn’t the case with respect to the sixth factor, which is essentially social-institutional and political in nature. Sadly, however, I think it’s more than merely reasonable to hold that the hegemony of the military-industrial-digital complex is the principal cause of the persistence of the myth of artificial intelligence. Nevertheless, radical social-institutional and political change for the better, or even the best, is in fact really possible, by means of what Michelle Maiese and I have called the mind-body politic and the enactive-transformative principle (Maiese and Hanna, 2019). So to end this essay on an upbeat note, there’s at least some rational hope for debunking the myth of artificial intelligence, by devolving and enactively transforming the military-industrial-digital complex.[iii]

NOTES

[i] Or of rational non-human animals, if there are any. For the record, the basic essentially embodied innate mental capacities or powers are: (i) consciousness, i.e., subjective experience, (ii) self-consciousness, i.e., consciousness of one’s own consciousness (second-order consciousness), (iii) caring, i.e., desiring, emoting, or feeling, (iv) sensible cognition, i.e., sense-perception, memory, or imagination, (v) intellectual cognition, i.e., conceptualizing, believing, judging, or inferring, (vi) volition, i.e., deciding, choosing, or willing, and (vii) free agency, i.e., free will and practical agency. In human animals, the unified set of these capacities constitutes our rational human mindedness, which is the same as our human real personhood (Hanna, 2018).

[ii] Organic systems, including organismic systems, are complex dynamic systems. Correspondingly, Jobst Landgrebe and Barry Smith have recently formulated and defended a version of the organicity argument that, in its most compact version, runs as follows:

1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.

2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. (Landgrebe and Smith, 2022)

From these two premises it follows directly that human intelligence cannot be modelled in, or emulated by, a digital computer.

[iii] I’m grateful to Elma Berisha, Scott Heftler, Michelle Maiese, and Otto Paans for thought-provoking conversations or correspondence on and around the main topics of this essay.

REFERENCES

(Block, 1980). Block, N. (ed.), Readings in the Philosophy of Psychology. 2 vols., Cambridge, MA: Harvard Univ. Press. Vol. 1.

(Chalmers, 1996). Chalmers, D., The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford Univ. Press.

(Dreyfus, 1972). Dreyfus, H. What Computers Can’t Do. Cambridge MA: MIT Press.

(Dreyfus, 1979). Dreyfus, H. What Computers Can’t Do. 2nd edn., Cambridge MA: MIT Press.

(Dreyfus, 1992). Dreyfus, H. What Computers Still Can’t Do. Cambridge MA: MIT Press.

(Dreyfus and Dreyfus, 1986). Dreyfus, H. and Dreyfus, S. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford: Blackwell.

(Gödel, 1931/1967). Gödel, K. “On Formally Undecidable Propositions of Principia Mathematica and Related Systems.” In J. van Heijenoort (ed.), From Frege to Gödel. Cambridge MA: Harvard Univ. Press. Pp. 596-617.

(Hanna, 2011). Hanna, R. “Minding the Body.” Philosophical Topics 39: 15-40. Available online in preview at URL = <https://www.academia.edu/4458670/Minding_the_Body>.

(Hanna, 2015). Hanna, R. Cognition, Content, and the A Priori: A Study in the Philosophy of Mind and Knowledge. THE RATIONAL HUMAN CONDITION, Vol. 5. Oxford: Oxford Univ. Press. Also available online in preview HERE.

(Hanna, 2018). Hanna, R. Deep Freedom and Real Persons: A Study in Metaphysics. THE RATIONAL HUMAN CONDITION, Vol. 2. New York: Nova Science. Available online in preview HERE.

(Hanna, 2022). Hanna, R. “Turing, Strong AI, and The Fantasy of Transhumanist Spiritualism.” Unpublished MS. Available online HERE.

(Hanna, 2023a). Hanna, R. “How and Why ChatGPT Failed The Turing Test.” Unpublished MS. Available online at URL = <https://www.academia.edu/94870578/How_and_Why_ChatGPT_Failed_The_Turing_Test_January_2023_version_>.

(Hanna, 2023b). Hanna, R. “Hinton & Me: Don’t Pause Giant AI Experiments, Ban Them.” Unpublished MS. Available online HERE.

(Hanna, 2023c). Hanna, R. “Are There Some Legible Texts That Even The World’s Most Sophisticated Robot Can’t Read?” Unpublished MS. Available online HERE.

(Hanna, 2023d). Hanna, R. “It’s All Done With Mirrors: A New Argument That Strong AI is Impossible.” Unpublished MS. Available online HERE.

(Hanna, 2023e). Hanna, R. “How and Why to Perform Uncomputable Functions.” Unpublished MS. Available online at URL = <https://www.academia.edu/87165326/How_and_Why_to_Perform_Uncomputable_Functions_March_2023_version_>.

(Hanna, 2023f). Hanna, R. “Babbage-In, Babbage-Out: On Babbage’s Principle.” Unpublished MS. Available online at URL = <https://www.academia.edu/101462742/Babbage_In_Babbage_Out_On_Babbages_Principle_May_2023_version_>.

(Hanna, 2023g). Hanna, R. “Necessary and Sufficient Conditions for Authentic Human Creativity.” Unpublished MS. Available online HERE.

(Hanna, 2023h). Hanna, R. “Essentially Embodied Kantian Selves and The Fantasy of Transhuman Selves.” Studies in Transcendental Philosophy 3. Available online at URL = <https://ras.jes.su/transcendental/s271326680021060-6-1-en>, and also in preview HERE.

(Hanna and Maiese, 2009). Hanna, R. and Maiese, M., Embodied Minds in Action. Oxford: Oxford Univ. Press. Available online in preview HERE.

(Hanna and Paans, 2020). Hanna, R. and Paans, O. “This is the Way the World Ends: A Philosophy of Civilization Since 1900, and A Philosophy of the Future.” Cosmos & History 16, 2 (2020): 1-53. Available online at URL = <https://cosmosandhistory.org/index.php/journal/article/view/865>.

(Keller, 2023). Keller, A. “Artificial, But Not Intelligent: A Critical Analysis of AI and AGI.” Against Professional Philosophy. 5 March. Available online at URL = <https://againstprofphil.org/2023/03/05/artificial-but-not-intelligent-a-critical-analysis-of-ai-and-agi/>.

(Kim, 2011). Kim, J. Philosophy of Mind. 3rd edn., Boulder, CO: Westview.

(Landgrebe and Smith, 2022). Landgrebe, J. and Smith, B. Why Machines Will Never Rule the World: Artificial Intelligence without Fear. London: Routledge.

(Maiese and Hanna, 2019). Maiese, M. and Hanna, R. The Mind-Body Politic. London: Palgrave Macmillan. Available online in preview HERE.

(McGinn, 1989). McGinn, C. “Can We Solve the Mind-Body Problem?” Mind 98: 349-366.

(Searle, 1980a). Searle, J. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3: 417-424.

(Searle, 1980b). Searle, J. “Intrinsic Intentionality.” Behavioral and Brain Sciences 3: 450-456.

(Searle, 1984). Searle, J. Minds, Brains, and Science. Cambridge, MA: Harvard Univ. Press.

(Torday, Miller Jr, and Hanna, 2020). Torday, J., Miller Jr, W.B., and Hanna, R., “Singularity, Life, and Mind: New Wave Organicism.” In J. Torday and W.B. Miller Jr, The Singularity of Nature. Cambridge: Royal Society of Chemistry. Ch. 20. Pp. 206-246.

(Turing, 1950). Turing, A. “Computing Machinery and Intelligence.” Mind 59: 433–460.

