Organoid Intelligence? Just Say No.

[Image: a brain in a vat, wired to a computer (Wikipedia, 2023a)]


Although the artificial intelligence movement, aka AI, has been around for more than 70 years (see, e.g., Turing, 1950), the recent furor about AI has been generated primarily by the much-publicized invasion of the chatbots: Google’s LaMDA in 2021 and Bard in early 2023, Microsoft’s Sydney in 2021, and especially OpenAI’s ChatGPT in late 2022 (see, e.g., Hanna, 2023a; Chomsky, Roberts, and Watumull, 2023).

But just when you thought that you were beginning to wrap your head around good old AI and the new chatbots, and could sleep comfortably at night dreaming of electric sheep (Dick, 1968), a new, well-funded, and mind-blowingly—or brain-blowingly—revolutionary multidisciplinary research program in biological computing called organoid intelligence, aka OI, was announced in Frontiers in Science in late February 2023:

Recent advances in human stem cell-derived brain organoids promise to replicate critical molecular and cellular aspects of learning and memory and possibly aspects of cognition in vitro. Coining the term “organoid intelligence” (OI) to encompass these developments, we present a collaborative program to implement the vision of a multidisciplinary field of OI. This aims to establish OI as a form of genuine biological computing that harnesses brain organoids using scientific and bioengineering advances in an ethically responsible manner. Standardized, 3D, myelinated brain organoids can now be produced with high cell density and enriched levels of glial cells and gene expression critical for learning. Integrated microfluidic perfusion systems can support scalable and durable culturing, and spatiotemporal chemical signaling. Novel 3D microelectrode arrays permit high-resolution spatiotemporal electrophysiological signaling and recording to explore the capacity of brain organoids to recapitulate the molecular mechanisms of learning and memory formation and, ultimately, their computational potential. Technologies that could enable novel biocomputing models via stimulus-response training and organoid-computer interfaces are in development. We envisage complex, networked interfaces whereby brain organoids are connected with real-world sensors and output devices, and ultimately with each other and with sensory organ organoids (e.g. retinal organoids), and are trained using biofeedback, big-data warehousing, and machine learning methods….The many possible applications of this research urge the strategic development of OI as a scientific discipline. We anticipate OI-based biocomputing systems to allow faster decisionmaking, continuous learning during tasks, and greater energy and data efficiency. Furthermore, the development of “intelligence-in-a-dish” could help elucidate the pathophysiology of devastating developmental and degenerative diseases (such as dementia), potentially aiding the identification of novel therapeutic approaches to address major global unmet needs. (Smirnova et al., 2023: p. 2, boldfacing added)

The shudder-quoted neologism, “intelligence-in-a-dish,” will remind philosophers and other philosophically-minded people of Hilary Putnam’s well-known “brain-in-a-vat,” aka BIV, thought-experiment in Reason, Truth, and History (Putnam, 1981: pp. 1-21). And indeed, the powerful thought-shaping image (Hanna and Paans, 2021) of a living human brain floating listlessly in a dish or vat yet also fully interfaced with sophisticated digital technology—as per the image at the top of this essay—is essentially the same for OI and BIV. But there’s an important and indeed categorical difference between the two scenarios. According to Putnam’s BIV scenario, we’re asked to imagine the skeptical possibility that, unbeknownst to us, we as we currently are, as rational human minded animals, i.e., human real persons, might be nothing but brains in a vat, and also, like the fictional scenario presented 18 years later in the movie The Matrix (see, e.g., Wikipedia, 2023b), that nothing really is what it seems to us to be, precisely because it’s nothing but, as we would say nowadays, a “virtual reality,” or a computer-generated living dream created by evil computer scientists, the 20th-century equivalents of Descartes’s evil demon. OI is also superficially similar to, yet categorically distinct from, the fictional scenario presented in the 1982 movie Blade Runner—itself based on Philip K. Dick’s brilliant science fiction novel Do Androids Dream of Electric Sheep? (Dick, 1968)—which describes the “Nexus-6 replicants,” artificially manufactured and marketed by the “Tyrell Corporation.” The replicants of this particular generation are relatively short-lived biophysical counterparts of human real persons, i.e., bio-engineered, synthetic humanoid real persons, possessing all of the innate capacities constituting our rational human mindedness, including consciousness and self-consciousness, yet they are used for industrial and other advanced capitalist purposes, treated as slaves, and permissibly “retired” at will by hired assassins called “blade runners” (see, e.g., Wikipedia, 2023c).

To summarize: BIVs are just us, supposedly metaphysically reduced to our brains, and filled to the brim with false beliefs about the world and ourselves; and Nexus-6 replicants, although not precisely identical to us, i.e., rational minded members of the species Homo sapiens, are nevertheless people just like us, who are also being enslaved and oppressed by us, merely because they’ve been produced fully-grown in factories rather than, like us, created as fertilized eggs, gestated as fetuses, born as neonates from wombs, and then nurtured through all developmental phases from infancy to adulthood, in the world. But in categorical contradistinction to BIVs and Nexus-6 replicants alike, OI is the scientific research program scenario for growing new brain organoids from human brain cells, then synchronously embedding those cultured organoids inside cutting-edge digital technology, including chatbots and robots, and then, using machine-learning programs, training up those computational systems to the level of contemporary chatbots or beyond. As the scientists developing the OI research program put it:

We envision using biofeedback to systematically train organoids with increasingly complex sensory inputs and output opportunities—interfacing the brain organoids with computers, sensors, and machine interfaces to facilitate supervised and unsupervised learning. We use the term “OI” for this approach to stress its complementarity to AI—where computers aim to perform tasks done by brains, often by modeling our understanding of learning. However, while AI aims to make computers more brain-like, OI research will explore how a 3D brain cell culture can be made more computer-like. (Smirnova et al., 2023: p. 4, boldfacing added)

Now, from a philosophical point of view, especially including metaphysics and ethics, what should we say about OI?

Here are my basic metaphysical commitments. By rational human mindedness or human intelligence I mean the essentially embodied (i.e., necessarily and completely embodied in a suitably complex living animal organism), unified set of basic innate cognitive, affective, and practical capacities that are present in all and only those human animals possessing the essentially embodied neurobiological basis of those capacities, namely: (i) consciousness, i.e., subjective experience, (ii) self-consciousness, i.e., consciousness of one’s own consciousness, second-order consciousness, (iii) caring, i.e., desiring, emoting, or feeling, (iv) sensible cognition, i.e., sense-perception, memory, or imagination, (v) intellectual cognition, i.e., conceptualizing, believing, judging, or inferring, (vi) volition, i.e., deciding, choosing, or willing, and (vii) free agency, i.e., free will and practical agency (Hanna and Maiese, 2009; Hanna, 2011, 2015, 2018a). This unified set of cognitive, affective, and practical capacities constitutes our human real personhood, which in turn is the metaphysical ground of our human dignity (Hanna, 2018b, 2023b, 2023c).

Metaphysically speaking, then, I’m committed to the view that rational human mindedness or human intelligence must be alive and organismic; hence since no machine is alive and organismic, and since all computers are Turing machines, then no computer can be rationally minded or genuinely intelligent in the sense in which we’re intelligent. I’m also committed to the view that there are several strong purely logical and mathematical, topological, and psycholinguistic reasons why no computer, no matter how sophisticated, can ever be intelligent and think in the sense that we are intelligent and think (Hanna, 2006, 2023d, 2023e; Keller, 2023; Landgrebe and Smith, 2022). And I’m also committed to the view that rational mindedness, or genuine intelligence, is necessarily and completely embodied in suitably complex living animal organisms, whether human or non-human. So my view is not speciesist: contrary to metaphysical or moral speciesism, there can also be non-human real persons with dignity (Hanna, 2018a, 2018b).

As we’ve seen, the OI research program seeks to create computational systems that include embedded, synchronous organoids grown from human brain cells. But in view of my metaphysical commitments, the following three claims seem self-evidently true to me. First, a digital computer that merely includes some organic parts is still a machine, hence it cannot be alive, and therefore no actual or possible OI can ever be rationally minded or genuinely intelligent in the sense in which we’re rationally minded and intelligent. Second, logically and mathematically speaking, a computer is still nothing but a computer, even if it contains some organic parts: hence no actual or possible OI can ever be rationally minded or genuinely intelligent in the way we are. And third, since the organic parts of an OI are only parts of, or at most the whole of, a human brain, and do not constitute a complete living, rationally minded or genuinely intelligent animal, whether human or non-human, no actual or possible OI is ever completely embodied, and therefore no actual or possible OI has rational mindedness or genuine intelligence like ours. In short, just as every actual or possible AI is artificial, but not intelligent (see, e.g., Keller, 2023), so too every actual or possible OI will be organoid, but not intelligent.

What about the ethical implications of OI? Here’s what the scientists say:

[M]oral attitudes toward OI may depend less on epistemological concerns mentioned above, such as the role of specific cognitive capacities in assessments of moral status, and more on ontological arguments of what constitutes a human being. Perceptions of (re)creating ‘human-like’ entities in the lab are likely to evoke concerns about infringing on human dignity that could reflect secular or theological beliefs about the ‘essential’ nature of the human being. (Smirnova et al., 2023: p. 15)

Ethically speaking, I’m committed to what I call dignitarian neo-Luddism with respect to digital technology, which says that not all digital technology is bad and wrong,[i] but instead all and only the digital technology that harms and oppresses ordinary people (i.e., people other than digital technocrats), by either failing to respect our human dignity sufficiently or by outright violating our human dignity, is bad and wrong, and therefore all and only this bad and wrong digital technology should be rejected but not—except in extreme cases of digital technology whose coercive use is actually violently harming and oppressing ordinary people, for example, digitally-driven weapons or weapons-systems being used for mass destruction or mass murder—destroyed, rather only either simply refused, non-violently dismantled, or radically transformed into its moral opposite. (Hanna, 2023f: p. 5)

Since my account of the metaphysics of human dignity explicitly grounds dignity on human real personhood, which in turn is constituted by “the role[s] of specific cognitive capacities in assessments of moral status” (Smirnova et al., 2023: p. 15), my account does not depend on “ontological arguments of what constitutes a human being,” of the sort according to which

[p]erceptions of (re)creating ‘human-like’ entities in the lab are likely to evoke concerns about infringing on human dignity that could reflect secular or theological beliefs about the ‘essential’ nature of the human being. (Smirnova et al., 2023: p. 15)

Therefore, since according to my account, no OI can ever, even in principle, be a human real person with dignity, then necessarily, the OI research program, in and of itself, will never involve “infringing on human dignity” (Smirnova et al., 2023: p. 15). Of course, if the OI research team used coercive means to obtain brain cells from some human real persons—say, at the point of a gun, or by means of an immoral social-institutional system for trafficking in human brain cells—then that would violate human dignity; but it wouldn’t flow from the OI research program per se.

Nevertheless, just like LaMDA, Bard, Sydney, ChatGPT, and all the other chatbots-to-be, unless we universally enact dignitarian neo-Luddism with respect to digital technology, OI will enable and indeed mandate our excessive use of, and addiction to, it. In turn, this will systematically undermine our innate capacities for thinking, caring, and acting for ourselves. Then, when you combine our excessive use of and addiction to chatbots and OI with our excessive use of and addiction to smart-phones, desktop and laptop computers, the internet, social media, and so on, the result is nothing less than the invasion of the mind snatchers, or more precisely, what I’ve called “an all-out existential attack on our rational human mindedness or intelligence,” which “is also an all-out existential attack on our human dignity” (Hanna, 2023f: p. 6). So even though OI will never be intelligent in the sense in which we’re intelligent, nor will it ever have human dignity, not even in principle, nevertheless, just like the mind-snatching chatbots, unless we universally enact dignitarian neo-Luddism with respect to digital technology, OI will be a serious threat to the well-being of humankind, oppressing us and indeed outright violating our human dignity. One of the direct implications of dignitarian neo-Luddism with respect to digital technology is that if any line of research in any of the formal or natural sciences that theoretically feed digital technology is implemented in a form of digital technology that either fails to respect human dignity sufficiently or outright violates human dignity, then we should refuse to pursue that line of research, no matter how massively power-enhancing or massively profit-making this form of digital technology will be for those who belong to the global hyper-State that I’ve called the military-industrial complex (Hanna, 2023f: p. 1 and n. 1). Therefore, from the standpoint of dignitarian neo-Luddism with respect to digital technology, it’s a moral categorical imperative that we not pursue the OI research program.

Organoid intelligence? Metaphysically and morally, just say no.

NOTE

[i] It needs to be emphasized and re-emphasized that dignitarian neo-Luddism with respect to digital technology is also committed to the positive dignitarian moral doctrine that some digital technology is good and right, and therefore ought to be used, precisely because it promotes the betterment of humankind and sufficiently respects human dignity. For example, in my opinion this is true of posting or self-publishing essays about dignitarian digital/AI ethics for universal free sharing on the internet. Why else would I be doing it? But in this context, I’m focusing on the negative dignitarian moral doctrine.

REFERENCES

(Chomsky, Roberts, and Watumull, 2023). Chomsky, N., Roberts, I. and Watumull, J. “Noam Chomsky: The False Promise of ChatGPT.” New York Times. 8 March. Available online at URL = <https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html>.

(Dick, 1968). Dick, P.K. Do Androids Dream of Electric Sheep? New York: Doubleday.

(Hanna, 2006). Hanna, R. Rationality and Logic. Cambridge MA: MIT Press. Also available online in preview at URL = <https://www.academia.edu/21202624/Rationality_and_Logic>.

(Hanna, 2011). Hanna, R. “Minding the Body.” Philosophical Topics 39: 15-40. Also available online in preview at URL = <https://www.academia.edu/4458670/Minding_the_Body>.

(Hanna, 2015). Hanna, R. Cognition, Content, and the A Priori: A Study in the Philosophy of Mind and Knowledge. THE RATIONAL HUMAN CONDITION, Vol. 5. Oxford: Oxford Univ. Press. Also available online in preview HERE.

(Hanna, 2018a). Hanna, R. Deep Freedom and Real Persons: A Study in Metaphysics. THE RATIONAL HUMAN CONDITION, Vol. 2. New York: Nova Science. Available online in preview HERE.

(Hanna, 2018b). Hanna, R. Kantian Ethics and Human Existence: A Study in Moral Philosophy. THE RATIONAL HUMAN CONDITION, Vol. 3. New York: Nova Science. Available online in preview HERE.

(Hanna, 2023a). Hanna, R. “How and Why ChatGPT Failed The Turing Test.” Unpublished MS. Available online at URL = <https://www.academia.edu/94870578/How_and_Why_ChatGPT_Failed_The_Turing_Test_January_2023_version_>.

(Hanna, 2023b). Hanna, R. “Dignity, Not Identity.” Unpublished MS. Available online at URL = <https://www.academia.edu/96684801/Dignity_Not_Identity_February_2023_version_>.

(Hanna, 2023c). Hanna, R. “Frederick Douglass, Kant, and Human Dignity.” Unpublished MS. Available online at URL = <https://www.academia.edu/97518662/Frederick_Douglass_Kant_and_Human_Dignity_February_2023_version_>.

(Hanna, 2023d). Hanna, R. “It’s All Done With Mirrors: A New Argument That Strong AI is Impossible.” Unpublished MS. Available online HERE.

(Hanna, 2023e). Hanna, R. “Are There Some Legible Texts That Even The World’s Most Sophisticated Robot Can’t Read?” Unpublished MS. Available online in preview HERE.

(Hanna, 2023f). Hanna, R. “Don’t Pause Giant AI Experiments: Ban Them.” Unpublished MS. Available online HERE.

(Hanna and Paans, 2021). Hanna, R. and Paans, O. “Thought-Shapers.” Cosmos & History 17, 1: 1-72. Available online at URL = <http://cosmosandhistory.org/index.php/journal/article/view/923>.

(Keller, 2023). Keller, A. “Artificial, But Not Intelligent: A Critical Analysis of AI and AGI.” Against Professional Philosophy. 5 March. Available online at URL = <https://againstprofphil.org/2023/03/05/artificial-but-not-intelligent-a-critical-analysis-of-ai-and-agi/>.

(Landgrebe and Smith, 2022). Landgrebe, J. and Smith, B. Why Machines Will Never Rule the World: Artificial Intelligence without Fear. London: Routledge.

(Putnam, 1981). Putnam, H. Reason, Truth, and History. Cambridge: Cambridge Univ. Press.

(Smirnova et al., 2023). Smirnova, L. et al. “Organoid Intelligence (OI): The New Frontier in Biocomputing and Intelligence-in-a-Dish.” Frontiers in Science. 28 February. Available online at URL = <https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235>.

(Turing, 1950). Turing, A. “Computing Machinery and Intelligence.” Mind 59: 433–460.

(Wikipedia, 2023a). Wikipedia. “Brain in a Vat.” Available online at URL = <https://en.wikipedia.org/wiki/Brain_in_a_vat>.

(Wikipedia, 2023b). Wikipedia. “The Matrix.” Available online at URL = <https://en.wikipedia.org/wiki/The_Matrix>.

(Wikipedia, 2023c). Wikipedia. “Blade Runner.” Available online at URL = <https://en.wikipedia.org/wiki/Blade_Runner>.

