
“Homo Machina (Machine Man),” by Fritz Kahn (Redbubble, 2025)
TABLE OF CONTENTS
1. Introduction
2. The Present Limits of AI: Empirical Considerations
3. Philosophical Arguments Against Artificial General Intelligence
4. Robert Hanna’s Systematic Challenge to Computational Mechanism
5. Neuroscientific Evidence Against Digital Computationism
6. Leading Theories of Consciousness: A Critical Analysis of Their Limitations
7. Quantum Mechanics and Consciousness
8. Conclusion
The essay below will be published in six installments; this installment, the second, contains section 3.
But you can also download and read or share a .pdf of the complete text of this essay, including the REFERENCES, by scrolling down to the bottom of this post and clicking on the Download tab.
3. Philosophical Arguments Against Artificial General Intelligence
Philosophical arguments against AGI attempt to show that it is a fundamental impossibility rooted in the nature of intelligence itself. The computational theory of mind (CTM) emerged in the mid-20th century as part of the broader cognitive revolution, promising to explain human intelligence through the metaphor of digital computation (Putnam, 1960; Fodor, 1975; Pylyshyn, 1984). According to CTM, mental processes are computational processes, mental states are computational states, and the mind is essentially a biological computer running cognitive software on neural hardware. This view has profoundly influenced artificial intelligence research, cognitive psychology, and neuroscience for over half a century.
However, despite its institutional dominance, CTM faces increasingly serious challenges from multiple directions. Mathematical arguments based on Gödel’s incompleteness theorems suggest that human mathematical insight transcends algorithmic computation (Lucas, 1961; Penrose, 1989, 1994). Philosophical arguments like John Searle’s Chinese Room demonstrate fundamental gaps between syntactic manipulation and semantic understanding (Searle, 1980, 1992). Practical problems in AI, particularly the frame problem, reveal deep difficulties in computational approaches to common-sense reasoning (McCarthy and Hayes, 1969; Dreyfus, 1972, 1992). Embodiment research shows that intelligence is fundamentally shaped by physical interaction with the environment in ways that resist computational modelling (Varela et al., 1991; Clark, 1997). Quantum mechanical considerations suggest that the brain may exploit non-classical physical processes (Penrose and Hameroff, 2011). Phenomenological analysis reveals aspects of consciousness that appear irreducible to computational processes (Dreyfus, 1972). And mounting neuroscientific evidence shows that brain processes differ fundamentally from digital computation processes (Freeman, 1999, 2001; Edelman, 1987).
We examine these challenges systematically, arguing that they collectively constitute a powerful case against computational theories of mind. Rather than viewing the mind as a digital computer, these arguments suggest that human intelligence emerges from non-computational processes that cannot be captured by algorithmic methods.
3.1 The Gödel Argument: Mathematical Insight Beyond Computation
Lucas and the Mechanistic Thesis
The most mathematically rigorous argument against computational theories of mind derives from Kurt Gödel’s incompleteness theorems, first applied to human cognition by philosopher J.R. Lucas in his influential 1961 paper “Minds, Machines and Gödel.” Lucas argued that Gödel’s first incompleteness theorem demonstrates that human mathematical insight transcends any possible mechanical (computational) system.
Gödel’s first incompleteness theorem shows that any consistent formal system capable of elementary arithmetic contains true statements that cannot be proved within the system. For any such system S, we can construct a Gödel sentence G(S) that essentially states “This sentence is not provable in system S.” If S is consistent, then G(S) is true but unprovable within S. However, Lucas argued, humans can recognize the truth of G(S) through insight that transcends the mechanical procedures of system S.
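For readers who want the construction itself, the standard sketch runs as follows (Prov_S is S's arithmetized provability predicate, corner quotes denote Gödel numbers; nothing here goes beyond the textbook argument):

```latex
% The Goedel sentence for a consistent, computably axiomatized theory S.
% (Standard construction; needs amsmath/amssymb for \nvdash and corner quotes.)
\begin{align*}
  &S \vdash G_S \leftrightarrow \lnot\mathrm{Prov}_S(\ulcorner G_S \urcorner)
    && \text{(diagonal lemma applied to } \lnot\mathrm{Prov}_S\text{)} \\
  &\mathrm{Con}(S) \;\Longrightarrow\; S \nvdash G_S
    && \text{(first incompleteness theorem)} \\
  &\mathrm{Con}(S) \;\Longrightarrow\; \mathbb{N} \models G_S
    && \text{(so } G_S \text{ is true but unprovable in } S\text{)}
\end{align*}
```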
Lucas’s argument proceeds as follows: If the human mind were equivalent to a computational system (formal system) M, then there would exist a Gödel sentence G(M) that is true but unprovable within M. However, humans can recognize the truth of G(M), demonstrating that human mathematical insight exceeds the capabilities of any computational system M. Therefore, the human mind cannot be purely computational (Lucas, 1961: pp. 112-127). We can set out this argument, as elaborated by Penrose, as follows:
Argument (Gödel–Lucas–Penrose style)
Any sound, computably axiomatized theory S of arithmetic (e.g., any fixed program’s theorems) has a true arithmetic sentence G_S that S can’t prove (Gödel’s first incompleteness theorem). For Π1 sentences (halting-type claims), “true” means: the corresponding computation really does or doesn’t halt.
1. Suppose a particular Turing machine M fully captures an ideal human mathematician H’s provable Π1 statements. Let S_M be the recursively axiomatized theory whose theorems are exactly M’s Π1 outputs.
2. By Gödel, there is a true Π1 sentence G_M that S_M cannot prove.
3. If H can recognize the Π1-soundness of S_M (i.e., that M never proves a false Π1 claim), then H can see the standard metamathematical implication “If S_M is Π1-sound, then G_M is true,” and thereby accept G_M.
4. So H can correctly accept a Π1 truth that M never reaches. Hence H’s Π1 competence strictly exceeds M’s.
5. Since M was arbitrary, no single Turing machine captures H. Therefore, H is not Turing-computable.
The key assumptions involved in this line of argument are:
- Idealized correctness: H never endorses a false Π1 statement (Π1-soundness), or at least is entitled to trust Π1-soundness of each specific S_M considered.
- Reflective insight: H can grasp the metatheory needed to infer G_M from (the trusted) Π1-soundness of S_M.
- Unity: There is a single machine M meant to capture the whole of H’s mathematical competence, not just a moving target that keeps being strengthened.
The diagonal step (3–5) is purely mathematical. If you grant the epistemic premises, namely that ideal human insight legitimately goes beyond any fixed computable theory, you get the anti-mechanist conclusion.
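The purely mathematical core of that diagonal step can be displayed in a few lines of code. The sketch below is a toy analogue, not Penrose's argument: it shows only that any fixed, total "halting predictor" can be outrun by a program built from it, which is the same diagonal trick that produces G_M.

```python
# Toy analogue of the diagonal step: no fixed, total halting predictor is
# right about every program, because a "spoiler" can be built from it.
def make_spoiler(predictor):
    """Given any claimed halting decider, return a program it misjudges."""
    def spoiler():
        if predictor(spoiler):   # predictor says "spoiler halts"...
            while True:          # ...so spoiler loops forever instead
                pass
        # predictor says "spoiler loops", so spoiler halts immediately
    return spoiler

# Any concrete predictor will do; this one always answers "halts".
always_halts = lambda program: True
s = make_spoiler(always_halts)
print(always_halts(s))  # True, yet running s() would never halt
```

Whether ideal human insight escapes this diagonal, as steps 3–5 claim, is exactly what the epistemic premises are meant to secure.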
Mechanists argue in reply:
- Humans aren’t Π1‑sound; we make mistakes.
- Even if a human informally trusts “S_M is sound,” a machine can simulate the same reflective strengthening by iterating consistency extensions (sketched schematically after this list); a single Turing machine can enumerate the growing union.
- Knowing “If S_M is sound, then G_M is true” doesn’t license accepting G_M unless one can justifiably accept S_M’s soundness; by Gödel’s second theorem, S_M cannot prove even its own consistency, and it is unclear that humans can certify soundness non-circularly.
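The second of these replies can be made precise. Here is the reflective hierarchy in schematic form (standard material, not specific to any party in the dispute):

```latex
% Iterated consistency extensions: the machine "reflects" as well.
\begin{align*}
  S_0 &:= S, \\
  S_{n+1} &:= S_n + \mathrm{Con}(S_n), \\
  S_\omega &:= \bigcup_{n<\omega} S_n.
\end{align*}
% S_omega is still computably axiomatized, so Goedel's theorem applies to it
% in turn; the machine gains ground at every stage but never finishes, and
% the dispute is over whether the human side ever finishes either.
```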
This is, then, a genuine argument for the conclusion that humans are not Turing-computable, but it hinges on strong epistemic assumptions about ideal human mathematical insight. If you accept them, the conclusion follows; if you do not, the mechanist can reject the argument, and mechanists do exactly that.
Penrose’s Sophisticated Development
Roger Penrose significantly developed and refined the Gödel argument in The Emperor’s New Mind (1989) and Shadows of the Mind (1994), and directly addressed many standard objections to Lucas’s original formulation. Penrose distinguishes between different types of computational systems and argues that human mathematical understanding exhibits non-algorithmic characteristics that cannot be captured by any computational procedure.
Penrose’s argument focuses on the concept of “mathematical insight,” the human ability to recognize mathematical truths that cannot be proven through purely mechanical procedures. He argues that human mathematicians can “see” the truth of Gödel sentences in ways that transcend algorithmic proof procedures, suggesting that mathematical understanding involves non-computational processes.
Crucially, Penrose addresses the objection that humans might themselves be inconsistent or subject to error. He argues that even if human mathematical reasoning contains errors, the specific type of insight involved in recognizing Gödel sentences demonstrates non-algorithmic understanding. Human errors are typically systematic and correctable through reflection, unlike the fundamental limitations that Gödel’s theorem imposes on formal systems (Penrose, 1994: pp. 64-75).
The Knowability and Soundness Arguments
Penrose develops sophisticated versions of the Gödel argument that avoid many traditional objections. His “knowability argument” contends that humans can in principle know the truth of mathematical statements that are unknowable to any computational system that generates the same mathematical output as humans. His “soundness argument” focuses on the human ability to recognize the soundness of mathematical reasoning procedures, an ability that appears to transcend any algorithmic characterization.
The soundness argument is particularly powerful because it addresses circularity objections to the Gödel argument. Critics often argue that the Gödel argument is circular: it assumes humans can recognize truths about formal systems, then concludes that humans transcend formal systems. However, Penrose’s soundness argument focuses on the human ability to recognize when mathematical procedures are truth-preserving (sound), which is a precondition for mathematical reasoning itself, rather than a conclusion drawn from such reasoning.
Recent work by Stewart Shapiro and others has attempted to rebut Penrose’s arguments through more sophisticated analyses of algorithmic versus non-algorithmic processes (Shapiro, 2003). However, these rebuttals typically concede that human mathematical insight involves non-mechanical elements, while arguing that these elements might still be broadly “computational” in some extended sense. This concession significantly weakens the computational theory of mind by acknowledging irreducibly non-mechanical aspects of human cognition.
3.2 Searle’s Chinese Room: Syntax, Semantics, and Understanding
The Original Argument
John Searle’s Chinese Room argument, first presented in “Minds, Brains, and Programs” (Searle, 1980), provides one of the most influential philosophical challenges to computational theories of mind. The argument targets “strong AI,” the claim that appropriately programmed computers have cognitive states, understanding, and other mental phenomena that can equal or exceed the cognitive achievements of rational human animals.
Searle’s thought experiment involves a monolingual English speaker locked in a room with vast rule books for manipulating Chinese characters. The person receives Chinese characters through a slot, consults rule books, and sends appropriate Chinese characters back out, without understanding Chinese at all. From the outside, the room’s input-output behavior might be indistinguishable from a native Chinese speaker, yet no genuine understanding occurs within the room.
Searle argues that this demonstrates a fundamental distinction between syntactic manipulation (following formal rules) and semantic understanding (grasping meanings). Computers, like the person in the room, can only manipulate syntactic symbols according to formal rules; they cannot achieve genuine semantic understanding. Since understanding requires semantic content, purely computational systems cannot exhibit genuine understanding, regardless of their behavioral sophistication (Searle, 1980: pp. 417-424).
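A toy version of the room makes the syntax/semantics gap vivid. In the sketch below the rule book and phrases are invented placeholders, and real conversational systems are vastly more elaborate, but Searle's point is that elaboration adds no semantics: every step remains pattern matching.

```python
# A caricature of the Chinese Room: pure syntactic lookup, no semantics.
# The "rule book" entries are invented examples, not real data.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(symbols: str) -> str:
    """Match incoming shapes against stored shapes; nothing in this function
    could count as grasping what the characters mean."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "Say that again."

print(room("你好吗？"))  # fluent-looking output from a meaning-free process
```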
The Biological Naturalism Alternative
Searle develops his critique within a framework of “biological naturalism,” the view that consciousness and understanding emerge from specific biological processes in brains rather than from abstract computational processes (Searle, 1992, 1997). According to biological naturalism, mental phenomena are higher-level features of brain activity, comparable to how digestion is a higher-level feature of stomach activity.
This biological approach provides an alternative to both computationalism and dualism. Against computationalists, Searle argues that consciousness depends on specific biological processes that cannot be replicated through functional simulation alone. Against dualists, he maintains that consciousness is a natural biological phenomenon, rather than a separate metaphysical substance.
Biological naturalism suggests that the specific biochemical processes occurring in biological brains are necessary for genuine understanding and consciousness. Silicon-based computers, regardless of their computational sophistication, lack the biological processes that generate semantic understanding in biological organisms (Searle, 1992: pp. 227-230).
Responses and the Symbol Grounding Problem
The Chinese Room argument connects to broader issues in cognitive science concerning the “symbol grounding problem”: how symbolic representations acquire their semantic content (Harnad, 1990). Computational systems manipulate formal symbols, but where do these symbols get their meanings?
Traditional computational approaches attempt to ground symbolic meaning through causal connections to external environments or through role within larger computational systems. However, these approaches face circularity problems: causal connections require interpretation to become meaningful, and computational roles are themselves purely syntactic unless grounded in semantic content.
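Harnad (1990) illustrates the circularity with a “dictionary-go-round”: every symbol is defined only by further symbols, so tracing definitions never bottoms out in anything non-symbolic. A minimal sketch, with invented entries:

```python
# A Harnad-style dictionary-go-round: definitions lead only to more symbols.
DEFS = {
    "bachelor": ["unmarried", "man"],
    "unmarried": ["not", "married"],
    "married": ["having", "a", "spouse"],
    "spouse": ["married", "partner"],   # back to "married": a cycle
}

def trace(word, seen=()):
    """Follow definitions until a cycle appears, i.e., grounding fails."""
    if word in seen:
        return f"cycle at {word!r}: no non-symbolic ground reached"
    for part in DEFS.get(word, []):
        found = trace(part, seen + (word,))
        if found:
            return found
    return None

print(trace("bachelor"))  # -> cycle at 'married': no non-symbolic ground reached
```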
The symbol grounding problem suggests that computational approaches to meaning may be fundamentally inadequate. Meaning appears to require genuine understanding that involves more than formal symbol manipulation, thereby supporting Searle’s distinction between syntactic computation and semantic understanding.
Recent work in embodied cognition and enactive approaches attempts to address symbol grounding through embodied interaction with environments (Varela et al., 1991; Clark, 1997). However, these approaches typically abandon pure computational approaches in favor of more complex dynamical and interactive frameworks that transcend digital computation.
3.3 The Frame Problem and Common-Sense Reasoning
McCarthy and Hayes’s Original Formulation
The frame problem, first articulated by John McCarthy and Patrick Hayes (1969), represents one of the most persistent and fundamental challenges to computational approaches to intelligence. Originally formulated as a technical problem in AI, the frame problem reveals deep conceptual difficulties in representing and updating knowledge about changing environments.
The basic frame problem concerns how to represent what remains unchanged when some change occurs in the world. For example, if a robot moves a box from one room to another, how does it know that the walls haven’t changed color, that other objects remain in their positions, and that the laws of physics continue to operate? Computational systems require explicit representation of these “frame conditions,” but there appear to be infinitely many such conditions for any given change.
This leads to what Daniel Dennett calls the “frame problem proper,” the computational intractability of updating beliefs in changing environments (Dennett, 1984). Any computational system attempting to reason about a changing world faces either combinatorial explosion (considering all possible changes) or incompleteness (failing to consider relevant changes). This suggests fundamental limitations in computational approaches to common-sense reasoning (McCarthy and Hayes, 1969: pp. 463-502).
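A toy state-update makes the difficulty concrete. In the sketch below (a STRIPS-flavored caricature with invented predicates), the effect axioms take two lines, and everything else rides on a blanket "nothing else changes" assumption; licensing that assumption within the logic itself is precisely the frame problem.

```python
# A toy world model in the STRIPS spirit: effects are easy to state,
# but persistence of everything else is simply assumed by copying.
state = {
    ("box", "in", "room1"): True,
    ("wall", "color", "white"): True,
    ("lamp", "on", "table"): True,
    # ...indefinitely many further facts a real agent would carry
}

def move_box(state, src, dst):
    """Two effect axioms, plus a blanket persistence assumption."""
    new_state = dict(state)                 # "nothing else changes", by fiat
    new_state[("box", "in", src)] = False   # effect: box leaves src
    new_state[("box", "in", dst)] = True    # effect: box arrives at dst
    # Justifying the copy above in logic, rather than by fiat, is exactly
    # what McCarthy and Hayes found so hard.
    return new_state

state = move_box(state, "room1", "room2")
print(state[("wall", "color", "white")])    # True: persisted by fiat, not inference
```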
Dreyfus’s Phenomenological Critique
Hubert Dreyfus developed powerful critiques of computational approaches to common-sense reasoning, drawing on phenomenological insights from Martin Heidegger and Maurice Merleau-Ponty. Dreyfus argued that human common-sense understanding involves holistic engagement with meaningful contexts that cannot be captured through explicit symbolic representation (Dreyfus, 1972, 1992).
According to Dreyfus’s analysis, human understanding is fundamentally contextual and embodied rather than computational. Humans navigate complex environments through what Heidegger called “ready-to-hand” engagement (Heidegger, 1927/1962), skilful coping that involves immediate, non-reflective response to meaningful situations. This type of understanding resists computational modelling because it involves global sensitivity to context rather than explicit rule-following.
Dreyfus’s critique extends beyond technical problems in AI to fundamental conceptual issues about the nature of human understanding. He argues that computational approaches assume that intelligence consists in explicit representation and rule-following, whereas human intelligence actually involves embodied skills and contextual sensitivity that cannot be made fully explicit (Dreyfus, 1992: pp. 3-35).
By way of example, in August 2025 Gemini developed a looping problem: when it was unable to solve particular problems, it responded with messages that looked like cyber-depression. It was nothing of the sort; it was simply executing a misfiring algorithm that drew on stored expressions of frustration. Gemini did not know that it did not know: it had no subjective experience of ignorance, and so it lacked the ability to say “I don’t know,” as humans easily can. (Gemini itself told us this in response to a query.) If Gemini had possessed subjective awareness, it might have recognized its own failure and simply stated, “I am unable to solve this problem,” a conscious act of acknowledging its own ignorance. But it did not, because it lacked that self-awareness; it is merely a system for processing information.
Humans, by contrast, have a subjective, first-person experience of their own mental states. When we know something, it is not just a memory lookup; it is a feeling of confidence, a sense of having the information. Similarly, when we do not know, we have a conscious experience of ignorance: we are aware of the gap in our knowledge. This awareness is a meta-level of cognition, thinking about our own thinking, and it is a key component of human intelligence that is not easily explained by a purely computational model.
3.4 The Qualification Problem and Brittleness
Related to the frame problem is the “qualification problem,” the difficulty of specifying all the conditions under which a given rule or procedure applies (McCarthy, 1986). For example, the rule “if you want to go somewhere, walk there” requires qualification by countless conditions: unless your legs are broken, unless the destination is across an ocean, unless you are in a wheelchair, etc.
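In code the problem looks disarmingly mundane, which is part of its force: each qualification is trivial to add, and nothing in the formalism says when the list is complete. A sketch with invented predicates:

```python
# The qualification problem in caricature: an unending chain of "unless" clauses.
def can_walk_there(person: dict, place: dict) -> bool:
    """Each clause is one more hand-coded qualification."""
    if person.get("legs_broken"):
        return False
    if place.get("across_ocean"):
        return False
    if person.get("uses_wheelchair"):
        return False
    if place.get("on_fire"):
        return False
    # ...and so on, with no principled stopping point
    return True

print(can_walk_there({"legs_broken": False}, {"across_ocean": True}))  # False
```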
The qualification problem reveals the “brittleness” of computational systems, their tendency to fail catastrophically when encountering situations not explicitly programmed. Human intelligence exhibits remarkable robustness in novel situations, suggesting non-computational adaptability mechanisms.
Attempts to solve the frame and qualification problems through “non-monotonic logics” or “default reasoning” have achieved limited success, while significantly complicating computational approaches (Reiter, 1980; McCarthy, 1986). These solutions typically require ad hoc modifications that suggest more fundamental problems with computational approaches to common-sense reasoning.
3.5 Embodiment and Situated Cognition
The Enactive Approach
The enactive approach to cognition, developed by Francisco Varela, Humberto Maturana, and others, challenges computational theories by emphasizing the fundamental role of embodied action in cognition (Varela et al., 1991; Maturana and Varela, 1987). According to enactivism, cognition does not involve internal representation of external environments, but rather emerges from dynamic interaction between organisms and their environments.
Enactive cognition is characterized by “structural couplings” between cognitive systems and their environments: relations of co-evolution and mutual specification that cannot be captured through computational input-output relationships. Cognitive systems and environments specify one another through ongoing interaction, thereby creating emergent properties that transcend both individual systems and environmental constraints.
This approach fundamentally challenges computational approaches by denying that cognition involves internal computation over symbolic representations. Instead, cognition consists in skilful action that maintains viability within environmental constraints. The cognitive system and environment form an integrated unity that cannot be decomposed into computational modules processing environmental inputs (Varela et al., 1991: pp. 150-173).
The Dynamical Systems Alternative
Related to enactive approaches are dynamical systems theories of cognition, which model cognitive processes as continuous dynamical systems rather than discrete computational processes (Thelen and Smith, 1994; van Gelder, 1995; Clark, 1997). Dynamical approaches emphasize temporal evolution, continuous interaction, and emergent organization rather than symbolic computation.
Tim van Gelder’s influential work demonstrates that many cognitive phenomena are better understood as dynamical systems than as computational processes (van Gelder, 1995). For example, coordinated rhythmic movements involve continuous dynamical coupling rather than computational planning and control. Similarly, decision-making may involve dynamical settling into attractor states rather than computational evaluation of alternatives.
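A minimal sketch of "deciding by settling", in the spirit of (though not taken from) van Gelder's examples: two continuously coupled accumulators relax into one of two attractor states, with no symbolic evaluation of alternatives anywhere in the loop. All parameter values are illustrative.

```python
# Decision as dynamical settling: mutual inhibition drives the system
# into one of two attractors; which one depends on the evidence (bias).
import math

def settle(bias, steps=2000, dt=0.01):
    x = y = 0.1                                            # options A and B
    for _ in range(steps):
        dx = -x + math.tanh(2.0 * x - 2.5 * y + bias)      # self-excitation,
        dy = -y + math.tanh(2.0 * y - 2.5 * x)             # cross-inhibition
        x, y = x + dt * dx, y + dt * dy                    # Euler step
    return "A" if x > y else "B"

print(settle(bias=0.3))   # evidence tilts toward A: system relaxes to A
print(settle(bias=-0.3))  # ...and to B when the evidence tilts the other way
```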
Dynamical systems approaches suggest that cognitive processes are fundamentally temporal and continuous, rather than discrete and algorithmic. This represents a fundamental departure from computational approaches, which model cognition as sequential processing of discrete symbolic representations.
3.6 Affordances and Direct Perception
James J. Gibson’s ecological psychology provides another challenge to computational approaches through the concept of “affordances,” opportunities for action directly specified by environmental structures (Gibson, 1979). According to Gibson, perception directly detects affordances rather than constructing internal representations of environmental properties.
Gibson’s approach eliminates the need for computational processing of sensory inputs to construct internal world-models. Instead, perceptual systems are directly attuned to environmental information that specifies opportunities for action. This capacity for direct perception bypasses computational stages of representation, inference, and planning that are central to computational approaches.
Recent research in embodied cognition supports Gibson’s insights by demonstrating intimate connections between perception and action that resist computational modelling (Clark, 1997; Chemero, 2009). Perceptual-motor skills appear to involve direct coupling between environmental information and motor response rather than computational mediation through symbolic representations.
3.7 Landgrebe and Smith’s Critique of Machine Supremacy
Jobst Landgrebe and Barry Smith, in their 2025 book Why Machines Will Never Rule the World: Artificial Intelligence Without Fear, argue that the complexity of real-world environments, combined with computational limits and the nature of biological cognition, creates insurmountable barriers to machine intelligence that matches human cognitive flexibility and adaptability (Landgrebe and Smith, 2025).
Central to Landgrebe and Smith’s argument is Stephen Wolfram’s principle of computational irreducibility, the idea that for many complex systems, there are no shortcuts to determining their behavior other than running the system itself. This principle has profound implications for artificial intelligence, particularly for systems attempting to navigate complex, real-world environments.
Landgrebe and Smith argue that biological systems, including human cognition, operate within computationally irreducible domains. The behavior of such systems cannot be predicted or simulated by computational shortcuts because the systems themselves represent the most efficient computational process for determining their own behavior. This creates a fundamental barrier for AI systems attempting to model or predict biological behavior, including human decision-making and environmental dynamics.
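Wolfram's own minimal illustration is the elementary cellular automaton. In the sketch below (Rule 110 on a small ring of cells), the only known general way to learn the configuration after n steps is to compute all n steps; that is computational irreducibility in miniature.

```python
# Rule 110: a simple local rule with globally irreducible behavior. To know
# the state at step 20, we run steps 1 through 20; no shortcut is known.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40   # a single live cell on a ring of 81
for _ in range(20):
    cells = step(cells)
print("".join("#" if c else "." for c in cells))
```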
The computational irreducibility argument gains force when considered alongside the scaling challenges facing AI systems. As AI systems attempt to model increasingly complex real-world scenarios, the computational requirements grow exponentially. Landgrebe and Smith argue that this growth rate exceeds any plausible improvements in computational hardware, creating a fundamental ceiling on AI capabilities.
This argument is particularly compelling when applied to embodied AI systems that must navigate complex physical environments. The number of variables and potential interactions in real-world scenarios grows combinatorially, quickly overwhelming computational resources. While narrow AI systems can excel in constrained domains with well-defined parameters, the complexity of general intelligence appears to exceed computational tractability.
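The arithmetic behind that combinatorial growth is unforgiving even at toy scale: with n binary features, the joint configuration space is 2^n, and pairwise interactions alone number n(n-1)/2.

```python
# Growth of joint configurations and pairwise interactions with n features.
for n in (10, 50, 100):
    print(f"n={n}: {2**n:.3e} configurations, {n*(n-1)//2} pairwise interactions")
```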
Critics might argue that computational irreducibility applies only to certain classes of systems and that biological intelligence might employ computational shortcuts unknown to current AI approaches. However, Landgrebe and Smith’s argument is strengthened by evidence from complexity science suggesting that biological systems operate at the “edge of chaos,” precisely the regime where computational irreducibility is most pronounced.
Furthermore, even if biological systems employ unknown computational shortcuts, the burden of proof falls on AGI proponents to demonstrate that these shortcuts exist and can be discovered and implemented artificially. The default assumption, given our current understanding of complexity, should be that such shortcuts are unlikely to exist for general intelligence tasks.
Landgrebe and Smith argue that the frame problem reveals deeper issues about the nature of intelligence and context. Current AI systems struggle with contextual understanding because they lack the implicit background knowledge that humans effortlessly bring to any situation. This knowledge is not merely factual but involves understanding the relevance relationships that determine which aspects of a situation are important for particular purposes. Landgrebe and Smith contend that this contextual understanding cannot be captured by statistical correlations in large datasets, but requires a form of embodied interaction with the world that artificial systems cannot achieve.
As we have pointed out, closely related to the frame problem is the symbol grounding problem, the question of how symbols in a computational system acquire meaning. Landgrebe and Smith argue that human intelligence is grounded in biological embodiment and evolutionary history in ways that artificial systems cannot replicate. The meaning of concepts for humans emerges from their interaction with the world through biological bodies shaped by millions of years of evolution.
AI systems, by contrast, manipulate symbols without genuine understanding. Even the most sophisticated Large Language Models (LLMs) that demonstrate impressive linguistic capabilities are essentially performing statistical transformations on symbol patterns without access to the grounded meaning that these symbols have for humans. This fundamental disconnect limits AI systems to sophisticated pattern matching rather than genuine understanding.
As we noted above, contemporary AI systems provide empirical support for Landgrebe and Smith’s arguments about contextual understanding. Despite impressive performance in specific domains, these systems regularly fail when confronted with situations that require flexible contextual reasoning or when they encounter examples that differ subtly from their training data.
The brittleness of current AI systems when faced with adversarial examples or out-of-distribution data suggests fundamental limitations rather than merely engineering challenges. These failures indicate that current AI approaches lack the robust contextual understanding necessary for general intelligence.
Building on recent work in embodied cognition and enactive approaches to mind, Landgrebe and Smith argue that intelligence is not computation performed by a brain, but instead emerges from the dynamic interaction between an organism and its environment. This view, supported by research in cognitive science and neuroscience, suggests that intelligence cannot be separated from the biological substrate and the embodied experience that produces it.
AI systems, regardless of their computational sophistication, lack the biological embodiment that grounds human intelligence. They cannot replicate the sensorimotor experience, emotional responses, and biological needs that shape human cognition. This embodiment is not merely instrumental to intelligence but constitutive of it.
Landgrebe and Smith draw extensively on complexity science to argue that biological intelligence exhibits emergent properties characteristic of complex adaptive systems. These properties cannot be captured by reductionist approaches that attempt to build intelligence from computational primitives.
Complex adaptive systems exhibit nonlinear dynamics, self-organization, and emergent behaviors that cannot be predicted from knowledge of their components. Biological intelligence, as a complex adaptive system, possesses properties that emerge only from the interaction of biological, psychological, and environmental factors in ways that cannot be replicated by artificial systems.
Research in complexity science suggests that optimal computation occurs at the “edge of chaos,” the boundary between order and disorder. Biological systems, including neural networks, operate in this regime, which provides optimal conditions for information processing, memory storage, and adaptive behavior.
Landgrebe and Smith argue that artificial systems cannot sustainably operate at the edge of chaos because they lack the self-organizational properties of biological systems. Attempts to engineer systems that operate in this regime either collapse into chaos or revert to rigid, overly ordered states, thereby inherently limiting their cognitive flexibility.
The network topology of biological neural networks exhibits properties associated with “criticality,” a specific regime of network dynamics that optimizes information transmission and processing. These critical dynamics emerge spontaneously in biological systems through self-organizational processes but cannot be engineered in artificial systems.
The critical dynamics of biological networks contribute to the flexibility and adaptability of biological intelligence. Artificial neural networks, despite their name, do not exhibit the same critical dynamics as biological networks, inherently limiting their ability to match the cognitive flexibility of biological systems.
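Criticality can be made concrete with a toy branching process: each active unit excites two downstream units, each with probability sigma/2, so the mean offspring number is sigma. Below sigma = 1 cascades die out quickly; above it they blow up; at the critical point sigma = 1 cascade sizes become heavy-tailed, the regime this literature associates with biological networks. This sketch is purely illustrative and is not Landgrebe and Smith's model.

```python
# A toy branching process around the critical point sigma = 1.
import random

def avalanche(sigma, rng, cap=10_000):
    """One cascade; `cap` truncates runaway super-critical cascades."""
    active, total = 1, 1
    while active and total < cap:
        active = sum(rng.random() < sigma / 2 for _ in range(2 * active))
        total += active
    return total

rng = random.Random(0)
for sigma in (0.8, 1.0, 1.2):       # sub-critical, critical, super-critical
    sizes = [avalanche(sigma, rng) for _ in range(500)]
    print(f"sigma={sigma}: mean={sum(sizes)/len(sizes):.1f}  max={max(sizes)}")
```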
A common counterargument to Landgrebe and Smith’s position points to the continuous improvement in AI capabilities over recent decades. Proponents of the strong AI thesis argue that exponential improvements in computational power, combined with algorithmic advances, will eventually overcome current limitations.
However, Landgrebe and Smith’s argument is not about current limitations, but instead about fundamental barriers. The principle of computational irreducibility suggests that certain problems cannot be solved more efficiently, regardless of computational power. Similarly, the frame problem and symbol grounding problem represent conceptual, rather than merely technical challenges.
Another counterargument to Landgrebe and Smith claims that intelligence is substrate-independent, that cognitive processes can be implemented in any sufficiently complex computational system. This functionalist position suggests that artificial systems could, in principle, replicate human intelligence if they implement the right computational processes.
Landgrebe and Smith’s emphasis on biological embodiment and evolutionary history challenges this substrate-independence assumption. Their argument entails that intelligence is not merely functional, but instead emerges from the specific material and historical properties of biological systems. Intelligence may be substrate-dependent in ways that make artificial replication impossible.
Recent LLMs have demonstrated emergent capabilities that were not explicitly programmed, leading some to argue that sufficient scale might produce AGI through emergence. Landgrebe and Smith’s complexity-science background provides the resources for adequately rebutting this argument.
While acknowledging that complex systems can exhibit emergent properties, they argue that the specific type of emergence characteristic of biological intelligence cannot be replicated in artificial systems. The emergent properties of biological intelligence depend on evolutionary history, embodied interaction, and biological substrate in ways that artificial systems cannot replicate. Once more, this puts fundamental limits upon AI.

