Mind, Mechanism, and Materialism: The Case Against the Computational Theory of Mind and Artificial General Intelligence, #1.

“Homo Machina (Machine Man),” by Fritz Kahn (Redbubble, 2025)


TABLE OF CONTENTS

1. Introduction

2. The Present Limits of AI: Empirical Considerations

3. Philosophical Arguments Against Artificial General Intelligence

4. Robert Hanna’s Systematic Challenge to Computational Mechanism

5. Neuroscientific Evidence Against Digital Computationism

6. Leading Theories of Consciousness: A Critical Analysis of Their Limitations

7. Quantum Mechanics and Consciousness

8. Conclusion

The essay below will be published in six installments; this installment, the first, contains sections 1-2.

But you can also download and read or share a .pdf of the complete text of this essay, including the REFERENCES, by scrolling down to the bottom of this post and clicking on the Download tab.


Mind, Mechanism, and Materialism: The Case Against the Computational Theory of Mind and Artificial General Intelligence

Whoever undertakes to set him[her]self up as a judge in the field of Truth and Knowledge is shipwrecked by the laughter of the gods. (Einstein, 1954: pp. 27-28)

Latterly, I have come to think that mystery is quite pervasive, even in the hardest of sciences. Physics is a hotbed of mystery: space, time, matter and motion – none of it is free of mysterious elements. The puzzles of quantum theory are just a symptom of this widespread lack of understanding. … The human intellect grasps the natural world obliquely and glancingly, using mathematics to construct abstract representations of concrete phenomena, but what the ultimate nature of things really is remains obscure and hidden. How everything fits together is particularly elusive, perhaps reflecting the disparate cognitive faculties we bring to bear on the world (the senses, introspection, mathematical description). We are far from obtaining a unified theory of all being and there is no guarantee that such a theory is accessible by finite human intelligence. (McGinn, 2012)

1. Introduction

This work presents an overview of some challenges to the mechanistic worldview, or mechanism for short, arguing that the balance of reasons tells against the position, and therefore that it should be rejected as false, or at least as unjustified. The main focus will be upon the philosophy and psychology of mind, although the discussion of quantum mechanics in section 7 below will seek to extend the critique against mechanism more broadly. We’ll begin with a characterization of mechanism.

The Mechanistic World View

Robert Hanna gives this account of mechanism, with which we agree:

[E]verything in the world is fundamentally either a formal automaton or a natural automaton, operating strictly according to Turing-computable algorithms and/or time-reversible or time symmetric deterministic or indeterministic laws of nature, especially the Conservation Laws (including the First Law of Thermodynamics) and the Second Law of thermodynamics, which also imposes always-increasing entropy—i.e., the always-increasing unavailability of any system’s thermal energy for conversion to causal  (aka “mechanical”) action or work—on all natural mechanisms, until a total equilibrium state of the universe is finally reached. (Hanna, 2024: 23)

Mechanism in the philosophy of mind represents a cluster of related positions that seek to understand mental phenomena in terms of physical processes and systems. Rather than treating the mind as something fundamentally separate from or irreducible to physical reality, mechanistic approaches attempt to explain consciousness, cognition, and mental states in terms of underlying physical, neurological mechanisms; hence, in a reductionist fashion.

This approach emerged from the scientific revolution’s success in explaining natural phenomena through mechanical principles, and it represents an attempt to extend this explanatory framework to the mind itself. This perspective stands in contrast to positions that reject reductionism and do not see the mind, especially consciousness, as in need of such reductionistic explanations, mental phenomena being sui generis. That’s the position that is reached in the conclusion of this paper, contra mechanism.

The so-called “hard problem of consciousness” (Chalmers, 1995, 1996, 2018) is seen by mechanists as the remaining “nomological dangler,” needing reductionist explanation in order to complete the aim of scientific unificationism: the reduction of all sciences (possibly excluding formal logic, (abstract/formal) computer science, and mathematics, although a physicalist nominalist program seeks this reduction too [Hanna, 2024]) to the master foundational science, physics. To elaborate: according to David Chalmers, “we can say that a being is conscious if there is something it is like to be that being” (Chalmers, 1996: p. 4; Nagel, 1974). Consciousness involves having phenomenological experiences, qualia. But from a mechanistic point of view this is puzzling, because there seems to be an explanatory gap between physiology/neurology and subjective experience (Levine, 1983). The “hard problem” is to explain why sentient animals have phenomenological experience at all, when they could instead have been “philosophical zombies,” behaviorally equivalent to their actual selves but totally lacking such experiences (Chalmers, 1996).

The mechanist position can be further broken down as follows:              

1. Realism

Realism in the philosophy of mind maintains that mental states and processes correspond to objective features of reality, rather than being mere constructions or illusions. For mechanistic approaches, this typically means:

Scientific Realism: Mental phenomena are real features of the natural world that can be studied scientifically. Beliefs, desires, emotions, and conscious experiences are not simply useful fictions, but correspond to actual states of physical systems.

Mind-Independence: Mental properties exist independently of our theories about them. The mechanisms underlying cognition operate according to objective principles, regardless of whether we have discovered or correctly understood them.

Causal Efficacy: Mental states, being physical, have real causal powers. They can bring about changes in behavior and other mental states through their underlying physical mechanisms.

This realist commitment distinguishes mechanism from eliminativist approaches that deny the reality of mental phenomena, and from purely instrumentalist approaches that treat mental concepts as merely useful tools for prediction without ontological commitment.

2. Materialism

Materialism (or physicalism) forms the ontological foundation of mechanistic approaches. This position holds that:

Ontological Priority: Everything that exists is either physical or supervenes on the physical. There are no mental substances or properties that exist independently of physical reality.

Causal Closure: The physical world is causally closed. All physical events have sufficient physical causes, leaving no room for non-physical mental causation to intervene in the natural order.

Explanatory Completeness: In principle, all phenomena, including mental phenomena, can be explained in terms of physical processes and properties.

Materialism provides the metaphysical backdrop against which mechanistic explanations operate. It rules out dualistic solutions that would place mental phenomena outside the reach of physical science, thus creating the explanatory challenge that mechanism attempts to meet in the case of, for example, phenomenological experience.

3. Reductionism/Scientific Unificationism

Reductionism in the philosophy of mind comes in several varieties, but generally involves the claim that mental phenomena can be reduced to or explained in terms of more fundamental physical processes:

Ontological Reductionism: Mental properties are identical to or constituted by physical properties. There is nothing “over and above” the physical that needs to be explained.

Explanatory Reductionism: Mental phenomena can be fully explained by appeal to lower-level physical mechanisms. Understanding these mechanisms provides complete explanations of mental life.

Methodological Reductionism: The proper way to study mental phenomena is through investigation of their underlying physical basis, typically at the level of neuroscience, biochemistry, and ultimately physics.

4. Computationalism

Computationalism, the main critical target in this essay, represents a specific form of mechanism that became prominent with the rise of computer science and cognitive science.

The Computational Theory of Mind: Mental processes are computational processes. Thinking is a form of computing, involving the manipulation of symbolic representations according to formal rules. This position could be part of a more general metaphysics, pancomputationalism: the view that everything in the universe, if not the universe itself, is a computing system, or Turing machine (Piccinini, 2007). However, Turing machines can occupy only a countably infinite number of states, whereas standard mathematical descriptions of natural systems, given by systems of differential equations, have a continuous state space comprising an uncountable number of possible state-space trajectories, so a Turing machine cannot be mapped onto those descriptions (Piccinini, 2007: p. 100); the cardinality point is made explicit in the display at the end of this subsection.

Implementation Independence: Mental processes are defined by their computational structure rather than their specific physical implementation. This allows for the possibility that minds could be implemented in non-biological systems.

Information Processing: Mental phenomena can be understood as forms of information processing, involving the encoding, storage, retrieval, and transformation of information. The mind is a computing machine, such as a Turing machine (Boolos and Jeffrey, 1980: pp. 19-33), a “meat” version of a digital computer.
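To make explicit the cardinality point mentioned in the Computational Theory of Mind paragraph above, here is a minimal formulation (our gloss, not Piccinini’s own notation):

```latex
\[
\underbrace{\left|\, Q \times \Sigma^{*} \times \mathbb{N} \,\right|}_{\text{Turing-machine configurations}} \;=\; \aleph_{0}
\qquad \text{whereas} \qquad
\underbrace{\left|\, \mathbb{R}^{n} \,\right|}_{\text{state space of } \dot{x} = f(x)} \;=\; 2^{\aleph_{0}} \;>\; \aleph_{0}.
\]
```

Since a Turing machine has only a finite state set Q, a finite alphabet Σ, and finitely many marked tape cells at any time, its possible configurations form a countable set; the states and trajectories of a continuous dynamical system do not, so the continuum of physical states cannot be injectively mapped into the countable set of machine configurations.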

5. Determinism

Determinism plays a crucial role in mechanistic approaches, though its relationship to mental phenomena raises complex philosophical questions.

Causal Determinism: Mental events, like all physical events, are the inevitable result of prior causes operating according to natural laws. Given the state of the physical world at any time, future mental states are fixed. As Hoefer puts it:

The world is governed by (or is under the sway of) determinism if and only if, given a specific way things are at time t, the way things go thereafter is fixed as a matter of natural law. (Hoefer, 2020; emphasis in the original)

According to this, facts about the past, plus the laws of nature, entail all truths about the future of the world, so all events are completely determined by prior existing causes. On the hard determinist position, free will does not exist, and as Brian Greene puts it: “We are no more than playthings knocked to and fro by the dispassionate rules of the cosmos” (Greene, 2020, p. 147; also: Harris, 2011; Scott, 2018; Sapolsky, 2023).

Predictability in Principle: Mental phenomena follow regular patterns that could, in principle, be predicted if we had complete knowledge of the underlying physical mechanisms and initial conditions.

Lawlike Regularities: Mental processes operate according to discoverable laws or principles, even if these may be probabilistic rather than strictly deterministic.

Our primary concern is with the mechanistic approach to mind, especially computer models of mind. This is particularly relevant today outside of philosophical and psychological debates, insofar as we are inundated with media reports of the relentless march of artificial intelligence (AI) toward Artificial General Intelligence (AGI), at which point AI could perform everything humans can do. If the “singularity” occurred, making AIs truly thinking machines, there are scenarios in which this “superintelligence” would replace humans not merely in “mechanical” jobs where pattern recognition is primary, but in all cognitive areas (Bostrom, 2014; Sonik and Colarossi, 2020; Ozimek, 2025).

In the next section, we’ll show that while the power of AI to render millions unemployed is real, the general replacement thesis is false, on the basis of present evidence and the arguments of AI insiders. The rest of the work will then explore theoretical and philosophical objections to mechanism with respect to mind and consciousness. The seven sections to follow are:

2. The Present Limits of AI: Empirical Considerations.

3. Philosophical Arguments Against Artificial General Intelligence.

4. Robert Hanna’s Systematic Challenge to Computational Mechanism.

5. Neuroscientific Evidence Against Digital Computationism.

6. Leading Theories of Consciousness: A Critical Analysis of Their Limitations.

7. Quantum Mechanics and Consciousness.

8. Conclusion.

2. The Present Limits of AI: Empirical Considerations

There are many warnings from big tech leaders, journalists, and think tanks about the rapid displacement of workers by AI, including professionals such as lawyers. Some even say that getting a law or computer science degree is now a waste of time, since coding can now supposedly be done by AI better than the low-paid IT workers who slave away at it, and AI will soon do most of the jobs that low-level lawyers now do (Ozimek, 2025). Therefore, it is worthwhile initiating this critique of AI and AGI with a discussion of present limits. First, we consider someone who used LLMs in legal research and found problems. Second, we consider a more wide-ranging critique by AI insiders, before, third, moving on to general philosophical concerns in the next section.

Charles Hugh Smith argues in a blog post that AI, at least at present, fails at tasks “where accuracy must be absolute to create value” (Hugh Smith, 2025).  He cites the case of investigative legal reporter Ian Lind, who used AI tools such as Gemini and ChatGPT to help analyze federal prosecution cases in Hawaii. Lind found:

My experience has definitely been mixed. On the one hand, sort of high-level requests like “identify the major issues raised in the documents and sort by importance” produced interesting and suggestive results. But attempts to find and pull together details on a person or topic almost always had noticeable errors or hallucinations. I would never be able to trust responses to even what I consider straightforward instructions. Too many errors. Looking for mentions of “Drew” in 150 warrants said he wasn’t mentioned. But he was, I’ve gone back and found those mentions. I think the bots read enough to give an answer and don’t keep incorporating data to the end. They shoot from the hip and, in my experience, have often produced mistakes. Sometimes it’s 25 answers and one glaring mistake, sometimes more basic. (Hugh Smith, 2025)

Hugh Smith also points out that these limitations arise from a number of factors:

1. AI doesn’t actually “read” the entire collection of texts. In human terms, it gets “bored” and stops once it has enough to generate a credible response.

2. AI has digital dementia. It doesn’t necessarily remember what you asked for in the past nor does it necessarily remember its previous responses to the same queries.

3. AI is fundamentally, irrevocably untrustworthy. It makes errors that it doesn’t detect (because it didn’t actually “read” the entire trove of text) and it generates responses that are “good enough,” meaning they’re not 100% accurate, but they have the superficial appearance of being comprehensive and therefore acceptable. This is the “shoot from the hip” response Ian described.

4. AI agents will claim their response is accurate even when it is obviously lacking, they will lie to cover their failure, and then lie about lying. If pressed, they will apologize and then lie again. (Hugh Smith, 2025)

While these negative results might not matter much for, say, undergraduate student work, in complex legal work in the “real world” these mistakes and limits could lose cases and endanger people’s freedom, property, and money, simply through an error about a defendant’s name, for example. However, as Hugh Smith notes, the fundamental problem goes to the heart of the limits of AI agents: the “illusion of thinking” problem, which we now describe in detail.

The recent wave of enthusiasm for Large Reasoning Models (LRMs), advanced versions of Large Language Models (LLMs) that extend inference through chain-of-thought and self-reflection, has been driven by the hope that more computation translates into deeper reasoning. This analogy to human deliberation, however, is misleading. Shojaee et al. present perhaps the strongest arguments to date that what these systems produce is not genuine reasoning, but what they call an “illusion of thinking” (Shojaee et al., 2025). Their findings support broader critiques of LLMs, including Marcus and Davis on brittleness, and Mitchell on the lack of adequate conceptual grounding (Marcus and Davis, 2019; Mitchell, 2023).

A key contribution of these studies lies in their experimental design. Existing benchmarks (e.g., MATH-500, AIME) are compromised by training-set contamination, lack of fine-grained difficulty control, and a fixation on final accuracy. By contrast, Shojaee et al. introduce puzzle-based environments (Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World) that allow precise manipulation of complexity, employ novel instances to block memorisation, and use deterministic simulators for rigorous validation (Shojaee et al., 2025). This design isolates reasoning ability itself, rather than rewarding pattern familiarity, answering calls for “stress tests” that expose structural weaknesses in AI, as well as Mitchell’s (2023) insistence on domain-transfer tests to probe generalization.
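To give a concrete sense of what “deterministic simulators for rigorous validation” involve, here is a minimal sketch of a step-by-step validator for one of the four puzzles; the representation and function name are our own illustration, not the code of Shojaee et al. (2025):

```python
# Minimal sketch of a deterministic Tower of Hanoi move validator, illustrating
# how puzzle-based benchmarks can check every step of a model's proposed
# solution rather than only its final answer. Our illustration, not the
# authors' code.

def validate_hanoi(num_disks: int, moves: list[tuple[int, int]]) -> bool:
    """Return True iff `moves` (pairs of 0-indexed pegs) legally transfers
    all disks from peg 0 to peg 2."""
    pegs = [list(range(num_disks, 0, -1)), [], []]  # peg 0: bottom..top = largest..smallest
    for src, dst in moves:
        if not pegs[src]:                       # illegal: moving from an empty peg
            return False
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:  # illegal: larger disk onto a smaller one
            return False
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(num_disks, 0, -1))  # all disks on the goal peg, in order
```

Because every proposed move can be checked against the rules, such environments can score the entire reasoning trace rather than just the final answer, which is what makes the step-level failures discussed below detectable.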

Empirically, LRMs cluster into three performance regimes that highlight their fragility:

Low-Complexity Regime—The Efficiency Paradox. Standard LLMs often outperform their “thinking” counterparts. Overthinking leads to deterioration: solutions found early are abandoned as models meander into error. Like a student who second-guesses a correct answer, the model wastes time and tokens without gain.

Medium-Complexity Regime—A Narrow Sweet Spot. Here, LRMs demonstrate advantages, but only within a constrained band of difficulty. Their gains are real but modest, not transformative.

High-Complexity Regime—The Collapse Phenomenon. At higher levels of complexity, both LRMs and baseline LLMs fail entirely. Crucially, additional computation does not stave off collapse, revealing that scaling inference-time “thinking” does not mirror human cognition, where extended deliberation can eventually solve harder problems. This echoes Mitchell’s argument that LLMs excel only within a “Goldilocks zone” of task difficulty, but fail at extremes, indicating that such systems lack mechanisms for true generalisation (Mitchell, 2023).

The most striking result is that LRMs do not substantially improve even when handed explicit solution algorithms. In principle, executing a known procedure should be easier than inventing one. Yet performance falters at the same thresholds. This is akin to giving a student the recipe for long division, only to find that they still misapply the steps. Such evidence undermines claims that LRMs are engaging in algorithmic reasoning at all, supporting Marcus’s claim that LLMs lack “systematicity,” the ability to reliably follow rules across contexts (Marcus, 2022).
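For concreteness, the explicit procedure in the Tower of Hanoi case is nothing more exotic than the textbook recursion sketched below (a standard formulation, not necessarily the exact algorithm text supplied in the study’s prompts); executing it requires no insight, only the faithful application of three steps, which is what makes the models’ failure to follow it so telling:

```python
# The textbook recursive procedure for Tower of Hanoi. Executing it requires
# no search or insight, only mechanical application of three steps; this is
# the kind of explicit algorithm that, per Shojaee et al. (2025), LRMs still
# fail to follow once the number of disks grows.

def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Return the optimal sequence of (from_peg, to_peg) moves for n disks."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)      # 1. park the top n-1 disks on the spare peg
            + [(src, dst)]                   # 2. move the largest disk to the goal peg
            + hanoi(n - 1, aux, src, dst))   # 3. restack the n-1 disks on top of it

print(len(hanoi(10)))  # 2**10 - 1 = 1023 moves: the solution length doubles with each extra disk
```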

Another telling behavior is what Shojaee et al. call the token allocation paradox. As problems become more difficult, LRMs paradoxically devote fewer reasoning tokens, even when resources remain available. This betrays a lack of meta-cognition: an inability to recognise when more effort is required. Humans may procrastinate or misjudge effort, but they can also critically reflect and make adjustments; LRMs and LLMs cannot. This observation resonates with Lake and Baroni’s argument that LLMs lack the adaptive control processes central to human intelligence (Lake and Baroni, 2023).

LRMs show remarkable inconsistency across domains. For example, Claude 3.7 Sonnet Thinking can sustain 100 correct moves in Tower of Hanoi, yet collapses after only 4–5 moves in River Crossing, despite the latter being far simpler. The likely explanation is training exposure: familiar puzzles elicit competent performance; rarer ones reveal brittleness and incompetence. This behavior manifests pattern matching, rather than reasoning, and aligns with the notion of “stochastic parroting,” where models reproduce familiar surface patterns, but falter when novelty demands flexible reasoning (Lake and Baroni, 2023).

Across experiments, the evidence converges on a single interpretation: LRMs and LLMs simulate reasoning through sophisticated statistical patterning but do not engage in genuine thought. Hallmarks of this include:

  • Failure to benefit from explicit algorithms.
  • Inappropriate scaling of effort with difficulty.
  • Collapse at modest complexity increases.
  • Domain brittleness tied to training exposure.

Together, these behaviors resemble the outward appearances of human thought, such as overconfidence, confusion, and second-guessing, without the underlying mechanisms that allow humans to recover, adapt, or transfer strategies.

The implications are sobering. If LLMs and LRMs cannot reliably execute even simple algorithmic steps, their promise as precursors to Artificial General Intelligence (AGI) is overstated. Predictions of imminent AGI in 2026 are undermined by the fact that these systems falter on well-defined puzzles far simpler than real-world reasoning tasks. For all their fluency, they lack semantic understanding (Mitchell, 2023), systematic generalization (Marcus and Davis, 2019), and meta-cognitive awareness (Lake and Baroni, 2023). These results alone establish that, at least with present technology, LLMs and LRMs exhibit only the “illusion of thinking,” and not general intelligence. Shojaee et al. remind us that impressive surface performance does not equate to deep reasoning (Shojaee et al., 2025). LRMs give the appearance of thought, but collapse under pressure, much as a student who can mimic textbook answers falters when asked to improvise and reason independently.

This conclusion about the limits of AI is, surprisingly, now the “expert consensus” among AI elites. A remarkable shift in expert opinion has emerged regarding the trajectory of artificial intelligence development. A recent survey found that 76% of scientists said that scaling large language models was “unlikely” or “very unlikely” to achieve AGI. This is a major departure from the optimistic predictions that have dominated the tech industry narrative since the generative AI boom of 2022 (AAAI, 2025; Turner, 2025; Wu and Boas, 2025). The survey, conducted by the Association for the Advancement of Artificial Intelligence (AAAI) with 475 AI researchers as respondents, reveals a scientific community increasingly skeptical of the fundamental approach that has driven billions in investment and captured global attention. The findings represent what many consider a “resounding dismissal” of tech industry predictions that current AI models only need more data, hardware, energy, and money in order to eclipse human intelligence.

The scaling hypothesis has been the foundation of the modern AI boom. Since the breakthrough success of the transformer architecture in 2017, the industry has operated under the assumption that larger models, trained on more data with more computational resources, would inevitably lead to artificial general intelligence. This belief has driven unprecedented investment: the generative AI industry raised $56 billion in venture capital globally in 2024 alone.

Broader industry observations are that OpenAI, Google, and others are seeing diminishing returns from building ever-bigger models. The evidence of stagnation is mounting: recent model releases appear to plateau in performance, and AI labs traveling the road to super-intelligent systems are realizing they might have to take a detour, as current scaling laws show diminishing returns (AAAI, 2025).

The fundamental flaw in current approaches is that they involve training large feedforward circuits, and such circuits have inherent limitations as a way of representing concepts. They have to be enormous in order to represent concepts even approximately, which leads to vast data requirements and piecemeal representation with gaps. This architectural critique suggests that the limitations are not only about scale, but also about the underlying approach to knowledge representation: the circuit-based approach creates what amount to “glorified lookup tables” that require exponentially increasing resources to handle even approximate concept representation.

Beyond architectural constraints, the scaling paradigm faces imminent resource limitations. Projections indicate that the finite human-generated data essential for further growth will likely be exhausted by the end of this decade. Once this occurs, alternatives include harvesting private data from users or feeding AI-generated “synthetic” data back into models, approaches that risk system collapse from accumulated errors (AAAI, 2025).

The survey reveals a stark disconnect between industry hype and scientific assessment: 79% of the survey’s respondents said that perceptions of AI capabilities don’t match reality. Experts warn of a slowdown in AI advances, with LLMs hitting performance ceilings and diminishing returns from scaling. This aligns with observations that scaling AI is getting more expensive and harder: as computation and energy costs surge, tech giants will need to rethink the future.

Even in practical applications, the limitations are becoming apparent. According to recent studies of experienced open-source developers, when developers use AI tools they take 19% longer than without them: AI makes them slower.

Unsustainable Resource Requirements

The scaling approach demands eye-watering quantities of money and energy. The carbon emissions of data center complexes have tripled since 2018, highlighting the environmental unsustainability of continued scaling efforts. Throwing more resources at scaling delivers diminishing returns, and further advances will require smarter techniques beyond traditional scaling, suggesting that the current approach is both economically and environmentally unsustainable. Indeed, even if all of these technical problems are overcome, the path to AGI may be blocked by energy consumption and the laws of physics (Lloyd, 2000).

The energy constraint argument gains force from a study based on the Blue Brain Project (Stiefel and Coggan, 2022). This project attempts to recreate the neural networks of the human brain in silicon, providing a crucial benchmark for understanding the computational requirements of human-level intelligence. However, even this cutting-edge simulation leaves out many details, so current estimates of energy consumption are underestimates. Even so, the projected energy needs were orders of magnitude greater than present US energy production (Stiefel and Coggan, 2022).

A critical assumption underlying the energy constraint argument is that AGI systems must possess complexity comparable to or exceeding human brains. The human brain contains approximately 100 billion neurons, each representing a highly complex processing unit with thousands of connections. Creating an artificial system capable of matching human cognitive abilities across all domains would seemingly require similar or greater computational resources.

This creates a fundamental scaling problem. Current deep learning networks with 10 million parameters cannot compete with biological brains containing 100 billion neurons. The gap isn’t just quantitative; it is also qualitative, involving the intricate biochemical processes that enable neural computation (Cotra, 2020; Thompson, 2020; Karnofsky, 2021).
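A back-of-envelope calculation using only the round numbers already cited above (roughly 100 billion neurons with thousands of connections each, versus a 10-million-parameter network) conveys the scale of the purely quantitative gap; the figures are illustrative orders of magnitude, not measurements:

```python
# Back-of-envelope comparison using the round figures cited above; these are
# illustrative orders of magnitude, not measured quantities.

neurons = 100e9                 # ~100 billion neurons in the human brain (figure cited above)
connections_per_neuron = 1e3    # "thousands of connections" per neuron (low-end estimate)
brain_connections = neurons * connections_per_neuron   # ~1e14 connections

network_parameters = 10e6       # the 10-million-parameter networks mentioned above

print(f"Brain connections:  {brain_connections:.0e}")
print(f"Network parameters: {network_parameters:.0e}")
print(f"Quantitative gap:   roughly {brain_connections / network_parameters:.0e}x")
```

Even granting the (false) assumption that one parameter is as expressive as one biological synapse, the shortfall on these numbers is about seven orders of magnitude, before the qualitative differences just mentioned are even considered.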

Modern semiconductor-based computing faces inherent efficiency limitations when compared to biological systems. The human brain operates on roughly 20 watts of power, equivalent to a dim light bulb, while performing cognitive tasks that challenge even the most powerful supercomputers.

Silicon-based systems must overcome several disadvantages:

  • Heat generation and cooling requirements.
  • Electrical resistance and energy loss.
  • The need for precise digital switching versus analog biological processes.
  • Separation between memory and processing units.

The energy constraint argument represents a sobering physical reality check for AGI ambitions. The fundamental physics of information processing might impose limits that cannot be overcome through engineering alone. Correspondingly, AGI might be shipwrecked on the finite energy resources of the planet (Stiefel and Coggan, 2022).   


Against Professional Philosophy is a sub-project of the online mega-project Philosophy Without Borders, which is home-based on Patreon here.

Please consider becoming a patron!