Mind, Mechanism, and Materialism: The Case Against the Computational Theory of Mind and Artificial General Intelligence, #4.

“Homo Machina (Machine Man),” by Fritz Kahn (Redbubble, 2025)


TABLE OF CONTENTS

1. Introduction

2. The Present Limits of AI: Empirical Considerations

3. Philosophical Arguments Against Artificial General Intelligence

4. Robert Hanna’s Systematic Challenge to Computational Mechanism

5. Neuroscientific Evidence Against Digital Computationism

6. Leading Theories of Consciousness: A Critical Analysis of Their Limitations

7. Quantum Mechanics and Consciousness

8. Conclusion

The essay below will be published in six installments; this installment, the fourth, contains section 5.

But you can also download and read or share a .pdf of the complete text of this essay, including the REFERENCES, by scrolling down to the bottom of this post and clicking on the Download tab.


5. Neuroscientific Evidence Against Digital Computationism

Contemporary neuroscience increasingly reveals limitations in computational metaphors for understanding brain function. Research on neural plasticity, neurogenesis, and epigenetic factors demonstrates that brains differ fundamentally from digital computers in their developmental processes and adaptive capabilities (Kandel et al., 2013).

Unlike digital computers, which maintain stable hardware-software distinctions, biological brains exhibit continuous structural and functional plasticity that integrates the “hardware” and “software” levels. Neural systems can modify their own connectivity patterns, generate new neurons, and alter gene expression in response to experience; these capabilities have no analogues in digital computation.

Research on neural oscillations, default mode networks, and global workspace dynamics reveals organizational principles that differ significantly from digital computational architectures (Varela et al., 1991). These findings suggest that brains operate according to dynamical principles that transcend computational approaches, a view developed by Gerald Edelman, whose work we now consider.

5.1 Gerald Edelman’s Neurological Critique of Computer Models of Mind

Gerald Edelman’s Bright Air, Brilliant Fire (Edelman, 1992) presents a sustained neurobiological critique of computational theories of mind, challenging the foundational assumptions underlying artificial intelligence and cognitive science. By means of his theory of Neural Darwinism, Edelman argues that the brain operates according to selectionist rather than instructionist principles, fundamentally distinguishing biological cognition from computational processing. While Edelman’s critique offers valuable insights into the limitations of computational metaphors, questions nevertheless remain regarding the completeness of his alternative framework and its implications for contemporary neuroscience and AI research.

Edelman’s central thesis challenges what he terms the “computer metaphor” of mind, the assumption that mental processes can be adequately understood in terms of computational operations performed on symbolic representations. Instead, he proposes Neural Darwinism, a theory that emphasizes the selectionist, evolutionary character of neural organization and function. This framework, Edelman argues, reveals fundamental differences between biological and artificial information processing that cannot be bridged through mere increases in computational power or sophistication.

The stakes of this debate extend beyond academic philosophy to encompass practical questions about artificial intelligence development, neurotechnology, and our understanding of human nature itself. If Edelman’s critique is sound, then much contemporary work in AI and computational cognitive science is pursuing fundamentally misguided research programs.

Edelman’s most fundamental contribution lies in his distinction between selectionist and instructionist theories of neural function. Instructionist theories, which underlie most computational models, assume that the brain processes information according to predetermined programs or algorithms. Like a computer executing software, the instructionist brain follows explicit rules to transform inputs into outputs through a series of determinate steps.

By contrast, Edelman’s selectionist approach draws an analogy with evolutionary processes. Just as natural selection operates on populations of organisms with varying traits, neural selection operates on populations of neuronal groups with varying patterns of connectivity and response. Through experience, some neural circuits are strengthened while others are weakened, resulting in the emergence of adaptive patterns without the need for explicit programming or instruction.

This selectionist vs. instructionist distinction has profound implications for understanding the nature of mental representation and processing. In instructionist models, mental contents are typically understood as discrete symbols manipulated according to syntactic rules, the foundation of both classical AI and much contemporary cognitive science. Edelman’s selectionist framework suggests that neural “representations” are better understood as dynamic patterns of activity that emerge from competitive processes among neural populations.

Edelman’s Neural Group Selection (NGS) theory provides the mechanistic foundation for his selectionist approach. NGS operates through three key processes:

  • Developmental Selection: During neural development, genetic and epigenetic factors create enormous diversity in synaptic connections. This primary repertoire provides the raw material for subsequent selection processes.
  • Experiential Selection: Through interaction with the environment, some synaptic connections are strengthened while others are weakened. This secondary repertoire reflects the organism’s particular experiential history.
  • Re-entrant Signalling: Neural areas engage in continuous reciprocal signalling, creating dynamic patterns of correlated activity that integrate information across different brain regions.

The NGS framework holds that neural organization emerges through activity-dependent processes rather than following predetermined blueprints. This fundamental insight challenges computational models that assume fixed architectures implementing determinate algorithms. Instead, Edelman proposes that neural “computation” is better understood as a form of pattern recognition and selection operating on populations of neural responses.
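The selectionist logic can be conveyed with a deliberately crude sketch, in which a diverse population of randomly initialized “groups” competes to respond to a recurring stimulus, and the best responders are amplified while the rest decay. (Everything here, including the numbers, the weight-vector modelling, and the update rule, is invented for illustration; it is not Edelman’s own formalism.)

```python
import random

random.seed(0)
STIMULUS = [1.0, 0.0, 1.0, 1.0]

# "Developmental selection": a diverse primary repertoire of groups,
# here crudely modelled as random weight vectors.
groups = [[random.uniform(-1, 1) for _ in STIMULUS] for _ in range(50)]
strengths = [1.0] * len(groups)

def response(group, stimulus):
    return sum(w * s for w, s in zip(group, stimulus))

# "Experiential selection": repeated exposure amplifies whichever group
# responds best and lets the others decay; no group is ever programmed
# to recognize the stimulus.
for _ in range(100):
    best = max(range(len(groups)),
               key=lambda i: strengths[i] * response(groups[i], STIMULUS))
    for i in range(len(groups)):
        strengths[i] *= 1.05 if i == best else 0.99

winners = sum(1 for s in strengths if s > 1.0)
print(f"{winners} of {len(groups)} groups amplified")
```

No group is told what to detect; selective amplification alone leaves a small minority of the initial repertoire dominant, which is the sense in which selection replaces instruction.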

Edelman’s concept of topobiology provides additional neurobiological grounding for his critique of computational models. Topobiology describes how neural development proceeds through local interactions between cells and their molecular environment, resulting in the emergence of complex neural architectures without centralized control or explicit programming.

This developmental perspective highlights a crucial difference between biological and artificial systems. Computer architectures are designed according to explicit specifications that determine their structure and function. In contrast, neural architectures emerge through self-organizing processes that reflect both genetic constraints and environmental influences. The resulting neural organization cannot be fully captured by any finite set of rules or algorithms.

One of Edelman’s most powerful arguments against computational theories concerns the problem of semantic content. Classical AI and cognitive science assume that mental representations derive their meaning from their role in computational processes, their “functional role” in transforming inputs to outputs. Edelman argues that this approach fails to account for how neural activity acquires genuine semantic content rather than merely formal structure.

The difficulty arises from what is called the “symbol grounding problem” (Harnad, 1990). In digital computers, symbols derive their meaning through interpretation by external agents (programmers and users). The symbol “cat” in a computer program means cat only because humans have established this interpretive relationship. But neural activity cannot depend on external interpretation; it must somehow ground its own semantic content through the physical processes of the brain itself.
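The grounding point can be dramatized with a toy program (the classifier and its labels are invented here purely for illustration): to the machine, “cat” is an uninterpreted token, and replacing it everywhere with an arbitrary string yields a behaviourally identical program.

```python
# To the program, the label is an uninterpreted token: swapping "cat"
# for an arbitrary string changes nothing about the computation itself.
# The interpretive relationship lives entirely in the human user.

def make_classifier(label):
    # A crude rule standing in for any discrimination procedure.
    return lambda legs: label if legs == 4 else "not-" + label

classify_cat = make_classifier("cat")
classify_x91 = make_classifier("x91")

# Both classifiers draw exactly the same distinction in the world;
# only the arbitrary token attached to the outcome differs.
print(classify_cat(4), classify_x91(4))
print(classify_cat(2), classify_x91(2))
```

Whatever the program “knows,” it knows identically under either labelling; the meaning of “cat” is supplied entirely from outside the computation.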

Edelman’s selectionist framework suggests a solution through the concept of “value.” Neural selection processes are biased by evaluative systems that determine which patterns of activity are reinforced or suppressed. These value systems, rooted in the organism’s biological needs and evolutionary history, provide the basis for semantic content by establishing which neural patterns matter for the organism’s survival and flourishing.

This account suggests that genuine semantic content requires the kind of embodied, value-laden interaction with the environment that characterizes biological systems. Computational models, operating through formal symbol manipulation without genuine biological values, may achieve sophisticated behavioral outputs without genuine understanding or semantic content.

Edelman’s analysis of categorization provides another crucial line of argument against computational models. Categorization, the ability to group diverse stimuli into meaningful classes, is fundamental to all higher cognitive functions. Yet Edelman argues that computational approaches face a fundamental “bootstrap problem” in explaining how categorical structure emerges.

Classical computational models assume that categories are defined by necessary and sufficient conditions that can be explicitly programmed. However, psychological research has consistently shown that natural categories typically lack such definitional structure. Instead, they exhibit “family resemblance” structure with overlapping similarities rather than shared essential features.

Edelman argues that, even more problematically, computational models presuppose the very categorical distinctions they purport to explain. To implement a program that recognizes cats, programmers must already possess the concept “cat” to specify the relevant features and decision procedures. This creates a regress: how did humans originally acquire categorical concepts if not through computational processes?

Edelman’s selectionist framework offers an alternative through what he terms “categorization through memory.” Neural group selection creates neural circuits that respond selectively to recurring patterns in the organism’s experience. Through re-entrant signalling, these specialized circuits interact to create higher-order patterns that capture similarities across different experiences. Categories emerge as stable patterns of neural activity rather than explicit symbol structures.

The binding problem—how the brain integrates diverse types of information into unified conscious experiences—poses another challenge for computational models. When you see a red ball, your visual system processes color, shape, motion, and other features in different neural areas. Yet you experience a single, integrated percept rather than a collection of separate features. How does the brain achieve this integration?

Computational approaches typically propose various architectural solutions, such as central processors that combine information from specialized modules or synchronization mechanisms that coordinate distributed processing. However, Edelman argues that such solutions fail to capture the dynamic, context-sensitive character of neural integration.

His theory of re-entrant signalling provides an alternative account. Rather than requiring a central integrator, re-entrant connections allow neural areas to influence one another’s activity patterns. Through recursive interactions, a globally consistent pattern of activity emerges that reflects the constraints imposed by all participating neural areas. This creates what Edelman terms “dynamic cores,” integrated patterns of neural activity that underlie conscious experience.

The re-entrant framework suggests that conscious integration cannot be reduced to computational operations because it depends on the specific anatomical and physiological properties of neural circuits. The timing, connectivity patterns, and biophysical properties of neurons all contribute to the emergence of integrated conscious states in ways that cannot be captured by abstract computational descriptions.
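A minimal sketch can convey the flavour of mutual settling without a central integrator (the two “areas,” their vectors, and the update rule are invented for illustration and carry none of the anatomical detail Edelman’s account relies on):

```python
# Two "areas" begin with conflicting activity patterns and repeatedly
# nudge their activity toward each other's; a single consistent joint
# pattern emerges with no third component doing the integrating.

area_a = [1.0, 0.0, 0.0]   # say, a colour-dominated initial pattern
area_b = [0.0, 0.0, 1.0]   # say, a shape-dominated initial pattern

for _ in range(50):
    area_a = [a + 0.1 * (b - a) for a, b in zip(area_a, area_b)]
    area_b = [b + 0.1 * (a - b) for a, b in zip(area_a, area_b)]

gap = max(abs(a - b) for a, b in zip(area_a, area_b))
print(f"max disagreement after settling: {gap:.4f}")
```

The settled pattern is jointly determined by both starting points; neither area dictates the outcome, which is the intuition behind integration through reciprocal signalling rather than a central combiner.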

Edelman’s critique also addresses the temporal aspects of neural processing that computational models struggle to capture. Biological neural networks exhibit complex temporal dynamics, with patterns of activity evolving continuously rather than through discrete computational steps. This creates what AI researchers recognize as the “frame problem,” the difficulty of determining which information remains relevant as situations change over time.

In computational systems, the frame problem is typically addressed through explicit rules that specify what information to maintain or update in different circumstances. However, such rules require programmers to anticipate all relevant situations in advance, an impossible task for open-ended environments.

Edelman’s selectionist framework suggests that biological systems solve the frame problem through the continuous operation of neural selection processes. Rather than maintaining explicit representations of current states, the brain maintains dynamic patterns of activity that are continuously modified by ongoing experience. These patterns naturally adapt to changing circumstances without requiring explicit rules about what to preserve or update.

The temporal dimension of neural processing also relates to Edelman’s concept of the “remembered present.” Unlike computational systems that operate on discrete time steps, conscious experience involves the continuous integration of past, present, and anticipated future states. This temporal integration emerges through the recurrent dynamics of neural circuits, rather than through the sequential processing of discrete computational operations.

Edelman’s treatment of consciousness represents perhaps his most ambitious challenge to computational theories. While computational approaches typically focus on the functional aspects of mental processes, what the mind does rather than what it feels like, Edelman focuses on the qualitative dimensions of conscious experience that philosophers call “qualia.”

As we’ve noted, the hard problem of consciousness concerns how and why physical processes give rise to subjective experience. Why should there be “something it is like” to see red or feel pain, rather than just neural processing that discriminates colors or responds to tissue damage? Computational theories typically sidestep this question by focusing on functional capacities while remaining agnostic about subjective experience.

Edelman argues that consciousness and qualia cannot be separated from the specific biological processes that generate them. His theory proposes that conscious experience emerges from the formation of dynamic cores, integrated patterns of re-entrant activity that bind diverse neural processes into unified states. The qualitative character of experience reflects the specific patterns of neural activity within these dynamic cores.

This biological grounding of consciousness creates difficulties for computational theories that assume substrate independence, the view that mental processes can be implemented in any sufficiently complex information processing system. If Edelman is correct, then consciousness requires the specific biological properties of neural tissue and cannot be replicated in silicon-based computational systems regardless of their functional sophistication.

Edelman’s distinction between primary and higher-order consciousness provides additional support for his critique of computational models. Primary consciousness, which he attributes to many animals, involves the integration of sensory, memory, and value systems to create a unified “scene” of ongoing experience. This form of consciousness emerges through the basic operations of neural group selection and re-entrant signalling.

Higher-order consciousness, unique to humans and perhaps a few other species, involves the additional capacity for symbolic reference and linguistic communication. This enables humans to construct models of themselves and their environment that can be manipulated independently of immediate sensory input.

Crucially, Edelman argues that higher-order consciousness depends on the prior existence of primary consciousness rather than emerging purely from computational operations on symbolic representations. Language and symbolic thought are grounded in the qualitative experiences of primary consciousness, suggesting that genuine AI would require not just sophisticated symbol processing, but also the biological substrate that supports conscious experience.

The rise of deep learning and artificial neural networks might seem to vindicate computational approaches, by demonstrating that artificial systems can achieve sophisticated pattern recognition and learning. However, Edelman’s critique identifies important limitations that remain relevant to contemporary AI.

While artificial neural networks are loosely inspired by biological neurons, they typically lack the complex temporal dynamics, re-entrant connectivity, and value systems that Edelman identifies as crucial for genuine neural processing. Most deep learning systems operate through feedforward processing with discrete training phases, contrasting sharply with the continuous, recurrent dynamics of biological neural networks.

Moreover, deep learning systems typically require enormous amounts of labelled training data and exhibit brittleness when confronted with novel situations, limitations that may reflect their departure from the selectionist principles that Edelman argues are fundamental to biological intelligence. The need for extensive supervised training suggests that these systems lack the autonomous categorization abilities that emerge naturally in biological systems through neural group selection.

Edelman’s emphasis on the biological grounding of cognition aligns with contemporary developments in embodied cognition and more specifically with enactivist approaches to mind. These frameworks, developed by researchers like Francisco Varela and Alva Noë (Noë, 2004), argue that cognition cannot be understood independently of the body and its environmental interactions.

The convergence between Edelman’s selectionist framework and enactivist approaches suggests a broader shift away from computational metaphors toward more biologically grounded theories of mind. However, this convergence also raises questions about whether Edelman’s specific theoretical commitments are necessary or whether alternative biological approaches might achieve similar insights.

Recent developments in predictive processing and Bayesian brain theories present interesting challenges to Edelman’s framework. These approaches propose that the brain operates as a prediction machine that continuously generates and updates models of sensory input based on prior expectations and prediction errors (Friston, 2010).

While predictive processing theories maintain computational commitments that Edelman would likely reject, they share his emphasis on the active, constructive character of neural processing. The brain doesn’t simply respond to sensory input, but actively generates predictions that shape perception and action. This active dimension might be seen as compatible with Edelman’s selectionist framework, although significant theoretical differences also remain.
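The core of the predictive-processing idea can be sketched in a few lines (the learning rate and the toy signal are invented for illustration; this is nothing like Friston’s full free-energy formalism): an internal model is repeatedly corrected in proportion to its prediction error.

```python
# A prediction is nudged toward each observation in proportion to the
# prediction error, the signature move of predictive-processing accounts.

def update(prediction, observation, learning_rate=0.2):
    error = observation - prediction      # prediction error
    return prediction + learning_rate * error

prediction = 0.0
for observation in [1.0] * 20:            # a stable sensory signal
    prediction = update(prediction, observation)

print(f"settled prediction: {prediction:.3f}")  # approaches 1.0
```

The system is active in precisely the sense noted above: it generates an expectation first and lets the world correct it, rather than passively registering input.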

While Edelman’s critique of computational models is compelling, questions remain about whether his alternative framework successfully addresses the explanatory challenges it identifies. The hard problem of consciousness, for instance, might not be resolved merely by appealing to dynamic cores and re-entrant signalling. Critics might argue that Edelman simply relocates the mystery rather than solving it: why should these particular biological processes give rise to subjective experience rather than occurring “in the dark”?

Similarly, Edelman’s account of semantic content through value systems may not fully address the symbol grounding problem. Even if biological values provide a basis for semantic content, it remains unclear how the transition from biological significance to genuine meaning occurs. The gap between biological function and semantic content may be as challenging as the gap between computation and meaning that Edelman identifies in artificial systems.

Edelman’s theoretical framework makes specific empirical predictions about neural organization and function, but the experimental validation of these predictions remains incomplete. While there is substantial evidence for activity-dependent neural development and the importance of re-entrant connections, direct evidence for neural group selection and dynamic cores is more limited.

Contemporary neuroscience has developed sophisticated techniques for measuring neural activity with high spatial and temporal resolution, but translating these measurements into tests of Edelman’s theoretical framework remains challenging. The complexity of neural systems and the indirect relationship between neural activity and theoretical constructs create difficulties for decisive experimental validation.

If Edelman’s critique is correct, then what implications follow for artificial intelligence research? One interpretation holds that genuine AI requires biological substrates, and that silicon-based systems cannot achieve genuine intelligence regardless of their computational sophistication. This would represent a fundamental limitation on AI development that could be overcome only through biotechnology or hybrid bio-artificial systems.

Edelman’s Bright Air, Brilliant Fire presents a sophisticated and influential critique of computational theories of mind grounded in detailed knowledge of neural organization and function. His selectionist framework offers genuine insights into the distinctive features of biological cognition that might be missed by computational approaches focused on formal symbol manipulation.

The strength of Edelman’s critique lies in its integration of empirical neuroscience with broader theoretical insights about the nature of mind and consciousness. By grounding his arguments in specific claims about neural development, organization, and function, Edelman avoids the purely philosophical objections that computational theorists can sometimes dismiss as irrelevant to their technical projects.

However, significant questions remain about both the completeness of Edelman’s critique and the adequacy of his alternative framework. While he successfully identifies important limitations in computational approaches, it’s less clear that his selectionist framework provides a complete account of the phenomena he seeks to explain. The hard problem of consciousness, the symbol grounding problem, and other fundamental issues in philosophy of mind will require additional theoretical resources beyond those that Edelman provides.

For contemporary cognitive science and AI research, Edelman’s work serves as a valuable corrective to overly simplistic computational assumptions while pointing toward more biologically realistic approaches to understanding intelligence and consciousness. Whether his specific theoretical commitments prove correct or not, his broader insights about the distinctive features of biological cognition will likely remain relevant for future developments in these fields. He has certainly generated a challenging neurological critique of mechanism.

5.2 John Lorber’s Challenge to Mechanism

John Lorber’s documented cases of severe hydrocephalus patients maintaining normal cognitive function despite massive brain tissue loss (Lewin, 1980; Perkins, 2025) present a fundamental challenge to computational theories of mind. Such extreme neuroplasticity demonstrates properties incompatible with classical Turing machine models, suggesting instead that consciousness emerges from dynamic, substrate-independent processes that resist mechanistic reduction.

The mechanistic view of mind assumes that specific cognitive functions require particular neural substrates, much as software requires specific hardware configurations. However, Lorber’s 1980 documentation of hydrocephalic patients with minimal brain tissue yet normal intelligence poses a direct empirical challenge to this framework.

Lorber’s most striking case involved a university student with an IQ of 126 whose brain scan revealed cerebral tissue compressed to a layer just millimetres thick, roughly 5% of normal brain volume. Such cases weren’t isolated anomalies but represented a pattern among severe hydrocephalus patients who developed normally despite dramatic structural deficits.

These findings are incompatible with standard mechanistic assumptions:

  • Hardware Specificity Problem: Turing machines require specific hardware configurations to execute programs. Yet Lorber’s patients achieved normal cognitive output with radically different “hardware,” suggesting cognition isn’t tied to particular neural architectures.
  • Functional Localization Failure: Classical computational models assume cognitive functions map to discrete brain regions, like subroutines to memory addresses. But these patients maintained complex reasoning, language, and memory with minimal cortical tissue, implying functions aren’t localized to specific substrates.
  • Processing Power Paradox: Information processing should correlate with available computational resources. Yet patients with 95% brain tissue loss showed no proportional cognitive deficits, violating basic computational scaling principles.

The Neural Plasticity Argument Against Mechanism

The extreme neuroplasticity demonstrated in these cases reveals properties fundamentally at odds with Turing machine characteristics:

  • Dynamic Reconfiguration: While Turing machines follow fixed programs, these brains continuously reorganized themselves. The same minimal tissue performed vastly different functions across development, suggesting cognition emerges from dynamic processes rather than static algorithms.
  • Holographic Function: Unlike digital systems where data loss degrades performance proportionally, these patients maintained integrated cognitive function despite massive “data loss.” This suggests a holographic rather than digital organization of mental processes.
  • Substrate Independence: Most remarkably, normal cognition persisted across radically different physical substrates. This implies consciousness isn’t reducible to particular material arrangements, challenging core materialist assumptions underlying computational theories.

Implications for Theories of Consciousness

If consciousness were purely computational, Lorber’s cases would be impossible. A Turing machine with 95% of its components destroyed couldn’t maintain complex processing. Yet these patients not only survived but thrived cognitively.

This suggests several anti-mechanistic conclusions:

  • Emergence Over Reduction: Consciousness may be an emergent property that arises from but isn’t reducible to neural activity. Like wetness emerging from H2O molecules, consciousness might emerge from neural patterns while possessing irreducible properties.
  • Field Models: Perhaps consciousness operates more like an electromagnetic field: distributed, dynamic, and capable of maintaining coherence despite local disruptions. This would explain how minimal neural tissue could sustain complex mental states.
  • Non-Algorithmic Processing: Rather than executing discrete algorithms, the brain might operate through continuous, analog processes that resist digital modelling. Consciousness would be fundamentally non-computational, as Hanna and Penrose hold.

Addressing Counterarguments

  • Distributed Processing Defense: Mechanists might argue that hydrocephalic brains demonstrate distributed rather than localised processing, still computational, but more flexible. However, this doesn’t address why 95% tissue loss produces no proportional functional loss, or how the same tissue can serve multiple cognitive roles simultaneously.
  • Compensation Mechanisms: Claims that subcortical structures compensate for cortical loss still assume mechanistic processing, just relocated. But this doesn’t explain how patients maintain the full spectrum of cortical functions with minimal tissue, or why compensation is so remarkably complete.
  • Developmental Adaptation: While early developmental plasticity might explain some adaptation, it doesn’t address why the mature brain maintains such flexibility, or how fundamental cognitive architectures can be so radically reorganized while preserving function.

Lorber’s hydrocephalus cases provide compelling evidence that consciousness operates through principles incompatible with Turing machine models. The extraordinary plasticity, substrate independence, and holographic functionality observed in these patients point toward post-mechanistic theories of mind.

Instead of emerging from computational algorithms running on neural hardware, consciousness might emerge from dynamic, field-like processes that can maintain coherence across vastly different material substrates. This doesn’t deny the importance of the brain, but suggests consciousness transcends simple mechanistic reduction. This conclusion will be strengthened by our consideration of the limits of contemporary theories of consciousness in the next section.


Against Professional Philosophy is a sub-project of the online mega-project Philosophy Without Borders, which is home-based on Patreon here.

Please consider becoming a patron!