Intractability in Cognitive Science: Further Problems for Mechanism.

(FreePik, 2025)



1. Introduction

In an earlier essay, we explored and critically assessed the inherent limitations of the cognitive science research program that holds that the human mind is fundamentally a machine (Smith, Smith, and Stocks, 2025). In this essay, we extend that argument by means of formal proofs.

Cognitive science has long grappled with methodological challenges: noisy data, measurement error, statistical inference problems, and the underdetermination of theory by evidence. The field has invested considerable energy in addressing these issues through improved experimental design, statistical methods, and open science practices. But what if these familiar obstacles were completely resolved? What if cognitive scientists had access to perfect, error-free observations and complete data sets, with no uncertainty about which effects are real and which theoretical interpretations are correct?

Intuitively, one might expect that under such idealized conditions, the core task of cognitive science—inferring the cognitive processes that produce observed behavior—would become straightforward. Patricia Rich, Ronald de Haan, Todd Wareham, and Iris van Rooij demonstrate that this intuition is fundamentally mistaken in their paper “How Hard is Cognitive Science?” (Rich et al., 2021a).

Even in this best-case scenario, the abductive inference problems that define cognitive science remain computationally intractable, and in some cases are outright uncomputable.

This result has profound implications for our understanding of what cognitive science can achieve and how it should proceed. The authors’ formal analysis reveals that the difficulties facing cognitive scientists are not merely practical hurdles to be overcome through better methodology, but reflect deeper mathematical constraints on the inference process itself.

A cognitive scientist, “Dr. Conjectura,” faces the following inference problem: given knowledge of a cognitive system’s inner workings and its behavior, infer cognitive processes that explain that behavior. Rich et al. use computational complexity theory to analyze the intractability of abductive inference problems in cognitive science. Abductive inference involves generating explanations (functional or algorithmic) consistent with observed data D, under background assumptions about the class of functions F or algorithms A, expressed in a language L_F or L_A with a length bound K.

The proofs establish the “Conjectura theorems” (Theorems 1-8), showing that these inference problems are either uncomputable (when K = infinity) or intractable (NP-hard when K is a natural number), even for tractable cognitive systems (computable in polynomial time). The supplementary materials contain full formal proofs, including reductions from known hard problems in computability and complexity theory (Rich et al., 2021b).

Below is an outline of the key proof strategies for each theorem, based on the paper’s descriptions and standard techniques (e.g., reductions from the Halting Problem for uncomputability and from 3-SAT for NP-hardness). These outlines highlight core ideas, assumptions, and conclusions, without full mathematical details.

2. Key Definitions and Setup

Computability: A problem is computable if an algorithm (mechanical procedure) exists to solve it; otherwise, uncomputable (e.g., via Turing machines).

Tractability: Solvable in polynomial time (P); otherwise, intractable. Polynomial time for an algorithm means that there exist integers X and y such that, for input data of length n, the computation is completed in no more than X·n^y steps.

Tractable Verifiability: Candidate solutions can be checked in polynomial time (the defining property of NP).

Assumptions: L_F (or L_A) is tractable (polynomial-time interpretable), and F (or A) contains only tractable functions/algorithms. Data D is error-free and complete.

Proof Technique Overview: Uncomputability is shown via diagonalization or reduction from the Halting Problem; intractability via polynomial-time reductions from NP-complete problems (e.g., showing the inference problem is at least as hard as 3-SAT); verifiability via efficient checking algorithms.

Functional-Level Abduction: (F, L_F)-ABDUCTIVE INFERENCE

Theorem 1: Uncomputable when K = infinity, for some L_F

Statement: Without a length bound on explanations, finding a (possibly infinite-length) description in L_F consistent with D is uncomputable, even if L_F is tractable.

Proof Outline:

Reduction from the Halting Problem: The Halting Problem (undecidable: given a Turing machine M and input x, does M halt on x?) is reduced to abduction. Construct a data set D encoding instances of halting queries.

Diagonalization Argument: Assume a computable L_F that generates all possible functions in F. Enumerate all possible descriptions L_F^i (via a dovetailing procedure over infinite strings). For each, simulate the corresponding function F_i = L_F(S, B, L_F^i) on a diagonal input (e.g., input i) and construct a diagonal function F* that differs from F_i at position i (e.g., F*(i) = F_i(i) + 1).

Contradiction: F* must be describable in L_F, but no finite (or even effectively enumerable) description can capture the diagonal function without deciding which computations halt, yielding undecidability.

Key Lemma: Any effective enumeration of F via L_F allows self-referential constructions that evade completeness.

Conclusion: Uncomputability arises from the unbounded search space of explanations, not system complexity. Holds even for simple F (e.g., all computable functions).
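The diagonalization move can be made concrete with a toy sketch in Python (illustrative only: a finite list of functions stands in for an effective enumeration of F, and the construction F*(i) = F_i(i) + 1 is as in the outline above):

```python
# Toy illustration of the diagonalization step in Theorem 1 (a sketch,
# not the paper's formal construction): given any enumeration of total
# functions f_0, f_1, ..., the diagonal function F*(i) = f_i(i) + 1
# differs from every f_i at input i, so it cannot appear in the list.

def diagonal(enumeration):
    """Return F*, which differs from enumeration[i] at input i."""
    return lambda i: enumeration[i](i) + 1

# A small (finite) enumeration standing in for the functions named by L_F.
fs = [lambda n: 0, lambda n: n, lambda n: n * n]

f_star = diagonal(fs)

for i, f in enumerate(fs):
    # F* disagrees with the i-th enumerated function at input i.
    assert f_star(i) != f(i)
print("F* escapes the enumeration at every diagonal point.")
```

In the actual proof the enumeration is infinite and effective, and evaluating F* on the diagonal would require deciding which simulations halt, which is where undecidability enters.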

Theorem 2: Computable when K is a natural number, for all L_F

Statement: With a finite length bound K, the problem is computable (a brute-force algorithm exists).

Proof Outline:

Enumeration Algorithm: Generate all candidate expressions l_F of length less than or equal to K over the alphabet of L_F (finitely many, though exponential in K).

Interpretation and Check: For each candidate l_F, compute F = L_F(S, B, l_F) (polynomial time, by assumption) and verify consistency with D (check that for all (s, b) in D, b is in F(s); polynomial in |D|).

Output: Return the first consistent l_F, or “None” if none exists.

Conclusion: Computability is restored by finiteness, but efficiency is not guaranteed (exponential time in K).
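The brute-force algorithm behind Theorem 2 can be sketched as follows (a minimal illustration, assuming a toy language in which a bit-string of length at most K is read as a truth table over situations; the names `interpret` and `abduce` are ours, not the paper's):

```python
from itertools import product

# Brute-force abduction sketch for Theorem 2 (toy encoding, assumed for
# illustration): situations are indices 0..n-1, behaviors are bits, and a
# candidate explanation is a bit-string of length <= K read as a truth table.

def interpret(expr, s):
    """Toy interpretation: expression 'expr' maps situation s to a bit."""
    return int(expr[s]) if s < len(expr) else None

def abduce(D, K, alphabet="01"):
    """Enumerate all expressions of length <= K; return the first one
    consistent with every (situation, behavior) pair in D, else None."""
    for length in range(K + 1):
        for chars in product(alphabet, repeat=length):
            expr = "".join(chars)
            if all(interpret(expr, s) == b for s, b in D):
                return expr
    return None

D = [(0, 1), (1, 0), (2, 1)]      # observed (situation, behavior) pairs
print(abduce(D, K=3))              # a consistent explanation, if one exists
```

The procedure is computable but visits up to |alphabet|^K candidates, which is exactly the exponential cost the theorem's conclusion flags.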

Theorem 3: Intractable (NP-hard) when K is a natural number and L_F tractable, for some L_F

Statement: Bounded-length abduction is intractable, even for tractable F.

Proof Outline:

Reduction from 3-SAT (NP-complete): Given a 3-SAT instance phi with n variables and m clauses, construct S, B, D such that situations s encode variable assignments (e.g., S = {0,1}^n), behaviors b encode clause satisfiability, and D includes pairs enforcing phi’s constraints.

Encoding Explanations: Design L_F to describe functions F as truth assignments (e.g., L_F is a string specifying literal values). Consistency requires F to satisfy all clauses in D.

Polynomial-Time Reduction: Mapping phi to D takes polynomial time in (n+m); |D| = O(m), K = O(n). A short consistent L_F exists if and only if phi is satisfiable.

Key Lemma (Hardness Preservation): Since 3-SAT is NP-hard and the reduction preserves polynomial-time solvability, abduction inherits hardness.

Conclusion: No polynomial-time algorithm exists unless P=NP. Intractability holds for non-trivial F (e.g., linear functions or simple decision rules).
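The flavor of the reduction can be sketched with a toy encoding (our own illustrative encoding, not the paper's construction): each clause becomes a datum in D, and a candidate explanation is a truth assignment over the n variables.

```python
from itertools import product

# Sketch of the Theorem 3 reduction idea (toy encoding, assumed for
# illustration): each clause of a 3-SAT formula phi becomes a datum in D,
# and a candidate explanation is a truth assignment of length K = n.
# A consistent explanation exists iff phi is satisfiable.

def clause_to_datum(clause):
    """A clause like [(0, True), (1, False), (2, True)] (variable index,
    polarity) is itself the consistency check the datum enforces."""
    return clause

def consistent(assignment, D):
    """The explanation 'assignment' is consistent with D iff every
    clause-datum has at least one satisfied literal."""
    return all(any(assignment[v] == polarity for v, polarity in clause)
               for clause in D)

# phi = (x0 or not x1 or x2) and (not x0 or x1 or x2)
phi = [[(0, True), (1, False), (2, True)],
       [(0, False), (1, True), (2, True)]]
D = [clause_to_datum(c) for c in phi]

# Brute-force search over assignments mirrors abduction with K = n = 3;
# no known algorithm avoids this exponential search unless P = NP.
solutions = [a for a in product([False, True], repeat=3) if consistent(a, D)]
print(len(solutions), "satisfying assignments found")
```

Because the mapping from phi to D is polynomial, any polynomial-time abduction algorithm would solve 3-SAT in polynomial time, which is the hardness-preservation point of the Key Lemma.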

Theorem 4: Tractably Verifiable when K is a natural number and L_F tractable, for all L_F

Statement: Given candidate L_F, verify consistency in polynomial time.

Proof Outline:

Verifier Algorithm: Interpret F = L_F(S, B, L_F) (polynomial time), then check |D| pairs (each check polynomial in |s|, |b|).

Bounded Size: |L_F| less than or equal to K ensures polynomial input size.

Even for Limited Systems: Holds for finite automata (verifiable via graph reachability in polynomial time).

Conclusion: Explanations are recognizable if found (e.g., by luck or heuristics), but generating them is hard.
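The asymmetry between finding and checking can be seen in a minimal sketch (assuming, for illustration, that the candidate explanation is handed to us as a callable):

```python
# Polynomial-time verification sketch for Theorem 4 (illustrative, with an
# assumed representation of the candidate explanation as a Python callable):
# checking a given explanation against D is just |D| evaluations, each
# polynomial by the tractability assumption on L_F.

def verify(explanation, D):
    """Return True iff explanation reproduces every observed pair in D."""
    return all(explanation(s) == b for s, b in D)

D = [(0, 0), (1, 1), (2, 4), (3, 9)]
assert verify(lambda s: s * s, D)       # a consistent candidate
assert not verify(lambda s: s + 1, D)   # an inconsistent one
print("verification runs in time linear in |D|")
```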

Algorithmic-Level Abduction: (A, L_A)-ABDUCTIVE INFERENCE

The proofs for Theorems 5-8 are fully analogous to those for Theorems 1-4, with adaptations for constructive languages L_A (e.g., pseudocode specifying steps).

Theorem 5: Uncomputable when K = infinity, for some L_A

Proof Outline: Similar to Theorem 1, but appeal to Rice’s Theorem (the undecidability of every non-trivial semantic property of computable functions). Encode algorithms as Turing machines; unbounded descriptions allow diagonalization over halting behaviors.

Theorem 6: Computable when K is a natural number, for all L_A

Proof Outline: Brute-force enumeration of bounded-length programs, simulation on D (polynomial time per candidate, since A tractable).

Theorem 7: Intractable (NP-hard) when K is a natural number and L_A tractable, for some L_A

Proof Outline: Reduce from Circuit Satisfiability (NP-complete). Encode circuits as bounded programs in L_A; D tests input-output consistency.

Theorem 8: Tractably Verifiable when K is a natural number and L_A tractable, for all L_A

Proof Outline: Simulate bounded program on D (polynomial time simulation for tractable A).
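Analogously to Theorem 4, verification at the algorithmic level is just bounded simulation. A minimal sketch, using a three-instruction toy language that is our own assumption rather than the paper's L_A:

```python
# Theorem 8 sketch: verifying an algorithmic-level explanation means
# simulating the candidate program on every situation in D. The three-
# instruction toy language below is an assumption for illustration only.

def run(program, s):
    """Interpret a list of instructions on input s; each step is O(1),
    so simulation is polynomial in the program's (bounded) length."""
    acc = s
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        elif op == "set":
            acc = arg
    return acc

def verify_algorithm(program, D):
    """Consistency check: the program must reproduce every pair in D."""
    return all(run(program, s) == b for s, b in D)

# Candidate explanation: b = 2*s + 1, written in the toy language.
candidate = [("mul", 2), ("add", 1)]
D = [(0, 1), (1, 3), (5, 11)]
print(verify_algorithm(candidate, D))  # True: candidate fits the data
```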

Implications and Robustness: The proofs are robust to variations (e.g., approximate consistency remains hard via approximation-preserving reductions). They underscore that cognitive science’s core inference problem is fundamentally hard, advocating pluralism over proceduralization.

In conclusion, these results would seem to place computational and logical limits upon attempts to infer the processes that explain cognitive activities, and hence would constitute a major impediment to mechanism and to computational models of the mind.

3. Conclusion

The Conjectura theorems establish that the foundational inference problems of cognitive science are mathematically intractable, persisting even under idealized epistemic conditions. These results pose a significant challenge to mechanistic theories of mind.

For mechanistic approaches that assume cognitive processes can be computationally modelled and reverse-engineered from behavioral data, these findings suggest fundamental limitations. If the inference from behavior to mechanism is intractable even with perfect information, then the mechanistic research program faces computational barriers that cannot be overcome simply through methodological refinement or increased computational power. The astronomical search spaces involved in abductive inference mean that even systems with the computational resources of the entire universe operating for cosmic timescales could not guarantee successful mechanism discovery.

The authors’ call for methodological pluralism becomes particularly relevant here. If no single algorithmic approach can efficiently solve the abductive inference problem, then the diversity of research methods and theoretical frameworks in cognitive science represents not a sign of immaturity, but a necessary response to mathematical reality. Different research traditions may explore different regions of the vast space of possible explanations, with the community’s collective efforts serving as a distributed search strategy. The intractability theorems demand greater humility about the scope of cognitive science and more sophisticated approaches to the discovery process. They are yet another example of the need for epistemic humility in science.

REFERENCES

(FreePik, 2025). “Gear Head Concept.” FreePik. Available online at URL = <https://www.freepik.com/free-vector/gear-head-concept_4347634.htm>.

(Rich et al., 2021a). Rich, P., de Haan, R., Wareham, T. and van Rooij, I. “How Hard is Cognitive Science?” Proceedings of the Annual Meeting of the Cognitive Science Society 43. Available online at URL = <https://escholarship.org/uc/item/8cr8x1c4>.

(Rich et al., 2021b). Rich, P., de Haan, R., Wareham, T. and van Rooij, I. “Supplementary Materials for ‘How Hard is Cognitive Science?’.” OSF Home. Available online at URL = <https://osf.io/gpkhj/>.

(Smith, Smith, and Stocks, 2025). Smith, J.W., Smith, S.J., and Stocks, N. “Mind, Mechanism, and Materialism: The Case Against the Computational Theory of Mind and Artificial General Intelligence.” Against Professional Philosophy. 19 October – 23 November. Available online HERE.


Against Professional Philosophy is a sub-project of the online mega-project Philosophy Without Borders, which is home-based on Patreon here.

Please consider becoming a patron!