Crisis? What Crisis? The Case For Neo-Intuitionism in Formal Science, Natural Science, and Philosophy, #1–Introduction.[i]

(Album cover: Supertramp, Crisis? What Crisis?, 1975)

TABLE OF CONTENTS

I. Introduction

II. What is Neo-Intuitionism?

III. Putting Neo-Intuitionism To Work

IV. Conclusion

REFERENCES




I. Introduction

During the first three decades of the 20th century, formal science, natural science, and philosophy collectively underwent a mega-revolution, by which I mean an extension and generalization of the familiar notion of a scientific revolution in Thomas Kuhn’s sense of that term (Kuhn, 1970), such that there was a multi-membered set of internally-connected, synchronous, radical paradigm-shifts in local or specific scientific “disciplinary matrices,” and also, globally or generally, a radical change in worldview. More specifically, out of that early 20th century mega-revolution emerged

(i) Whitehead-Russell-Gödel-Tarski mathematical logic (Whitehead and Russell, 1962; Gödel, 1967; Tarski, 1943, 1956),

(ii) Zermelo-Fraenkel well-ordered set theory (Zermelo, 1930, 1967a, 1967b, 1967c; Hallett, 1984; Potter, 1990: ch. 7; Bagaria, 2021),

(iii) Turing computer science and artificial intelligence (Turing, 1936/1937, 1950),

(iv) Einstein special and general relativity (Lorentz, Einstein, Minkowski, and Weyl, 1923/1952; Born, 1962),

(v) Planck-Poincaré-de Broglie-Bohr-Born-Heisenberg-Schrödinger quantum mechanics (Dirac, 1930/1958),

(vi) microbiology and Dobzhansky-Mayr “modern synthesis” evolutionary biology (Mayr, 1985),

(vii) classical Analytic philosophy, driven by logicism, i.e., the project of reducing mathematics to logic (Hanna, 2021: esp. chs. II-X), and, comprehending all of these,

(viii) the mechanistic worldview, centered on the root metaphor of the machine (for example, a clock, a steam engine, or, paradigmatically since the mega-revolution, a digital computer), which says:

everything in the world is fundamentally either a formal automaton or a natural automaton, operating strictly according to Turing-computable algorithms and/or time-reversible or time-symmetric deterministic or indeterministic laws of nature, especially the Conservation Laws (including the 1st Law of Thermodynamics) and the 2nd Law of Thermodynamics, which also imposes always-increasing entropy—i.e., the always-increasing unavailability of any system’s thermal energy for conversion into mechanical action or work—on all natural mechanisms, until a total equilibrium state of the natural universe is finally reached (see Hanna and Paans, 2020; Hanna, 2022a: esp. chs. 1-2 and 4).
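
For orientation, the two thermodynamic laws just invoked can be stated in their bare textbook forms, as follows (a minimal sketch in standard notation, not any particular mechanist’s own formulation):

```latex
% 1st Law (energy conservation): the change in a system's internal energy
% equals the heat added to it minus the work it does on its surroundings.
\[
\Delta U = Q - W
\]
% 2nd Law (entropy): the entropy of an isolated system never decreases,
% and is maximal at total thermodynamic equilibrium.
\[
\Delta S \geq 0
\]
```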

Correspondingly, to give this mega-revolution a handy label, let’s call it the mechanistic turn.

Nevertheless, despite its triumphalist ideology, the mechanistic turn has actually encountered a mega-crisis, by virtue of the stunning theoretical fact that, over the ensuing 100 years since the early 20th century mega-revolution, no fundamental progress has been made in any of these sciences or in Analytic philosophy. And that’s precisely because a set of basic and indeed framework-testing “open problems” in each of these sciences and in Analytic philosophy alike—open problems originally embedded in the interstices of the mega-revolution itself and gradually emerging into view over the last century—still continues to defy any adequate resolution.

Here are the principal examples.

After 100 years, mathematical logic still hasn’t fully faced up to and adequately resolved

(i) Cantor’s paradox, according to which if there were a greatest cardinal number, which counted or enumerated an absolutely universal set, “the universe V of all sets” (Bagaria, 2021: section 4), then by Cantor’s theorem (sketched formally just after this list) the power-set of V would have a cardinality strictly greater than the cardinality of V, hence V both would and would not be greater in size than itself—and yet there must still be an absolutely universal domain for set theory,

(ii) Russell’s paradox, according to which the set of all sets that are not members of themselves both is and is not a member of itself, hence Frege’s “naïve comprehension axiom” (Frege, 1964) must be false—and yet it must still be the case that set theory applies to every actual or possible object in the mathematical universe,

(iii) Gödel’s incompleteness theorems, according to which every consistent logico-mathematical system at least as rich as Peano arithmetic contains undecidable, uncomputable, unprovable truths, and, more generally, truth-in-a-logico-mathematical-system cannot be determined by Turing-computable algorithms, i.e., recursive functions (Church, 1937), or by formal proof, nor can it be determined internally to that system—and yet it must still be the case that something is an inexhaustible source of truth for mathematics, thereby demonstrating its consistency,

(iv) the Liar paradox, according to which sentences that assert their own falsity are both true and false, and Tarski’s corresponding demonstration that every language rich enough to contain its own truth-predicate yields instances of the Liar, hence truth-in-that-language must be defined outside that language, in a metalanguage—and yet all natural languages contain their own truth-predicates,

(v) the Löwenheim-Skolem (LS) theorem, according to which every theory satisfied by a denumerably or non-denumerably infinite model is also satisfied by a merely denumerable model (downward LS), together with the converse result proved by Tarski (upward LS), according to which every theory with an infinite model also has models of arbitrarily large infinite cardinality—and yet Cantor’s diagonal argument shows that there must be a sound and consistent mathematics of non-denumerably infinite or transfinite numbers (Cantor, 1891, 2019), and, above all,

(vi) the patent fact of “deviant logics” like intuitionistic logic and dialetheic logic, and correspondingly what I’ve called the e pluribus unum problem of determining whether there is just One True Logic or, on the contrary, irreducibly many essentially different logics—and yet any attempt to justify or explain logic must already presuppose and use logic, hence, due to that circularity, logic is unjustified and inexplicable, i.e., the logocentric predicament (Sheffer, 1926; Hanna, 2006: chs. 2-3, 2022b).
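
Since the first, second, and fourth of these problems turn on short formal derivations, here is a minimal sketch of them in standard notation (my own compression, for orientation only, and not a substitute for the cited texts):

```latex
% (i) Cantor's theorem: for any set X, no f : X -> P(X) is onto, since the
% diagonal set D = {x in X : x not in f(x)} differs from every f(x); hence
% |P(X)| > |X|. Applied to the universal set V (noting that P(V), being a
% collection of sets, is a subset of V), this yields Cantor's paradox:
\[
|\mathcal{P}(V)| > |V|
\quad\text{and yet}\quad
\mathcal{P}(V) \subseteq V \;\Rightarrow\; |\mathcal{P}(V)| \le |V| .
\]
% (ii) Russell's paradox: naive comprehension licenses R = {x : x not in x},
% whence the contradiction:
\[
R = \{\, x : x \notin x \,\} \;\Rightarrow\; \big( R \in R \leftrightarrow R \notin R \big).
\]
% (iv) The Liar: Tarski's T-schema applied to a sentence L that says of
% itself that it is not true yields:
\[
\mathrm{True}(\ulcorner L \urcorner) \leftrightarrow L,
\quad
L \leftrightarrow \neg\mathrm{True}(\ulcorner L \urcorner)
\;\Rightarrow\;
\mathrm{True}(\ulcorner L \urcorner) \leftrightarrow \neg\mathrm{True}(\ulcorner L \urcorner).
\]
```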

After 100 years, Zermelo-Fraenkel set theory plus the axiom of choice, according to which every collection of non-empty sets has a choice function, aka ZFC, still hasn’t fully faced up to and adequately resolved the open problems of

(i) demonstrating the truth or falsity of the Continuum Hypothesis (CH), which says that there is no variety of infinity having a cardinality strictly between the denumerable infinity of the natural numbers and the non-denumerable infinity of the real numbers (CH is stated formally just after this list),

(ii) fully theoretically accommodating the implications of proofs showing the logical independence of CH from ZFC—i.e., the logico-mathematical fact that CH can be shown to be undecidable in ZFC by using “the forcing technique” discovered by Paul Cohen (Cohen, 1966)—and of the further fact that,

[a]s a result of 50 years of development of the forcing technique, and its applications to many open problems in mathematics, there are now literally thousands of questions, in practically all areas of mathematics, that have been shown independent of ZFC,

which has led to a frantic “search for new axioms” (Bagaria, 2021: section 9), or

(iii) fully theoretically accommodating the paradoxical fact that “proper classes,” i.e., collections (like the universe V itself) that are too big to be sets, cannot consistently be treated as members of the universe V of set theory, thereby constantly threatening Cantor’s paradox and/or Russell’s paradox all over again.
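
For orientation, CH and its independence from ZFC can be stated compactly as follows (standard notation; the independence combines Gödel’s 1940 consistency proof with Cohen’s forcing proof):

```latex
% The Continuum Hypothesis: there is no cardinality strictly between that
% of the natural numbers (aleph_0) and that of the real numbers (2^aleph_0):
\[
\mathrm{CH}:\qquad 2^{\aleph_0} = \aleph_1 .
\]
% Independence: if ZFC is consistent, then ZFC neither proves nor refutes CH.
\[
\mathrm{ZFC} \nvdash \mathrm{CH}
\qquad\text{and}\qquad
\mathrm{ZFC} \nvdash \neg\,\mathrm{CH} .
\]
```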

After 100 years, Turing computer science and AI still haven’t fully faced up to and adequately resolved the paradox that because there are uncomputable functions that belong essentially to logic, mathematics, and the other formal and natural sciences, it follows that a computing machine exhibiting AI can only ever be of inherently limited scientific intelligence—and yet the Turing test entails that the scope of the scientific intelligence of AI is unlimited.
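
To make the uncomputability claim concrete, here is a minimal sketch, in Python, of Turing’s diagonal argument against a halting decider; the function halts is hypothetical (no total, always-correct implementation can exist), which is precisely the point:

```python
# A minimal sketch of Turing's diagonal argument. Suppose, for
# contradiction, that halts(program_source, input_data) were a total,
# always-correct decider returning True iff running the given program
# on the given input eventually halts.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical halting decider -- assumed here, not implementable."""
    raise NotImplementedError("No correct, total implementation can exist.")

def diagonal(program_source: str) -> None:
    # Ask the hypothetical decider about a program run on its own source.
    if halts(program_source, program_source):
        while True:  # If halts says "halts," loop forever...
            pass
    # ...and if halts says "loops forever," halt immediately.

# Feeding diagonal its own source code yields the contradiction:
# diagonal halts on that input if and only if halts says it does not.
```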

After 100 years, the Standard Models of cosmology and particle physics still haven’t fully faced up to and adequately resolved

(i) the manifest theoretical incoherence between the cosmology of special and general relativity (as deterministic and committed to “local” or speed-of-light-constrained causation) and the particle physics of quantum mechanics (as indeterministic and committed to “non-local” or superluminal causation—or, in Einstein’s famous put-down phrase, “spooky action-at-a-distance”),

(ii) how correctly to interpret quantum mechanics: anti-realistic/Copenhagen, realistic/many-worlds, realistic/superdeterministic, or realistic/Bohmian, or

(iii) the theoretical disconnect between the highly abstract, technically complex, highly speculative, and even wildly fictional character of the leading theories of mathematical physics on the one hand, and real-world empirical or phenomenal data or evidence and experimental physics on the other.

After 100 years, microbiology and Dobzhansky-Mayr “modern synthesis” evolutionary biology still haven’t fully faced up to and adequately resolved the paradox that although macrobiology is all and only about non-mechanical living organisms and organic processes, microbiology and Dobzhansky-Mayr “modern synthesis” evolutionary biology are all and only about machines and mechanical processes.

And after 100 years, contemporary post-classical Analytic philosophy still hasn’t fully faced up to and adequately resolved the paradox that although classical Analytic philosophy is all and only about logico-linguistic analysis, analytically necessary truths, and analytic a priori knowledge, nevertheless W.V.O. Quine’s four seminal essays, “Truth By Convention,” “Two Dogmas of Empiricism,” “Carnap and Logical Truth,” and “Epistemology Naturalized” (Quine, 1961, 1969, 1976b, 1976c) had in fact effectively undermined the very ideas of logico-linguistic analysis, analytically necessary truths, and analytic a priori knowledge, by the end of the 1950s (Hanna, 2021: esp. ch. XVI).

Now, all of this is honest-to-Kant true, even if, as it seems, most of the practitioners of “normal science” (Kuhn, 1970) in all these sciences and in post-classical Analytic philosophy aren’t even self-consciously aware of the mega-crisis: for, it seems, they’re mostly either simply blithely unaware of it, or else perhaps occasionally afflicted by a certain intellectual unease deriving from it, one that’s easily sublimated or suppressed by what Jeff Schmidt has aptly called the ideological discipline that’s characteristic and indeed partially constitutive of the recent and contemporary professional academy—especially in the sciences and in philosophy—and also by what Susan Haack has equally aptly called the professional academy’s system of perverse incentives (Schmidt, 2000; Turner and Chubin, 2020; Haack, 2022; Hanna, 2022c, 2022d). Moreover, assuming that at least some of them are self-consciously aware of the mega-crisis, they almost always don’t dare to admit it publicly: for example, as Sabine Hossenfelder, herself a contemporary physicist, has daringly publicly pointed out, most contemporary physicists simply don’t dare to admit publicly the existence of the local or specific crisis in contemporary physics (Hossenfelder, 2018, 2022). Let’s call this lamentable, pervasive intellectual pathology of contemporary scientists and philosophers, whether unself-conscious and simply ideologically disciplined, or self-conscious and cravenly ideologically disciplined, crisis-denial.

Meanwhile, and now thinking morally and sociopolitically, the looming existential threats for humankind of

(i) atomic or biochemical annihilation,

(ii) adverse climate change and other ecological disasters created by industrial technology—again, see the image of the famously edgy and prescient album cover of Supertramp’s 1975 LP, Crisis? What Crisis?, that’s displayed at the top of this essay, and

(iii) malign applications and uses of digital technology, especially including those that are carried out under the banner of “strong artificial intelligence,” aka strong AI,

can also be at least partially and non-trivially laid at the door of the formal and natural sciences and their “handmaiden” or “underlaborer,” Analytic philosophy, by virtue of their intimate and profound complicity and entanglement with the “military-industrial complex” and technocratic global corporate capitalism over the last 100 years: in short, Frankenscience.

In the face of crisis-denial and Frankenscience, and for better or worse, elsewhere I’ve tried to grapple comprehensively and fundamentally with the philosophical, moral, and sociopolitical issues flowing from them (Hanna, 2022a: esp. chs. 1-2, section 3.6, and ch. 5). But in this essay, in order to keep the discussion manageably well-focused, I want to concentrate more narrowly on some fundamental cognitive, epistemic, metaphysical, methodological, and semantic problems in the mega-crisis in contemporary formal science, natural science, and post-classical Analytic philosophy, and to propose a unified approach to resolving those problems that I call neo-intuitionism.

NOTE

[i] Of course, as per the image at the top of this essay, I’m echoing the famously edgy and prescient album cover of the 1975 LP by the British rock band Supertramp. But more remotely, I’m also echoing Edmund Husserl’s unfinished Crisis of European Sciences, written in 1936, but not published until 1954 (Husserl, 1970; see also Hanna, 2014).

