APP Editors’ Note: Andreas Keller is an independent philosopher living in Hannover, Germany, and currently working as a computer programmer.
***
In Part 3, Section 2.2 (“Natural Mechanism, Computability, and Anti-Mechanism”) of “The Rational Human Condition”[i], Robert Hanna proposes a definition of what he calls “natural mechanism,” in terms of mechanistic laws of physics. He seems to assume that such mechanistic laws must be Turing-computable:
“…what I am proposing is that anything’s causally efficacious behaviors, functions, operations, and/or states are inherently mechanical in both their existence and their specific character if and only if:
[…]
(ii) they strictly conform to The Church-Turing Thesis (aka “Church’s Thesis”).”
The Church-Turing Thesis is, however, not a statement about the physical world, so it does not really make sense here, although many people seem to understand it that way. The Church-Turing Thesis does not say that every physical process must be describable in terms of a computable function, or, to use an equivalent notion, in terms of a formal theory.
Technically speaking, the Church-Turing Thesis says that all formal models of the concept of computation describe the same set of functions. This means that the different attempts to define the intuitive concept of computability (e.g. Turing machines, recursive functions, the lambda calculus, etc.) are equivalent. The Church-Turing Thesis is thus a statement about properties of formal systems. All the different approaches to the concept of computability start from the requirement that the concept of computation has to be described in formal terms.
There is no contradiction in thinking that the set of mathematical equations describing a physical entity might contain non-computable functions. The Church-Turing Thesis applies to formal models of computation, not to physical objects. That all the different ways of formalizing the concept of computability lead to the same set of computable functions does not mean that physics must conform to such functions. It means that if you start with the requirement to develop a formal concept of computation, you end up with the Turing-computable functions. Tacitly extending this to physics means adding the unspoken assumption that physics is completely formalizable. This, however, is an unproven hypothesis, not an established fact, let alone an a priori statement.
What does it mean to say that a certain physical system (or the set of laws, in most cases equations, describing it) is computable? It means that we can solve the equations in all instances, using a finite body of mathematical knowledge. If we incorporated that mathematical knowledge into a computer program, we could enter a description of the initial state of the system and calculate any later state of it at the push of a button.
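As a rough illustration of this “push of a button” picture, here is a minimal sketch (all names are hypothetical, and the trivial “law” used in the example is only a stand-in):

```python
# Minimal sketch of the "push of a button" picture: if the laws could
# always be turned into an effective update rule, then computing any
# later state would reduce to iterating that rule. Purely illustrative.

def simulate(initial_state, steps, step):
    """Apply a (hypothetical) law-derived update rule repeatedly."""
    state = initial_state
    for _ in range(steps):
        state = step(state)
    return state

# Stand-in "law": constant-velocity motion with time step 0.01.
later = simulate({"x": 0.0, "v": 1.0}, steps=100,
                 step=lambda s: {"x": s["x"] + 0.01 * s["v"], "v": s["v"]})
print(later)  # x is approximately 1.0 after 100 steps
```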
A common way of spelling this out is the “DN” (“deductive-nomological”) model introduced by the philosophers Carl Hempel and Paul Oppenheim. According to this model, a scientific explanation of a phenomenon E to be explained (the “explanandum”, more precisely: a statement describing an observable phenomenon) consists of an enumeration of a number of laws L1, L2, … Lk (I will be talking about the “Ls” in my discussion below) and a number of initial conditions C1, C2, … Cr (the “Cs”). The explanandum is then logically derived (i.e. computed) from the initial conditions and the laws.
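Schematically (a standard textbook rendering of the DN schema, not a quotation from Hempel and Oppenheim):

```latex
% DN schema: the laws together with the initial conditions logically
% entail the explanandum.
\[
  \{L_1, L_2, \ldots, L_k\} \,\cup\, \{C_1, C_2, \ldots, C_r\} \;\vdash\; E
\]
```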
The assumption that physical systems are computable then means that for any set of laws L1, L2, … Lk you can always derive E for any set of initial conditions C1, C2, … Cr, using a finite amount of mathematical knowledge (i.e. a finite algorithm). This is normally taken for granted. It is, as I said above, a tacit assumption whose presence (and hypothetical nature) is not even realized by most people. But it is actually just that: a hypothesis.
Take the so-called three-body problem, for example. The task is to calculate the trajectories of three objects (e.g. planets or stars) influencing each other by gravitation. No general method for calculating their movements is known. The equations can be written down completely; these are the laws (the “Ls” of the DN model). But if you enter an arbitrary set of initial conditions (values for the masses and the initial positions and velocities of the three bodies), there is no known way to calculate what is going to happen in all possible instances. It might turn out that this rather simple set of laws is actually non-computable. I don’t know whether it is, or whether its computability has been proven or disproven, but as far as I know, a complete solution is not known to this day.
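For the purely Newtonian case, the “Ls” are just the familiar equations of motion, written here in standard textbook form:

```latex
% Newtonian three-body laws: each body is accelerated by the gravity of
% the other two (G: gravitational constant, m_j: masses, r_i: positions).
\[
  \ddot{\mathbf{r}}_i \;=\; \sum_{j \neq i} G\, m_j\,
  \frac{\mathbf{r}_j - \mathbf{r}_i}{\lVert \mathbf{r}_j - \mathbf{r}_i \rVert^{3}},
  \qquad i = 1, 2, 3.
\]
```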
Being non-computable means that with any given set of mathematical methods for solving the equations, you can only cover a limited set of cases (although this set might be infinite). So non-computability does not mean that you cannot calculate any values of a non-computable function; it means that any method for doing so is incomplete. You can always add further mathematical methods and thus extend the range of solutions, but the resulting theory (consisting of the physical laws and the mathematical methods needed to apply them) will always remain incomplete, in the sense that you can set the initial conditions in such a way that the development of the “system”[ii] cannot be calculated with the given set of methods.
For example, the number of special cases of the three-body problem for which solutions are known has recently been increased by 1,223 newly found periodic orbits with certain special properties.[iii]
In other words, we cannot take the derivation step, by which E follows from the Cs and Ls, for granted. It is possible that there are physical “systems” that are, in this way, computationally incomplete.
Let us assume that some parts of physics (maybe the three-body problem is an example) indeed involve systems of equations that are not solvable in all cases by any single algorithm.
We would then have to extend the DN scheme by another set of components. We would not only have the laws and the initial conditions, but also a number of calculation methods M1, M2, … Mq. To derive a particular explanandum E from the initial conditions and the laws, we would have to use a number of such computational methods. In a computable system there is, for a given set of laws, a finite set of such methods that covers all cases. We can therefore leave these methods implicit. This is done in the classical DN model, where the calculation is merely hinted at by drawing a line under the Cs and Ls, and it is implicitly assumed that we know how to compute E from them. We can simply say “calculate” and assume that the physicist or mathematician (or computer) knows how to do that.
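Here is a toy sketch of the extended scheme (all names hypothetical and purely illustrative): a finite toolbox of methods, each applicable only to some initial conditions, with the derivation failing whenever no method in the current toolbox applies.

```python
# Toy sketch of the extended DN scheme: laws (Ls), initial conditions (Cs),
# and an explicit, finite toolbox of calculation methods (Ms). Purely
# illustrative; no claim that real physics works this way.

class Method:
    def __init__(self, name, applies_to, solve):
        self.name = name
        self.applies_to = applies_to  # predicate over the initial conditions
        self.solve = solve            # produces the explanandum E

def derive(conditions, laws, toolbox):
    """Derive E from the Cs and Ls using the first applicable M, if any."""
    for m in toolbox:
        if m.applies_to(conditions):
            return m.solve(conditions, laws)
    raise NotImplementedError("No method in the toolbox covers this case; "
                              "a new M would have to be invented.")

# A toolbox with a single method that only handles "small" cases.
toolbox = [Method("perturbation trick",
                  applies_to=lambda c: abs(c["eccentricity"]) < 0.1,
                  solve=lambda c, laws: "approximate orbit")]

print(derive({"eccentricity": 0.05}, laws=None, toolbox=toolbox))
# derive({"eccentricity": 0.9}, ...) would raise: a new M is needed.
```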
But if we do not know how to solve the equations, this implicit approach does not work. Many people simply assume that it must be possible. Behind this lies a way of thinking of nature as a kind of computer. According to this “computing metaphysics”, nature works by somehow performing a calculation. That many people nowadays even seem to accept the idea that we might be living inside a computer simulation shows how widespread this confusion of physical processes with their calculation is.
But a real physical “system” just proceeds according to the physical laws describing it; it does not calculate them. There is, therefore, no contradiction in the idea that some physical processes might not be describable completely in terms of any single formal theory or algorithm. No matter how we set up a three-body experiment, the “system” would just develop according to the laws of physics. That we might not be able to calculate all cases with a finite set of mathematical methods does not stop the three bodies from moving the way they do. The physical laws are a description of the “system”; they are its invariants, but the system does not calculate. We should not assume that the possible is limited by the computable. Computability is a property of our descriptions, not of the real objects. It is a term belonging to epistemology, not to ontology.
It is, therefore, not necessary to assume, as some people (e.g. David Deutsch[iv]) have done, that physics must be restricted to processes describable in terms of Turing machines (or Deutsch’s extended version, quantum Turing machines[v]).
In practice, physicists and other scientists are familiar with the situation of not knowing how to solve the equations. You cannot just calculate the result. The word says it all: the equations are “solved”, not merely “computed”. They are problems that need creativity.[vi] There are books of mathematical methods and tricks, mathematical toolboxes so to speak, and there are, in recent decades, software implementations of that mathematical knowledge in the form of programs like “Mathematica.” Such programs assist in the process of solving equations, but you still need the creative scientist. And in many instances, the equations cannot be solved with the available mathematical knowledge, even if they correctly describe what the “system” is doing. In such cases, scientists use approximation methods (e.g. what are known as “numerical methods”). However, if the equations are non-linear and the system is therefore potentially sensitive to small changes in the initial conditions (which might be introduced by the quantum mechanical nature of the underlying physics), numerical methods, being only approximations, might yield totally wrong results. In such cases, a complete, exact formal description is simply impossible. All we can do is produce models, i.e. “as-if” descriptions that might be practically useful, but only to some extent. Think of the impossibility of long-term weather predictions, for example.
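A minimal illustration of the sensitivity point, using the logistic map as a stand-in for a non-linear system (my own example, not one discussed above): two initial values differing by one part in a billion diverge completely after a few dozen iterations, so a tiny approximation error eventually dominates the result.

```python
# Sensitivity to initial conditions: the logistic map x -> r*x*(1-x) in its
# chaotic regime (r = 4). A difference of 1e-9 in the initial value grows
# until the two trajectories are completely uncorrelated, which is why
# merely approximate numerical results can become worthless.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # perturbed by 1e-9
for n in (0, 20, 40, 60):
    print(n, abs(a[n] - b[n]))
# The difference grows from 1e-9 to order 1 well before step 60.
```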
So let us assume, as a hypothesis, that physical reality might only be partially computable. We might one day find a general theory of everything, but it might turn out to be not very useful because it might be computationally incomplete in principle. We would have the laws (the Ls) and we could put in any initial conditions (the Cs) and implicitly, this would yield a description of anything we apply it to, but in many cases, when we change the Cs, i.e. apply the theory to a new case, we would need new axioms, rules of inference, principles, algorithms, mathematical tricks and methods in our computational tool box (additional Ms) to arrive at a prediction (the E). We would never have a complete set of Ms. The theory would be implicitly complete, but explicitly, or computationally, incomplete. We might be able to calculate many special cases, but never all of them, based on our always finite amount of computational knowledge.
We could still call such “systems” “mechanical”. We could “reduce” them to a limited set of physical laws (the Ls) but would need an unlimited set of computational methods (Ms). So while I do not think that the notion of “mechanistic explanations” needs to contain a requirement of computability, classical reductionism would not work here. Our theories (consisting of the Cs, Ls and Ms) would always be incomplete, so we would never understand reality completely, although we could always improve and extend our understanding of it.
In this situation, we should consider whether our notion of “science” is too restrictive (personally, I think it is) and if we should maybe replace it by a wider concept (akin to what is called “Wissenschaft” in German).[vii] The distinction between sciences and humanities would become less pronounced, but it would not be the methods of science that would conquer the territory of the humanities (as scientism expects) but rather the other way around.
To sum things up: there is no a priori reason to assume that physical reality must be computable. Reality might contain entities that are not computable. This means that every formal theory of such an entity is incomplete. For each formal theory describing such an entity, there would be instances of initial conditions for which the given formal theory does not hold. There might be processes that lead the entity out of the scope of a given theory, so for each formal theory of it, it could develop in such a way that the given theory can no longer be applied. (We may say that the Ls are constant and only new Ms are required, but it might be arbitrary to decide which statements of the theory are laws and which are computational axioms, so the distinction may be blurred, and it might make sense to describe such an entity as developing historically, without fixed laws.)
Such an entity would have more properties (i.e. there would be more true statements about it, including about its history of temporal development) than can be derived in any given formal theory. It might always be possible to extend a given theory to cover more cases, but the new theory would be incomplete again. Such entities are known inside mathematics (for example, by Gödel’s incompleteness theorem, the set of natural numbers is one), and they might exist in the physical world as well. Such an entity would always produce surprises, and its investigation would require creativity. The possibility of error in the process of investigation cannot be excluded, in principle (i.e. we can approach truth, but never be sure we have arrived at it).
To make talking about such things easier (because “entity for which every formal theory describing it is incomplete” is a bit cumbersome), let me suggest a word for them: I call such entities “proteons” (a pseudo-Greek word derived from the name of the god Proteus, who was believed to have no definite shape, to change shape all the time, and to evade having his future predicted by changing his shape). So I suggest the hypothesis (the “proteon hypothesis”) that proteons (in this sense) exist not only in mathematics but also in physical reality. I suggest that the world as a whole is a proteon, and that some of its parts are proteons as well.[viii]
I further suggest opposing the term “proteon” to the term “system.”
A system, in the sense of this terminology, is an entity that can be described completely and exactly by a single formal theory (i.e. by a finite text with a well-defined semantics). A proteon, on the other hand, cannot be described by a single formal theory. Each formal description of it is incomplete, although it might be possible to move from one theory to an extended, more comprehensive, but again incomplete one. We can also add two adjectives, “protean” on the side of proteons and “systematic” on the side of systems. Proteons may be approximated or modeled by systems. They may have different systems as “aspects” or “views” or “models.”
We might distinguish “systematic science” (classical, hard science) from “protean science” (which is more general and may include scholarship, i.e. all kinds of “Wissenschaft”). Systematic science can be considered a restricted special case of protean science. The efforts being made to squeeze those sciences that are not completely systematic (psychology, biology, the “social sciences”, or even the humanities) into the procrustean bed of the systematic model should simply be given up. Those sciences are, at least partially, protean.
In practice, the term proteon may be extended to entities that are systems theoretically, but for which a complete and exact description is not practically feasible. We might speak of “strict” or “theoretical” proteons where a complete formal theory is impossible in principle, and of “practical” proteons where a formal theory is practically infeasible although it might be theoretically possible (I am open to suggestions for improving the terminology). This last group includes systems whose computational complexity makes exact calculations practically impossible.[ix] They are theoretically systems but practically protean (and also cannot be perfectly simulated by computers in practice).
An important aspect of proteons is that for them, there is a trade-off between exactness (or explicitness) of descriptions on the one hand and completeness on the other. To make descriptions complete, you must either make them vague (i.e. you use concepts without a complete or unchanging definition – this is not necessarily a bad thing to do, you just have to push the incompleteness somewhere and this is one of the possibilities), or you have to keep them implicit, for example, you describe something in terms of equations for which no general solution is known.[x] This insight is important for the understanding of the current state of much of philosophy, especially in the context of academic analytic and scientistic philosophy: the attempt to introduce exact formal methods into philosophy and to turn it into a systematic and exact science results in its decay into a large number of very special small theories. The high degree of specialization observable in analytic academic philosophy can thus be explained as a result of trying to make it exact.
I suggest that the central point and core of philosophy (as opposed to science) is: to deal with proteons. Philosophy, therefore, cannot be a science in the restricted sense of systematic science. Trying to do so takes the philosophical core out of philosophy. It restricts philosophy and turns it into a pile of special cases, a number of little “science-lets”. The paradigmatic error of scientistic philosophy, I suppose, lies in the tacit (and I think, wrong) assumption that the computable or formalizable is the same as the possible. If physical proteons exist, this assumption is wrong, and our concept of science would have to change. Philosophy spawns new sciences whenever we learn to describe an aspect of reality as a system (i.e. to formalize it), but it is not a systematic science in itself, and its range of methods cannot be and should not be restricted to those of the sciences.
[ii] I am going to propose in this article not to call such an entity a system, hence the use of quotation marks here.
[iii] Xiaoming Li, Yipeng Jing, and Shijun Liao, “The 1223 new periodic orbits of planar three-body problem with unequal mass and zero angular momentum” (Sept. 2017), available online at URL = <https://arxiv.org/abs/1709.04775>.
[iv] David Deutsch, “Quantum theory, the Church-Turing principle and the universal quantum computer,” Proceedings of the Royal Society A 400, no. 1818 (July 1985): 97–117, doi:10.1098/rspa.1985.0070.
[v] It is quite obvious that quantum mechanical entities, e.g. unstable atomic nuclei which may decay at any time but where there is no way to predict this time, cannot be completely described by means of a formal theory. Deutsch addresses this by introducing a quantum version of computability. But the notion that all physical processes are describable in terms of such quantum Turing machines must be viewed as an unproven hypothesis. One could argue that as long as an entity has a limited energy and limited size, its information capacity and hence its information content is limited (see: <https://en.wikipedia.org/wiki/Bekenstein_bound>) and it might be possible to demonstrate that such entities must be computable at least in principle, although I don’t think this has been demonstrated yet. However, if the entity is expanding (e.g. in the case of the three body problem, if the three bodies are flying apart), its information capacity can grow. As a result, if the theory of the system is non-computable, we may expect that growing information capacity to result in new phenomena (new from the point of view of any given formal theory or algorithm used to describe the entity), i.e. the physical entity could produce surprises (behaviors not derivable inside the current theory about it) at any time, even if it were completely deterministic.
[vi] Creativity, then, is something that in turn cannot be modelled completely by formal theories or algorithms (i.e. conventional AI would be on the wrong track), or else human cognition would be limited in principle. But scientists and mathematicians are obviously able to find novel solutions for cases for which general algorithms are not known, do not exist, or cannot be used because of computational complexity. That humans can solve such problems points to the possibility that the human brain itself cannot be completely formalized (although its operations may be covered completely by physics). This interesting question is beyond the scope of this article (it requires its own lengthy discussion), but it is addressed in the following paper:
Kurt Ammon, “Informal Physical Reasoning Processes” (August 2016), available online at URL = <https://arxiv.org/abs/1608.04672>.
If human cognition cannot, as Ammon suggests, be completely modelled in terms of any given formal theory, then human cultures and societies would be outside the scope of science, as long as science is understood in the narrow sense of providing DN-type explanations, i.e. of describing systems in terms of computable models or formal theories. The division of academic disciplines into sciences and humanities can then be viewed as a result of the attempt by (scientistic) scientists to restrict all explanations to what is computable or formalizable, thus limiting the reach of science to those parts of reality that are.
[vii] Note also that the Popperian notion of falsifiability would no longer work in its classic form. We can never be sure that all the mathematical methods we are using are correct. (Roughly speaking, if we had a procedure to decide whether a given calculation method is correct, we could enumerate all the correct methods by generating all possible programs of a given programming language and then deciding, for each of them, whether it is correct. We could thereby describe all correct methods in a finite way and add this method-generator to our theories; any non-Turing-computable theory would then become Turing-computable, so assuming that such a procedure exists leads to a contradiction.) As a result, we can only falsify the theory consisting of the laws (the Ls) and the computational knowledge (the Ms) together, not the laws alone: there could be errors in our calculations while the laws are correct. Science therefore takes the form of a hermeneutic circle, in which we make sense of observations based on our current theories and methods, and improve those theories and methods based on the observations.
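A rough sketch of this enumeration argument (purely illustrative; `is_correct_method` stands for the hypothetical decision procedure whose assumed existence produces the contradiction):

```python
# If a total decision procedure for the correctness of calculation methods
# existed, all correct methods could be enumerated, and the enumerator could
# be added to the theory, making the theory Turing-computable after all.
from itertools import count, product
import string

def all_programs():
    """Yield every finite string over a fixed alphabet, shortest first."""
    for length in count(1):
        for chars in product(string.printable, repeat=length):
            yield "".join(chars)

def correct_methods(is_correct_method):
    """Hypothetical: filter the enumeration through the assumed decider."""
    return (p for p in all_programs() if is_correct_method(p))
```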
[viii] Note that proteons, by definition, cannot be perfectly simulated by a computer, so proving for an observable physical entity that it is a proteon would rule out the idea – currently popular in some quarters – that we live in a computer simulation (or at least make it very difficult to defend that idea).
[ix] An example seems to be protein molecules, whose folding process seems to be computationally intractable; see, for example, Aviezri S. Fraenkel, “Complexity of protein folding,” Bulletin of Mathematical Biology 55 (1993): 1199, available online at URL = <http://link.springer.com/article/10.1007/BF02460704> (doi:10.1007/BF02460704), where we read the following abstract: “It is believed that the native folded three-dimensional conformation of a protein is its lowest free energy state, or one of its lowest. It is shown here that both a two- and three-dimensional mathematical model describing the folding process as a free energy minimization problem is NP-hard. This means that the problem belongs to a large set of computational problems, assumed to be very hard (“conditionally intractable”). Some of the possible ramifications of this result are speculated upon.”
[x] This trade-off was pointed out in 1987 by Kurt Ammon in his “The Automatic Developments of Concepts and Methods,” Doctoral Dissertation, University of Hamburg, 1987. Ammon discusses it in the context of what he calls “creative systems” (what I would call cognitive proteons), i.e. cognitive “systems” (human or artificial) for which a complete formal theory is impossible. Ammon writes (p. 81): “This means that there is no explicit and general theory about creative processes […]. Furthermore, it means the explicitness of rather general theories and the generality of rather explicit theories about creative processes are very low.”
Against Professional Philosophy is a sub-project of the online mega-project Philosophy Without Borders, which is home-based on Patreon here. Please consider becoming a patron! We’re improvident, but cheerful.