Talk:Theory of everything

Regarding the incompleteness argument against attempting a TOE

Wikipedia talk pages are not a forum, and changes to the article require reliable sources; please see WP:NOTFORUM and WP:RS
The following discussion has been closed. Please do not modify it.


I understand that the section Theory of everything#Arguments against is only meant to give a flavor of the topic and references for it. But I will make my own criticisms of some of those criticisms here, because I feel it is the right place, and some readers will find them interesting.

First of all, the statement "any 'theory of everything' will certainly be a consistent non-trivial mathematical theory" is dubious: a physical theory is not a mathematical theory. There are actually several notions of a mathematical theory, the most commonly used being a first-order classical theory with equality, but we could well consider alternatives, like intuitionistic logic or other non-classical logics. Beyond this, when modeling some physics we attribute a physical interpretation to only a few mathematical variables and well-formed formulas, clear from the context. When we work in ZF set theory we construct real numbers, vector spaces, and many other mathematical objects, and then apply them intuitively to some physical situation, assuming a certain level of accuracy and limitations on how faithful our mathematical description is. But there is of course plenty we can express in ZF that is not supposed to model anything physical, whatever the context and whoever the physicists or mathematicians doing the modeling. In fact much weaker theories than ZF probably suffice to prove all rigorous results of current physics -though of course there are many concepts which are not yet rigorous, like solutions of the equations of fluid mechanics, or the quantum field theories of high-energy physics. It seems unrealistic to expect any nontrivial mathematical theory to be a physical theory -in particular a TOE. After all, how could we expect a simple intellectual construct to exactly match some set of physical phenomena? And there is little point in attempting that: it is enough to have a flexible mathematical framework in which to easily model physics of all kinds, which is what usual set theory provides.

Next, it seems to me that one difficulty with the existence of a TOE is not mentioned in that section: proving the existence of mathematical objects satisfying what we observe in reality may be impossible in usual mathematics, i.e. in ZF(C) set theory. For instance it may be that the standard model of particle physics cannot be proved to exist in ZF -proving the existence of quantum Yang-Mills theory with the expected properties is one of the Clay Mathematics Institute's Millennium Prize Problems. It would make sense then to add the desired existence to our mathematical theory as an axiom, or some other axioms which imply it -especially if we can prove that those axioms are consistent with ZF. I should mention here that the standard model is known to be only an effective theory, an approximation of a more accurate theory, just as the Navier-Stokes equations are only an approximation of the evolution of Newtonian fluids; and in both cases it is conceivable that the more accurate models have regularizing effects which ensure existence for all (initial, boundary, or other) conditions, while the effective theory provably fails to have solutions in some cases. So even though a TOE might be a good framework to study small-scale/high-energy phenomena, it may not provide the expected existence results for effective theories, which may be consistent with a given TOE without being provable in it; we would then have useful physical models in ZF, or some extension thereof, that are not strictly consequences of our TOE -this does not seem to be mentioned in the various critiques cited in the page.

Gödel's incompleteness theorem is really a theorem about computability, in particular about self-reference. So ultimately, the question of its applicability to a physical theory depends on what we mean to use our physical theory for. Perhaps the very concept of a TOE implies that we want to do "everything" with that physical theory: we would use it to model any kind of computer. We could actually do the same with a theory of psychology: after all, it is always humans who think about such issues, so a theory describing all their mental processes must describe all the theories they come up with; so perhaps psychology is a TOE. Research and reflection are inherently circular processes, well embodied by our brains, with recursive connections and rhythmic looping activation of their diverse pathways. We may consider psychology as part of chemistry (so perhaps chemistry is a TOE), chemistry as part of quantum physics (...), quantum physics as part of a TOE, this as part of set theory, and set theory as part of psychology. But most mathematicians would recognize set theory as a theory of everything, mathematical or not: except for some logicians, set theorists, and "foundationalists" (who may consider higher-order logic or some axiomatization of category theory as alternatives to ZF), sets are more than enough for mathematicians, and for physicists. So arguing that logical incompleteness is a limitation on the existence of a TOE seems to imply that we want to apply our TOE to problems of logic and computability, which are already well understood within logic and computability. This does not make much practical sense, and we already know the conclusion. It is like expecting our TOE to explain to us why planes fly, or why water boils at 100 °C: those questions belong to theories which are accurate enough where they apply, and have well-understood answers -though some may argue that water is very complicated and still poorly understood, with papers published in Nature and Science every year. :)
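The self-referential trick at the heart of Gödel's construction is, in computational terms, the same as that of a quine, a program that prints its own source code; here is a minimal Python example (the diagonal lemma is in effect a quine for formulas):
<syntaxhighlight lang="python">
# A quine: running this program prints exactly its own source code.
# The self-application "s % s" mirrors the diagonalization used to
# build a sentence that speaks about its own Gödel number.
s = 's = %r\nprint(s %% s)'
print(s % s)
</syntaxhighlight>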

The issue of limits on accuracy, observed in the text, is very interesting; I do not know how far it has been explored in the literature, but to the comments and citations in this wiki page I would add considerations of complexity: there may be tradeoffs between the accuracy of a theory and its computational complexity. For instance theory X may in principle be more accurate than theory Y, but the computations in X may be so complex as to make it unwieldy, yielding poorer results than theory Y. This is related to the above: if we try to describe chemical reactions or psychological phenomena with the standard model of particle physics we won't get very far. Such tradeoffs can be observed in finer situations; for instance in video games we may use ray tracing or lightmaps to render a scene, and although the former is more accurate, it will yield poorer results on a slow machine -not in a given image once rendered, but in the whole animated result being unbearably slow.

In conclusion I would say that the "incompleteness critique" is (given present knowledge) either trivially right or trivially wrong, depending on what we mean our TOE for: if we mean to use it as a mathematical theory to decide all mathematical statements (say the continuum hypothesis) then the critique is right; but if we mean it, as seems to be the intention of high-energy physicists, to be a physical theory modelling all fundamental physical phenomena, leaving room for other theories at large scales, low energies, or high complexity (low entropy) -thus most probably a theory of quantum gravity- then Gödel's incompleteness is irrelevant, as we would just prove the necessary existence results in ZF, or add them as axioms -and all known mathematical undecidability results would surely be unaffected. Given the remarks above I guess that the name "theory of everything" itself makes little sense: it would be more descriptive and less polemical to call that kind of theory a "fundamental theory of physics", "theory of fundamental phenomena", "theory of fundamental forces", or "foundation of physics". Note also that there would actually be infinitely many such theories, as we could add random untestable axioms, for instance large cardinal axioms in set theory -though I think some set theorists believe that some large cardinal axioms could somehow imply down-to-earth existence results in analysis, or more plausibly in computability theory, but anyway...

PS: I hope wiki's editors will not be annoyed by this lengthy entry. I feel that the wiki discussion page is the ideal place to make such comments, as for now the subject is not difficult, serious, well-defined, or useful enough for physics journals. Yet it is good to have a somewhat centralized discussion in which to gather comments, some of which will turn out useful. Plm203 (talk) 19:02, 11 August 2023 (UTC)

Addenda: First note that what is discussed above and in the wiki article is only a mathematical notion of incompleteness of a putative TOE. This is because Gödel's theorem is a theorem about a formal mathematical system, extended to other mathematical systems -it is mostly applied to recursively axiomatizable first-order theories. The undecidable statements within a formal system are mathematical. For instance those constructed by Gödel are: 1. a fixpoint <math>G</math> of <math>\neg\mathrm{Prov}_{PA}</math>, where <math>\ulcorner G\urcorner</math> is the Gödel number of <math>G</math> and <math>\mathrm{Prov}_{PA}</math> is a formula in PA modeling provability in PA, and 2. in his "second incompleteness theorem" the metatheory formula <math>\mathrm{Con}(PA)</math>.
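Spelled out in the standard textbook notation, with <math>\ulcorner\cdot\urcorner</math> denoting Gödel numbering, the two statements are:
<math display=block>
PA \vdash G \leftrightarrow \neg\mathrm{Prov}_{PA}(\ulcorner G\urcorner),
\qquad
\mathrm{Con}(PA) := \neg\mathrm{Prov}_{PA}(\ulcorner 0=1\urcorner),
\qquad
PA \nvdash \mathrm{Con}(PA).
</math>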
Remark that Gödel's theorem, and indeed the whole field of logic (and computability), involves modeling of a physical or psychological nature: PA is expressed in a metatheory with symbols that we as humans connect to one another, and within our metatheory we prove e.g. that the PA-formula <math>\mathrm{Prov}_{PA}(\ulcorner\varphi\urcorner)</math> corresponds to the metatheoretical fact that PA proves <math>\varphi</math>. The metatheory is usually taken to be ZF set theory for comfort, but it can be much weaker: for instance Gödel's theorem can be proved within PA itself, as can the independence of the continuum hypothesis or of the axiom of choice from ZF. One can also model recursively axiomatizable first-order theories in PA and construct a PA-formula <math>\mathrm{Prov}_{ZF}</math> which models provability in ZF, ie such that <math>\mathrm{Prov}_{ZF}(\ulcorner\varphi\urcorner)</math> is provable in PA exactly when ZF proves <math>\varphi</math>. We can thus prove in PA that <math>\mathrm{Con}(ZF)</math> implies <math>\mathrm{Con}(PA)</math> -the formula here is different from that in the 1st paragraph, as there we were in the metalanguage of informal set theory, while here it is a formal formula about ZF expressed in the language of PA. But of course we cannot prove in PA that <math>\mathrm{Con}(PA)</math> implies <math>\mathrm{Con}(ZF)</math>, nor simply <math>\mathrm{Con}(ZF)</math> itself. And this is general: when proving things in mathematics (ie set theory) we use finitistic means (in the sense of Hilbert, including recursion), so any theory like PA (I think it is enough that our theory can decide the "value formula" of a universal Turing machine) is enough to reason rigorously about proof. In other words, one can study within PA all rigorous demonstrations: that is, prove that the demonstration gives the desired result. But that does not prevent us from being interested in more powerful theories than PA, and from usually working with a language and axioms stronger than the minimum required: in the practice of mathematics we make decisions as a community, we get interested in some theories and settle on some language, and we usually reason within set theory, or some kind of type theory, or some theory even closer to the subject of study -before set theory, of course, people would reason only this way. We even usually formulate only a very partial axiom system, or not even that much, just some vague idea of the things that hold and of the results we derive; and then writing down our thoughts for others is nontrivial, it takes some time, or we may not even be able to achieve it.
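In symbols, the consistency-strength facts used in this paragraph are (standard results, stated here for orientation):
<math display=block>
PA \vdash \mathrm{Con}(ZF) \rightarrow \mathrm{Con}(PA),
\qquad
PA \nvdash \mathrm{Con}(PA),
\qquad
PA \nvdash \mathrm{Con}(ZF).
</math>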
Coming back to incompleteness: the mathematical statements which are undecidable within PA or any mathematical theory will a priori have only mathematical interest. In particular, in any mathematical theory in which we have expressed a physical TOE (or fundamental theory of physics, FTP, or FOP, foundation of physics) it is plausible that no physical statement is undecidable. Up to the present time no physical statement (the definition of what is "physical" is to be agreed upon by "the community", as a matter of epistemology) has ever been proved independent of set theory. It is conceivable that the well-posedness of some "physical" PDE, or "P=NP", is independent of set theory, but it seems reasonable to assume that physical statements can be decided within a simple and reasonable theory such as ZF. It is a very interesting problem at the crossroads of logic, psychology, and physics to define some kind of measure of "physicality" of a statement, and to determine some kind of probability for a statement with a given level of physicality to be decidable in a "natural" formal system (such as ZF).
At any rate there is a more straightforward and physical notion of incompleteness in physics: consider the state of physics at the end of the 19th century, when researchers expected nothing new in physics would be discovered. Yet theoretical and experimental progress showed that there were many phenomena that had until then been poorly modeled or entirely ignored. It is not impossible that humans in one thousand years will discover some hitherto totally ignored phenomenon that does not fit in any previous fundamental theory. Beyond this: perhaps the universe observable to present humans does not even display some phenomenon that will be observable to future humans, or that is observable to other intelligent beings, or that simply exists. The anthropic principle implies a strong such limitation: that there are parts of the universe unobservable to humans because humans cannot exist in them. After those remarks it looks quite probable that any fundamental theory is physically incomplete, that it is only a foundation for local physics (a FLOP :). I've read that some physicists deplore the existence of such an idea as the (or an) anthropic principle, arguing that it demotivates researchers, but 1. I see no evidence of that, 2. motivation to do research in any field of science seems to me to rely on many other factors, 3. the anthropic principle says a priori nothing of the complexity of humanity-local physics, which can be very complicated, and 4. there are plenty of fields of research other than fundamental physics, which are arbitrarily complicated; and actually even if at some point some anthropic principle is seen as justified, humans may always try to falsify it, or research as pure mathematics (pure thought) the physics of an imaginary universe where no anthropic principle holds.
The second addendum is to the 4th section of my entry. So we have a circular chain of theories, with fundamental physics expressed within psychology (as mental dynamics) and psychology expressed as the dynamics of brains (made up of fundamental particles described by our FOP). But I should have made an important observation: when we theorize and describe psychology we assume it is only an approximation, a coarse description of more complicated phenomena (the dynamics of many neurons); but when we describe fundamental particles within the standard model of particle physics we assume it is very accurate, and a requirement for a FOP would be that it be ideally "exact": its theoretical results would exactly match "ideally prepared" real experiments, and with it we could do numerical computations to an arbitrary level of precision, which would also match any real phenomenon to an arbitrary level of precision. Several observations. First, on psychology, and why it is limited in accuracy: we would usually not model all interactions of the brain with the physical world, or indeed with the rest of the human body, which may eg fail due to a heart attack -this does not prevent us from describing, within psychology, a human researcher thinking about fundamental physics. Next, on physics: limits on accuracy are mentioned in the article; matching to reality is itself a poorly controlled process, which will usually be extremely difficult to analyze. In general there are many uncertainties that are irreducible: of course we have the quantum mechanical uncertainty principle, but even with a hypothetically perfect classical model of measurement/experimentation (like a wind tunnel and a theory of fluid mechanics that would be perfect/fundamental) we can easily prove that a human+digital computer system cannot match the experimental apparatus -under reasonable assumptions and almost surely. A further difficulty is that current models of particle physics are only perturbatively defined, from an effective Lagrangian whose couplings and masses are running. Of course we expect the theory to exist mathematically (exactly, in particular nonperturbatively), and the idea of a TOE/FOP implies that we can make exact theoretical computations, and numerical ones to arbitrary accuracy. I note here that the issue of complexity is too easily overlooked. For instance, does it not make sense to consider an analogue computer like a wind tunnel as a counterexample to the Church-Turing thesis? The answer would surely be yes if we take into account complexity and social limitations, as there will be analogue computers which can obtain better results than any digital computer mankind can build. I think this kind of question is well tackled in some articles on "hypercomputation" -I only know the field exists, I've never read any of its articles. There is one field where complexity is taken seriously: quantum complexity, with the question of quantum supremacy.
However I would like to end with a plea for the Church-Turing thesis, in fact a "moral proof" of it. I should have remarked that the very idea of a FOP (or TOE), together with the observation of a circular chain of theories/models (from physics to psychology), implies the Church-Turing thesis. It is clear that any theory of psychology should be computable, that is, its results should be recursively enumerable (this is already clear from our practice of mathematics and modeling, all done in ZF, a recursively enumerable theory), though it would have to be proved in detail in a given theory of psychology -in all this text when I use "psychology" it should probably be read as "human reasoning" or "cognition"; here we don't need emotions, dreams,... Thus if we can as humans formulate a FOP, it will necessarily be describable and interpretable (in the intuitive sense, or the rigorous sense of logic, originally investigated by Tarski) within psychology; but then all its results, "theoretical or numerical", would be recursively enumerable, and thus our fundamental theory would be Church-Turing computable. But given sufficient computational power it also approximates any possible theory of real phenomena, in particular any real computation -we can limit what we call "computation", or just call "computation" any real (observable or not) phenomenon, and it presents no additional difficulty for a Church-Turing computer with unlimited power. Thus, bar complexity limitations, the Church-Turing thesis holds.
PS: I hope again that wiki will forgive my lengthy comment -which seems to be at home here. Plm203 (talk) 20:16, 13 August 2023 (UTC)
Third instalment, a couple of addenda: First, I think that what I called a "circular chain of theories" had perhaps better be called a "cyclic chain/sequence of theories", or "cycle of theories", as "cyclic" is the term used in category theory and algebra -as in "cyclic order" or "cyclic set/object". And it is certainly interesting and natural to consider generally, for its own sake, the complexity-theoretic aspects of sequences of theories, beyond just the practical examples mentioned above and in common science. For instance in research one derives theorems, or makes further assumptions (adds axioms), which greatly simplify the proofs of further results. In logic and simple mathematics this is a domain of proof theory, where questions of speedup are investigated; I am not familiar with it, but it could be interesting to look at the subject beyond pure mathematics, to connect with more applied issues, and with theoretical physics.
In my two previous entries there is an important point which I did not make explicit: in logic there are several cognate notions of comparing or modeling one theory in another, or one computational system in one theory. There is the notion of extension of theories, where one adds axioms, or formulas, even quantifiers, or deduction rules. There is also the notion of interpretability (not usually related to the notion of interpretation in model theory, though of course they can be related to one another), where one defines (ie translates) the terms of a theory S in a theory T in such a way that T proves the results of S -that is, T with the definitions is an extension of S. And finally there is the notion which I used in my discussions above, and which is implicit in discussions of TOEs, of description of a theory in another. In traditional logic there is the notion of representation of functions in arithmetic: a function <math>f</math> is representable in a theory T (in the language of arithmetic) if there is a formula <math>\varphi(x,y)</math> such that T proves <math>\varphi(\overline{n},\overline{m})</math> whenever <math>f(n)=m</math> (there are further refinements of the notion), and with this notion one shows, in a metatheory modeling both PA and Turing machine computations, that Turing-computable functions are representable in PA -and there are weaker arithmetical theories in which (partial) recursive functions are representable. The notion of description extends this to theories. Roughly (and I have only thought about this at the intuitive level), all theories which can represent recursive functions should be able to describe one another. This is related to my remarks on describing ZF proofs/provability in PA. Now description is what scientists do most naturally: using mathematics to describe the world, to do physics, should correspond rather closely to the notion of description I evoke here, and describing psychology within chemistry, or chemistry within quantum mechanics, or all of those within a FOP, would be formalized as descriptions in this sense. But now the remark on computability indicates that to describe a FOP/TOE we only need a theory which can represent Turing computation/recursive functions. So from a mathematical perspective, and if we only require being able to reason about a theory (without asking for feasibility/practicality), we already have plenty of TOEs, and they are all mathematically incomplete. We may consider theories where we manipulate physical objects directly, where we only axiomatize physical phenomena, but this seems extremely awkward: even the purest of physicists finds it very practical to manipulate numbers, like natural numbers, and we can roughly describe numbers in terms of elementary particles, or of physical objects like beads made of zillions of elementary particles, but the description would be very complicated, approximate, and require a very heavy formalism, to basically do just common arithmetic. So the notion of description, and a practical theory of mathematics in which we describe physics, is what we want, and we are back to my original observations on incompleteness(es).
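For definiteness, the usual (strong) representability condition for a total function <math>f</math>, where <math>\overline{n}</math> denotes the numeral of <math>n</math>:
<math display=block>
f(n)=m \;\Longrightarrow\; T \vdash \varphi(\overline{n},\overline{m})
\quad\text{and}\quad
T \vdash \forall y\,\big(\varphi(\overline{n},y) \rightarrow y=\overline{m}\big).
</math>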
One observation may make the foregoing clearer: we always have computers at hand, and as a community we formalize much of our science on computers nowadays, and see with experience that we can always work out such formalism; we often do bits of that and of programming so as to have our ideas yield rigorous theoretical results, or more often computations -ie working out large but relatively straightforward/repetitive particular cases. So all the theorizing we do is realized in (Turing) computations, and we do not conceive otherwise, because we intuitively know we can model our brains and even compute them with a powerful enough computer -some thinkers like to claim otherwise, but... In physics nowadays we clearly expect computers to participate in all our large computations, in high-energy physics in particular, and for a FOP, as that will be a primary field of application; in mathematics nowadays we have theorem provers used to prove nontrivial results, and we can (with time) feed them any mathematical theory we use as humans.
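As a toy instance of the kind of formalization meant here, a one-line machine-checked proof in Lean 4, reusing the core library lemma Nat.add_comm:
<syntaxhighlight lang="lean">
-- Machine-checked: addition on natural numbers is commutative.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b
</syntaxhighlight>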
This does not exhaust, though, the discussion on restricted/weak theories to do physics. In mathematics there is the subject of reverse mathematics: we seek the weakest theories (in terms of quantification allowed and axiom sets) which prove a given common mathematical result. This tells us about the strength of theories and mathematical results, and also about their difficulty. One could add complexity considerations to this, but I think researchers in the field usually don't. Now I am not aware of highly "physical" results which have been "reverse-mathematized", but it would be interesting to figure out what minimal axioms can prove that Lipschitz ODEs or some classes of PDEs are well-posed, or results in quantum mechanics like the uncertainty principle -or to construct the full quantum gauge theory of the standard model of particles.
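For orientation, the "big five" systems of reverse mathematics, in increasing strength; over <math>\mathrm{RCA}_0</math>, <math>\mathrm{WKL}_0</math> is equivalent to the Heine-Borel covering lemma and <math>\mathrm{ACA}_0</math> to the Bolzano-Weierstrass theorem, and, closest to the ODE question above, the Peano existence theorem is equivalent to <math>\mathrm{WKL}_0</math> while Picard-Lindelöf for Lipschitz ODEs is already provable in <math>\mathrm{RCA}_0</math>:
<math display=block>
\mathrm{RCA}_0 \;\subsetneq\; \mathrm{WKL}_0 \;\subsetneq\; \mathrm{ACA}_0 \;\subsetneq\; \mathrm{ATR}_0 \;\subsetneq\; \Pi^1_1\text{-}\mathrm{CA}_0.
</math>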
I should briefly mention the issue of defining "physical": it would be quite complicated; there is a psychological aspect, in the sense of closeness to physical phenomena observed and reflected upon by a human brain, and this would be reflected in a mathematical theory describing physics. In the previous post I said that questions of computability were somewhat physical, but then we quickly derived that everything is physical, so we should introduce quantified nuance into this. There would also be social aspects to take into account, as research is highly dependent on and structured by society. I can also mention the notion of a decider: a Turing machine which halts on every input. This is related to the strength and consistency strength of a theory: a typical such Turing machine is one which, on input n, halts after n deductions in a given theory if it has not found a contradiction, and loops indefinitely otherwise; such a machine for PA can be proved to be a decider in ZF but not in PA -of course both theories can represent the function computed by this decider machine. Another standard example, of similar flavor, is the Goodstein decider, which on input n halts when the Goodstein sequence starting at n terminates (at 0); it can therefore be proved to be a decider in ZF but not in PA. Yedidia and Aaronson have constructed a 7,918-state Turing machine (with 1 tape and 2 symbols) whose non-halting implies that ZFC is consistent, and which therefore cannot be proved in ZFC not to halt; they prove it does not halt in a theory of higher consistency strength. All Turing machines can be simulated in simple theories, but there are small computers whose behavior we cannot predict, although we can make reasonable guesses.
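A concrete sketch of the Goodstein decider's computational core (plain Python; that the loop terminates for every n is Goodstein's theorem, provable in ZF but not in PA, so the machine is a decider only from ZF's point of view):
<syntaxhighlight lang="python">
def bump_base(n, b):
    """Write n in hereditary base b, then replace every b by b + 1."""
    total, k = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            # exponents are themselves rewritten hereditarily
            total += digit * (b + 1) ** bump_base(k, b)
        k += 1
    return total

def goodstein(n):
    """Yield the Goodstein sequence from n: write n in hereditary
    base b, bump the base to b + 1, subtract 1; repeat from b = 2."""
    b = 2
    while n > 0:
        yield n
        n = bump_base(n, b) - 1
        b += 1
    yield 0
</syntaxhighlight>
For instance list(goodstein(3)) returns [3, 3, 3, 2, 1, 0], while goodstein(4) already takes an astronomical number of steps, roughly of the order of <math>2^{4\cdot 10^{8}}</math>, to reach 0.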
This leads to the issue of what makes a mathematical theory "natural" and (partially, in particular) trustworthy. Pondering the issue, it becomes clear that the main reason is psychological: humans are used to considering sets of things; this is a very sensory experience, we visualize, verbalize, or "auditivize" several similar objects, and we pick one among them. We have cognitive structures, hardwired in our brains, well designed to carry out such tasks, to realize such neural dynamics. Thus it is natural, easiest for us, to abstract from such "real" mechanisms. We build up our mathematical and mathematical-modeling abilities throughout life, in introspection/reflection, in interaction with sensory experience (a little), and mostly with society, in particular with its scientific output and by discussing science with other humans. Society will search for an optimal presentation of its scientific theories, through various processes. And so far the first-order theory of sets has settled in as the most efficient way to do mathematics/computations. As remarked previously, most mathematics does not require the full power of ZFC, and some set theorists would like to reject choice as a standard axiom, but the mathematical community likes to have it (in Zorn's lemma, algebraic closures, maximal ideals,...). It is practical, and it makes some results simpler to prove too. And there is a sense that it is consistent, which we could reformulate in the form "it is infinitely counterintuitive that ZFC is inconsistent". When reformulated this way we see that consistency should really be considered from a quantitative perspective: imagine a theory that is inconsistent but whose shortest proof of a contradiction is astronomically long, say <math>10^{100}</math> deductions; then two things: 1. humans would never find any inconsistency, and all the results they would obtain would be "essentially" consistent and as useful as if working in a consistent theory, and 2. it is actually probable that such a theory would have to have a rather complicated axiom system, and that the contradiction would rely on some tricky, interesting mathematics. Now I don't know if the question has been researched, but I think that inconsistencies in a given "size n" theory can with high probability be reached in "few steps" relative to n. Thus, having as a community explored set theory quite thoroughly without finding any inconsistency, we can (could) be highly confident that it is consistent -of course this raises the issue of how representative or exhaustive human exploration is, but I think it would not be too "nonuniform". But to be honest, even this is overkill: as humans we make innumerable mistakes, but through education we learn to correct them, to check better when necessary, to make do with approximate results; so if ZFC were inconsistent we would instantly work around it, it would surely be in an interesting way, and the inconsistency would obviously bear very little relevance to building bridges or sending rockets -whose theories rely on much weaker mathematical theories than ZFC, as hinted above. Of course the consistency of any mathematical theory that will be used to formulate a FOP will never be fully explored; but this is somewhat misleading: we can actually make sure to a very high degree that it is consistent, by normal mathematical research, but also in a more systematic way, and we could probably quantify in a nice way how consistent we have made sure it is -by the hypothesized results on the complexity of inconsistencies as a function of the size of a theory.
PS: I'm pretty sure I'm forgetting things I thought I would write, but that will be it for now. A big thank you to the admins who allow my commenting here, which I think is still relevant to the page's topic. Plm203 (talk) 05:04, 16 August 2023 (UTC)