Talk:Halting problem/Archive 1


comments

I think the material about Chaitin would maybe better fit on the page about Gödel's incompleteness theorem. Does anybody have good references or introductory material about that work? --AxelBoldt


The page was incomprehensible to a non-programmer; it used Scheme code which a non-programmer would have no chance of understanding. It also used unclear and poorly defined terms a lot, such as a "computing system". Some content (Scheme with real numbers) was too advanced for a general-purpose article describing the Halting Problem. I append below stuff from the page as it was before this change which someone might want to salvage and use here or elsewhere. --AV

[... old material deleted ...]


Thanks, the page is so much better now!

I wonder if you think the following trivia would be worth putting on the page: for an actual existing computer with a finite amount of RAM and external storage, the halting problem is of course solvable. If the number of internal states is N, you can just run the program for N+1 steps: if it has not halted by then, some state must have repeated, so it never will.

Also, do you know anything about the Chaitin connection? --AxelBoldt


I think it's better to put this somewhere on a page devoted to computability models, which should be written. Maybe I'll give it a shot later. Once it's written, it'd be linked from here, and anyone not sure why one needs unlimited memory could go there and read that.

I do know something about the Chaitin connection, but I question my memory, have been some years. I'll try to check it further, but right now it seems to me that this doesn't belong on this page, this information is not important enough for the general article on the halting problem; it can be relegated to a page on Chaitin and his results. -- AV


I like some of the things in the latest change, that make the description and the proof more clear, and dislike some others. Specifically, I refactored the page and rewrote the proof precisely in order to make it easy for a layman with no mathematical training to understand. There's no real reason to employ notation like H:N->N, or the notion of a partial function and undefined values in the sketch of the proof. Sure, it makes it somewhat more elegant, but accessibility of the idea to someone without a university education in CS or math is lost. Similarly, some redundancy in my version, e.g. stating the problem separately as an algorithm to solve the problem and an algorithm to compute the halting function which embodies the problem, was intentional.

Unless there are objections, I will try to integrate the nice things about the latest change with the easier-to-understand-for-a-nonspecialist version of the proof. --AV


No problem. I already made the proof sketch less formal myself. If you want to change it more, that's fine with me. -- JanHidders


I think it is important to have a completely accessible definition of the problem and sketch of the proof; after all, it is one of the major intellectual highlights of the 20th century to see that there are well-defined problems that cannot possibly be solved with an algorithm/program.

But I also think that Turing's approach is really important. How about if we restructure the article as follows:

  • Start with a colloquial description of the problem along the lines of "Find a program/algorithm, which takes as input the description of some other program/algorithm plus some input for that program, and decides whether that program, when fed that input, will ever halt".
  • State that provably, no such algorithm exists, and give implications (problems that computers can never possibly solve, Entscheidungsproblem)
  • Then give the colloquial proof sketch without lisp syntax and without Turing machines and without the Halting function
  • Then describe the formally correct version of the problem, and give the argument that this formalization actually captures the full content of the colloquial description (Church-Turing thesis).
  • Then comes Turing's diagonalization argument
  • Pointers to generalizations (Rice's theorem)

That way, a reader can stop reading the article as soon as the material gets too advanced, and they will still have gotten an explanation that is adequate for them.


Ok. I gave it a shot. Let me know what you think. I wanted to add some more on how the Halting problem can be formulated as a statement about numbers, but I'm rather time-pressed at the moment. My apologies to Anatoly, because I more or less promised him to leave the article to him. -- JanHidders


Is it correct to say that Turing was first? I always thought it was referred to as the Church-Turing thesis, since it was proven almost simultaneously. I know that Alan Turing is infinitely more interesting than Church, but he (and his solution) is left out of the article altogether.


You are right that Church was first. (Turing says so in his own article) But the Church-Turing thesis is not about the Halting problem and I'm not sure I would agree with your statement that Turing is infinitely more interesting than Church. :-) -- JanHidders


It is true that Church was first in solving the Entscheidungsproblem, but I'm not so sure that Church talked about the halting problem at all; after all, in order to even define the Halting problem, you need to have a formal concept of some machine that may or may not halt. Church was using his lambda calculus to solve the Entscheidungsproblem. It certainly is possible to formulate the halting problem in lambda calculus, but did Church actually do it? --AxelBoldt


Hmmm. You may be right. I more or less assumed that Church had shown that the Entscheidungsproblem is undecidable by showing that the problem whether a certain lambda expression when applied to another lambda expression is defined or not, is undecidable. But I am not really sure of this. I will revert my changes and come back when I have found Church's article in the library. -- JanHidders

If you have access to http://www.jstor.org then you can read both of his articles online. The citations are on the Entscheidungsproblem page. The substantial one is the one in the American Journal of Mathematics. Church proves that there is no algorithm (defined via recursive functions) which decides whether two lambda-calculus expressions are equivalent. The proof relies heavily on Kleene's earlier work on the lambda-calculus. --AxelBoldt

Thanks for the enlightenment. Perhaps it should be more clearly defined as Turing's solution to the Entscheidungsproblem. And it's perhaps not so important that he solved the halting problem first, since he proposed it. Man, I'm coming off really anti-Turing today... --Eventi

Well, the article is about the Halting problem, not the Entscheidungsproblem, so I think it is important to mention who proposed it first and who solved it first. --AxelBoldt

Was it separately proposed and then solved? If so, I agree. But there was no such thing as a halting problem before the Turing machine, so I doubt it. I don't think it's incorrect by any means, only that solving it first is too trivial an accomplishment to mention.

Ok, maybe we should say "was proposed and solved in Turing's paper..." --AxelBoldt


I've corrected an important error, viz. the statement that the incompleteness theorem follows directly from the undecidability of the halting problem. I'm not completely satisfied with the way I've done that, however; anyone who can improve the language/clarity - please do it.

One important problem which perhaps isn't unique to this page is that I feel it wrong to talk in Wikipedia about "Goedel's Incompleteness Theorem". There are in fact two theorems, and either of them is sometimes referred to as "Godel's Incompleteness Theorem" (the first more often than the second), creating confusion. I'd prefer to always use "...theorems" or to refer explicitly to the first or to the second.

W.r.t. the particular page, I think it especially unfortunate that it still requires knowledge of a programming language to understand the sketch of the proof. There's no justification of that IMHO. The Pascal version is certainly lighter on the layman than the original Scheme version, but the verbal description of mine in rev. 11 is still better, I think. I'll bring it back and try to make it even simpler and less verbose, as soon as I find some time to work on that - if there're objections, please speak out. --AV

Good to see that you're back! I agree that a non-programming-language version would be desirable. Also, in light of your above comments, I think we should probably remove the paragraph early on talking about Goedel's incompleteness theorem and pack the whole discussion in the corresponding section later on; after all, Goedel's theorem had been proven earlier, and the fact that a weaker version of it can be reproven using the Halting problem is not that amazing after all.

I agree. I'll do that when I find time to refactor the page (or anybody should feel free to do that before). In fact, there's an even conceptually simpler proof of Gödel's first theorem than the one via the undecidability of the halting problem - the proof via Tarski's undefinability of truth. One 1) establishes a coding for formulas and a coding for sequences; 2) establishes a formula phi(x,y) which is true in N iff y codes a proof of x using the axioms of Q; 3) notes that if Q is sound and complete then "there exists y s.t. phi(x,y)" defines precisely the sentences x which are true in N; 4) observes that this contradicts the undefinability of truth in N, which states precisely that a formula defining the true sentences does not exist (this is the correct formalisation of the Liar paradox and is proved accordingly). Step 1 is technical and step 2 is very simple, while steps 3 and 4 are immediate observations. --AV
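(Spelling out steps 3 and 4 in symbols - this is just a restatement of the argument above, not a quotation from any source:)

Let $\mathrm{Tr}(x) := \exists y\,\varphi(x,y)$. If $Q$ is sound and complete, then for every sentence $x$,
$$\mathbb{N} \models \mathrm{Tr}(x) \;\Longleftrightarrow\; Q \vdash x \;\Longleftrightarrow\; \mathbb{N} \models x,$$
so $\mathrm{Tr}$ would be an arithmetical definition of truth in $\mathbb{N}$, which is exactly what Tarski's theorem rules out.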

I think we should add something to the Goedel's theorem article about "soundness not required". I'm still not quite clear about the exact requirements that Goedel needs for his proof to go through though. --AxelBoldt

soundness is completely unnecessary for both statement and proof of Godel's theorems; in fact, nothing about truth or falsity is ever used, the theorem is completely proof-theoretic. This very important fact is often overlooked for several reasons, e.g. existence of simpler proofs that require soundness (via halting problem or via Tarski's undefinability theorem); another reason is Goedel's unfortunately worded introduction to his paper where he draws a (largely misguided) parallel with the Liar paradox, which is all about truth and falsity. We should clarify this on the goedel's theorem page.
the requirements for the first theorem are: that a theory Q be recursively axiomatizable; that it extend (or more generally interpret) a certain very weak theory of arithmetic; and that it be omega-consistent. The conclusion is that Q is not complete. With Rosser's trick, the requirement of omega-consistency is weakened to consistency. --AV
"extend a very weak theory of arithmetic": does that not mean that the provable arithmetical statements of the weak theory (which are presumably true) are also provable in Q? In other words: Q has to be a little bit sound? --AxelBoldt
nope, that's why in the general case we allow "interpreting", not necessarily extending. This means that we may map the axioms of the weak theory to different sentences, which aren't necessarily true at all, in such a way as to preserve proof-theoretic strength. Essentially what we are after is the power of representation that the weak theory has: e.g. given a recursive function f(x), we want there to be a formula phi(x,y) such that the theory proves phi(x,y) whenever f(x)=y and proves not(phi(x,y)) whenever f(x)!=y. In the original weak theory this phi(x,y) will just embody whatever way of building the recursive function up from basic building blocks we choose (e.g. it may embody a Turing machine, or a definition from basic recursive functions, iteration, etc.), so phi(x,y) will actually be true whenever f(x)=y. But we don't really care about that in Gödel's proof; we just need the theory to be strong enough to prove/disprove phi(x,y) according to whether f(x)=y. So Gödel's theorem will apply to very convoluted and extremely unsound theories Q, as long as there's a way to faithfully map the "real", sound phi(x,y) proved/disproved by the weak theory into some phi'(x,y) proved/disproved by Q, without any regard to its truth in N. --AV
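(For reference, the representability condition being described can be stated compactly as follows; this is a standard formulation, with $\bar n$ denoting the numeral for $n$:)

For every recursive $f$ there is a formula $\varphi(x,y)$ such that, for all $n, m$:
$$f(n) = m \;\Rightarrow\; Q \vdash \varphi(\bar n, \bar m), \qquad f(n) \neq m \;\Rightarrow\; Q \vdash \neg\varphi(\bar n, \bar m).$$
Nothing here refers to truth in $\mathbb{N}$; only provability in $Q$ is used.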

I did my homework and read again some of Penrose's writings connected with the halting problem. Penrose presents the core of his arguments connecting the halting problem and related ideas to conscious understanding in section 2.5, pages 72 to 77, of "Shadows of the Mind". The following summarizes his approach:

Although by Gödel's arguments no Turing machine can answer the general halting question, we could speculate on the existence of a Turing machine that does as well as possible, in the sense that it captures all the methods that conscious beings could possibly use.

Does he say this machine is correct (for those instances where it returns an answer, the answer will be right)? Does he say we can know that it is correct? I understand he goes on to prove it can't exist. I mean what's the initial assumption at the start of the proof? --LC
Penrose requires that the machine is always correct whenever it concludes that an algorithm does not stop (i.e., the machine is "sound"). Penrose describes an algorithm that we know will not stop, but that stumps the machine; so, the machine fails to capture all of our reasoning powers. This argument seems to assume that we are "sound"; one objection to Penrose is the contention that we are all unavoidably inconsistent. In which case I don't have to be embarrassed that I may be writing nonsense.
Thanks. How about the second question I asked? If someone hands me this machine (which happens to be sound), does he assume I'll look at it and know it's sound? --LC

Applying Cantor's diagonal construction, Penrose shows that no such machine exists: for any Turing machine that addresses the halting problem, there is an algorithm that conscious beings know will not stop but that stumps the Turing machine. This does not mean that the particular algorithm in question will stump all Turing machines; it only means that no single machine exists that will succeed on all algorithms that conscious beings can analyze.

Penrose does not claim that conscious beings can answer the halting question for all algorithms, and he is not even claiming that there are algorithms that only conscious beings can analyze; he is just saying that for any given Turing machine there is an algorithm that we can analyze that stumps that particular machine (another machine might succeed). The implication, Penrose maintains, is that the reasoning power of a mathematician cannot be captured by a (single) computer. A possible objection is that, if one computer will not do the job, then use many. Penrose addresses this type of objection and many others in "Shadows".

I don't see this implication at all. For a given Turing machine, there is an algorithm that it can't analyze. So what? For a given mathematician there are a lot of algorithms that they can't analyze.
The implication seems plausible to me because the argument shows that every Turing machine falls short of accomplishing all that a mathematician can do: for each such machine there is at least one halting problem the mathematician can solve that the machine cannot. So, it does not seem possible to model the mathematician as a machine. (Admittedly, my paraphrase of the argument is very informal and imprecise). The fact that machines can solve many problems that stump the mathematician is not relevant to the argument.
I'm still trying to find time to put these thoughts into an article that won't make me blush. I have collected a bunch of reviews and commentary on these issues. --srwenner

If we say that most mathematicians disagree with Penrose, then we should name some critics and present their objections, either in this article, the Penrose article or the Incompleteness Theorem article. The last time I looked, I did not see such specifics.

The electronic journal Psyche published a symposium on "Shadows of the Mind", with articles by many scholars and a reply by Penrose: (http://psyche.csse.monash.edu.au/psyche-index-v2.html). Perhaps we could summarize the key objections and Penrose's rejoinders in the Penrose article. I could take a stab at this, if you wish (I hesitate to edit others' work without their blessing). SRWenner

This is interesting stuff. I'd prefer to have a separate article like "Philosophical implications of the halting problem" or similar, link to that article from here and from Penrose, and explain arguments and counterarguments there. That article could then also get the paragraph about "humans and the halting problem" that's currently still in this article (and in light of your explanation above now seems pretty pointless). The reason that I'd prefer a separate article is that it can easily grow into a quite sizable article whose topic is somewhat removed from the halting problem proper that computer scientists are interested in. --AxelBoldt

For an amusing illustration of the basic flaw in the Penrose/Lucas argument have a look at http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1994/941219.penrose.2.review.html -- Derek Ross


Perhaps it's interesting to mention that the undecidability of the PHS property follows trivially from Rice's theorem. -- JanHidders

Big Trouble in Proof?

From main page: I believe there is a big problem in its proof. We try the function halt(p,i) on trouble(s). But trouble needs an input, and whether trouble halts or not depends on its input. What does halt(T,T) mean? "Does trouble(s) halt with trouble(s) as input?" The answer is different depending on the input of the second function trouble(), but this input isn't given, so we can't answer. If someone could explain where I'm wrong. (nico@seul.org)

The input is the function itself, not a certain instance of it - the algorithm, not its outcome. Perhaps it would be clearer if we wrote 'trouble' or 'trouble()' rather than 'trouble(s)'. The question is - does trouble halt with trouble as input? Well, if Halt(trouble,trouble) comes out with "No", it does, otherwise not. And when does Halt(trouble,trouble) come out with "No"? Well, if trouble halts with trouble as input.

We are in the computer world; a function is represented by a big number. So if T'=T, halt(T,T') means: "Does trouble() stop with T' as input?" OK, but what is the input of T'? We need it to answer the question! Otherwise it's nonsense: how could you know the behavior of foo(int i){while(i){}}? It depends on whether i=0 or i!=0.

T' doesn't need an input. T' is just a string. Strings don't take inputs. --LC 08:38 Aug 27, 2002 (PDT)

halt(X,Y) means that we are going to imagine what would happen if we ran the program represented by X. The program X has one input which is a string. We imagine giving it the string Y for its input. Y is just a string. You don't give inputs to a string. The string Y happens to represent a program, but that's irrelevant. We aren't going to run Y as a program. We're just going to take its string and feed it to X. That's why we don't have to worry about what inputs Y will take.

Imagine a Microsoft employee using Microsoft Word to edit a file which happens to be the source code for Microsoft Word. In this case, X is the copy of Word that is executing. Y is the source file that it reads in. We can talk about what input X takes. We can't really talk about what input Y takes. --LC

That's what I don't like in the demonstration. halt(X,X) is nonsense by construction and you can't use it in a demonstration. So, do you agree with me?

halt(X,X) makes sense, and is used correctly in the proof. I don't know how to make it any clearer. --LC 07:36 Aug 27, 2002 (PDT)

It seems that I lost some editing. In halt(X,Y), X is a program that takes data, Y. We extend Y to be a function. That is what halt(X,X) means, but the problem is: what is the input of the function X (the second one)? We could code the input of the function inside the string X itself. But in that case, X means T(T(T(T(T(T(T(...))))))), and it becomes an infinite loop. Do you see my problem?

But X in halt(X,Y) is not a function, it is a string that represents a function. You are confusing the map with the territory. -- Jan Hidders 06:27 Sep 2, 2002 (PDT)
Maybe it's easier if we use an example.
Let X be the string
function Decrease (s: string)
begin
  repeat
    remove the last character of s, if it has any
  until size(s)<20
  return(s)
end
Then if we take the algorithm represented by X with X as input, it will remove the last character of X (the 'd' of 'end'), and repeat that until there are less than 20 characters left, return "function Decrease (", and then end. Thus Halt(X,X) should give Yes (and in fact Halt(X,s) should give Yes for any value of s). On the other hand, if Y were the string
function Decrease (s: string)
begin
  repeat
    remove the last character of s, if it has any
  until size(s) >1000
  return(s)
end
Then taking the algorithm represented by Y with Y as input would keep removing characters, and then idle through the loop without ever stopping, so Halt(Y,Y) should give No.
Maybe the best way to look at it, is to assume that we have a function c(), which given a string describing an algorithm, gives that algorithm (for other strings it is allowed to give garbage). Such a function exists for several mathematical descriptions of algorithms. A Halting algorithm Halt(X,Y) is now an algorithm on strings, that given a string X, if X describes an algorithm with a single string as its input, provides yes if c(X) halts with Y as its input, no otherwise. In particular, Halt(X,X) should give yes if c(X) halts with X as its input. The second X is just looked at as a string, not as a program, so it makes no sense to talk of its 'input'.
Andre Engels
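Since this point keeps coming up, here is a small sketch of the same idea in executable form. The function halts() below is hypothetical (the content of the theorem is precisely that it cannot actually be written), and the second argument is just a string that is never run, exactly as explained above:

    # Hypothetical oracle: True iff the program whose source is `program`
    # halts when run on the string `data`. No correct, always-halting
    # implementation can exist; the stub is only here so the sketch runs.
    def halts(program: str, data: str) -> bool:
        raise NotImplementedError("no such total, correct function exists")

    TROUBLE = """
    def trouble(s):
        if not halts(s, s):
            return True          # halt
        else:
            while True:          # loop forever
                pass
    """

    # The diagonal case: ask the oracle about trouble run on its own source.
    # If halts(TROUBLE, TROUBLE) said True, trouble(TROUBLE) would loop forever;
    # if it said False, trouble(TROUBLE) would halt. Either answer is wrong,
    # so halts() cannot be both total and correct.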

In the first sentence of the indented paragraph labeled "2." near the end of the "Sketch of Proof" section of the Halting Problem article, I think that "Trouble does not halt" is incorrect and should be changed to "Trouble halts". I wanted to check with this discussion group before making the change.

Cole Kitchen

You are right. Well spotted. -- Jan Hidders 15:57 Nov 24, 2002 (UTC)


From the article:

it's very important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the question of truth, but only concerns the issue of provability).

What are the precise assumptions of the first incompleteness theorem? Typically, one sees something like "a theory strong enough to formulate elementary arithmetic", but what does that mean? Isn't it assumed that the theory is able to prove at least some of the true statements of arithmetic? AxelBoldt 02:30 Dec 2, 2002 (UTC)

Not necessarily. If the theory is not able to prove at least some of the true statements of arithmetic, it certainly is not able to prove all of them, so Gödel's theorem holds semi-trivially. Andre Engels 09:45 Dec 2, 2002 (UTC)
I don't understand. Presburger arithmetic for instance is able to prove some, but not all, theorems of arithmetic, but Gödel's theorem does not apply to it. AxelBoldt 17:12 Dec 2, 2002 (UTC)
The reason that Gödel's theorem does not apply to Presburger arithmetic is that there are some theorems of arithmetic that cannot be described in Presburger arithmetic. For example, there is no way in Presburger arithmetic to describe that there are infinitely many prime numbers. Because of this, Presburger arithmetic can be (and apparently in fact is) at the same time complete (meaning that every statement P that can be described in Presburger arithmetic can be either proven or disproven) and correct (meaning that no statement P can be both proven and disproven), in spite of Gödel's theorem. The assumption of Gödel's theorem that is not true here is that every statement of basic arithmetic can be described.
One could restate Gödel's Theorem to read "Any axiomatic system of arithmetic will either have true statements of arithmetic that it cannot prove, or false statements that it can prove." Under this rewording, it is clear where Presburger arithmetic falls short: there are certain true statements of arithmetic that cannot even be written down in Presburger arithmetic; it is obvious that they also cannot be proven.
On the Wikipedia-page Gödel's incompleteness theorem it is said, that "in any axiomatic system sufficiently strong to allow one to do basic arithmetic, one can construct a statement that either can be neither proven nor disproven within that system or can be both proven and disproven within that system." Because in Presburger arithmetic one cannot do a multiplication, it is not "sufficiently strong to allow one to do basic arithmetic", and thus Gödel's incompleteness theorem does not apply. Andre Engels 17:37 Dec 2, 2002 (UTC)

It seems to me that:

  • Any fool can prove that a program halts. Just run it until it halts.
  • Proving that a program does not halt is much harder than proving that it halts. This is because proving that it does not halt is proving a negative.
"Any fool can prove that a program halts. Just run it until it halts."
You have just proved that this instance of the program, with this input, does halt. What we want to prove is that every instance of the program, on every possible acceptable input, will eventually halt.
-- Looxix
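A tiny sketch of the asymmetry being pointed out here, using a hypothetical step-by-step simulator interface (run_one_step and initial_config are assumptions, not anything from the article; any interpreter that advances a program one instruction at a time would do):

    # Semi-decider: by simulating, you can always *confirm* halting, but if
    # the program never halts this loop simply never returns, so it can
    # never confirm non-halting.
    def confirms_halting(run_one_step, initial_config):
        config = initial_config
        while config is not None:       # convention: None means "halted"
            config = run_one_step(config)
        return True                     # only ever reached if it halts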

One such consequence of the halting problem's undecidability is that ... there cannot be a general algorithm that decides whether a given statement about natural numbers is true or not.

What is this supposed to mean? It seems to me that I can quite easily prove, say, the statement that the empty set is a subset of the set of natural numbers. According to quote above I can't. I'm obviously misunderstanding something here. Can somebody clarify?

The quote doesn't say that you cannot do that. What it says is that there is not a fixed algorithm that decides the truth of all statements about natural numbers. (That is the meaning of "general".) That does not imply that you cannot find a proof for certain particular statements or a fixed algorithm for a certain small subset of statements. Compare it to the Halting Problem; there is no general algorithm that decides whether a program halts but it is still possible to prove for certain particular programs that they halt (or not). -- Jan Hidders 12:52, 31 Aug 2003 (UTC)

If the memory and external storage of a machine is limited, as it is for any computer which actually exists, then the halting problem for programs running on that machine can be solved with a general algorithm (albeit an extremely inefficient one).

How? Say S is the number of bits available on the machine. One way would be to keep track of another counter M of the same size, and see if it cycles back to zero. But can you run this Halt program on the same machine?

There exists a finite state machine that, given any program less than a gigabyte long, immediately returns whether the program will halt (we don't know that finite state machine, however; see below).

Where is the reference for this? -- User:MO 15 November 2003.

Question 1: Nobody said the program would run on the same machine.
Question 2: There is a finite number of programs smaller than a Gigabyte. We could simply take a list of all those programs, and write after each one 0 if it does not halt, and 1 if it halts. It's easy to make a finite state machine that returns a 0 or 1 depending on what we have written down. Andre Engels 01:52, 15 Nov 2003 (UTC)
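For what it's worth, here is a sketch of the brute-force decider for a machine with finitely many states, again using a hypothetical run_one_step function. With S bits of storage there are at most 2^S distinct configurations, so a run of 2^S + 1 steps without halting must have repeated a configuration, and a deterministic machine that repeats a configuration loops forever:

    def halts_on_finite_machine(run_one_step, initial_config, memory_bits):
        # Simulate for one more step than there are possible configurations.
        config = initial_config
        for _ in range(2 ** memory_bits + 1):
            if config is None:          # convention: None means "halted"
                return True
            config = run_one_step(config)
        return False                    # some configuration repeated: it loops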

I know this intuitive argument can be found in some books on the theory of computation, but I never liked it:

"the halting theorem itself is unsurprising. If there were a mechanical way to decide whether arbitrary programs would halt, then many apparently difficult mathematical problems would succumb to it."

It is wrong. The Perfect Number C-program is only 43 bytes long, so The Gigabyte Machine would decide it.

The halting problem for most of your brute-force search programs is solvable, computable. The existence of the Gigabyte Machine is proof of that (nonconstructive, unfortunately :-). Is this fact surprising? It shouldn't be.

The intuitive argument uses a magician's sleight of hand: it replaces "computable" with "mechanical way to decide". That, whatever it means exactly, is a completely different issue. -- Jan Tångring

What is the "Gigabyte Machine"? Maybe you should write an article about it? :-) — Timwi 21:47, 5 Apr 2004 (UTC)

I am referring to the finite state machine mentioned earlier in the article (search for "gigabyte" in the text) -- Jan Tångring

Quote: "So, after centuries of work, mathematicians have yet to discover whether a simple, 43-byte C program halts."

Is C older than we thought? ;-)

I am not sure the Perfect Number C-program is really a "less than a GB long" algorithm. Of course it can be written in less than a GB, but can it run with less? If the type "int" that is used is the classic one as in C, I bet the outcome of the program will be evident, because the numbers will quickly grow and cycle. There will be a moment where either the condition is fulfilled, or a previously existing state of the program occurs again, proving that the program never halts. If we don't want the numbers to cycle, then they must be arbitrarily long, so maybe needing more than a GB to be stored ;-)
Fafner 12:40, 2 Aug 2004 (UTC)

The article asks if this theoretical C program would halt:

int main(void)
{
   BigInt i, j, k; /* assume arbitrary-precision integers */
   for(i=j=k=1;--j||k;k=j?i%j?k:k-j:(j=i+=2));
   return 0;
}

Maybe if I could read it, I could tell you. The double-nested ?: expression is the worst C idiom I've ever seen; it looks like something from an obfuscated C contest. And why cram all that into the header of the for loop? I'm going to try to clean it up, but I might get it wrong, which is another reason why this code is so horrible in the first place... code like this does not belong in Wikipedia at all except as an example of obfuscated code. - Furrykef 16:13, 9 Sep 2004 (UTC)

Done. I think I "unpacked" the for loop correctly. Though I think we might be better served with a p-code example... - Furrykef 16:35, 9 Sep 2004 (UTC)
I agree, and I think that the twin primes example that's already on the page serves the purpose just fine. The obfuscated odd-perfect-number program is superfluous, and its obfuscation obscures the real point, which is that you can't tell if an odd-perfect-number searcher will terminate, whether or not it is obfuscated to the point of unintelligibility. -- Dominus 17:06, 9 Sep 2004 (UTC)

I simplified the proof of the halting problem, since Halt (or as I renamed it, Halts) really only needs to take one parameter -- the algorithm itself. Adding a second one is just confusing. Also, with modern programming languages, it would be simpler to have the code have functions passed around rather than strings; this also makes it easier to understand. I hope I didn't screw any of it up, though. - Furrykef 10:29, 12 Sep 2004 (UTC)

Oh, wait, I just realized my error. Yes, it does need the second parameter. - Furrykef 10:35, 12 Sep 2004 (UTC)

Limitations of this

I think it's important to note that the undecidability of the halting problem only holds when an infinite amount of memory is involved. Otherwise, it is decidable. A program can exist that can determine whether a program on a given computer system will ever terminate.

As an example, the perfect number problem can be answered. Eventually, the program is going to run out of memory. One of several things will happen, depending on the exact assembly language used by the compiler, the operating system, the C libraries, etc.:

  • The program causes an out-of-memory exception. Clearly, in this case, it terminates.
  • The program causes the operating system to freeze due to memory problems. Whether you define this as termination or running forever is up to you.
  • The program's BigInt library ignores the out-of-memory condition and continues running without increasing the integer size. i will be 0xFFFFFF...FFFF at this point. Adding 2 to i will result in 0x000000...0001. This causes i, j, and k to reset to the start state, which is an infinite loop.

As for the theoretical meaning of this... In Sketch of proof, assume that there is finite memory in the system. The proof essentially is the same idea as the sequence 1, -1, 1, -1 ... having no limit; the question of whether Trouble(Trouble) halts is equivalent to asking whether the "last" number in that sequence is 1 or -1. At some point, the infinite recursion that results from trying to decide whether Trouble() halts causes a stack overflow or other out-of-memory condition. At this point, the deepest copy of Trouble terminates with an error. The second to deepest copy of Trouble runs infinitely as a result. The third to deepest copy terminates, detecting this condition. This repeats up the "stack" until the top one either halts or does not halt, depending on the parity of the number of levels - there is no contradiction. (Note that there is no "stack", and that only one copy, the top copy, ever actually runs. The top copy is really just analyzing the lower ones, and in order to do that, it has to recursively analyze all subprograms called by its target.)

-- Myria 03:04, 24 Sep 2004 (UTC)

The fact that the halting problem is decidable for finite-memory machines has already been noted in the article. The practical consequences of this caveat are minimal, as the decision algorithm for finite-memory machines is fundamentally exponential-time. --Robert Merkel 03:55, 24 Sep 2004 (UTC)
  1. Not only exponential-time, but exponential-memory too right? This means that if you have a finite memory machine, you still cannot check, using this finite memory machine, to see if a program will halt or not on this finite memory machine.
This is related to the question of why abstract machines like Turing machines are even a practical model for algorithms, if they're more powerful than real computers. The answer is that:
  1. They can describe algorithms at once over all machines, regardless of how much memory they have; there is a maximum to this now, but it can rise arbitrarily in time. Algorithm results should be timeless.
Actually, I think a consequence of one of the laws of thermodynamics is that there is a finite amount of energy in the universe and, consequently, a finite amount of information. So "it can rise arbitrarily in time" is probably true from a human perspective, but our understanding of physics suggests that this is (from a universal perspective) flawed.  :) (I am not a physicist, though.) --pfunk42 (1 Jun 2005)
  2. They can describe algorithms without reference to any sort of machine-specific characteristics, such as the memory limit.
  3. They simplify the statement of algorithms; algorithms running on abstract machines are usually more general than their counterparts running on real machines, because they have arbitrary-precision datatypes available and never have to deal with unexpected out-of-memory conditions.
In short, you can certainly decide if an algorithm will halt on a given machine. It just takes a very long time, and in the end isn't worth much, because by the time it's done machines will have a million times or a billion times as much memory, and your algorithm has no hope of catching up. Derrick Coetzee 05:13, 24 Sep 2004 (UTC)
Okay. I think I understand your "fundamentally exponential time" bit, but that doesn't mean that many infinitely looping programs can't be decided as such rather quickly. In fact, it can be done proportional to the time it takes for the machine to repeat a state (which in some cases is rather fast), and with memory proportional to the memory used by the machine it's reasoning about. The "fundamentally exponential time" comes in because it can take exponential time with respect to the memory used for the machine to repeat a state, but "fundamentally exponential time" was confusing to me at first because you didn't specify what it's exponential with respect to--and it's only O(exponential) not Theta(exponential). Another interesting note: I actually implemented an infinite loop detector for Java that was quite effective for "tight" infinite loops. --pfunk42 (1 Jun 2005)
Any program which goes into a loop, literally, is easy to detect, just by remembering all its previous states and seeing if it hits one it's already hit. This means it's easy to detect if any program using bounded space enters an infinite loop. The problem is that Turing machines can use unbounded space, using more and more as they run, never entering a true loop, such as the twin primes example. A more practical problem is that trying to remember all the states of a real program so you can detect a loop requires more space than is typically available. To get around this people in model checking use Bloom filters. Deco 04:39, 27 July 2005 (UTC)
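A sketch of the state-remembering detector described above; it assumes configurations can be serialized to something hashable (run_one_step and initial_config are hypothetical, as before), and its memory use is exactly the practical problem that leads people to Bloom filters:

    def detect_halting_by_state_memo(run_one_step, initial_config):
        seen = set()
        config = initial_config
        while config is not None:       # convention: None means "halted"
            if config in seen:
                return False            # configuration repeated: infinite loop
            seen.add(config)
            config = run_one_step(config)
        return True                     # the program halted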

Reduction...

An anonymous user made an edit to the section of the importance of the halting problem, getting the proof by reduction part round exactly the wrong way (a common mistake). Could people check that I have got it correct? And is it too detailed for that part of the article? --Robert Merkel 10:28, 2 Nov 2004 (UTC)

Looks good. I'd suggest a concrete example, though, even a made-up one would help. Deco 23:47, 2 Nov 2004 (UTC)

Can Humans solve it?

I really don't think even humans can answer the halting problem with a definitive yes/no without being wrong, even if there is an infallible human being. First, suppose there is an infallible human being named Bob. Let's suppose we rewrite the program:

  function CheckIfHalts(program, input)
  {
     print(program, input);
     print("does this halt? (y/n)");
     return getch() == 'y';
  }

Bob sits in the back room doing nothing but analyzing the program and pressing y or n, and he's never wrong. We still have the contradiction - except that we end up presenting the contradiction to Bob instead of the Turing machine. Because we have already proven a logical contradiction, Bob has no choice but to say there is no solution.

This is all really just saying that the halting problem is a paradox with no solution, right?

--Caywen 20:00, 13 Mar 2005 (UTC)

You are correct, humans cannot solve it. Although the halting problem applies specifically to Turing machines, there are many who would argue that human beings ARE Turing machines (well, at least their brains are). It is a paradox with no solution, but a very special one: the significance is that there does not exist an algorithm which can determine whether or not other algorithms halt (or in your example, there exists no such person, and Turing definitively proved that non-existence). [no signature]
To clarify a point: the halting problem is not a paradox in the sense that it doesn't even make sense for a solution to exist. To the contrary, a program either halts on an input or it doesn't, and this is obvious. We just can't tell in general which it will do. There is an answer. Deco 04:35, 27 July 2005 (UTC)
Actually you forget that your function CheckIfHalts is not an algorithm that can run on a turing machine anymore, because it includes your more powerful human Bob. So if you ask Bob whether some program that includes him halts, he will just tell you that you are trying to fool him because the answer depends on Bob's free will. If Bob is not more powerful than a turing machine, then of course he can't solve it. It all depends on whether humans are more powerful than turing machines or not. (I think that question is unanswered) GoGi 12:28, 15 August 2005 (UTC)

Super halting problems

The idea is: suppose there's a magical machine that can solve the halting problem. The super halting problem is to decide whether this machine halts. It seems that this problem cannot be solved by the "standard" Turing machine or the magical one described earlier. A machine that could solve this problem could not solve its own halting problem. Again, a machine that could solve it couldn't solve its own halting problem. This leads to an infinite series of more and more powerful machines and "harder" halting problems. See http://www.scottaaronson.com/writings/bignumbers.html for a much better explanation. (search for the phrase "super halting problem")

Should the page mention this or have link to page dedicated for this?

Well, deciding if that particular machine will halt is easy. The answer is yes: because there is no Turing machine that neither halts nor doesn't halt, in order to be correct the halt-deciding machine must always halt. A more interesting question (the one you probably intended) is to ask how much "harder" the halting problem is on a Turing machine with an oracle for the halting problem. This is discussed in the Oracle machine#Oracles and Halting Problems section, to which we could reasonably link. Deco 00:14, 14 May 2005 (UTC)

wrong examples?

    function findTwinPrimeAbove(int N)
    int p := N
    loop
        if p is prime and p + 2 is prime
            return
        else
            p := p + 1

I think this example is wrong. If there were a proof, we would know the answer. I think this should be a function for which we know there is no proof, because if "all proofs" are known, then are there any functions left?

This is just offered as evidence why it makes sense for the halting problem to be impossible to solve, because it would show that there is an automatic procedure for solving hard open problems in mathematics. There are very few problems that have proofs that they cannot be proven or disproven, and it's not clear to me how a program could test any of these. Deco 22:24, 15 Jun 2005 (UTC)

Recognizing partial solutions

In this section it says:

Can we recognize a correct PHS when we see it?

I think this is intended to mean: "can we construct a Turing machine that will infallibly determine whether a program given as input is itself a correct PHS", but that meaning isn't clear from the above expression, and it took me some time analysing the following text to decide that this is what was meant.

However, if I am correct about the meaning of the question, the answer also seems trivially "no" - take any incorrect PHS and prepend code to find the first odd perfect number, for example - and it seems odd then that the section spends so long focussing on this apparently trivial question. To me, the more obviously interesting question to consider is, given that it is clearly possible to construct some Turing machines that are provably correct PHSs, how good can we make them? Hv 13:49, 12 August 2005 (UTC)

I agree with this. I'd like to see the discussion of the undecidability of this problem removed if there's consent. We can instead include a discussion of some well-known algorithms for partial deciding, such as those based on basic blocks and dead-code elimination that are used by compilers. Deco 22:32, 12 August 2005 (UTC)
Note that Busy beaver chasers need to solve exactly this problem, so their documents would be a useful starting point for research. Hv 04:00, 13 August 2005 (UTC)

Riemann hypothesis

I ran across an interesting fact at Divisor function. It appears that the famous Riemann hypothesis, one of the million-dollar problems, is false if and only if the following very simple Turing machine terminates:

function Riemann() {
    int n := 5041
    while true
        if sumOfDivisors(n)/(n*ln(ln(n))) ≥ e^y then
            return
        n := n + 1
}

Here, sumOfDivisors computes the sum of the divisors of n, e is the number e, y is the Euler-Mascheroni constant, and computations are done with sufficient accuracy to conclusively determine whether the inequality holds (it suffices to use range arithmetic and increase accuracy until the range does not contain the constant). We have enough examples already, and this one is a bit more complicated, but it's amazing how many important open problems can be expressed in this way. Deco 04:39, 22 August 2005 (UTC)
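For the curious, here is a rough version of the test inside that loop (plain floating point, so not rigorous; the pseudocode above rightly calls for range arithmetic). Robin's criterion says the Riemann hypothesis is equivalent to this inequality holding for every n > 5040, which is why the loop starts at 5041:

    import math

    EULER_GAMMA = 0.5772156649015329      # Euler-Mascheroni constant

    def sum_of_divisors(n):
        return sum(d for d in range(1, n + 1) if n % d == 0)

    def robin_inequality_holds(n):
        # sigma(n) < e^gamma * n * ln(ln(n)); a counterexample with n > 5040
        # would make the machine above halt (and refute the hypothesis).
        return sum_of_divisors(n) < math.exp(EULER_GAMMA) * n * math.log(math.log(n))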

Bad Proof?

This is code from article

 function trouble(string s)
     if halt(s, s) = false
         return true
     else
         loop forever

but I think that the correct form of trouble is (false replaced with true)

 function trouble(string s)
     if halt(s, s) = true
         return true
     else
         loop forever
You are mistaken. -- Dominus 20:38, 8 September 2005 (UTC)

/* limitations of a precise problem formulation */

Suppose we model "halt(x, i)" using an oracle with the restriction that only *one* call can be made to the oracle per program x (that is, x may do everything except encode a call to halt()). Is this oracle any different from the general halting oracle (where unlimited calls are allowed in one program)?

Assumption that Halt always halts?

On what assumption is the conclusion that halt always halts based? Hackwrench 00:50, 9 November 2005 (UTC)

In the case listed, halt can make the determination that trouble never returns, but it can never pass that information to trouble, i.e. it never returns.

I just figured it out. The assumption being made is that in order to produce output, an algorithm has to halt. This is not the case.

global does_halt

halt(i, j) { does_halt = false; infinite_loop; }

Hackwrench 01:15, 9 November 2005 (UTC)

Expanded argument and counters to refutation on kuro5hin.

http://www.kuro5hin.org/story/2005/11/8/221547/316

As you might have guessed, you're wrong. By definition, a function which solves the halting problem always returns true or false, or in other words always halts. We define it this way because a function which either decides if its input halts or never returns isn't very useful - it might only work on a few programs, and we want something that works for all programs (and the content of the theorem is that, as it turns out, that's asking too much). As the section "Recognizing partial solutions" notes, "There are programs that give correct answers for some instances of it, and run forever on all other instances." Deco 02:13, 10 November 2005 (UTC)

How is "return" a synonym for "halt"? Another issue I presented is that in order for trouble to be an algorithm, the definition of algorithm "trouble" must include the definition of "halt" so if "halt" halts, then "trouble" is immediately halted. Hackwrench 18:40, 10 November 2005 (UTC)

"halt" means "returns a result" because that's what it's defined to mean. You can use some other definition of "halt", but then you're not talking about the same thing that the article is.
A function that produces a result but that doesn't halt is useless, because the function's caller never gets a chance to examine the result. A function that halts but that doesn't produce a result is also useless; there was no reason to call it. So there is no reason to consider functions for which halting and producing results are different things. -- Dominus 18:54, 10 November 2005 (UTC)

But trouble as stated is not an algorithm. Even if we ignore the definition of algorithm on the [algorithm] page as always halting, the totality of trouble must include the portion assigned the label halt. If halt by definition always halts, then trouble always halts, and the portion beyond halt is not part of the algorithm trouble because it is unreachable. Hackwrench 00:00, 11 November 2005 (UTC)

No. When a function halts, that just means that it returns a value to its caller. When halt halts, its result is returned to trouble, which then can use the result in its own computations. -- Dominus 03:25, 11 November 2005 (UTC)

If that's the case, then trouble can never get passed to itself as the input because then the input becomes Trouble + result of halt. Hackwrench 14:45, 11 November 2005 (UTC)

function trouble(string s)

    if halt(s, s) = false
        return true
    else
        loop forever

is equivalent to

function trouble(string s)

    if s = false
        return true
    else
        loop forever

end function

trouble(halt(t,t))

It is not equivalent. Your second function doesn't even make sense, because it compares a string with a boolean. -- Dominus 16:03, 11 November 2005 (UTC)
Any boolean can be expressed as a string. My point is that the string is not a real parameter for trouble as it doesn't come into contact with trouble's inner workings.

Furthermore trouble doesn't know which state is "false" or which state is true. There are two states for trouble and there are two possible states for halt and it is a trivial exercise to show that the two map to each other. Hackwrench 14:45, 11 November 2005 (UTC)

It occurs to me that the fact that the only use that trouble makes of the result of halt is to decide whether or not to branch may be useful in this discussion, as you said that trouble can use the result of halt in its computations, but in fact it can use it in only one calculation, its branching calculation. Hackwrench 14:45, 11 November 2005 (UTC)

Consider

This is a field of Turing machines; all other Turing machines can be built from these.

0:0,0 does not return, so to find the answer, negate
0:1,1 returns: does not return
0:0,1 returns: returns
0:1,0 returns: returns
1:1,1 does not return, so to find the answer, negate
1:0,0 returns: does not return
1:0,1 returns: returns
1:1,0 returns: returns