In The Armchair

Gödel’s Theorem and Artificial Intelligence

Posted in Armchair Ruminations by Armchair Guy on December 22, 2006

In 1931, the mathematician Kurt Gödel published a paper that proved his famous incompleteness theorems. The first of these theorems has the following as one of its consequences: in any consistent axiomatic formal system rich enough to include arithmetic, there are true statements that cannot be proven as theorems. That is, there are statements which are true, but whose truth cannot be determined by any algorithm starting only from the axioms of the system.

Gödel’s theorem has had a lot of consequences, though it has not affected the development of most fields of mathematics or science. The theorem has been used by various people to claim various things, including that the Bible is incomplete, that God doesn’t exist, and that God must exist. This result has also been used by some people (most notably the mathematical physicist Roger Penrose) to claim that both Strong AI and Weak AI are impossible. I find this claim very interesting, and the arguments that are used are not easy to refute.

I think the arguments are so hard to refute because the natural languages we converse in (English, in this case) are sometimes not nuanced enough to clearly express our thoughts. We use the same phrase to mean two different things and confuse ourselves. The following attempts to address this by highlighting a potential flaw in Penrose’s argument. The flaw arises because Penrose fails to distinguish the knowledge that there exists an object having a certain property from the knowledge of a specific object having that property. That is, he fails to recognize that we may believe an object with the property exists without knowing what that object is.

Gödel’s theorems are notoriously hard to grasp. Part of the reason is the unfamiliarity of the setting in which the theorems are developed (first order languages). Luckily, there is a far more familiar setting in which Gödel’s theorems come into play under a different guise: that of algorithms and programs, which many people are familiar with today. So let us understand an analogue of Gödel’s first theorem in the setting of algorithms, and use this setting to make our arguments, instead of the mathematical logic setting Gödel originally used.

Algorithms and the Halting Problem

Rather than try to define exactly what an algorithm is, I assume that it is well-defined, and note that any algorithm can be encoded as a finite string. An algorithm can have an input and an output; if A is an algorithm with 3 inputs i1, i2, i3, then we use the notation o = A(i1, i2, i3) to indicate that o is the output of the algorithm when executed with the inputs i1, i2, i3. An algorithm can have any number of inputs, and the number of inputs itself needn’t be fixed. That is, A could have 3, 4, 66, 1,003,478, or any other number of inputs.

We all know that algorithms execute in a series of “steps”, and that a program which halts does so after a finite number of steps. We also know that programs sometimes don’t halt at all: an operating system hang, for example, a loop such as “while(1){}” in C, or a program written to halt only when it finds an odd number divisible by 2 (there is no such number, so it runs forever). The Halting Problem refers to the following question. Is there an algorithm A which, given another algorithm B as input, halts and returns 1 if B halts, and halts and returns 0 if B doesn’t halt? That is, A(B) = 0 if B doesn’t halt, and A(B) = 1 if B does halt. Important points to note:

  1. A(B) must halt — after a finite number of steps, it must stop.
  2. The answer it returns (0 or 1) must be correct.
  3. A() must be able to do 1. and 2. above for any program B.

The answer to this question is known: no such algorithm A can exist. In fact, something stronger holds:

HL1: For any algorithm A, there is an algorithm, call it G(A), such that either A doesn’t halt on input G(A), or A(G(A)) is wrong (0 if G(A) halts, 1 if G(A) doesn’t halt).

We write G(A) instead of just G to emphasize that G(A) depends on A.
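The construction behind HL1 is the classic diagonal argument, and it can be sketched in a few lines of Python. This is only an illustration (the names `make_G` and `A` are placeholders, not part of any real library): G(A) is built to do the opposite of whatever A predicts about it.

```python
def make_G(A):
    """Given a purported halting-decider A, build the program G(A) of HL1.

    G asks A about itself and then does the opposite: if A claims
    G halts, G loops forever; if A claims G doesn't halt, G halts
    immediately. And if A never answers, G never halts either.
    Either way, A fails on the input G.
    """
    def G():
        if A(G) == 1:        # A claims "G halts"...
            while True:      # ...so loop forever, making A wrong
                pass
        return               # A claims "G doesn't halt": halt at once
    return G
```

Running `make_G` against any concrete decider you care to write exhibits that decider's failure on one specific input; that no decider escapes this trap is exactly the content of HL1.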

Now suppose we insist on considering only algorithms A which either halt with a correct answer (they halt only when they have determined beyond doubt whether their input halts or not), or don’t halt at all. Let us call these sound algorithms: a sound algorithm never returns a wrong answer, no matter what its input. It can be shown that:

HL2: For every sound algorithm A, there exists an algorithm G(A) such that neither G(A) nor A(G(A)) halts.

This result is the analogue of Gödel’s first theorem in the setting of algorithms. Note the following:

  1. A is a sound algorithm.
  2. G(A) doesn’t halt; this is a fact.
  3. However, A cannot determine the above fact; A(G(A)) does not halt.
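A concrete (if feeble) sound algorithm makes HL2 vivid. The sketch below, with the hypothetical name `sound_A`, simply runs its input to completion: if the input halts, it correctly answers 1; if not, `sound_A` itself never halts, so it never answers wrongly. For this particular A, any non-halting program serves as a G(A): neither G(A) nor A(G(A)) halts, just as HL2 says.

```python
def sound_A(B):
    """A trivially sound halting 'decider': run B and see.

    If B halts, we know it halts, so we answer 1 -- correctly.
    If B doesn't halt, neither do we, so we never answer at all.
    Sound (never wrong) but hopelessly incomplete: this algorithm
    can never return 0, and on non-halting inputs it hangs forever.
    """
    B()          # this call returns only if B halts
    return 1     # so it is now safe to report "halts"
```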

We are now equipped to state the usual objection to AI based on this analogue of Gödel’s theorem.

The Halting Problem and AI

The usual objection to the possibility of Strong and Weak AI based on the halting problem argument goes as follows.

  1. Suppose I am a sound algorithm, say A.
  2. There is an algorithm G(A) which does not halt such that A(G(A)) does not halt.
  3. However, I know with certainty that G(A) does not halt.
  4. Therefore, since I am A, A knows with certainty that G(A) does not halt, and will perform the following:
    1. If the input B is not G(A), A can determine whether B halts in a finite number of steps; A does so and returns the correct answer.
    2. If the input is G(A), simply return 0 (for “does not halt”). This is correct.
  5. Thus:
    1. A is sound; we have not violated this assumption.
    2. A(G(A)) does halt.
  6. So A(G(A)) does not halt (from 2. above) and A(G(A)) does halt (from 5.2. above), a contradiction.
  7. Therefore 1. must be an invalid assumption.

Since the only assumption in the above is that I am a sound algorithm, and we have a contradiction, the assumption must be false; that is, I cannot be a sound algorithm.

This “proof” is slightly blurry in steps 4.1 and 4.2, in the sense that G(A) may not be unique. There will be many algorithms satisfying the requirements for G(A). However, this is a non-issue: we could have considered G(A) to be the class of all algorithms satisfying those requirements and used set membership instead of equality, and this issue would have disappeared.
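To see where the proof gets blurry, it helps to write out the structure of the “enhanced” algorithm that steps 4.1 and 4.2 describe. In this sketch both helper arguments are hypothetical, not things we know how to build: `decide_rest` stands for A’s supposed ability to settle every input other than G(A), and `is_counterexample` stands for the test “is B a G(A)?”, precisely the test questioned in the third objection below.

```python
def enhanced_A(B, is_counterexample, decide_rest):
    """Structure of the algorithm described in steps 4.1-4.2.

    Both helpers are assumptions the proof never justifies:
    - is_counterexample(B): does B belong to the class of G(A)s?
    - decide_rest(B): correctly decide halting for every other B.
    """
    if is_counterexample(B):   # step 4.2: recognize a G(A)...
        return 0               # ...and declare "does not halt"
    return decide_rest(B)      # step 4.1: decide everything else
```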


This “proof” conceals several points, which we deal with sequentially.

Objection 1. The first objection is to Step 1 in the proof; this is the most common objection to the proof. Why should we suppose that we are sound algorithms? We may be unsound algorithms. Recall that in the first statement of the halting lemma (HL1), we don’t know whether G(A) halts or not. So if we are an unsound algorithm A, we don’t know whether our G(A) halts or not, and the proof doesn’t go through at all. Penrose does try to defend the assumption that we are sound algorithms.

Objection 2. To move on to the next objection, suppose we grant that if we are algorithms, we would have to be sound algorithms. Is the proof correct in this case? This second objection has to do with what A really is. In steps 4.1 and 4.2 of the proof, we would appear to be modifying the algorithm A itself. Since we assumed “I am A”, such a modification may not be permissible. In any case, if A is modified to A’, then we need to be concerned with G(A’), not with G(A) any longer. However, this objection is weakened by the possibility that steps 4.1 and 4.2 could already be part of A, without any modification. That is, the behaviour of A is to check whether its input is G(A) or not, and base its actions on that test.

Objection 3. Next, we grant that A already includes 4.1 and 4.2 and as a consequence these steps do not constitute a modification to the algorithm. This brings us to what I think is the most crucial flaw in the proof. The proof involves the test: “if the input B is G(A)”. A crucial question is, would A be able to recognize G(A)? A knows that G(A) exists, but does not know what G(A) is. The knowledge of the existence of G(A) does not enable A to test whether B = G(A). Note also that, as noted below the proof, G(A) is not unique. Thus even if A could find one G(A) and compare B against it, this would not suffice: A would have to compare B against all possible G(A)s, and this could be an infinite set. Thus A would have to determine using some clever method (not simply by comparing against candidates for G(A)) whether B is a valid candidate for G(A). If we assume that A can do this in a finite number of steps, we now have two assumptions in the proof, and the contradiction at the end would only imply that at least one of them is wrong.


Several people have published various versions of this proof, but if it is essentially the proof above, the fallacy outlined in the third objection would seem to settle the issue. As pointed out at the beginning of this article, it is the free form of the English language that makes it so hard to pin down what is really going on. When we say we know that G(A) doesn’t halt, we imagine we also know what G(A) is; that is not the case here. This distinction, between knowing that “there is a G(A) such that G(A) and A(G(A)) don’t halt” and knowing of a particular G(A) that “G(A) doesn’t halt”, is the crucial one. Weak and Strong AI are still very much alive!

