Abstract
The active beginnings of modern Computer Science are somewhat diffuse and even controversial but, as we shall show, it is reasonable to consider the major active beginnings of Computer Science as being rooted in the works of one man, Alan M. Turing, starting with his epoch-making 1936 paper, On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, ser. 2, vol. 42, 1936; corrections ibid., vol. 43, 1937. This paper has been re-published by Dover Pub. Co. in the 2004 anthology The Undecidable, edited by Martin Davis. Some may not completely accept our opinion that Turing was the single fountainhead of the main ideas of computer science. However, one cannot fail to be impressed by the prescient quality of his wide-ranging research as displayed in his published papers and reports. At the very least, he was a leading thinker of the new discipline of computer science and contributed to both its hardware and software content.
Appendices
Appendix 1
Symbolic Logic, Computer Science and the Godel Incompleteness Theorem
The logicians B. Russell and A. Whitehead (R&W) in their monumental 1910 book Principia Mathematica attempted to derive all of mathematics within a formal system of symbolic logic, following an earlier similar attempt by the logician G. Frege. In a related approach, the formalist school of mathematicians, led by David Hilbert, also in the early 1900s, advocated a process for the mechanization, by formalization within a formal system of logic, of all of mathematics. According to this formalist process, all of mathematics (i.e. the theorems) is to be organized as one grand deductive logical system of formulas (called well-formed formulas, or wff’s for short) obeying precise syntactic structural rules, and the true statements of mathematics (the theorems) are to be represented symbolically by wff’s which are provable (mechanically derivable) in the logical system by formal proofs from appropriate axioms.
In a formal proof one proceeds step-by-step in an almost thought-free manner solely by applying the purely formal rules of the deductive logic system, a process which could seemingly be done by a computing machine if one existed. Indeed, some modern computer programmers have explored this possibility of proving some theorems by machine methods. The formalists’ doctrine, surprisingly proposed by a prominent mathematician like Hilbert, who had done many important informal mathematical proofs in the usual intuitive natural-language style, aimed to reduce all mathematical theorem-proving to purely formal symbol-manipulation computations with no intuitive understanding required of the interpreted meanings of the statements in the various steps of a proof. This proposal may have been suggested by the well-known quasi-formal proofs in abstract geometry presented in Euclid’s Elements (See Chap. 2 and section “The Decision Problem of Formalist Mathematics”). As remarked skeptically by other mathematicians, notably the eminent pure mathematician G. H. Hardy, such a program, if successful, would reduce all mathematical theorem-proving to a mechanical thought-free process of symbol-manipulation, one that could in principle be carried out by some kind of computing machine. Indeed, as we have said, modern computer scientists have tried to develop theorem-proving computer programs. Hopefully, by reading this Appendix 1 they may learn of their limitations. Of course, even in the formalist program some human thought would be required to set up appropriate axioms for each mathematical theory. This seemed quite possible since it had already been done for the theory of natural numbers N by the mathematician Peano (See below). Other theories using real numbers could presumably be based on N. So the formalist program seemed reasonable and it began with N.
In this Appendix 1, we shall describe how the then (1931) young mathematician Kurt Godel upset the formalist program by proving that its grandiose goal is not achievable even for N. He did this by displaying a wff which clearly is interpreted as a true mathematical statement about N (by an obvious interpretation in N) and yet is demonstrably not formally provable in the formalists’ well-accepted logical system. We wish to explain the essence of Godel’s result and its significance for Computer Science without getting distracted by the syntactic details of the encompassing logical system. Yet, we must describe the main features of that system to explain Godel’s result. Hence, we shall present the main features but omit many syntax details about the encompassing formal logical system, such details being either obvious or well-known to those readers who are familiar with at least one computer programming language, which in several ways can be viewed as a formal logical system. For those who need more detailed explanations we refer them to such standard references as D. Hilbert & W. Ackermann (H&A), Principles of Mathematical Logic, translated, Chelsea, 1950. We shall refer to the formal system in H&A as HA. Also see D. Hilbert and P. Bernays, Foundations of Mathematics, Springer, 1931. This appendix omits many of the details about the HA formal system (e.g. the complete syntax of wff’s), but we shall at least explain the motivation for how the formal logical system was constructed by H&A (following R&W) and how its deduction (inference) rules were intended to be used by the formalists as a “thought-free” machine-like proof mechanism.
Propositional Logic and Boolean Algebra
At the outset, it is important for us to recognize that although the wff’s in a symbolic-logic system on the one hand are treated purely as formal symbolic expressions having no meanings, on the other hand they are interpretable intuitively as meaningful statements that occur in natural-language treatises involving concepts of informal logic and mathematics. This is somewhat analogous to viewing a computer programming language, on the one hand, as a medium in which to simply write symbolic formulas (e.g. wff’s like assignment statements or Boolean expressions) as prescribed by a precise syntax but not giving much thought to their possible meanings while, on the other hand, interpreting these wff’s as natural-language statements about computations in some external domain like N. Furthermore, these interpreted statements, about N say, have an ordinary truth-value (True or False), which, as we shall see, is transferred back to their symbolic wff’s by a computable procedure.
The formal logic system HA has certain logic axioms, which are wff’s formalizing statements which are intuitively true under any natural interpretation. The axioms can be used as steps in formal proofs. As we have remarked elsewhere, Euclidean geometry serves as a prototype example of such a formal deductive system in which the meanings (i.e., in that case, the geometric interpretations) of the statements are not involved in the proofs. In fact, there are no syntactically precise wff’s in Euclid’s Elements. However, the points and lines mentioned in quasi-formal statements in proofs are to have no geometric meanings and are to be treated as abstract objects having only such properties as are conferred on them by Euclid’s axioms; e.g. any two points determine a line. Euclid’s proofs of theorems proceed from the axioms by applying computational deductive rules of logic. For a major example of a computational deductive rule used in a formal proof in any logical system we refer to the well-known modus ponens rule below.
For our purpose here, we do not need to give the complete syntax of well-formed formulas (wff’s) in the logic system. A few examples will convey the main features of wff’s. The interested reader can refer to Hilbert & Ackermann for a full account. Russell & Whitehead (R&W) and Hilbert & Ackermann (H&A) set up their formal logic systems starting with formulas called propositions. They begin with propositional variables p, q and r etc. which are meant to denote arbitrary natural language declarative sentences which are either true or false, for example “Socrates was a man.” or in a modern computer context a statement like “switch 1 is closed.” Syntactically, variables like p, q and r serve as atomic propositions, that is, they are the basic building blocks of propositional wff’s. Analyzing how mathematicians construct informal intuitive proofs, R&W and H&A postulate that they mainly use the logical connectives AND (denoted by &), OR (denoted by v) and NOT (denoted by ¬) to construct compound propositions represented by such wff’s as p&q, p v q and ¬p. They also use the wff p → q to denote an implication which formalizes the statement “p implies q” or equivalently “if p, then q.” More complicated wff’s are constructed without limit by substituting wff’s for (all occurrences of) a propositional variable p, q, r etc. in previously formed compound wff’s or by repeated use of the logical connectives, for example like (p → q) → r. To compute the truth-value of a proposition when given truth values for the interpreted statements assigned to the propositional variables in the wff it is only necessary to know how the connectives determine truth-values. This is easily given by the natural truth-tables for the connectives as follows (with truth = T and falsity = F):

p q | p & q | p v q | p → q | ¬p
T T |   T   |   T   |   T   | F
T F |   F   |   T   |   F   | F
F T |   F   |   T   |   T   | T
F F |   F   |   F   |   T   | T
Clearly, as shown, a conjunction formula p & q is true exactly when the variables p and q are both interpreted as true. A disjunction formula p v q (denoting the inclusive OR) is true if either p or q is true or both are true. This agrees with the truth values of natural language intuitive statements formed using these connectives. In passing, we observe that p → q and ¬ p v q have the same truth values.
Using these truth tables, for any propositional wff, f say, containing precisely the n propositional variables p(1),…, p(n) we can compute the truth value of f from an assignment of truth values to the variables p(1),…, p(n). Indeed, f defines a truth function of these n truth values. In the computation of the function’s truth value f(p(1),…, p(n)) we can regard the connectives as basic truth functions, or algebraic operations defined by their truth tables. Thus, in propositional logic, the truth of a wff under any interpretation of the atomic variables as natural language statements can be computed solely from the truth values of these interpreted statements. We do not need to consider the meanings of the statements. This situation arises in the design of computer switching circuits where there are n components, like transistors for example, which are either conducting (“on”) or non-conducting (“off”). We choose n proposition variables x(j), j =1,…,n to denote the statements “Component j is on.” and assign truth value T if it is “on” and F if it is “off”. It is then required to design a circuit which computes a prescribed truth function f(x(1),…, x(n)) which denotes the statement “The circuit output is on.”, which is true when the output line is “on” and false otherwise. The design of f can be done using algebraic methods known as Boolean algebra and hardware for f can be built using well-known circuits called gates for AND, OR and NOT.
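The truth-functional evaluation just described is easy to carry out mechanically. The following Python sketch (the function names are our own choice) treats the connectives as basic truth functions and tabulates a wff such as (p → q) → r over all assignments of truth values to its variables:

```python
from itertools import product

# The connectives as basic truth functions (T = True, F = False).
def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q
def IMPLIES(p, q): return (not p) or q  # p -> q has the same table as (NOT p) OR q

# Evaluate the wff (p -> q) -> r for every assignment of truth values
# to its three propositional variables: the wff defines a truth function.
for p, q, r in product([True, False], repeat=3):
    value = IMPLIES(IMPLIES(p, q), r)
    print(p, q, r, "->", value)
```

Note that no meaning of the interpreted statements enters the computation, only their truth values, exactly as described above.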
In fact, about 50 years before R&W and H&A, in 1854, the English mathematician George Boole published his book “An Investigation of the Laws of Thought” (republished by Dover Pub. Co., N.Y. 1958) in which he proposed an abstract algebra of propositions in which the connectives behave like algebraic operators. We now call this Boolean algebra and it is used in Hardware design (see Chap. 5, Appendix) and also is incorporated in modern programming languages along with the usual arithmetic operators + (addition), • (multiplication) and − (negation) (See below).
Boole also considered aspects of human logic reasoning dealing with subsets of some given set, for example subsets of the set of integers N. In this type of reasoning, the union of two subsets X and Y, denoted by X ⋃ Y, and the intersection X ⋂ Y arose in a natural way, as did the complement X′. The union X ⋃ Y arose for a proposition p v q where p is true for X and q for Y. Then p v q is true for elements in either X or Y or both, that is, in X ⋃ Y. Likewise, p & q is true for X ⋂ Y. Boole used 1 for true and 0 for false. Thus p and q were assigned one of the Boolean values 0 or 1. p and q were Boolean variables representing propositions that were either true or false in a proof in a propositional logic system. For a compound propositional wff W involving n proposition variables, W would have different truth values for different truth values assigned to the n variables. An arbitrary wff W in propositional logic defines a Boolean function when we restrict the values of the propositional variables in W to be either T or F, as when we are only interested in the truth values of W (rather than its meaning).
Boolean Algebra and Circuits
Boole recognized that the computation of truth values of an arbitrary wff can be done within an abstract algebraic setting involving two binary operations, + and •, corresponding to the OR and AND logic connectives v and & respectively, and a unary operation ′ corresponding to NOT (we assume that p → q has been replaced by ¬p v q). In modern mathematical terms, Boole’s algebraic setting is called a Boolean algebra. It consists of an abstract set B with binary operations + and • and a unary operation ′ satisfying the following axioms:
-
1.
For x and y in B, the elements x + y and x • y and x′ are in B;
-
2.
There are elements 0 and 1 in B such that x + 0 = x and x • 1 = x;
-
3.
x + y = y + x and x • y = y • x;
-
4.
x • (y + z) = x • y + x • z and x + y • z = (x + y) • (x + z);
-
5.
For any x in B there is an element x′ such that x + x′ = 1 and x • x′ = 0.
Axiom 1 is really not needed, since it follows from the definition of an operation in an algebra.
An example of a Boolean algebra is the set B = {0, 1} with the Boolean + and • operations defined as the usual arithmetic operations except that 1 + 1 = 1. Also, 0′ = 1 and 1′ = 0.
Another example is the set of all subsets of a given set X, with the + operation being union, the • operation being intersection, and x′ being the complement of x. Here 1 is X and 0 is the empty set.
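As a check on the two-element example, the following Python sketch (the operation names are ours) verifies the Boolean axioms by exhausting all elements of B = {0, 1}:

```python
from itertools import product

# The two-element Boolean algebra B = {0, 1}:
# + is OR (so 1 + 1 = 1), the product is AND, and ' is complement.
def b_or(x, y): return max(x, y)
def b_and(x, y): return x * y
def b_not(x): return 1 - x

B = [0, 1]
for x, y, z in product(B, repeat=3):
    # Axiom 3: commutativity.
    assert b_or(x, y) == b_or(y, x) and b_and(x, y) == b_and(y, x)
    # Axiom 4: both distributive laws.
    assert b_and(x, b_or(y, z)) == b_or(b_and(x, y), b_and(x, z))
    assert b_or(x, b_and(y, z)) == b_and(b_or(x, y), b_or(x, z))
for x in B:
    # Axioms 2 and 5: identities and complements.
    assert b_or(x, 0) == x and b_and(x, 1) == x
    assert b_or(x, b_not(x)) == 1 and b_and(x, b_not(x)) == 0
print("all Boolean axioms hold in {0, 1}")
```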
We have already mentioned the application of Boolean algebra to computer switching circuit design involving n “input” components x(1),…, x(n) which, like transistors (Chap. 5, Appendix), can be in either of two states “on” (= 1) or “off” (= 0). The components are usually to be connected through OR, AND and NOT gate elements so that there is a single circuit output f(x(1),…,x(n)) which is either on (= 1) or off (= 0) as required for the specified circuit operation (See the Hardware Chap. 5). Thus the circuit computes a specified Boolean function f. In principle, the definition of f can be given by a table of its values for the 2^n possible Boolean vector values (x(1),…,x(n)). Each row of the table will list the values of the x(j) variables (0 or 1) and an extra column for the specified value of f. We consider only those rows in which f has the value 1. A Boolean formula for f can be written as a disjunctive normal form consisting of sums (disjunctions) of products (conjunctions) y(1) • y(2) … • y(n) of n input variables y(j), where y(j) = x(j) if x(j) = 1 in that row and y(j) = x(j)′ if x(j) = 0. This conjunction will have the value 1 for the value of (x(1),…,x(n)) in that row. The sum of these conjunctions will have the value 1 exactly for those vector inputs where it is required and 0 otherwise. This combination of OR (sum), AND (product) and NOT gates will compute f. However, it is probably not the minimal circuit to compute f. Algebraic simplification according to the Boolean axioms may be needed. Also it may be better to use other types of gates such as NAND gates (See the Hardware Chap. 5).
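The disjunctive-normal-form construction just described can be sketched in Python; the `dnf` helper and its textual output notation are our own, and no gate-level simplification is attempted:

```python
from itertools import product

def dnf(f, n):
    """Build the disjunctive normal form of an n-input Boolean function f:
    one conjunction per truth-table row where f = 1, using x(j) if the
    input bit in that row is 1 and the complement x(j)' if it is 0."""
    terms = []
    for row in product([0, 1], repeat=n):
        if f(*row):
            lits = [f"x({j+1})" if bit else f"x({j+1})'"
                    for j, bit in enumerate(row)]
            terms.append(" • ".join(lits))
    return " + ".join(terms) if terms else "0"

# Example: the XOR of two inputs, true exactly when the inputs differ.
xor = lambda a, b: a != b
print(dnf(xor, 2))  # x(1)' • x(2) + x(1) • x(2)'
```

Each printed product term corresponds to one AND gate over possibly complemented inputs, and the top-level sum to a single OR gate.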
Propositional Logic and First-Order Predicate Logic
To carry out formal logical proofs with formal symbolic expressions, called wff’s, representing natural language statements, H&A first formulated the system called propositional logic for the propositional wff’s described above. These wff’s are interpretable as natural language statements which are either true or false. For doing formal proofs with propositions, H&A chose certain wff’s as axioms to serve as initial wff’s in a proof. The following four axioms were chosen:
Let U, V, W be wff’s which can be interpreted so as to have truth values. For example U, V and W can be propositional variables. The axioms are:
-
1.
W v W → W;
-
2.
W → W v V;
-
3.
W v V → V v W;
-
4.
(V → W) → ((U v V) → (U v W)).
Certainly, the first and third formulas are intuitively true for any interpretation of W and V as natural language statements. Therefore, they are said to be valid. Their truth value, TRUE, for any assignment of truth values to V and W can be established by truth tables as in the above section on Boolean algebra. The same holds for axioms 2 and 4. As written here, the axioms are really axiom schemas since U, V, W can be any wff’s.
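The claim that all four axioms are valid can itself be checked mechanically by the truth-table method. A Python sketch (representing the schema letters U, V, W as Boolean variables; the helper names are ours):

```python
from itertools import product

def implies(p, q): return (not p) or q

# The four axiom schemas, written as truth functions of their variables.
axioms = [
    lambda W, V, U: implies(W or W, W),                               # 1. W v W -> W
    lambda W, V, U: implies(W, W or V),                               # 2. W -> W v V
    lambda W, V, U: implies(W or V, V or W),                          # 3. W v V -> V v W
    lambda W, V, U: implies(implies(V, W),
                            implies(U or V, U or W)),                 # 4.
]

# A wff is valid (a tautology) if it is true under every assignment.
for i, ax in enumerate(axioms, 1):
    assert all(ax(W, V, U) for W, V, U in product([True, False], repeat=3))
    print(f"axiom {i} is valid")
```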
A proof is a sequence of wff’s starting with an axiom. At each step in a proof the next wff in the sequence (i.e. the next wff proved) is an axiom or is determined by either of two rules of inference applied to wff’s already in the sequence. The rules are
-
1.
The substitution rule, which allows any wff to be proved by substituting a wff for all occurrences of an atomic variable in a wff already proved
-
2.
modus ponens, which allows a wff Q to be proved if a wff P and the wff P → Q have already been proved in the current or previous steps of a proof.
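The modus ponens rule is itself a purely mechanical check over the wff’s proved so far. A minimal Python sketch, with wff’s represented as nested tuples (a representation of our own choosing, not part of the HA formalism):

```python
# A wff is a propositional variable (a string) or a tuple such as
# ("->", P, Q) representing the implication P -> Q.
def modus_ponens(proved, candidate):
    """Return True if `candidate` follows by modus ponens from the list
    `proved`: some P and the wff P -> candidate must both be in it."""
    return any(("->", p, candidate) in proved for p in proved)

# Example: from p and p -> q we may conclude q, but not r.
steps = ["p", ("->", "p", "q")]
assert modus_ponens(steps, "q")
assert not modus_ponens(steps, "r")
```

This thought-free, syntax-only character of the rule is exactly what made the formalists hope that proof could be mechanized.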
It is clear by induction on the length of a proof that the rules of inference at each new proof step preserve the truth of earlier steps. The axioms are true and modus ponens yields a true wff Q, since P and P → Q are true by the induction hypothesis. Therefore, starting a proof with a purely logical axiom we can only prove wff’s which are propositions that are true for all possible interpretations. This was not the formalist program. They proposed to do formal proofs of wff’s which are interpreted as mathematical statements, say about natural numbers for starters. To prove wff’s which are interpreted as mathematical statements in a formal system, the formal system must provide symbols and axioms of a mathematical type. In modern computer programming terms, it is necessary to have variables and operators of type “integer” if we wish to have formulas that refer to integers. In the formalist program, it is reasonable to begin with the mathematics of the natural numbers, N, and adjoin to the logic system symbols to denote numbers and wff’s to denote axioms about N, thereby extending the logic system to a formal system F(N) for N. Before doing this, it is necessary to further extend the propositional logic system to take into account general mathematical statements which contain variables that can take on values in a mathematical domain like N. This leads to an immediate extension of propositional logic called the first-order predicate logic.
By analyzing how logic is used in mathematical proofs, for example in Euclidean geometry and number theory, R&W and H&A were able to extend the propositional logic system to a more general logic system which formalizes the intuitive logic methods for mathematical domains like the non-negative integers N. As with the propositional logic system defined above, the first step of this extension process was the representation of mathematical language statements by abstract well-formed formulas (wff’s) according to a precise syntax. In mathematics, typical statements contain individual variables like x, y, z which range over the particular domain being investigated, for example the integers N. Statements have a subject-predicate structure like “x is P” where P is a predicate like “a prime number”; i.e. “x is a prime number.” To formalize such statements H&A adjoin to the propositional logic new symbols called predicate variables, say P, Q, R etc., which denote abstract predicates, and they then define predicate formulas (wff’s) like P(x) to symbolize the statement “x is P”. This functional notation indicates that P(x) denotes a propositional function, that is, for each value, c, substituted for the individual variable x the formula P(c) symbolizes the proposition “c is P”. Thus the interpretation of a wff symbol like P(x) is an abstract unary relation on (i.e. a subset of) some domain like N. It has no truth value. More generally, a wff like P(x, y) denotes a binary relation on some set like N, as for example is defined by the statement “x = y + 1”, and so on for n-ary relations for each number n. These statements do not have truth values. These are the atomic predicate wff’s. As with atomic propositions, the atomic predicate formulas are building blocks which can be combined by the logical connectives into abstract compound predicate wff’s. Thus, P(x) & Q(y) and P(x) v Q(y) are compound predicate wff’s.
In the formal system F(N) for N we also have specific atomic predicate wff’s like “x = y + 1”. The individual variables x and y in such predicate wff’s are said to be free variables. They can be assigned any value in a specified domain like N. Symbolically to represent such an assignment of a value to a variable x it is permissible to substitute a constant symbol, say c, for all occurrences of a free variable x in a wff, thereby creating a predicate wff like P(c), which does have a truth value when interpreted in F(N), as for example “c is a prime number.”.
Now let W(x) be a predicate wff containing the free variable x. Besides substituting a constant c for the free variable x in W(x) so as to obtain a truth value for the resulting wff W(c) it is also possible to obtain truth values by applying quantifiers to bind x, as is done in ordinary mathematics exposition. A free individual variable x in a predicate wff W(x) can be bound by being quantified by prefixing the existential quantifier ∃x, denoting “there exists x” or the universal quantifier ∀x denoting “for all x”. Then if there are no other free variables in W(x) the wff ∃xW(x) is a wff with no free variables and can be assigned a truth value. For example, we could have ∃x (x − 2 = 0). Such predicate wff’s with no free variables obviously do have truth values. We repeat that if x is not bound in W(x), x is called a free variable and its occurrences can be replaced in the usual way by substituting for them a value, that is, a symbol c denoting a constant in some interpreted domain, for example a number in N. But the predicate variables P, Q, R etc. cannot be quantified. This is the meaning of the adjective first-order. This completes the syntax of predicate wff’s. The resulting symbolic logic system with two more axioms given below is called the first-order predicate logic, in recognition of the property that predicate variables cannot be quantified.
For deductive proofs in first-order predicate logic H&A adjoin to the propositional axioms the following two predicate wff’s as axioms which specify how quantifiers can be used in proofs.
Additional Axioms of First-Order Predicate Logic
Let W(x) be a predicate wff containing the free variable x and let t be a term which denotes an element in some domain like N. Then the following are axioms:

1. ∀x W(x) → W(t);

2. W(t) → ∃x W(x).
Note: By a term t is meant a constant c or some combination of constants and operations (like 2 + 3) which denotes an element in some domain like N. The first axiom allows the universal quantifier to be eliminated in a proof. Thus if ∀x(W(x)) has been established in a proof step, then applying this axiom and modus ponens allows the derivation of W(t) in the proof. Similarly, if W(t) has been proven, then the second axiom and modus ponens allows the derivation of ∃x W(x). These are natural mathematical proof steps.
For predicate logic the four axioms of propositional logic still hold but with the wff symbols U, V, W now denoting predicate wff’s which have truth values. Further the rules of inference of propositional logic still hold but again for predicate wff’s as well as propositional wff’s.
The same two rules of inference are adopted to derive (i.e. prove) other predicate wff’s in a formal proof. These are
-
1.
The substitution rule, which allows any wff to be proved by substituting a predicate wff for all occurrences of an atomic variable in a wff already proved
-
2.
modus ponens, which allows a predicate wff V to be proved if predicate wff’s W and W → V have already been proved.
The resulting formal logic system is called the first-order predicate logic.
As with propositional wff’s it follows easily by truth table analysis using Boolean algebra that any predicate wff provable by logical deduction from wff’s which are intuitively true must likewise be true. We shall describe this behavior of truth by saying that truth is preserved by the logical proof system and call this property of the system truth-preservation. This property of the logic system is sometimes also called soundness. It is easy to show that the property holds for modus ponens. (If W is true, then W → V can only be true if V is true. So modus ponens yields only true V.) Since the axioms are valid (true for any interpretation), so are all provable predicate wff’s. This is an important metalogical property of the logical system that we certainly require if the system is to be of any value in proving theorems. It partly justifies the Hilbert formalist program. To completely justify it another metalogical property is required, a converse of truth-preservation, namely, that any wff which is valid (true by any intuitive interpretation) must be provable. If this property held, the logical system would be considered complete. Formal proofs of valid wff’s would be a mechanical symbol-manipulating computation. In his 1929 doctoral thesis, Godel showed that the first-order predicate logical system is complete. However, valid wff’s were not the wff’s of interest in the formalist program to mechanize mathematics. The program rather dealt with wff’s which are interpretable as true in particular mathematical domains. The first such domain to be formalized was the non-negative integers (natural numbers) N with its usual arithmetic. This was done by adjoining wff’s to denote statements about N and adjoining axioms for the natural numbers N to the logical axioms so as to construct a formalized number system F(N). The formalist dogma was that every wff in F(N) which is interpretable in N as a true statement about N is formally provable in F(N).
This would make N a domain similar to Euclidean geometry. It would also make F(N) a complete formal system for N in that for every true statement S about N there is a wff F(S) in F(N) which can be interpreted as S and which can be formally proved in F(N). In fact, Godel proved that F(N) is not complete, that is, it is incomplete, by displaying a wff S which is true in N but its formalization F(S) is not provable in F(N). He further showed that its incompleteness cannot be remedied, say by adjoining more axioms. As we shall see, this result has implications for problems in computation.
As noted above, in his 1929 doctoral thesis Godel established that the first-order predicate logic system is complete. From the way that any logical system works, if W is a provable formula, then ¬W is not provable, since W must be true by the truth-preservation property and so ¬W must be not true, hence not provable. This is also called a form of system consistency. Godel’s incompleteness result assumes that F(N) is consistent.
To actually formalize some branch of mathematics within a symbolic logic system it is necessary to adjoin symbols which denote mathematical objects in that branch, for example numbers in N, and to adjoin axioms which denote properties of these objects. A similar process is used to design a programming language for N. Once the purely logical axioms have been chosen for first-order predicate logic as explained earlier, it is natural to adjoin axioms for the arithmetic of natural numbers, N. One possible set of axioms for N are the well-known Peano axioms which employ the constant 0, the successor function s, addition +, multiplication • and equality =. These axioms are given by the following six purely arithmetic formulas and a seventh formula involving logic:
Peano’s Axioms
-
1.
∀x¬(0 = sx);
-
2.
∀x ∀y (sx = sy) → (x = y);
-
3.
∀x x + 0 = x;
-
4.
∀x ∀y x + sy = s(x + y);
-
5.
∀ x ∀y x • sy = x • y + x;
-
6.
∀x x • 0 = 0.
-
7.
Let W(x) be a wff interpretable in N and having a free variable x. Then
(W(0) & ∀x(W(x) → W(sx))) → ∀x W(x).
The intuitive interpretations in N of these axioms are quite obvious. For example, axiom 1 can be interpreted as stating that 0 is not the successor of any natural number. The Peano axioms have been found to be sufficient for deriving known properties of N. Three additional axioms are adjoined to represent the usual properties of equality. The result is a formal system for N. Let us continue to use F(N) to denote this formal system within first-order predicate logic with the Peano axioms adjoined.
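Axioms 3–6 define + and • by recursion on the successor function, and that recursion can be executed directly. A Python sketch (using ordinary integers to stand for the numerals 0, s0, ss0, …; the function names are ours):

```python
# The successor function s of the Peano axioms.
def s(x): return x + 1

# Addition and multiplication defined exactly by Peano axioms 3-6,
# by recursion on the second argument.
def add(x, y):
    return x if y == 0 else s(add(x, y - 1))       # x + 0 = x;  x + sy = s(x + y)

def mul(x, y):
    return 0 if y == 0 else add(mul(x, y - 1), x)  # x • 0 = 0;  x • sy = x • y + x

assert add(2, 3) == 5 and mul(2, 3) == 6
```

Every evaluation step here corresponds to one application of an axiom, which is why the formalists could regard arithmetic in F(N) as mechanical symbol-manipulation.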
Axiom 7 is the principle of mathematical induction to prove ∀x W(x). As in the axioms of predicate logic it is really an axiom schema since W(x) is an arbitrary predicate wff. Strictly speaking, there is an implied universal quantification of W, making this axiom look like a second-order wff. However, the universal quantifier ∀W is not permissible and is not part of the formal syntax. It enters informally by the interpretation of axiom 7. See the remarks below about the interpretation process.
What Godel did to show the incompleteness of F(N) was to arithmetize the well-formed formulas and the deductive proofs that take place in F(N) by using the formal arithmetic provided by F(N) itself. In this process, the formal objects of F(N) (i.e. the well-formed formulas and the proofs) are assigned numerical codings. His paper gives methods for the effective calculations of all the numerical codings. We shall sketch the key ideas. In the following, x, y and z are individual variables that can be assigned numerical values.
The first idea in Godel’s methods is to assign formal numerical values (we shall loosely say “Godel numbers”) to the formulas in F(N). Exactly how he does this we shall not describe but it is easy to see that it is by a simple effective calculation expressible in F(N), which first assigns numerical values to individual symbols. Then for a well-formed formula, say the symbol string f, he combines the numerical values for the symbols in f into a single number, G(f), called its Godel number. G is a computable function, or as Godel does it, a recursive function, which is a concept equivalent to Turing computability.
Second, he considers the set of all formulas Y(η) in F(N) which contain exactly one free variable η. For simplicity he uses the same variable η. For each such formula he defines its Godel number G(Y(η)) within F(N) in a way that permits the arithmetic statement “y = Godel number of Y(η)” to be expressed by a formula in F(N). This assigns G(Y(η)) to the variable y. Next, he gives a formula expressing that “Y(z) is the result of substituting z for the occurrences of the free variable in Y(η)”.
The final idea is to compute a Godel number G(pfX) for any proof pfX in F(N). A wff in F(N) is represented as a number computed from the sequence of the Godel numbers given to the sequence of symbols in the wff. Similarly, a Godel number of a proof pfX is computed from the sequence of numbers of the wff’s in the steps of pfX. This is all done in such a way that the intuitive arithmetic assignment statement “x = G(pfX)” can be expressed by a formula in F(N). These numerical values for the formal objects in F(N) arithmetize its formal aspects in a computable way. Thereby, the formalism of F(N) becomes a branch of ordinary arithmetic.
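Godel’s actual coding functions are more elaborate than we need here; a common textbook variant of the idea (not Godel’s exact scheme) codes a sequence of symbol codes by prime powers, so that unique factorization makes decoding effective. A Python sketch, with a symbol table of our own choosing:

```python
def primes(n):
    """First n primes, by trial division."""
    found = []
    cand = 2
    while len(found) < n:
        if all(cand % p for p in found):
            found.append(cand)
        cand += 1
    return found

def godel_number(symbol_codes):
    """Code a sequence of symbol codes c1, c2, c3, ... as the single
    number 2^c1 * 3^c2 * 5^c3 * ... ; unique factorization makes the
    coding effectively invertible."""
    g = 1
    for p, c in zip(primes(len(symbol_codes)), symbol_codes):
        g *= p ** c
    return g

# A toy symbol table for the tiny formula "0 = 0" (codes are our choice).
codes = {"0": 1, "=": 2}
print(godel_number([codes["0"], codes["="], codes["0"]]))  # 2^1 * 3^2 * 5^1 = 90
```

A proof, being a sequence of wff’s, is then coded the same way from the sequence of the wff’s own Godel numbers.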
Finally, Godel formulates a complex predicate expression Prf(x, y, z) in F(N), involving three free variables x, y, z, which has the following three-part interpretation (meaning):
- Part (1) y is equal to the Godel number of a formula Y(η) with one free variable;
- Part (2) Y(z) is the formula obtained by substituting z for η in that formula Y(η);
- Part (3) x equals the Godel number of a proof pfX of Y(z) in F(N).
Part (1) might be a formula such as “y = G(Y(η))”, where G(Y(η)) is the Godel number of a wff which contains exactly one free variable, η.
Part (2) gives a Godel number for the syntactic result Y(z) of substituting z for η in the formula Y(η) given by Part (1).
Part (3) obtains the Godel number for Y(z) from Part (2), decodes it as the wff Y(z), computes a Godel number for a proof, pfX, of Y(z), and asserts that x equals this Godel number, i.e. x = G(pfX).
Of course these high-level formulas and interpretations have recourse to various predicates having lower-level interpretations based on F(N) syntax and inference rules, such as “pfX is a proof of Y(z)”, which will usually involve definitions by induction on the length of a well-formed proof sequence pfX ending with Y(z). In effect, Godel arithmetizes the syntax of formulas in F(N) and the construction of formal proofs in F(N) by defining suitable numerical predicates in F(N) and uses these to formulate the predicate Prf(x, y, z). We assume, as has been checked by others, that Godel’s paper defines these numerical predicates correctly. Hence, we assume that Prf(x, y, z) is correctly and effectively constructed and the three-part interpretation of Prf(x, y, z) is as specified above. The rest of Godel’s proof can proceed directly as follows (but see the addendum at the end of this appendix).
It may be helpful to observe that Prf(x, y, z) as interpreted above sets up a two-dimensional array of wff’s indexed by the integer-valued pairs (y, z), where the row indices y are Godel numbers which designate one-variable predicate wff’s in F(N) and the column indices z designate numerical values to be substituted for the variable in these wff’s. Since the particular free variable in a one-variable predicate may be any variable, we can choose a convenient one to use in all. In a way, the rest of Godel’s proof can be viewed as applying a version of the classical Cantor diagonalization procedure to this array as follows.
Consider any value of row index y and obtain its designated one-variable formula, say Y(η), essentially by inverting the function G. Thus, y = G(Y(η)). Now take the column index z to equal y (thereby selecting the diagonal elements of the array) and consider the wff Y(y) obtained by substitution of y for η as given in Part (2) of the interpretation above. This yields the “diagonal” cases of the predicate Prf(x, y, z), namely, the predicate formula Prf(x, y, y). These cases will be used to generate an unprovable formula. Godel proves that one special instance of the various wff’s Y(y) is not provable in F(N). He considers the special one-variable predicate wff U(y) in F(N) defined as
U(y) ≡ ¬∃x Prf(x, y, y)
which is interpreted as meaning that there is no proof Godel number, x say, of any wff Y(y) in which y equals the Godel number of Y(η). Taking one final step, Godel then computes the Godel number u of the predicate U(y), namely, u = G(U(y)), and constructs the specific formula U(u) given by
U(u) ≡ ¬∃x Prf(x, u, u)
Formula U(u) has no free variables since u is a Godel number. The interpretation of U(u) states, by carefully unwinding the steps in the above interpretations (1),(2), (3) of Prf (x, u, u) that there is no proof Godel number x of U(u). So U(u), interpreted, asserts its own unprovability! As Godel pointed out, this self-referential property of U(u) is not new to logic. It was known in traditional logic as some variation of the liar’s paradox statement “I am lying” (Dear Reader: Can you decide if the liar is lying or telling the truth? Is his statement true or false? Either decision will lead to a contradiction).
In fact, the interpretation of U(u) is a metatheorem about provability, or rather a lack of it, in F(N). It is the essence of Godel’s incompleteness result. The truth of the metatheorem is easy to establish mathematically by a standard, simple, informal proof by contradiction, as Godel did, and as we now do.
Godel’s Incompleteness Theorem Let u be the Godel number of formula U(y) above. The formula U(u) given by ¬∃x Prf (x, u, u) is true by interpretation but not provable in the formal system of number theory F(N). Likewise, the negation of U(u) is not provable (i.e. U(u) is undecidable).
Proof Assume that the formula U(u) is provable in F(N). Then by truth-preservation the interpretation of U(u) is true. But the interpretation of U(u) asserts its own unprovability. Hence, U(u) is not provable. This is a contradiction i.e. the assumption that U(u) is provable implies U(u) is not provable. Therefore, by classical logic, U(u) is not provable. Furthermore, this unprovability is exactly the interpretation of ¬∃x Prf (x, u, u), so that this formula (i.e. U(u)) is true. So F(N) is incomplete.
Finally, consider the negation ¬U(u). Since U(u) is true, ¬U(u) is not true. Therefore, ¬U(u) is not provable. #
Beyond Computation. Formal Syntax and Informal Interpretation
This theorem has raised questions about differences between the human brain and machine intelligence, a subject which interested Turing. E. Nagel and J. Newman in their book Godel’s Proof, N. Y. University Press, 1958, find in this theorem a rejection of machine intelligence. The physicist Roger Penrose in his 1994 book Shadows of the Mind conjectures that human consciousness is beyond computation (as defined and limited by the Church-Turing thesis) and speculates that, by some as yet unrecognized quantum-mechanical processes, the brain may be able to perform tasks beyond discrete algorithms, tasks that exceed the capability of Turing machines, such as establishing the truth of ¬∃x Prf (x, u, u). Ongoing research on quantum computers (Chap. 14) may be trying inadvertently to exploit this possibility.
The Godel theorem on incompleteness of F(N) ended Hilbert’s formalist program. It may be thought to cast some doubt on the adequacy of formal deductive proofs as implemented and limited by the Church-Turing thesis type of computations. However, our presentation of the Godel theorem does not force us to such drastic conclusions. As we presented it, the key to the theorem is in drawing a careful distinction between the formal syntactic aspects of the logical system F(N), which can be programmed into Turing machines, and the semantics of F(N), which we have associated with its interpretation. The interpretation process is not subject to the Church-Turing thesis. It can possibly be treated as a complicated mapping from F(N) wff’s onto natural language statements and their subsequent truth analysis using intuitive number theory N. This mapping has not been restricted to Church-Turing computability. To completely define the interpretation mapping would entail some computable constructions which characterize deductive proofs, as Godel has done, and furthermore satisfy the principle of truth-preservation. But more is involved. One of the elusive issues in defining the interpretation mapping is that of specifying the semantics of truth, especially where natural language is concerned. Natural language can be ambiguous, making truth analysis inexact. This issue has been studied by linguists and by various schools of logic, especially the intuitionist school of logic, led by L.E.J. Brouwer, who raised questions about classical logic in his paper The untrustworthiness of the principles of logic, 1908, Tijdschrift voor wijsbegeerte. For example, the intuitionists do not accept the classical law of the excluded middle, which posits that any meaningful statement P is either true or false, which means that P OR ¬P is always a true statement. More precisely, for intuitionists P XOR ¬P is true, where XOR is the exclusive or connective in which there is no middle true position (i.e. P XOR Q is not true if both P and Q are true, unlike the inclusive P OR Q, which is true if both P and Q are true). In particular, they do not accept this law for a statement P concerning an infinite set, which they regard, as Gauss did, as an incomplete entity. See S.C. Kleene, Introduction to Metamathematics, 1950, van Nostrand, for further discussion of intuitionism. Although fascinating, we shall not consider these issues further. We shall accept classical logic in our study of Computer Science and employ classical logic notions of mathematical truth such as the P OR ¬P law.
Hilbert died in 1943. It is rather ironic and sad to read the inscription on Hilbert’s tombstone, taken from an address that he delivered upon his retirement in 1930. He insisted that there are no unsolvable problems in mathematics and science. He said:
Wir müssen wissen. (We must know.)
Wir werden wissen. (We will know.)
Only days before his address, and unknown to him, the young Kurt Godel had announced, at a symposium on the foundations of mathematics, the Incompleteness Theorem described in this appendix. There are undecidable statements which we can never “know” by proving them.
Addendum. Actually, the preceding explanation of Godel’s proof is lengthier than Godel’s own explanation of his proof. The latter can be read in the English translation of Godel’s paper by Elliott Mendelson in the book “The Undecidable”, edited by Martin Davis, Dover Publishing Co, 2004.
Godel sketches the main ideas of his proof at the beginning of his paper before plunging into the detailed constructions of the various wff’s involved. We quote some of his original statements with our own added parenthesized remarks. Thus, “For metamathematical considerations it makes no difference which objects one takes as primitive symbols…we decide to use natural numbers…a formula is a finite sequence of natural numbers … and a proof-figure is a finite sequence of finite sequences of natural numbers… the concepts formula, proof-figure [lines of a proof], provable formula are definable within the [formal] system…we obtain an undecidable proposition A for which neither A nor not-A is provable.” A formula with exactly one free variable and of the type of the natural numbers will be called a class-expression, “[these are] ordered in a sequence… R(n)…of class-expressions and the ordering R can be defined in the formal system.” “let A be a class expression… let {A:n} be the formula which arises by substitution of [the symbol for] the number n for the free variable. The ternary relation x = {y; z} is definable in the formal system … [z being a variable of type integer, y a variable of type class expression and x a variable of type wff] … Define a set K of natural numbers by
n ∈ K ≡ ¬ Bew {R(n): n}    (†)
where Bew f means f is a provable [Beweissbar in German] formula. Since the concepts occurring in the definiens [right side of (†)] are all definable in the system, so also is K …i.e. there is a class expression S such that {S:n} intuitively interpreted says that n belongs to K. …S is identical with some R(q) … there is not the slightest difficulty in writing down the formula S.”
“We now show that the formula {R(q): q} is undecidable…. for if it is assumed to be provable then it would be true… [i.e. {S: q} would be true] so that q belongs to K … by (†) ¬ Bew {R(q): q} would hold, contradicting our assumption [that {R(q): q} is provable]. …If the negation ¬ {R(q): q} is provable [second assumption], i.e. [¬ {S: q} is provable, hence true, which means that q does not belong to K], that is, Bew {R(q): q} would be provable [hence true], which is again impossible [contradicting our second assumption].” “…there is a close relationship with the Liar paradox.” “… {R(q):q} says that q belongs to K; i.e. by (†) {R(q):q} is not provable. Thus, we have a proposition before us which asserts [by its interpretation] its own unprovability…This method of proof can obviously be applied to every formal system which first possesses sufficient means of expression…to define the [system] concepts (especially the concept provable formula) and secondly in which every provable formula is true… we shall replace the second assumption by a purely formal and much weaker assumption.”
Appendix 2
The Lambda Calculus
In Chap. 3, we presented Turing’s theory of computation based on his machines as first introduced in his 1936 paper on the Entscheidungsproblem (Decision problem). As noted in several chapters, the Turing machine pioneered many ideas adopted in modern digital computers and is probably more important for that achievement than for its original purpose of proving the unsolvability of the Decision problem. Recall that Turing’s paper was preceded by a few weeks by a paper by the American logician Alonzo Church proving the same unsolvability result by a quite different method called the lambda calculus. In this appendix, we give a short summary of the lambda calculus. Besides its original purpose of defining an effective procedure to be used in proving the unsolvability of the Decision problem, it turns out that many ideas in the lambda calculus correspond to techniques employed in modern programming languages, for example techniques for substituting values for variables in procedure calls (see Chap. 4).
As described in Appendix 1, in the early years of the twentieth century there was a very active development of formal logic. The propositional calculus was a formal system developed to formalize informal mathematical proofs involving declarative statements with no variables. The predicate calculus was a formal system developed to formalize mathematical proofs involving statements containing predicate expressions, that is, expressions formed with individual variables. These logical calculi prescribed precise formulas, called well-formed formulas (wff’s), to represent informal mathematical statements used in proofs of theorems and furthermore specified precise rules for manipulating wff’s in formal proofs. The lambda calculus is a formal system devised by Alonzo Church and his Princeton graduate students S.C. Kleene and J.B. Rosser in the 1930s and 1940s to represent mathematical functions by wff’s called lambda expressions and to prescribe rules for manipulating such wff’s in a manner that corresponds to the various ways functions are manipulated in calculations found in informal mathematical treatises. The lambda calculus was supposed to include all effectively calculable functions, that is, all mathematical functions that could actually be calculated by obviously executable means.
The concept of a mathematical function had been in general use by mathematicians for many years before the 1930s but, surprisingly, without a commonly accepted notation for functions and without precise rules for manipulating functions in calculations. Functions were often defined by algebraic formulas, for example by polynomials in one variable such as x 2 + 2x + 1. A typical phrasing could be as follows: “let f be a function of the variable x defined by f(x) = x 2 + 2x + 1”. The notation “f(x)” was of fairly recent origin. An alternate common phrasing, still in use, could be: “let f be a function such that f: x → x 2 + 2x + 1”. This suggests that f is conceived of as a mapping from a domain set D into a range set R, stated as f: D → R, where x is in D and f(x) is in R. Some texts included heuristic two-dimensional diagrams depicting areas D and R with arrows on directed paths drawn from D to R to indicate the direction of the mapping. It was understood that there could only be at most one directed path emanating from any point x in D and ending at a point y in R. These informal notations are adequate for functions f of a single variable which has a well-defined domain D and range R, for example subsets of the real numbers. We can then think of “applying” f to a value d in D to obtain a value f(d) in R. The procedure of function application was the main abstract procedure performed with functions. If f(x) is defined by an algebraic expression, E say, involving the variable x, then application is effectively executed by substituting d for x in the expression E and carrying out the indicated algebraic operations in E. We shall denote this substitution calculation by E(x → d) and define the application by
f(d) = E(x → d)
For functions f of two or more variables, the defining expressions are more complicated and the process of applying f is more complicated. For example, let f(x, y) = x + y. We can apply f to an ordered pair (2, 3) to get the value f(2,3) = 2 + 3 = 5, if it is agreed implicitly that the ordering (2, 3) corresponds to (x, y). However, we can also apply f to (x, 3) to get f(x, 3) = x + 3, which is another function. So application of a function to arguments can yield other functions as results. Furthermore, mathematicians of that era were already considering higher-type functions. For example, for any integrable function f(x) the integral I(f) = ∫_a^b f(x)dx defines a function I of f which has a real numerical value I(f). Even more generally, in some mathematical contexts there arise functions F, called transformations, which map a function f onto another function g, so that F(f) = g. A formal system which aims to provide a calculus for specifying and manipulating arbitrary mathematical functions must take the higher-order functions like F into account. This is the ultimate goal of the lambda calculus. A concomitant goal of Church was to restrict the manipulations on function wff’s to be “effective calculations” on symbol strings representing functions, where by “effective calculations” he meant such as would be acceptable to the Hilbert formalist school as parts of a decision procedure. However, he had to be careful that the restrictions would not limit the class of decision procedures by excluding some that, although complicated, should be clearly admissible. With the examples of the propositional and predicate calculus as a guide, Church devised the formal system called the lambda calculus as a system to provide wff’s to represent arbitrary mathematical functions and to provide rules for formally manipulating these wff’s in a manner which corresponded to the way functions are manipulated in informal procedural mathematics.
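The single-variable application procedure f(d) = E(x → d) described above can be sketched in Python, holding E as a text string and performing the substitution literally (a sketch only; Python's eval stands in for "carrying out the indicated algebraic operations"):

```python
def apply_by_substitution(E, var, d):
    """Compute E(var -> d): substitute the value d for the variable var
    in the expression string E, then carry out the indicated operations."""
    substituted = E.replace(var, "(" + repr(d) + ")")
    return eval(substituted)  # evaluates the resulting arithmetic expression

# f(x) = x^2 + 2x + 1 applied to d = 3: substitute to get (3)**2 + 2*(3) + 1
print(apply_by_substitution("x**2 + 2*x + 1", "x", 3))  # -> 16
```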
If the wff’s and rules of the lambda calculus were properly formulated, then Church hoped and hypothesized that an effectively calculable function would be one expressible as a wff in the lambda calculus and an effective procedure acceptable to the Hilbert formalists would be available in the manipulations of such wff’s. When Turing proved, in an appendix to his original 1936 paper, that Turing machines and lambda-calculus effective procedures are equivalent in that they yield the same class of procedures, this supported Church’s hypothesis, and it became the Church-Turing thesis that all computable functions are given either by Turing machines or the lambda calculus.
Syntax and Rules of Derivation of the Lambda Calculus
The main ambiguities in traditional function notation were the designation of how variables are used as symbolic arguments in function expressions E and the rules for applying a function expression E to actual arguments so as to obtain a function value. The preceding examples illustrate this ambiguity problem in the simple case where E is an algebraic formula. It was the main problem addressed by the lambda calculus. To explain Church’s solution, let us consider the syntax of lambda calculus wff’s expressing functions. We denote the set of such function wff expressions by Λ. We define it recursively as follows:
We assume a starting set of symbols called variables, say x, y, z etc..
1. Certainly a variable x is in Λ (these are the simplest function wff’s).
2. To obtain more complex function expressions, say using more variables, we suppose that E and F are in Λ. Then the application of E to F, denoted by the concatenation (EF), is also in Λ (parentheses can be omitted since application is assumed to be left-associative, so that EFG means (EF)G. Thus, xyz is a wff involving three variables).
3. Let E be in Λ. Let x be a variable. Then the expression (λx.E) is in Λ. The operator λ is called the abstraction operator (it serves to designate possible variables x in E for which substitutions of “values” are to be made in evaluating E. It makes explicit which variable enters into an application procedure as stated in the beta-conversion procedure below. The scope of the abstraction operator is defined to be maximal, so that λx.EF means λx.(EF) and not (λx.E)F. For multivariate functions we use λx.λy.λz.E, abbreviated as λxyz.E. λx.E binds the variable x in E. All other variables in E that are not bound by an abstraction are free with regard to substitution. A variable is bound by its nearest abstraction operator).
There are two symbol-manipulation rules of the lambda calculus.
- Alpha-conversion: change a bound variable x in λx.E to any other variable y, provided y does not already occur in E (this makes clear the role of bound function variables as place holders).
- Beta-conversion (or reduction): Let E and F be lambda expressions in Λ. Suppose E contains the variable x free and F does not. Then (λx.E)F can be converted (reduced) by substituting F for x in E to get E(x → F) (this rule formalizes the application of a function to a specified argument and the evaluation of the result. Its inverse applied to E(x → F) yields the abstraction).
As a simple but interesting example, consider λx.x. Which function does it represent? Just apply it to any expression F and use beta-conversion to get (λx.x)F = x(x → F) = F. So λx.x represents the identity function.
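Python's own lambda notation models this directly (a sketch: Python applies functions eagerly rather than by symbolic reduction, but the results agree with beta-conversion):

```python
identity = lambda x: x   # λx.x

# (λx.x)F = F for any F:
print(identity(42))           # -> 42
print(identity(identity)(7))  # -> 7, since (λx.x)(λx.x) = λx.x
```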
Example: Suppose that the integers 2 and 7 have been encoded as lambda expressions (see below). Let the product 2*n also be encoded as a lambda expression, say λn.2*n. Then (λn.2*n)7 reduces by beta-conversion to (2*n)(n → 7) = 2*7 = 14.
This illustrates how symbol manipulation occurs in the lambda calculus formal system. Since every lambda expression denotes a function, in order to deal with domains like the integers some lambda expressions must be used to represent the integers n = 0, 1, 2, …. Church suggested that an integer n be represented by the n-fold composition f^n of any function f with itself. Here f^2(x) = f(f(x)) and f^n(x) = f(f(…f(x))…) with n factors f. Also f^0 is defined to be the identity function. Specifically he defined
0 ≡ λf.λx.x,  1 ≡ λf.λx.f x,  2 ≡ λf.λx.f(f x),  …,  n ≡ λf.λx.f^n(x)
Applying these lambda expressions to f and then to an expression E which does not contain f, and using beta-conversion, we get 0 f E = E; 1 f E = f(E); 2 f E = f(f(E)), etc., as in ordinary composite function notation. The exponential behavior of n in f^n yields the definition of addition, m PLUS n, namely, f^(m+n) = f^m f^n. Then multiplication MULT is defined as repeated addition. Thus, m MULT n means add up n, m times. A lambda expression which abbreviates multiplication is MULT ≡ λm.λn.λf.m(n f).
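Church numerals and the arithmetic just sketched transcribe directly into Python lambdas (a sketch; to_int is a decoding helper for inspection, not part of the calculus):

```python
ZERO = lambda f: lambda x: x                     # 0 = λf.λx.x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))  # successor: one more f
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # f^(m+n) = f^m f^n
MULT = lambda m: lambda n: lambda f: m(n(f))     # MULT = λm.λn.λf.m(n f)

def to_int(n):
    """Decode a Church numeral by applying it to (+1) and 0."""
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(PLUS(TWO)(THREE)))  # -> 5
print(to_int(MULT(TWO)(THREE)))  # -> 6
```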
To facilitate the use of the rather abstract symbolism in the lambda calculus it is convenient to introduce other abbreviations. By convention, the following two abbreviations (known as Church booleans) are used for the boolean values TRUE and FALSE:
TRUE ≡ λx.λy.x    FALSE ≡ λx.λy.y
Then, with these two abbreviated λ-expressions, we can define some basic logic operators as follows:
AND ≡ λp.λq.p q p    OR ≡ λp.λq.p p q    NOT ≡ λp.p FALSE TRUE
To verify that these expressions yield the usual Boolean values, consider the Church encoding for pairs, PAIR, which encapsulates the ordered pair (x,y) and the abbreviations FIRST for the first member x and SECOND for the second member y of a pair. By beta conversions it can be verified that FIRST returns the first element of the pair, and SECOND returns the second.
PAIR ≡ λx.λy.λf.f x y    FIRST ≡ λp.p TRUE    SECOND ≡ λp.p FALSE
By applying the abbreviated logical operator expressions for AND etc. to the abbreviation for a (TRUE FALSE) pair the reader can verify that the logical operators satisfy the usual truth tables. Thus, for example, by beta conversion,
AND TRUE FALSE = TRUE FALSE TRUE = FALSE
For numerical functions, the following expression defines the predicate ISZERO: ISZERO ≡ λn.n(λx.FALSE) TRUE. Just apply it to any integer n as defined above to verify that ISZERO n is TRUE for n = 0 and FALSE for nonzero n.
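The Church booleans, logical operators, pairs and ISZERO above all translate into Python lambdas as well (to_bool is a decoding helper, not part of the calculus):

```python
TRUE  = lambda x: lambda y: x        # λx.λy.x
FALSE = lambda x: lambda y: y        # λx.λy.y
AND   = lambda p: lambda q: p(q)(p)  # p AND q = p q p
OR    = lambda p: lambda q: p(p)(q)  # p OR q  = p p q
NOT   = lambda p: p(FALSE)(TRUE)

PAIR   = lambda x: lambda y: lambda f: f(x)(y)
FIRST  = lambda p: p(TRUE)           # selects the first component
SECOND = lambda p: p(FALSE)          # selects the second component

ZERO   = lambda f: lambda x: x
ISZERO = lambda n: n(lambda _: FALSE)(TRUE)  # any application of f yields FALSE

def to_bool(b):
    return b(True)(False)

print(to_bool(AND(TRUE)(FALSE)))          # -> False, as in the truth table
print(to_bool(FIRST(PAIR(TRUE)(FALSE))))  # -> True
print(to_bool(ISZERO(ZERO)))              # -> True
```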
By setting up such abbreviations for familiar mathematical and logical constructs, one can become familiar with the use of the lambda calculus as a formal system for manipulating the usual mathematical and logical functions and apply it to the decision problem which deals with proofs in predicate calculus. Rather than pursue this strategy further to obtain Church’s unsolvability result, we shall now consider other concepts of the lambda calculus which have been important for programming languages.
A structure that is useful in computer programming is the linked list or simply list. A list of elements (such as integers) can be defined as either NIL for the empty list, or the PAIR of an element and a smaller list. The predicate NULL tests for the value NIL.
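Such Church-encoded lists can be sketched in Python as follows (this uses one common encoding of NIL and NULL; others exist):

```python
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
PAIR  = lambda x: lambda y: lambda f: f(x)(y)
FIRST  = lambda p: p(TRUE)
SECOND = lambda p: p(FALSE)

NIL  = lambda f: TRUE                          # the empty list
NULL = lambda l: l(lambda h: lambda t: FALSE)  # TRUE on NIL, FALSE on a PAIR

def to_bool(b):
    return b(True)(False)

lst = PAIR(1)(PAIR(2)(NIL))  # the list [1, 2]
print(to_bool(NULL(NIL)))    # -> True
print(to_bool(NULL(lst)))    # -> False
print(FIRST(lst))            # -> 1
print(FIRST(SECOND(lst)))    # -> 2
```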
Another computer programming structure is the procedure abstraction which serves as a subroutine in procedural languages like Fortran, C and C++. As noted in Chap. 4, Turing’s 1936 paper introduced subroutines as separate program sections P outside the main program having identifying names, such as P(x, y), where x and y are variables in statements in the defining body of P(x,y). The variables serve as parameters to be replaced by expressions u and v designated in a call statement, having the format P(u, v), at some point in the main program. The semantics of a call can be defined in the lambda calculus in terms of abstract function application, that is, as an application of P(x,y) to (u,v). We mentioned two possible useful cases of u and v, namely, as data values in a call-by-value and as addresses in a call-by-name. Other cases are possible. To define them formally, Church defines a predicate which determines whether a given lambda expression has a normal form. A normal form is an equivalent expression which cannot be reduced any further under the conversion rules. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. The term “redex” stands for an expression reducible by alpha or beta conversion. Such reductions may involve different orders of application. The lambda calculus considers various orders which have analogs in programming languages when considering procedure bodies and procedure calls. For example, a procedure body P(x,y) can contain calls to other procedures Q(x, y). What substitutions should be done in such a call to Q(x, y) to execute a call P(u, v)? The lambda calculus offers several possible orders.
- Applicative order: The leftmost, innermost redex is always reduced first. Intuitively this means a function’s arguments are always reduced before the function itself. Applicative order always attempts to apply functions to normal forms, even when this is not possible. Most programming languages (including Lisp, C and Java) are described as “strict”, meaning that functions applied to non-normalising arguments are non-normalising. This is done essentially using applicative order in a call-by-value reduction (see below), usually called “eager evaluation”.
- Normal order: The leftmost, outermost redex is always reduced first. That is, whenever possible the arguments are substituted into the body of an abstraction before the arguments are reduced.
- Call by name: As in normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
- Call by value: Only the outermost redexes are reduced: a redex is reduced only when its right hand side has reduced to a value (variable or lambda abstraction).
As pointed out by Peter Landin’s 1965 paper “A Correspondence between ALGOL 60 and Church’s Lambda-notation”, Commun. ACM 8, 89–101, 158–165, sequential procedural programming languages can be defined in terms of the lambda calculus, using the basic mechanisms for procedural abstraction and procedure (subprogram) application as enumerated above.
Turing’s Proof of the Unsolvability of the Decision Problem
Since we have not presented Church’s lambda calculus proof of the unsolvability of the decision problem for predicate calculus, we shall give a very brief outline of Turing’s machine-oriented proof. Turing follows Godel in setting up numerical codes for arbitrary Turing machine programs and their states and numerical functions for state transitions. The code or description number for M is computed from its program and is denoted by SD(M) (See Chap. 3). Conversely, given SD(M) one can reconstruct the program for M. He then constructs a predicate calculus wff U(SD(M)) for an arbitrary machine M that contains SD(M) as a subterm and which can be interpreted as asserting informally that there exists a configuration state q of M and a tape square n which contains the integer 0 when M is in state q. Formula U is built from several predicate wff’s which formalize how the program for machine M causes M to execute its computation steps. Thus, Turing arithmetizes his machines M and their computations.
He uses a formula for the integer successor function, SUCC(n,m), (interpreted as asserting that m = n + 1) and the predicate formula INT(n) which is interpreted as true whenever n is an integer. Then he defines SD(M) by Godel’s technique of assigning integer values to the symbols in the program of instructions for machine M. He constructs a predicate K(q, s) which asserts that q is the internal configuration of M when the total state of M is s. This is a straightforward coding calculation from s. The predicate I(s, k) is true if in state s the tape square k is scanned. This is again obtained from the definition of state s. The formula F(s, t) is interpreted as true if state t is a possible successor of state s according to the program for M. Finally, the predicate R(s, k, x) is true if for state s the scanned square is number k and contains the character x. Again this predicate can be constructed from the definition of the state of M as explained in Chap. 2. The wff U is a rather long wff which uses the above predicates and functions. We break it up into four lines as follows:
(∃n)[INT(n) & ∀m (INT(m) → ∃y F(m,y))
& ∀v ∀z (F(v,z) → INT(v) & INT(z) & ∀m R(n, m, w)
& I(n,n) & K(q,n) & SD(M)]
→ ∃s ∃m (INT(s) & INT(m) & R(s, m, 0)).
The fourth line carries the main interpretation of U. It states that there exists a state numbered s of M in which the scanned square is numbered m and contains the character 0. The first three lines assert that there is a number n (zero) which is the number of the first state and the first scanned square and further for each state m there is a successor state y.
Turing first proves (see Chap. 3) that there is no machine which can determine whether any machine M is circle-free and consequently whether M ever prints 0. He then establishes two Lemmas that U(SD(M)) is provable if and only if the character 0 appears on the tape of M in some state s. Next he assumes the Decision problem is solvable by some Decision machine TD. This means that TD can decide whether any wff, in particular any wff like U(SD(M)) for any M, is provable. But by the Lemmas, TD can then decide whether 0 appears on the tape of M. So TD can decide whether any M ever prints 0. But he has already shown that there is no such machine procedure. It follows that there is no such machine TD. This contradicts the assumed existence of TD. So the Decision problem is unsolvable.
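The shape of the underlying diagonal argument, though not Turing's formal construction, can be sketched in Python: any claimed decider for "ever prints 0" can be fed a contrary program built from the decider itself (make_contrary and the candidate decider are hypothetical illustrations, since no correct decider can exist):

```python
def make_contrary(decider):
    """Given a claimed decider for 'does this program ever print 0?',
    build a program that does the opposite of whatever the decider
    predicts about that very program."""
    printed = []
    def contrary():
        if decider(contrary):
            pass                # predicted to print 0 -> print nothing
        else:
            printed.append(0)   # predicted never to print 0 -> print 0
    contrary.printed = printed
    return contrary

# Whatever the candidate decider answers, it is wrong on its contrary program:
candidate = lambda prog: True   # a (necessarily wrong) decider: always "yes"
c = make_contrary(candidate)
c()
actually_printed_zero = len(c.printed) > 0
print(candidate(c) != actually_printed_zero)  # -> True: the decider failed
```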
© 2011 Springer Science+Business Media, LLC
Blum, E.K. (2011). The Heart of Computer Science. In: Blum, E., Aho, A. (eds) Computer Science. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-1168-0_3
Print ISBN: 978-1-4614-1167-3
Online ISBN: 978-1-4614-1168-0