A procedural interpretation
of the Church-Turing Thesis
Marie Duží, VSB-Technical University, Institute of Computer Science,
Ostrava, Czech Republic
marie.duzi@gmail.com
Introduction
Logicians are usually philosophically or mathematically minded. Why, then, would they be so
interested in problems that belong to computer science, like the explication of the notions of
algorithm, effective procedure, and suchlike? The reason for their interest is presumably this.
Such problems are interdisciplinary, and modern mathematics, logic and analytic philosophy
have much in common, going hand in hand. For instance, the classical decision problem
(Entscheidungsproblem) was tremendously popular among logicians. Kurt Gödel, for one,
worked on it.
Thus I first provide in Section 1 a brief summary of Gödel’s famous incompleteness
results. In the summary I will use current technical vernacular; that is, I will use terms like
‘algorithm’, ‘effective procedure’, ‘recursive axiomatization’, etc. These terms were not in use
at the time when Gödel was pursuing his research on (un)decidability, because the study of
these modern notions was triggered, inter alia, precisely by Gödel’s incompleteness results.
This paper offers a conceptual view of the Church-Turing Thesis, which is an attempt to
define the notion of algorithm/effective procedure.1
I am going to analyze the Thesis and the
problems of the specification of the concept of an algorithm. To this end I apply a procedural
theory of concepts. This theory was formulated by Materna using Transparent Intensional
Logic (TIL) as a background theory.2
I will not provide definite answers to the questions
posed by the problems just mentioned. Still I believe that the exact, fine-grained analysis
offered below will contribute to elucidating the notion of an effective procedure and will help
us to solve the problems stemming from the under-specification of the concept of algorithm.
The rest of the paper is structured as follows. Section 2 is a brief summary of the notions
of effective procedure, algorithm, effective method, Church’s Thesis, Turing’s Thesis, and
Turing-complete systems as they are known today. The Church-Turing Thesis deals with four
concepts, viz. EP, the concept of an effective procedure; TM, the concept of Turing machine
computability; GR, the concept of general recursive functions; and λD, the concept of λ-
definable functions. The Thesis can be schematically introduced like this:
EP = TM = GR = λD
The problematic constituent here is the leftmost concept, EP; TM, GR and λD are
well defined and should serve to explicate or define or specify the concept of an algorithm,
EP. In this paper I am going to advance the research on this topic. My background theory is
TIL. Hence in Section 3 the foundations of TIL are introduced. Then in Section 4 I summarize
Materna’s procedural theory of concepts. Crucial for the definition of a concept is the problem
of the individuation of procedures. To this end I define procedural isomorphism, which lays
down a criterion of individuation for procedures. Finally, in the main Section 5 I apply our
logical machinery in order to analyze the notions introduced in Section 2, in particular to
1 Throughout the paper I will use the terms ‘algorithm’ and ‘effective procedure’ as synonyms.
2 For details on the procedural theory of concepts see, e.g., Materna (1998), (2004).
explicate the Church-Turing Thesis, its consequences and other closely related concepts. I
believe that our procedural view will shed new light on the Thesis. In particular, I will define
and make use of the notion of concept refinement, and propose constraints that would delimit
the concept of algorithm in such a way that the equivalence between the left-hand and right-
hand sides of the Church-Turing Thesis might be provable. Moreover, the distinction between
analytical and empirical concepts should elucidate the difference between purely theoretical
computational devices and machines that are restricted by empirical/physical laws.
1. Brief summary of Gödel’s Incompleteness Theorems.3
The German mathematician David Hilbert (1862-1943) announced his program of
formalization of mathematics in the early 1920s. It calls for a formalization of all of
mathematics in axiomatic form, and for proving the consistency of such formal axiom
systems. The consistency proof itself was to be carried out using only what Hilbert called
finitary methods. The special epistemological character of finitary reasoning then yields the
required justification of classical mathematics. Although Hilbert proposed his program in this
form only in 1921, it can be traced back to around 1900, when he first pointed out the
necessity of giving a direct consistency proof of analysis. This was the time when worrying
paradoxes began to crop up in mathematics (Zermelo’s paradox in 1900, Russell’s antinomy
in 1901, later in 1930 the Kleene-Rosser paradox, and many other paradoxes of self-
reference), most of them stemming from careless use of actual infinity. Hilbert first thought
that the problem of paradoxes arising from the self-referential ‘vicious circle’ had essentially been
solved by Russell’s type theory in Principia. This is true, yet some fundamental problems of
axiomatics remained unsolved, including, inter alia, the decision problem.
In general, the idea of finitary axiomatization is simple: if we choose some basic formulas
(axioms) that are decidedly true and if we use a finite effective method of applying some
simple rules of inference that preserve truth, no falsehood can be derived from true axioms;
hence no contradiction can be derived, no paradox will crop up. Again, this is true, but the
problem remains that in this way we would never derive all true sentences of mathematics,
because there always remain independent sentences of which we are not able to decide
whether they are true or false. From the logical point of view, the decision problem is this.
Given a closed formula of first-order predicate logic (a sentence), decide whether it is
satisfiable (respectively, logically valid). Proof theorists usually prefer the validity version
whereas model theorists prefer the satisfiability version.
In 1928 Hilbert and Ackermann published a concise small book, Grundzüge der
theoretischen Logik, in which they arrived at exactly this point: they had defined axioms and
derivation rules of first-order predicate logic (FOL), and formulated the problem of
completeness. They raised the question whether such a proof calculus is complete in the sense
that each logical truth is provable within the calculus; in other words, whether the calculus
proves exactly all the logically valid FOL formulas.
Gödel’s Completeness Theorem gives a positive answer to this question: the first-order
predicate proof calculus with appropriate axioms and rules is a complete calculus, i.e., all the
FOL logical truths are provable:

if |= φ, then |– φ.

Moreover, in a consistent FOL system, syntactic provability is equivalent to being logically true:

|= φ iff |– φ.
3 Portions of this section draw on material from Duží (2005).
There is even a stronger version of the Completeness Theorem that Gödel formulated and
proved as well. We derive consequences not only from logically valid sentences but also from
other sentences true under some interpretation rather than all interpretations. For instance,
from the facts that no prime number greater than 2 is even and 11 is a prime number greater
than 2 we can derive that the number 11 is not even. In FOL notation we have:
∀x [[P(x) ∧ G(x, a)] → ¬E(x)], [P(b) ∧ G(b, a)] |– ¬E(b).
None of these formulas is a logical truth; they are true only under some but not all possible
interpretations. One interpretation that makes them true is the intended one, viz.
the interpretation with the universe of natural numbers that assigns the set of primes to the
symbol P, the relation of being greater than to the symbol G, the set of even numbers to E,
and the numbers 2 and 11 to the constants a and b, respectively. Yet this derivation is correct,
since the conclusion is logically entailed by the premises: whenever the premises are true, the
conclusion must be true as well. In other words, the conclusion is true in all the models of the
premises.
To formulate the strong version of the Completeness Theorem, we need to define the
notions of theory and proof in a theory. A (FOL) theory is given by the set of FOL logical
axioms together with a (possibly infinite) set of special axioms. A proof in a theory T is a sequence of
formulas φ1,…,φn such that each φi is either
• a logical axiom, or
• a special axiom of T, or
• derived from some previous members of the sequence φ1,…,φi−1 using a
derivation rule of FOL.
A formula φ is provable in T iff it is the last member of a proof in T; we also say that the
theory T proves φ, and the formula φ is a theorem of the theory (denoted T |– φ). A structure
M is a model of the theory T, denoted M |= T, iff each special axiom of T is valid in M.
The strong version of the Completeness Theorem holds that a formula φ is provable in a
(consistent) theory T if and only if φ is logically entailed by its special axioms; in other words,
iff φ is valid in every model of the theory; in (meta-)symbols:

T |= φ iff T |– φ.
Gödel’s famous results on incompleteness that entirely changed the character of modern
mathematics were announced by Gödel in 1930, and his paper ‘Über formal unentscheidbare
Sätze der Principia Mathematica und verwandter Systeme I’ was published in 1931. This
work contained a detailed proof of the Incompleteness Theorem and a formulation of the
second Incompleteness Theorem; both theorems were formulated within the system of
Principia Mathematica. In 1932 Gödel published in Vienna a short summary, ‘Über
Vollständigkeit und Widerspruchsfreiheit’, which was based on a theory that is nowadays called
Peano arithmetic.
In order to introduce these results in a comprehensible way, let me just briefly recapitulate
the main steps of Gödel's argument:
1. A theory is adequate if it encodes finite sequences of numbers and defines sequence
operations such as concatenation. An arithmetic theory such as Peano arithmetic (PA) is
adequate (so is, e.g., set theory).
2. In an adequate theory T we can encode the syntax of terms, sentences (closed formulas)
and proofs. This means that we can ask which facts about provability in T are provable in
T itself. Let us denote the code of a formula φ as ⟨φ⟩.
3. The self-reference (diagonal) lemma: For any formula φ(x) (with one free variable) of an
adequate theory, there is a sentence ψ such that ψ iff φ(⟨ψ⟩). (A programming analogue, a
self-reproducing program, is sketched right after this list.)
4. Let Th(N) be the set of numbers that encode true sentences of arithmetic (i.e. formulas
true in the standard model of arithmetic N), and Thm(T) the set of numbers that encode
sentences provable in an adequate (sound) theory T. Since the theory is sound, the latter is
a subset of the former: Thm(T) ⊆ Th(N). It would be nice if they were the same; in that
case the theory T would be complete.
5. No such luck if the theory T is recursively axiomatised, i.e., if the set of axioms is
computable in the following sense: there is an algorithm that, given an input formula ,
computes a Yes / No answer to the question whether  is an axiom. The computability of
the set of axioms and the completeness of the theory T are two goals that cannot be
achieved simultaneously, because:
5.1. The set Th(N) is not even definable: there is no arithmetic formula ν(x) that would be
true of a number just in case the number is in the set, and false otherwise. Here is why.
Suppose ν(x) were such a formula. Then by Self-Reference (3) applied to ¬ν(x) there is
a sentence δ such that δ iff ¬ν(⟨δ⟩). Hence δ iff ⟨δ⟩ ∉ Th(N) iff δ is not true in N iff not δ
– contradiction! There is no such ν. Since being non-definable implies being
non-computable, there will never be a program that decides whether an arithmetic
sentence is true or false (in the standard model of arithmetic).
5.2. The set Thm(T), by contrast, is definable in an adequate theory, say Robinson’s
arithmetic Q: there is a formula π(x) such that for any formula φ, π(⟨φ⟩) holds iff φ is
provable. For the set of axioms is recursively enumerable, i.e., computable; hence so is
the set of proofs that use these axioms, so is the set of provable formulas, and thus so is
the set Thm(T). Since computable implies definable in adequate theories, Thm(T) is
definable. Now, by Self-Reference (3) applied to ¬π(x), there is a sentence γ such that
γ iff ¬π(⟨γ⟩); that is, γ says of itself that it is not provable. If γ were false, then γ would
be provable. This is impossible in a sound theory: provable sentences are true. Hence
γ is true but unprovable.
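The self-reference lemma in step (3) has a familiar programming analogue: a quine, i.e. a program whose output is exactly its own source text. The following Python sketch is offered purely as an illustration of the diagonal trick, not as part of Gödel’s argument: the template s is applied to a quoted description of itself, just as φ is applied to the code ⟨ψ⟩ of the very sentence ψ.

```python
# A quine: the diagonal trick in miniature. The template 's' contains a
# placeholder (%r) into which a quoted copy of 's' itself is substituted,
# so the printed text coincides with the program's own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```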
Now one may wonder: if we can algorithmically generate the set Thm(T), can we not
obtain all the true sentences of arithmetic? Unfortunately, we cannot. No matter how far we
push ahead, we will never reach all of them, because there is no algorithm that would decide
each and every formula. There will always remain formulas that are simultaneously true and
undecidable. We define the notion of a theory being decidable thus:
A theory T is decidable if the set Thm(T) of formulas provable in T is (general) recursive.
If a theory is recursively axiomatized and complete, then it is decidable. However, one of the
consequences of Gödel’s incompleteness theorem is:
No recursively axiomatized theory T that contains Q and has the model N is decidable:
there is no algorithm that would decide for every formula φ whether it is provable in the theory
T or not. For if we had such an algorithm, we could use it to extend the theory so that it would be
complete, which is impossible if the theory T is consistent (according to Rosser’s
improvement of Gödel’s first theorem).
Denoting by Ref(T) the set of all the sentences refutable in the theory T (i.e. the set of all the
sentences φ such that T |– ¬φ), it is obvious that this set Ref(T) is not recursive either. We can
illustrate the mutual relations between the sets Thm(T), Th(N), and Ref(T) by the following
figure:
[Figure: the mutual relations between the sets Thm(T) (containing the axioms), Th(N), and Ref(T); the hatched area represents the sentences independent of T.]
If the theory T is recursively axiomatized and complete, the sets Thm(T) and Th(N) coincide
and Ref(T) is their complement. In such a case the set of numbers of sentences independent of
T (the hatched set in the figure) is empty. In an incomplete theory this set is non-empty.
Another consequence of the Incompleteness theorem is the undecidability of the problem
of logical truth in FOL: The FOL proof calculus is a theory without special axioms. Though it
is a complete calculus (all the logically valid formulas are provable), as an empty theory it is
not decidable: there is no algorithm that would decide for each and every formula φ whether
it is a theorem or not (equivalently, whether it is a logically valid formula or not). The
problem of logical truth is thus not decidable in FOL. For Q is an adequate theory with a finite
number of axioms. If Q1,…,Q7 are its axioms (closed formulas), then a sentence φ is provable
in Q iff (Q1 & … & Q7) → φ is provable in the FOL calculus, i.e., iff (Q1 & … & Q7) → φ is
a logically valid formula.4 If the calculus were decidable, then so would Q be, which, however,
it is not.
Alonzo Church proved that there are proof calculi that are only semi-decidable: there is an
algorithm which, given an input formula φ that is logically valid, outputs the answer Yes. If,
however, the input formula φ is not a logical truth, the algorithm may answer No or it may never
output an answer.5
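Semi-decidability is easy to state in programming terms. Here is a minimal Python sketch; the parameter enumerate_theorems is a hypothetical stand-in for any systematic enumeration of the provable formulas (e.g. by generating all proofs in order of length), not an actual library function.

```python
def is_logically_valid(formula, enumerate_theorems):
    """Semi-decision procedure for logical validity. If `formula` is
    provable (equivalently, by completeness, logically valid), the loop
    finds it after finitely many steps and answers Yes; if it is not,
    the loop runs forever and no answer is ever produced."""
    for theorem in enumerate_theorems():
        if theorem == formula:
            return True   # the 'Yes' answer
    # for an infinite enumeration this point is never reached:
    # there is no corresponding 'No' answer
```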
Gödel discovered that the sentence γ claiming “I am not provable” is equivalent to the
sentence ξ claiming “There is no φ such that both ⟨φ⟩ and ⟨¬φ⟩ are in Thm(T)”. The
latter is a formal statement that the system is consistent. Since γ is not provable, and γ and ξ
are equivalent, ξ is not provable either. Thus we have:
Gödel’s Second Theorem of incompleteness: In any consistent, recursively axiomatizable
theory T that is strong enough to encode sequences of numbers (and thus the syntactic notions
of formula, sentence, proof) the consistency of the theory T is not provable in T.
The second incompleteness theorem shows that there is no hope of proving, e.g., the
consistency of first-order arithmetic using finitary means, provided we accept that finitary
means are correctly formalized in a theory, the consistency of which is provable in PA. As
Georg Kreisel remarked, it would actually provide no interesting information if a theory T
proved its consistency. This is because inconsistent theories prove everything, including their
consistency. Thus a consistency proof of T in T would give us no clue as to whether T really
is consistent.
One of the first to recognize the revolutionary significance of the incompleteness results
was John von Neumann who came close to anticipating Gödel’s Second Theorem. Others
were slower in absorbing the essence of the problem and accepting its solution. For example,
Hilbert’s assistant Paul Bernays had difficulties with the technicalities of the proof, which were
cleared up only after extensive correspondence.6

4 Here we are using the Deduction Theorem: Q1 & … & Qn |– φ iff Q1 & … & Qn−1 |– Qn → φ.
5 Of course, there are subclasses of FOL that are decidable. For details, see Börger et al. (1996).
Gödel’s breakthrough even drew sharp
criticism, which was due to the prevailing conviction that mathematical thinking can be
captured by laws of pure symbol manipulation, and due to the inability to make the necessary
distinctions involved, such as that between the notions of truth and proof. Thus, for instance,
the famous set theorist Ernst Zermelo interpreted the latter in a way that generates a
contradiction within Gödel’s results.
Since no reasonable axiomatic theory T can prove its own consistency, a theory S capable
of proving the consistency of T can be viewed as being considerably stronger than T. Of
course, being considerably stronger implies being non-equivalent. The Lévy Reflection
Principle, which is non-trivial but not so difficult to prove, states that Zermelo-Fraenkel
set theory ZF proves the consistency of each of its finitely axiomatized sub-theories. So by
Gödel’s Second Theorem, full ZF is considerably stronger than any of its finitely axiomatized
fragments. This in turn yields a simple proof that ZF is not finitely axiomatizable.
The second-order theories (of real numbers, of complex numbers, and of Euclidean
geometry) do have complete axiomatizations. Hence these theories have no sentences that are
simultaneously true and unprovable. The reason they escape incompleteness is their
inadequacy: they cannot encode and computably deal with finite sequences. The price we pay
for second-order completeness is high: the second-order calculus is not (even semi-)
decidable. We cannot algorithmically generate all the second-order logical truths, thus not all
the logical truths are provable, and so the second-order proof calculus is not semantically
complete.
The consequences of Gödel’s two theorems are clear and generally accepted. First of all,
the formalist belief in identifying truth with provability is destroyed by the First Theorem.
Second, the impossibility of an absolute consistency proof (acceptable from the finitary point
of view) is even more destructive for Hilbert’s program. Gödel’s Second Theorem makes the
notion of a finitary statement and finitary proof highly problematic. If the notion of a finitary
proof is identified with a proof formalized in an axiomatic theory T, then the theory T is a
very weak theory. If T satisfies simple requirements, then T is suspected of inconsistency. In
other words, if the notion of finitary proof means something that is non-trivial and at the same
time non-questionable and consistent, there is no such thing.
Though it is almost universally believed that Gödel’s results destroyed Hilbert’s program,
the program was very inspiring for mathematicians, philosophers and logicians. Some
thinkers claimed that we should still be formalists.7
Others, like Brouwer, the father of
modern constructive mathematics, believe that mathematics is first and foremost an activity:
mathematicians do not discover pre-existing things, as a Platonist holds, and they do not
manipulate symbols, as a formalist holds. Mathematicians, according to Brouwer, make
things. Some recent intuitionists seem to stay somewhere in between: being ontological
realists, they admit that there are abstract entities we discover in mathematics, but at the same
time, being semantic intuitionists, they maintain that these abstract entities ‘cannot be claimed
to exist’ unless they are well defined by a formal proof, as a sequence of judgements.8
The possible impact of Gödel’s results on the philosophy of mind, artificial intelligence,
and on Platonism might be a matter of dispute. Gödel himself suggested that the human mind
cannot be a machine and that Platonism is correct. More recently Roger Penrose has argued
that “Gödel’s results show that the whole programme of artificial intelligence is wrong, that
creative mathematicians do not think in a mechanic way, but that they often have a kind of
insight into the Platonic realm which exists independently from us”.9
Gödel’s doubts about
6 The technical device used in the proof is now known as Gödel numbering.
7 See, e.g., Detlefsen (1990).
8 This is a slight rephrasing of a remark made by Peter Fletcher in an e-mail correspondence.
9 See Brown (1999, p. 78).
the limits of formalism were certainly influenced by Brouwer who criticised formalism in the
lecture presented at the University of Vienna in 1928. Gödel, however, did not share
Brouwer’s intuitionism based on the assumption that mathematical objects are created by our
activities. For Gödel as a Platonic realist mathematical objects exist independently and we
discover them. On the other hand he claimed that our intuition cannot be reduced to Hilbert’s
concrete intuition of finite symbols, but we have to accept abstract entities like well-defined
mathematical procedures that have a clear meaning without further explication. His proofs are
constructive and therefore acceptable from the intuitionist point of view.
In fact, Gödel’s results are based on two fundamental concepts: truth for formal languages
and effective computability. Concerning the former, Gödel stated in his Princeton lectures that
he was led to the incompleteness of arithmetic via his recognition of the non-definability of
arithmetic truth in its own language. In the same lectures he offered the notion of general
recursiveness in connection with the idea of effective computability; this was based on a
modification of a definition proposed by Herbrand.
In the meantime, Church presented his thesis identifying effectively computable functions
with λ-definable functions. Gödel was not convinced by Church’s thesis, because it was not
based on a conceptual analysis of the notion of finite algorithmic procedure. It was only when
Turing, in 1937, offered the definition in terms of his machines that Gödel was ready to
accept the identification of the various classes of functions: the λ-definable, the general
recursive, and the Turing-computable ones.
The pursuit of Hilbert’s program had thus an unexpected side effect: it gave rise to the
realistic research on the theory of algorithms, effective computability and recursive functions.
Von Neumann, for instance, along with being a great mathematician and logician, was an
early pioneer in the field of modern computing, though it was a difficult task because
computing was not yet a respected science. His conception of computer architecture still has
not been surpassed. Gödel’s First Theorem has another interpretation in the language of
computer science. In first-order logic, the set of theorems is recursively enumerable: you can
write a computer program that will eventually generate any valid proof. One can ask whether
the theorems satisfy the stronger property of being recursive: can you write a computer program
that definitively determines whether a statement is true or false? Gödel’s First Theorem says that in
general you cannot; a computer can never be as smart as a human being, because the extent of
its knowledge is limited by a fixed set of axioms, whereas people can discover unexpected
truths and enrich their knowledge gradually.
In my opinion, it is fair to say that Gödel’s results changed the face of meta-mathematics
and influenced all aspects of modern mathematics, artificial intelligence and philosophy of
mind. Moreover, they provided a strong impulse for the development of theoretical
computer science. Hence it should be clear by now that the Church-Turing Thesis and the related
issues are still a hot topic. After all, we still do not have a rigorous definition of the central
concept of computer science, viz. algorithm.
2. Effective procedures and the Church-Turing Thesis
In this section I briefly summarize the notion of an algorithm/effective procedure and the
attempts to precisely characterize or even define this notion. Though there are many such
attempts, we still do not precisely know what an algorithm is; there remain open questions
concerning the notion of algorithm, for instance:
• Does an algorithm have to terminate, or could it sometimes compute, theoretically, for
ever?
• Does an algorithm always have to produce the value of the function being computed, or
may it compute properly partial functions with value gaps?
First I present a brief summary of the attempts to specify criteria for a method M to be
effective. Then I summarize particular theses as presented by Church, Turing, and others.
These theses are just theses: they are neither provable theorems nor definitions. Though these notions
are well known, I include this section in the interest of making the paper easier to read
without consulting additional sources. I also wish to share with the reader the
same terminology and theoretical background.10
Copeland’s characterisations of an effective method M are these (Copeland 2008): A method,
or procedure, M, for achieving some desired result is called ‘effective’ or ‘mechanical’ just in
case
1. M is set out in terms of a finite number of exact instructions (each instruction being
expressed by means of a finite number of symbols);
2. M will, if carried out without error, produce the desired result in a finite number of
steps;
3. M can (in practice or in principle) be carried out by a human being unaided by any
machinery save paper and pencil;
4. M demands no insight or ingenuity on the part of the human being carrying it out.
On the problem of defining algorithm Gurevich (2003) refers to Kolmogorov’s research:
The problem of the absolute definition of algorithm was addressed again in 1953
by Andrei N. Kolmogorov; …. Kolmogorov spelled out his intuitive ideas about
algorithms. For brevity, we express them in our own words (rather than translate
literally).
• An algorithmic process splits into steps whose complexity is bounded in advance,
i.e., the bound is independent of the input and the current state of the
computation.
• Each step consists in a direct and immediate transformation of the current state.
• This transformation applies only to the active part of the state and does not alter
the remainder of the state.
• The size of the active part is bounded in advance.
• The process runs until either the next step is impossible or a signal says a solution
has been reached.
In addition to these intuitive ideas, Kolmogorov gave a one-paragraph sketch of a new
computation model. The model was introduced in the papers Kolmogorov & Uspensky (1958,
1963) written by Kolmogorov together with his student Vladimir A. Uspensky. The
Kolmogorov machine model can be thought of as a generalization of the Turing machine
model where the tape is a directed graph of bounded in-degree and bounded out-degree. The
vertices of the graph correspond to Turing’s squares; each vertex has a colour chosen from a
fixed, finite palette of vertex colours; one of the vertices is the current computation centre.
Each edge has a colour chosen from a fixed, finite palette of edge colours; distinct edges from
the same node have different colours. The program has this form: replace the vicinity U of a
fixed radius around the central node by a new vicinity W that depends on the isomorphism
type of the digraph U together with its colours and its distinguished central vertex. Unlike
Turing’s tape, whose topology is fixed, Kolmogorov’s ‘tape’ is reconfigurable.
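To make the contrast with Turing’s fixed tape concrete, here is one step of a Kolmogorov-style machine sketched in Python. The encoding (one colour per node, colour-labelled edges, a program keyed by a crude signature of the radius-1 vicinity) is my own drastic simplification for illustration; the actual Kolmogorov-Uspensky model replaces whole vicinities and may also add and delete nodes.

```python
# State: a coloured directed graph with a distinguished computation centre.
#   nodes: node id -> vertex colour
#   edges: (node id, edge colour) -> target node id
def step(nodes, edges, centre, program):
    # Signature of the radius-1 vicinity: centre colour plus the colours
    # of outgoing edges and of their target vertices.
    vicinity = (nodes[centre],
                tuple(sorted((ec, nodes[t])
                             for (n, ec), t in edges.items() if n == centre)))
    if vicinity not in program:
        return None                          # next step impossible: halt
    new_colour, move_edge = program[vicinity]
    nodes[centre] = new_colour               # transform only the active part
    if move_edge is not None:
        centre = edges[(centre, move_edge)]  # relocate the computation centre
    return centre
```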
Here are the particular theses (slightly reformulated) as presented by Church and Turing.
These theses concern numerical functions and criteria for them to be effectively or
mechanically computable:
10 Portions of this section draw on material from Copeland (2008) and Copeland & Sylvan (1999).
Church: A numerical function is effectively computable by an algorithmic routine if and only
if it is general recursive or λ-definable.
Note. The concept of a λ-definable function is due to Church (1932, 1936, 1941) and Kleene
(1936), and the concept of a recursive function is due to Gödel (1934) and Herbrand (1932).
The class of λ-definable functions and the class of recursive functions are identical. This was
established in the case of functions of positive integers by Church (1936) and Kleene (1936).
Turing: A numerical function is effectively computable by an algorithmic routine if and only
if it is computable by a Turing machine.
After learning of Church’s proposal, Turing quickly established that the apparatus of λ-
definability and his own apparatus of computability are equivalent (1936: 263ff). Thus, in
Church’s proposal, the words ‘recursive function of positive integers’ can be replaced by the
words ‘function of positive integers computable by a Turing machine’.
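Turing’s side of the thesis can be put into executable form. Below is a minimal Python simulator of a single-tape machine, together with a two-rule machine computing the successor function on unary numerals; the encoding conventions are mine and serve only as an illustration.

```python
def run_tm(delta, tape, state='q0', head=0, blank='_'):
    # delta maps (state, scanned symbol) -> (new state, written symbol,
    # head move in {-1, 0, +1}); a missing entry means the machine halts.
    cells = dict(enumerate(tape))
    while (state, cells.get(head, blank)) in delta:
        state, cells[head], move = delta[(state, cells.get(head, blank))]
        head += move
    used = range(min(cells), max(cells) + 1)
    return ''.join(cells.get(i, blank) for i in used).strip(blank)

# Successor on unary numerals: scan right, append one more stroke.
succ = {('q0', '1'): ('q0', '1', +1),
        ('q0', '_'): ('halt', '1', 0)}
print(run_tm(succ, '111'))   # '1111', i.e. succ(3) = 4
```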
Post (1936, p. 105) referred to Church’s identification of effective calculability with
recursiveness as a ‘working hypothesis’, and quite properly criticized Church for masking this
hypothesis as a definition. This criticism then yielded a new ‘working hypothesis’ that Church
proposed:
Church's Thesis: A function of positive integers is effectively calculable only if it is
recursive.
The reverse implication, that every recursive function of positive integers is effectively
calculable, is commonly referred to as the converse of Church's thesis (although Church
himself did not so distinguish them, bundling both theses together in his ‘definition’). If
attention is restricted to functions of positive integers then Church’s Thesis and Turing’s
Thesis are equivalent, in view of the results by Church, Kleene and Turing mentioned above.
The term ‘Church-Turing thesis’ seems to have been first introduced by Kleene:
So Turing’s and Church’s theses are equivalent. We shall usually refer to them
both as Church’s thesis, or in connection with that one of its ... versions which
deals with ‘Turing machines’ as the Church-Turing Thesis. (1967, p. 232.)
Since the sets of λ-definable functions and general recursive functions are provably
identical, we can formulate the Church-Turing Thesis like this:

Church-Turing Thesis: A function of positive integers is effectively calculable if and only if
it is general recursive or λ-definable or computable by a Turing machine.
Hence the concepts of general recursive functions, λ-definable functions and Turing-
computable functions coincide in this sense. These three very distinct concepts are equivalent,
because they share the same extension, viz. the set of functions-in-extension that are known to
be effectively computable.
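λ-definability can be made tangible in any language with first-class functions. The following Python sketch uses the standard Church-numeral encoding (the naming is mine): a numeral n is the higher-order function applying its argument n times.

```python
zero = lambda f: lambda x: x                     # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))  # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

to_int = lambda n: n(lambda k: k + 1)(0)         # decode for inspection

three = succ(succ(succ(zero)))
print(to_int(add(three)(succ(zero))))            # 4: addition is lambda-definable
```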
As Kleene (1952) rightly points out, the equivalences between Turing-computable
functions, general recursive functions and λ-definable functions provide strong evidence for
the Church-Turing thesis, because:
1) Every effectively calculable function that has been investigated in this respect has turned
out to be computable by Turing machine.
2) All known methods or operations for obtaining new effectively calculable functions from
given effectively calculable functions are paralleled by methods for constructing new
Turing machines from existing Turing machines.
3) All attempts to give an exact analysis of the intuitive notion of an effectively calculable
function have turned out to be equivalent, in the sense that each analysis offered has been
proved to pick out the same class of functions, namely those that are computable by a
Turing machine.
4) Because of the diversity of the various analyses, (3) is generally considered to provide
particularly strong evidence.
Next I briefly summarize many known characterizations of Turing-complete systems.
Wikipedia has this to say:11
“In computability theory, a system of data-manipulation rules
(such as a computer’s instruction set, a programming language, or a cellular automaton) is
said to be Turing complete or computationally universal if it can be used to simulate any
single-taped Turing machine. A classic example is the lambda calculus. The concept is named
after Alan Turing.”
Computability theory includes the closely related concept of Turing equivalence. Another
term for a Turing-equivalent computing system is ‘effectively computing system’. Two
computers P and Q are called Turing equivalent if P can simulate Q and Q can simulate P.
Thus, a Turing-complete system is one that can simulate a Turing machine; any real world
computer can be simulated by a Turing machine.
In colloquial usage, the terms ‘Turing complete’ or ‘Turing equivalent’ are used to mean
that any real-world, general-purpose computer or computer language can approximately
simulate any other real-world, general-purpose computer or computer language, within the
bounds of finite memory.
A universal computer is defined as a device with a Turing-complete instruction set,
infinite memory, and an infinite lifespan; all general-purpose programming languages and
modern machine instruction sets are Turing-complete, apart from having finite memory.
In practice, Turing completeness means that the rules followed in sequence on arbitrary
data can produce the result of any calculation. In imperative languages, this can be satisfied
by having, minimally, conditional branching (e.g., an ‘if’ and ‘goto’ statement) and the ability
to change arbitrary memory locations (e.g., having variables). To show that something is
Turing complete, it is enough to show that it can be used to simulate the most primitive
computer, since even the simplest computer can be used to simulate the most complicated
one.
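As a hedged illustration of this minimal recipe, here is a toy interpreter in Python for an invented GOTO-style language; its instruction set is purely illustrative and exhibits exactly the two ingredients named above: conditional branching and updates to arbitrary memory cells.

```python
def run(program, mem=None):
    # Instructions: ('set', cell, constant), ('add', cell, cell) and
    # ('sub', cell, cell) change memory; ('jz', cell, target) branches
    # to instruction `target` when the named cell holds zero.
    mem, pc = dict(mem or {}), 0
    while pc < len(program):
        op, a, b = program[pc]
        if op == 'set':
            mem[a] = b
        elif op == 'add':
            mem[a] = mem.get(a, 0) + mem.get(b, 0)
        elif op == 'sub':
            mem[a] = mem.get(a, 0) - mem.get(b, 0)
        elif op == 'jz' and mem.get(a, 0) == 0:
            pc = b
            continue
        pc += 1
    return mem

# Multiplication by repeated addition: z := x * y.
mult = [('set', 'z', 0), ('set', 'one', 1),
        ('jz', 'y', 6),        # while y != 0:
        ('add', 'z', 'x'),     #   z += x
        ('sub', 'y', 'one'),   #   y -= 1
        ('jz', '_', 2)]        # unconditional jump back (cell '_' is 0)
print(run(mult, {'x': 3, 'y': 4})['z'])   # 12
```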
Apart from -definability and recursiveness, there are other Turing-complete systems as
presented by logicians and computer scientists, for instance:
 Gödel's notion of computability (Gödel 1936, Kleene 1952);
 register machines (Shepherdson and Sturgis 1963);
 Post’s canonical and normal systems (Post 1943, 1946);
 combinatory definability (Schönfinkel 1924, Curry 1929, 1930, 1932);
 Markov (normal) algorithms (Markov 1960);
 Register machines (Shepherdson and Sturgis 1963);
 pointer machine model of Kolmogorov and Uspensky (1958, 1963).
An interesting thesis known as ‘Thesis M’ is due to Gandy (1980):
11 See http://en.wikipedia.org/wiki/Turing_completeness; retrieved on July 20, 2012.
Whatever can be calculated by a machine
(working on finite data in accordance with a finite program of instructions)
is Turing-machine computable.
There are two possible interpretations of Gandy’s thesis, namely a narrow-sense and a wide-
sense formulation:12
a) narrow sense: ‘by a machine’ in the sense ‘by a machine that conforms to the physical
laws of the actual world’.
Thesis M is then an empirical proposition, which means that it cannot be
analytically proved.
b) wide sense: abstracting from the issue of whether or not the machine in question
could exist in the actual world.
Thesis M is then false: “Super-Turing machines” have been described that calculate
functions that are not Turing-machine-computable.13
This completes our summary of notions that we are now going to analyse using TIL.
3. Foundations of Transparent Intensional Logic
The syntax of TIL is Church’s (higher-order) typed λ-calculus, but with the all-important
difference that the syntax has been assigned a procedural (as opposed to denotational)
semantics, according to which a linguistic sense is an abstract procedure detailing how to
arrive at an object of a particular logical type. TIL constructions are such procedures. A main
feature of the λ-calculus is its ability to systematically distinguish between functions and
functional values. An additional feature of TIL is its ability to systematically distinguish
between functions, modes of presentation of functions, and modes of presentation of
functional values.14
The TIL operation known as Closure is the very procedure of presenting or forming or
obtaining or constructing a function; the TIL operation known as Composition is the very
procedure of constructing the value (if any) of a function at an argument. Compositions and
Closures are both multiple-step procedures, or constructions, that operate on input provided
by two one-step constructions, which figure as sub-procedures (constituents) of Compositions
and Closures, namely variables and so-called Trivializations.
Characters such as ‘x’, ‘y’, ‘z’ are words denoting variables, which construct the respective
values that an assignment function (valuation) has accorded to them. The linguistic counterpart of a
Trivialization is a constant term always picking out the same object. An analogy from
programming languages might be helpful. The Trivialization of an object X, whatever X may
be, and its use are comparable to a pointer to X and the dereference of the pointer. In order to
operate on X, X needs to be grabbed first. Trivialization is such a one-step grabbing
mechanism. Similarly, in order to talk about Beijing (in non-demonstrative and non-indexical
English discourse), we need to name Beijing, most simply by using the constant ‘Beijing’.
Furthermore, TIL constructions represent our interpretation of Frege’s notion of Sinn
(with the exception that constructions are not truth-bearers; instead some constructions
present either truth-values or truth-conditions) and are kindred to Church’s notion of concept.
12 For details, see Copeland (2000).
13 It is straightforward to describe such machines, or ‘hypercomputers’ (Copeland and Proudfoot 1999), that
generate functions that fail to be Turing-machine-computable; see e.g. Abramson (1971), Copeland (2000),
Copeland and Proudfoot (2000), Stewart (1991).
14 Portions of this section draw on material from Duží & Jespersen (in submission) and Duží et al. (2010).
Constructions are linguistic senses as well as modes of presentation of objects and are our
hyperintensions. While the Frege-Church connection makes it obvious that constructions are
not formulae, it is crucial to emphasize that constructions are not functions(-in-extension),
either. Rather, technically speaking, some constructions are modes of presentation of
functions, including 0-place functions such as individuals and truth-values, and the rest are
modes of presentation of other constructions. Thus, with constructions of constructions,
constructions of functions, functions, and functional values in our stratified ontology, we need
to keep track of the traffic between multiple logical strata. The ramified type hierarchy does
just that. What is important, in this paper, about this traffic is, first of all, that constructions
may themselves figure as functional arguments or values. Certain constructions, qua objects
of predication, figure as functional arguments of other functions. Moreover, since
constructions can be arguments of functions, we consequently need constructions of one order
higher to grab these argument constructions.
The sense of an empirical sentence is an algorithmically structured construction of the
proposition denoted by the sentence. The denoted proposition is a flat, or unstructured,
mapping with domain in a logical space of possible worlds. Our motive for working ‘top-
down’ has to do with anti-contextualism: any given unambiguous term or expression (even
one involving indexicals or anaphoric pronouns) expresses the same construction as its sense
whatever sort of context the term or expression is embedded within. And the sense/meaning
of an expression determines the respective denoted entity (if any) constructed by its sense, but
not vice versa. The denoted entities are (possibly 0-ary) functions understood as set-
theoretical mappings.
The context-invariant semantics of TIL is obtained by universalizing Frege’s reference-
shifting semantics custom-made for ‘indirect’ contexts.15
The upshot is that it becomes
trivially true that all contexts are transparent, in the sense that pairs of terms that are co-
denoting outside an indirect context remain co-denoting inside an indirect context and vice
versa. In particular, definite descriptions that only contingently describe the same individual
never qualify as co-denoting. Rather, they are just contingently co-referring in a given
possible world and at a given time of evaluation. Our term for the extra-semantic, factual
relation of contingently describing the same entity is ‘reference’, whereas ‘denotation’ stands
for the intra-semantic, pre-factual relation between two words that pick out the same entity at
the same world/time-pairs.
Our neo-Fregean semantic schema, which applies to all contexts, is this triangulation:

Expression --(expresses)--> Construction --(constructs)--> Denotation
Expression ------------------(denotes)-------------------> Denotation
The most important relation in this schema is between an expression and its meaning, i.e.,
a construction. Once constructions have been defined, we can logically examine them; we can
investigate a priori what (if anything) a construction constructs and what is entailed by it.
Thus meanings (i.e. constructions) are semantically primary, denotations secondary, because
an expression denotes an object (if any) via its meaning that is a construction expressed by the
expression. Once a construction is explicitly given as a result of logical analysis, the entity (if
any) it constructs is already implicitly given. As a limiting case, the logical analysis may
reveal that the construction fails to construct anything by being improper.
In order to put our framework on more solid ground, we now present the particular definitions.

15 See Frege (1892a).

First we set out the definitions of first-order types (regimented by a simple type
theory), constructions, and higher-order types (regimented by a ramified type hierarchy),
which taken together form the nucleus of TIL, accompanied by some auxiliary definitions.
The type of first-order objects includes all objects that are not constructions. Therefore, it
includes not only the standard objects of individuals, truth-values, sets, etc., but also functions
defined on possible worlds (i.e., the intensions germane to possible-world semantics). Sets,
for their part, are always characteristic functions and insofar extensional entities. But the
domain of a set may be typed over higher-order objects, in which case the relevant set is itself
a higher-order object. Similarly for other functions, including relations, with domain or range
in constructions. That is, whenever constructions are involved, we find ourselves in the
ramified type hierarchy. The definition of the ramified hierarchy of types decomposes into
three parts: firstly, simple types of order 1; secondly, constructions of order n; thirdly, types
of order n + 1.
Definition 1 (types of order 1). Let B be a base, where a base is a collection of pair-wise
disjoint, non-empty sets. Then:
(i) Every member of B is an elementary type of order 1 over B.
(ii) Let α, β1, ..., βm (m > 0) be types of order 1 over B. Then the collection
(α β1 ... βm) of all m-ary partial mappings from β1 × ... × βm into α is a functional type of
order 1 over B.
(iii) Nothing is a type of order 1 over B unless it so follows from (i) and (ii).
Definition 2 (construction)
(i) The Variable x is a construction that constructs an object X of the respective type
dependently on a valuation v; we say that x v-constructs X.
(ii) Trivialization: Where X is an object whatsoever (an extension, an intension or a
construction), 0X is the construction Trivialization. It constructs X without any change.
(iii) The Composition [X Y1…Ym] is the following construction. If X v-constructs a function f
of a type (αβ1…βm), and Y1, …, Ym v-construct entities B1, …, Bm of types β1, …, βm,
respectively, then the Composition [X Y1…Ym] v-constructs the value (an entity, if any,
of type α) of f on the tuple-argument ⟨B1, …, Bm⟩. Otherwise the Composition [X
Y1…Ym] does not v-construct anything and so is v-improper.
(iv) The Closure [λx1…xm Y] is the following construction. Let x1, x2, …, xm be pair-wise
distinct variables v-constructing entities of types β1, …, βm and Y a construction v-
constructing an α-entity. Then [λx1 … xm Y] is the construction λ-Closure (or Closure). It
v-constructs the following function f of the type (αβ1…βm). Let v(B1/x1,…,Bm/xm) be a
valuation identical with v at least up to assigning objects B1/β1, …, Bm/βm to the variables x1,
…, xm. If Y is v(B1/x1,…,Bm/xm)-improper (see iii), then f is undefined on the argument
⟨B1, …, Bm⟩. Otherwise the value of f on ⟨B1, …, Bm⟩ is the α-entity v(B1/x1,…,Bm/xm)-
constructed by Y.
(v) The Single Execution 1X is the construction that either v-constructs the entity v-
constructed by X or, if X v-constructs nothing, is v-improper (yielding nothing relative to
the valuation v).
(vi) The Double Execution 2X is the following construction. Where X is any entity, the
Double Execution 2X is v-improper (yielding nothing relative to v) if X is not itself a
construction, or if X does not v-construct a construction, or if X v-constructs a v-
improper construction. Otherwise, let X v-construct a construction Y and Y v-construct an
entity Z: then 2X v-constructs Z.
(vii) Nothing is a construction, unless it so follows from (i) through (vi).
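Definition 2 is itself procedural, so it can be mirrored almost line by line in code. The following Python sketch is a toy model of my own: it ignores the typing discipline of Definitions 1 and 3 and models v-improperness by a special marker value.

```python
IMPROPER = object()   # marker for a v-improper construction (no value)

class Variable:                                   # clause (i)
    def __init__(self, name): self.name = name
    def execute(self, v): return v[self.name]     # look up the valuation

class Trivialization:                             # clause (ii)
    def __init__(self, obj): self.obj = obj
    def execute(self, v): return self.obj         # grab X without change

class Composition:                                # clause (iii)
    def __init__(self, x, *ys): self.x, self.ys = x, ys
    def execute(self, v):
        f = self.x.execute(v)
        args = [y.execute(v) for y in self.ys]
        if f is IMPROPER or any(a is IMPROPER for a in args):
            return IMPROPER
        try:
            return f(*args)                       # value of f at the argument
        except Exception:
            return IMPROPER                       # f undefined there: improper

class Closure:                                    # clause (iv)
    def __init__(self, names, body): self.names, self.body = names, body
    def execute(self, v):
        def f(*vals):                             # the constructed function
            v2 = dict(v); v2.update(zip(self.names, vals))
            return self.body.execute(v2)
        return f

# [lambda x [0Div 01 x]] applied to 00: division by zero yields no value,
# so the outer Composition is v-improper.
div = Trivialization(lambda a, b: a / b)
app = Composition(Closure(['x'],
                          Composition(div, Trivialization(1), Variable('x'))),
                  Trivialization(0))
print(app.execute({}) is IMPROPER)   # True
```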
Definition 3 (ramified hierarchy of types)
T1 (types of order 1). See Definition 1.
Cn (constructions of order n)
i) Let x be a variable ranging over a type of order n. Then x is a construction of order n
over B.
ii) Let X be a member of a type of order n. Then 0X, 1X, 2X are constructions of order n
over B.
iii) Let X, X1,..., Xm (m > 0) be constructions of order n over B. Then [X X1... Xm] is a
construction of order n over B.
iv) Let x1,...,xm, X (m > 0) be constructions of order n over B. Then [λx1...xm X] is a
construction of order n over B.
v) Nothing is a construction of order n over B unless it so follows from Cn (i)-(iv).
Tn+1 (types of order n + 1). Let ∗n be the collection of all constructions of order n over B.
Then
i) ∗n and every type of order n are types of order n + 1.
ii) If m > 0 and α, β1,...,βm are types of order n + 1 over B, then (α β1 ... βm) (see T1 ii)) is
a type of order n + 1 over B.
iii) Nothing is a type of order n + 1 over B unless it so follows from Tn+1 (i) and (ii).
Remark. For the purposes of natural-language analysis, we are currently assuming the
following base of ground types, which is part of the ontological commitments of TIL:
ο: the set of truth-values {T, F};
ι: the set of individuals (the universe of discourse);
τ: the set of real numbers (doubling as discrete times);
ω: the set of logically possible worlds (the logical space).
Empirical languages incorporate an element of contingency, because they denote
empirical conditions that may or may not be satisfied at some world/time pair of evaluation.
Non-empirical languages (in particular the language of mathematics) have no need for an
additional category of expressions for empirical conditions. We model these empirical
conditions as possible-world intensions. They are entities of type (βω): mappings from
possible worlds to an arbitrary type β. The type β is frequently the type of the chronology of
α-objects, i.e., a mapping of type (ατ). Thus α-intensions are frequently functions of type
((ατ)ω), abbreviated as ‘ατω’. Extensional entities are entities of a type α where α ≠ (βω) for
any type β.
Examples of frequently used intensions are: propositions of type οτω, properties of
individuals of type (οι)τω, binary relations-in-intension between individuals of type (οιι)τω,
and individual offices/roles of type ιτω.
Our explicit intensionalization and temporalization enables us to encode constructions of
possible-world intensions, by means of terms for possible-world variables and times, directly
in the logical syntax. Where the variable w ranges over possible worlds (type ω) and t over times
(type τ), the following logical form essentially characterizes the logical syntax of any
empirical language: λwλt […w….t…]. Where α is the type of the object v-constructed by
[…w….t…], by abstracting over the values of the variables w and t we construct a function from
worlds to a partial function from times to α, that is, a function of type ((ατ)ω), or ‘ατω’ for
short.
Logical objects like truth-functions and quantifiers are extensional: ∧ (conjunction), ∨
(disjunction) and ⊃ (implication) of type (οοο), and ¬ (negation) of type (οο). The
quantifiers ∀α, ∃α are type-theoretically polymorphous functions of type (ο(οα)), for an
arbitrary type α, defined as follows. The universal quantifier ∀α is a function that associates a
class A of α-elements with T if A contains all elements of the type α, otherwise with F. The
existential quantifier ∃α is a function that associates a class A of α-elements with T if A is a
non-empty class, otherwise with F. Another logical object we need is the partial polymorphic
function Singularizer Iα of type (α(οα)). A singularizer is a function that associates a
singleton S with the only member of S, and is otherwise (i.e. if S is an empty set or a multi-
element set) undefined.
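Over a finite, explicitly given domain these polymorphic logical objects can be written out directly. The Python sketch below fixes a three-element stand-in for the type α, which of course only approximates the type-theoretic picture:

```python
DOMAIN = {1, 2, 3}          # a finite stand-in for the type alpha

def forall(cls):            # the quantifier of type (o(o alpha))
    return all(cls(x) for x in DOMAIN)

def exists(cls):
    return any(cls(x) for x in DOMAIN)

def singularizer(cls):      # partial: defined on singletons only
    members = [x for x in DOMAIN if cls(x)]
    if len(members) != 1:
        raise ValueError('undefined: class is empty or multi-membered')
    return members[0]

print(exists(lambda x: x > 2))              # True
print(forall(lambda x: x > 0))              # True
print(singularizer(lambda x: x % 2 == 0))   # 2, the only even element
```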
Below, all type indications will be provided outside the formulae in order not to clutter the
notation. Furthermore, ‘X/α’ means that an object X is (a member) of type α. ‘X →v α’ means
that the type of the object v-constructed by X is α. This holds throughout: w →v ω and t →v τ.
If C →v ατω, then the frequently used Composition [[C w] t], which is the intensional descent
(a.k.a. extensionalization) of the ατω-intension v-constructed by C, will be encoded as ‘Cwt’.
When using constructions of truth-functions, we often omit Trivialisation and use infix
notation to conform to standard notation in the interest of better readability. Also, when using
constructions of identities of α-entities, =α/(οαα), we omit Trivialization and the type subscript,
and use infix notation when no confusion can arise. For instance, instead of

‘[0∧ [0=ι a b] [0=οτω λwλt [Pwt a] λwλt [Pwt b]]]’

where =ι/(οιι) is the identity of individuals and =οτω/(ο οτω οτω) the identity of propositions,
a, b constructing objects of type ι and P objects of type (οι)τω, we write

‘[[a = b] ∧ [λwλt [Pwt a] = λwλt [Pwt b]]]’.
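The λwλt discipline and the ‘Cwt’ shorthand can likewise be illustrated with nested functions. The toy world below is a made-up stand-in; only the shape of the types matters.

```python
# A proposition (type o-tau-omega): world -> time -> truth-value.
population_grows = (lambda w: lambda t:
                    w['population'][t] > w['population'][t - 1])

def descent(C, w, t):
    # Intensional descent ('Cwt'): the Composition [[C w] t], i.e. the
    # extensionalization of the intension C at the world/time of evaluation.
    return C(w)(t)

world = {'population': {2019: 7.7, 2020: 7.8}}
print(descent(population_grows, world, 2020))   # True at this <w, t>-pair
```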
We invariably furnish expressions with procedurally structured meanings, which are
explicated as TIL constructions. The analysis of an unambiguous sentence thus consists in
discovering the logical construction encoded by a given sentence. The TIL method of analysis
consists in three steps:
a) Type-theoretical analysis, i.e., assigning types to the objects that receive mention in the
analysed sentence.
b) Type-theoretical synthesis, i.e., combining the constructions of the objects ad (a) in
order to construct the proposition of type οτω denoted by the whole sentence.
c) Type-theoretical checking, i.e., checking whether the proposed analysans is type-
theoretically coherent.
To illustrate the method, let us analyse the sentence
(1) “The Church-Turing thesis is believed to be valid.”
Ad (a). As always, first a type analysis:
Church-Turing/(οι); Thesis_of/((ο∗n)(οι))τω: an empirical function that assigns to a set of
individuals (in this case the couple Church, Turing) a set of hyperpropositions that together
form a thesis the individuals share; [0Thesis_ofwt 0Church-Turing] →v (ο∗n): a set of
hyperpropositions; (to be) Believed/(ο∗n)τω: a property of a hyperproposition; Valid/(οοτω)τω:
a property of a proposition (namely, being true at a ⟨w, t⟩-pair).
Ad (b), (c). For the sake of simplicity, we now perform steps (b) and (c) of the method
simultaneously. We must combine constructions of the objects ad (a) in order to construct the
proposition denoted by the sentence. Since we aim at a literal analysis of the sentence, we use
Trivializations of these objects.16 Here is how.

i) [0Thesis_ofwt 0Church-Turing] →v (ο∗n);
ii) [[[0Thesis_ofwt 0Church-Turing] c] ⊃ [0Validwt [2c]]] →v ο; c →v ∗n, 2c →v οτω;
iii) λc [[[0Thesis_ofwt 0Church-Turing] c] ⊃ [0Validwt [2c]]] →v (ο∗n);
iv) [0∀* λc [[[0Thesis_ofwt 0Church-Turing] c] ⊃ [0Validwt [2c]]]] →v ο; ∀*/(ο(ο∗n));
v) λwλt [0∀* λc [[[0Thesis_ofwt 0Church-Turing] c] ⊃ [0Validwt [2c]]]] →v οτω;
vi) [0Believedwt 0[λwλt [0∀* λc [[[0Thesis_ofwt 0Church-Turing] c] ⊃ [0Validwt [2c]]]]]] →v ο;

(1*) λwλt [0Believedwt 0[λwλt [0∀* λc [[[0Thesis_ofwt 0Church-Turing] c] ⊃ [0Validwt [2c]]]]]]
→v οτω.
Comments. We analysed the expression ‘The Church-Turing thesis’ as an expression that
denotes a set of hyperpropositions, though the thesis as formulated in Section 1 is just one
hyperproposition. Yet this thesis could easily be reformulated as a set of three
hyperpropositions; thus this analysis is a more general one. The Composition (ii) is glossed
like this: for any hyperproposition c that belongs to the set of hyperpropositions that make up
the Church-Turing thesis and the proposition v-constructed by 2c, the Composition
(ii) v-constructs a truth-value. In other words, a hyperproposition belonging to the Church-
Turing thesis constructs a proposition that takes the value T in the given ⟨w, t⟩-pair of evaluation.
The Closure (iii) constructs the set of such hyperpropositions c. Composition (iv) is glossed
like this: for all hyperpropositions c belonging to the Church-Turing thesis it holds that the
proposition v-constructed by 2c is valid in the given ⟨w, t⟩-pair of evaluation. Closure (v)
constructs the proposition with the truth conditions given by (iv). Finally, Composition (vi) v-
constructs the truth-value T according as the Trivialisation of the proposition constructed by
(v) is believed (to be true at a given ⟨w, t⟩-pair of evaluation). We construe Believed/(ο∗n)τω as
a property of a hyperproposition. This leaves room for the fact that if the thesis were
formulated in another (albeit equivalent) way, it would not have to be generally believed.
Thus (1*) is the construction expressed by sentence (1) as its meaning. Note that our
analysis leaves it open whether (1*) constructs an analytically true proposition (that is, a
proposition true in all ⟨w, t⟩-pairs) or an empirical proposition (that is, a proposition true in
some but not all ⟨w, t⟩-pairs).
This completes our exposition on the foundations of TIL. Now we have all the technical
machinery that we will need in Section 4 in which I am going to introduce the procedural
theory of concepts formulated by Materna (1998, 2004) within TIL.
4. Procedural Theory of Concepts
The problems connected with the Church-Turing Thesis are surely of a conceptual character.
A reasonable explication of the Thesis as well as of the other notions connected with
algorithm, effective procedure and suchlike should be based on a fine-grained theory of
concepts. The procedural theory of concepts presented below is one such fine-grained theory.
Since the procedural theory of concepts did not come out of the blue, we first summarize
the historical background underlying the origin of the theory. I begin with Bolzano. His
16 For the definition of literal analysis, see Duží et al. (2010, §1.5, Def. 1.10). Briefly, the literal analysis of an
expression E is such an admissible analysis of E in which the objects that receive mention by semantically
simple meaningful subexpressions of E are constructed by their Trivialisations.
Wissenschaftslehre offers a systematic realist theory of concepts. In Bolzano concepts are
construed as objective entities endowed with structure. But his ingenious work was not well-
known at the time when modern logic was founded by Frege and Russell.
Thus the first theory of concepts that was recognized as being compatible with modern,
entirely anti-psychologistic logic was Frege’s. Frege’s theory, as presented in (1891), (1892b),
construes concepts as total, monadic functions whose arguments are objects (Gegenstände)
and whose values are truth-values. At first sight this definition seems to be plausible. Yet
there are, inter alia, two crucial questions:
a) What are the content and the extension of a concept?
b) What is the sense of a concept word?
It is far from clear what answer Frege could propose to the question (b). After all, no
genuine definition of sense can be found in Frege’s work.17
As for question (a), what can be called an extension is obviously a Wertverlauf (course-of-values). So it seems that it is the sense of the concept word that can be construed as the content of a concept. This is well compatible with Frege’s criticism of the “Inhaltslogiker” in (1972, pp. 31-32).
However, Frege oscillated between two different notions of a function: ‘function-in-
extension’, i.e. function as a mapping (Wertverlauf) and what Church would later call
‘function-in-intension’. The latter notion was not well-defined by Church, yet obviously it can
be understood as Frege’s mode of presentation of a particular function-in-extension. Thus
function-in-intension would be a good candidate for the explication of Frege’s sense.
In his (1956) Church tries to adhere to Frege’s principles of semantics, but he soon realizes that Frege’s explication of the notion of concept is untenable. Concepts should be located
at the level of Fregean sense; in fact, as Church maintains, the sense of an expression E should
be a concept of what E denotes. Consequently, concepts should be associated not only with
predicates (as was the case of Frege), but also with definite descriptions, and in general with
any kind of semantically self-contained expression, since all (meaningful) expressions are
associated with a sense. Even sentences express concepts; in the case of empirical sentences
they are concepts of propositions (‘proposition’ as understood by Church, as a concept of a
truth-value, and not as understood in this article, as a function from possible worlds to
(functions from times to) truth-values).18
The degree to which ‘intensional’ entities, and so concepts, should be fine-grained was of
the utmost importance to Church.19
When summarizing Church’s heralded Alternatives of
constraining intensional entities, Anderson (1998, p. 162) canvasses three options considered
by Church. Senses are identical if the respective expressions are (A0) ‘synonymously
isomorphic’, (A1) mutually λ-convertible (that is, α- and β-convertible), or (A2) logically
equivalent. (A2), the weakest criterion, was refuted already by Carnap in his (1947), and
would not be acceptable to Church, anyway. (A1) is surely more fine-grained. Alternative (0)
arose from Church’s criticism of Carnap’s notion of intensional isomorphism and is discussed
in Anderson (1980). Carnap proposed intensional isomorphism as a criterion of the identity of
belief. Roughly, two expressions are intensionally isomorphic if they are composed from
expressions denoting the same intensions in the same way.
Church, in (1954), constructs an example of expressions that are intensionally isomorphic
according to Carnap’s definition (i.e., expressions that share the same structure and whose
parts are necessarily equivalent), but which fail to satisfy the principle of substitutability.20
17 As for a detailed analysis of the problems with sense in Frege, see Tichý (1988), in particular Chapters 2 and 3.
18 For the critical analysis of Frege’s conception of concepts, see Duží & Materna (2010).
19 Now we are using Church’s terminology; in TIL concepts are hyperintensional entities.
20 See also Materna (2007).
The problem Church tackles is made possible by Carnap’s principle of tolerance (which itself
is plausible). We are free to introduce into a language syntactically simple expressions which
denote the same intension in different ways and thus fail to be synonymous. Yet they are
intensionally isomorphic according to Carnap’s definition. Church used as an example of such
expressions two predicates P and Q, defined as follows: P(n) = n ≥ 3, Q(n) = ¬∃x∃y∃z (xⁿ + yⁿ = zⁿ), where x, y, z, n are positive integers. P and Q are necessarily equivalent, because for all n it holds that P(n) if and only if Q(n). For this reason P and Q are intensionally isomorphic, and so are the expressions “∀n (Q(n) ⊃ P(n))” and “∀n (P(n) ⊃ P(n))”. Still one can easily believe that ∀n (P(n) ⊃ P(n)) without believing that ∀n (Q(n) ⊃ P(n)).21
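The epistemic asymmetry behind Church’s point can be made vivid with a small illustration in Python (ours, not Church’s; the function names and the search bound are invented): P is decided by a single comparison, whereas Q can at best be refuted by an unbounded search for a counterexample, so no finite computation of this naive kind settles it.

```python
# A hedged sketch: P and Q are (necessarily) co-extensional predicates
# of positive integers, yet procedurally utterly different.

def P(n: int) -> bool:
    # Decidable by one comparison.
    return n >= 3

def Q_up_to(n: int, bound: int) -> bool:
    # Q(n): no positive x, y, z satisfy x**n + y**n == z**n.
    # We can only search for a counterexample below an arbitrary bound;
    # a positive verdict is inconclusive without a proof such as Wiles's.
    for x in range(1, bound):
        for y in range(1, bound):
            for z in range(1, bound):
                if x**n + y**n == z**n:
                    return False   # counterexample: Q(n) is false
    return True                    # no counterexample below the bound

print(P(2), Q_up_to(2, 30))   # False False  (3**2 + 4**2 == 5**2)
print(P(3), Q_up_to(3, 30))   # True True
```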
Church’s Alternative (1) characterizes synonymous expressions as those that are λ-convertible.22 But Church’s λ-convertibility includes also β-conversion, which goes too far due to partiality; β-reduction is not guaranteed to be an equivalent transformation as soon as partial functions are involved. Church also considered Alternative (1′) that includes η-conversion. Thus (1′) without β-conversion is the closest alternative to our definition of synonymy based on the notion of procedural isomorphism that we are going to introduce below.
Summarising Church’s conception, we have: A concept is a way to the denotation rather
than a special kind of denotation. Thus concepts should be situated at the level of sense. There
are not only general concepts but also singular concepts, concepts of propositions, etc. Several concepts can identify one and the same object. Now what would we, as realists, say about the
connection between sense and concept? Accepting, as we do, Church’s version as an intuitive
one, we claim that senses are concepts. Can we, however, claim the converse? This would be:
concepts are senses.
A full identification of senses with concepts would presuppose that every concept were
the meaning of some expression. But then we could hardly explain the phenomenon of
historical evolution of language, first and foremost the fact that new expressions are
introduced into a language and other expressions vanish from it. Thus with the advent of a new ⟨expression, meaning⟩ pair a new concept would have come into being. Yet this is
unacceptable for a realist: concepts, qua logical entities, are abstract entities and, therefore,
cannot come into being or vanish. Therefore, concepts outnumber expressions; some concepts
are yet to be discovered and encoded in a particular language while others sink into oblivion
and disappear from language, which is not to say that they would be going out of existence.
For instance, before inventing computers and introducing the noun ‘computer’ into our
language(s), the procedure that von Neumann made explicit was already around. The fact that
in the 19th
century we did not use (electronic) computers, and did not have a term for them in
our language, does not mean that the concept (qua procedure) did not exist. In the dispute
over whether concepts are discovered or invented the realist comes down on the side of
discovery.
Hence in order to assign a concept to an expression as its sense, we first have to define and
examine concepts independently of a language, which we are going to do in the next
paragraphs. Needless to say, our starting point is Church’s rather than Frege’s conception of
concepts, because:
- concepts are structured entities, where their structure is (in principle) derivable from the
grammatical structure of the given (regimented) expression, and
- concepts can be executed to produce an object (if any).
21 Criticism of Carnap’s intensional isomorphism can also be found in Tichý (1988, pp. 8-9), where Tichý points out that the notion of intensional isomorphism is too dependent on the particular choice of notation.
22 See Church (1993, p. 143).
Fregean concepts (1891, 1892b) are interpretable as set-theoretical entities, which does not
meet the above desiderata. Sets are flat, non-structured entities that cannot be executed to
produce anything. It should be clear now that TIL constructions are strong candidates for
‘concepthood’. However, there are two problems that we must address. Firstly, only closed
constructions can be concepts, because open constructions do not construct anything in and
by themselves, they only v-construct something relative to a valuation v. Secondly, from the
conceptual or procedural point of view, constructions are too fine-grained. Thus we must
address the problem of the identity of procedures.
As for the first problem, this concerns in particular expressions that contain indexicals, i.e. such expressions whose meanings are pragmatically incomplete.23 As an example, consider
‘my books’, ‘his father’.
TIL’s anti-contextualist thesis of transparency, viz. that expressions are furnished with constructions as their context-invariant meanings, is valid universally, that is, also for expressions with indexicals. Their meaning is an open construction, that is, a construction containing free variables that are assigned to indexical pronouns as their meanings. In our case the meanings of ‘my books’ and ‘his father’ are
λwλt [⁰Book_ofwt me] →v (οι)τω
λwλt [⁰Father_ofwt him] →v ιτω.
Types. Book_of/((οι)ι)τω: an attribute that dependently on a ⟨w, t⟩-pair assigns to an individual the set of individuals (his/her books); Father_of/(ιι)τω; me, him →v ι.
Similarly as ‘my books’ and ‘his father’ do not denote any particular object, these constructions do not construct individual roles. Rather, they only v-construct. If in a given situation of utterance the value of ‘me’ or ‘him’ is supplied (for instance, by pointing at a particular individual, say, Marie or Tom), we obtain a complete meaning pragmatically associated with λwλt [⁰Book_ofwt me] and λwλt [⁰Father_ofwt him], say, λwλt [⁰Book_ofwt ⁰Marie], λwλt [⁰Father_ofwt ⁰Tom]. Yet the meanings of ‘books of me’ and ‘father of him’ are open constructions that cannot be executed in order to construct an individual role. These expressions do not express concepts.
Thus we have a preliminary definition: Concepts are closed constructions that are
procedurally indistinguishable.
Now we have to address the second problem, viz. the problem of the individuation of
procedures. This is a special problem of a broader one, namely how hyperintensions are
individuated. Hyperintensionality is in essence a matter of the individuation of non-
extensional (‘intensional’) entities. Any individuation is hyperintensional if it is finer than
necessary co-extensionality, such that equivalence does not entail identity. Hyperintensional individuation was originally defined only negatively, leaving room for various positive definitions of its granularity. It is well-established among mathematical linguists and philosophical logicians
that hyperintensional individuation is required at least for attitudinal sentences with attitude
relations that are not logically closed (especially in order to block logical and mathematical
omniscience) and linguistic senses (in order to differentiate between, say, “a is north of b” and
“b is south of a”, whose truth-conditions converge).24
23 For details on pragmatically incomplete meanings, see Duží et al. (2010, §3.4).
24 The theme of hyperintensionality will be explored in a special issue of Synthese to be guest-edited by Bjørn Jespersen and Marie Duží.
Our working hypothesis is that hyperintensional individuation is procedural individuation and that the relevant procedures are isomorphic modulo α-, η- or restricted β-convertibility. Any two terms or expressions whose respective meanings are procedurally isomorphic are semantically indistinguishable, hence synonymous. Procedural isomorphism is a nod to Carnap’s intensional isomorphism and Church’s synonymous isomorphism. Church’s Alternatives (0) and (1) leave room for additional Alternatives.25 One such would be Alternative (½), another Alternative (¾). The former includes α- and η-conversion, while the latter adds a form of restricted β-conversion. If we must choose, we would prefer Alternative (¾), to soak up those differences among constructions that concern only λ-bound variables and thus (at least appear to) lack natural-language counterparts.
There are three reasons for excluding unrestricted β-conversion. First, as mentioned above, unrestricted β-conversion is not an equivalent transformation in logics boasting partial functions, such as TIL (a point illustrated by the sketch below). The second reason is that occasionally even β-equivalent constructions have different natural-language counterparts; witness the difference between attitude reports de dicto vs. de re. Thus the difference between “a believes that b is happy” and “b is believed by a to be happy” is just the difference between β-equivalent meanings.
Where attitudes are construed as relations to intensions (rather than hyperintensions), the attitude de dicto receives the analysis
λwλt [⁰Believewt ⁰a λwλt [⁰Happywt ⁰b]]
while the attitude de re receives the analysis
λwλt [λx [⁰Believewt ⁰a λwλt [⁰Happywt x]] ⁰b]
Types: Happy/(οι)τω; x →v ι; a, b/ι; Believe/(οιοτω)τω.
The de dicto variant is the β-equivalent contractum of the de re variant. The variants are equivalent because they construct one and the same proposition, the two sentences denoting the same truth-condition. Yet they denote this proposition in different ways, hence they are not synonymous. The equivalent β-reduction leads here to a loss of analytic information, namely loss of information about which of the two ways, or constructions, has been used to construct this proposition.26 In this particular case the loss seems to be harmless, though, because there is only one, hence unambiguous, way to β-expand the de dicto version into its equivalent de re variant.27 However, unrestricted equivalent β-reduction sometimes yields a loss of analytic information that cannot be restored by β-expansion.28
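To see concretely why β-reduction can fail to preserve equivalence under partiality, here is a minimal sketch in Python, whose strict evaluation plays the role of Composition with a v-improper argument; the function names are ours, invented for illustration only.

```python
# A minimal sketch: with partial functions, a redex and its
# beta-contractum can differ in whether they yield a value at all.

def undefined() -> int:
    # A partial 'function': it has no value here.
    raise ValueError("no value")

const_one = lambda x: 1      # the Closure  λx.1

# The redex [λx.1  undefined()] aborts: the argument has no value,
# and Python (like Composition in TIL) evaluates it first.
try:
    print(const_one(undefined()))
except ValueError:
    print("the redex yields no value")

# The beta-contractum is just the body, 1 -- which does have a value:
print(1)
```

This mirrors the point made above: the Composition with an improper argument is v-improper, while its β-contractum may well construct something.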
The restricted version of equivalent β-conversion we have in mind consists in the collision-less substitution of free variables for λ-bound variables of the same type, and will be called βr-conversion. This restricted βr-reduction is just a formal manipulation with λ-bound variables that has much in common with α-reduction and less with β-reduction. The latter is the operation of applying a function f/(αβ) to its argument value a/β in order to obtain the value of f at a (leaving it open whether a value emerges). It is the fundamental computational rule of functional programming languages.
25 Recall that (A0) is α-conversion and synonymies resting on meaning postulates; (A1) is α- and β-conversion; (A1′) is α-, β- and η-conversion; (A2) is logical equivalence. See Church (1993). Anderson (1998) adds (A1*) as a generalization of (A0), in which identity is the only permissible permutation. (A1*) is an automorphism defined on a set of λ-terms.
26 For the notion of analytic information, see Duží (2010) and Duží et al. (2010, §5.4).
27 In general, de dicto and de re attitudes are not equivalent, but logically independent. Consider “a believes that the Pope is not the Pope” and “a believes of the Pope that he is not the Pope”. The former, de dicto, variant makes a deeply irrational and most likely untrue attribution, while the latter, de re, attribution is perfectly reasonable and most likely the right one to make. In TIL the de dicto variant is not an equivalent β-contractum of the de re variant, due to the partiality of the role Pope/ιτω.
28 For details, see Duží & Jespersen (in submission).
Thus if f is constructed by the Closure C
C = λx [… x …]
then β-reduction is here the operation of calling the procedure C with a formal parameter x at an actual parameter a: [λx [… x …] ⁰a]. Now the Trivialisation of the value a is substituted for x and the ‘body’ of the procedure C is computed, which means that the Composition [… ⁰a …] is evaluated.
No such features can be found in βr-reduction. If a variable y →v β is not free in C, then the βr-contractum of [λx [… x …] y] is [… y …]. Now the evaluation of the Composition [… y …] does not yield a value of f. As a result we just obtain a formal simplification of [λx [… x …] y].
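The contrast can be made concrete with a toy term representation in Python (a hypothetical mini-implementation, not TIL software): β-reduction substitutes an arbitrary argument into the body, which may then be evaluated, while βr-reduction is the special case where the argument is a mere free variable, so no value of f is computed.

```python
# A toy sketch of beta- vs. beta_r-reduction over a minimal term syntax.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    var: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

def subst(term, name, repl):
    # Substitute repl for free occurrences of name; capture avoidance is
    # omitted, since in this example we only substitute a fresh variable.
    if isinstance(term, Var):
        return repl if term.name == name else term
    if isinstance(term, Lam):
        return term if term.var == name else Lam(term.var, subst(term.body, name, repl))
    return App(subst(term.fn, name, repl), subst(term.arg, name, repl))

def beta(redex):
    # [λx.B  A] -> B[x := A], for an arbitrary argument construction A.
    return subst(redex.fn.body, redex.fn.var, redex.arg)

# beta_r: the argument is just a free variable y of the same type.
term = App(Lam("x", App(Var("Prime"), Var("x"))), Var("y"))
print(beta(term))  # the contractum [Prime y]: a formal simplification only
```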
Thus we define:
Definition 4 (procedurally isomorphic constructions: Alternative (¾))
Let C, D be constructions. Then C, D are α-equivalent iff they differ at most by deploying different λ-bound variables. C, D are η-equivalent iff one arises from the other by η-reduction or η-expansion. C, D are βr-equivalent iff one arises from the other by βr-reduction or βr-expansion. C, D are procedurally isomorphic, denoted ‘C ≈ D’, ≈/(ο∗n∗n), iff there are closed constructions C1,…,Cm, m ≥ 1, such that ⁰C = ⁰C1, ⁰D = ⁰Cm, and all Ci, Ci+1 (1 ≤ i < m) are either α-, η- or βr-equivalent.
Example.
⁰Prime ≈ λx [⁰Prime x] ≈ λy [⁰Prime y] ≈ λz [⁰Prime z] ≈r λz [λy [⁰Prime y] z] ≈ …
Types: Prime/(ον); x, y, z →v ν; ν the type of natural numbers.
Procedural isomorphism is an equivalence relation on the set S of closed constructions of a particular order and thus partitions S into equivalence classes. Hence in any partition cell we can privilege a representative element. In Horák (2002) a method of choosing a representative is defined. Briefly, this method picks out the alphabetically first construction that is neither η- nor βr-reducible. The respective representative is then called a construction in its normal form.
The constructions in the above example belong to one and the same partition class. The representative of this class is ⁰Prime (that is, the primitive concept of the set of prime numbers).
Definition 5 (Concept). A concept is a closed construction in its normal form.
Corollaries.
Concepts are equivalent iff they construct one and the same entity.
Concepts are identical iff they are procedurally isomorphic.
Example.
Equivalent but different concepts of prime numbers:
a) ⁰Prime (simple, primitive)
b) λx [[x > ⁰1] ∧ ∀y [[⁰Divide y x] ⊃ [[y = ⁰1] ∨ [y = x]]]]
natural numbers greater than 1 and divisible just by 1 and themselves
c) λx [[⁰Card λy [⁰Divide y x]] = ⁰2]
natural numbers possessing just two factors
Types. Let ν be the type of natural numbers; Divide/(ονν): the relation of divisibility (y divides x); Card/(ν(ον)): the function that assigns to a finite set of naturals the number of elements of this set; 1, 2/ν; x, y →v ν.
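As an illustration, the two molecular concepts b) and c) can be transcribed into Python procedures (a sketch with helper names of our own choosing); they compute one and the same set, but by different routes.

```python
# Two equivalent but procedurally different 'concepts' of primality.

def divides(y: int, x: int) -> bool:
    return x % y == 0

def prime_b(x: int) -> bool:
    # b) greater than 1 and divisible just by 1 and itself
    return x > 1 and all(y in (1, x)
                         for y in range(1, x + 1) if divides(y, x))

def prime_c(x: int) -> bool:
    # c) possessing exactly two factors
    return sum(1 for y in range(1, x + 1) if divides(y, x)) == 2

# Same extension, different procedures:
assert all(prime_b(x) == prime_c(x) for x in range(1, 200))
```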
Next we need to define the distinction between empirical and analytical concepts.
Definition 6 (empirical vs. analytical concept).
a) A concept C is empirical iff C constructs a non-constant intension (that is, an intension I such that I has different values in at least two ⟨w, t⟩-pairs).
b) A concept C is analytical iff C constructs a constant intension (one that has one and the same value in all ⟨w, t⟩-pairs or no value in any ⟨w, t⟩-pair), or C constructs an extension (typically a mathematical object).
Examples.
The above concepts of primes are analytical: they construct a mathematical entity, the set
of primes, i.e., an extension.
The concept λwλt [[⁰All ⁰Bachelorwt] ⁰Manwt] expressed by “All bachelors are men” is analytical;29 it constructs the constant proposition TRUE that takes value T in every ⟨w, t⟩-pair. Types. All/((ο(οι))(οι)): a restricted quantifier that assigns to a given set of individuals the set of all its supersets; Bachelor, Man/(οι)τω.
The term ‘female bachelor’ is also analytical; its denotation is the constant property of individuals that takes as its value the empty set of individuals in all ⟨w, t⟩-pairs. The concept expressed by this term is [⁰Femalem ⁰Bachelor]. Additional type: Femalem/((οι)τω(οι)τω): a property modifier.30 As a concept of a property modifier, ⁰Femalem is an analytical concept; however, if ⁰Femalep → (οι)τω is a concept of a property, then it is an empirical concept.
The concepts ⁰Bachelor, ⁰Man are empirical.
The concepts expressed by ordinary sentences of a natural language, like “Prague is the
capital of the Czech Republic”, “Alan Turing was an ingenious man” are empirical; they are
concepts of non-constant propositions.
This completes our exposition of the procedural theory of concepts. In the next Section we are going to apply this theory in order to throw some more light on the Church-Turing thesis.
5. The Church-Turing thesis from the conceptual point of view
First, let us summarize the dramatis personae onstage. They are these different concepts:
1. concept of an effective procedure (or algorithm): EP
2. concept of a Turing machine: TM
3. concept of general recursion: GR
4. concept of λ-definability: D
First we investigate TM, GR and D. These concepts construct kinds (classes) of procedures (functions-in-intension). Hence TM, D, GR/∗n+1 → (ο∗n).
29 The term ‘bachelor’ is homonymous: it means either an unmarried man or the lowest university degree, B.A. Here we take into account only the former.
30 For an analysis of property modifiers, see Duží et al. (2010, §4.4). The latest TIL research into modifiers is found in Jespersen and Primiero (forthcoming) and Primiero and Jespersen (2010).
Moreover, it holds for each of these concepts that every procedure belonging to their product produces a computable function-in-extension. These functions-in-extension are of a type (αβ), where α, β are the type τ of positive integers, or β = (ττ), or α = (ττ), and so on. Simply, these functions are numerical functions on positive integers. Formally, the following constructions construct the truth-value T:
∀c [[TM c] ⊃ [⁰Computable ²c]]
∀c [[GR c] ⊃ [⁰Computable ²c]]
∀c [[D c] ⊃ [⁰Computable ²c]]
Additional types. c/∗n; ²c → (ττ); Computable/(ο(ττ)).
The variable c ranges over constructions/procedures producing numerical functions. If such a procedure belongs to the set of procedures identified by the concept TM or GR or D, then its product is a computable numerical function. For this reason we must use Double Execution in the consequent in order to construct the respective numerical function of type (ττ) of which we wish to predicate that it is computable.
These significantly different concepts TM, D and GR construct substantially different classes of procedures:
TM ≠ D ≠ GR
Yet it has been proved that these concepts are equivalent in the following way. A procedure belonging to any of the classes constructed by TM or D or GR produces a function-in-extension belonging to one and the same class CF/(ο(ττ)) of computable functions-in-extension. Thus we define:
Definition 7 (equivalence on the set of concepts of classes of procedures). Let ≅/(ο∗n+1∗n+1) be a relation of equivalence on the set of concepts producing classes of procedures. Let C1, C2/∗n+1 → (ο∗n). Then31
⁰C1 ≅ ⁰C2
if and only if the classes of functions-in-extension constructed by elements of C1, C2, respectively, are identical:
λf ∃c1 [[C1 c1] ∧ [²c1 =1 f]] =2 λg ∃c2 [[C2 c2] ∧ [²c2 =1 g]]
Types: f, g →v (ττ); c1, c2 →v ∗n; ²c1, ²c2 →v (ττ); =1/(ο(ττ)(ττ)): the identity of functions-in-extension; =2/(ο(ο(ττ))(ο(ττ))): the identity of classes of functions-in-extension.
Hence it has been proved that ⁰TM ≅ ⁰D ≅ ⁰GR. It means that the class of computable functions-in-extension is
CF =2 λf ∃t [[TM t] ∧ [²t =1 f]] =2 λg ∃l [[D l] ∧ [²l =1 g]] =2 λh ∃r [[GR r] ∧ [²r =1 h]]
Types: f, g, h →v (ττ); t, l, r →v ∗n; ²t, ²l, ²r →v (ττ); =1/(ο(ττ)(ττ)): the identity of functions; =2/(ο(ο(ττ))(ο(ττ))): the identity of classes of functions-in-extension; CF/(ο(ττ)).
31 In the interest of better readability, we use infix notation now.
Note that we typed the concepts TM, D and GR as analytical concepts. Each of them constructs a class of procedures, an object of type (ο∗n). Are we entitled to do so? Couldn’t any of them be empirical? I don’t think so. The concepts GR and D are obviously analytical concepts: their definitions do not contain any empirical constituent; they are purely mathematical. Could TM perhaps be an empirical concept? Then there is the question what in the definition of a Turing machine might be of an empirical character. If one consults the Stanford Encyclopaedia of Philosophy,32 it is easy to see that in the definition of a Turing machine there is no trace of anything empirical that ‘might be otherwise’, that is, no trace of a concept that would define a non-constant function with the domain of possible worlds.
There are a number of variations of the Turing-machine definition that turn out to be mutually equivalent in the following sense. Formulation F1 and formulation F2 are equivalent if for every machine described in F1 there is a machine described in F2 which has the same input-output behaviour, and vice versa, i.e., when started on the same tape at the same cell, they will terminate with the same tape on the same cell. In other words, all possible concepts TMi of the Turing machine are equivalent according to Definition 7: ⁰TM1 ≅ … ≅ ⁰TMn. The alternative definitions include, inter alia, the definition of a machine with a two-way infinite tape, machines with an arbitrary number of read-write heads, machines with multiple tapes, bi-dimensional tapes, machines where arbitrary movement of the head is allowed, an arbitrary finite alphabet, and so on. Even the definition of a non-deterministic Turing machine, which is apparently a more radical reformulation of the notion of Turing machine, does not alter the definition of Turing computability.
Importantly, none of these alternative definitions contains any empirical concept that would construct an intension, and the defined concepts are equivalent (Definition 7) by constructing classes of procedures that produce elements of one and the same set CF of functions-in-extension.
This might suffice as evidence that the concepts falling under the umbrella TM are analytical as well. Formally, we can prove it like this. Suppose that some of the concepts TMi, D, GR are empirical. Let a concept C be empirical. Then C constructs a property of procedures rather than a class of procedures: C → (ο∗n)τω. In order that C be (contingently) equivalent to the other concepts, for instance, to D, the following must hold:
λwλt [λf ∃c [[Cwt c] ∧ [²c = f]] =2 λg ∃l [[D l] ∧ [²l = g]]]
Additional types: c →v ∗n; ²c →v (ττ).
Since C is empirical, the property of procedures it constructs is a non-constant intension, and so is the proposition constructed by this Closure. But a non-constant proposition is not analytically provable. Hence, there is no empirical concept C among our concepts.33
In summary,
GR, D, TM are all analytical concepts.
Now there is a crucial problem concerning the class EP that can be formulated like this.
Recall that CF is the class of computable functions-in-extension of naturals that TM, D and
GR have in common. Then the Church-Turing thesis can be formulated like this:
Only the elements of CF are computable by an effective procedure EP.
And vice versa,
Only the elements of EP compute the elements of CF.
Formally,
∀c [[[EP c] ⊃ [⁰CF ²c]] ∧ [[⁰CF ²c] ⊃ [EP c]]]
Types: c →v ∗n; ²c →v (ττ); EP/∗n+1 → (ο∗n); CF/(ο(ττ)).
32 See Barker-Plummer, David, ‘Turing machines’, The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Edward N. Zalta (ed.), forthcoming URL = http://plato.stanford.edu/archives/fall2012/entries/turing-machine/.
33 I am grateful to Pavel Materna for an outline of the idea of this proof.
The second conjunct is unproblematic, for sure. If a function is computable, then it is computable by an effective procedure. However, the first conjunct gives rise to a question:
Could a new concept c belonging to EP emerge such that c computes a function that does not belong to CF?
If the answer is in the affirmative, then the Church-Turing thesis would not be true. Again, let us consider two variants of a definition of the concept EP. Either (a) EP is an analytical concept or (b) it is defined as an empirical one.
Let us first consider variant (a), that is, an analytical concept EP. There are three alternatives: the Church-Turing Thesis is
1) a definition
2) an explication
3) possibly provable after a refinement of the concept EP.
Ad 1): As mentioned above, Church (1936, p.356) speaks about defining
the notion … of an effectively calculable function of positive integers by
identifying it with the notion of a recursive function of positive integers (or with a
lambda-definable function of positive integers).
Post rightly criticizes this formulation (1936, p. 105):
“To mask this identification under a definition…blinds us to the need of its
continual verification.”
Indeed, a definition cannot be verified; one can only test whether the concept so defined is adequate, so that a new definition (i.e. a new concept) is not needed.
Ad 2): If TM, GR and D were (Carnapian) explications of EP, then we would end up with at least three concepts which differ in a very significant way yet explicate one and the same concept EP, which seems implausible as well. An explication should make the meaning of an inexact concept (the explicandum) clear. It is a purely stipulative, normative definition, and thus it cannot be true or false, just more or less suitable for its purpose. And it is hardly thinkable that one and the same thing (the concept EP) would be explicated in three substantially different ways unless we ended up with three different concepts EP1, EP2, EP3.
Ad 3): In this case we encounter the problem of a proper calibration of EP. The basic idea or
rather hypothesis is this. If we refine the concept EP so that we obtain a fine-grained
definition of EP such that it strictly delimits the class of procedures involved, then the
Church-Turing thesis becomes provable.
First we have to define refinement of a construction (a concept in this case).34 To this end we need two other notions, namely those of a simple concept and an ontological definition:
Let X be an object that is not a construction. Then ⁰X is a simple concept.
34 For details, see Duží (2010) and Duží et al. (2010, §5.4.4, Definition 5.5).
The ontological definition of an object X is a compound (= molecular rather than simple) concept of X.
Definition 8 (refinement of a construction). Let C1, C2, C3 be constructions. Let ⁰X be a simple concept of X, and let ⁰X occur as a constituent of C1. If C2 differs from C1 only by containing in lieu of ⁰X an ontological definition of X, then C2 is a refinement of C1. If C3 is a refinement of C2 and C2 is a refinement of C1, then C3 is a refinement of C1.
In order to formulate corollaries of this definition, let us denote the analytical content of a
construction C, that is, the set of constituents of C by ‘AC(C)’, and let |AC(C)| be the number
of constituents of C. Then
Corollaries. If C2 is a refinement of C1, then
1) C1, C2 are equivalent by constructing one and the same entity but not procedurally
isomorphic;
2) AC(C1) is not a subset of AC(C2);
3) |AC(C2)| > |AC(C1)|.
For instance, a refinement of the simple concept ⁰Prime is the molecular concept
λx [[⁰Card λy [⁰Divide y x]] = ⁰2],
or, using prefix notation,
λx [⁰= [⁰Card λy [⁰Divide y x]] ⁰2].
• The two concepts are equivalent by constructing one and the same set, viz. the set of primes, but these concepts are not procedurally isomorphic.
• AC(⁰Prime) = {⁰Prime};
• AC(λx [⁰= [⁰Card λy [⁰Divide y x]] ⁰2]) =
{λx [⁰= [⁰Card λy [⁰Divide y x]] ⁰2],
[⁰= [⁰Card λy [⁰Divide y x]] ⁰2],
⁰=, [⁰Card λy [⁰Divide y x]], ⁰2,
⁰Card, λy [⁰Divide y x], [⁰Divide y x], ⁰Divide, y, x}.
• Hence AC(⁰Prime) ⊈ AC(λx [⁰= [⁰Card λy [⁰Divide y x]] ⁰2]).
• |AC(⁰Prime)| = 1 whereas |AC(λx [[⁰Card λy [⁰Divide y x]] = ⁰2])| = 11.
There can be more than one refinement of a concept C. For instance, the Trivialization ⁰Prime is in fact the least informative procedure for producing the set of primes. Using particular definitions of the set of primes, we can refine ⁰Prime in many ways, including:
λx [[⁰Card λy [⁰Divide y x]] = ⁰2],
λx [[x > ⁰1] ∧ ∀y [[⁰Divide y x] ⊃ [[y = ⁰1] ∨ [y = x]]]],
λx [[x > ⁰1] ∧ ¬∃y [[y > ⁰1] ∧ [y < x] ∧ [⁰Divide y x]]].
By refining the meaning CS of a sentence S we uncover a more fine-grained construction
CS’ such that CS and CS’ are equivalent, yet not procedurally isomorphic, and such that the
latter is more analytically informative than the former.35
But theoretically, we could keep
refining one and the same construction ad infinitum, possibly criss-crossing between various
conceptual systems. For instance, we could still refine the definitions of the set of primes above by refining the Trivialization ⁰Divide:
⁰Divide = λyλx [∃z [x = [⁰Mult y z]]].
Types: x, y, z → ν; Mult/(ννν): the function of multiplication defined over the domain of natural numbers ν.
35 The notion of analytic information has been defined in Duží (2010). Briefly, the analytic information conveyed by the meaning of an expression E is the set of constituents of the meaning of E. Comparison of the amount of analytic information conveyed by expressions is based on the definition of a refinement of their meanings.
Substituting the Closure for the Trivialization yields a more informative refinement (we denote the relation of being less analytically informative by ‘<an’):
⁰Prime <an λx [[⁰Card λy [⁰Divide y x]] = ⁰2] <an λx [[⁰Card λy [∃z [x = [⁰Mult y z]]]] = ⁰2] <an …
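A Python transcription of the last refinement may help (a sketch with invented names; the existential search for the witness z can be bounded by x for divisibility over the naturals, so the procedure remains effective).

```python
# Refinement in action: the primitive Divide is replaced by its
# ontological definition via multiplication, Divide y x iff ∃z (x = y*z).

def mult(y: int, z: int) -> int:
    return y * z

def divides_refined(y: int, x: int) -> bool:
    # The witness z never exceeds x, so the search is finite.
    return any(x == mult(y, z) for z in range(0, x + 1))

def prime_refined(x: int) -> bool:
    return sum(1 for y in range(1, x + 1) if divides_refined(y, x)) == 2

print([x for x in range(2, 30) if prime_refined(x)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```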
The uppermost level of refinement depends on the conceptual system in use. Thus we
must define the notion of conceptual system. In general, conceptual systems are a tool by
means of which to characterise and categorize the expressive force of a vernacular and
compare the expressive power of two or more vernaculars.36
In this paper I need the notion of
conceptual system to fix the limit up to which we can refine, in a non-circular manner, the
ontological definitions of the objects within the domain of a given language.
A conceptual system is a set of concepts, some of which must be simple. Simple concepts
are defined as Trivializations of non-constructional entities of types of order 1. A system’s
compound concepts are exclusively derived from its simple concepts. Each conceptual system
is unambiguously individuated in terms of its set of simple concepts. Thus we define:
Definition 9 (conceptual system). Let a finite set Pr of simple concepts C1,…,Ck be given. Let Type be an infinite set of types induced by a finite base (e.g., {ο, ι, τ, ω} or {ο, ν}). Let Var be an infinite set of variables, countably infinitely many for each member of Type. Finally, let
C be an inductive definition of constructions. In virtue of Pr, Type, Var and C, an infinite
class Der is defined as the transitive closure of all the closed compound constructions
derivable from Pr and Var using the rules of C, such that:
i) every member of Der is a compound concept;
ii) if C ∈ Der, then every subconstruction of C that is a simple concept is a member of Pr.
The set of concepts Pr ∪ Der is a conceptual system derived from Pr. The members of Pr are
the primitive concepts, and the members of Der the derived concepts, of the given conceptual
system.
Remark. As is seen, Pr unambiguously determines Der. The expressive power of a given
(stage of a) language L is then determined by the set Pr of the conceptual system underlying
the language L.
Every conceptual system delimits a domain of objects that can be conceptualized by the
resources of the system. There is the correlation that the greater the expressive power, the
greater the domain of objects that can be talked about in L. Yet Pr can be extended into Pr’ in
such a way that Pr′ is no longer logically independent (the way the axioms of an axiomatic system may be mutually independent). Independence means here that Der does not contain a concept C equivalent to a concept C′ of Pr, unless C′ is a subconstruction of C.
36 The theory of conceptual systems was first introduced in Materna (1998, Chs. 6-7) and further elaborated on in Materna (2004).
An example of a, minuscule, independent system would be Pr = {⁰Succ, ⁰0}, where Succ/(νν), 0/ν. Due to transitive closure, there is a derived concept of the function +/(ννν), defined as follows (f → (ννν)):
[⁰I λf ∀x [[[f x ⁰0] = x] ∧ ∀y [[f x [⁰Succ y]] = [⁰Succ [f x y]]]]].
This concept is not equivalent to any primitive concept of the system. However, among the derived concepts of this system there is, for instance, the compound concept of the sum 0+0,
[[⁰I λf ∀x [[[f x ⁰0] = x] ∧ ∀y [[f x [⁰Succ y]] = [⁰Succ [f x y]]]]] ⁰0 ⁰0],
which is equivalent to ⁰0. Yet the system is independent, because the primitive concept ⁰0 is a subconstruction of the above compound concept.
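Transcribed into Python (a sketch of ours; the recursion on y mirrors the two defining equations), the derived concept of addition over the system Pr = {⁰Succ, ⁰0} looks like this.

```python
# Addition derived from successor and zero alone:
#   f x 0        = x
#   f x (Succ y) = Succ (f x y)

def succ(n: int) -> int:
    return n + 1

def add(x: int, y: int) -> int:
    return x if y == 0 else succ(add(x, y - 1))

assert add(0, 0) == 0    # the compound concept of 0+0, equivalent to 0
assert add(3, 4) == 7
```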
An example of a, likewise minuscule, dependent system would be Pr1 = {⁰¬, ⁰∧, ⁰∨}. In this system either ⁰∧ or ⁰∨ is superfluous because, e.g., disjunction can be defined by the compound concept λpλq [⁰¬ [⁰∧ [⁰¬ p] [⁰¬ q]]], which is equivalent to ⁰∨. The simple concept ⁰∨ is not a subconstruction of the compound concept λpλq [⁰¬ [⁰∧ [⁰¬ p] [⁰¬ q]]]. To obtain independent systems, omit either ⁰∧ or ⁰∨. This will yield either Pr2 = {⁰¬, ⁰∧} or Pr3 = {⁰¬, ⁰∨}.
Thus, the set of primitive concepts of an independent system contains no superfluous
concepts and is insofar minimal. Pr1 was an example of a system containing a superfluous
element. However, it should be possible to take an independent system and add one or more
concepts to it and still keep the system independent. When such interesting extensions are
made, the expressive power of the new system increases. To show how this works, first we
define proper extension of a system S as individuated by Pr. A proper extension of S is simply
defined as a system S’ individuated by Pr’ such that Pr is a proper subset of Pr’. An
interesting extension is one that preserves the independence of the initial system.
The definition of conceptual system does not require that the system’s Pr contain
concepts of logical or mathematical operations. However, any conceptual system intended to
underpin a language possessing even a minimal amount of expressive power of any interest
must contain such concepts. Otherwise there will be no means to combine the non-logical
concepts of the system, whether that system be mathematical, empirical or a mix of both. Let
‘LM-part of S’ denote the portion of logical/mathematical concepts of S, and ‘E-part of S’
denote the portion of empirical concepts of S.
Proper extensions of S come in two variants, essential and non-essential. A proper non-essential extension S′ of S is defined as follows: the LM-part of S ⊂ the LM-part of S′ and the E-part of S = the E-part of S′. A proper essential extension S′ of S is defined as follows: the LM-part of S = the LM-part of S′ and the E-part of S ⊂ the E-part of S′. It may happen that both the LM-part and the E-part of the system are extended. Then we simply talk of an extension of S.
Here is an example. Let S be assigned to a language L as its conceptual system. Let PrL = {⁰Parent, ⁰Male, ⁰Female, ⁰∧, ⁰∃, ⁰¬, ⁰=}. An element of DerL is the concept of the relation-in-intension Brotherhood; to wit,
λwλt [λxλy [∃z [[[⁰Parentwt z x] ∧ [⁰Parentwt z y]] ∧ [⁰Malewt x]]]].
Types: Male, Female/(οι)τω; Parent/(οιι)τω; the types of the logical objects are obvious.
In general, when the speakers of L find that the object defined by a compound concept is
frequently needed, they are free to introduce, via a linguistic convention, a new expression co-
denoting this object. Whenever this happens, a verbal definition sees the light of day. For
instance, the speakers may decide to introduce the relational predicate ‘is a brother of’ to co-
denote the relation-in-intension defined by some compound concept encompassing various
logical concepts and empirical concepts such as Parent and Male, as done above.
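For illustration, here is a Python sketch of the derived concept of Brotherhood evaluated over one toy ⟨w, t⟩-snapshot of facts (the data and names are invented).

```python
# Toy facts standing in for the values of Parent and Male at one <w, t>.
parents = {("Ann", "Tom"), ("Ann", "Eve"), ("Bob", "Tom"), ("Bob", "Eve")}
male = {"Tom", "Bob"}

def brother_of(x: str, y: str) -> bool:
    # Per the compound concept: x and y share a parent z, and x is male.
    return x in male and any(
        (z, x) in parents and (z, y) in parents
        for z in {p for p, _ in parents})

print(brother_of("Tom", "Eve"))  # True
print(brother_of("Eve", "Tom"))  # False (Eve is not male)
```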
Back to our problems concerning effective procedure/algorithm (EP). Before adducing
possible refinements of the concept EP, let us try to answer the question:
What do the concepts belonging to TM, GR, and D have in common?
They comply with finitism. Mendelson (1990, p. 225) says about computable functions:
… we do not mean actual human computability or empirically feasible
computability. … When we talk about computability, we ignore any limitations of
space, time, or resources.
This does not violate the tenets of finitism; unlimited is not actually infinite, of course. The difference is similar to that between the application of the (unrestricted) general quantifier ∀ (‘for all’) and lambda abstraction (‘for any’). For instance, Fermat’s Last Theorem, “No three positive integers a, b, and c can satisfy the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than two”, expresses the construction37
[⁰∀ λn [[n > ⁰2] ⊃ ¬[⁰∃ λ(a b c) [aⁿ + bⁿ = cⁿ]]]]
or, equivalently,
[⁰∀ λ(a b c n) [[n > ⁰2] ⊃ [aⁿ + bⁿ ≠ cⁿ]]]
This construction/procedure is not effectively executable/computable, because it involves and presupposes the existence of an actual infinity, viz. the set of positive integers. The execution of this construction would amount to, inter alia, the execution of these constituents:
• construct the (characteristic function of the) set of 4-tuples ⟨a, b, c, n⟩:
λa λb λc λn [[n > ⁰2] ⊃ [aⁿ + bⁿ ≠ cⁿ]]
• check whether this set is the set of all such 4-tuples.
The first constituent is glossed “for any (λ) positive integers a, b, c and n, check whether the Composition [[n > ⁰2] ⊃ [aⁿ + bⁿ ≠ cⁿ]] v-constructs T”. This constituent is easily executable and complies with finitism. Only potential infinity is involved rather than actual infinity. No such luck with the second constituent, which involves actual infinity, viz. the set of all 4-tuples.
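The contrast between the two constituents can be illustrated in Python (a sketch; the sampling bound is arbitrary): the condition on any single 4-tuple is a finite computation, whereas verifying it for all 4-tuples would require traversing an actually infinite domain.

```python
from itertools import product

def condition(a: int, b: int, c: int, n: int) -> bool:
    # [[n > 2] ⊃ [a**n + b**n ≠ c**n]] -- one finite, effective check.
    return not (n > 2) or a**n + b**n != c**n

# We can only ever sample the potentially infinite domain of 4-tuples:
print(all(condition(a, b, c, n)
          for a, b, c, n in product(range(1, 15), repeat=4)))  # True
```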
Now we are going to try to refine the concept of algorithm/effective procedure (EP) in such a way that the Church-Turing thesis might become provable (though then there is the question whether the Church-Turing thesis would not degenerate into triviality). First, however, we must put the notion of procedure on more solid ground. Using TIL vernacular,
a procedure P is a sequence of constituents of P each of which (including P itself) must be executed in order to produce a product of P (if any).
Note that a procedure is not a mere sequence of constituent instructions, that is, a set-theoretical object. As mentioned above, a set cannot be executed. However, the phrase in parentheses ‘including P itself’ expresses an important constraint that raises P above the set-theoretical, extensional level up to the hyperintensional one, that is, the procedural level of abstraction.
Now a possible refinement of the concept of an effective procedure EP yields this refined
definition: Let a concept C belong to EP. Then
1) C is a finite sequence of constituents each of which (including C itself) must be executed
to produce a product of C (if any);
2) execution of none of the constituents of C involves actual infinity;
3) execution of none of the constituents of C calls for an additional input argument;
4) in order to produce the product of C, neither infinitely small nor infinitely large execution time is necessary.
37 Now we use ordinary mathematical notation to make the constructions easier to read.
Our hypothesis is that the so-defined EP is analytical and provably equivalent to TM, D and GR. But isn’t the Thesis then just trivial? I do not think so, because by lifting some of the constraints (for instance, if we allow infinitely large execution time), we obtain a new class of procedures, and we may ask again whether those procedures are equivalent to the procedures defined by TM, D and GR.
If we introduce a proper essential extension of the conceptual system in use, then we enter the zone of empirical concepts, i.e.
Variant (b). In this case the Church-Turing thesis is not true, because, as we have seen above, no empirical concept can be logically equivalent to the analytical concepts TM, D, GR. We would end up with an empirical proposition on our hands, viz. the one constructed by
λwλt [EPwt = GR = D = TM]
which is not analytically provable. However, this empirical conception leaves room for discovering other concepts of classes of procedures computing, in this or that way, numerical functions that are not computable in the classical sense, that is, that do not belong to the class CF.
One such empirical variant is provided by the concept of machine-computable functions.
But this would be too radical an extension, because there is a substantial difference between
the analytical concept EP and the concept machine-computable in the wide sense;38
the latter
involves infinitely small times and thus does not meet constraint (4) of the refined definition above.39
Bertrand Russell, Ralph Blake and Hermann Weyl independently described one extreme
form of temporal patterning. It seems that this temporal patterning was first described by
Russell, in a lecture given in Boston in 1914. In a discussion of Zeno’s paradox of the race-
course Russell said, “If half the course takes half a minute, and the next quarter takes a quarter
of a minute, and so on, the whole course will take a minute” (Russell 1915, pp. 172-3).40
Hence analytical EP and machine-computable in the wide sense are different non-
equivalent concepts. Recall that machine-computable in the narrow sense is an empirical
concept and thus non-equivalent to the analytical variant of EP as well.
Remark. In Gödel (Collected Works II., p. 306) we find Gödel’s philosophical criticism of
Turing:
What Turing disregards completely is the fact that mind, in its use, is not static,
but constantly developing…
This remark of Gödel’s is, in general, noteworthy, but presumably it rests on a misconception. Turing actually had in mind EP as an analytical concept, while Gödel intended to draw our attention to perspectives similar to the notion of machine-computable in the wide sense.
Besides machine-computable in the narrow/wide sense we have another interesting notion
of computability, namely the notion of O-machines. Turing (1939, pp. 172ff) defines O-
machines as follows:
… an O-machine is an ordinary Turing machine augmented by an ‘oracle’.
38 See Section 2.
39 Note that ancient paradoxes like Zeno’s paradoxes of motion are paradoxical due to the same trick; they are based on the assumption of an infinitely small instant of time.
40 These passages draw on material from Copeland (1998).
An oracle is a primitive operation – a black box – that returns the values of an incomputable function on integers.
O-machines can compute more functions than ordinary Turing machines can, depending on the restrictions placed on the oracle. This is due to the fact that O-machines do not meet constraint (3) of the refined definition. Moreover, if no restrictions are placed on the oracle, then this generalization and broadening of the concept of computability makes the concept trivial: any function of integers is computable relative to the capabilities of some oracle. As a result, the concept of an O-machine is an empirical one.
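A hedged Python sketch of the idea (the interface is hypothetical, and a genuine halting oracle cannot be implemented as an effective procedure): an O-machine is an ordinary program that consults a black-box parameter whenever needed.

```python
from typing import Callable

def run_with_oracle(inputs: list[str],
                    oracle: Callable[[str], bool]) -> list[str]:
    # Keep exactly those inputs the oracle approves of -- e.g., membership
    # in an incomputable set, were such an oracle real.
    return [i for i in inputs if oracle(i)]

# Any stub will do for illustration; it merely occupies the oracle's slot:
fake_halting_oracle = lambda src: "while True" not in src
print(run_with_oracle(["print(1)", "while True: pass"],
                      fake_halting_oracle))   # ['print(1)']
```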
Tichý (1969) distinguishes between two kinds of procedures:
a) autonomous (analytic): their product depends on the outcome of the foregoing steps only,
irrespective of the state of the external world, and
b) empirical: the product does depend on the state of the world.
An empirical system contains a finite set of individuals and an ‘intensional basis of elementary tests’. These elementary tests and their results are then numerical surrogates of the elements of the set W of possible worlds. The Turing machine works with an oracle that supplies the computation with information about the state of the external world in terms of W, whenever needed.
Using current IT terminology, we might say that Tichý’s empirical system corresponds to an information system with a database that is gradually updated. The oracle is simulated by data collection and corresponds to a database update. However, each computation involving a given database state is effective, because it is executed over a finite database state that is a snapshot of a fragment of the actual world.41 This explains how such an empirical information system can function in practice, computing and producing its products.
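Read this way, a minimal Python sketch (invented names and data) shows why each individual computation remains effective: queries consult only a finite database snapshot, while updates simulate the oracle’s reports on the external world.

```python
database = {"raining": False}          # current finite snapshot

def update(fact: str, value: bool) -> None:
    database[fact] = value             # data collection = oracle call

def query(fact: str) -> bool:
    return database.get(fact, False)   # effective: finite state only

update("raining", True)                # the world changed; snapshot follows
print(query("raining"))                # True
```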
6. Summary and concluding remarks
We considered four ways of construing the notion of computability:
1) EP – analytical concept of effective procedure, algorithm
2) TM – Turing machine, GR – general recursivity, D – lambda definability
3) MN – machine-computable in the narrow sense (for instance with laws of physics
imposing limitations on the machine)
MW – machine-computable in the wide sense (for instance involving infinitely small
times…)
4) O-machines with an oracle.
The Church-Turing thesis claims the equivalence of (1) and (2). Thus the Church-Turing thesis proposes three kinds of refinement of the concept of effective procedure/algorithm. At this point we can formulate a hypothesis: if the concept of an effective procedure (algorithm) is sufficiently refined and delimited, for instance, as proposed above by our refined definition, then the Church-Turing thesis becomes provable. Though only a hypothesis, the idea seems attractive. As for concepts (3) and (4): concept MN is empirical, therefore not equivalent to EP; concept MW is incompatible with (1); and O-computability is incompatible with (2).
41 On the assumption of flawless data collection.
In this paper I deployed TIL and the procedural theory of concepts built within TIL in
order to analyse the problems connected with the Church-Turing thesis and consequently the
problems of the specification of the concept of an effective procedure/algorithm. I did not
provide definite answers to the questions posed by these problems, which was not the goal of
the paper. Yet I believe that our exact, fine-grained analysis contributes to solving these
problems by making available explicit and rigorous descriptions of them, thereby rendering
them logically tractable.
Acknowledgements.
This research has been supported by the Grant Agency of the Czech Republic, Project No. 401-10-
0792, Temporal Aspects of Knowledge and Information, and also by the internal grant agency of VSB-
Technical University Ostrava, Project No. SP2012/26, An Utilization of Artificial Intelligence in
Knowledge Mining from Software Processes. A version of this paper was presented by the author as an
invited talk at the Studia Logica International Conference on Church’s Thesis: Logic, Mind and
Nature, Krakow, Poland, June 3-5, 2010.
I am indebted to Pavel Materna, who was co-invited to the conference, for his inspiring ideas that
positively contributed to the quality of the presentation as well as the resulting paper. I am also
grateful to Bjørn Jespersen whose valuable comments helped me to improve the structure of the paper
and to correct my inappropriate English formulations.
References
Abramson, F.G. (1971). ‘Effective Computation over the Real Numbers’. Twelfth Annual Symposium
on Switching and Automata Theory. Northridge, Calif.: Institute of Electrical and Electronics
Engineers.
Anderson, C.A. (1980). ‘Some new axioms for the logic of sense and denotation’. Nous 14, 217-234.
Anderson, C.A. (1998). ‘Alonzo Church’s contributions to philosophy and intensional logic’. The
Bulletin of Symbolic Logic 4, 129-171.
Blass, A. & Gurevich, Y. (2003). ‘Algorithms: A quest for absolute definitions’, Bulletin of European
Association for Theoretical Computer Science 81, 2003.
Börger, E., Grädel, E., Gurevich, Y. (2001). The Classical Decision Problem. Springer Verlag, Perspectives in Mathematical Logic, 1997; second printing, Springer Verlag, 2001.
Brown, J.R. (1999). Philosophy of mathematics. London, New York: Routledge.
Carnap, R. (1947). Meaning and necessity. Chicago: Chicago University Press.
Church, A. (1932). ‘A set of Postulates for the Foundation of Logic’. Annals of Mathematics, second
series, 33, 346-366.
Church, A. (1936). ‘An Unsolvable Problem of Elementary Number Theory’. American Journal of
Mathematics, 58, 345-363.
Church, A. (1941). The calculi of lambda conversion. Annals of Mathematical Studies. Princeton:
Princeton University Press.
Church, A. (1954). ‘Intensional isomorphism and identity of belief’. Philosophical Studies 5, 65-73.
Church, A. (1956). Introduction to mathematical logic. Princeton: Princeton University Press.
Church, A. (1993). ‘A revised formulation of the logic of sense and denotation. Alternative (1)’. Noûs 27, 141-157.
Copeland, B.J. & Proudfoot, D. (1999). ‘Alan Turing's Forgotten Ideas in Computer Science’.
Scientific American, 280 (April), 76-81.
Copeland, B.J. & Proudfoot, D. (2000). ‘What Turing Did After He Invented the Universal Turing
Machine’. Journal of Logic, Language, and Information, 9, 491-509.
Copeland, B.J. & Sylvan, R. (1999). ‘Beyond the Universal Turing Machine’. Australasian Journal of
Philosophy, 77, 46-66.
Copeland, B.J. (1998). ‘Even Turing Machines Can Compute Uncomputable Functions’. In Calude,
C., Casti, J., Dinneen, M. (eds) 1998, Unconventional Models of Computation, London and
Singapore: Springer-Verlag, 150-164.
Copeland, B.J. (2000). ‘Narrow Versus Wide Mechanism’. Journal of Philosophy, 97, 5-32.
Copeland, B.J. (2008). ‘The Church-Turing Thesis’. The Stanford Encyclopaedia of Philosophy (Fall
2008 Edition), Edward N. Zalta (ed.), URL =
<http://plato.stanford.edu/archives/fall2008/entries/church-turing/>.
Curry, H.B. (1929). ‘An Analysis of Logical Substitution’. American Journal of Mathematics, 51,
363-384.
Curry, H.B. (1930). ‘Grundlagen der kombinatorischen Logik’. American Journal of Mathematics, 52,
509-536, 789-834.
Curry, H.B. (1932). ‘Some Additions to the Theory of Combinators’. American Journal of
Mathematics, 54, 551-558.
Detlefsen, M. (1990). ‘On an Alleged Refutation of Hilbert’s Program Using Gödel’s first
incompleteness theorem’, Journal of Philosophical Logic, 19, 343-377.
Duží, M. & Materna, P. (2010). ‘Can concepts be defined in terms of sets?’ Logic and Logical
Philosophy, 19, 195-242.
Duží, M. (2005). ‘Kurt Gödel. Metamathematical results on formally undecidable propositions:
Completeness vs. Incompleteness’. Organon F, XII: 4, pp. 447-474.
Duží, M. (2010). ‘The paradox of inference and the non-triviality of analytic information’. Journal of
Philosophical Logic, 39: 5, pp. 473-510.
Duží, M., Jespersen, B., Materna, P. (2010). Procedural Semantics for Hyperintensional Logic. Foundations and Applications of Transparent Intensional Logic. First edition. Berlin: Springer, series Logic, Epistemology, and the Unity of Science, vol. 17, 2010.
Duží, M., Jespersen, B. (in submission). ‘Procedural isomorphism and restricted -conversion’,
revised and resubmitted to Logic Journal of the IGPL.
Feferman, S., ed. (1986): Kurt Gödel: Collected Works. Oxford University Press.
Frege, G. (1891). Funktion und Begriff. Jena: H. Pohle. (Vortrag, gehalten in der Sitzung vom 9.
Januar 1891 der Jenaischen Gesellschaft für Medizin und Naturwissenschaft, Jena, 1891).
Frege, G. (1892a). ‘Über Sinn und Bedeutung’. Zeitschrift für Philosophie und philosophische Kritik
100: 25-50.
Frege, G. (1892b). ‘Über Begriff und Gegenstand’. Vierteljahrschrift für wissenschaftliche
Philosophie 16: 192-205.
Frege, G. (1972). Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen
Denkens. Halle: L. Nebert, 1879. Translated as Begriffsschrift, a Formula Language, Modeled
upon that of Arithmetic, for Pure Thought. In From Frege to Gödel, edited by Jean van
Heijenoort. Cambridge, MA: Harvard University Press, 1967. Also as Conceptual Notation and
Related Articles. Edited and translated by Terrell W. Bynum. London: Oxford University Press,
1972.
Gandy, R. (1980). ‘Church's Thesis and Principles for Mechanisms’. In Barwise, J., Keisler, H.J.,
Kunen, K. (eds), The Kleene Symposium. Amsterdam: North-Holland.
Gödel, K. (1934). ‘On Undecidable Propositions of Formal Mathematical Systems’. Lecture notes
taken by Kleene and Rosser at the Institute for Advanced Study. Reprinted in Davis, M. (ed.)
1965. New York: Raven.
Herbrand, J. (1932). ‘Sur la non-contradiction de l'arithmetique’. Journal fur die reine und
angewandte Mathematik, 166, 1-8.
Horák, A. (2002). The Normal Translation Algorithm in Transparent Intensional Logic for Czech,
PhD Thesis, Masaryk University, Brno, retrievable at http://www.fi.muni.cz/~hales/disert/
Jespersen, B. and G. Primiero, ‘Alleged assassins: realist and constructivist semantics for modal
modifiers’, Lecture Notes in Computer Science, forthcoming.
Kleene, S.C. (1936). ‘Lambda definability and recursiveness.’ Duke Mathematical Journal, 2, 340-
353.
Kleene, S.C. (1952). Introduction to Metamathematics. D. Van Nostrand Co., Inc., New York.
Kleene, S.C. (1967). Mathematical Logic. John Wiley & Sons, Inc., New York-London-Sydney 1967.
First Corrected printing 1968.
Kolmogorov, A.N. & Uspensky, V.A. (1958, 1963). ‘On the definition of algorithm’, Uspekhi Mat.
Nauk 13:4 (1958), 3-28, Russian; translated into English in AMS Translations 29 (1963), 217-
245.
Kolmogorov, A.N. (1953). ‘On the concept of algorithm’, Uspekhi Mat. Nauk 8:4 (1953), 175-176, Russian. An English translation in Uspensky & Semenov (1993), pp. 18-19.
Materna, P. (1998). Concepts and Objects. Helsinki: Acta Philosophica Fennica, vol. 63.
Materna, P. (2004). Conceptual Systems. Berlin: Logos.
Materna, P. (2007). ‘Church’s criticism of Carnap’s intensional isomorphism from the viewpoint of
TIL’. In The World of Language and the World Beyond Language: A Festschrift for Pavel
Cmorej, eds. T. Marvan and M. Zouhar, 108-118. Bratislava: Department of Philosophy, Slovak
Academy of Sciences.
Mendelson, E. (1990). ‘Second thoughts about Church’s thesis and mathematical proofs’. Journal of
Philosophy, 87: 225–233.
Post, E.L. (1936). ‘Finite Combinatory Processes - Formulation 1’, Journal of Symbolic Logic, 1, 103-
105.
Post, E.L. (1943). ‘Formal Reductions of the General Combinatorial Decision Problem’, American
Journal of Mathematics, 65, 197-215.
Post, E.L. (1946). ‘A Variant of a Recursively Unsolvable Problem’, Bulletin of the American
Mathematical Society, 52, 264-268.
Primiero, G. and B. Jespersen (2010). ‘Two kinds of procedural semantics for privative modification’,
Lecture Notes in Artificial Intelligence, 6284, 252-71.
Russell, B.A.W. (1915). Our Knowledge of the External World as a Field for Scientific Method in
Philosophy. Chicago: Open Court.
Schönfinkel, M. (1924). ‘Über die Bausteine der mathematischen Logik’. Mathematische Annalen, 92,
	305-316.
Shepherdson, J.C., Sturgis, H.E. (1963). ‘Computability of Recursive Functions’. Journal of the ACM,
10, 217-255.
Siegelmann, H.T., Sontag, E.D. (1994). ‘On the Computational Power of Neural Nets’. Proceedings of
the 5th Annual ACM Workshop on Computational Learning Theory, 440-449.
Stewart, I. (1991). ‘Deciding the Undecidable’. Nature, 352, 664-5.
Tichý, P. (1968). ‘Smysl a procedura’. Filosofický časopis 16: 222-232. Translated as ‘Sense and
procedure’ in (Tichý 2004: 77-92).
Tichý, P. (1969). ‘Intensions in terms of Turing machines’. Studia Logica 26: 7-25. Reprinted in
(Tichý 2004: 93-109).
Tichý, P. (2004). Pavel Tichý´s Collected Papers in Logic and Philosophy, V. Svoboda, B. Jespersen,
C. Cheyne (eds.), Prague: Filosofia, Czech Academy of Sciences, and Dunedin: University of
Otago Press.
Turing, A.M. (1936). ‘On Computable Numbers, with an Application to the Entscheidungsproblem’.
Proceedings of the London Mathematical Society, Series 2: 42 (1936-37), 230-265.
Turing, A.M. (1939). ‘Systems of Logic Based on Ordinals’. Proceedings of the London Mathematical
	Society, 45, 161-228.
Uspensky, V.A. (1992). ‘Kolmogorov and mathematical logic’, Journal of Symbolic Logic 57: 2, 385-
412.
Uspensky, V.A. and Semenov, A.L. (1993). Algorithms: Main Ideas and Applications, Kluwer.
A Procedural Interpretation Of The Church-Turing Thesis
In particular, I will define and make use of the notion of concept refinement, and propose constraints that would delimit the concept of algorithm in such a way that the equivalence between the left-hand and right-hand sides of the Church-Turing Thesis might be provable. Moreover, the distinction between analytical and empirical concepts should elucidate the difference between purely theoretical computational devices and machines that are restricted by empirical/physical laws.

1. Brief summary of Gödel's Incompleteness Theorems3

The German mathematician David Hilbert (1862-1943) announced his program of formalization of mathematics in the early 1920s. It calls for a formalization of all of mathematics in axiomatic form, and for a proof that such formal axiom systems are consistent. The consistency proof itself was to be carried out using only what Hilbert called finitary methods. The special epistemological character of finitary reasoning would then yield the required justification of classical mathematics. Although Hilbert proposed his program in this form only in 1921, it can be traced back to around 1900, when he first pointed out the necessity of giving a direct consistency proof of analysis. This was the time when worrying paradoxes began to crop up in mathematics (Zermelo's paradox in 1900, Russell's antinomy in 1901, later in 1935 the Kleene-Rosser paradox, and many other paradoxes of self-reference), most of them stemming from careless use of actual infinity. Hilbert first thought that the problem of paradoxes arising from the self-referential 'vicious circle' had essentially been solved by Russell's type theory in Principia. This is true, yet some fundamental problems of axiomatics remained unsolved, including, inter alia, the decision problem.

In general, the idea of finitary axiomatization is simple: if we choose some basic formulas (axioms) that are decidedly true, and if we use a finite, effective method of applying simple rules of inference that preserve truth, then no falsehood can be derived from true axioms; hence no contradiction can be derived, and no paradox will crop up. Again, this is true, but the problem remains that in this way we can never derive all true sentences of mathematics, because there always remain independent sentences which we are not able to decide to be true or false.

From the logical point of view, the decision problem is this. Given a closed formula of first-order predicate logic (a sentence), decide whether it is satisfiable (respectively, logically valid). Proof theorists usually prefer the validity version, whereas model theorists prefer the satisfiability version. In 1928 Hilbert and Ackermann published a concise small book, Grundzüge der theoretischen Logik, in which they arrived at exactly this point: they defined axioms and derivation rules of first-order predicate logic (FOL), and formulated the problem of completeness. They raised the question whether such a proof calculus is complete in the sense that each logical truth is provable within the calculus; in other words, whether the calculus proves exactly the logically valid FOL formulas.
Gödel's Completeness Theorem gives a positive answer to this question: the first-order predicate proof calculus with appropriate axioms and rules is a complete calculus, i.e., all the FOL logical truths are provable: if |= φ, then |– φ. Moreover, in a consistent FOL system, syntactic provability is equivalent to being logically true: |= φ iff |– φ.

3 Portions of this section draw on material from Duží (2005).
There is even a stronger version of the Completeness Theorem that Gödel formulated and proved as well. We derive consequences not only from logically valid sentences but also from other sentences, true under some interpretation rather than all interpretations. For instance, from the facts that no prime number greater than 2 is even and that 11 is a prime number greater than 2, we can derive that the number 11 is not even. In FOL notation we have:

∀x [[P(x) ∧ G(x, a)] → ¬E(x)], [P(b) ∧ G(b, a)] |– ¬E(b).

None of these formulas is a logical truth. They are true only under some but not all possible interpretations. One such interpretation that makes the premises true is the intended one, viz. the interpretation with the universe of natural numbers, assigning the set of primes to the symbol P, the relation of being greater than to the symbol G, the set of even numbers to E, and the numbers 2 and 11 to the constants a and b, respectively. Yet this derivation is correct, since the conclusion is logically entailed by the premises: whenever the premises are true, the conclusion must be true as well. In other words, the conclusion is true in all the models of the premises.

To formulate the strong version of the Completeness Theorem, we need to define the notions of theory and proof in a theory. A (FOL) theory is given by a (possibly infinite) set of FOL logical axioms and a set of special axioms. A proof in a theory T is a sequence of formulas φ1, …, φn such that each φi is either
- a logical axiom, or
- a special axiom of T, or
- derived from some previous members of the sequence φ1, …, φi−1 using a derivation rule of FOL.

A formula φ is provable in T iff it is the last member of a proof in T; we also say that the theory T proves φ, and the formula φ is a theorem of the theory (denoted T |– φ). A structure M is a model of the theory T, denoted M |= T, iff each special axiom of T is valid in M. The strong version of the Completeness Theorem holds that a formula φ is provable in a (consistent) theory T if and only if φ is logically entailed by its special axioms; in other words, iff φ is valid in every model of the theory; in (meta-)symbols: T |= φ iff T |– φ.

Gödel's famous results on incompleteness, which entirely changed the character of modern mathematics, were announced by Gödel in 1930, and his paper 'Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I' was published in 1931. This work contained a detailed proof of the first Incompleteness Theorem and a formulation of the second Incompleteness Theorem; both theorems were formulated within the system of Principia Mathematica. In 1932 Gödel published in Vienna a short summary, 'Zum intuitionistischen Aussagenkalkül', which was based on a theory that is nowadays called Peano arithmetic.

In order to introduce these results in a comprehensible way, let me briefly recapitulate the main steps of Gödel's argument:

1. A theory is adequate if it encodes finite sequences of numbers and defines sequence operations such as concatenation. An arithmetic theory such as Peano arithmetic (PA) is adequate (so is, e.g., set theory).
2. In an adequate theory T we can encode the syntax of terms, sentences (closed formulas) and proofs. This means that we can ask which facts about provability in T are provable in T itself. Let us denote the code of φ as ⟨φ⟩.
3. The self-reference (diagonal) lemma: For any formula φ(x) (with one free variable) of an adequate theory, there is a sentence ψ such that ψ iff φ(⟨ψ⟩).
4. Let Th(N) be the set of numbers that encode true sentences of arithmetic (i.e., formulas true in the standard model of arithmetic N), and Thm(T) the set of numbers that encode sentences provable in an adequate (sound) theory T. Since the theory is sound, the latter is a subset of the former: Thm(T) ⊆ Th(N). It would be nice if they were the same; in that case the theory T would be complete.
5. No such luck if the theory T is recursively axiomatised, i.e., if the set of axioms is computable in the following sense: there is an algorithm that, given an input formula φ, computes a Yes/No answer to the question whether φ is an axiom. The computability of the set of axioms and the completeness of the theory T are two goals that cannot be achieved simultaneously, because:

5.1. The set Th(N) is not even definable, i.e., there is no arithmetic formula that would be true of a number iff the number were in the set, and false otherwise. Here is why. Suppose Th(N) were definable; then so would be its complement, the property of being the number of a sentence that is not true in N. By Self-Reference (3) there would then be a sentence φ such that φ iff ⟨φ⟩ ∉ Th(N). Hence φ iff ⟨φ⟩ ∉ Th(N) iff φ is not true in N iff not φ: contradiction! There is no such φ. Since being non-definable implies being non-computable, there will never be a program that decides whether an arithmetic sentence is true or false (in the standard model of arithmetic).

5.2. The set Thm(T) is definable in an adequate theory, say Robinson's arithmetic Q: for any formula φ, the Gödel number ⟨φ⟩ is in Thm(T) iff φ is provable. For: the set of axioms is recursive, i.e., computable; hence the set of proofs that use these axioms is computable as well, and the set Thm(T) of provable formulas is recursively enumerable. Since recursive enumerability implies definability in adequate theories, Thm(T) is definable, and so is its complement. By Self-Reference (3) there is then a sentence γ such that γ iff ⟨γ⟩ ∉ Thm(T); that is, γ iff γ is not provable. Now if γ were false, then γ would be provable. This is impossible in a sound theory: provable sentences are true. Hence γ is true but unprovable.

Now one may wonder: if we can algorithmically generate the set Thm(T), can we not obtain all the true sentences of arithmetic? Unfortunately, we cannot. No matter how far we push ahead, we will never reach all of them, because there is no algorithm that would decide each and every formula. There will always remain formulas that are simultaneously true and undecidable.

We define the notion of a theory being decidable thus: a theory T is decidable if the set Thm(T) of formulas provable in T is (general) recursive. If a theory is recursively axiomatized and complete, then it is decidable. However, one of the consequences of Gödel's incompleteness theorem is: no recursively axiomatized theory T that contains Q and has the standard model N is decidable: there is no algorithm that would decide of every formula φ whether it is provable in the theory T or not. For if we had such an algorithm, we could use it to extend the theory so that it would be complete, which is impossible if the theory T is consistent (according to Rosser's improvement of Gödel's first theorem). Denoting by Ref(T) the set of all the sentences refutable in the theory T (i.e., the set of all sentences φ such that T |– ¬φ), it is obvious that this set Ref(T) is not recursive either.
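The link between recursive axiomatizability, completeness and decidability has a direct computational reading. The following sketch (in Python; the generator function theorems and the ("not", phi) encoding of negation are illustrative assumptions of mine, not anything defined in the paper) decides a sentence by searching the enumeration of Thm(T) for the sentence or its negation:

    def decide(phi, theorems):
        """Decide phi in a complete, recursively axiomatized theory T.

        `theorems` is a hypothetical generator function enumerating Thm(T),
        e.g. by systematically generating all finite proofs from the
        (computable) set of axioms. Completeness guarantees that either phi
        or its negation eventually appears, so the loop terminates; in an
        incomplete theory it would run forever on an independent sentence.
        """
        for theorem in theorems():
            if theorem == phi:
                return True            # T proves phi
            if theorem == ("not", phi):
                return False           # T refutes phi

The sketch makes the point of (5) vivid: the enumeration of theorems is unproblematic; it is only the promise of termination, i.e., completeness, that Gödel's result takes away.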
We can illustrate the mutual relations between the sets Thm(T), Th(N), and Ref(T) by the following figure:

[Figure: the axioms lie inside Thm(T), Thm(T) inside Th(N), and Ref(T) is disjoint from Th(N); the hatched remainder comprises the sentences independent of T.]

If the theory T is recursively axiomatized and complete, the sets Thm(T) and Th(N) coincide and Ref(T) is their complement. In such a case the set of numbers of sentences independent of T (the hatched set in the figure) is empty. In an incomplete theory this set is non-empty.

Another consequence of the Incompleteness Theorem is the undecidability of the problem of logical truth in FOL. The FOL proof calculus is a theory without special axioms. Though it is a complete calculus (all the logically valid formulas are provable), as an empty theory it is not decidable: there is no algorithm that would decide for each and every formula φ whether it is a theorem or not (equivalently, whether it is a logically valid formula or not). The problem of logical truth is not decidable in FOL. For Q is an adequate theory with a finite number of axioms. If Q1, …, Q7 are its axioms (closed formulas), then a sentence φ is provable in Q iff (Q1 & … & Q7) → φ is provable in the FOL calculus, i.e., iff (Q1 & … & Q7) → φ is a logically valid formula.4 If the calculus were decidable, then so would Q be, which it is not, however.

Alonzo Church proved that there are proof calculi that are semi-decidable: there is an algorithm which, given an input formula φ that is logically valid, outputs the answer Yes. If, however, the input formula φ is not a logical truth, the algorithm may answer No, or it may never output an answer at all.5

Gödel discovered that the sentence γ claiming "I am not provable" is equivalent to the sentence ξ claiming "There is no φ such that both ⟨φ⟩ and ⟨¬φ⟩ are in Thm(T)". The latter is a formal statement that the system is consistent. Since γ is not provable, and γ and ξ are equivalent, ξ is not provable either. Thus we have:

Gödel's Second Theorem of incompleteness: In any consistent, recursively axiomatizable theory T that is strong enough to encode sequences of numbers (and thus the syntactic notions of formula, sentence and proof), the consistency of the theory T is not provable in T.

The second incompleteness theorem shows that there is no hope of proving, e.g., the consistency of first-order arithmetic by finitary means, provided we accept that finitary means are correctly formalized in a theory whose consistency is provable in PA. As Georg Kreisel remarked, it would actually provide no interesting information if a theory T proved its own consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of T in T would give us no clue as to whether T really is consistent.

One of the first to recognize the revolutionary significance of the incompleteness results was John von Neumann, who came close to anticipating Gödel's Second Theorem. Others were slower in absorbing the essence of the problem and accepting its solution.

4 Here we are using the Deduction Theorem: Q1 & … & Qn |– φ iff Q1 & … & Qn−1 |– Qn → φ.
5 Of course, there are subclasses of FOL that are decidable. For details, see Börger et al. (1996).
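The adequacy requirement in the Second Theorem, being strong enough to encode finite sequences of numbers, can be made concrete. One standard device (one of several; Gödel's own proof used a coding based on the Chinese remainder theorem) codes a sequence as a product of prime powers. A minimal, runnable sketch:

    def nth_prime(n):
        """Return the n-th prime (1-indexed) by naive trial division."""
        count, k = 0, 1
        while count < n:
            k += 1
            if all(k % d for d in range(2, int(k ** 0.5) + 1)):
                count += 1
        return k

    def encode(seq):
        """Code (n1, ..., nk) as p1**(n1+1) * ... * pk**(nk+1).

        The +1 in the exponent keeps zeros in the sequence recoverable."""
        code = 1
        for i, n in enumerate(seq):
            code *= nth_prime(i + 1) ** (n + 1)
        return code

    def decode(code):
        """Recover the sequence by reading off the prime exponents."""
        seq, i = [], 1
        while code > 1:
            p, e = nth_prime(i), 0
            while code % p == 0:
                code, e = code // p, e + 1
            seq.append(e - 1)
            i += 1
        return seq

    assert decode(encode([2, 0, 3])) == [2, 0, 3]

Since formulas and proofs are finite sequences of symbols, such a coding turns talk about syntax into talk about numbers, which is what allows a theory of arithmetic to speak about its own provability.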
For example, Hilbert's assistant Paul Bernays had difficulties with the technicalities of the proof, which were cleared up only after extensive correspondence.6 Gödel's breakthrough even drew sharp criticism, which was due to the prevailing conviction that mathematical thinking can be captured by laws of pure symbol manipulation, and due to the inability to make the necessary distinctions involved, such as that between the notions of truth and proof. Thus, for instance, the famous set theorist Ernst Zermelo interpreted the latter in a way that generates a contradiction within Gödel's results.

Since no reasonable axiomatic theory T can prove its own consistency, a theory S capable of proving the consistency of T can be viewed as considerably stronger than T. Of course, being considerably stronger implies being non-equivalent. The Levy Reflection Principle, which is non-trivial but also not so difficult to prove, states that Zermelo-Fraenkel set theory ZF proves the consistency of each of its finitely axiomatized sub-theories. So by Gödel's Second Theorem, full ZF is considerably stronger than any of its finitely axiomatized fragments. This in turn yields a simple proof that ZF is not finitely axiomatizable.

The second-order theories (of real numbers, of complex numbers, and of Euclidean geometry) do have complete axiomatizations. Hence these theories have no sentences that are simultaneously true and unprovable. The reason they escape incompleteness is their inadequacy: they cannot encode and computably deal with finite sequences. The price we pay for second-order completeness is high: the second-order calculus is not (even semi-)decidable. We cannot algorithmically generate all the second-order logical truths; thus not all the logical truths are provable, and so the second-order proof calculus is not semantically complete.

The consequences of Gödel's two theorems are clear and generally accepted. First of all, the formalist identification of truth with provability is destroyed by the First Theorem. Second, the impossibility of an absolute consistency proof (acceptable from the finitary point of view) is even more destructive for Hilbert's program. Gödel's Second Theorem makes the notions of finitary statement and finitary proof highly problematic. If the notion of a finitary proof is identified with a proof formalized in an axiomatic theory T, then either T is a very weak theory, or, if T satisfies quite simple requirements, T is itself suspected of inconsistency. In other words, if the notion of finitary proof is to mean something that is non-trivial and at the same time unquestionable and consistent, there is no such thing.

Though it is almost universally believed that Gödel's results destroyed Hilbert's program, the program was very inspiring for mathematicians, philosophers and logicians. Some thinkers claimed that we should still be formalists.7 Others, like Brouwer, the father of modern constructive mathematics, believed that mathematics is first and foremost an activity: mathematicians do not discover pre-existing things, as a Platonist holds, and they do not manipulate symbols, as a formalist holds. Mathematicians, according to Brouwer, make things.
Some recent intuitionists seem to stand somewhere in between: being ontological realists, they admit that there are abstract entities that we discover in mathematics, but at the same time, being semantic intuitionists, they maintain that these abstract entities 'cannot be claimed to exist' unless they are well defined by a formal proof, as a sequence of judgements.8

The possible impact of Gödel's results on the philosophy of mind, artificial intelligence, and Platonism may be a matter of dispute. Gödel himself suggested that the human mind cannot be a machine and that Platonism is correct. More recently Roger Penrose has argued that "Gödel's results show that the whole programme of artificial intelligence is wrong, that creative mathematicians do not think in a mechanic way, but that they often have a kind of insight into the Platonic realm which exists independently from us".9 Gödel's doubts about

6 The technical device used in the proof is now known as Gödel numbering.
7 See, e.g., Detlefsen (1990).
8 This is a slight rephrasing of a remark made by Peter Fletcher in e-mail correspondence.
9 See Brown (1999, p. 78).
the limits of formalism were certainly influenced by Brouwer, who criticised formalism in a lecture presented at the University of Vienna in 1928. Gödel, however, did not share Brouwer's intuitionism, based as it is on the assumption that mathematical objects are created by our activities. For Gödel, as a Platonic realist, mathematical objects exist independently of us and we discover them. On the other hand, he claimed that our intuition cannot be reduced to Hilbert's concrete intuition of finite symbols; rather, we have to accept abstract entities like well-defined mathematical procedures that have a clear meaning without further explication. His proofs are constructive and therefore acceptable from the intuitionist point of view.

In fact, Gödel's results are based on two fundamental concepts: truth for formal languages and effective computability. Concerning the former, Gödel stated in his Princeton lectures that he was led to the incompleteness of arithmetic via his recognition of the non-definability of arithmetic truth in its own language. In the same lectures he offered the notion of general recursiveness in connection with the idea of effective computability; this was based on a modification of a definition proposed by Herbrand. In the meantime, Church presented his thesis identifying effectively computable functions with λ-definable functions. Gödel was not convinced by Church's thesis, because it was not based on a conceptual analysis of the notion of finite algorithmic procedure. It was only when Turing, in 1937, offered the definition in terms of his machines that Gödel was ready to accept the identification of the various classes of functions: the λ-definable, the general recursive, and the Turing-computable ones.

The pursuit of Hilbert's program thus had an unexpected side effect: it gave rise to realistic research on the theory of algorithms, effective computability and recursive functions. Von Neumann, for instance, along with being a great mathematician and logician, was an early pioneer in the field of modern computing, though this was a difficult task because computing was not yet a respected science. His conception of computer architecture still has not been surpassed.

Gödel's First Theorem has another interpretation in the language of computer science. In first-order logic, the set of theorems is recursively enumerable: one can write a computer program that will eventually generate any valid proof. One can ask whether theoremhood satisfies the stronger property of being recursive: can one write a computer program that definitively determines whether a statement is true or else false? Gödel's First Theorem says that in general one cannot; a computer can never be as smart as a human being, because the extent of its knowledge is limited by a fixed set of axioms, whereas people can discover unexpected truths and enrich their knowledge gradually.

In my opinion, it is fair to say that Gödel's results changed the face of meta-mathematics and influenced all aspects of modern mathematics, artificial intelligence and the philosophy of mind. Moreover, they were a strong impulse for the development of theoretical computer science. Hence it should be clear by now that the Church-Turing Thesis and the related issues are still a hot topic. After all, we still do not have a rigorous definition of the central concept of computer science, viz. algorithm.
2. Effective procedures and the Church-Turing Thesis

In this section I briefly summarize the notion of an algorithm/effective procedure and the attempts to precisely characterize, or even define, this notion. Though there have been many such attempts, we still do not know precisely what an algorithm is; open questions concerning the notion of algorithm remain, for instance:
- Does an algorithm have to terminate, or could it sometimes compute, theoretically, for ever?
- Does an algorithm always have to produce the value of the function being computed, or can it compute properly partial functions with value gaps?
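The second question is not idle: the standard repertoire of effective procedures includes unbounded search (the μ-operator of recursion theory), and a routine applying it is a perfectly determinate procedure even though the function it computes may have value gaps. A minimal sketch (the naming is mine):

    def mu(f):
        """Unbounded minimization: return the least y with f(y) == 0.

        If no such y exists, the loop never terminates: the computed
        function is properly partial, with a value gap at this argument."""
        y = 0
        while f(y) != 0:
            y += 1
        return y

    # Defined here: the least y with y*y >= 10 exists, so mu terminates.
    print(mu(lambda y: 0 if y * y >= 10 else 1))   # -> 4

    # mu(lambda y: 1) would loop forever: a value gap, not an error.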
First I present a brief summary of the attempts to specify criteria for a method M to be effective. Then I summarize the particular theses as presented by Church, Turing, and others. These theses are just theses; they are neither provable nor definitions. Though these notions are well known, I include this section in the interest of making the paper easier to read without consulting additional sources of information. I also wish to share with the reader the same terminology and theoretical background.10

Copeland's characterisation of an effective method M is this (Copeland 2008). A method, or procedure, M, for achieving some desired result is called 'effective' or 'mechanical' just in case
1. M is set out in terms of a finite number of exact instructions (each instruction being expressed by means of a finite number of symbols);
2. M will, if carried out without error, produce the desired result in a finite number of steps;
3. M can (in practice or in principle) be carried out by a human being unaided by any machinery save paper and pencil;
4. M demands no insight or ingenuity on the part of the human being carrying it out.

On the problem of defining algorithm, Gurevich (2003) refers to Kolmogorov's research:

The problem of the absolute definition of algorithm was addressed again in 1953 by Andrei N. Kolmogorov; … Kolmogorov spelled out his intuitive ideas about algorithms. For brevity, we express them in our own words (rather than translate literally).
- An algorithmic process splits into steps whose complexity is bounded in advance, i.e., the bound is independent of the input and the current state of the computation.
- Each step consists in a direct and immediate transformation of the current state.
- This transformation applies only to the active part of the state and does not alter the remainder of the state.
- The size of the active part is bounded in advance.
- The process runs until either the next step is impossible or a signal says a solution has been reached.

In addition to these intuitive ideas, Kolmogorov gave a one-paragraph sketch of a new computation model. The model was introduced in the papers Kolmogorov & Uspensky (1958, 1963), written by Kolmogorov together with his student Vladimir A. Uspensky. The Kolmogorov machine model can be thought of as a generalization of the Turing machine model in which the tape is a directed graph of bounded in-degree and bounded out-degree. The vertices of the graph correspond to Turing's squares; each vertex has a colour chosen from a fixed, finite palette of vertex colours, and one of the vertices is the current computation centre. Each edge has a colour chosen from a fixed, finite palette of edge colours; distinct edges from the same node have different colours. The program has this form: replace the vicinity U of a fixed radius around the central node by a new vicinity W that depends on the isomorphism type of the digraph U with its colours and its distinguished central vertex. Contrary to Turing's tape, whose topology is fixed, Kolmogorov's 'tape' is reconfigurable.

10 Portions of this section draw on material from Copeland (2008) and Copeland & Sylvan (1999).
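Euclid's algorithm is the stock example of a method satisfying both Copeland's four criteria and Kolmogorov's step conditions: finitely many exact instructions, a finite number of steps, executable with paper and pencil, no ingenuity required, and each step a bounded, purely local transformation of the current state. A sketch:

    def gcd(a, b):
        """Euclid's algorithm: an effective method in Copeland's sense.

        Each pass of the loop is one bounded, purely mechanical step
        transforming the current state (a, b); no insight is needed."""
        while b != 0:
            a, b = b, a % b       # replace (a, b) by (b, a mod b)
        return a

    assert gcd(1071, 462) == 21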
Here are the particular theses (slightly reformulated) as presented by Church and Turing. These theses concern numerical functions and criteria for them to be effectively or mechanically computable:

Church: A numerical function is effectively computable by an algorithmic routine if and only if it is general recursive or λ-definable.

Note. The concept of a λ-definable function is due to Church (1932, 1936, 1941) and Kleene (1936); the concept of a recursive function is due to Gödel (1934) and Herbrand (1932). The class of λ-definable functions and the class of recursive functions are identical. This was established in the case of functions of positive integers by Church (1936) and Kleene (1936).

Turing: A numerical function is effectively computable by an algorithmic routine if and only if it is computable by a Turing machine.

After learning of Church's proposal, Turing quickly established that the apparatus of λ-definability and his own apparatus of computability are equivalent (1936: 263ff). Thus, in Church's proposal, the words 'recursive function of positive integers' can be replaced by the words 'function of positive integers computable by a Turing machine'.

Post (1936, p. 105) referred to Church's identification of effective calculability with recursiveness as a 'working hypothesis', and quite properly criticized Church for masking this hypothesis as a definition. This criticism then yielded a new 'working hypothesis' that Church proposed:

Church's Thesis: A function of positive integers is effectively calculable only if it is recursive.

The reverse implication, that every recursive function of positive integers is effectively calculable, is commonly referred to as the converse of Church's thesis (although Church himself did not so distinguish them, bundling both theses together in his 'definition'). If attention is restricted to functions of positive integers, then Church's Thesis and Turing's Thesis are equivalent, in view of the results by Church, Kleene and Turing mentioned above. The term 'Church-Turing thesis' seems to have been first introduced by Kleene:

So Turing's and Church's theses are equivalent. We shall usually refer to them both as Church's thesis, or in connection with that one of its … versions which deals with 'Turing machines' as the Church-Turing Thesis. (1967, p. 232.)

Since the sets of λ-definable functions and general recursive functions are provably identical, we can formulate the Church-Turing Thesis like this:

Church-Turing Thesis: A function of positive integers is effectively calculable if and only if it is general recursive or λ-definable or computable by a Turing machine.

Hence the concepts of general recursive functions, λ-definable functions and Turing-computable functions coincide in this sense. These three very distinct concepts are equivalent because they share the same extension, viz. the set of functions-in-extension that are known to be effectively computable. As Kleene (1952) rightly points out, the equivalences between Turing-computable functions, general recursive functions and λ-definable functions provide strong evidence for the Church-Turing thesis, because:

1) Every effectively calculable function that has been investigated in this respect has turned out to be computable by Turing machine.
2) All known methods or operations for obtaining new effectively calculable functions from given effectively calculable functions are paralleled by methods for constructing new Turing machines from existing Turing machines.
3) All attempts to give an exact analysis of the intuitive notion of an effectively calculable function have turned out to be equivalent, in the sense that each analysis offered has been proved to pick out the same class of functions, namely those that are computable by a Turing machine.
4) Because of the diversity of the various analyses, (3) is generally considered to provide particularly strong evidence.
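λ-definability can be made tangible even in a modern language whose lambda descends from Church's notation. In the sketch below (an illustration, not Church's original calculus; note also that Python evaluates by value rather than by normal-order reduction), numbers are represented as Church numerals, i.e., as the higher-order function applying f exactly n times, and addition and multiplication are λ-defined:

    # Church numerals: n is encoded as lambda f: lambda x: f(f(...f(x)...)), n times.
    zero  = lambda f: lambda x: x
    succ  = lambda n: lambda f: lambda x: f(n(f)(x))
    add   = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mult  = lambda m: lambda n: lambda f: m(n(f))

    def to_int(n):
        """Decode a Church numeral by counting applications of f."""
        return n(lambda k: k + 1)(0)

    two   = succ(succ(zero))
    three = succ(two)
    assert to_int(add(two)(three))  == 5
    assert to_int(mult(two)(three)) == 6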
Next I briefly summarize the known characterizations of Turing-complete systems. Wikipedia has this to say:11

"In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if it can be used to simulate any single-taped Turing machine. A classic example is the lambda calculus. The concept is named after Alan Turing."

Computability theory includes the closely related concept of Turing equivalence; another term for a Turing-equivalent computing system is 'effectively computing system'. Two computers P and Q are called Turing equivalent if P can simulate Q and Q can simulate P. Thus a Turing-complete system is one that can simulate a Turing machine; any real-world computer can be simulated by a Turing machine. In colloquial usage, the terms 'Turing complete' and 'Turing equivalent' are used to mean that any real-world, general-purpose computer or computer language can approximately simulate any other real-world, general-purpose computer or computer language, within the bounds of finite memory. A universal computer is defined as a device with a Turing-complete instruction set, infinite memory, and an infinite lifespan; all general-purpose programming languages and modern machine instruction sets are Turing-complete, apart from having finite memory.

In practice, Turing completeness means that rules followed in sequence on arbitrary data can produce the result of any calculation. In imperative languages, this can be satisfied by having, minimally, conditional branching (e.g., 'if' and 'goto' statements) and the ability to change arbitrary memory locations (e.g., having variables). To show that something is Turing complete, it is enough to show that it can be used to simulate the most primitive computer, since even the simplest computer can be used to simulate the most complicated one.

Apart from λ-definability and recursiveness, there are other Turing-complete systems as presented by logicians and computer scientists, for instance:
- Gödel's notion of computability (Gödel 1936, Kleene 1952);
- register machines (Shepherdson and Sturgis 1963);
- Post's canonical and normal systems (Post 1943, 1946);
- combinatory definability (Schönfinkel 1924, Curry 1929, 1930, 1932);
- Markov (normal) algorithms (Markov 1960);
- the pointer machine model of Kolmogorov and Uspensky (1958, 1963).

11 See http://en.wikipedia.org/wiki/Turing_completeness; retrieved on July 20, 2012.
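That 'simulating any single-taped Turing machine' is itself a purely mechanical matter of table lookup is easy to see from a simulator. The sketch below runs an arbitrary single-tape machine given as a transition table; the example machine (a unary successor) and the encoding conventions are mine, chosen for illustration:

    def run_tm(table, tape, state="q0", halt="halt"):
        """Simulate a single-tape Turing machine.

        `table` maps (state, symbol) to (new_state, new_symbol, move),
        with move in {-1, 0, +1}; the tape is stored as a dict from
        positions to symbols, blank = '_'. The loop may run forever,
        exactly like the machine it simulates."""
        pos, cells = 0, dict(enumerate(tape))
        while state != halt:
            symbol = cells.get(pos, "_")
            state, cells[pos], move = table[(state, symbol)]
            pos += move
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # Example: successor in unary notation (append one more '1').
    table = {
        ("q0", "1"): ("q0", "1", +1),    # skip over the input 1s
        ("q0", "_"): ("halt", "1", 0),   # write a final 1 and halt
    }
    assert run_tm(table, "111") == "1111"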
An interesting thesis known as 'Thesis M' is due to Gandy (1980):

Whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine computable.

There are two possible interpretations of Gandy's thesis, namely a narrow-sense and a wide-sense formulation:12
a) narrow sense: 'by a machine' in the sense 'by a machine that conforms to the physical laws of the actual world'. Thesis M is then an empirical proposition, which means that it cannot be analytically proved.
b) wide sense: abstracting from the issue of whether or not the machine in question could exist in the actual world. Thesis M is then false: 'super-Turing machines' have been described that calculate functions that are not Turing-machine computable.13

This completes our summary of the notions that we are now going to analyse using TIL.

3. Foundations of Transparent Intensional Logic

The syntax of TIL is Church's (higher-order) typed λ-calculus, but with the all-important difference that the syntax has been assigned a procedural (as opposed to denotational) semantics, according to which a linguistic sense is an abstract procedure detailing how to arrive at an object of a particular logical type. TIL constructions are such procedures. A main feature of the λ-calculus is its ability to systematically distinguish between functions and functional values. An additional feature of TIL is its ability to systematically distinguish between functions and modes of presentation of functions, and between functional values and modes of presentation of functional values.14

The TIL operation known as Closure is the very procedure of presenting or forming or obtaining or constructing a function; the TIL operation known as Composition is the very procedure of constructing the value (if any) of a function at an argument. Compositions and Closures are both multiple-step procedures, or constructions, that operate on input provided by two one-step constructions, which figure as sub-procedures (constituents) of Compositions and Closures, namely variables and so-called Trivializations. Characters such as 'x', 'y', 'z' are words denoting variables, which construct the respective values that an assignment function has accorded to them. The linguistic counterpart of a Trivialization is a constant term always picking out the same object.

An analogy from programming languages might be helpful. The Trivialization of an object X, whatever X may be, and its use are comparable to a pointer to X and the dereference of the pointer. In order to operate on X, X needs to be grabbed first. Trivialization is such a one-step grabbing mechanism. Similarly, in order to talk about Beijing (in non-demonstrative and non-indexical English discourse), we need to name Beijing, most simply by using the constant 'Beijing'.

Furthermore, TIL constructions represent our interpretation of Frege's notion of Sinn (with the exception that constructions are not truth-bearers; instead, some constructions present either truth-values or truth-conditions) and are kindred to Church's notion of concept.

12 For details, see Copeland (2000).
13 It is straightforward to describe such machines, or 'hypercomputers' (Copeland and Proudfoot (1999)), that generate functions that fail to be Turing-machine computable (see, e.g., Abramson (1971), Copeland (2000), Copeland and Proudfoot (2000), Stewart (1991)).
14 Portions of this section draw on material from Duží & Jespersen (in submission) and Duží et al. (2010).
Constructions are linguistic senses as well as modes of presentation of objects; they are our hyperintensions. While the Frege-Church connection makes it obvious that constructions are not formulae, it is crucial to emphasize that constructions are not functions(-in-extension), either. Rather, technically speaking, some constructions are modes of presentation of functions, including 0-place functions such as individuals and truth-values, and the rest are modes of presentation of other constructions. Thus, with constructions of constructions, constructions of functions, functions, and functional values in our stratified ontology, we need to keep track of the traffic between multiple logical strata. The ramified type hierarchy does just that. What is important about this traffic, in this paper, is first of all that constructions may themselves figure as functional arguments or values. Certain constructions, qua objects of predication, figure as functional arguments of other functions. Moreover, since constructions can be arguments of functions, we consequently need constructions of one order higher to grab these argument constructions.

The sense of an empirical sentence is an algorithmically structured construction of the proposition denoted by the sentence. The denoted proposition is a flat, or unstructured, mapping with domain in a logical space of possible worlds. Our motive for working 'top-down' has to do with anti-contextualism: any given unambiguous term or expression (even one involving indexicals or anaphoric pronouns) expresses the same construction as its sense, whatever sort of context the term or expression is embedded within. And the sense/meaning of an expression determines the respective denoted entity (if any) constructed by its sense, but not vice versa. The denoted entities are (possibly 0-ary) functions understood as set-theoretical mappings.

The context-invariant semantics of TIL is obtained by universalizing Frege's reference-shifting semantics custom-made for 'indirect' contexts.15 The upshot is that it becomes trivially true that all contexts are transparent, in the sense that pairs of terms that are co-denoting outside an indirect context remain co-denoting inside an indirect context, and vice versa. In particular, definite descriptions that only contingently describe the same individual never qualify as co-denoting. Rather, they are just contingently co-referring in a given possible world and at a given time of evaluation. Our term for the extra-semantic, factual relation of contingently describing the same entity is 'reference', whereas 'denotation' stands for the intra-semantic, pre-factual relation between two words that pick out the same entity at the same world/time pairs.

Our neo-Fregean semantic schema, which applies to all contexts, is this triangulation: an Expression expresses a Construction; the Construction constructs the Denotation (if any); and the Expression denotes that Denotation.

The most important relation in this schema is the one between an expression and its meaning, i.e., a construction. Once constructions have been defined, we can logically examine them; we can investigate a priori what (if anything) a construction constructs and what is entailed by it. Thus meanings (i.e., constructions) are semantically primary, denotations secondary, because an expression denotes an object (if any) via its meaning, that is, via the construction expressed by the expression. Once a construction is explicitly given as a result of logical analysis, the entity (if any) it constructs is already implicitly given.

15 See Frege (1892a).
As a limiting case, the logical analysis may reveal that the construction fails to construct anything, by being improper. In order to put our framework on more solid ground, we now present the pertinent definitions. First we set out the definitions of first-order types (regimented by a simple type
theory), constructions, and higher-order types (regimented by a ramified type hierarchy), which taken together form the nucleus of TIL, accompanied by some auxiliary definitions. The type of first-order objects includes all objects that are not constructions. Therefore, it includes not only the standard objects such as individuals, truth-values, sets, etc., but also functions defined on possible worlds (i.e., the intensions germane to possible-world semantics). Sets, for their part, are always characteristic functions and insofar extensional entities. But the domain of a set may be typed over higher-order objects, in which case the relevant set is itself a higher-order object. Similarly for other functions, including relations, with domain or range in constructions. That is, whenever constructions are involved, we find ourselves in the ramified type hierarchy. The definition of the ramified hierarchy of types decomposes into three parts: firstly, simple types of order 1; secondly, constructions of order n; thirdly, types of order n + 1.

Definition 1 (types of order 1). Let B be a base, where a base is a collection of pair-wise disjoint, non-empty sets. Then:
(i) Every member of B is an elementary type of order 1 over B.
(ii) Let α, β1, ..., βm (m > 0) be types of order 1 over B. Then the collection (α β1 ... βm) of all m-ary partial mappings from β1 × ... × βm into α is a functional type of order 1 over B.
(iii) Nothing is a type of order 1 over B unless it so follows from (i) and (ii).

Definition 2 (construction)
(i) The Variable x is a construction that constructs an object X of the respective type dependently on a valuation v; we say that x v-constructs X.
(ii) Trivialization: Where X is an object whatsoever (an extension, an intension or a construction), 0X is the construction Trivialization. It constructs X without any change.
(iii) The Composition [X Y1…Ym] is the following construction. If X v-constructs a function f of a type (α β1…βm), and Y1, …, Ym v-construct entities B1, …, Bm of types β1, …, βm, respectively, then the Composition [X Y1…Ym] v-constructs the value (an entity, if any, of type α) of f on the tuple-argument ⟨B1, …, Bm⟩. Otherwise the Composition [X Y1…Ym] does not v-construct anything, and so it is v-improper.
(iv) The Closure [λx1…xm Y] is the following construction. Let x1, x2, …, xm be pair-wise distinct variables v-constructing entities of types β1, …, βm, and let Y be a construction v-constructing an α-entity. Then [λx1 … xm Y] is the construction λ-Closure (or Closure). It v-constructs the following function f of type (α β1…βm). Let v(B1/x1, …, Bm/xm) be a valuation identical with v at least up to assigning objects B1/β1, …, Bm/βm to the variables x1, …, xm. If Y is v(B1/x1, …, Bm/xm)-improper (see (iii)), then f is undefined on the argument ⟨B1, …, Bm⟩. Otherwise the value of f on ⟨B1, …, Bm⟩ is the α-entity v(B1/x1, …, Bm/xm)-constructed by Y.
(v) The Single Execution 1X is the construction that either v-constructs the entity v-constructed by X or, if X v-constructs nothing, is v-improper (yielding nothing relative to the valuation v).
(vi) The Double Execution 2X is the following construction. Where X is any entity, the Double Execution 2X is v-improper (yielding nothing relative to v) if X is not itself a construction, or if X does not v-construct a construction, or if X v-constructs a v-improper construction. Otherwise, let X v-construct a construction Y and let Y v-construct an entity Z: then 2X v-constructs Z.
(vii) Nothing is a construction, unless it so follows from (i) through (vi).
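Definition 2 invites a toy operational reading. The sketch below is my illustrative Python encoding, not part of TIL itself: it models Variables, Trivializations, Compositions and Closures as objects executed relative to a valuation, with v-improperness modelled as an exception, and it ignores the ramified hierarchy as well as Single and Double Execution:

    class Improper(Exception):
        """Signals that a construction is v-improper: it constructs nothing."""

    class Variable:
        def __init__(self, name): self.name = name
        def execute(self, v): return v[self.name]   # constructs what v assigns

    class Trivialization:
        def __init__(self, obj): self.obj = obj
        def execute(self, v): return self.obj       # grabs X 'without any change'

    class Composition:
        def __init__(self, f, *args): self.f, self.args = f, args
        def execute(self, v):
            fn = self.f.execute(v)                  # must construct a function
            vals = [a.execute(v) for a in self.args]
            result = fn(*vals)                      # partiality modelled by None
            if result is None:
                raise Improper("function has no value at this argument")
            return result

    class Closure:
        def __init__(self, params, body): self.params, self.body = params, body
        def execute(self, v):
            def fn(*vals):                          # the constructed function
                v2 = {**v, **dict(zip(self.params, vals))}
                try:
                    return self.body.execute(v2)
                except Improper:
                    return None                     # undefined at this argument
            return fn

    # Halving as a Closure; dividing by zero would yield a v-improper body.
    div  = Trivialization(lambda x, y: x // y if y != 0 else None)
    half = Closure(["x"], Composition(div, Variable("x"), Trivialization(2)))
    print(Composition(half, Trivialization(10)).execute({}))   # -> 5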
Definition 3 (ramified hierarchy of types)
T1 (types of order 1). See Definition 1.
Cn (constructions of order n)
i) Let x be a variable ranging over a type of order n. Then x is a construction of order n over B.
ii) Let X be a member of a type of order n. Then 0X, 1X, 2X are constructions of order n over B.
iii) Let X, X1, ..., Xm (m > 0) be constructions of order n over B. Then [X X1 ... Xm] is a construction of order n over B.
iv) Let x1, ..., xm, X (m > 0) be constructions of order n over B. Then [λx1 ... xm X] is a construction of order n over B.
v) Nothing is a construction of order n over B unless it so follows from Cn (i)-(iv).
Tn+1 (types of order n + 1). Let ∗n be the collection of all constructions of order n over B. Then
i) ∗n and every type of order n are types of order n + 1.
ii) If m > 0 and α, β1, ..., βm are types of order n + 1 over B, then (α β1 ... βm) (see T1 (ii)) is a type of order n + 1 over B.
iii) Nothing is a type of order n + 1 over B unless it so follows from Tn+1 (i) and (ii).

Remark. For the purposes of natural-language analysis, we are currently assuming the following base of ground types, which is part of the ontological commitments of TIL:
ο: the set of truth-values {T, F};
ι: the set of individuals (the universe of discourse);
τ: the set of real numbers (doubling as discrete times);
ω: the set of logically possible worlds (the logical space).

Empirical languages incorporate an element of contingency, because they denote empirical conditions that may or may not be satisfied at some world/time pair of evaluation. Non-empirical languages (in particular the language of mathematics) have no need for an additional category of expressions for empirical conditions. We model these empirical conditions as possible-world intensions. They are entities of type (αω): mappings from possible worlds to an arbitrary type α. The type α is frequently the type of the chronology of α-objects, i.e., a mapping of type (ατ). Thus α-intensions are frequently functions of type ((ατ)ω), abbreviated as 'ατω'. Extensional entities are entities of a type α where α ≠ (βω) for any type β. Examples of frequently used intensions are: propositions of type οτω, properties of individuals of type (οι)τω, binary relations-in-intension between individuals of type (οιι)τω, individual offices/roles of type ιτω.

Our explicit intensionalization and temporalization enables us to encode constructions of possible-world intensions, by means of terms for possible-world variables and times, directly in the logical syntax. Where the variable w ranges over possible worlds (type ω) and t over times (type τ), the following logical form essentially characterizes the logical syntax of any empirical language: λwλt […w…t…]. Where α is the type of the object v-constructed by […w…t…], by abstracting over the values of the variables w and t we construct a function from worlds to a partial function from times to α, that is, a function of type ((ατ)ω), or 'ατω' for short.

Logical objects like truth-functions and quantifiers are extensional: ∧ (conjunction), ∨ (disjunction) and ⊃ (implication) of type (οοο), and ¬ (negation) of type (οο).
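The λwλt pattern has a straightforward functional-programming analogue: an α-intension is a curried function from worlds to a (partial) function from times to α-values, and the intensional descent Cwt is just application to a world and a time. A small sketch, with made-up stand-ins for worlds, times and empirical facts:

    # An ο-intension (a proposition): world -> (time -> truth-value or None).
    def a_proposition(world):                 # type ((o t) w), schematically
        facts = {"w1": {2000: True, 2020: False}}   # invented empirical facts
        return lambda time: facts.get(world, {}).get(time)   # None = value gap

    # Intensional descent C_wt: extensionalize by applying to <w, t>.
    print(a_proposition("w1")(2020))   # -> False
    print(a_proposition("w2")(2020))   # -> None: a truth-value gap at <w2, 2020>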
arbitrary type α, defined as follows. The universal quantifier ∀α is a function that associates a class A of α-elements with T if A contains all elements of the type α, otherwise with F. The existential quantifier ∃α is a function that associates a class A of α-elements with T if A is a non-empty class, otherwise with F. Another logical object we need is the partial polymorphic function Singularizer Iα of type (α(οα)). A singularizer is a function that associates a singleton S with the only member of S, and is otherwise (i.e. if S is an empty set or a multi-element set) undefined.

Below, all type indications will be provided outside the formulae in order not to clutter the notation. Furthermore, ‘X/α’ means that an object X is (a member) of type α. ‘X →v α’ means that the type of the object v-constructed by X is α. This holds throughout: w →v ω and t →v τ. If C →v ατω then the frequently used Composition [[C w] t], which is the intensional descent (a.k.a. extensionalization) of the ατω-intension v-constructed by C, will be encoded as ‘Cwt’. When using constructions of truth-functions, we often omit Trivialisation and use infix notation to conform to standard notation in the interest of better readability. Also when using constructions of identities of α-entities, =/(οαα), we omit Trivialization and the type subscript, and use infix notation when no confusion can arise. For instance, instead of

‘[⁰⊃ [⁰=ι a b] [⁰=οτω λwλt [Pwt a] λwλt [Pwt b]]]’,

where =ι/(οιι) is the identity of individuals and =οτω/(ο οτω οτω) the identity of propositions, a, b constructing objects of type ι and P objects of type (οι)τω, we write

‘[[a = b] ⊃ [λwλt [Pwt a] = λwλt [Pwt b]]]’.

We invariably furnish expressions with procedurally structured meanings, which are explicated as TIL constructions. The analysis of an unambiguous sentence thus consists in discovering the logical construction encoded by the given sentence. The TIL method of analysis consists in three steps:
a) Type-theoretical analysis, i.e., assigning types to the objects that receive mention in the analysed sentence.
b) Type-theoretical synthesis, i.e., combining the constructions of the objects ad (a) in order to construct the proposition of type οτω denoted by the whole sentence.
c) Type-theoretical checking, i.e., checking whether the proposed analysans is type-theoretically coherent.

To illustrate the method, let us analyse the sentence
(1) “The Church-Turing thesis is believed to be valid.”
Ad (a). As always, first a type analysis:
Church-Turing/(οι);
Thesis_of/((ο*n)(οι))τω: an empirical function that assigns to a set of individuals (in this case the couple Church, Turing) a set of hyperpropositions that together form a thesis the individuals share;
[⁰Thesis_ofwt ⁰Church-Turing] →v (ο*n): a set of hyperpropositions;
(to be) Believed/(ο*n)τω: a property of a hyperproposition;
Valid/(οοτω)τω: a property of a proposition (namely, being true at a given ⟨w, t⟩-pair).
Ad (b), (c). For the sake of simplicity, we now perform steps (b) and (c) of the method simultaneously. We must combine constructions of the objects ad (a) in order to construct the
proposition denoted by the sentence. Since we aim at a literal analysis of the sentence, we use Trivializations of these objects.16 Here is how.
i) [⁰Thesis_ofwt ⁰Church-Turing] →v (ο*n);
ii) [[[⁰Thesis_ofwt ⁰Church-Turing] c] ⊃ [⁰Validwt ²c]] →v ο; c →v *n, ²c →v οτω;
iii) λc [[[⁰Thesis_ofwt ⁰Church-Turing] c] ⊃ [⁰Validwt ²c]] →v (ο*n);
iv) [⁰∀* λc [[[⁰Thesis_ofwt ⁰Church-Turing] c] ⊃ [⁰Validwt ²c]]] →v ο, ∀*/(ο(ο*n));
v) λwλt [⁰∀* λc [[[⁰Thesis_ofwt ⁰Church-Turing] c] ⊃ [⁰Validwt ²c]]] →v οτω;
vi) [⁰Believedwt ⁰[λwλt [⁰∀* λc [[[⁰Thesis_ofwt ⁰Church-Turing] c] ⊃ [⁰Validwt ²c]]]]] →v ο;
(1*) λwλt [⁰Believedwt ⁰[λwλt [⁰∀* λc [[[⁰Thesis_ofwt ⁰Church-Turing] c] ⊃ [⁰Validwt ²c]]]]] →v οτω.

Comments. We analysed the expression ‘The Church-Turing thesis’ as an expression that denotes a set of hyperpropositions, though the thesis as formulated in Section 1 is just one hyperproposition. Yet this thesis could easily be reformulated as a set of three hyperpropositions; thus this analysis is the more general one. The Composition (ii) is glossed like this: if the hyperproposition c belongs to the set of hyperpropositions that make up the Church-Turing thesis, then the proposition v-constructed by ²c is valid; dependently on the valuation of c, the Composition (ii) v-constructs a truth-value. In other words, a hyperproposition belonging to the Church-Turing thesis constructs a proposition that takes the value T in the given ⟨w, t⟩-pair of evaluation. The Closure (iii) constructs the set of such hyperpropositions c. The Composition (iv) is glossed like this: for all hyperpropositions c belonging to the Church-Turing thesis it holds that the proposition v-constructed by ²c is valid in the given ⟨w, t⟩-pair of evaluation. The Closure (v) constructs the proposition with the truth-conditions given by (iv). Finally, the Composition (vi) v-constructs the truth-value T according as the Trivialisation of the proposition constructed by (v) is believed (to be true at a given ⟨w, t⟩-pair of evaluation). We construe Believed/(ο*n)τω as a property of a hyperproposition. This leaves room for the fact that if the thesis were formulated in another (albeit equivalent) way, it would not have to be generally believed.

Thus (1*) is the construction expressed by sentence (1) as its meaning. Note that our analysis leaves it open whether (1*) constructs an analytically true proposition (that is, a proposition true in all ⟨w, t⟩-pairs) or an empirical proposition (that is, a proposition true in some but not all ⟨w, t⟩-pairs). This completes our exposition of the foundations of TIL. Now we have all the technical machinery that we will need in Section 4, in which I am going to introduce the procedural theory of concepts formulated by Materna (1998, 2004) within TIL.

4. Procedural Theory of Concepts
The problems connected with the Church-Turing Thesis are surely of a conceptual character. A reasonable explication of the Thesis, as well as of the other notions connected with algorithm, effective procedure and suchlike, should be based on a fine-grained theory of concepts. The procedural theory of concepts presented below is one such fine-grained theory. Since the procedural theory of concepts did not come out of the blue, we first summarize the historical background underlying the origin of the theory. I begin with Bolzano. His
16 For the definition of literal analysis, see Duží et al. (2010, §1.5, Def. 1.10).
Briefly, the literal analysis of an expression E is such an admissible analysis of E in which the objects that receive mention by semantically simple meaningful subexpressions of E are constructed by their Trivialisations.
Wissenschaftslehre offers a systematic realist theory of concepts. In Bolzano concepts are construed as objective entities endowed with structure. But his ingenious work was not well-known at the time when modern logic was founded by Frege and Russell. Thus the first theory of concepts that was recognized as being compatible with modern, entirely anti-psychologistic logic was Frege’s. Frege’s theory, as presented in (1891), (1892b), construes concepts as total, monadic functions whose arguments are objects (Gegenstände) and whose values are truth-values. At first sight this definition seems plausible. Yet there are, inter alia, two crucial questions:
a) What are the content and the extension of a concept?
b) What is the sense of a concept word?
It is far from clear what answer Frege could propose to question (b). After all, no genuine definition of sense can be found in Frege’s work.17 As for question (a), it is obviously a Wertverlauf that can be called an extension. So it seems that it is the sense of the concept word that can be construed as the content of a concept. This squares well with Frege’s criticism of the “Inhaltslogiker” in (1972, pp. 31-32).

However, Frege oscillated between two different notions of a function: ‘function-in-extension’, i.e. function as a mapping (Wertverlauf), and what Church would later call ‘function-in-intension’. The latter notion was not well-defined by Church, yet obviously it can be understood as Frege’s mode of presentation of a particular function-in-extension. Thus function-in-intension would be a good candidate for the explication of Frege’s sense. In his (1956) Church tries to adhere to Frege’s principles of semantics, but he soon realizes that Frege’s explication of the notion of concept is untenable. Concepts should be located at the level of Fregean sense; in fact, as Church maintains, the sense of an expression E should be a concept of what E denotes. Consequently, concepts should be associated not only with predicates (as was the case with Frege), but also with definite descriptions, and in general with any kind of semantically self-contained expression, since all (meaningful) expressions are associated with a sense. Even sentences express concepts; in the case of empirical sentences they are concepts of propositions (‘proposition’ as understood by Church, as a concept of a truth-value, and not as understood in this article, as a function from possible worlds to (functions from times to) truth-values).18

The degree to which ‘intensional’ entities, and so concepts, should be fine-grained was of the utmost importance to Church.19 When summarizing Church’s heralded Alternatives for constraining intensional entities, Anderson (1998, p. 162) canvasses three options considered by Church. Senses are identical if the respective expressions are
(A0) ‘synonymously isomorphic’,
(A1) mutually λ-convertible (that is, α- and β-convertible),
(A2) logically equivalent.
(A2), the weakest criterion, was refuted already by Carnap in his (1947), and would not be acceptable to Church, anyway. (A1) is surely more fine-grained. Alternative (0) arose from Church’s criticism of Carnap’s notion of intensional isomorphism and is discussed in Anderson (1980). Carnap proposed intensional isomorphism as a criterion of the identity of belief. Roughly, two expressions are intensionally isomorphic if they are composed from expressions denoting the same intensions in the same way.
Church, in (1954), constructs an example of expressions that are intensionally isomorphic according to Carnap’s definition (i.e., expressions that share the same structure and whose parts are necessarily equivalent), but which fail to satisfy the principle of substitutability.20 17 As for a detailed analysis of the problems with sense in Frege, see Tichý (1988), in particular Chapters 2 and 3. 18 For the critical analysis of Frege’s conception of concepts, see Duží & Materna (2010). 19 Now we are using Church’s terminology; in TIL concepts are hyperintensional entities. 20 See also Materna (2007).
The problem Church tackles is made possible by Carnap’s principle of tolerance (which is itself plausible). We are free to introduce into a language syntactically simple expressions which denote the same intension in different ways and thus fail to be synonymous. Yet they are intensionally isomorphic according to Carnap’s definition. Church used as an example of such expressions two predicates P and Q, defined as follows:

P(n) = n ≥ 3, Q(n) = ¬∃x∃y∃z (xⁿ + yⁿ = zⁿ),

where x, y, z, n are positive integers. P and Q are necessarily equivalent, because for all n it holds that P(n) if and only if Q(n). For this reason P and Q are intensionally isomorphic, and so are the expressions “∀n (Q(n) ⊃ P(n))” and “∀n (P(n) ⊃ P(n))”. Still, one can easily believe that ∀n (P(n) ⊃ P(n)) without believing that ∀n (Q(n) ⊃ P(n)).21

Church’s Alternative (1) characterizes synonymous expressions as those that are λ-convertible.22 But Church’s λ-convertibility also includes β-conversion, which goes too far due to partiality; β-reduction is not guaranteed to be an equivalent transformation as soon as partial functions are involved. Church also considered Alternative (1′), which adds η-conversion. Thus (1′) without unrestricted β-conversion is the closest alternative to our definition of synonymy based on the notion of procedural isomorphism that we are going to introduce below.

Summarising Church’s conception, we have: A concept is a way to the denotation rather than a special kind of denotation. Thus concepts should be situated at the level of sense. There are not only general concepts but also singular concepts, concepts of propositions, etc. More concepts can identify one and the same object.

Now what would we, as realists, say about the connection between sense and concept? Accepting, as we do, Church’s version as an intuitive one, we claim that senses are concepts. Can we, however, claim the converse, namely that concepts are senses? A full identification of senses with concepts would presuppose that every concept were the meaning of some expression. But then we could hardly explain the phenomenon of the historical evolution of language, first and foremost the fact that new expressions are introduced into a language and other expressions vanish from it. On this view, with the advent of a new ⟨expression, meaning⟩ pair a new concept would have come into being. Yet this is unacceptable for a realist: concepts, qua logical entities, are abstract entities and, therefore, cannot come into being or vanish. Therefore, concepts outnumber expressions; some concepts are yet to be discovered and encoded in a particular language while others sink into oblivion and disappear from language, which is not to say that they would be going out of existence. For instance, before the invention of computers and the introduction of the noun ‘computer’ into our language(s), the procedure that von Neumann made explicit was already around. The fact that in the 19th century we did not use (electronic) computers, and did not have a term for them in our language, does not mean that the concept (qua procedure) did not exist. In the dispute over whether concepts are discovered or invented, the realist comes down on the side of discovery. Hence, in order to assign a concept to an expression as its sense, we first have to define and examine concepts independently of a language, which we are going to do in the next paragraphs.
Needless to say, our starting point is Church’s rather than Frege’s conception of concepts, because:
- concepts are structured entities, where their structure is (in principle) derivable from the grammatical structure of the given (regimented) expression, and
- concepts can be executed to produce an object (if any).
21 Criticism of Carnap’s intensional isomorphism can also be found in Tichý (1988, pp. 8-9), where Tichý points out that the notion of intensional isomorphism is too dependent on the particular choice of notation.
22 See Church (1993, p. 143).
Fregean concepts (1891, 1892b) are interpretable as set-theoretical entities, which does not meet the above desiderata. Sets are flat, non-structured entities that cannot be executed to produce anything. It should be clear by now that TIL constructions are strong candidates for ‘concepthood’. However, there are two problems that we must address. Firstly, only closed constructions can be concepts, because open constructions do not construct anything in and by themselves; they only v-construct something relative to a valuation v. Secondly, from the conceptual or procedural point of view, constructions are too fine-grained. Thus we must address the problem of the identity of procedures.

As for the first problem, this concerns in particular expressions that contain indexicals, i.e. expressions whose meanings are pragmatically incomplete.23 As examples, consider ‘my books’ and ‘his father’. TIL’s anti-contextualist thesis of transparency, viz. that expressions are furnished with constructions as their context-invariant meanings, is valid universally, that is, also for expressions with indexicals. Their meaning is an open construction, that is, a construction containing free variables that are assigned to indexical pronouns as their meanings. In our case the meanings of ‘my books’ and ‘his father’ are

λwλt [⁰Book_ofwt me] →v (οι)τω, λwλt [⁰Father_ofwt him] →v ιτω.

Types. Book_of/((οι)ι)τω: an attribute that dependently on a ⟨w, t⟩-pair assigns to an individual the set of individuals (his/her books); Father_of/(ιι)τω; me, him →v ι.

Just as ‘my books’ and ‘his father’ do not denote any particular object, so these constructions do not construct individual properties or roles; rather, they only v-construct. If in a given situation of utterance the value of ‘me’ or ‘him’ is supplied (for instance, by pointing at a particular individual, say, Marie or Tom), we obtain a complete meaning pragmatically associated with λwλt [⁰Book_ofwt me] and λwλt [⁰Father_ofwt him], say, λwλt [⁰Book_ofwt ⁰Marie], λwλt [⁰Father_ofwt ⁰Tom]. Yet the meanings of ‘books of me’ and ‘father of him’ are open constructions that cannot be executed in order to construct an individual property or role. These expressions do not express concepts. Thus we have a preliminary definition: concepts are closed constructions that are procedurally indistinguishable.

Now we have to address the second problem, viz. the problem of the individuation of procedures. This is a special case of a broader problem, namely how hyperintensions are individuated. Hyperintensionality is in essence a matter of the individuation of non-extensional (‘intensional’) entities. Any individuation is hyperintensional if it is finer than necessary co-extensionality, such that equivalence does not entail identity. Hyperintensional granularity was originally negatively defined, leaving room for various positive definitions of its granularity. It is well-established among mathematical linguists and philosophical logicians that hyperintensional individuation is required at least for attitudinal sentences with attitude relations that are not logically closed (especially in order to block logical and mathematical omniscience) and for linguistic senses (in order to differentiate between, say, “a is north of b” and “b is south of a”, whose truth-conditions converge).24
23 For details on pragmatically incomplete meanings, see Duží et al. (2010, §3.4).
24 The theme of hyperintensionality will be explored in a special issue of Synthese to be guest-edited by Bjørn Jespersen and Marie Duží.
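As an aside, the open/closed distinction has a direct computational analogue, sketched below under assumed, illustrative names: the meaning of ‘his father’ behaves like a program with a free parameter, which yields an executable office only once a valuation supplies the indexical’s value.

```haskell
import qualified Data.Map as M

type World = Int
type Time  = Int
type Office = World -> Time -> Maybe String   -- an individual office, type ιτω

-- An assumed empirical attribute, here a toy table (hypothetical data).
fatherOf :: World -> Time -> String -> Maybe String
fatherOf _ _ "Tom" = Just "Bob"
fatherOf _ _ _     = Nothing

-- Open construction: it merely v-constructs, relative to a valuation
-- of the free variable 'him'; without one, it cannot be executed.
hisFather :: M.Map String String -> Maybe Office
hisFather v = do
  him <- M.lookup "him" v                    -- the pragmatically supplied value
  return (\w t -> fatherOf w t him)          -- now closed: an executable office
```

Supplying the value in a situation of utterance, e.g. `hisFather (M.fromList [("him", "Tom")])`, corresponds to passing from the open construction λwλt [⁰Father_ofwt him] to the complete meaning λwλt [⁰Father_ofwt ⁰Tom].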
Our working hypothesis is that hyperintensional individuation is procedural individuation, and that the relevant procedures are isomorphic modulo α-, η- or restricted β-convertibility. Any two terms or expressions whose respective meanings are procedurally isomorphic are semantically indistinguishable, hence synonymous. Procedural isomorphism is a nod to Carnap’s intensional isomorphism and Church’s synonymous isomorphism. Church’s Alternatives (0) and (1) leave room for additional Alternatives.25 One such would be Alternative (½), another Alternative (¾). The former includes α- and η-conversion while the latter adds a form of restricted β-conversion. If we must choose, we would prefer Alternative (¾), to soak up those differences between λ-transformations that concern only λ-bound variables and thus (at least appear to) lack natural-language counterparts.

There are three reasons for excluding unrestricted β-conversion. First, as mentioned above, unrestricted β-conversion is not an equivalent transformation in logics boasting partial functions, such as TIL. The second reason is that occasionally even β-equivalent constructions have different natural-language counterparts; witness the difference between attitude reports de dicto vs. de re. Thus the difference between “a believes that b is happy” and “b is believed by a to be happy” is just the difference between β-equivalent meanings. Where attitudes are construed as relations to intensions (rather than hyperintensions), the attitude de dicto receives the analysis

λwλt [⁰Believewt ⁰a λwλt [⁰Happywt ⁰b]]

while the attitude de re receives the analysis

λwλt [λx [⁰Believewt ⁰a λwλt [⁰Happywt x]] ⁰b]

Types: Happy/(οι)τω; x →v ι; a, b/ι; Believe/(οι οτω)τω.

The de dicto variant is the β-equivalent contractum of the de re variant. The variants are equivalent because they construct one and the same proposition, the two sentences denoting the same truth-condition. Yet they denote this proposition in different ways, hence they are not synonymous. The equivalent β-reduction leads here to a loss of analytic information, namely loss of information about which of the two ways, or constructions, has been used to construct this proposition.26 In this particular case the loss seems to be harmless, though, because there is only one, hence unambiguous, way to β-expand the de dicto version into its equivalent de re variant.27 However, unrestricted equivalent β-reduction sometimes yields a loss of analytic information that cannot be restored by β-expansion.28

The restricted version of equivalent β-conversion we have in mind consists in the collision-less substitution of free variables for λ-bound variables of the same type, and will be called βr-conversion. This restricted βr-reduction is just a formal manipulation with λ-bound variables that has much in common with α-conversion and less with β-reduction proper. The latter is the operation of applying a function f/(αβ) to its argument value a/β in order to obtain the value
25 Recall that (A0) is α-conversion plus synonymies resting on meaning postulates; (A1) is α- and β-conversion; (A1′) is α-, β- and η-conversion; (A2) is logical equivalence. See Church (1993). Anderson (1998) adds (A1*) as a generalization of (A0), in which identity is the only permissible permutation. (A1*) is an automorphism defined on a set of λ-terms.
26 For the notion of analytic information, see Duží (2010) and Duží et al. (2010, §5.4).
27 In general, de dicto and de re attitudes are not equivalent, but logically independent.
Consider “a believes that the Pope is not the Pope” and “a believes of the Pope that he is not the Pope”. The former, de dicto, variant ascribes a deeply irrational belief and is most likely not a true attribution, while the latter, de re, attribution is perfectly reasonable and most likely the right one to make. In TIL the de dicto variant is not an equivalent β-contractum of the de re variant, due to the partiality of the office Pope/ιτω.
28 For details, see Duží & Jespersen (in submission).
of f at a (leaving it open whether a value emerges). It is the fundamental computational rule of functional programming languages. Thus if f is constructed by the Closure C,

C = λx [… x …],

then β-reduction is here the operation of calling the procedure C with the formal parameter x at the actual parameter a: [λx [… x …] ⁰a]. The Trivialisation of the value a is substituted for x and the ‘body’ of the procedure C is computed, which means that the Composition [… ⁰a …] is evaluated. No such features can be found in βr-reduction. If a variable y →v β is not free in C, then the βr-contractum of [λx [… x …] y] is [… y …]. The evaluation of the Composition [… y …] does not yield a value of f; as a result we just obtain a formal simplification of [λx [… x …] y]. Thus we define:

Definition 4 (procedurally isomorphic constructions: Alternative (¾))
Let C, D be constructions. Then C, D are α-equivalent iff they differ at most by deploying different λ-bound variables. C, D are η-equivalent iff one arises from the other by η-reduction or η-expansion. C, D are βr-equivalent iff one arises from the other by βr-reduction or βr-expansion. C, D are procedurally isomorphic, denoted ‘C ≈ D’, ≈/(ο*n*n), iff there are closed constructions C1,…,Cm, m ≥ 1, such that ⁰C = ⁰C1, ⁰D = ⁰Cm, and all Ci, Ci+1 (1 ≤ i < m) are either α-, η- or βr-equivalent.

Example. ⁰Prime ≈η λx [⁰Prime x] ≈α λy [⁰Prime y] ≈α λz [⁰Prime z] ≈βr λz [λy [⁰Prime y] z] ≈ … Types: Prime/(ον); x, y, z →v ν; ν the type of natural numbers.

Procedural isomorphism is an equivalence relation on the set S of closed constructions of a particular order and thus partitions S into equivalence classes. Hence in any partition cell we can privilege a representative element. In Horák (2002) the method of choosing a representative is defined. Briefly, this method picks out the alphabetically first construction that is not η- or βr-reducible. The respective representative is then called a construction in its normal form. The constructions in the above example belong to one and the same partition class. The representative of this class is ⁰Prime (that is, a primitive concept of the set of prime numbers).

Definition 5 (concept). A concept is a closed construction in its normal form.

Corollaries. Concepts are equivalent iff they construct one and the same entity. Concepts are identical iff they are procedurally isomorphic.

Example. Equivalent but different concepts of prime numbers:
a) ⁰Prime (simple, primitive)
b) λx [[⁰> x ⁰1] ∧ ∀y [[⁰Divide y x] ⊃ [[y = ⁰1] ∨ [y = x]]]]: natural numbers greater than 1 and divisible just by 1 and themselves
c) λx [[⁰Card λy [⁰Divide y x]] = ⁰2]: natural numbers possessing just two factors

Types. Let ν be the type of natural numbers; Divide/(ονν): the relation of divisibility (y divides x); Card/(ν(ον)): the function that assigns to a finite set of naturals the number of elements of this set; 1, 2/ν; x, y →v ν.
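The example can be mimicked computationally. Below is a small sketch, not from the paper, of the concepts b) and c) as two procedurally different prime tests that produce one and the same set, i.e., the same characteristic function-in-extension; the simple concept a) has no molecular counterpart, being primitive.

```haskell
-- y ranges over the candidate divisors of x, as in [⁰Divide y x].
divisorsOf :: Integer -> [Integer]
divisorsOf x = [y | y <- [1 .. x], x `mod` y == 0]

-- b) greater than 1 and divisible just by 1 and itself
primeB :: Integer -> Bool
primeB x = x > 1 && all (\y -> y == 1 || y == x) (divisorsOf x)

-- c) possessing exactly two factors
primeC :: Integer -> Bool
primeC x = length (divisorsOf x) == 2
```

The two procedures differ step by step, yet `and [primeB n == primeC n | n <- [0 .. 1000]]` evaluates to `True`: equivalent concepts, one product.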
Next we need to define the distinction between empirical and analytical concepts.

Definition 6 (empirical vs. analytical concept).
a) A concept C is empirical iff C constructs a non-constant intension (that is, an intension I such that I has different values in at least two ⟨w, t⟩-pairs).
b) A concept C is analytical iff C constructs a constant intension (one that has one and the same value in all ⟨w, t⟩-pairs or no value in any ⟨w, t⟩-pair), or C constructs an extension (typically a mathematical object).

Examples. The above concepts of primes are analytical: they construct a mathematical entity, the set of primes, i.e., an extension. The concept λwλt [[⁰All ⁰Bachelorwt] ⁰Manwt] expressed by “All bachelors are men” is analytical;29 it constructs the constant proposition TRUE that takes the value T in every ⟨w, t⟩-pair. Types. All/((ο(οι))(οι)): a restricted quantifier that assigns to a given set of individuals the set of all its supersets; Bachelor, Man/(οι)τω.

The term ‘female bachelor’ is also analytical; its denotation is the constant property of individuals that takes as its value the empty set of individuals in all ⟨w, t⟩-pairs. The concept expressed by this term is [⁰Femalem ⁰Bachelor]. Additional type: Femalem/((οι)τω(οι)τω): a property modifier.30 As a concept of a property modifier, ⁰Femalem is an analytical concept; however, if ⁰Femalep → (οι)τω is a concept of a property, then it is an empirical concept. The concepts ⁰Bachelor, ⁰Man are empirical. The concepts expressed by ordinary sentences of a natural language, like “Prague is the capital of the Czech Republic” or “Alan Turing was an ingenious man”, are empirical; they are concepts of non-constant propositions.

This completes our exposition of the procedural theory of concepts. In the next Section we are going to apply this theory in order to throw some more light on the Church-Turing thesis.

5. The Church-Turing thesis from the conceptual point of view
First, let us summarize the dramatis personae onstage. They are these different concepts:
1. the concept of an effective procedure (or algorithm): EP
2. the concept of a Turing machine: TM
3. the concept of general recursion: GR
4. the concept of λ-definability: λD
First we investigate TM, GR and λD. These concepts construct kinds (classes) of procedures (functions-in-intension). Hence TM, λD, GR/*n+1 → (ο*n).
29 The term ‘bachelor’ is homonymous: it means either an unmarried man or the lowest university degree, B.A. Here we take into account only the former.
30 For an analysis of property modifiers, see Duží et al. (2010, §4.4). The latest TIL research into modifiers is found in Jespersen and Primiero (forthcoming) and Primiero and Jespersen (2010).
Moreover, it holds for each of these concepts that every procedure belonging to their product produces a computable function-in-extension. These functions-in-extension are of a type (αβ), where α, β are the type ν of positive integers, or β = (νν), or β = (ννν), and so on. Simply, these functions are numerical functions on positive integers. Formally, the following constructions construct the truth-value T:

∀c [[TM c] ⊃ [⁰Computable ²c]]
∀c [[GR c] ⊃ [⁰Computable ²c]]
∀c [[λD c] ⊃ [⁰Computable ²c]]

Additional types. c/*n; ²c →v (νν); Computable/(ο(νν)).

The variable c ranges over constructions/procedures producing numerical functions. If such a procedure belongs to the set of procedures identified by the concept TM or GR or λD, then its product is a computable numerical function. For this reason we must use the Double Execution in the consequent in order to construct the respective numerical function of type (νν) of which we wish to predicate that it is computable. These significantly different concepts TM, λD and GR construct substantially different classes of procedures:

TM ≠ λD ≠ GR

Yet it has been proved that these concepts are equivalent in the following way. A procedure belonging to any of the classes constructed by TM or λD or GR produces a function-in-extension belonging to one and the same class CF/(ο(νν)) of computable functions-in-extension. Thus we define:

Definition 7 (equivalence on the set of concepts of classes of procedures). Let ≈/(ο*n+1*n+1) be the relation of equivalence on the set of concepts producing classes of procedures. Let C1, C2/*n+1 → (ο*n). Then31

⁰C1 ≈ ⁰C2 if and only if the classes of functions-in-extension constructed by the elements of C1, C2, respectively, are identical:

λf ∃c1 [[C1 c1] ∧ [²c1 =1 f]] =2 λg ∃c2 [[C2 c2] ∧ [²c2 =1 g]]

Types: f, g →v (νν); c1, c2 →v *n; ²c1, ²c2 →v (νν); =1/(ο(νν)(νν)): the identity of functions-in-extension; =2/(ο(ο(νν))(ο(νν))): the identity of classes of functions-in-extension.

Hence it has been proved that ⁰TM ≈ ⁰λD ≈ ⁰GR. It means that for the class CF of computable functions-in-extension it holds that

CF =2 λf ∃t [[TM t] ∧ [²t =1 f]] =2 λg ∃l [[λD l] ∧ [²l =1 g]] =2 λh ∃r [[GR r] ∧ [²r =1 h]]

Types: f, g, h →v (νν); t, l, r →v *n; ²t, ²l, ²r →v (νν); =1/(ο(νν)(νν)): the identity of functions; =2/(ο(ο(νν))(ο(νν))): the identity of classes of functions-in-extension; CF/(ο(νν)).
31 In the interest of better readability, we use infix notation now.
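The gist of Definition 7 can be illustrated outside TIL as well. In the sketch below (illustrative, not from the paper), two syntactically and procedurally different definitions stand in for elements of two classes of procedures; they are distinct as procedures, yet their products are one and the same function-in-extension, which is all that =1 and =2 compare.

```haskell
-- A 'general recursive'-style procedure for the factorial.
factRec :: Integer -> Integer
factRec 0 = 1
factRec n = n * factRec (n - 1)

-- A procedurally different, iterative-style definition.
factIter :: Integer -> Integer
factIter n = product [1 .. n]

-- Distinct procedures, identical product (function-in-extension):
-- all (\n -> factRec n == factIter n) [0 .. 20]  ==>  True
```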
Note that we typed the concepts TM, λD and GR as analytical concepts. Each of them constructs a class of procedures, an object of type (ο*n). Are we entitled to do so? Couldn’t any of them be empirical? I don’t think so. The concepts GR and λD are obviously analytical concepts: their definitions do not contain any empirical constituent; they are purely mathematical. Could TM perhaps be an empirical concept? Then there is the question of what in the definition of a Turing machine might be of an empirical character. If one consults the Stanford Encyclopaedia of Philosophy,32 it is easy to see that in the definition of a Turing machine there is no trace of anything empirical that ‘might be otherwise’, that is, no trace of a concept that would define a non-constant function with the domain of possible worlds. There are a number of variations of the Turing-machine definition that turn out to be mutually equivalent in the following sense. Formulation F1 and formulation F2 are equivalent if for every machine described in F1 there is a machine described in F2 which has the same input-output behaviour, and vice versa; i.e., when started on the same tape at the same cell, they will terminate with the same tape on the same cell. In other words, all possible concepts TMi of the Turing machine are equivalent according to Definition 7: ⁰TM1 ≈ … ≈ ⁰TMn. The alternative definitions include, inter alia, the definition of a machine with a two-way infinite tape, machines with an arbitrary number of read-write heads, machines with multiple tapes, bi-dimensional tapes, machines where arbitrary movement of the head is allowed, an arbitrary finite alphabet, etc. Even the definition of the non-deterministic Turing machine, which is apparently a more radical reformulation of the notion of Turing machine, does not alter the definition of Turing computability. Importantly, none of these alternative definitions contains any empirical concept that would construct an intension, and the defined concepts are equivalent (Definition 7) by constructing classes of procedures that produce elements of one and the same set CF of functions-in-extension. This might suffice as evidence that the concepts falling under the umbrella TM are analytical as well.

Formally, we can prove it like this. Suppose that some of the concepts TMi, λD, GR are empirical. Let a concept C be empirical. Then C constructs a property of procedures rather than a class of procedures: C → (ο*n)τω. In order that C be (contingently) equivalent to the other concepts, for instance to λD, the following must hold:

λwλt [λf ∃c [[Cwt c] ∧ [²c = f]] =2 λg ∃l [[λD l] ∧ [²l = g]]]

Additional types: c →v *n; ²c →v (νν).

Since C is empirical, the property of procedures it constructs is a non-constant intension, and so is the proposition constructed by this Closure. But a non-constant proposition is not analytically provable. Hence, there is no empirical concept C among our concepts.33 In summary, GR, λD, TM are all analytical concepts.

Now there is a crucial problem concerning the class EP, which can be formulated like this. Recall that CF is the class of computable functions-in-extension of naturals that TM, λD and GR have in common. Then the Church-Turing thesis can be formulated like this: only the elements of CF are computable by an effective procedure EP.
32 See Barker-Plummer, David, ‘Turing machines’, The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Edward N.
Zalta (ed.), forthcoming, URL = <http://plato.stanford.edu/archives/fall2012/entries/turing-machine/>.
33 I am grateful to Pavel Materna for an outline of the idea of this proof.
And vice versa: only the elements of EP compute the elements of CF. Formally,

∀c [[[EP c] ⊃ [⁰CF ²c]] ∧ [[⁰CF ²c] ⊃ [EP c]]]

Types: c →v *n; ²c →v (νν); EP/*n+1 → (ο*n); CF/(ο(νν)).

The second conjunct is unproblematic, for sure: if a function is computable, then it is computable by an effective procedure. However, the first conjunct gives rise to a question: could there emerge a new concept c belonging to EP such that c computes a function that does not belong to CF? If the answer is in the affirmative, then the Church-Turing thesis is not true.

Again, let us consider two variants of a definition of the concept EP: either (a) EP is an analytical concept, or (b) it is defined as an empirical one. Let us first consider variant (a), that is, an analytical concept EP. There are three alternatives: the Church-Turing Thesis is
1) a definition,
2) an explication,
3) possibly provable after a refinement of the concept EP.

Ad 1): As mentioned above, Church (1936, p. 356) speaks about defining the notion … of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers (or with a lambda-definable function of positive integers). Post rightly criticizes this formulation (1936, p. 105): “To mask this identification under a definition…blinds us to the need of its continual verification.” Indeed, a definition cannot be verified. One can only test whether the concept so defined is adequate, so that a new definition (i.e. a new concept) is not needed.

Ad 2): If TM, GR and λD were (Carnapian) explications of EP, then we would end up with at least three concepts which differ in a very significant way and yet explicate one and the same concept EP, which seems implausible as well. An explication should make the meaning of an inexact concept (the explicandum) clear. It is a purely stipulative, normative definition, and thus it cannot be true or false, just more or less suitable for its purpose. And it is hardly thinkable that one and the same thing (the concept EP) would be explicated in three substantially different ways unless we ended up with three different concepts EP1, EP2, EP3.

Ad 3): In this case we encounter the problem of a proper calibration of EP. The basic idea, or rather hypothesis, is this: if we refine the concept EP so that we obtain a fine-grained definition of EP that strictly delimits the class of procedures involved, then the Church-Turing thesis becomes provable. First we have to define the refinement of a construction (here, of a concept).34 To this end we need two other notions, namely those of a simple concept and of an ontological definition: Let X be an object that is not a construction. Then ⁰X is a simple concept.
34 For details, see Duží (2010) and Duží et al. (2010, §5.4.4, Definition 5.5).
The ontological definition of an object X is a compound (= molecular rather than simple) concept of X.

Definition 8 (refinement of a construction). Let C1, C2, C3 be constructions. Let ⁰X be a simple concept of X, and let ⁰X occur as a constituent of C1. If C2 differs from C1 only by containing, in lieu of ⁰X, an ontological definition of X, then C2 is a refinement of C1. If C3 is a refinement of C2 and C2 is a refinement of C1, then C3 is a refinement of C1. □

In order to formulate corollaries of this definition, let us denote the analytical content of a construction C, that is, the set of constituents of C, by ‘AC(C)’, and let |AC(C)| be the number of constituents of C. Then:

Corollaries. If C2 is a refinement of C1, then
1) C1, C2 are equivalent by constructing one and the same entity, but they are not procedurally isomorphic;
2) AC(C1) is not a subset of AC(C2);
3) |AC(C2)| > |AC(C1)|.

For instance, a refinement of the simple concept ⁰Prime is the molecular concept λx [[⁰Card λy [⁰Divide y x]] = ⁰2], or, using prefix notation, λx [⁰= [⁰Card λy [⁰Divide y x]] ⁰2].
- The two concepts are equivalent by constructing one and the same set, viz. the set of primes, but these concepts are not procedurally isomorphic.
- AC(⁰Prime) = {⁰Prime};
- AC(λx [⁰= [⁰Card λy [⁰Divide y x]] ⁰2]) = {λx [⁰= [⁰Card λy [⁰Divide y x]] ⁰2], [⁰= [⁰Card λy [⁰Divide y x]] ⁰2], ⁰=, [⁰Card λy [⁰Divide y x]], ⁰2, ⁰Card, λy [⁰Divide y x], [⁰Divide y x], ⁰Divide, y, x}.
- Hence AC(⁰Prime) ⊈ AC(λx [⁰= [⁰Card λy [⁰Divide y x]] ⁰2]);
- |AC(⁰Prime)| = 1, whereas |AC(λx [[⁰Card λy [⁰Divide y x]] = ⁰2])| = 11.

There can be more than one refinement of a concept C. For instance, the Trivialization ⁰Prime is in fact the least informative procedure producing the set of primes. Using particular definitions of the set of primes, we can refine ⁰Prime in many ways, including:

λx [[⁰Card λy [⁰Divide y x]] = ⁰2],
λx [[x ≠ ⁰1] ∧ ∀y [[⁰Divide y x] ⊃ [[y = ⁰1] ∨ [y = x]]]],
λx [[x > ⁰1] ∧ ¬∃y [[y > ⁰1] ∧ [y < x] ∧ [⁰Divide y x]]].

By refining the meaning CS of a sentence S we uncover a more fine-grained construction CS′ such that CS and CS′ are equivalent, yet not procedurally isomorphic, and such that the latter is more analytically informative than the former.35 But theoretically, we could keep refining one and the same construction ad infinitum, possibly criss-crossing between various conceptual systems.
35 The notion of analytic information has been defined in Duží (2010). Briefly, the analytic information conveyed by the meaning of an expression E is the set of constituents of the meaning of E. Comparison of the amount of analytic information conveyed by expressions is based on the definition of a refinement of their meanings.
For instance, we could still refine the definitions of the set of primes above by refining the Trivialization ⁰Divide:

⁰Divide = λyλx [∃z [x = [⁰Mult y z]]].

Types: x, y, z → ν; Mult/(ννν): the function of multiplication defined over the domain of natural numbers ν.

Substituting the Closure for the Trivialization yields a more informative refinement (we denote the relation of being less analytically informative by ‘<an’):

⁰Prime <an λx [[⁰Card λy [⁰Divide y x]] = ⁰2] <an λx [[⁰Card λy [∃z [x = [⁰Mult y z]]]] = ⁰2] <an …

The uppermost level of refinement depends on the conceptual system in use. Thus we must define the notion of a conceptual system. In general, conceptual systems are a tool by means of which to characterise and categorize the expressive force of a vernacular and to compare the expressive power of two or more vernaculars.36 In this paper I need the notion of a conceptual system to fix the limit up to which we can refine, in a non-circular manner, the ontological definitions of the objects within the domain of a given language. A conceptual system is a set of concepts, some of which must be simple. Simple concepts are defined as Trivializations of non-constructional entities of types of order 1. A system’s compound concepts are exclusively derived from its simple concepts. Each conceptual system is unambiguously individuated in terms of its set of simple concepts. Thus we define:

Definition 9 (conceptual system). Let a finite set Pr of simple concepts C1,…,Ck be given. Let Type be an infinite set of types induced by a finite base (e.g., {ο, ι, τ, ω} or {ο, ν}). Let Var be an infinite set of variables, countably infinitely many for each member of Type. Finally, let C be an inductive definition of constructions. In virtue of Pr, Type, Var and C, an infinite class Der is defined as the transitive closure of all the closed compound constructions derivable from Pr and Var using the rules of C, such that:
i) every member of Der is a compound concept;
ii) if C ∈ Der, then every subconstruction of C that is a simple concept is a member of Pr.
The set of concepts Pr ∪ Der is a conceptual system derived from Pr. The members of Pr are the primitive concepts, and the members of Der the derived concepts, of the given conceptual system.

Remark. As is seen, Pr unambiguously determines Der. The expressive power of a given (stage of a) language L is then determined by the set Pr of the conceptual system underlying the language L. Every conceptual system delimits a domain of objects that can be conceptualized by the resources of the system. There is the correlation that the greater the expressive power, the greater the domain of objects that can be talked about in L. Yet Pr can be extended into Pr′ in such a way that Pr′ is no longer logically independent (the way the axioms of an axiomatic system may be mutually independent). Independence means here that Der does not contain a concept C equivalent to a concept C′ of Pr, unless C′ is a subconstruction of C. An example of a, minuscule, independent system would be Pr = {⁰Succ, ⁰0}, where Succ/(νν), 0/ν. Due to transitive closure, there is a derived concept of the function +/(ννν), defined as follows (f →v (ννν)):

[⁰I λf ∀x [[[f x ⁰0] = x] ∧ ∀y [[f x [⁰Succ y]] = [⁰Succ [f x y]]]]]

36 The theory of conceptual systems was first introduced in Materna (1998, Chs. 6-7) and further elaborated on in Materna (2004).
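The derived concept of + is, in effect, the primitive-recursive definition of addition, singled out as the unique function f satisfying the two displayed equations. A minimal sketch over Peano numerals, using assumed names, makes this concrete:

```haskell
-- Numerals built from the two primitives of Pr = {Succ, 0}.
data Nat = Zero | Succ Nat deriving Show

-- The unique f with  f x 0 = x  and  f x (Succ y) = Succ (f x y).
add :: Nat -> Nat -> Nat
add x Zero     = x                  -- [f x 0] = x
add x (Succ y) = Succ (add x y)     -- [f x [Succ y]] = [Succ [f x y]]
```

The compound concept of the sum 0 + 0 then corresponds to `add Zero Zero`, which evaluates to `Zero`, mirroring the equivalence with the primitive concept ⁰0 noted below.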
  • 28. 28 [0 If x [[[f x 0 0] = x]  y [[f x [0 Succ y]] = [0 Succ [f x y]]]]]. This concept is not equivalent to any primitive concept of the system. However, among the derived concepts of this system there is, for instance, the compound concept of the sum 0+0, [0 If x [[[f x 0 0] = x]  y [[f x [0 Succ y]] = [0 Succ [f x y]]]] 0 0 0 0], which is equivalent to 0 0. Yet the system is independent, because the primitive concept 0 0 is a subconstruction of the above compound concept. An example of a, likewise minuscule, dependent system would be Pr1 = {0 , 0 , 0  }. In this system either 0  or 0  is superfluous because, e.g., disjunction can be defined by the compound concept pq [0  [0  [0 p][0 q]]], which is equivalent to 0 . The simple concept 0  is not a subconstruction of the compound concept pq [0  [0  [0 p][0 q]]]. To obtain independent systems, omit either 0  or 0 . This will yield either Pr2 = {0 , 0 } or Pr3 = {0 , 0  }. Thus, the set of primitive concepts of an independent system contains no superfluous concepts and is insofar minimal. Pr1 was an example of a system containing a superfluous element. However, it should be possible to take an independent system and add one or more concepts to it and still keep the system independent. When such interesting extensions are made, the expressive power of the new system increases. To show how this works, first we define proper extension of a system S as individuated by Pr. A proper extension of S is simply defined as a system S’ individuated by Pr’ such that Pr is a proper subset of Pr’. An interesting extension is one that preserves the independency of the initial system. The definition of conceptual system does not require that the system’s Pr contain concepts of logical or mathematical operations. However, any conceptual system intended to underpin a language possessing even a minimal amount of expressive power of any interest must contain such concepts. Otherwise there will be no means to combine the non-logical concepts of the system, whether that system be mathematical, empirical or a mix of both. Let ‘LM-part of S’ denote the portion of logical/mathematical concepts of S, and ‘E-part of S’ denote the portion of empirical concepts of S. Proper extensions of S come in two variants, essential and non-essential. A proper non- essential extension S’ of S is defined as follows: the LM-part of S  the LM-part of S’ and the E-part of S = the E-part of S’. A proper essential extension S’ of S is defined as follows: the LM-part of S = the LM-part of S’ and the E-part of S  the E-part of S’. It may happen that both the LM-part and the E-part of the system are extended. Then we simply talk of an extension of S. Here is an example. Let S be assigned to a language L as its conceptual system. Let PrL = {0 Parent, 0 Male, 0 Female, 0 , 0 , 0 , 0 =}. An element of DerL is the concept of the relation- in-intension Brotherhood; to wit, wt [xy z [[[0 Parentwt z x]  [ 0 Parentwt z y]]  [0 Malewt x]]]]. Types: Male, Female/(); Parent/ (); the types of the logical objects are obvious. In general, when the speakers of L find that the object defined by a compound concept is frequently needed, they are free to introduce, via a linguistic convention, a new expression co- denoting this object. Whenever this happens, a verbal definition sees the light of day. 
For instance, the speakers may decide to introduce the relational predicate ‘is a brother of’ to co-denote the relation-in-intension defined by some compound concept encompassing various logical concepts and empirical concepts such as Parent and Male, as done above.

Back to our problems concerning the effective procedure/algorithm (EP). Before adducing possible refinements of the concept EP, let us try to answer the question:
What do the concepts belonging to TM, GR, and λD have in common? They comply with finitism. Mendelson (1990, p. 225) says about computable functions:

… we do not mean actual human computability or empirically feasible computability. … When we talk about computability, we ignore any limitations of space, time, or resources.

This does not violate the tenets of finitism; what is unlimited is not actually infinite, of course. The difference is similar to that between the application of the (unrestricted) universal quantifier ∀ (‘for all’) and λ-abstraction (‘for any’). For instance, Fermat’s Last Theorem, “No three positive integers a, b, and c can satisfy the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than two”, expresses the construction37

[⁰¬ ∃n [[n > ⁰2] ∧ ∃(a b c) [aⁿ + bⁿ = cⁿ]]]

or, equivalently,

[∀(a b c n) [[n > ⁰2] ⊃ [aⁿ + bⁿ ≠ cⁿ]]]

This construction/procedure is not effectively executable/computable, because it involves and presupposes the existence of actual infinity, viz. the set of positive integers. The execution of this construction would amount to, inter alia, the execution of these constituents:
- construct the (characteristic function of the) set of 4-tuples ⟨a, b, c, n⟩: λ(a b c n) [[n > ⁰2] ⊃ [aⁿ + bⁿ ≠ cⁿ]];
- check whether this set is the set of all such 4-tuples.
The first constituent is glossed “for any (λ) positive integers a, b, c and n, check whether the Composition [[n > ⁰2] ⊃ [aⁿ + bⁿ ≠ cⁿ]] v-constructs T”. This constituent is easily executable and complies with finitism; only potential infinity is involved, rather than actual infinity. No such luck with the second constituent, which involves actual infinity, viz. the set of all such 4-tuples.

Now we are going to try to refine the concept of algorithm/effective procedure (EP) in such a way that the Church-Turing thesis might become provable (though then there is the question whether the Church-Turing thesis would not degenerate into triviality). First, however, we must put the notion of procedure on more solid ground. Using TIL vernacular, a procedure P is a sequence of constituents of P each of which (including P itself) must be executed in order to produce the product of P (if any). Note that a procedure is not a mere collection of constituent instructions, that is, a set. As mentioned above, a set cannot be executed. However, the phrase in parentheses, ‘including P itself’, expresses an important constraint that raises P above the set-theoretical, extensional level up to the hyperintensional one, that is, the procedural level of abstraction. Now a possible refinement of the concept of an effective procedure EP yields this refined definition:

Let a concept C belong to EP. Then
1) C is a finite sequence of constituents each of which (including C itself) must be executed to produce the product of C (if any);
2) the execution of none of the constituents of C involves actual infinity;
3) the execution of none of the constituents of C calls for an additional input argument;
4) in order to produce the product of C, neither an infinitely small nor an infinitely large execution time is necessary.
37 Now we use ordinary mathematical notation to make the constructions easier to read.
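Constraint (2) is exactly what separates the two constituents of the Fermat construction above. A small sketch, not from the paper: testing any single 4-tuple is effective, and an unbounded search realizes only potential infinity, since it can halt on a counterexample but can never survey the actually infinite set of all 4-tuples so as to confirm the universal claim.

```haskell
-- Effective on each tuple: a finite computation complying with finitism.
violates :: Integer -> Integer -> Integer -> Integer -> Bool
violates a b c n = n > 2 && a ^ n + b ^ n == c ^ n

-- Potential infinity only: the bound s grows without end, so the search
-- would halt on a counterexample, but 'null fermatCounterexamples' (the
-- universal check) would never return.
fermatCounterexamples :: [(Integer, Integer, Integer, Integer)]
fermatCounterexamples =
  [ (a, b, c, n)
  | s <- [6 ..]
  , a <- [1 .. s], b <- [1 .. s], c <- [1 .. s], n <- [3 .. s]
  , violates a b c n ]
```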
Our hypothesis is that the so-defined EP is analytical and provably equivalent to TM, λD and GR. But isn’t the Thesis then just trivial? I do not think so, because by lifting some of the constraints (for instance, if we allow an infinitely large execution time) we obtain a new class of procedures, and we may ask again whether those procedures are equivalent to the procedures defined by TM, λD and GR.

If we introduce a proper essential extension of our conceptual system in use, then we enter the zone of empirical concepts, i.e. variant (b). In this case the Church-Turing thesis is not true because, as we have seen above, no empirical concept can be logically equivalent to the analytical concepts TM, λD, GR. We would end up with an empirical procedure on our hands, viz. the one constructed by

λwλt [EPwt = GR = λD = TM],

which is not analytically provable. However, this empirical conception leaves room for discovering other concepts of classes of procedures computing, in this or that way, numerical functions that are not computable in the classical sense, that is, that do not belong to the class CF. One such empirical variant is provided by the concept of machine-computable functions. But this would be too radical an extension, because there is a substantial difference between the analytical concept EP and the concept of machine-computability in the wide sense;38 the latter involves infinitely small times and thus does not meet constraint (4) of the refined definition above.39 Bertrand Russell, Ralph Blake and Hermann Weyl independently described one extreme form of temporal patterning. It seems that this temporal patterning was first described by Russell, in a lecture given in Boston in 1914. In a discussion of Zeno’s paradox of the race-course Russell said: “If half the course takes half a minute, and the next quarter takes a quarter of a minute, and so on, the whole course will take a minute” (Russell 1915, pp. 172-3).40 Hence the analytical EP and machine-computability in the wide sense are different, non-equivalent concepts. Recall that machine-computability in the narrow sense is an empirical concept and thus non-equivalent to the analytical variant of EP as well.

Remark. In Gödel (Collected Works II, p. 306) we find Gödel’s philosophical criticism of Turing:

What Turing disregards completely is the fact that mind, in its use, is not static, but constantly developing…

This remark of Gödel’s is, in general, remarkable, but presumably there is a misconception. Turing actually had in mind EP as an analytical concept, while Gödel intended to draw our attention to perspectives similar to the notion of machine-computability in the wide sense. Besides machine-computability in the narrow/wide sense we have another interesting notion of computability, namely the notion of O-machines. Turing (1939, pp. 172ff) defines O-machines as follows:
38 See Section 2.
39 Note that ancient paradoxes like Zeno’s paradoxes of motion are paradoxical due to the same trick; they are based on the assumption of an infinitely small instant of time.
40 These passages draw on material from Copeland (1998).
… an O-machine is an ordinary Turing machine augmented by an ‘oracle’. The oracle is a primitive operation – a black box – that returns the value of an incomputable function on integers.

O-machines can compute more functions than ordinary Turing machines can, depending on the restrictions placed on the oracle. This is due to the fact that O-machines do not meet constraint (3) of the refined definition. Moreover, if no restrictions are placed on the oracle, then this generalization and broadening of the concept of computability makes the concept trivial: any function of integers is computable relative to the capabilities of some oracle. As a result, the concept of an O-machine is an empirical one.

Tichý (1969) distinguishes between two kinds of procedures:
a) autonomous (analytic): their product depends on the outcome of the foregoing steps only, irrespective of the state of the external world, and
b) empirical: the product does depend on the state of the world.
An empirical system contains a finite set of individuals and an ‘intensional basis of elementary tests’. These elementary tests and their results are then numerical surrogates of the elements of the set W of possible worlds. The Turing machine works with an oracle that supplies the computation with information about the state of the external world in terms of W, whenever needed. Using current IT terminology, we might say that Tichý’s empirical system corresponds to an information system with a database that is gradually updated. The oracle is simulated by data collection and corresponds to a database update. However, each computation involving a given database state is effective, because it is executed over a finite database state that is a snapshot of a fragment of the actual world.41 This explains how such an empirical information system can function in practice, computing and producing its products (see the sketch at the end of this section).

6. Summary and concluding remarks
We considered four ways of construing the notion of computability:
1) EP – the analytical concept of an effective procedure, algorithm;
2) TM – Turing machine, GR – general recursivity, λD – lambda definability;
3) MN – machine-computable in the narrow sense (for instance, with laws of physics imposing limitations on the machine), MW – machine-computable in the wide sense (for instance, involving infinitely small times…);
4) O-machines with an oracle.
The Church-Turing thesis claims the equivalence of (1) and (2). Thus the Church-Turing thesis proposes three kinds of refinement of the concept of an effective procedure/algorithm. At this point we can formulate a hypothesis: if the concept of an effective procedure (algorithm) is sufficiently refined and delimited, for instance as proposed above by our refined definition, then the Church-Turing thesis becomes provable. Though only a hypothesis, the idea seems attractive. As for concepts (3) and (4): concept MN is empirical, therefore not equivalent to EP; concept MW is incompatible with (1); and O-computability is incompatible with (2).
41 On the assumption of flawless data collection.
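To make the oracle idea from (4) concrete, here is a closing sketch, not from the paper and with all names assumed: relative computability treats the oracle as a black-box parameter, and Tichý’s empirical reading replaces it with a finite database snapshot, over which each individual computation stays effective.

```haskell
-- The oracle as an assumed primitive: a black box answering questions
-- that need not be computable by the machine itself.
type Oracle = Integer -> Bool

-- An O-machine style computation: effective *relative to* the black box.
decideWithOracle :: Oracle -> Integer -> Bool
decideWithOracle oracle n = oracle n && even n

-- Tichý's empirical reading: the oracle simulated by a finite database
-- snapshot; lookups over it are ordinary, effective computations.
databaseOracle :: [(Integer, Bool)] -> Oracle
databaseOracle table n = maybe False id (lookup n table)
```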
In this paper I deployed TIL and the procedural theory of concepts built within TIL in order to analyse the problems connected with the Church-Turing thesis, and consequently the problems of the specification of the concept of an effective procedure/algorithm. I did not provide definite answers to the questions posed by these problems, which was not the goal of the paper. Yet I believe that our exact, fine-grained analysis contributes to solving these problems by making available explicit and rigorous descriptions of them, thereby rendering them logically tractable.

Acknowledgements. This research has been supported by the Grant Agency of the Czech Republic, Project No. 401-10-0792, Temporal Aspects of Knowledge and Information, and also by the internal grant agency of VSB-Technical University Ostrava, Project No. SP2012/26, An Utilization of Artificial Intelligence in Knowledge Mining from Software Processes. A version of this paper was presented by the author as an invited talk at the Studia Logica International Conference on Church’s Thesis: Logic, Mind and Nature, Krakow, Poland, June 3-5, 2010. I am indebted to Pavel Materna, who was co-invited to the conference, for his inspiring ideas that positively contributed to the quality of the presentation as well as the resulting paper. I am also grateful to Bjørn Jespersen, whose valuable comments helped me to improve the structure of the paper and to correct my inappropriate English formulations.

References
Abramson, F.G. (1971). ‘Effective Computation over the Real Numbers’. Twelfth Annual Symposium on Switching and Automata Theory. Northridge, Calif.: Institute of Electrical and Electronics Engineers.
Anderson, C.A. (1980). ‘Some new axioms for the logic of sense and denotation’. Noûs 14, 217-234.
Anderson, C.A. (1998). ‘Alonzo Church’s contributions to philosophy and intensional logic’. The Bulletin of Symbolic Logic 4, 129-171.
Blass, A. & Gurevich, Y. (2003). ‘Algorithms: A quest for absolute definitions’. Bulletin of the European Association for Theoretical Computer Science 81.
Börger, E., Grädel, E. & Gurevich, Y. (2001). The Classical Decision Problem. Springer Verlag, Perspectives in Mathematical Logic, 1997; second printing, Springer Verlag, 2001.
Brown, J.R. (1999). Philosophy of Mathematics. London, New York: Routledge.
Carnap, R. (1947). Meaning and Necessity. Chicago: Chicago University Press.
Church, A. (1932). ‘A Set of Postulates for the Foundation of Logic’. Annals of Mathematics, second series, 33, 346-366.
Church, A. (1936). ‘An Unsolvable Problem of Elementary Number Theory’. American Journal of Mathematics, 58, 345-363.
Church, A. (1941). The Calculi of Lambda Conversion. Annals of Mathematical Studies. Princeton: Princeton University Press.
Church, A. (1954). ‘Intensional isomorphism and identity of belief’. Philosophical Studies 5, 65-73.
Church, A. (1956). Introduction to Mathematical Logic. Princeton: Princeton University Press.
Church, A. (1993). ‘A revised formulation of the logic of sense and denotation. Alternative (1)’. Noûs 27, 141-157.
Copeland, B.J. & Proudfoot, D. (1999). ‘Alan Turing’s Forgotten Ideas in Computer Science’. Scientific American, 280 (April), 76-81.
Copeland, B.J. & Proudfoot, D. (2000). ‘What Turing Did After He Invented the Universal Turing Machine’. Journal of Logic, Language, and Information, 9, 491-509.
Copeland, B.J. & Sylvan, R. (1999). ‘Beyond the Universal Turing Machine’. Australasian Journal of Philosophy, 77, 46-66.
Copeland, B.J. (1998).
‘Even Turing Machines Can Compute Uncomputable Functions’. In Calude, C., Casti, J., Dinneen, M. (eds) 1998, Unconventional Models of Computation, London and Singapore: Springer-Verlag, 150-164. Copeland, B.J. (2000). ‘Narrow Versus Wide Mechanism’. Journal of Philosophy, 97, 5-32.
Copeland, B.J. (2008). ‘The Church-Turing Thesis’. In Zalta, E.N. (ed.), The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), URL = <http://plato.stanford.edu/archives/fall2008/entries/church-turing/>.
Curry, H.B. (1929). ‘An Analysis of Logical Substitution’. American Journal of Mathematics 51, 363-384.
Curry, H.B. (1930). ‘Grundlagen der kombinatorischen Logik’. American Journal of Mathematics 52, 509-536, 789-834.
Curry, H.B. (1932). ‘Some Additions to the Theory of Combinators’. American Journal of Mathematics 54, 551-558.
Detlefsen, M. (1990). ‘On an Alleged Refutation of Hilbert’s Program Using Gödel’s First Incompleteness Theorem’. Journal of Philosophical Logic 19, 343-377.
Duží, M. & Materna, P. (2010). ‘Can concepts be defined in terms of sets?’ Logic and Logical Philosophy 19, 195-242.
Duží, M. (2005). ‘Kurt Gödel. Metamathematical results on formally undecidable propositions: Completeness vs. incompleteness’. Organon F 12:4, 447-474.
Duží, M. (2010). ‘The paradox of inference and the non-triviality of analytic information’. Journal of Philosophical Logic 39:5, 473-510.
Duží, M., Jespersen, B. & Materna, P. (2010). Procedural Semantics for Hyperintensional Logic. Foundations and Applications of Transparent Intensional Logic. Berlin: Springer, series Logic, Epistemology, and the Unity of Science, vol. 17.
Duží, M. & Jespersen, B. (in submission). ‘Procedural isomorphism and restricted β-conversion’. Revised and resubmitted to the Logic Journal of the IGPL.
Feferman, S. (ed.) (1986). Kurt Gödel: Collected Works. Oxford: Oxford University Press.
Frege, G. (1891). Funktion und Begriff. Jena: H. Pohle. (Lecture delivered on 9 January 1891 to the Jenaische Gesellschaft für Medizin und Naturwissenschaft, Jena.)
Frege, G. (1892a). ‘Über Sinn und Bedeutung’. Zeitschrift für Philosophie und philosophische Kritik 100, 25-50.
Frege, G. (1892b). ‘Über Begriff und Gegenstand’. Vierteljahrschrift für wissenschaftliche Philosophie 16, 192-205.
Frege, G. (1972). Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle: L. Nebert, 1879. Translated as ‘Begriffsschrift, a Formula Language, Modeled upon that of Arithmetic, for Pure Thought’ in van Heijenoort, J. (ed.), From Frege to Gödel. Cambridge, MA: Harvard University Press, 1967. Also as Conceptual Notation and Related Articles, edited and translated by Terrell W. Bynum. London: Oxford University Press, 1972.
Gandy, R. (1980). ‘Church’s Thesis and Principles for Mechanisms’. In Barwise, J., Keisler, H.J. & Kunen, K. (eds), The Kleene Symposium. Amsterdam: North-Holland.
Gödel, K. (1934). ‘On Undecidable Propositions of Formal Mathematical Systems’. Lecture notes taken by Kleene and Rosser at the Institute for Advanced Study. Reprinted in Davis, M. (ed.) 1965. New York: Raven.
Herbrand, J. (1932). ‘Sur la non-contradiction de l’arithmétique’. Journal für die reine und angewandte Mathematik 166, 1-8.
Horák, A. (2002). The Normal Translation Algorithm in Transparent Intensional Logic for Czech. PhD thesis, Masaryk University, Brno. Retrievable at http://www.fi.muni.cz/~hales/disert/.
Jespersen, B. & Primiero, G. (forthcoming). ‘Alleged assassins: realist and constructivist semantics for modal modifiers’. Lecture Notes in Computer Science.
Kleene, S.C. (1936). ‘Lambda definability and recursiveness’. Duke Mathematical Journal 2, 340-353.
Kleene, S.C. (1952). Introduction to Metamathematics. New York: D. Van Nostrand Co.
Kleene, S.C. (1967). Mathematical Logic. New York-London-Sydney: John Wiley & Sons; first corrected printing 1968.
Kolmogorov, A.N. & Uspensky, V.A. (1958, 1963). ‘On the definition of algorithm’. Uspekhi Mat. Nauk 13:4 (1958), 3-28 (in Russian); English translation in AMS Translations 29 (1963), 217-245.
Kolmogorov, A.N. (1953). ‘On the concept of algorithm’. Uspekhi Mat. Nauk 8:4, 175-176 (in Russian); English translation in Uspensky & Semenov (1993), 18-19.
Materna, P. (1998). Concepts and Objects. Helsinki: Acta Philosophica Fennica, vol. 63.
Materna, P. (2004). Conceptual Systems. Berlin: Logos.
Materna, P. (2007). ‘Church’s criticism of Carnap’s intensional isomorphism from the viewpoint of TIL’. In Marvan, T. & Zouhar, M. (eds), The World of Language and the World Beyond Language: A Festschrift for Pavel Cmorej, 108-118. Bratislava: Department of Philosophy, Slovak Academy of Sciences.
Mendelson, E. (1990). ‘Second thoughts about Church’s thesis and mathematical proofs’. Journal of Philosophy 87, 225-233.
Post, E.L. (1936). ‘Finite Combinatory Processes – Formulation 1’. Journal of Symbolic Logic 1, 103-105.
Post, E.L. (1943). ‘Formal Reductions of the General Combinatorial Decision Problem’. American Journal of Mathematics 65, 197-215.
Post, E.L. (1946). ‘A Variant of a Recursively Unsolvable Problem’. Bulletin of the American Mathematical Society 52, 264-268.
Primiero, G. & Jespersen, B. (2010). ‘Two kinds of procedural semantics for privative modification’. Lecture Notes in Artificial Intelligence 6284, 252-271.
Russell, B.A.W. (1915). Our Knowledge of the External World as a Field for Scientific Method in Philosophy. Chicago: Open Court.
Schönfinkel, M. (1924). ‘Über die Bausteine der mathematischen Logik’. Mathematische Annalen 92, 305-316.
Shepherdson, J.C. & Sturgis, H.E. (1963). ‘Computability of Recursive Functions’. Journal of the ACM 10, 217-255.
Siegelmann, H.T. & Sontag, E.D. (1994). ‘On the Computational Power of Neural Nets’. Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 440-449.
Stewart, I. (1991). ‘Deciding the Undecidable’. Nature 352, 664-665.
Tichý, P. (1968). ‘Smysl a procedura’. Filosofický časopis 16, 222-232. Translated as ‘Sense and procedure’ in Tichý (2004), 77-92.
Tichý, P. (1969). ‘Intensions in terms of Turing machines’. Studia Logica 26, 7-25. Reprinted in Tichý (2004), 93-109.
Tichý, P. (2004). Pavel Tichý’s Collected Papers in Logic and Philosophy. Svoboda, V., Jespersen, B. & Cheyne, C. (eds). Prague: Filosofia, Czech Academy of Sciences; Dunedin: University of Otago Press.
Turing, A.M. (1936). ‘On Computable Numbers, with an Application to the Entscheidungsproblem’. Proceedings of the London Mathematical Society, Series 2, 42 (1936-37), 230-265.
Turing, A.M. (1939). ‘Systems of Logic Based on Ordinals’. Proceedings of the London Mathematical Society 45, 161-228.
Uspensky, V.A. (1992). ‘Kolmogorov and mathematical logic’. Journal of Symbolic Logic 57:2, 385-412.
Uspensky, V.A. & Semenov, A.L. (1993). Algorithms: Main Ideas and Applications. Kluwer.