
3.1.1: Introduction


    What does it mean for a function, say, from \(\Nat\) to \(\Nat\), to be computable? Among the first answers, and the best known, is that a function is computable if it can be computed by a Turing machine. This notion was set out by Alan Turing in 1936. Turing machines are an example of a model of computation—they are a mathematically precise way of defining the idea of a “computational procedure.” What exactly that means is debated, but it is widely agreed that Turing machines are one way of specifying computational procedures. Even though the term “Turing machine” evokes the image of a physical machine with moving parts, strictly speaking a Turing machine is a purely mathematical construct, and as such it idealizes the idea of a computational procedure. For instance, we place no restriction on either the time or memory requirements of a Turing machine: Turing machines can compute something even if the computation would require more storage space or more steps than there are atoms in the universe.

    It is perhaps best to think of a Turing machine as a program for a special kind of imaginary mechanism. This mechanism consists of a tape and a read-write head. In our version of Turing machines, the tape is infinite in one direction (to the right), and it is divided into squares, each of which may contain a symbol from a finite alphabet. Such alphabets can contain any number of different symbols, but we will mainly make do with three: \(\TMendtape\), \(\TMblank\), and \(\TMstroke\). When the mechanism is started, the tape is empty (i.e., each square contains the symbol \(\TMblank\)) except for the leftmost square, which contains \(\TMendtape\), and a finite number of squares which contain the input. At any time, the mechanism is in one of a finite number of states. At the outset, the head scans the leftmost square and the mechanism is in a specified initial state. At each step of the mechanism’s run, the content of the square currently scanned, together with the state the mechanism is in and the Turing machine program, determines what happens next. The Turing machine program is given by a partial function which takes as input a state \(q\) and a symbol \(\sigma\) and outputs a triple \(\tuple{q', \sigma', D}\). Whenever the mechanism is in state \(q\) and reads symbol \(\sigma\), it replaces the symbol on the current square with \(\sigma'\), the head moves left, right, or stays put according to whether \(D\) is \(\TMleft\), \(\TMright\), or \(\TMstay\), and the mechanism goes into state \(q'\).

    For instance, consider the situation in Figure \(\PageIndex{1}\).

    Figure \(\PageIndex{1}\): A Turing machine executing its program.

    The visible part of the tape of the Turing machine contains the end-of-tape symbol \(\TMendtape\) on the leftmost square, followed by three \(1\)’s, a \(0\), and four more \(1\)’s. The head is reading the third square from the left, which contains a \(1\), and is in state \(q_1\)—we say “the machine is reading a \(1\) in state \(q_1\).” If the program of the Turing machine returns, for input \(\tuple{q_1, 1}\), the triple \(\tuple{q_2, 0, \TMstay}\), the machine would now replace the \(1\) on the third square with a \(0\), leave the read/write head where it is, and switch to state \(q_2\). If then the program returns \(\tuple{q_3, 0, \TMright}\) for input \(\tuple{q_2, 0}\), the machine would now overwrite the \(0\) with another \(0\) (effectively, leaving the content of the tape under the read/write head unchanged), move one square to the right, and enter state \(q_3\). And so on.
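    As an illustration, the two transitions just traced can be written out in code. This is a minimal sketch, not part of the text: the partial transition function is modeled as a Python dictionary (a missing key means the function is undefined), and all names here (`delta`, `step`) are our own.

    ```python
    # A Turing machine program as a partial function delta(q, sigma) = (q', sigma', D),
    # modeled as a dict whose missing keys mean "undefined".
    # Symbols: ">" is the end-of-tape marker, "0" the blank, "1" the stroke.
    # Directions: "L" (move left), "R" (move right), "N" (stay put).

    delta = {
        ("q1", "1"): ("q2", "0", "N"),  # reading 1 in q1: write 0, stay, enter q2
        ("q2", "0"): ("q3", "0", "R"),  # reading 0 in q2: leave 0, move right, enter q3
    }

    def step(state, tape, head):
        """Apply one transition; raises KeyError if no instruction exists."""
        new_state, new_symbol, direction = delta[(state, tape[head])]
        tape[head] = new_symbol
        if direction == "L":
            head -= 1
        elif direction == "R":
            head += 1
        return new_state, head
    ```

    Starting from the tape pictured above with the head on the third square in state \(q_1\), two calls to `step` reproduce the run described in the text.
    
    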

    We say that the machine halts when it encounters a state \(q_n\) and symbol \(\sigma\) such that there is no instruction for \(\tuple{q_n, \sigma}\), i.e., the transition function for input \(\tuple{q_n,\sigma}\) is undefined. In other words, the machine has no instruction to carry out, and at that point, it ceases operation. Halting is sometimes represented by a specific halt state \(h\). This will be demonstrated in more detail later on.
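    A halting run can be sketched the same way. The simulator below (our own illustration, not from the text) stops as soon as the transition function is undefined for the current state and symbol; the machine `scan_right` is a made-up example that moves right past the end-of-tape marker and any strokes, halting on the first blank.

    ```python
    def run(delta, tape, state="q0", head=0, max_steps=10_000):
        """Run a Turing machine program until no instruction applies (i.e., it halts)."""
        tape = list(tape)
        for _ in range(max_steps):
            key = (state, tape[head])
            if key not in delta:          # undefined transition: the machine halts
                return state, head, "".join(tape)
            state, symbol, direction = delta[key]
            tape[head] = symbol
            head += {"L": -1, "R": 1, "N": 0}[direction]
            if head >= len(tape):         # the tape is infinite to the right,
                tape.append("0")          # so extend it with blanks on demand
        raise RuntimeError("no halt within max_steps")

    # A toy machine: scan right over ">" and every "1"; since there is no
    # instruction for ("q0", "0"), it halts on the first blank it reads.
    scan_right = {
        ("q0", ">"): ("q0", ">", "R"),
        ("q0", "1"): ("q0", "1", "R"),
    }

    state, head, final_tape = run(scan_right, ">1110")
    # halts in state q0 with the head on square 4 (the first blank)
    ```

    The `max_steps` cap is only a guard for the simulation; an actual Turing machine may run forever, which is exactly why halting is a nontrivial property.
    
    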

    The beauty of Turing’s paper, “On computable numbers,” is that he presents not only a formal definition, but also an argument that the definition captures the intuitive notion of computability. From the definition, it should be clear that any function computable by a Turing machine is computable in the intuitive sense. Turing offers three types of argument that the converse is true, i.e., that any function that we would naturally regard as computable is computable by such a machine. They are (in Turing’s words):

    1. A direct appeal to intuition.
    2. A proof of the equivalence of two definitions (in case the new definition has a greater intuitive appeal).
    3. Giving examples of large classes of numbers which are computable.

    Our goal is to try to define the notion of computability “in principle,” i.e., without taking into account practical limitations of time and space. Of course, with the broadest definition of computability in place, one can then go on to consider computation with bounded resources; this forms the heart of the subject known as “computational complexity.”

    Historical Remarks

    Alan Turing invented Turing machines in 1936. While his interest at the time was the decidability of first-order logic, the paper has been described as a definitive paper on the foundations of computer design. In the paper, Turing focuses on computable real numbers, i.e., real numbers whose decimal expansions are computable; but he notes that it is not hard to adapt his notions to computable functions on the natural numbers, and so on. Notice that this was a full five years before the first working general purpose computer was built in 1941 (by the German Konrad Zuse in his parents’ living room), seven years before Turing and his colleagues at Bletchley Park built the code-breaking Colossus (1943), nine years before the American ENIAC (1945), twelve years before the first British general purpose computer—the Manchester Small-Scale Experimental Machine—was built in Manchester (1948), and thirteen years before the Americans first tested the BINAC (1949). The Manchester SSEM has the distinction of being the first stored-program computer—previous machines had to be rewired by hand for each new task.


    This page titled 3.1.1: Introduction is shared under a CC BY license and was authored, remixed, and/or curated by Richard Zach et al. (Open Logic Project).
