
4.1: The Turing Test


    Alan Turing29

    Alan Mathison Turing OBE FRS (/ˈtjʊərɪŋ/; 23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst and theoretical biologist. He was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence.

    During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre. For a time he led Hut 8, the section responsible for German naval cryptanalysis. He devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bomba method, an electromechanical machine that could find settings for the Enigma machine. Turing played a pivotal role in cracking intercepted coded messages that enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic; it has been estimated that this work shortened the war in Europe by more than two years and saved over fourteen million lives.

    After the war, he worked at the National Physical Laboratory, where he designed the ACE, among the first designs for a stored-program computer. In 1948 Turing joined Max Newman's Computing Machine Laboratory at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis, and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.

    Turing was prosecuted in 1952 for homosexual acts, when, under the Labouchere Amendment, "gross indecency" was still criminal in the UK. He accepted chemical castration treatment, with DES (diethylstilbestrol), as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest ruled his death a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning. In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way he was treated." Queen Elizabeth II granted him a posthumous pardon in 2013.

    “The Turing Test”

    Turing’s work “Computing Machinery and Intelligence”30 was groundbreaking in looking toward a future of increasingly capable machines and how we can understand and deal with them. The quotes from Turing below, combined with my commentary, are from this article.

    The first section of Turing’s article is labeled “The Imitation Game.” The recent biopic about Turing (2014, directed by Morten Tyldum) uses this as its title, making use both of Turing’s most popular contribution to philosophy and of the fact that Turing had a lot of difficulty fitting in socially, so that he had to “imitate” being a normal person. It’s worth a watch if you have the time. In this first section of the article, Turing outlines what his project is going to be, and he makes it very clear what he wants to argue for and what he does not want to argue for:

    “I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

    The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A."”

    The form that the test will take is rather easy to understand given today’s technologies, but Turing had to go to great lengths basically just to describe the judge communicating with the two through electronic means, like a chat room, instant messaging, or texting. Communicating this way ensures that only the content of what is said is considered when making a decision. He then says,

    “We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"”

    So what he wants to do is rather simple: a judge (you, for example) chats with two beings through a computer. One is a computer and the other is a person. Both try to convince you that they are the person, and if you can’t decide which is which or guess wrong – and this happens repeatedly with both yourself and other judges – then Turing believes we ought to conclude that the computer is intelligent like we are, since it has properly imitated us (hence the name “imitation game”).
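
    To make the structure of the test concrete, here is a minimal sketch of one round of the game. Everything in it – the function names, the judge returning a label, the fixed question list – is my own illustrative scaffolding, not anything specified in Turing’s paper:

    ```python
    import random

    def play_round(judge, human, machine, questions):
        """One round of the imitation game. `human` and `machine` are
        callables that answer a question with text; `judge` names the
        label it believes belongs to the human."""
        # Hide the two contestants behind the anonymous labels X and Y.
        players = {"X": human, "Y": machine}
        if random.random() < 0.5:
            players = {"X": machine, "Y": human}
        # The judge sees only typed text: each label's answers.
        transcript = {label: [(q, player(q)) for q in questions]
                      for label, player in players.items()}
        guess = judge(transcript)       # "X" or "Y"
        return players[guess] is human  # True if the judge was right
    ```

    On Turing’s criterion, the machine does well if, over many rounds with many judges, this comes out True no more often than chance.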

    Turing then goes on to explain the advantages of approaching machine intelligence this way as opposed to trying to define thinking and showing that a machine can do it,

    “The new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man. No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a "thinking machine" more human by dressing it up in such artificial flesh. The form in which we have set the problem reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices…

    We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane. The conditions of our game make these disabilities irrelevant. The "witnesses" can brag, if they consider it advisable, as much as they please about their charms, strength or heroism, but the interrogator cannot demand practical demonstrations….

    It might be urged that when playing the "imitation game" the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind. In any case there is no intention to investigate here the theory of the game, and it will be assumed that the best strategy is to try to provide answers that would naturally be given by a man.”

    Why approach artificial intelligence in this way? It’s rather simple: how do we know other people are intelligent? Do we define “thinking” and then say “I know that you are thinking and thus can be intelligent”? Or do we just assume others are intelligent when we interact with them and they show us they are? Turing just wants to extend to machines the same courtesy we extend to other people.

    After presenting his reasons for believing the test to be a good one, Turing goes on to deal with many counterarguments to his own. This is a solid philosophical move: it is his attempt to deal with all of the most reasonable and common objections to what he has proposed. He dealt with a number of technological issues which we can now take for granted: there are few who doubt that artificial intelligence will happen at some point, and now it appears to be just a matter of when and how. Computers advance in power every year, so some real form of artificial intelligence beyond Siri-like helpers is on the distant horizon. He does deal with other, more timeless objections as well, and those follow below.

    The Mathematical Objection

    Despite the advances in technology, it is still possible that computers will never be capable of computing everything necessary to mimic or have human intelligence. He summarizes these objections as follows,

    “The result in question refers to a type of machine which is essentially a digital computer with an infinite capacity. It states that there are certain things that such a machine cannot do. If it is rigged up to give answers to questions as in the imitation game, there will be some questions to which it will either give a wrong answer, or fail to give an answer at all however much time is allowed for a reply. There may, of course, be many such questions, and questions which cannot be answered by one machine may be satisfactorily answered by another. We are of course supposing for the present that the questions are of the kind to which an answer "Yes" or "No" is appropriate, rather than questions such as "What do you think of Picasso?" The questions that we know the machines must fail on are of this type…This is the mathematical result: it is argued that it proves a disability of machines to which the human intellect is not subject.”
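
    The “mathematical result” Turing is alluding to here comes from Gödel’s incompleteness theorem and from Turing’s own work on computability: for any particular machine there are well-posed yes-or-no questions it must answer wrongly or not at all. A minimal Python sketch of the diagonal construction behind the halting problem gives the flavor; `halts` is a hypothetical oracle of my own naming, not a function anyone can actually implement:

    ```python
    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually halts.
        No total, always-correct version of this function can exist."""
        raise NotImplementedError  # stands in for any proposed implementation

    def diagonal(program):
        # Do the opposite of whatever `halts` predicts about a program
        # applied to its own source.
        if halts(program, program):
            while True:
                pass  # predicted to halt, so loop forever
        else:
            return    # predicted to loop, so halt at once
    ```

    Whichever yes-or-no answer a machine’s `halts` gives to the critical question “does diagonal(diagonal) halt?”, `diagonal` does the opposite, so the answer is wrong – exactly the kind of forced failure the objection points to.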

    His response to this is rather simple: So what? How do you know that people aren’t limited? And even if a computer has limitations, does it really matter? His response is below and it finishes with his most important point for the purposes of the usefulness of the Imitation Game,

    “The short answer to this argument is that although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect. But I do not think this view can be dismissed quite so lightly. Whenever one of these machines is asked the appropriate critical question, and gives a definite answer, we know that this answer must be wrong, and this gives us a certain feeling of superiority. Is this feeling illusory? It is no doubt quite genuine, but I do not think too much importance should be attached to it. We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines. Further, our superiority can only be felt on such an occasion in relation to the one machine over which we have scored our petty triumph. There would be no question of triumphing simultaneously over all machines. In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.

    Those who hold to the mathematical argument would, I think, mostly be willing to accept the imitation game as a basis for discussion…”

    So, even if there is something to this argument, the Imitation Game itself can still function as a test for intelligence.

    The Argument from Consciousness

    This argument is one of the more interesting and stronger ones, and it still stands today against the possibility of a true Artificial Intelligence akin to our own, human intelligence. Turing explains this argument as follows,

    “This argument is very well expressed in Professor Jefferson's Lister Oration for 1949, from which I quote. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain - that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."”

    This objection calls into question the validity of the Imitation Game test and essentially says, “Even if it passes the test, it’s not intelligent because it lacks consciousness or the ability to do something new and emotional.” Turing responds by asking whether or not we really know that others are thinking (or even that we ourselves are). He has a point here, since what constitutes “consciousness” and “thinking” is still mostly a mystery. We know how brains work during those processes, but we can’t exactly define them yet, which is why Turing likes his test as the method of determining intelligence.

    Turing goes on to say that the real force of the argument is that it calls into question a machine’s ability to understand anything like a human does. John Searle’s “Chinese Room” argument that follows below is a stronger presentation of this argument. Turing gives his own response to these objections by saying that a machine could certainly behave as if it has understanding, and whether or not that understanding would be “genuine” is a separate issue; the test of the Imitation Game for intelligence can still work, especially since it’s how we know that other people understand things.

    Arguments from Various Disabilities

    Turing summarizes this objection as follows,

    “These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X." Numerous features X are suggested in this connexion. I offer a selection:

    Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.”

    Turing’s response is surprisingly simple: So what if they can’t do these things? They could still be intelligent without having to do them, and more importantly, couldn’t we just make the machine do these things? Can’t we make a computer make mistakes, learn, behave in a certain way, have taste buds, self-program, etc.? And if it’s not funny, is it not human then? While I would like to call those without a sense of humor inhuman, I won’t. They’re just boring, and computers can certainly be boring.

    Finally, Turing drives home his main point yet again: none of this calls into question the appropriateness of his test in determining when a machine has achieved intelligence.

    Lady Lovelace's Objection

    This is perhaps one of the more cited objections to computer intelligence, and predates Turing by over a hundred years. He portrays it, and his response, as follows,

    “Our most detailed information of Babbage's Analytical Engine comes from a memoir by Lady Lovelace (1842). In it she states, "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform" (her italics). This statement is quoted by Hartree (1949) who adds: "This does not imply that it may not be possible to construct electronic equipment which will 'think for itself,' or in which, in biological terms, one could set up a conditioned reflex, which would serve as a basis for 'learning.' Whether this is possible in principle or not is a stimulating and exciting question, suggested by some of these recent developments. But it did not seem that the machines constructed or projected at the time had this property."

    I am in thorough agreement with Hartree over this. It will be noticed that he does not assert that the machines in question had not got the property, but rather that the evidence available to Lady Lovelace did not encourage her to believe that they had it. It is quite possible that the machines in question had in a sense got this property. For suppose that some discrete-state machine has the property. The Analytical Engine was a universal digital computer, so that, if its storage capacity and speed were adequate, it could by suitable programming be made to mimic the machine in question. Probably this argument did not occur to the Countess or to Babbage. In any case there was no obligation on them to claim all that could be claimed…

    A variant of Lady Lovelace's objection states that a machine can "never do anything really new." This may be parried for a moment with the saw, "There is nothing new under the sun." Who can be certain that "original work" that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles.”

    Again, to come back to Turing’s main point, this doesn’t seem to create a genuine problem for his test. Whether or not a machine – or anyone – can do anything new is a separate question from whether something is intelligent.

    Argument from Continuity in the Nervous System

    This objection to Turing is a more technical one, and he explains it like this,

    “The nervous system is certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron, may make a large difference to the size of the outgoing impulse. It may be argued that, this being so, one cannot expect to be able to mimic the behaviour of the nervous system with a discrete-state system.”

    This is an early version of the distinction between what is now known as “hard AI” and “soft AI”. Hard AI is the type of intelligence that Turing has been talking about: a computer, as we understand them, runs a program that is intelligent. The objection he is discussing basically says that there is no way this can work, since a computer is, in some sense, binary (just using 1’s and 0’s to do everything) while the human brain and mind work in an entirely different way. This problem is still being discussed, since we just don’t know whether or not our brains function like a very complex personal computer.
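
    Turing’s own reply in the paper was, roughly, that this difference cannot be exploited under the conditions of the game: a discrete-state machine can approximate continuous behavior so finely that an interrogator who sees only typed answers could never tell. A toy sketch of that idea (the step size here is an arbitrary choice of mine):

    ```python
    def quantize(x, step=1e-6):
        """Map a continuous quantity onto a finite grid of discrete states."""
        return round(x / step) * step

    impulse = 0.7361224917    # a "continuous" nerve-impulse magnitude
    print(quantize(impulse))  # 0.736122 - with a small enough step, the
                              # discrete stand-in is indistinguishable
                              # through the game's typed channel
    ```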

    Because there is still something to this objection, there are those who are pursuing what is known as “soft AI”: instead of creating a traditional program on a traditional computer, the goal is to create an artificial mechanical brain that completely replicates our own brain. Rather than having a program be intelligent, the idea is that the resultant thing would be intelligent, since it would be, piece for piece, an exact (but artificial) replica of the human brain. Many people think this is the only way to get a real AI, and people are laying the foundation to try to create an “artificial” brain that can think for itself and mimic the human mind. Time will tell whether this works.

    “The Chinese Room”31

    The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.

    The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols. Specifically, the argument refutes a position Searle calls Strong AI:

    The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

    Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.

    The Chinese Room Thought Experiment

    Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

    The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

    Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
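
    A toy sketch makes the picture vivid: a program that produces Chinese output by pure pattern-matching, just as Searle manually follows his rule book. The two rule entries below are invented examples of mine, and a real rule book would need to be astronomically larger, but the essential point holds – nothing in the procedure ever consults meaning:

    ```python
    # A toy rule book: incoming symbol strings mapped to outgoing ones.
    # The operator (or CPU) needs no idea what any of the strings mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def room(symbols: str) -> str:
        """Match the incoming symbols against the rules; copy out the result."""
        return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

    print(room("你好吗？"))  # fluent-looking output from pure symbol shuffling
    ```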

    Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

    Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.

    The Turing Test

    The Chinese room implements a version of the Turing test. Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.

    Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding". He did not believe this was relevant to the issues that he was addressing. He wrote:

    I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

    To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.

    “What Is It Like to Be a Bat?”32

    The philosopher Thomas Nagel devised a thought experiment which I believe can act as an interesting response to Searle’s Chinese Room argument. Searle does have a very strong and interesting point about understanding, but he doesn’t show that there is no understanding taking place – just that it won’t be the same type of understanding we (normal, conscious humans) believe that we have. Nagel asks us to try to understand what it would be like to be a bat, and he concludes that we could not possibly begin to grasp how bats experience the world. But that doesn’t mean nothing is happening; it’s just happening in a form we cannot comprehend. If you take what he says and apply it to the Chinese Room, then couldn’t a computer have its own type of non-human understanding that we can’t comprehend? I have included a summary of his arguments below; it is very technical, but hopefully it makes sense.

    "What is it like to be a bat?" is a paper by American philosopher Thomas Nagel, first published in The Philosophical Review in October 1974, and later in Nagel's Mortal Questions (1979). In it, Nagel argues that materialist theories of mind omit the essential component of consciousness, namely that there is something that it is (or feels) like to be a particular, conscious thing. He argued that an organism had conscious mental states, "if and only if there is something that it is like to be that organism—something it is like for the organism." Daniel Dennett called Nagel's example "The most widely cited and influential thought experiment about consciousness."

    The thesis attempts to refute reductionism (the philosophical position that a complex system is nothing more than the sum of its parts). For example, a physicalist reductionist's approach to the mind–body problem holds that the mental process humans experience as consciousness can be fully described via physical processes in the brain and body.

    Nagel begins by arguing that the conscious experience is widespread, present in many animals (particularly mammals), and that for an organism to have a conscious experience it must be special, in the sense that its qualia or "subjective character of experience" are unique. Nagel stated, “An organism has conscious mental states if and only if there is something that it is like to be that organism - something that it is like for the organism to be itself.”

    The paper argues that the subjective nature of consciousness undermines any attempt to explain consciousness via objective, reductionist means. A subjective character of experience cannot be explained by a system of functional or intentional states. Consciousness cannot be explained without the subjective character of experience, and the subjective character of experience cannot be explained by reductionist means; it is a mental phenomenon that cannot be reduced to materialism. Thus, for consciousness to be explained from a reductionist stance, the idea of the subjective character of experience would have to be discarded, which is absurd. Neither can a physicalist view explain it, because in such a world each phenomenal experience had by a conscious being would have to have a physical property attributed to it, which is impossible to prove due to the subjectivity of conscious experience. Nagel argues that each and every subjective experience is connected with a “single point of view,” making it unfeasible to consider any conscious experience as “objective”.

    Nagel uses the metaphor of bats to clarify the distinction between subjective and objective concepts. Bats are mammals, so they are assumed to have conscious experience. Nagel used bats for his argument because of their highly evolved and active use of a biological sensory apparatus that is significantly different from that of many other organisms. Bats use echolocation to navigate and perceive objects. This method of perception is similar to the human sense of vision in that both sonar and vision are regarded as perceptual experiences. While it is possible to imagine what it would be like to fly, navigate by sonar, hang upside down, and eat bugs like a bat, that is not the same as a bat's perspective. Nagel claims that even if humans were able to metamorphose gradually into bats, their brains would not have been wired as a bat's from birth; therefore, they would only be able to experience the life and behaviors of a bat, rather than the mindset.

    Such is the difference between subjective and objective points of view. According to Nagel, “our own mental activity is the only unquestionable fact of our experience”, meaning that each individual only knows what it is like to be themselves (subjectivism). Objectivity requires an unbiased, non-subjective state of perception. For Nagel, the objective perspective is not feasible, because humans are limited to subjective experience.

    Nagel concludes with the contention that it would be wrong to assume that physicalism is incorrect, since that position is also imperfectly understood. Physicalism claims that states and events are physical, but those physical states and events are only imperfectly characterized. Nevertheless, he holds that physicalism cannot be understood without characterizing objective and subjective experience. That is a necessary precondition for understanding the mind-body problem.


    This page titled 4.1: The Turing Test is shared under a CC BY license and was authored, remixed, and/or curated by Noah Levin (NGE Far Press).
