In The Armchair

The Chinese Room Experiment – Part I

Posted in Armchair Ruminations by Armchair Guy on December 20, 2006

The Chinese Room Experiment is a thought experiment, put forward by the philosopher John R. Searle, that builds on the Turing Test.

The intention of the thought experiment is to demonstrate that the hypothesis of Strong Artificial Intelligence (Strong AI), which claims that the human mind is an algorithm, is wrong. On the Strong AI view, every human mental process is algorithmic, that is, it follows a predefined sequence of steps. This appears to differ from the Weak AI hypothesis, which claims that every human mental process can be simulated on a computer, but that the human mind itself is not an algorithm. Some argue that mental processes are essentially particular behaviours (behaviourism) or a way of looking at physical processes (functionalism), and consequently that there is no significant distinction between the Strong AI and Weak AI hypotheses. (Read about Strong and Weak AI.)

The Chinese Room Argument asks one to imagine a native English speaker, Steve, sitting in a closed room with two windows. Steve knows nothing of Chinese. In the room is a book containing detailed instructions on how to respond to any sentence in Chinese. Outside the room is Wong, a native Chinese speaker. Steve receives a sentence in Chinese from Wong via the input window, consults the book, and responds in Chinese at the output window. Steve carries on such a conversation with Wong without understanding the input, the output, or the logic behind the exchange.

Searle claims that, to Wong, Steve would appear to “know”, or “understand”, Chinese. But Steve doesn’t. He is simply following an algorithm; he has no clue what any of the Chinese exchange means. Thus, Searle concludes, executing an algorithm does not by itself constitute understanding or consciousness.
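The rule-following Searle describes can be caricatured as a pure table lookup. The sketch below is a toy illustration only: the rule entries and the fallback reply are hypothetical, invented for this example, and are nowhere in Searle’s argument. The point is simply that the procedure never consults meaning.

```python
# A toy "Chinese room": replies are produced by lookup alone.
# The entries below are hypothetical examples, not from Searle's paper.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def chinese_room(sentence: str) -> str:
    """Return the scripted reply for a sentence, as Steve would by
    consulting the book; no representation of meaning is involved."""
    # Fallback reply: "Sorry, I don't understand."
    return RULE_BOOK.get(sentence, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```

To Wong, the replies look competent; inside, there is only the lookup. Searle’s claim is that scaling this table up to full conversational competence would not change that fact, while his critics (below) dispute exactly that.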

Objections to the Chinese Room Argument

Several replies to Searle’s Chinese Room Argument claim that it does not establish the impossibility of Strong AI. Here are some of them:

  1. The Systems Reply

    1. Objection This objection says that, while Steve does not understand Chinese, the system consisting of Steve and the book does. This can be viewed as a single, larger entity which does understand Chinese.
    2. Searle’s Reply Searle replies to this objection by suggesting a modification of the experiment in which Steve memorizes the entire rule book and steps out of the room, conversing face to face with Wong. Steve still does not understand Chinese, says Searle; he is still applying the rules without any understanding of Chinese.
    3. Rejoinder The problem with Searle’s reply is that it invites us to think of all of Steve’s memory as an integral part of his consciousness or ego, while in this modification to the experiment, Steve is using his memory as if it were just separate storage, no different than copying the book onto his forearm. Steve and his memory, taken together as a system, do understand Chinese.
  2. The Complexity Reply
    1. Objection This objection, due to Daniel Dennett, says that nothing could duplicate conscious behaviour without being extremely complex. Ignoring this complexity in the Chinese Room Experiment fools our intuition into thinking the Steve+Book combination is ignorant of Chinese, since we think the book is “just a book”. In fact, if we considered the complexity of the algorithm required to converse in Chinese, we would be forced to conclude that the “book” is actually complex enough to be considered conscious. (The notion of a book being conscious may seem ridiculous, but this really refers to the algorithm contained in the book, not the physical book itself.)
    2. Searle’s Reply Searle interprets Dennett’s objection as the statement “You can’t have a book like that”, and goes on to say that the whole point of thought experiments is to imagine a situation that is conceivable, even if we don’t know the details of how to set it up. He says Dennett is essentially denying the idea behind thought experiments.
    3. Rejoinder Dennett is not saying it is impossible to have a book that complex; he is saying anything that complex is already conscious and aware. If Searle insists on having a thought experiment where a book is complex enough to converse in Chinese but is not conscious, he is assuming too much and is begging the question.

More information on this interesting topic can be found in the following books:

Consciousness Explained – by Daniel Dennett
The Mystery of Consciousness – by John R. Searle
