In The Armchair

Strong and Weak Artificial Intelligence

Posted in Armchair Ruminations by Armchair Guy on December 23, 2006

Artificial Intelligence (AI) has become something of an umbrella term. Different people use it to refer to different things, and taken together those uses cover a lot of ground: image processing, pattern recognition, various kinds of automated statistical analysis, and syntactic reasoning have all been called artificial intelligence.

The AI of this article refers to the ideal of creating a computer with human-like behaviour or consciousness. That sentence is already a loaded one. To some people, creating human-like behaviour is the same as creating human-like consciousness; others argue that behaviour and consciousness are fundamentally different. The two ideals are called, respectively, Weak AI and Strong AI.

Weak AI refers to the ideal of creating, via artificial means (artificial meaning demonstrably algorithmic, for instance via a program on a computer), a set of behaviours which are indistinguishable from human behaviour.

Strong AI refers to the notion that the human mind is in fact algorithmic. Not only can it be simulated using an algorithm, it is an algorithm.

Distinguishing between Weak AI and Strong AI can be hard. Although the two appear to be different, one argument holds that if something behaves exactly like a human, then it is human, at least mentally. This may seem counterintuitive at first, but the crucial condition is that it behave like a human in all aspects. If this argument is accepted, then simulating something that behaves like a human is the same thing as creating a human. To understand this point of view, it helps to try to identify, from the mental perspective, the difference between a “true” human and a “simulated” human. Is there anything about the mentality of humans that cannot be simulated, even if every part of the behaviour can be, something that never manifests in any form of behaviour?

Opponents of this equivalence between Strong AI and Weak AI argue from the philosophical notion of qualia, or unique individual perceptions. For example, an organism experiencing pain has a unique, unified experience of the sensation. The claim is that such an experience, or quale, cannot be simulated on a computer, even if the behaviour associated with it can.

4 Responses

  1. Hal said, on December 29, 2006 at 5:35 am

    Someone using an argument based on the existence of qualia might agree with the Weak AI hypothesis while disagreeing with Strong AI, since qualia say more about the subjective experience of being conscious than about its outward manifestations. My own view is that if the only difference between our own minds and something algorithmic is the subjective experience, then the difference is not worth much. Then again, I do not believe in Strong AI or Weak AI.

    I have not carefully read your discussion of Penrose’s argument, but I hope it is based on the argument from “Shadows of the Mind,” since that one is more convincing than the one from “Emperor’s New Mind.” I was aware in reading “Shadows of the Mind” that the argument was not irrefutable. Nonetheless, it was enough to cause me to stop accepting without question what is taken as a given in popular culture and science fiction, which is that one day computers will be thinking like us. I think it requires a pretty big leap of faith to think that we will one day understand the very basis of our own experience of reality in terms of a precisely described logical theory, but for some reason this is taken as a given by a majority of scientifically minded folk. This must be because we have had so much success describing so many other phenomena of nature.

    I do think Penrose is wrong to the extent that he thinks thought will be completely explained in terms of some yet to be developed theory of physics. Any precisely described explanation should just lead right back to the same paradox.

    I would not want to try to discourage believers in Strong or Weak AI from trying to substantiate their belief, but it is a bit vexing to me that more people don’t seem to realize where the burden of proof really lies.

  2. Armchair Guy said, on December 29, 2006 at 5:55 pm

    Hal:

    Thank you for your comments. My post does give the impression that qualia are used to argue against both Strong and Weak AI, which is not correct; as you say, qualia are used mainly to dispute the thesis of Strong AI, not Weak AI.

    The discussion of Penrose’s argument is not based directly on “Shadows of the Mind”. Penrose’s argument in “Shadows” is so long and involved that I found it hard to put in succinct form, so I derived the version I argue against from various sources that paraphrase it, including John Searle’s “The Mystery of Consciousness”. I think it captures the essence of Penrose’s argument without leaving out anything crucial, but I would like to hear about it if I missed something.

    In a sentence, I think Penrose confuses the knowledge that “there exists a certain algorithm with certain properties” with the knowledge “of an algorithm with those properties”. We may know it exists, but we may not be able to recognize it if we see it.
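
    To make the distinction concrete, here is a toy illustration (my own, not Penrose’s), sketched in Python. Exactly one of the two constant programs below correctly answers the open question “are there infinitely many twin primes?”, so an algorithm that decides the question provably exists; but we cannot say which program it is:

        # Exactly one of these constant programs correctly answers the open
        # question "are there infinitely many twin primes?".
        def answer_yes():
            return True

        def answer_no():
            return False

        # An algorithm deciding the question provably exists (it is one of
        # the two above), yet we cannot recognize which one it is: knowing
        # *that* a suitable algorithm exists is not knowing the algorithm.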

    You raise an interesting point about whether we can find a logical theory that explains our own consciousness. It seems to me that this would be related (not identical) to the question of whether any system can comprehend itself. The answer for Turing machines is in the affirmative (there exists a universal Turing machine), using a narrow definition of comprehension: comprehension = the ability to simulate.
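
    As a loose illustration of “comprehension = the ability to simulate”, here is a minimal sketch in Python (mine, not a formal construction) of one program that runs any Turing machine handed to it as data, the way a universal machine does:

        # A minimal Turing machine interpreter: one program that can run any
        # machine given to it as a transition table. A sketch, not a formal
        # universal machine.
        def run_tm(delta, state, tape, accept, blank="_", max_steps=10000):
            """delta maps (state, symbol) -> (new_state, new_symbol, move)."""
            cells = dict(enumerate(tape))  # sparse tape: position -> symbol
            head = 0
            for _ in range(max_steps):
                if state == accept:
                    return True
                key = (state, cells.get(head, blank))
                if key not in delta:
                    return False  # no applicable rule: the machine rejects
                state, cells[head], move = delta[key]
                head += 1 if move == "R" else -1
            return None  # machine did not settle within max_steps

        # Example machine: accepts strings of the form 0...01.
        delta = {
            ("scan", "0"): ("scan", "0", "R"),
            ("scan", "1"): ("done", "1", "R"),
        }
        print(run_tm(delta, "scan", "0001", accept="done"))  # prints True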

    I think any logical theory that encapsulates consciousness will be so complex (in terms of its symbols, formulae and axioms) that no human can understand it. I don’t think our brains are developed enough yet. But I can imagine a time far in the future when our descendants, homo brilliant, might become smart enough to develop a first-order theory describing consciousness as it is today.

    As for the burden of proof, I would normally agree with you. No one has proved that consciousness is a property of an algorithm. However, the contest is not fair. Creating a computer that behaves consciously (probably the only “proof” one could offer in any reasonable time frame) is a colossal task. But even that wouldn’t prove the Strong AI hypothesis. On the other hand, proving that Strong and Weak AI are impossible is a much easier task: someone has to exhibit just one counterargument that stands up to scrutiny. (The Gödelian argument is one such attempt.) This is analogous to the statement that to prove something requires a proof, while to disprove it requires one counterexample.

    So as far as proof is concerned, I think the jury is out. All we can do at this stage is opine based on available evidence, even if such evidence does not constitute proof.

  3. Hal said, on December 29, 2006 at 9:44 pm

    You say that disproving Strong or Weak AI is in principle an easier task, but I disagree. How can we prove that a computer can’t be conscious without a definition of consciousness?

    As for Turing machines comprehending other Turing machines, we have to be careful about how we define comprehension. In a sense, we all comprehend the mind. We can simulate a mind within our own mind by imagining what someone else would do in a particular situation, just like a universal Turing machine can simulate another Turing machine within itself. The Turing Test is sort of based on a simulation of a mind within a mind, but the precise criteria considered by the judge of the Turing Test are intentionally left vague. The sort of understanding I am speaking of is the kind that can be precisely described.

    There are examples of hypothetical physical processes that can be precisely described but that cannot be simulated by computer. Penrose discusses “oracles” in “Shadows of the Mind”: an oracle is a hypothetical physical process that could solve the halting problem, so it can be precisely described mathematically yet cannot be simulated computationally. Penrose concludes that consciousness cannot be explained in terms of oracles either, because that would lead right back to the same paradox as before.

    The question is whether we comprehend an oracle. I think we do, to the extent that we can define it as something that tells us correctly whether a Turing machine halts or not. On the other hand, we can’t simulate an oracle, unless you count the mathematical description of what an oracle does as a simulation.

  4. Armchair Guy said, on December 30, 2006 at 2:22 am

    Hal:

    Your question: “How can we prove that a computer can’t be conscious without a definition of consciousness?” highlights the imbalance between proving Strong AI and disproving it.

    I think we can in fact prove that a computer can’t be conscious without first defining consciousness. We don’t even need a general way to determine whether an arbitrary process is conscious.

    All we need is one process that we recognize as conscious (such as human thought) and that is provably not an algorithm; the Gödelian argument is one such attempt. We probably agree that we can and do recognize conscious processes. For example, I have no satisfactory definition of consciousness, but I can recognize that most (maybe even all) humans are conscious. If I could exhibit something a human can do, and prove that it could not possibly be done by a computer, then I would have proved that Strong AI is false.

    Regarding comprehension: I agree that the definition of comprehension that I used for the UTM is a narrow one. By that definition, humans can also comprehend themselves. Of course, no human comprehends himself yet, but this doesn’t mean it won’t happen. Comprehension is another one of those hard-to-define terms.

    About Oracles: If we could simulate an oracle (in our minds, not on a computer) then that would prove we are not algorithms! I wonder whether it is widely accepted in the scientific community that oracles (natural processes that can’t be simulated on a computer) exist.
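
    To make the non-simulability concrete, here is the classic diagonal argument in sketch form (my paraphrase, in Python): any program claiming to be a halting oracle can be fed a program built to contradict it.

        # Why no program can play the role of a halting oracle.
        def halts(program, arg):
            # Hypothetical oracle: assumed to return True iff program(arg)
            # halts. No algorithm can actually implement this.
            raise NotImplementedError("no total, correct version can exist")

        def contrarian(program):
            # Do the opposite of whatever the oracle predicts about
            # program run on its own source.
            if halts(program, program):
                while True:  # predicted to halt, so loop forever
                    pass
            return "halted"  # predicted to loop, so halt at once

        # contrarian(contrarian) halts if and only if halts() says it does
        # not, a contradiction. So halts() cannot be implemented by any
        # algorithm, even though an oracle for it is precisely describable.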

