Artificial Intelligence (AI) is almost an umbrella term today. Different people use it to mean different things, and taken together those uses cover a lot of ground: image processing, pattern recognition, various types of automated statistical analysis, and syntactic reasoning have all been called artificial intelligence.
The AI of this article refers to the ideal of creating a computer with human-like behaviour or consciousness. That last sentence is already a loaded one: to some people, creating human-like behaviour is the same as creating human-like consciousness, while others argue that behaviour and consciousness are fundamentally different. These two ideals are called, respectively, Weak AI and Strong AI.
Weak AI refers to the ideal of creating, by artificial means (artificial meaning demonstrably algorithmic, for instance a program running on a computer), a set of behaviours indistinguishable from human behaviour.
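The Weak AI criterion is operational: a system meets it, relative to a given judge, when that judge cannot tell its behaviour apart from a human's. A minimal sketch of that criterion, in the spirit of the Turing test, is given below; all names here (`indistinguishable`, the toy `human`, `machine`, and `judge` functions) are hypothetical illustrations, not part of any real framework.

```python
import random

def indistinguishable(human_reply, machine_reply, judge, trials=1000):
    """Estimate whether a judge can tell machine replies from human ones.

    On each trial the judge sees one reply, drawn at random from either
    source, and guesses whether it came from the machine. A judge stuck
    near 50% accuracy cannot distinguish the two sources, which is the
    Weak AI criterion relative to that judge.
    """
    correct = 0
    for _ in range(trials):
        is_machine = random.random() < 0.5
        reply = machine_reply() if is_machine else human_reply()
        correct += judge(reply) == is_machine
    return correct / trials

# Hypothetical toy sources: both produce identical behaviour, so no
# judge can do better than chance.
human = lambda: "I think, therefore I am."
machine = lambda: "I think, therefore I am."
judge = lambda reply: random.random() < 0.5  # reduced to guessing

accuracy = indistinguishable(human, machine, judge)
```

Here `accuracy` will hover near 0.5, the signature of behavioural indistinguishability for this judge; a real test would of course involve open-ended interaction rather than canned replies.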
Strong AI refers to the notion that the human mind is in fact algorithmic: not only can it be simulated by an algorithm, it is an algorithm.
Distinguishing between Weak AI and Strong AI can be hard. Although they appear to be different, the argument goes that if something behaves exactly like a human, then it is human, at least mentally. This may seem counterintuitive at first, but the crucial condition is that it behave like a human in every respect. If this argument is accepted, then simulating something that behaves like a human is the same thing as creating a human. To understand this point of view, it helps to try to identify, from the mental perspective, what could distinguish a “true” human from a “simulated” one. Is there any aspect of human mentality that does not manifest in any form of behaviour, and so could not be captured even if every part of the behaviour were simulated?
Opponents of the equality of Strong AI and Weak AI argue essentially from the philosophical notion of qualia, the unique, subjective character of individual perceptions. An organism experiencing pain, for example, has a unique, unified experience of the sensation. The claim is that such an experience, a quale, cannot be simulated on a computer, even if the behaviour associated with the experience can.