The Singularity is a recurring theme in artificial intelligence and in science fiction.
It refers to an event in which a computer achieves some significant feat of intelligence. Different authors use the term slightly differently, or use different terms altogether. Sometimes it means achieving “self-awareness” (whatever that is). Sometimes it means the creation of a computer as smart as a human. Sometimes it means the addition of a technology that produces a massive jump in ability, usually to higher-than-human intelligence. Some authors even speak of multiple singularities: computers caught in an ever-rising spiral of super-intelligence.
The AI promises of the 1980s turned out to be grander than the field could deliver. People thought then that they could program intelligence directly, by coding up the minutiae of thought. That turned out to be a vastly bigger task than anticipated.
For a while, it seemed there were things humans could do that computers never would. One of the biggest, and most visible, blows was Deep Blue’s 1997 defeat of Garry Kasparov. Today, chess software (Rybka, Glaurung, Stockfish) running on an ordinary desktop computer will easily defeat the best human players. But Deep Blue and its younger cousins don’t really have intelligence, at least not what we usually mean by the word. They’re “on rails”: they can do very restricted things on very restricted input sets.
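To make “on rails” concrete, here is a toy sketch of the kind of fixed game-tree search at the heart of these chess engines: minimax with alpha-beta pruning. The tree and its leaf values are invented for illustration; real engines add evaluation functions, move ordering, and many refinements, but the core loop is this mechanical.

```python
# Minimax with alpha-beta pruning over a hand-built game tree.
# Leaves are static evaluations; inner nodes are lists of child positions.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of `node`, skipping branches that
    cannot affect the final result."""
    if isinstance(node, (int, float)):      # leaf: a static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A two-ply tree: the maximizing player moves, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, True))  # → 3
```

However deep the search goes, the procedure never steps outside this one rigid task, which is exactly the restriction the paragraph above describes.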
But none of this means man-made intelligence is impossible. Instead of programming intelligence directly, perhaps we can use techniques like evolving it or learning it. Is that what Watson has done?
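What “evolving it” might look like, in miniature: a (1+1) evolutionary loop that mutates a candidate and keeps the mutant only when fitness improves. The task here (matching a target bit string) is a toy stand-in of my own devising, not anything Watson or any real system does; it just shows the shape of the idea.

```python
import random

def evolve(target, generations=10000, seed=0):
    """Evolve a bit string toward `target` by single-bit mutation
    plus selection, instead of writing the answer down directly."""
    rng = random.Random(seed)
    n = len(target)
    genome = [rng.randint(0, 1) for _ in range(n)]   # random start
    fitness = sum(g == t for g, t in zip(genome, target))
    for _ in range(generations):
        if fitness == n:                             # perfect match found
            break
        mutant = genome[:]
        mutant[rng.randrange(n)] ^= 1                # flip one random bit
        f = sum(g == t for g, t in zip(mutant, target))
        if f >= fitness:                             # keep if no worse
            genome, fitness = mutant, f
    return genome, fitness

target = [1, 0, 1, 1, 0, 0, 1, 0]
genome, fitness = evolve(target)
```

Nobody programmed the solution; the loop stumbled onto it. The hard part, of course, is defining a fitness function for “intelligence”.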
Watson is a massively parallel supercomputer, developed by IBM, that will compete on Jeopardy! against Ken Jennings and Brad Rutter. Watson can understand a variety of language nuances and sift through its massive database in an attempt to find answers. Given the diversity of subject matter and of question phrasing on Jeopardy!, this is quite a feat. Does it qualify as intelligence?
It’s not clear to me exactly how Watson works, but the bits I’ve gleaned suggest it is a collection of hand-written subroutines that interact in carefully human-tuned ways. There’s no automated evolution or search over algorithms to try to make it better. In that sense, it is still algorithmic, much like Deep Blue. But Watson’s algorithm is much more complicated and chaotic than Deep Blue’s. It sounds complex enough that I view it as a limited form of intelligence.
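A caricature of that architecture (my sketch, not IBM’s actual design; the scorers, weights, and data are all invented) is a set of independent hand-written scorers whose outputs get combined with hand-tuned weights to rank candidate answers:

```python
def keyword_overlap(clue, evidence):
    """Fraction of clue words that appear in the candidate's evidence text."""
    clue_words = set(clue.lower().split())
    return len(clue_words & set(evidence.lower().split())) / len(clue_words)

def type_match(expected_type, candidate_type):
    """1.0 if the candidate is the kind of thing the clue asks for."""
    return 1.0 if expected_type == candidate_type else 0.0

# Hand-tuned weights: the "carefully human-tuned interaction" in miniature.
WEIGHTS = {"keyword": 0.6, "type": 0.4}

def score(clue, expected_type, candidate):
    return (WEIGHTS["keyword"] * keyword_overlap(clue, candidate["evidence"])
            + WEIGHTS["type"] * type_match(expected_type, candidate["type"]))

clue = "this president wrote the declaration of independence"
candidates = [
    {"answer": "Thomas Jefferson", "type": "person",
     "evidence": "thomas jefferson president wrote the declaration of independence"},
    {"answer": "Monticello", "type": "place",
     "evidence": "monticello home of the president thomas jefferson"},
]
best = max(candidates, key=lambda c: score(clue, "person", c))
print(best["answer"])  # → Thomas Jefferson
```

Every piece here was written and weighted by a human. Nothing in the system rewrites or improves the scorers themselves, which is the distinction I’m drawing between this and something open-ended.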
Perhaps we are hitting the first technological singularity, although it’s not the single explosive moment some have imagined.
Watson might have been a good learning experience: the engineers at IBM must have figured out a lot about how to make computers think. But it still lacks the essential ingredient that sci-fi authors fantasize about. We still don’t have automated techniques to take a given computer and make it better. That is, we don’t know how to make computers improve other computers. That would be a real Singularity.