So, I watched the first part (of three) of the IBM Watson Jeopardy challenge. So far, Brad Rutter and Watson are tied for first place, with Ken Jennings somewhat behind. I was interested in getting a sense of how Watson “thinks”, and in particular of the extent to which Watson is “understanding” human language. At this point the answer seems to be “not very well”.
Of course, it’s hard to glean much just watching a TV show, but it seems as if Watson isn’t quite understanding language the way we do. If a clue indicates the answer is a member of two sets, for example, Watson sometimes seems to ignore the second set. An example (paraphrased): “This word can mean the bend in the elbow and also a thief.” Watson’s best guess was “knee”, which has nothing to do with the second set (words that can mean “thief”), though it does have something to do with the first set (words that can mean the bend in the elbow). The right answer was “crook”.
Watson seems to do superlatively well when there are unique phrases to be matched, i.e. when the clue contains phrases that are pertinent only to the answer and wouldn’t occur anywhere else. Perhaps this is not surprising at all.
It’s possible Watson’s thought processes are a bunch of shortcuts completely unlike ours. It may, for example, simply be finding a bunch of words and phrases associated with keyphrases in the clue and ranking them, rather than searching for words/phrases in the sets that the clue is asking for.
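To make that contrast concrete, here’s a toy sketch of the two hypothetical strategies applied to the paraphrased clue above. All the word lists and association scores are invented for illustration; this is a guess at the shape of the difference, not a claim about how Watson actually works.

```python
# Two toy strategies for the clue (paraphrased):
# "This word can mean the bend in the elbow and also a thief."

# Strategy 1: treat the clue as naming two sets; the answer must be in both.
elbow_bend_words = {"crook", "knee", "joint", "hinge"}   # invented candidates
thief_words = {"crook", "bandit", "robber", "pilferer"}  # invented candidates

def intersect_answer(set_a, set_b):
    """Return words satisfying both constraints the clue names."""
    return set_a & set_b

# Strategy 2: score candidates purely by association with keyphrases in the
# clue, which can surface a word tied to only one part of it.
association_scores = {  # invented scores
    "knee": 0.9,   # strongly associated with "bend"/"elbow" contexts
    "crook": 0.7,
    "bandit": 0.4,
}

def rank_by_association(scores):
    """Return candidates ordered by raw association score."""
    return sorted(scores, key=scores.get, reverse=True)

print(intersect_answer(elbow_bend_words, thief_words))  # {'crook'}
print(rank_by_association(association_scores)[0])       # knee
```

On these made-up inputs, the set-based strategy pins down “crook”, while pure association ranking happily puts “knee” on top, which is roughly the failure mode the broadcast seemed to show.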
Perhaps the right test is this: would it be easy to add a subroutine to Watson that allowed it to rephrase the clue in several simpler English sentences? I don’t know. So I’m still unsure whether to call the creation of Watson a Singularity-defining moment.