Until yesterday, I hadn’t even heard of Watson, the IBM computer that just beat Jeopardy champions Ken Jennings and Brad Rutter. I’m not even sure where I saw an offhand comment about the computer being on Jeopardy – I couldn’t imagine that the computer was one of the contestants, so I really didn’t know what the comment was about.
Then today I saw an article in the Wall Street Journal about Watson. I was fascinated. I’ve loved Jeopardy since I was a child, though I haven’t watched the show in years. (Actually I haven’t watched any TV in years, partly because I prefer to spend my time on other activities and partly because, since we cancelled cable to save money, the only channel we get is the local community information channel.) I would never want to be a contestant – sometimes it’s easy to get the answers when you’re just watching the show, but the pressure contestants are under makes it harder for the brain to produce the information needed.
Of course, Watson doesn’t have that particular problem, as it lacks emotions. It (I keep wanting to say “he,” but Watson is just a machine) also may have an advantage over its human opponents in its ability to press the buzzer so quickly. (This article discusses that question.) But until the big match against Jennings and Rutter, practice matches against other humans gave no clear indication of how well Watson would do in this game made to test human intelligence.
Until reading the WSJ article, I had always thought of Jeopardy primarily as a test of knowledge. If you happen to have studied the right history, seen the right movies, read the right books, etc., you recognize the clues that are given and know what answer – er, that is, question – to give in response. But as the WSJ article points out, a great deal of the ability required to play Jeopardy well has to do with interpreting language.
Parsing ordinary sentences is a challenge for a computer, especially – I’m guessing – in a language like English, where a word can be a noun, verb, or adjective, depending on the context. In Jeopardy, where clues are short and often cryptic, the challenge is that much greater. Sometimes Watson figures it out. Sometimes it doesn’t – or at least, based on its answers, it doesn’t seem to have understood correctly.
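To get a rough feel for why that ambiguity is so hard, here is a toy sketch (my own made-up word list, not anything Watson actually uses) that counts how many part-of-speech readings even a famous five-word sentence has before a computer can start making sense of it:

```python
from itertools import product

# Hypothetical lookup table for illustration only: each word is listed with
# the parts of speech it can plausibly play in English.
pos = {
    "time":  ["noun", "verb", "adjective"],   # "time flies" / "time a race" / "time zone"
    "flies": ["verb", "noun"],                # "he flies" / "fruit flies"
    "like":  ["preposition", "verb"],         # "like an arrow" / "I like it"
    "an":    ["article"],
    "arrow": ["noun"],
}

sentence = ["time", "flies", "like", "an", "arrow"]

# Every combination of one tag per word is a possible reading the
# computer must consider before it can even begin to parse the sentence.
readings = list(product(*(pos[w] for w in sentence)))
print(len(readings))  # 3 * 2 * 2 * 1 * 1 = 12 possible tag sequences
```

Twelve readings for five ordinary words – and a real system has to weigh far larger vocabularies and far subtler distinctions than this little table pretends to.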
I find the topic particularly interesting right now, because I’m busy writing a story – in collaboration with Al and at his request – about a robot that has run amok. It has good enough linguistic skills to converse with human beings, but it understands only what is actually said and is unable to consider whether the speaker really meant what he was saying, or expected it to be acted on.
I was wondering just how far into the realm of science fiction my story was going. Apparently not nearly as far as I thought.