“I for one welcome our new computer overlords.”
That is what Ken Jennings said when he was defeated in the television game show Jeopardy by Watson, an artificially intelligent machine built by IBM.
Jeopardy is a long-running television game show in which contestants are given clues that are answers to questions, and they must guess the question. The clues are grouped under different categories, and each clue is assigned a monetary value, with the harder clues having higher values. Once a clue is displayed on the board and read by the host, the first of the three contestants to press a buzzer is given the chance to answer. A correct answer is rewarded, and an incorrect answer is punished.
Here are three examples. Under the category “Literary Character APB (All Points Bulletin),” a $600 clue is “Wanted for general evilness; last seen at the Tower of Barad-Dur; it’s a giant eye, folks, kinda hard to miss.” The correct answer is “What is Sauron?”
Under the category “Dialing for Dialects,” a clue for $800 is “While Maltese borrows many words from Italian, it developed from a dialect of this Semitic language.” The correct answer is “What is Arabic?”
Under the category “Church &amp; State,” a clue for $1,600 is “It can mean to develop gradually in the mind or to carry during pregnancy.” The correct answer is “What is gestate?”
Watson correctly answered these clues and many more in playing the game against Ken Jennings and Brad Rutter. Jennings was the all-time champion, having won 74 Jeopardy games in a row, with prize money of $2,520,700. Rutter was the all-time money winner, having won $4,355,102.
Winning Jeopardy requires not just knowledge, speed, and accuracy but also game strategy. The game has three contestants who play three rounds: Jeopardy, Double Jeopardy, and Final Jeopardy. In the first round, Jeopardy, there are 30 clues available, classified under six categories. Each of the six categories contains five clues, valued at $200, $400, $600, $800, and $1,000, with the higher values going to clues that are more difficult to unravel. In Double Jeopardy, there are also 30 clues in six categories, but the values are doubled.
The game begins with the winner of the previous game selecting a clue. The clue is displayed on the board and read by the host. When the host has finished reading the clue, the first contestant to press a buzzer has the chance to answer. A correct answer wins the value of the clue; an incorrect answer is punished by having that value deducted from the contestant's score. So contestants must decide whether their confidence in their answer is strong enough to run the risk of getting it wrong. The contestant who gives the correct answer selects the next clue. When a contestant gives an incorrect answer, the first of the other two contestants to press the buzzer has the chance to answer.
One clue hidden in the Jeopardy round and two hidden in Double Jeopardy are Daily Doubles. With a Daily Double, contestants can bet any amount of their winnings, from $5 up to everything they have won so far. Betting everything gives them the chance to double their winnings, but if they give an incorrect answer, they lose whatever they have bet.
Final Jeopardy is the last clue of the game. As with a Daily Double, contestants can bet all or any part of their winnings, but here all three contestants participate. Each writes an answer and a bet without seeing what the other two are writing. Often, the outcome of the whole game depends on their strategy in Final Jeopardy. The contestant with the largest winnings at the end is the winner.
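To make the scoring arithmetic concrete, here is a minimal sketch in Python of how a contestant's total changes under these rules. The function names (answer_clue, resolve_wager) are my own invention for illustration; this models only the game rules described above, nothing about how Watson itself works.

```python
def answer_clue(score, value, correct):
    """A correct answer adds the clue's value; an incorrect one subtracts it."""
    return score + value if correct else score - value

def resolve_wager(score, wager, correct):
    """Daily Double or Final Jeopardy: the contestant risks a chosen wager
    (per the rules above, anywhere from $5 up to everything won so far)
    instead of the clue's printed value."""
    return score + wager if correct else score - wager

# An $800 clue answered correctly, then a $1,600 clue missed:
score = answer_clue(0, 800, correct=True)        # 800
score = answer_clue(score, 1600, correct=False)  # -800

# A contestant with $3,000 hits a Daily Double and bets it all:
print(resolve_wager(3000, 3000, correct=True))   # 6000: winnings doubled
print(resolve_wager(3000, 3000, correct=False))  # 0: the whole bet is lost
```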
Notice the many subtle, strategic decisions that must be made. Contestants must calculate their bets in Daily Doubles and Final Jeopardy. They must estimate their chances on clues they have not seen. They must weigh the risk of an incorrect answer before they decide to press the buzzer, and they have only seconds to make this decision. They must also anticipate the decisions of their opponents, particularly in Final Jeopardy, where one bet can win or lose the whole game.
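One crude way to see the buzz-or-stay-silent calculation: if we ignore the opponents and the value of later clues (a simplification of my own, not anyone's actual strategy), buzzing has a positive expected payoff only when the contestant judges the chance of being right to be better than even.

```python
def expected_gain_from_buzzing(p_correct, clue_value):
    """Expected score change from buzzing: win the clue's value with
    probability p_correct, lose it otherwise. Positive only when
    p_correct > 0.5 in this deliberately simplified model."""
    return p_correct * clue_value - (1 - p_correct) * clue_value

print(expected_gain_from_buzzing(0.7, 800))  # about 320: positive, worth buzzing
print(expected_gain_from_buzzing(0.4, 800))  # about -160: better to stay silent
```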
IBM had built the chess-playing machine Deep Blue that defeated Garry Kasparov, the reigning world champion in chess, in 1997. This was impressive, but it did not show that AI machines are capable of general intelligence and flexible judgment comparable to that of human beings. Chess is a restricted domain with clear rules and a clear objective (checkmating the King). By contrast, success in playing Jeopardy requires general knowledge of history, culture, literature, and science. It also depends on flexibility in interpreting puns, metaphors, and other nuances of language.
The scientists decided that if they could build an AI machine that could defeat a Jeopardy champion like Ken Jennings, this would show that artificial intelligence was finally moving towards general intelligence like that of human beings. In 2011, Watson did indeed defeat Jennings and Rutter in playing the game.
In that game, Watson did not have the capacities for hearing speech or reading texts, but now it has those capacities. Scientists at IBM want Watson to read massive quantities of medical literature so that it can become a medical diagnostician. It might also read legal texts, so that it can become a legal consultant.
In much of the older AI research, it was assumed that intelligence could be reduced to facts and rules—accumulate lots of factual data and rules for inferring conclusions from those facts. But, in fact, much of what we identify as intelligence is intuitive judgment that is acquired by learning from experience, which cannot be completely reduced to rules and facts.
Watson’s great achievement is that it can learn on its own. It has accumulated massive quantities of data from encyclopedias, novels, newspapers, and all of Wikipedia—the equivalent of thousands of books. Then it surveys this data looking for patterns. It has also surveyed 10,000 old Jeopardy questions and answers, looking for patterns of success and failure.
Machine learning from examples allows machines to acquire knowledge that cannot be reduced to facts and rules. For example, the skills for speech recognition and reading texts cannot be achieved through a simple set of rules. How do we recognize the letter “A”? There are many different fonts in which this letter might be printed, and hand-written letters differ with the handwriting styles of different writers. But if you give an intelligent machine millions of examples of the printed and hand-written letter “A,” and the machine looks for recurrent patterns, it can learn to recognize this letter. Similarly, speakers differ in how they pronounce letters and words, so there is no clear set of rules for identifying spoken letters and words. But if you give an intelligent machine millions of examples of how a certain letter or word is pronounced by different speakers, the machine can learn to identify the patterns.
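The idea can be illustrated with a deliberately tiny sketch: a nearest-neighbor "learner" that stores a few 5x5 pixel examples of two letters and then labels a new, slightly different image by finding the stored example it most resembles. Everything here (the toy images, the pixel-disagreement comparison) is invented for illustration and is nothing like the scale or sophistication of a real recognizer, but it shows the "rule" for recognizing a letter being induced from examples rather than written by hand.

```python
# Toy illustration of learning from examples: a 1-nearest-neighbor classifier
# over tiny 5x5 binary "images" of the letters A and L.

EXAMPLES = {
    "A": [
        "..#..  .#.#.  #...#  #####  #...#",
        ".##..  #..#.  ####.  #...#  #...#",
    ],
    "L": [
        "#....  #....  #....  #....  #####",
        "#....  #....  #....  ##...  #####",
    ],
}

def pixels(image):
    """Flatten a 5x5 image (rows separated by spaces) into a list of 0/1 values."""
    return [1 if ch == "#" else 0 for row in image.split() for ch in row]

def distance(a, b):
    """Count the pixels on which two images disagree."""
    return sum(x != y for x, y in zip(pixels(a), pixels(b)))

def classify(image):
    """Label a new image with the letter of its single nearest stored example."""
    return min(
        ((label, ex) for label, exs in EXAMPLES.items() for ex in exs),
        key=lambda pair: distance(pair[1], image),
    )[0]

# A previously unseen, slightly sloppier A is still closest to the stored As:
unseen_a = "..#..  .#.#.  #...#  ####.  #...#"
print(classify(unseen_a))  # "A"
```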
From his experience in competing against Watson, Jennings decided that Watson was a lot like the human players of Jeopardy. “Watson has lots in common with a top-ranked human Jeopardy player,” Jennings observed. “It’s very smart, very fast, speaks in an uneven monotone, and has never known the touch of a woman.”
Jennings also decided that Watson’s way of solving Jeopardy puzzles was similar to his own:
“The computer’s techniques for unraveling Jeopardy clues sounded just like mine. That machine zeroes in on key words in a clue, then combs its memory (in Watson’s case, a 15-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels ‘sure’ enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy player, but I felt convinced that under the hood my brain was doing more or less the same thing.”
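Stripped to a cartoon, the loop Jennings describes (pull out key words, gather candidate answers associated with them, score the candidates against the clue's context, and buzz only if confidence clears a threshold) might look like the sketch below. The miniature knowledge base, the keyword heuristic, and the scoring are all invented for illustration; Watson's real system, built on IBM's DeepQA architecture, was vastly more elaborate.

```python
# A cartoon of the question-answering loop Jennings describes. The "knowledge
# base" and the confidence scoring are invented for illustration only.

KNOWLEDGE = {
    # candidate answer -> words it is associated with in the toy "data bank"
    "Sauron":  {"evil", "tower", "barad-dur", "eye", "mordor"},
    "Saruman": {"evil", "tower", "wizard", "isengard"},
    "Cyclops": {"giant", "eye", "greek", "myth"},
}

def key_words(clue):
    """Crudely 'zero in on key words': keep lowercase terms of three or more letters."""
    cleaned = "".join(ch if ch.isalnum() or ch in "-' " else " " for ch in clue.lower())
    return {w for w in cleaned.split() if len(w) >= 3}

def score(candidate, clue_words):
    """Confidence: the fraction of the candidate's known associations found in the clue."""
    associations = KNOWLEDGE[candidate]
    return len(associations & clue_words) / len(associations)

def answer(clue, buzz_threshold=0.5):
    """Pick the best-scoring candidate and buzz only if its confidence clears the bar."""
    clue_words = key_words(clue)
    best = max(KNOWLEDGE, key=lambda c: score(c, clue_words))
    confidence = score(best, clue_words)
    return (best, confidence) if confidence >= buzz_threshold else (None, confidence)

clue = "Wanted for general evilness; last seen at the Tower of Barad-Dur; it's a giant eye, folks"
print(answer(clue))  # ('Sauron', 0.6): confident enough to buzz
```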
But does Watson really think? John Searle answered no, the day after Watson won the Jeopardy competition. “IBM invented an ingenious program—not a computer that can think,” he declared. “Watson did not understand the questions, nor its answers, nor that some of its answers were right and some wrong, nor that it was playing a game, nor that it won—because it doesn’t understand anything.”
Some computer scientists have responded to this question of whether a machine can think by asking, “Can a submarine swim?” Submarines don’t swim the way fish swim or the way some reptiles and mammals swim. But in some ways, submarines swim better than fish, reptiles, and mammals. Similarly, Watson certainly doesn’t think the way human beings or other animals think, but it can solve problems and answer difficult questions about the world, in ways that have persuaded many people that it really is thinking.
But can we trust our perception that a machine is thinking? Alan Turing's "imitation game" assumes that if a machine could successfully imitate a human thinker, so that we could not distinguish between the machine and humans through carrying on an exchange of questions and answers with them, that would show that the machine had achieved something like human-level intelligence. IBM is hoping to show in a few years that Watson can pass the Turing Test. Searle has objected, however, that this is not truly a test of human-level intelligence.
The scientists at IBM who built Watson admit that it does not have one crucial feature of human thinking—emotion or feeling. It did not feel any fear of failure when it played Jeopardy, and it did not feel any pride in winning the game. The scientists behind Watson did feel such emotions.
When the IBM scientists were testing Watson, they set up Jeopardy games where Watson was playing against IBM employees who were good Jeopardy players. When a comedian hired to host practice matches ridiculed Watson’s more obtuse answers (Rembrandt rather than Pollock for a “late ’40s artist”), David Ferrucci, director of the Watson program, complained: “He’s making fun of a defenseless computer.” When Ferrucci brought his daughters to see one of the practice sessions, one of the girls asked: “Daddy, why is that man being so mean to Watson?”
Does human-level intelligence require not just abstract reason but also emotional drives, because human minds care about what they’re thinking and doing? How could emotion be put into a machine?
One possibility is that an artificial brain might have to be put into an artificial body with something like a neuroendocrine system that could generate emotional experience.
Another possibility is building cyborgs—cybernetic organisms—in which human brains and bodies have an interface with intelligent machines. Thus, human intelligence is augmented by machines, but it’s combined with all the normal emotional drives of human beings. In a way, many human beings today have already become cyborgs because the intelligence of their brains is augmented by machines through interfaces with computers and smart phones. Over the next few years, that brain-machine interface will be put inside the human brain and body through neural implants.
Right now, the intelligence of many of us has been augmented by our computers and smart phones. We converse with our machines, and this conversation occurs through brain-machine interfaces in our typing fingers, our speaking voices, our hearing ears, and our seeing eyes. As these interfaces move to the surface of our bodies (as in Google Glass and electronic skin implants) and then inside our brains, we will have ever more direct access to all of human knowledge. Google Earth will give us instant views of every place on Earth. GPS will ensure that we are never lost. Google Books will allow us to download every book that has ever been published. When we run out of storage space in our heads, we can store our knowledge in Google's cloud.
This must be what Google cofounder Larry Page had in mind when he said:
"People always make the assumption that we're done with search. That's very far from the case. We're probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything . . . some people could call that artificial intelligence. . . . The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly."
Page has said that the ultimate goal is for us to merely think of a question, and then we instantly hear or see the answer.
This understanding of everything in the world that cyborgs could have must include understanding emotion. As I indicated in my post on Morris Hoffman's The Punisher's Brain, Judge Hoffman thinks that most trial judges show "our evolved retributive feelings" when they punish. "We get a gut, retributive, feeling about the sentence, and then move in one direction or another off that gut feeling based on information about the criminal that affects our views about special deterrence--the likelihood he will reoffend and the crimes he is likely to commit." So if IBM wants to teach Watson how to be a good judge, they might have to find a way to instill the "gut feelings" that are part of our evolved human nature.
We must wonder about the wisdom of our moving under the rule of "our new computer overlords." This has already begun. Most of the buy-sell decisions on Wall Street are being made by computers acting autonomously. Most of the infrastructure network of North America (electricity, water, and transportation) is controlled by computer systems connected to the Internet. Doctors are adopting expert computer systems for diagnosing their patients. The scientists at IBM are improving Watson so that it can make decisions for us in many areas of life. Much of the research on robot intelligence is funded by DARPA (The Defense Advanced Research Projects Agency), which is aimed at creating autonomous robotic weapons. The United States military already relies on many weaponized robots.
In his survey of the latest research in AI directed to producing AGI (artificial general intelligence) and then ASI (artificial super-intelligence), James Barrat (Our Final Invention) concludes that there's no reason that ASI will care about human beings, that such super-intelligence will be incomprehensible to us, and that this will lead to the extinction of our species. He also indicates, however, that only a few AI researchers (like Stephen Omohundro and Eliezer Yudkowsky) share his pessimistic vision of the perils of ASI. Most of the leading proponents of advanced AI research (like Ray Kurzweil and Rodney Brooks) are optimistic in their utopian vision of ASI as allowing human beings to finally fulfill the human dream, expressed by early modern philosophers and scientists like Descartes and Bacon, of completely mastering nature for human benefit, even including human immortality.
1 comment:
I agree with your observation that at best, a computer can perform functions that only simulate human understanding.
But you also point out that beating a human at Jeopardy requires an ability to work with metaphors. The work of Bergen and Lakoff suggests that metaphors are essential in human thought.
This, in my opinion, certainly makes the question more difficult to deal with. Then again, the more we have to wrestle with the question, the better we can understand ourselves.