I have argued for explaining the human mind as an emergent property of the human brain once it passed over a critical threshold of size and complexity in the evolution of the primate brain. If that is true, then one might wonder whether technological evolution could do for robots what biological evolution has done for humans. Is it possible that once computer technology passes over a critical threshold of complexity, comparable to that of the human brain, a mechanical brain could equal or even surpass the intelligence of human beings?
And if that is possible, what moral, legal, and political questions would this raise? Must we soon be ruled by robots who are smarter than us? Or will we use this technology of artificial intelligence to extend our human intelligence, so that we will be as super-intelligent as our machines? Will our super-intelligent robots demand to be treated as persons with rights? Will they have a morality like ours? Or will they be moved by a will to power that is beyond human good and evil?
We can anticipate that such questions about advances in artificial intelligence will become the deepest political questions of the twenty-first century.
I have been thinking about this in my course this semester--"Biopolitics and Human Nature"--because the last part of the course is on the debate over artificial intelligence. We're reading Alan Turing ("Computing Machinery and Intelligence," Mind, 1950), John Searle ("What Your Computer Can't Know," New York Review of Books, October 9, 2014), Ray Kurzweil (The Singularity Is Near, 2005), and James Barrat (Our Final Invention, 2013).
Many years ago, when I first began thinking about this, I was persuaded by Searle's famous Chinese Room argument against the Turing Test for human-level intelligence in a machine. But now, I think Kurzweil is right in arguing that Searle's Chinese Room doesn't refute the Turing Test.
The Turing Test is the common name today for what Turing originally called the Imitation Game. He proposed this as the best test of whether a digital computer has achieved intelligence comparable to human intelligence. (Actually, Descartes proposed a similar test for machine intelligence in his Discourse on Method.) Put a computer and a human being in separate rooms. A human interrogator then tries to detect which one is the computer by asking questions typed onto pieces of paper slipped under the doors of the rooms. The computer and the human being answer the questions on pieces of paper, the computer pretending to be a human being, and the human being trying to show that he is the human being. If the computer can communicate in language in ways that a good human speaker of the language would interpret as showing human intelligence, then the computer has passed the test. Writing in 1950, Turing thought that digital computers would begin to pass the test by the year 2000.
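For readers who want the structure of the game spelled out, here is a minimal, purely illustrative Python sketch. The two canned replies attributed to the machine are borrowed from Turing's own specimen dialogue in the 1950 paper; everything else--the respondent functions, the "human" reply, and the interrogator's crude guessing rule--is a hypothetical invention for illustration, not anything Turing proposes.

```python
import random

def machine_respondent(question: str) -> str:
    """A deliberately crude stand-in for a chatbot: keyword-matched replies."""
    q = question.lower()
    if "sonnet" in q:
        # Reply taken from Turing's specimen dialogue in the 1950 paper.
        return "Count me out on this one. I never could write poetry."
    if "add" in q:
        # The arithmetic answer from Turing's specimen dialogue.
        return "105621."
    return "I would rather not say."

def human_respondent(question: str) -> str:
    """Canned 'human' answers, standing in for a real person behind the other door."""
    return "That is a hard question; give me a moment to think it over."

def imitation_game(questions):
    # Randomly hide which door (A or B) conceals the machine.
    doors = {"A": machine_respondent, "B": human_respondent}
    if random.random() < 0.5:
        doors = {"A": human_respondent, "B": machine_respondent}

    # Collect the typed question-and-answer slips from both rooms.
    transcript = [(door, q, respondent(q)) for q in questions
                  for door, respondent in doors.items()]

    # A naive interrogator strategy (hypothetical): guess that the door giving
    # the shorter answers hides the machine. Turing's point is that the judge
    # has nothing to go on but this transcript.
    total_len = {d: sum(len(r) for dd, _, r in transcript if dd == d) for d in doors}
    guess = min(total_len, key=total_len.get)
    truth = next(d for d, f in doors.items() if f is machine_respondent)
    return transcript, guess, truth

if __name__ == "__main__":
    questions = ["Please write me a sonnet on the subject of the Forth Bridge.",
                 "Add 34957 to 70764."]
    transcript, guess, truth = imitation_game(questions)
    for door, q, r in transcript:
        print(f"[{door}] Q: {q}\n[{door}] A: {r}")
    print(f"The interrogator guesses the machine is behind door {guess}; "
          f"it is actually behind door {truth}.")
```

The point of the sketch is only that the interrogator's verdict rests entirely on the exchanged text, which is what gives the argument from consciousness, taken up next, its force.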
In his article, Turing anticipated all of the major objections to his reasoning that have been developed over the past decades. One of those objections was the argument from consciousness. He quotes from a Professor Jefferson: "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain--that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants" (445-46).
Turing responds to this argument by suggesting that it is unreasonable, because it would throw us into a solipsism that none of us would accept: "This argument appears to be a denial of the validity of our test. According to the most extreme form of this view, the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise, according to this view, the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold, but it makes communication of ideas difficult. A is liable to believe 'A thinks, but B does not,' whilst B believes 'B thinks, but A does not.' Instead of arguing continually over this point, it is usual to have the polite convention that everyone thinks" (446).
Consciousness is mysterious, Turing observes, because while we all have direct subjective access to our own thoughts and feelings, we have no direct access to the conscious subjective experiences of anyone else. We can only indirectly infer the consciousness of other people (or of animals) from their behavior. We must do the same in inferring the conscious thinking of machines. So Turing's test for the intelligence of digital computers is essentially the same test that we all employ every day to infer the conscious thoughts and feelings of other human beings.
Searle's Chinese Room argument is his way of stating the argument from consciousness as refuting the Turing Test. And Kurzweil's response to Searle is a restatement of Turing's response to Jefferson's objection.
Searle insists that his Chinese Room Argument shows that a computer could pass the Turing Test without having any conscious understanding of anything. He explains: "Imagine someone who doesn't know Chinese--me, for example--following a computer program for answering questions in Chinese. We can suppose that I pass the Turing Test because, following the program, I give the correct answers to the questions in Chinese, but all the same, I do not understand a word of Chinese. And if I do not understand Chinese on the basis of implementing the computer program, neither does any other digital computer solely on that basis."
According to Searle, this shows that a computer programmed for communicating in language has a syntax but no semantics. The computer can manipulate linguistic symbols according to rules of syntax, but it has no semantic understanding of the meaning of what it says, because it has no conscious experience of anything, no subjective thoughts or feelings.
Kurzweil responds by arguing that Searle's Chinese Room won't work, because a computer could not perfectly simulate understanding Chinese--pass a Chinese Turing Test--if it did not really understand Chinese. After all, for a human being to persuade us that he understands a language, he must actually understand it.
Human brains can understand language because human brains are amazingly complex. Any computer that could understand language would have to be as complex as the human brain. So far, no computer has ever reached that level of complexity. But, in principle, this is possible. Once we understand the complexity of the human brain, and once a computer has replicated that complexity, Kurzweil argues, we will recognize the breakthrough when a computer can persuade us, through language and other intelligent behavior, that it has conscious thoughts and feelings comparable to those of human beings.
Oddly enough, Searle actually concedes this possibility of creating an "artificial brain." Searle writes: "An artificial brain has to literally create consciousness, unlike the computer model of the brain, which only creates a simulation. So an actual artificial brain, like the artificial heart, would have to duplicate and not just simulate the real causal powers of the original. In the case of the heart, we found that you do not need muscle tissue to duplicate the causal powers. We do not know enough about the operation of the brain to know how much of the specific biochemistry is essential for duplicating the causal powers of the original. Perhaps we can make artificial brains using completely different physical substances as we did with the heart. The point, however, is that whatever the substance is, it has to duplicate and not just simulate, emulate, or model the real causal powers of the original organ. The organ, remember, is a biological mechanism like any other, and it functions on specific causal principles."
Kurzweil agrees with this: human-level artificial intelligence will have to arise in an artificial mechanical brain that copies the organization and causal powers of the human brain. The evolution of the human brain from the primate brain shows that as the primate brain expanded in size and complexity, it eventually passed over a critical threshold in which new patterns emerged that gave rise to human conscious intelligence.
Like Turing, Kurzweil admits that consciousness is mysterious: it somehow emerges from the brain, but unlike the brain, it is not objectively observable or measurable, because the subjective experience of consciousness can be directly known only in one's own mental experience. Deciding whether entities outside of us are conscious depends on an indirect inference from their behavior, and thus we cannot prove through objective tests that entities outside of us are conscious.
This explains why the scientific study of consciousness is so difficult. Indeed, Kurzweil concludes that the question of consciousness is ultimately not a scientific question at all, because it's a question that cannot be answered finally through objectively measurable tests.
But despite this mystery of consciousness, we can scientifically observe the evolutionary history of the brain and the evolutionary emergence of the human mind in the brain. We are still left wondering, however, whether the next step in evolution is the technological evolution of human-level intelligence in an artificial brain.
2 comments:
Re: Kurzweil's argument
If we argue that language is complex enough that only human beings can grasp it, and we also grant that consciousness is an emergent property, aren't we still missing something when trying to prove that a robot that is complex enough to simulate language must also be complex enough to be conscious? Or is language somehow tied to consciousness in a way that makes the two inseparable?
When I was young, I wrote a bad science fiction story set far into the future in which some evolved apes began showing signs of hominid intelligence. The questions of how they should be treated were very knotty, and, if I recall the story correctly, not very well handled.