Wednesday, March 08, 2006

Transhumanism and the Future of Human Nature

"Transhumanism" has become a popular term for the idea that technological enhancements of humans, animals, and machines will create a superhuman species of beings. Transhumanists believe that advances in genetic engineering, robotics, computer science, pscyhopharmacology, and nanotechnology will improve the physical and mental capacities of human beings to produce a new stage of evolutionary history. A new species of beings far superior to Homo sapiens might evolve from such technologies. Some of the best statements of transhumanism come from Nick Bostrum, James Hughes, and others in the World Transhumanist Association.

As I indicated in my chapter on biotechnology in Darwinian Conservatism, I am skeptical about transhumanism for two reasons. (I am now working on a new book that will elaborate my case against transhumanism.) My first reason is that transhumanism suffers from a Nietzschean utopianism that lacks common sense, because it ignores the ways in which the technologies for altering human traits are limited in both their technical means and their moral ends. My second reason is that, in response to technological change, I favor a stance of libertarian conservatism, one that would allow improvement in human life without the transcendence of human nature expected by the transhumanists.

The technology for enhancing human powers will be limited in its technical means, because complex behavioral traits arise from the intricate interplay of many genes interacting with developmental contingencies and unique life histories to form brains that constantly change as they respond flexibly to changing circumstances. Consequently, precise technological manipulation of human nature to enhance desirable traits while avoiding undesirable side effects will be very difficult if not impossible.

Consider, for example, the matter of human intelligence. One of the central assertions of the transhumanists is that we will soon create "superintelligent" beings that will be as intellectually superior to humans as humans are today to chimpanzees. Notice the extraordinary claims implicit in such an assertion--that we know what "intelligence" is in all of its complexity, that we can reduce intelligence to material causes that can be technically manipulated in precise ways, and that we can use this technical power to increase intelligence beyond anything ever achieved by living beings.

If we ask transhumanists to justify these claims, we get vague assertions about what might happen in the future. For instance, James Hughes, in his book Citizen Cyborg, says this about the future of computer intelligence: "Since computers powerful enough to model human brains should be common in thirty years, those computer models may then be able to run software simulations of our brains and bodies. Presumably these backups of our minds, if switched on, would be self-aware and have an independent existence. This is the scenario known as 'uploading.'"

No one knows how to fully model human brains or how to replicate such models in computers. No one knows how brains and bodies could be simulated in computer software. No one knows how computer software could become self-aware. And yet Hughes can imagine a "scenario" in which all of this ignorance is dispelled based on what he thinks "should" or "may" or "presumably" will happen in thirty years!

Now, of course, there are ways that we can use biomedical technology to protect against mental disabilities. For example, we could completely eliminate the mental retardation from Down syndrome through genetic screening of embryos or other means so that parents could be sure that they would not have children born with an extra 21st chromosome. But although this would be an improvement in human life, it would not transcend human nature by moving us towards "posthuman" beings with superhuman intelligence.

When transhumanists like Hughes predict the coming of "posthuman" humans as the fulfillment of what they think "should" happen, they are expressing not scientific or philosophic reasoning from observable experience but a religious longing for transcendence. Hughes is a Buddhist, and he foresees that the transhuman future will fulfill his Buddhist vision of a "society of enlightened beings as an infinite net, laced with pearls and gems, each enlightened mind a multicolored twinkle that is reflected in every other jewel." Like Friedrich Nietzsche, the transhumanists profess an atheistic materialism, and yet they still yearn for religious transcendence, which drives them to project fantasies of "overmen" and "posthumans" who have escaped the limitations of human nature to enter a heavenly realm of pure thought and immortal bliss.

The transhumanists also ignore how the technology of human enhancement will likely be limited in its moral ends. Human beings act to satisfy their natural desires. The use of technology to enhance human life will be driven by these natural desires. Transhumanists implicitly assume the enduring power of these desires. But if that is the case, then it is hard to see how human nature is going to be abolished if the natural desires endure.

For example, Hughes speaks about "the human needs and desires these technologies will be asked to serve," which include the desires for long, healthy lives, for intelligence and happiness, and the desires for parents to care for the physical and mental flourishing of their children. (All of these desires are included in my list of "twenty natural desires" in Darwinian Conservatism.) But if human beings are always going to be moved by the same natural desires, how does this take us into "posthuman" existence?

If we were really going to enter the "posthuman" realm, we would have to create beings who lacked the natural desires of human beings and who felt no concern for human life as moved by such desires. Such creatures might be superintelligent. But they would also be superpsychopathic predators who would feel no guilt or shame in enslaving or exterminating human beings.

The transhumanists respond to this prospect by explaining that we will have to be careful to instill in these posthuman beings what Nick Bostrom calls "human-friendly values." Hughes explains that we will have to instill by technological devices "sociability and empathy for all sentient beings." For example, we might require the installation of "morality chips." Hughes seems untroubled by the naivete of expecting that we can develop "morality chips" to control the posthumans without any harmful side effects.

Even if we could solve the technical problems in reducing morality to a mere matter of mechanical engineering, we might still wonder why Hughes and the other transhumanists want to preserve human morality if their goal is an absolutely posthuman life. If human morality as rooted in the natural human desires is at the core of human nature, then posthumanity would require the abolition of that morality. If the posthumans are going to be moved by the same natural desires and moral emotions that have always moved human beings, then it would seem that human nature has survived.

As an alternative to the transhumanist stance, I would defend a libertarian conservatism rooted in human nature. I would argue for leaving people free to exercise individual choice in developing and using new technologies to meet human needs and desires. This would allow people to learn by trial and error what is desirable and what is not in the use of such technologies.

Some legal regulation of choice might be required to promote the minimal safety and efficacy of the new technologies and to protect people against force and fraud. But within such a modest regulatory regime, people would have freedom of choice.

The moral standard here would be that a technology is good if it promotes the flourishing of our human nature by satisfying our natural desires. We can best conform to that standard by allowing people free choice in satisfying their desires. Although there will be great diversity in the choices people make, there will be some enduring patterns in their choices that reflect the universality of natural human desires. For example, we can assume that the natural desire for parental care will generally motivate parents to use technology in ways that promote the happiness of their children.

My stance is close to the position taken by Ronald Bailey in his book Liberation Biology. But I depart from Bailey when he moves towards a transhumanist libertarianism that assumes that somehow human nature will be superseded by a new, superior form of life.

I welcome the prospect of technological changes in the human condition that will improve the physical and mental functions of life. But rather than expecting the emergence of a transhuman form of life, I foresee that human nature will not only endure but prevail.

8 comments:

Anonymous said...

As a transhumanist (though one without a Blogger account at the moment), let me point out a few misconceptions...

Re your paragraph on superintelligence: there are several proposals to increase intelligence that do not require a fine-grained understanding of intelligence itself (indeed, some of these take a "bootstrap" approach, on the belief that we cannot understand ourselves precisely without first finding a way to augment ourselves). Further, the claim about increasing intelligence beyond anything ever achieved by living beings is already true, if you clarify "ever" as "previously": certain Internet utilities such as Google, if used proficiently and wisely, allow us to make more intelligent decisions (more precisely, decisions informed by a larger range of information and with more effective results), thus acting as a kind of enhanced intelligence - a kind not available to living beings until quite recently.

Re your paragraph about "No one knows how to...": actually, people are today modelling brains and bodies in computer software. There is room for debate as to whether we could make a traditional computer self-aware, but one backup solution was proposed long ago: model each individual neuron in the brain, and then replace, one neuron at a time, the brain of a volunteer subject. If the neurons are a close enough simulation, the patient's self-awareness (which would be tested every so often) would come through intact; otherwise, the experiment stops at the first concrete sign of a problem.

By the way, note that "volunteer". Very few transhumanists are suggesting that anyone should be upgraded against their will, in part for the very reasons you suggest. It will indeed be hard at first to make sure that any proposed "upgrade" is in fact beneficial for the person being upgraded - so, just as with any technology, you let those who want to, try it out first. If the "upgrade" truly does prove to be beneficial, other people will eventually come to want it of their own free will. (There may be a few holdouts who refuse a certain upgrade even after almost everyone else has it, but they can be allowed to live the way they wish so long as they do not forcibly deny the choice to anyone else. The Amish are a good example of this philosophy in practice today.)

In short, the stance you are taking is in fact one of the more popular forms of transhumanism. In other words, you are a (type of) transhumanist. There are some who suggest that humanity should be replaced or forcibly upgraded, and those you take a stand against. But many transhumanists would agree with you about preserving individual liberties at the same time as improving ourselves. (Besides, transhumanists have enough enemies from those who believe that any such improvement - especially when people freely choose it, and thus take moral responsibility for their own lives - is against God's will or otherwise immoral.)

Kent Guida said...

I certainly HOPE you are correct, but somehow I can't help thinking your skepticism about the technology is sounder than your confidence in the beneficial influence of our natural desires.

In the scope of evolutionary time I'm confident your point of view will prevail, but we could be in for a couple of rough generations before that happens.

When you consider that at least since Hobbes the campaign against thumos has been relentless and largely successful, I can easily imagine large numbers of people being induced to engineer thumos out of their offspring by any available means. True, a society of people with no thumos is not likely to survive very long, but it takes a strong stomach to call that an optimistic view.

After all, socialism in all its forms is as contrary to man's natural desires as the worst forms of transhumanism. That didn't stop the human race from a catastrophic century of socialist experimentation. Nature has won out over socialism now, but it was rough sailing there for quite a while. Transhumanism could become the socialism of some future century. I would certainly prefer that it not happen.

Along the same lines, do you have any thoughts about the theories of Ray Kurzweil? How seriously should one take his predictions and the point of view behind them? Is he something more than just Descartes on steroids?

Larry Arnhart said...

Kent,

Yes, you're right. The yearning for a transhumanist revolution in the 21st century might prove to be just as destructive as the yearning for a socialist revolution in the 20th century.

In both cases, Darwinian conservatism would suggest short-run pessimism but long-run optimism. In the short-run (a century or so!), the utopian longings of the transhumanists can cause as much damage as the utopian longings of the socialists. But in the long-run, human beings will learn from their mistakes.

Both socialists and transhumanists rely on "what if" reasoning that appeals to utopian dreams. What if we could establish communities in which private property, private families, and political rule were abolished, so that people would cooperate for the common good without conflict? What if we could use technology to create superhuman beings with immortal bodies and happy souls?

As much as we might argue that such utopian speculations fail to recognize the enduring limitations of human nature--limited knowledge and limited virtue--the utopian appeal is so strong that we might have to allow people to learn this by trial and error.

Anonymous said...

James Hughes copied ideas from numerous essays, articles and papers on transhumanism by experts who have been writing about transhumanism for more than a decade.

Anonymous said...

Unfortunately, many of you are mistaken. The only person who really got it right was the first commentator, who self-identified as a transhumanist. Allow me to attempt to elucidate. I do not mean to say that your reasoning is flawed, as it is not necessarily. I simply mean to say that your arguments are against straw men. The original post by Larry Arnhart was actually espousing a classic transhumanist argument. Since transhumanism is a humanist value system, it places great importance on reason and choice, and it holds great respect for the informed desires of competent individuals. No transhumanists are proposing that there should be a society-wide "transhuman revolution," because such a thing would controvert the humanist aspect of the philosophy.

As for transhumanism being "utopian" - nothing could be farther from the truth. All the arguments here against transhumanism are against fictionalised, Hollywoodised versions of that term. Transhumanism is, quite simply and quite plainly, all about allowing people, as individuals and as groups, to control the direction their lives collectively take. What differentiates it from classical humanism and posthumanism is its emphasis on the foresight that in the relatively near future (relative in a historical sense), our technology will reach a level of sophistication which will allow us to comprehensively and safely modify living systems. Transhumanism is most assuredly NOT about the abolition of mankind, or of human nature as such - this would be impossible or undesirable, as stated above. Transhumanism is about the improvement of such - certainly, no one would agree that disease is good, or death, or aging, though many might elect these afflictions upon themselves for various reasons (and transhumanists respect these informed choices as valid). Nobody would claim war, killing, ignorance, or extremism are good things, but all of these things are aspects of the human condition: these are our limitations, imposed upon us by the blind hand of evolution. We transhumanists are certainly not advocates of the abolition of all that is human, only the amelioration of all that is bad about it, and then the nurturing of what is good.

A "posthuman" is not some entity so significantly different from us as to be inhuman; if that were the case, it would be called an "alien." A posthuman is simply, as defined by the World Transhumanist Association's authoritative and comprehensive "Transhumanist FAQ," any of the "possible future beings whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards." An addendum in the same paragraph reads: "Care must be taken to avoid misinterpretation. 'Posthuman' does not denote just anything that happens to come after the human era, nor does it have anything to do with the 'posthumous.' In particular, it does not imply that there are no humans anymore." A posthuman is merely some person who has basic abilities we humans could never have due to the particularities of our biological heritage, not some radical divergence that cannot possibly be reconciled with its origination.

We do not currently know how much of this idealistic foresight will eventually be revealed as possibility or practicality, but with the experience of history, we know this much: that SOME improvement WILL be possible, because repeatedly, invariably, and consistently this has always been the case with technology. Sometimes technology goes bad, but to oppose its implementation simply on this ground is to insinuate that we should all revert to a Palaeolithic stage of cultural development just because there have been some cumulatively negative effects in the past (and to ignore that we seem to eventually overcome many of these problems once we realise their effects). It has always been the case that there are unintended consequences, both negative in their restrictive powers and positive in their liberating powers. However, the fact that we are still here and now live in such a sophisticated and affluent state is a strong testament to the fact that despite setbacks, disappointments, and a whole "Pandora's Box" of other artificial problems, we do more good than harm when we put our minds to task (which is, after all, what evolution gave us minds for). It simply seems to be the case that this will become a much more comprehensive enterprise in the future, when the mechanisms of life itself come further under the aegis of human endeavour than they already happen to be.

The predicted capacity for melioration is not available now, and it may remain out of reach into the near future; however, simply because it has not been possible to date does NOT mean that it will never be possible. The difference now is that current avenues of research are leading us in these directions in such a way and at such a pace that we can actually create a realistic prognosis based upon present trends. Until recently, for instance, we had never engineered organisms genetically. Never before have we implanted two-way communications devices into the nervous systems of complex, intelligent animals. Never before have we had drugs to alter mood and personality, and never before have we had the capacity to create microfine structures using photolithography and other techniques. Our computers are becoming exponentially more powerful and have been doing so for decades, and in these other fields, there was little or no progress until a few decades ago anyway.

Transhumanism is not utopian, nor is it coercive - these are two big mistakes commonly made by those who do not understand what transhumanism is. They arise, largely, from the technophobic, impassioned tripe the American entertainment industry typically puts out - Star Trek, The Terminator, and Gattaca. They all hearken back to the Second World War, relying on ridiculous parallels between the coercive, Aryan-utopian pseudoscience of Nazism and contemporary transhumanism. Essentially, the public is afraid of Nazis, not transhumanists, though in effect that understandable apprehension is misdirected towards something that seems superficially similar, mainly in the otherwise innocuous and laudable concepts of meliorism and humanistic positivism. Nazism is essentially anti-humanist, and cannot be compared to a philosophy that holds individual choice and common wealth as its highest material ideals. The problem is simply that the public is SO subconsciously afraid that they stick their fingers in their ears and gladly, blindly march off to war against things they know nothing about. Nobody cares that the Nazi eugenics programme was constructed in deliberate ignorance of even the very incomplete scientific knowledge available at the time. Yes, it is true our knowledge is still very incomplete, but this is irrelevant to transhumanism, because transhumanism does not assume that complete knowledge is what we have. Early 20th century eugenics, especially Nazi eugenics, was based upon the idea that more or less complete knowledge, or a fairly well-defined knowledge base, was at hand, and that it could be utilised to effectively engineer broad-based public programmes to improve society as a whole.
Today, with our greater understanding of evolution, as well as sociology, social psychology, genetics, and anthropology, and armed with our experience of recent history, we know that large-scale programmes, if they are to be implemented at all, cannot simply be instituted without unintended repercussions.

Transhumanists, along with future trans- and posthumans, have much more to fear from militant bioconservatism, as do we all, than the other way around. It is militant bioconservatives who are the new Nazis, because they believe in the right to coerce or regulate choices away from individuals because they and only they deem it necessary to do so. I do not mean to be rude or offensive, nor do I mean to be presumptuous (though I may be doing just that), but I would suggest most of you go out and try, really try, to learn more about transhumanism, and to see the world from a transhumanist perspective. Feel free not to upgrade if that is your wish and you live long enough to have the choice; transhumanists support you in whatever your decisions may be. Transhumanists and reasonable bioconservatives alike believe in the freedom to choose how and when and if one should change one's life within responsible personal limitations. Don't spread ignorance because it's easy, as it's nearly as easy to learn the facts (there is, in fact, a link to the World Transhumanist Association website right on this page). However many questions, misconceptions, or misgivings you may have, there are ten times as many transhumanist articles freely and conveniently available to elaborate upon whatever you presently misunderstand. For starters, I would suggest the aforementioned "Transhumanist FAQ" for a good, clear, plain-English treatment of common misconceptions, many of which are clearly evident in the posts by some present here. The future, whether it comes late or soon, will be what we, the humans of today, make of it, whether we're transhumanist or bioconservative or whatever other polysyllabic terms one may wish to muster.

We ought not to repeat the mistakes of past generations, actively through coercive behaviour, or passively through the lack of coherent opposition to coercion. The plain fact is, unless something happens to stop it, these technologies WILL show up eventually, in some form or another, and we might as well prepare to handle the coming of the dangers and opportunities they bring with dignity and responsibility now. It would be much preferable to take a proactively positive approach to getting things right as they happen, than a reactive one, whether negative or positive, to fixing the consequences.