Science Good/Bad - Michigan7 2014 - Grams



transhumanism

transhumanism good

transhuman happening now

Transhumanism is inevitable in the status quo - only a risk they derail it

Bailey 11 (Ronald, science correspondent for Reason magazine, The Case for Enhancing People, first delivered at the second conference in the series Stuck with Virtue, sponsored by the University of Chicago's New Science of Virtues project, April 2011, http://www.thenewatlantis.com/publications/the-case-for-enhancing-people)//dping

Does the enhancement of human physical and intellectual capacities undermine virtue? In answering this question, we must first make a distinction between therapy and enhancement. Therapeutic technologies are meant to restore impaired or degraded human capacities to some more normal level. By contrast, any enhancements would alter human functioning beyond the normal. We must also keep in mind that, whatever we think about them, enhancements are going to happen. Age-retardation or even age-reversal are prime targets for research, but other techniques aimed at preventing disease and boosting memory, intelligence, and physical strength will also be developed. Much worried attention is focused particularly on the possibility of achieving these and other enhancements through genetic engineering; that will indeed one day happen. But the fastest advances in enhancement will occur using pharmaceutical and biomedical interventions to modulate and direct the activity of existing genes in the bodies of people who are already alive. These will happen alongside the development of human-machine interfaces that will extend and boost human capacities. Contrary to oft-expressed concerns, we will find, first, that enhancements will better enable people to flourish; second, that enhancements will not dissolve whatever existential worries people have; third, that enhancements will enable people to become more virtuous; fourth, that people who don't want enhancement for themselves should allow those of us who do to go forward without hindrance; fifth, that concerns over an enhancement divide are largely illusory; and sixth, that we already have at hand the social technology, in the form of protective social and political institutions, that will enable the enhanced and the unenhanced to dwell together in peace.

science k2 transhuman

Systems of scientific rationality are key to the success of transhumanism.

Bostrom 5 (Nick, British Academy Research Fellow @ Oxford, PhD in philosophy from LSE, previously professor at Yale University in the Institute for Social and Policy Studies, Transhumanist Values, Review of Contemporary Philosophy, Vol. 4, May 2005, www.nickbostrom.com/ethics/values.html)//dping

4. Basic conditions for realizing the transhumanist project. If this is the grand vision, what are the more particular objectives that it translates into when considered as a guide to policy? What is needed for the realization of the transhumanist dream is that technological means necessary for venturing into the posthuman space are made available to those who wish to use them, and that society be organized in such a manner that such explorations can be undertaken without causing unacceptable damage to the social fabric and without imposing unacceptable existential risks. Global security.
While disasters and setbacks are inevitable in the implementation of the transhumanist project (just as they are if the transhumanist project is not pursued), there is one kind of catastrophe that must be avoided at any cost: Existential risk - one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[6] Several recent discussions have argued that the combined probability of the existential risks is very substantial.[7] The relevance of the condition of existential safety to the transhumanist vision is obvious: if we go extinct or permanently destroy our potential to develop further, then the transhumanist core value will not be realized. Global security is the most fundamental and nonnegotiable requirement of the transhumanist project. Technological progress. That technological progress is generally desirable from a transhumanist point of view is also self-evident. Many of our biological shortcomings (aging, disease, feeble memories and intellects, a limited emotional repertoire and inadequate capacity for sustained well-being) are difficult to overcome, and to do so will require advanced tools. Developing these tools is a gargantuan challenge for the collective problem-solving capacities of our species. Since technological progress is closely linked to economic development, economic growth - or, more precisely, productivity growth - can in some cases serve as a proxy for technological progress. (Productivity growth is, of course, only an imperfect measure of the relevant form of technological progress, which, in turn, is an imperfect measure of overall improvement, since it omits such factors as equity of distribution, ecological diversity, and quality of human relationships.) The history of economic and technological development, and the concomitant growth of civilization, is appropriately regarded with awe, as humanity's most glorious achievement. Thanks to the gradual accumulation of improvements over the past several thousand years, large portions of humanity have been freed from illiteracy, life-expectancies of twenty years, alarming infant-mortality rates, horrible diseases endured without palliatives, and periodic starvation and water shortages. Technology, in this context, is not just gadgets but includes all instrumentally useful objects and systems that have been deliberately created. This broad definition encompasses practices and institutions, such as double-entry accounting, scientific peer-review, legal systems, and the applied sciences.

transhuman good bioterror/pathogens

Transhumanism is critical to develop posthumans that prevent deadly pathogens - they can be created by bioterrorists or by accident, leading to a high probability of human extinction

Walker 9 (Mark, assistant professor at New Mexico State University, Richard L.
Hedden Chair of Advanced Philosophical Studies, current primary research interest is in ethical issues arising out of emerging technologies, e.g., genetic engineering, advanced pharmacology, artificial intelligence research and nanotechnology, H+: Ship of Fools: Why Transhumanism is the Best Bet to Prevent the Extinction of Civilization, Metanexus, February 5, 2009, http://www.metanexus.net/essay/h-ship-fools-why-transhumanism-best-bet-prevent-extinction-civilization **We dont endorse ableist language)//dping Transhumanism is the thesis that we can and ought to use technology to alter and improve human biology.1 Some likely targets for the technological makeover of human nature include making ourselves smarter, happier, longer-lived and more virtuous. The operative assumption here of course is that intelligence, moods, longevity and virtues each have deep roots in our biology. By altering biology transhumanists propose to improve human nature to the point of creating a new genus: such as posthumans.2,3 Notice that transhumanism encompasses a moral thesis. Transhumanism does not say that we will create posthumans, rather, it makes a moral claim: we ought to create posthumans.4 The hint of an argument based on the accrual of moral benefits is perhaps obvious from what has been said: to the extent that we value the development of intellectual, emotional and moral virtue5, becoming posthuman is imperative. I wont pursue this line of argument here directly. Rather, I want to explore the objection that transhumanism is an ill-advised experiment because it puts us at unnecessary risk. My reply will be that creating posthumans is our best bet for avoiding harm. In a nutshell, the argument is that even though creating posthumans may be a very dangerous social experiment, it is even more dangerous not to attempt it: technological advances mean that there is a high probability that a human-only future will end in extinction. 1. Unprecedented Dangers of 21st Century Technologies In a widely read piece, Why the Future Doesnt Need us, Bill Joy argues that one of the main differences between previous technology and 21stcentury technology is the possibility of self-replication.6 Another relevant aspect of 21st century technologies is the fact that they leave very little industrial footprint. For example, it is reasonably easy to monitor which countries are part of the nuclear club with the aid of spy satellites. The size of the industrial infrastructure necessary to make nuclear bombs is such that a country has to go to extraordinary lengths to hide their activities should they wish to keep a nuclear development program secret. Not so with genetic technologies. True, it helps to have millions of dollars in equipment and a well-trained research team to conduct genetic experiments, but it is not necessary. Even as I write this, private citizens are using genetic technologies in their basements and their garages with no government oversight. This burgeoning movement is referred to as biohacking. For a few thousand dollars and a small room to work, one can become a biohacker. A recent article in the Boston Globe explains: The movement is getting much of its steam from synthetic biology, a field of science that seeks to make working with cells and genes more like building circuits by creating standardized biological parts. 
The dream, already playing out in the annual International Genetically Engineered Machine competition at MIT, is that biology novices could browse a catalog of ready-made biological parts and use them to create customized organisms. Technological advances have made it quite simple to insert genes into bacteria to give them the ability to, for example, detect arsenic or produce vitamins.7 In some ways this is a feel good story in that it promises the democratization of science. Just as computer do-it-yourselfers started to democratize the computer industry in the 1970s, so too will genetic do-it-yourselfers democratize the biological sciences. However, the potential down side is noted in the same article: But the work also raises fears that people could create a deadly microbe on purpose, just as computer hackers have unleashed crippling viruses or broken into government websites. Worries here are fueled by the fact the information about how to construct novel pathogens in animal models is openly published. Little original insight would be needed to apply the same strategies to constructing novel human pathogens.8 The analogy with computer hacking is, in some ways, apt. We are all familiar with computer hackers taking down our favorite websites, or having a virus-infected computer slow to a crawl. On the other hand, the analogy seems to fail to illuminate the magnitude of the risk biological viruses designed by biohackers present. I can live without my computer or my favorite website (at least for a while, and I wouldnt be very happy) but a biohacker who creates a pathogen or a series of pathogens may wipe out human civilization. Sometimes it is suggested that there are always survivors when a virus or some other pathogen attacks a population, and so even the worst form of bioterrorism will not kill off the human species. In response, it should be pointed out that it is simply empirically false that there is no evidence that pathogens can cause the extinction of a species.9 A bio-misanthropist who was worried that a virus was not virulent enough to wipe out the entire human population might be well-advised to create two or more viruses and release them simultaneously. Furthermore, it is not clear that one would need to kill every last human to effectively bring civilization to a halt for the foreseeable future.10 2. Intellectual and Moral Foibles Fortunately, we are short on examples of biohackers, terrorist organizations or states creating pathogens that destroy human civilization. To illustrate the general points I want to make, a somewhat analogous case involving a naturally occurring rabbit virus will have to serve. Reasoning that it would be nice to have some rabbits to hunt, in 1859 Thomas Austin released 24 rabbits into the wild in Australia. The old adage, be careful what you wish for, seems apropos, for by 1900 there were over two million rabbits in Australia. Through competition this invasive species is estimated to have caused the extinction of about 12% of all Australian mammals. The massive rabbit population has also had a continuing and significant impact on Australian agriculture. To combat the rabbit problem, in 1989 scientists in Australia imported a sample of a deadly virus, Rabbit calicivirus (RCD), from China. A number of biological technologies were used during intense clinical testing of RCD on rabbits and other species in the early 1990s. Results from this research showed that there was no indication of transmission to other species. 
So, in 1994 a high security test site for field trials of RCD was established on Wardang Island off the coast of Southern Australia. As expected, the test rabbits in the quarantine area quickly became infected with the disease, and so this part of the field trial was a success. However, in October 1995, unexpectedly the virus broke the containment area on Wardang Island and infected the islands entire rabbit population beyond the test site. On October 10th 1995, the Australian governments premier scientific agency, the CSIRO, issued the following communiqu concerning RCD: Containment plans are in place in the unlikely event of spread to the mainland. What the experts at the CSIRO described as an unlikely event transpired shortly thereafter: rabbits across many parts of the Australian mainland became infected and died.11 And non-government sanctioned spreading of the virus did not stop there: Private individuals in New Zealand, against the express wishes of their government, illegally imported and released RCD leading to the death of much of the local rabbit population. Animated public debate followed the incident with a certain amount of consensus that there was a moral failure here, although there was disagreement as to how blame was to be apportioned. Some blamed the individuals who imported the RCD against the express wishes of the government; others blamed the government for not supplying funds for more conventional methods of rabbit population control. There are two lessons to be drawn from this example. Ignorance can lead to biological mishaps. The Australian scientists thought they had their experiment contained, but they failed twice. First was the release from the quarantine area on Wardang island, and second the escape of the virus to the mainland. The second point is that moral failures can also lead to biological disasters. With respect to biohackers the point then can be made that through some unforeseen problem, a deadly biological agent like a virus or bacteria escapes into the environment. Also there is the worry that some misanthropist biohacker may hope to destroy all of humanity. (And should it be objected that this would lead to the demise of the biohacker himself, we are all too familiar with deranged killers murdering dozens of innocent victims only to then turn the weapon on themselves). 3. Transhumanism: The Most Dangerous Experiment Save Any Other I want now to turn to our options for dealing with civilization ending threats precipitated by 21stcentury technologies. Broadly construed, our options appear to be three: we eliminate the technologies, as suggested by Joy; we permit them for world-engineering purposes only; or we permit them for world- and person-engineering purposes. Ill refer to these, respectively, as the relinquishment, steady-as-she goes and transhumanist futures. I want to say a bit more about these options, along with some assessment of the likelihood that they will succeed in saving us from a civilization-ending event. Option: relinquishment. Starting with relinquishment, let us think first about what it means to forgo any use of 21st century technologies for both world-engineering and person-engineering purposes. Notice here that the question is not whether we ought to permit the development of 21st century technologies. The reason of course is that it is already too late for that. We have developed at least one, genetic engineering, to the point that it potentially could be used for the purpose of ending civilization. 
Now it may be thought that these extrapolations about the possible effects of genetic engineering are a little histrionic. Perhaps, but the fact of the matter is that very few have studied the problem of civilization extinction. Among those who have thought about the problem in any detail, there is almost universal agreement that the probability here is significant, and certainly not where we would like it, namely at 0.12,13 And it is not just tweedy academics who take seriously the possibility of bioterrorism and other technological disasters. On December 5th2008, while I was in the middle of writing this paper, the following headline appeared in my inbox: U.S. intel panel sees WMD attack in next five years.14 Former senators Bob Graham and Jim Talen headed the panel. According to the report, the panel acknowledges that terrorist groups still lack the needed scientific and technical ability to make weapons out of pathogens or nuclear bombs. But it warns that gap can be easily overcome, if terrorists find scientists willing to share or sell their know-how.15 Also of relevance is that the report suggests, the United States should be less concerned that terrorists will become biologists and far more concerned that biologists will become terrorists." And our concern should only be increasing, since every year it is a little easier to acquire and apply the relevant technical advancements. So, relinquishment requires us to not only stop future developments but also to turn back the hands of time, technologically speaking. If we want to keep ourselves completely immune from the potential negative effects of genetic engineering we would have to destroy all the tools and knowledge of genetic engineering. It is hard to imagine how this might be done. For example, it would seem to demand dismantling all genetics labs across the globe and burning books that contain information about genetic engineering. Even this would not be enough since knowledge of genetic engineering is in the minds of many. What would we do here? Shoot all those with graduate and undergraduate degrees in genetics and allied disciplines along with all the basement biohackers we can roundup? Think of the alcohol prohibition experiment in the early part of the century in the U.S. Part of the reason that prohibition was unsuccessful was because the knowledge and rudimentary equipment necessary for brewing was ubiquitous. It is these two features, availability of knowledge and equipment, that has made biohacking possible. And where would a relinquishment policy be implemented? If it is truly a viable and long-term strategy then relinquishment will have to be adopted globally. Naturally very few countries with advanced genetic technologies are going to be enthusiastic about genetically disarming unless they have some pretty good assurances that all other countries will also genetically disarm. This leads us to the usual disarmament impasse. In addition to national interests, the relinquishment strategy has to content with large commercial and military interests in developing and using 21stcentury technologies. I would rate the chances for relinquishment as a strategy pretty close to zero. In addition to the aforementioned problems, it seems to fly in the face of the first law of the ethics of technology: technology evolves at a geometric rate, while social policy develops at an arithmetical rate. In other words, changing societal attitudes takes a much greater time than it does for technology to evolve. 
Think of the environmental movement. It is almost fifty years since the publication of The Silent Spring, a book often linked with the start of the contemporary environmental movement. Only now are we seeing the first portends of a concerted international effort to fight global warming. And unlike polluters, genetic research has the potential to be virtually invisible, at least until disaster strikes. Bill Joy, as noted, calls for relinquishment. But how relinquishment is to be implemented, Joy does not say. It is much like the environmentalist who proposes to stop environmental degradation by stopping pollution. As far as a concrete plan goes, it is missing just one thing: a concrete plan. Option: steady-as-she-goes. The only two options that seem to have any likelihood of being implemented are the steady-as-she-goes and transhumanism. Recall, the steady-as-she-goes option says that it is permissible to develop 21st century world-engineering technologies, but not to use them for person-engineering purposes. The name stems from the fact that, as noted, there are enormous resources devoted at present to the development of genetic and nanotechnologies for world-engineering purposes, and so the proposal is to continue with our current norms. There are at least two problems with the steady-as-she goes policy. First, there is the worry about how effective a ban on person-engineering is likely to be. The likelihood of an effective ban will depend on what policies are adopted, and little thought has gone into this. A notable exception here is Fukuyama who has made some suggestive recommendations as to how national and international agencies might be built to contain the development of person-engineering.16 If implemented, Fukuyamas recommendations may well reduce the number of attempts to person-engineer, but Fukuyama has little to say about the seemingly inevitable underground activities of person-engineering. The problem then is that Fukuyamas version of the steady-as-she-goes strategy may reduce the gross number of person-engineering experiments, but the outcomes of the underground experiments may prove less benign. Unlike what transhumanists propose, a rogue group working clandestinely in opposition to a world ban on person-engineering is less likely to be worried about ensuring their posthuman progeny are as virtuous as possible. The second, and for our purposes, primary problem with the steady-as-she-goes strategy is that it says nothing about how we are to address the dual-use problem: the development of 21stcentury technologies for peaceful purposes necessarily bring with them the prospect that the same technology can be used for civilization ending purposes. While I dont agree with Joy about what to do about these threats, I am in full agreement that they exist, and that we would be foolhardy to ignore them. Interestingly, this is where Fukuyama is weakest: he has almost nothing to say about the destructive capabilities of 21stcentury world-engineering, and how the institutions he proposes would control their deadly use. A world where we continue to develop 21stcentury technologies means that the knowledge and limited equipment necessary for individuals to do their own world-engineering, and so potentially their own civilization ending projects (accidently or purposively), will only increase. So, at worst Fukuyamas proposal is foolhardy, at best it is radically incomplete. Option: transhumanism future. 
The transhumanist future is one where both world-engineering and person-engineering are permitted. Specifically, as noted, the transhumanist view is that we should create persons who are smarter and more virtuous than we are. The application to our problem is obvious: our fears about the misuse of 21st century technology reduce down to fears about stupidity or viciousness. Like the Australian research scientists, the worry is that we may be the authors of an accident, but this time one of apocalyptic proportions: the end of civilization. Likewise, our moral natures may also cause our demise. Or, to put a more positive spin on it, the best candidates amongst us to lead civilization through such perilous times are the brightest and most virtuous: posthumans.17 It is worth pointing out that there is no need to deny what Fukuyama claims: there are real dangers in creating posthumans. The problem with the transhumanist project, says Fukuyama, comes when we think seriously about what characteristics to change: Our good characteristics are intimately connected to our bad ones: If we werent violent and aggressive, we wouldnt be able to defend ourselves; if we didnt have feelings of exclusivity, we wouldnt be loyal to those close to us; if we never felt jealousy, we would never feel love. Even morality plays a critical function in allowing our species as a whole to survive and adapt. Modifying any one of our key characteristics inevitably entails modifying a complex, interlinked package of traits, and we will never be able to anticipate the ultimate outcome.18 So, although Fukuyama sees the pull of transhumanism, how it might look downright reasonable, the fact that traits we might hope to modify are interconnected means that we will never be able to anticipate the ultimate outcome. What Fukuyama fails to address in any systematic way is the fact that there are even greater dangers associated with not creating posthumans. So, a prudential and moral reason for creating posthumans is not that this is without risk, rather, it is less risky than the alternative here: steady-as-she-goes. If forced to put some hard numbers to these scenarios, I would venture to suggest there is a 90% chance of civilization surviving the next two centuries if we follow the transhumanist path, while I would put the chances of civilization surviving a steady-as-she-goes policy at less than 20%. But then, I am an optimist. It might be objected that it is foolhardy or worse to try to put such numbers to futures where so much is uncertain. I have some sympathy with this objection. Thinking about the future where so much is uncertain is hardly analogous to putting odds on a horse race. On the other hand, a lot more is at stake in thinking about our future and so we have no choice but to try to estimate as best we can various risks. If it were protested that it is simply impossible to make any meaningful estimate then this would prove too much. For then there would be no reason to think that the transhumanist future is any more risky than any other future. In other words, the complaint that the transhumanist future is risky has traction only if we have some comparative evaluation in mind. Surgery that has only a 1 in 10 chance of survival is not risky, comparatively speaking, if the chances of survival without the surgery are zero. Anyone who criticizes transhumanism for putting civilization at risk, as does Fukuyama, must explicitly or implicitly hold that the chances of survival in a non-transhumanist future are greater. 
This is what transhumanists deny. This line of thinking is further reinforced when we consider that there is a limit to the downside of creating posthumans, at least relatively speaking. That is, one of the traditional concerns about increasing knowledge is that it seems to always imply an associated risk for greater destructive capacity. One way this point is made is in terms of killing capacity: muskets are a more powerful technology than a bow and arrow, and tanks more powerful than muskets, and atomic bombs even more destructive than tanks. The knowledge that made possible these technical advancements brought a concomitant increase in capacity for evil. Interestingly, we have almost hit the wall in our capacity for evil: once you have civilization destroying weapons there is not much worse you can do. There is a point in which the one-upmanship for evil comes to an endwhen everyone is dead. If you will forgive the somewhat graphic analogy, it hardly matters to Kennedy if his head is blown off with a rifle or a cannon. Likewise, if A has a weapon that can kill every last person there is little difference between that and Bs weapon which is twice as powerful. Posthumans probably wont have much more capacity for evil than we have, or are likely to have shortly. So, at least in terms of how many persons can be killed, posthumans will not outstrip us in this capacity. This is not to say that there are no new worries with the creation of posthumans, but the greatest evil, the destruction of civilization, is something which we now, or will soon, have. In other words, the most significant aspect that we should focus on with contemplating the creation of posthumans is their upside. They are not likely to distinguish themselves in their capacity for evil, since we have already pretty much hit the wall on that, but for their capacity for good. Conclusion I suspect that those who think the transhumanist future is risky often have something like the following reasoning in mind: (1) If we alter human nature then we will be conducting an experiment whose outcome we cannot be sure of. (2) We should not conduct experiments of great magnitude if we do not know the outcome. (3) We do not know the outcome of the transhumanist experiment. (4) So, we ought not to alter human nature. The problem with the argument is (2). Because genetic engineering is already with us, and it has the potential to destroy civilization and create posthumans, we are already entering uncharted waters, so we must experiment. The question is not whether to experiment, but only the residual question of which social experiment will we conduct. Will we try relinquishment? This would be an unparalleled social experiment to eradicate knowledge and technology. Will it be the steady-as-she-goes experiment where for the first time governments, organizations and private citizens will have access to knowledge and technology that (accidently or intentionally) could be turned to civilization ending purposes? Or finally, will it be the transhumanist social experiment where we attempt to make beings brighter and more virtuous to deal with these powerful technologies? I have tried to make at least a prima facie case that transhumanism promises the safest passage through 21stcentury technologies. Since we must experiment it would be foolhardy or worse not to put more thought and energy into the problem of our uncertain future. 
To the extent that we do not put more thought and energy into the problem, one can only lament the sad irony that steady-as-she-goes seems an all too apt order for a ship of fools.

transhuman good climate change

Transhumanism solves climate change by making more altruistic and empathetic humans - prefer multiple case studies

Liao et al. 12 (S. Matthew, director of the Master's Program in Bioethics and a Clinical Associate Professor in the Center for Bioethics with affiliation in the Department of Philosophy at New York University, former Deputy Director and James Martin Senior Research Fellow in the Program on the Ethics of the New Biosciences in the Faculty of Philosophy at Oxford University, Anders Sandberg, Ph.D. in computational neuroscience from Stockholm University, and is currently a James Martin Research Fellow at the Future of Humanity Institute at Oxford University, Rebecca Roache, Post-Doc of Philosophy at Oxford University, Human Engineering and Climate Change, Ethics, Policy and the Environment, February 2, 2012, http://www.smatthewliao.com/wp-content/uploads/2012/02/HEandClimateChange.pdf)//dping

Pharmacological enhancement of altruism and empathy. Another indirect means of mitigating climate change is to enhance and improve our moral decisions by making us more altruistic and empathetic. Many environmental problems are the result of collective action problems, according to which individuals do not cooperate for the common good. In a number of the cases, the impact of any particular individual's attempt to address a particular environmental problem has a negligible impact, but the impact of a large group of individuals working together can be huge. If people were generally more willing to act as a group, and could be confident that others would do the same, we may be able to enjoy the sort of benefits that arise only when large numbers of people act together. Increasing altruism and empathy may help increase the chances of this occurring (Dietz et al. 2003; Fehr et al. 2002; Gintis 2000). Also, many environmental problems seem to be exacerbated by - or perhaps even result from - a lack of appreciation of the value of other life forms and nature itself (Kollmuss and Agyeman 2002). It seems plausible that, were people more aware of the suffering caused to certain groups of people and animals as a result of environmental problems, they would be more likely to want to help tackle these problems. The fact that many environmental charities campaign to raise awareness of such suffering as a way of increasing donations supports this assumption. There is evidence that higher empathy levels correlate with stronger environmental behaviors and attitudes (Berenguer 2007). Increasing altruism and empathy could also help increase people's willingness to assist those who suffer from climate change. While altruism and empathy have large cultural components, there is evidence that they also have biological underpinnings. This suggests that modifying them by human engineering could be promising. Indeed, test subjects given the prosocial hormone oxytocin were more willing to share money with strangers (Paul J. Zak et al. 2007) and to behave in a more trustworthy way (P. J. Zak et al. 2005). Also, a noradrenaline reuptake inhibitor increased social engagement and cooperation with a reduction in self-focus during a mixed motive game (Tse and Bond 2002). Similar effects have been observed with SSRIs in humans and animal experiments (Knutson et al. 1998).
Furthermore, oxytocin appears to improve the capacity to read other people's emotional state, which is a key capacity for empathy (Domes et al. 2007; Guastella et al. 2008). Conversely, testosterone appears to decrease aspects of empathy (Hermans et al. 2006) and in particular conscious recognition of facial threats (van Honk and Schutter 2007). Neuroimaging work has also revealed that one's willingness to comply with social norms may be correlated with particular neural substrates (Spitzer et al. 2007). This raises the likelihood that interventions affecting the sensitivity in these neural systems could also increase the willingness to cooperate with social rules or goals. These examples are intended to illustrate some possible human engineering solutions. Others like them might include increasing our resistance to heat and tropical diseases, and reducing our need for food and water.

transhuman good space col

Transhumanism develops prime posthumans that solve space colonization

Andreadis 8 (Athena, Ph.D., Associate Professor of Cell Biology, University of Massachusetts Medical School, Dreamers of a Better Future, Unite!, March 13, 2008, found author cite at- http://www.sentientdevelopments.com/2008/03/intersection-of-transhumanism-and-space.html, actual article- http://www.starshipnivan.com/blog/?p=60)//dping

Views of space travel have grown increasingly pessimistic in the last decade. This is not surprising: SETI still has received no unambiguous requests for more Chuck Berry from its listening posts, NASA is busy re-inventing flywheels and citizens even of first-world countries feel beleaguered in a world that seems increasingly hostile to any but the extraordinarily privileged. Always a weathervane of the present, speculative fiction has been gazing more and more inwardly either to a hazy gold-tinted past (fantasy, both literally and metaphorically) or to a smoggy rust-colored earthbound future (cyberpunk). The philosophically inclined are slightly more optimistic. Transhumanists, the new utopians, extol the pleasures of a future when our bodies, particularly our brains/minds, will be optimized (or at least not mind that they're not optimized) by a combination of bioengineering, neurocognitive manipulation, nanotech and AI. Most transhumanists, especially those with a socially progressive agenda, are as decisively earthbound as cyberpunk authors. They consider space exploration a misguided waste of resources, a potentially dangerous distraction from here-and-now problems - ecological collapse, inequality and poverty, incurable diseases - among which transhumanists routinely count aging, not to mention variants of gray goo. And yet, despite the uncoolness of space exploration, despite NASA's disastrous holding pattern, there are those of us who still stubbornly dream of going to the stars. We are not starry-eyed romantics. We recognize that the problems associated with spacefaring are formidable (as examined briefly in Making Aliens 1, 2 and 3). But I, at least, think that improving circumstances on earth and exploring space are not mutually exclusive, either philosophically or, perhaps just as importantly, financially. In fact, I consider this a false dilemma. I believe that both sides have a much greater likelihood to implement their plans if they coordinate their efforts, for a very simple reason: the attributes required for successful space exploration are also primary goals of transhumanism.
Consider the ingredients that would make an ideal crewmember of a space expedition: robust physical and mental health, biological and psychological adaptability, longevity, ability to interphase directly with components of the ship. In short, enhancements and augmentations eventually resulting in self-repairing quasi-immortals with extended senses and capabilities - the loose working definition of transhuman. Coordination of the two movements would give a real, concrete purpose to transhumanism beyond the uncompelling objective of giving everyone a semi-infinite life of leisure (without guarantees that either terrestrial resources or the human mental and social framework could accommodate such a shift). It would also turn the journey to the stars into a more hopeful proposition, since it might make it possible that those who started the journey could live to see planetfall. Whereas spacefaring enthusiasts acknowledge the enormity of the undertaking they propose, most transhumanists take it as an article of faith that their ideas will be realized soon, though the goalposts keep receding into the future. As more soundbite than proof they invoke Moore's exponential law, equating stodgy silicon with complex, contrary carbon. However, despite such confident optimism, enhancements will be hellishly difficult to implement. This stems from a fundamental that cannot be short-circuited or evaded: no matter how many experiments are performed on mice or even primates, humans have enough unique characteristics that optimization will require people. Contrary to the usual supposition that the rich will be the first to cross the transhuman threshold, it is virtually certain that the frontline will consist of the desperate and the disenfranchised: the terminally ill, the poor, prisoners and soldiers - the same people who now try new chemotherapy or immunosuppression drugs, donate ova, become surrogate mothers, agree to undergo chemical castration or sleep deprivation. Yet another pool of early starfarers will be those whose beliefs require isolation to practice, whether they be Raëlians or fundamentalist monotheists - just as the Puritans had to brave the wilderness and brutal winters of Massachusetts to set up their Shining (though inevitably tarnished) City on the Hill.

transhuman good - war

Transhumanism creates more empathic posthumans, solving the root cause of war

Bailey 11 (Ronald, science correspondent for Reason magazine, citing Allen Buchanan, James B. Duke Professor of philosophy at Duke University, The Case for Enhancing People, first delivered at the second conference in the series Stuck with Virtue, sponsored by the University of Chicago's New Science of Virtues project, April 2011, http://www.thenewatlantis.com/publications/the-case-for-enhancing-people)//dping

Enhancement Wars? Those who favor restricting human enhancements often argue that human equality will fall victim to differential access to enhancement technologies, resulting in conflicts between the enhanced and the unenhanced. For example, at a 2006 meeting called by the American Association for the Advancement of Science, Richard Hayes, the executive director of the left-leaning Center for Genetics and Society, testified that enhancement technologies would quickly be adopted by the most privileged, with the clear intent of widening the divisions that separate them and their progeny from the rest of the human species.
Deploying such enhancement technologies would deepen genetic and biological inequality among individuals, exacerbating tendencies towards xenophobia, racism and warfare. Hayes concluded that allowing people to use genetic engineering for enhancement could be a mistake of world-historical proportions. Meanwhile, some right-leaning intellectuals, such as Nigel Cameron, president of the Center for Policy on Emerging Technologies, worry that one of the greatest ethical concerns about the potential uses of germline interventions to enhance normal human functions is that their availability will widen the existing inequalities between the rich and the poor. In sum, egalitarian opponents of enhancement want the rich and the poor to remain equally diseased, disabled, and dead. Even proponents of genetic enhancement, such as Princeton University biologist Lee M. Silver, have argued that genetic engineering will lead to a class of people that he calls the GenRich, who will occupy the heights of the economy while unenhanced Naturals provide whatever grunt labor the future economy needs. In Remaking Eden (1997), Silver suggests that eventually the GenRich class and the Natural class will become ... entirely separate species with no ability to cross-breed, and with as much romantic interest in each other as a current human would have for a chimpanzee. In the same vein, George J. Annas, Lori B. Andrews, and Rosario M. Isasi have laid out a rather apocalyptic scenario in the American Journal of Law and Medicine: The new species, or posthuman, will likely view the old normal humans as inferior, even savages, and fit for slavery or slaughter. The normals, on the other hand, may see the posthumans as a threat and if they can, may engage in a preemptive strike by killing the posthumans before they themselves are killed or enslaved by them. It is ultimately this predictable potential for genocide that makes species-altering experiments potential weapons of mass destruction, and makes the unaccountable genetic engineer a potential bioterrorist. Lets take their over-the-top scenario down a notch or two. The enhancements that are likely to be available in the relatively near term to people now living will be pharmacological pills and shots to increase strength, lighten moods, and improve memory. Consequently, such interventions could be distributed to nearly everyone who wanted them. Later in this century, when safe genetic engineering becomes possible, it will likely be deployed gradually and will enable parents to give their children beneficial genes for improved health and intelligence that other children already get naturally. Thus, safe genetic engineering in the long run is more likely to ameliorate than to exacerbate human inequality. In any case, political and moral equality have never rested on the facts of human biology. In prior centuries, when humans were all naturals, tyranny, aristocracy, slavery, and legally stipulated racial and sexual inequality were common social and political arrangements. Our biology did not change in the past two centuries our political ideals did. In fact, political liberalism is already the answer to questions about human and posthuman rights. In liberal societies the law is meant to apply equally to all, no matter how rich or poor, powerful or powerless, brilliant or stupid, enhanced or unenhanced. 
One crowning achievement of the Enlightenment is the principle of tolerance, of putting up with people who look different, talk differently, worship differently, and live differently than we do (in Rawlsian terms, tolerating those who pursue differing reasonable comprehensive doctrines). In the future, our descendants may not all be natural Homo sapiens, but they will still be moral beings who can be held accountable for their actions. There is no a priori reason to think that the same liberal political and moral principles that apply to diverse human beings today would not apply to relations among future humans and transhumans. But what if enhanced posthumans were to take the Nietzschean superman option? What if they really were to see unenhanced people as inferior, even savages, and fit for slavery or slaughter? It is an unfortunate historical fact that plenty of unenhanced humans have been quite capable of believing that millions of their fellow unenhanced humans were inferiors who needed to be eradicated. However, as liberal political institutions, with their limits on the power of the state, have spread and strengthened, they have increasingly restrained technologically superior groups from automatically wiping out less advanced peoples (which was common throughout most of history). Again, there is no a priori reason to believe that this dynamic will not continue in the future as biotechnology, nanotechnology, and computational technologies progressively increase peoples capabilities and widen their choices. Opponents of human enhancement focus on the alleged social harms that might result, while overlooking the huge social costs that forgoing the benefits of enhancement technologies would entail. Allen Buchanan posits that some enhancements will increase human productivity very broadly conceived and thereby create the potential for large-scale increases in human well-being, and ... the enhancements that are most likely to attract sufficient resources to become widespread will be those that promise increased productivity and will often exhibit what economists call network effects: the benefit to an individual of being enhanced will depend upon, or at least be greatly augmented by, others having the enhancement as well. Buchanan points out that much of the ethical debate about enhancements focuses on them as positional goods that primarily help an individual to outcompete his rivals. This characterization of enhancements leads ineluctably to zero-sum thinking in which for every winner there is assumed to be a loser. But, on the contrary, enhancements could produce positive results for the common good: as Buchanan writes, large numbers of individuals with increased cognitive capabilities will be able to accomplish what a single individual could not, just as one can do much more with a personal computer in a world of many computer users. While competition certainly plays a role in underwriting success in society and the economy, most success is achieved through cooperation. In the future, people in the pursuit of non-zero-sum social and economic relations are likely to choose the sorts of intellectual and emotional enhancements that boost their ability to cooperate more effectively with others, such as increased empathy or greater practical reason. In fact, it is just these sorts of enhancements that will help people to behave more virtuously. 
Of course, people in the future will have to be on guard against any still-deluded folks who think that free-riding might work; but there may well be an app for that, so to speak: the increasingly transparent society. People will be able to check the reputations of others for honest dealing and fair cooperation with just a few clicks of a mouse (or by accessing directly whatever follows Google using a brain implant). Such social monitoring will be nearly as omnipresent as what would be found in a hunter-gatherer band. Everyone will want to have a good reputation. One might try to fake being virtuous, but the best and easiest way to have a good reputation will be the same as it is today - by actually being virtuous.

transhuman good try or die

Even if we can't predict the exact outcome of transhumanism, we should still try

Bostrom 98 (Nick, PhD in philosophy from LSE, Lecturer at the Department of Philosophy at Yale University, WHAT IS TRANSHUMANISM?, http://www.nickbostrom.com/old/transhumanism.html, **We don't endorse ableist language)//dping

These prospects might seem remote. Yet transhumanists think there is reason to believe that they might not be so far off as is commonly supposed. The Technology Postulate denotes the hypothesis that several of the items listed, or other changes that are equally profound, will become feasible within, say, seventy years (possibly much sooner). This is the antithesis of the assumption that the human condition is a constant. The Technology Postulate is often presupposed in transhumanist discussion. But it is not an article of blind [dogmatic] faith; it's a falsifiable hypothesis that is argued for on specific scientific and technological grounds. If we come to believe that there are good grounds for believing that the Technology Postulate is true, what consequences does that have for how we perceive the world and for how we spend our time? Once we start reflecting on the matter and become aware of its ramifications, the implications are profound. From this awareness springs the transhumanist philosophy -- and "movement". For transhumanism is more than just an abstract belief that we are about to transcend our biological limitations by means of technology; it is also an attempt to re-evaluate the entire human predicament as traditionally conceived. And it is a bid to take a far-sighted and constructive approach to our new situation. A primary task is to provoke the widest possible discussion of these topics and to promote a better public understanding. The set of skills and competencies that are needed to drive the transhumanist agenda extend far beyond those of computer scientists, neuroscientists, software-designers and other high-tech gurus. Transhumanism is not just for brains accustomed to hard-core futurism. It should be a concern for our whole society. It is extremely hard to anticipate the long-term consequences of our present actions. But rather than sticking our heads in the sand, transhumanists reckon we should at least try to plan for them as best we can. In doing so, it becomes necessary to confront some of the notorious "big questions" about the structure of the world and the role and prospects of sentience within it.
Doing so requires delving into a number of different scientific disciplines as well as tackling hard philosophical problems.

transhuman good self-correcting

Transhumanism is structured to constantly evolve and improve from its mistakes

Bostrom 98 (Nick, PhD in philosophy from LSE, Lecturer at the Department of Philosophy at Yale University, WHAT IS TRANSHUMANISM?, http://www.nickbostrom.com/old/transhumanism.html)//dping

Transhumanism is not a philosophy with a fixed set of dogmas. What distinguishes transhumanists, in addition to their broadly technophiliac values, is the sort of problems they explore. These include subject matter as far-reaching as the future of intelligent life, as well as much more narrow questions about present-day scientific, technological or social developments. In addressing these problems, transhumanists aim to take a fact-driven, scientific, problem-solving approach. They also make a point of challenging holy cows and questioning purported impossibilities. No principle is beyond doubt, not the necessity of death, not our confinement to the finite resources of planet Earth, not even transhumanism itself is held to be too good for constant critical reassessment. The ideology is meant to evolve and be reshaped as we move along, in response to new experiences and new challenges. Transhumanists are prepared to be shown wrong and to learn from their mistakes. Transhumanism can also be very practical and down-to-earth. Many transhumanists find ways of applying their philosophy to their own lives, ranging from the use of diet and exercise to improve health and life-expectancy; to signing up for cryonic suspension; creating transhumanist art; using clinical drugs to adjust parameters of mood and personality; applying various psychological self-improvement techniques; and in general taking steps to live richer and more responsible lives. An empowering mind-set that is common among transhumanists is dynamic optimism: the attitude that desirable results can in general be accomplished, but only through hard effort and smart choices.

transhuman epistemology good

Transhumanism creates an epistemic community that fosters rationality to combat groundless misconceptions

Bostrom 98 (Nick, PhD in philosophy from LSE, Lecturer at the Department of Philosophy at Yale University, WHAT IS TRANSHUMANISM?, http://www.nickbostrom.com/old/transhumanism.html, **We don't endorse ableist language)//dping

An important transhumanist goal is to improve the functioning of human society as an epistemic community. In addition to trying to figure out what is happening, we can try to figure out ways of making ourselves better at figuring out what is happening. We can create institutions that increase the efficiency of the academic- and other knowledge-communities. More and more people are gaining access to the Internet. Programmers, software designers, IT consultants and others are involved in projects that are constantly increasing the quality and quantity of advantages of being connected. Hypertext publishing and the collaborative information filtering paradigm have the potential to accelerate the propagation of valuable information and aid the demolition of what transpire to be misconceptions and crackpot claims.
The people working in information technology are only the latest reinforcement to the body of educators, scientists, humanists, teachers and responsible journalists who have been striving throughout the ages to decrease ignorance and make humankind as a whole more rational.

at: transhuman = unequal

Transhumanist technology benefits everyone - drastic increase in global vaccinations and access to technology, and decrease in mortality rates

Istvan 5-22-14 (Zoltan, philosophy graduate of Columbia University, citing 2013 World Bank Report and 2010 study in journal The Lancet, world's leading journal in the fields of global health and infectious diseases, he's also a visionary, The Biggest Worry About Transhumanism, HuffPost Tech, http://www.huffingtonpost.com/zoltan-istvan/the-biggest-worry-about-t_b_5362161.html)//dping

The transhumanism movement is rapidly catching on around the world. Everywhere I look -- whether it's in university laboratories, major news websites, or the boardrooms of tech companies -- the concept is being excitedly discussed and explored. The word "transhuman" literally means beyond human. Advocates want to use science and technology to radically improve our species, even if it means significantly altering the human being and how people experience the world. Only a decade ago, many laypersons found the concept of using transhumanist science to upgrade the human body and conquer death unbelievable and creepy. Now, many are wondering what the movement can do for them, and if it's the natural destiny of our species. Despite the growing global acceptance of transhumanism, one major concern is repeatedly voiced by many people everywhere. The leading transhumanist science and technology is likely to come from large companies and elite universities, many of which are mostly controlled and administered by the uber-rich. It's therefore natural to ask: Will the uber-rich -- the wealthiest 1 percent of people on the planet -- freely share with the rest of the world the transhumanist technology they develop? Or will they take it for their own and attempt to create an Aldous Huxley Brave New World scenario, where they become the bonafide rulers through radical technological advancements that others can't access. Frankly, since I don't belong to that 1 percent, I worry about this exact thing myself. The concern is no longer just a classic science fiction movie plot or a fringe conspiracy theory. Luckily, history does provide us with clues about our future when civilization makes massive leaps forward. Just consider the effects of society harnessing electricity, embracing jet air travel, or the ubiquitous use of the Internet. Those leaps have proven highly favorable for the species as a whole. According to a 2013 World Bank report: The number of people living on less than $1.25 a day has decreased dramatically in the past three decades, from half the citizens in the developing world in 1981 to 21 percent in 2010, despite a 59 percent increase in the developing world population. Mortality rates have dropped dramatically too -- about 1 percent a year for the last 40 years -- according to a detailed 2010 study in journal The Lancet. Many of these living standard improvements for the world's population can be attributed to increased economic growth, which has been largely driven by technological innovation. Consider some of the most pervasive technologies and medicines we have: cell phones, automobiles, vaccines and antibiotics.
Most people on the planet, no matter how poor, have access to much of this technology, all of which can be considered transhumanist-themed. Cell phones, for example, can be found being used by nomads living in African deserts. Another example is the many NGO and government-sponsored groups in Asia and Latin America vaccinating millions of street children for diseases such as Polio and Measles. "Measles remains a major cause of death in children age five years and younger," says Dr. Scott J. Cohen, M.D., Founder and Medical Director of Global Pediatric Alliance. Prior to 2000, there were more than 1,500 child deaths every day due to measles in underdeveloped countries. Since 2000, more than 1 billion children in developing countries have been vaccinated against measles through mass vaccination campaigns. The measles death rate in developing countries has now dropped to about 330 per day, according to the World Health Organization. Clearly, such broadly shared modern advancements are improving the world and helping the poorest. Another fact that encourages me about the future of science and technology is the personalities creating it. Mostly gone are the days of brazen, monopolistic tycoons such as John D. Rockefeller, J. P. Morgan, or Andrew Carnegie, who often operated on a "survival of the fittest" business model, sometimes with disregard for their employees and the public. Entrepreneurs today, like Larry Page of Google, Mark Zuckerberg of Facebook, and Elon Musk of Tesla, are more sensitive to the public, to a civil business environment, and to democratic ideals. These are people whose top priorities include supporting the use of technology and innovation to open the world and to improve lives. Despite this, people continue to worry that technology and science that make our species more transhuman will be used to create a deeper divide in society for the haves and have-nots. Those worries are unfounded. A close examination of the issues shows that transhumanist technology and science liberates us, brings us better health, and has improved the living standards of all people around the world. If you value liberty, equality and progress, it makes sense to embrace the coming age of transhumanism.

at: equality/morality (fukuyama)
Transhumanism does not deny moral status or intrinsic value and instead structurally promotes different freedoms -- Fukuyama's assumptions about the human essence are wrong
Bostrom 4 (Nick, British Academy Research Fellow @ Oxford, PhD in philosophy from LSE, previously professor at Yale University in the Institute for Social and Policy Studies, Is Transhumanism the World's Most Dangerous Idea?, Betterhumans, October 19, 2004, http://www.transhumanisme.nl/oud/Is%20Transhumanism%20the%20Worlds%20Most%20Dangerous%20Idea.pdf)//dping
The essence of the argument
Fierce resistance has often accompanied technological or medical breakthroughs that force us to reconsider some aspects of our worldview. Just as anesthesia, antibiotics and global communication networks transformed our sense of the human condition in fundamental ways, so too we can anticipate that our capacities, hopes and problems will change if the more speculative technologies that transhumanists discuss come to fruition. But apart from vague feelings of disquiet, which we may all share to varying degrees, what specific argument does Fukuyama advance that would justify foregoing the many benefits of allowing people to improve their basic capacities?
Fukuyama's objection is that the defense of equal legal and political rights is incompatible with embracing human enhancement: "Underlying this idea of the equality of rights is the belief that we all possess a human essence that dwarfs manifest differences in skin color, beauty and even intelligence. This essence, and the view that individuals therefore have inherent value, is at the heart of political liberalism. But modifying that essence is the core of the transhumanist project." His argument thus depends on three assumptions: (1) there is a unique human essence; (2) only those individuals who have this mysterious essence can have intrinsic value and deserve equal rights; and (3) the enhancements that transhumanists advocate would eliminate this essence. From this, he infers that the transhumanist project would destroy the basis of equal rights.
Equality is for people, not humans
The concept of such a human essence is, of course, deeply problematic. Evolutionary biologists note that the human gene pool is in constant flux and talk of our genes as giving rise to an "extended phenotype" that includes not only our bodies but also our artifacts and institutions. Ethologists have over the past couple of decades revealed just how similar we are to our great primate relatives. A thick concept of human essence has arguably become an anachronism. But we can set these difficulties aside and focus on the other two premises of Fukuyama's argument. The claim that only individuals who possess the human essence could have intrinsic value is mistaken. Only the most callous would deny that the welfare of some nonhuman animals matters at least to some degree. If a visitor from outer space arrived on our doorstep, and she had consciousness and moral agency just as we humans do, surely we would not deny her moral status or intrinsic value just because she lacked some undefined human essence. Similarly, if some persons were to modify their own biology in a way that alters whatever Fukuyama judges to be their essence, would we really want to deprive them of their moral standing and legal rights? Excluding people from the moral circle merely because they have a different essence from the rest of us is, of course, akin to excluding people on the basis of their gender or the color of their skin. Moral progress in the last two millennia has consisted largely in our gradually learning to overcome our tendency to make moral discriminations on such fundamentally irrelevant grounds. We should bear this hard-earned lesson in mind when we approach the prospect of technologically modified people. Liberal democracies speak to human equality not in the literal sense that all humans are equal in their various capacities, but that they are equal under the law. There is no reason why humans with altered or augmented capacities should not likewise be equal under the law, nor is there any ground for assuming that the existence of such people must undermine centuries of legal, political and moral refinement. The only defensible way of basing moral status on human essence is by giving essence a very broad definition; say, as possessing the capacity for moral agency. But if we use such an interpretation, then Fukuyama's third premise fails. The enhancements that transhumanists advocate -- longer healthy lifespan, better memory, more control over emotions, etc. -- would not deprive people of the capacity for moral agency. If anything, these enhancements would safeguard and expand the reach of moral agency.
Better than well
Fukuyama's argument against transhumanism is therefore flawed. Nevertheless, he is right to draw attention to the social and political implications of the increasing use of technology to transform human capacities. We will indeed need to worry about the possibility of stigmatization and discrimination, either against or on behalf of technologically enhanced individuals. Social justice is also at stake and we need to ensure that enhancement options are made available as widely and as affordably as possible. This is a primary reason why transhumanist movements have emerged. On a grassroots level, transhumanists are already working to promote the ideas of morphological, cognitive and procreative freedoms with wide access to enhancement options. Despite the occasional rhetorical overreaches by some of its supporters, transhumanism has a positive and inclusive vision for how we can ethically embrace new technological possibilities to lead lives that are better than well. The only real danger posed by transhumanism, it seems, is that people on both the left and the right may find it much more attractive than the reactionary bioconservatism proffered by Fukuyama, Leon Kass and the other members of the President's Council.

transhumanism bad

transhuman bad equality
Transhumanism undermines equality -- it prioritizes affluent nations and modifies the human essence on which equality is based
Fukuyama 4 (Francis, Olivier Nomellini Senior Fellow and resident in the Center on Democracy, Development, and the Rule of Law at the Freeman Spogli Institute for International Studies at Stanford University, Transhumanism, Foreign Policy, No. 144 (Sep-Oct 2004), pp. 42-43, JStor accessed July 17, 2014)//dping
The first victim of transhumanism might be equality. The U.S. Declaration of Independence says that "all men are created equal," and the most serious political fights in the history of the United States have been over who qualifies as fully human. Women and blacks did not make the cut in 1776 when Thomas Jefferson penned the declaration. Slowly and painfully, advanced societies have realized that simply being human entitles a person to political and legal equality. In effect, we have drawn a red line around the human being and said that it is sacrosanct. Underlying this idea of the equality of rights is the belief that we all possess a human essence that dwarfs manifest differences in skin color, beauty, and even intelligence. This essence, and the view that individuals therefore have inherent value, is at the heart of political liberalism. But modifying that essence is the core of the transhumanist project. If we start transforming ourselves into something superior, what rights will these enhanced creatures claim, and what rights will they possess when compared to those left behind? If some move ahead, can anyone afford not to follow? These questions are troubling enough within rich, developed societies. Add in the implications for citizens of the world's poorest countries -- for whom biotechnology's marvels likely will be out of reach -- and the threat to the idea of equality becomes even more menacing. Transhumanism's advocates think they understand what constitutes a good human being, and they are happy to leave behind the limited, mortal, natural beings they see around them in favor of something better. But do they really comprehend ultimate human goods?
For all our obvious faults, we humans are miraculously complex products of a long evolutionary process -- products whose whole is much more than the sum of our parts. Our good characteristics are intimately connected to our bad ones: If we weren't violent and aggressive, we wouldn't be able to defend ourselves; if we didn't have feelings of exclusivity, we wouldn't be loyal to those close to us; if we never felt jealousy, we would also never feel love. Even our mortality plays a critical function in allowing our species as a whole to survive and adapt (and transhumanists are just about the last group I'd like to see live forever). Modifying any one of our key characteristics inevitably entails modifying a complex, interlinked package of traits, and we will never be able to anticipate the ultimate outcome. Nobody knows what technological possibilities will emerge for human self-modification. But we can already see the stirrings of Promethean desires in how we prescribe drugs to alter the behavior and personalities of our children. The environmental movement has taught us humility and respect for the integrity of nonhuman nature. We need a similar humility concerning our human nature. If we do not develop it soon, we may unwittingly invite the transhumanists to deface humanity with their genetic bulldozers and psychotropic shopping malls.

Transhumanism increases inequalities between the rich and poor and leads to extinction.
McNamee and Edwards 6 (M. J., Professor, College of Engineering at Swansea University, Centre for Philosophy, Humanities and Law in Healthcare, School of Health Science, University of Wales Swansea, Transhumanism, Medical Technology and Slippery Slopes, Journal of Medical Ethics, Vol. 32, No. 9 (Sep. 2006), pp. 513-518, JStor accessed July 18, 2014)//dping
Critics point to consequences of transhumanism, which they find unpalatable. One possible consequence feared by some commentators is that, in effect, transhumanism will lead to the existence of two distinct types of being, the human and the posthuman. The human may be incapable of breeding with the posthuman and will be seen as having a much lower moral standing. Given that, as Buchanan et al note, much moral progress, in the West at least, is founded on the category of the human in terms of rights claims, if we no longer have a common humanity, what rights, if any, ought to be enjoyed by transhumans? This can be viewed either as a criticism (we poor humans are no longer at the top of the evolutionary tree) or simply as a critical concern that invites further argumentation. We shall return to this idea in the final section, by way of identifying a deeper problem with the open-endedness of transhumanism that builds on this recognition. In the same vein, critics may argue that transhumanism will increase inequalities between the rich and the poor. The rich can afford to make use of transhumanism, but the poor will not be able to. Indeed, we may come to think of such people as "deficient", failing to achieve a new heightened level of normal functioning. In the opposing direction, critical observers may say that transhumanism is, in reality, an irrelevance, as very few will be able to use the technological developments even if they ever manifest themselves. A further possibility is that transhumanism could lead to the extinction of humans and posthumans, for things are just as likely to turn out for the worse as for the better (eg, those for the precautionary principle).
One of the deeper philosophical objections comes from a very traditional source. Like all such utopian visions, transhumanism rests on some conception of good. So just as humanism is founded on the idea that humans are the measure of all things and that their fulfilment is to be found in the powers of reason extolled and extended in culture and education, so too transhumanism has a vision of the good, albeit one loosely shared. For one group of transhumanists, the good is the expansion of personal choice. Given that autonomy is so widely valued, why not remove the barriers to enhanced autonomy by various technological interventions? Theological critics especially, but not exclusively, object to what they see as the imperialising of autonomy. Elshtain lists the "three c's": choice, consent and control. These, she asserts, are the dominant motifs of modern American culture. And there is, of course, an army of communitarians (Bellah et al, MacIntyre, Sandel, Taylor and Walzer) ready to provide support in general moral and political matters to this line of criticism. One extension of this line of transhumanism thinking is to align the valorisation of autonomy with economic rationality, for we may as well be motivated by economic concerns as by moral ones where the market is concerned. As noted earlier, only a small minority may be able to access this technology (despite Bostrom's naive disclaimer for "democratic transhumanism"), so the technology necessary for transhumanist transformations is unlikely to be prioritised in the context of artificially scarce public health resources. One other population attracted to transhumanism will be the elite sports world, fuelled by the media commercialisation complex -- where mere mortals will get no more than a glimpse of the transhuman in competitive physical contexts. There may be something of a double-binding character to this consumerism. The poor, at once removed from the possibility of such augmentation, pay (per view) for the pleasure of their envy.

transhuman bad extinction
Transhumanism leads to human extinction and genetic genocide -- comparative evidence
Ben-Avraham 11 (Yaron Marc, cites Francis Fukuyama, Olivier Nomellini Senior Fellow and resident in the Center on Democracy, Development, and the Rule of Law at the Freeman Spogli Institute for International Studies at Stanford University, and Mark J. Solomon, published neuropsychologist, The Transhumanist Future: Amelioration or Extinction?, April 25, 2011 -- last Google citation, date not found, http://www.montrealites.ca/justice/the-transhumanist-future-amelioration-or-extinction.html#.U8xISPldWSo)//dping
Implications of Transhumanism
One of the greatest concerns in regards to the emergence of the post human is that of biological compatibility. The application and realization of transhumanist objectives will likely result in the genesis of two distinct beings - humans and post-humans. These two beings will be so fundamentally different from each other that interbreeding will become virtually impossible. Considering that humans will be markedly inferior both physiologically and intellectually to their highly evolved counterparts, it is likely that from an evolutionary perspective, humans will be regarded as inferior and will thus face genetic marginalization. Additionally, there is also concern as to the anatomical composition of the post-humans.
The incorporation of technology into the physiological makeup of such beings may ultimately render them more machine than human, thus making interbreeding physically impossible. As Dyens points out: "Recruitment and deployment of these types of technology can produce people who are intelligent and immortal, but who are not members of the species homo sapiens...beings who are part machine represent a profound misalignment between existence and its manifestation...producing bodies so transformed, so dissociated, and so asynchronized, that their only outcome is gross mutation...for they have no real attachment to any biological structure" (Dyens, 201). While transhumanists advocate the amelioration of the human species through technology, the realization of their objectives will ironically result in its likely extinction. As Agar points out, "although change is essential to the evolutionary process, it is, paradoxically, antithetical to evolutionary success...one way to go extinct is to have no descendents. But another way to go extinct is to have descendents that are so different as to count as different species" (Bailey, 36). Although these arguments take into account some of the physiological pitfalls that will occur if we are to "wrest [our] biological destiny from evolution's blind process of random variation and adaptation" (Kaebnick, 41), they fail to take into account the most important factor concerning human evolution, which is its inherent complexity. The process of evolution is sacrosanct because, "for all our obvious faults, we humans are miraculously complex products of a long evolutionary process - products whose whole is much more than the sum of our parts" (Fukuyama, 43). If we are to regard evolution as a process that can be altered to suit our whims and desires, there is no telling what the consequences will be. Agar suggests, "every member of the human species possesses a genetic endowment that allows him or her to become a whole human being, an endowment that distinguishes a human in essence from other types of creatures." (Agar, 15) Once we start altering the process by which we become "whole", we will ultimately be jeopardizing the fate of humanity.
A Matter of Inequality
At the core of the transhumanist dream is the idea of improving the species through various technological means, and in order to improve it, we must first determine which of the human species' qualities is most valuable. One of the most problematic aspects of this concept is the idea of assigning value to the physical and intellectual characteristics of living beings, specifically humans. Removing this process from the natural course of evolution, "...leads to the questioning of what our current standards for humanity are and whether they should be trusted...one of history's lessons is that seeming different does suffice to make someone non human." (Elliot, 19). From this perspective it is easy to see some of the potential problems that might arise if we try to determine which or whose qualities are considered desirable and worth enhancing. Favouring and enhancing the physical characteristics of one group over another will lead to a process that is akin to genetic genocide. Moreover, the concept of "improving" implies that something must be inferior and must be made better than it initially was.
By suggesting that our current state in the evolutionary process leaves much to be desired in terms of improvement, transhumanists are essentially devaluing all the aspects of human nature that make it unique and worth preserving. If they are to succeed in improving the human species to the point whereby two distinct entities exist, it is likely that they will each possess entirely different value systems, likely resulting in the destruction of the basis of equal rights by enhancing only a select few and inherently altering the shared human essence. This is best described by Fukuyama as he says: "...most serious political fights in the history of [humankind] have been over who qualifies as fully human...slowly and painfully, advanced societies have realized that simply being human entitles a person to political and legal equality...we have drawn a red line around the human being and have said that this is sacrosanct. The essence and view that individuals therefore have inherent value is at the heart of political liberalism. If we transform ourselves, what rights will post-humans claim? What rights will those left behind claim?" (Bailey, 22). As it stands today, the world is rife with inequalities of all sorts; already issues of disparity in access to, and availability of, technology and biomedical procedures between developed and developing nations have become detrimental to our understanding of self-determination and social inclusion. When considered in relation to the applications of transhumanism, it is evident that the disparity will drastically increase. Bailey argues, "One much-discussed possible harm is an exacerbation of social inequalities. Opponents of enhancement predict war, slavery, and genocide as humans face off against their genetic superiors...like all utopian visions, transhumanism rests on some conception of the good...just as humanism is founded on the idea that humans are the measure of all things, who is to say what the post-human conception of the good will be?" (Solomon, 10). There is no telling what a post-human's conception of the good might be, but if humankind is no longer the measure of all things, then it is likely that it might not enjoy the same status that it does today. Because "a post-human may be thought to be beyond humanity and as beyond its rights and obligations", there is no assurance that such beings would in any way value human life.
Conclusion
While many of the key tenets of transhumanism seem humane and even undeniably compassionate toward the human species, the moral and ethical implications of the ideology far outweigh its benefits. By altering the fundamental boundaries of human existence, transhumanists seek to remove our species from the safety of natural evolution, ultimately putting us at risk of extinction. The sad reality is, however, that most of the objectives of transhumanism are currently being realized in hospitals and laboratories all over the world. The majority unwittingly welcomes the coming of a post-human future through their complicity in and reliance on biomedical science and, while there seems to be a general consensus as to the condemnation of the transhumanist philosophy, little or nothing is being done to curb its progress.
If our experiences of the past have taught us anything, it is that human inquiry and our desire to improve and sustain ourselves will likely prove instrumental in the realization of a post-human future.

transhuman bad morality
Transhumanism threatens morality because it destabilizes human nature.
McNamee and Edwards 6 (M. J., Professor, College of Engineering at Swansea University, Centre for Philosophy, Humanities and Law in Healthcare, School of Health Science, University of Wales Swansea, Transhumanism, Medical Technology and Slippery Slopes, Journal of Medical Ethics, Vol. 32, No. 9 (Sep. 2006), pp. 513-518, JStor accessed July 18, 2014)//dping
If we argue against the idea that the good cannot be equated with what people choose simpliciter, it does not follow that we need to reject the requisite medical technology outright. Against the more moderate transhumanists, who see transhumanism as an opportunity to enhance the general quality of life for humans, it is nevertheless true that their position presupposes some conception of the good. What kinds of traits are best engineered into humans: disease resistance or parabolic hearing? And unsurprisingly, transhumanists disagree about precisely what "objective goods" to select for installation into humans or posthumans. Some radical critics of transhumanism see it as a threat to morality itself. This is because they see morality as necessarily connected to the kind of vulnerability that accompanies human nature. Think of the idea of human rights and the power this has had in voicing concern about the plight of especially vulnerable human beings. As noted earlier, a transhumanist may be thought to be beyond humanity and as neither enjoying its rights nor its obligations. Why would a transhuman be moved by appeals to human solidarity? Once the prospect of posthumanism emerges, the whole of morality is thus threatened because the existence of human nature itself is under threat. One further objection voiced by Habermas is that interfering with the process of human conception, and by implication human constitution, deprives humans of the "naturalness which so far has been a part of the taken-for-granted background of our self-understanding as a species" and "Getting used to having human life biotechnologically at the disposal of our contingent preferences cannot help but change our normative self-understanding" (p 72).

at: try or die
Try-or-die framing leads to an arbitrary slippery slope, which proves their claims of transhumanism's positive normative force are all subjective.
McNamee and Edwards 6 (M. J., Professor, College of Engineering at Swansea University, Centre for Philosophy, Humanities and Law in Healthcare, School of Health Science, University of Wales Swansea, Transhumanism, Medical Technology and Slippery Slopes, Journal of Medical Ethics, Vol. 32, No. 9 (Sep. 2006), pp. 513-518, JStor accessed July 18, 2014)//dping
TRANSHUMANISM AND SLIPPERY SLOPES
A proper assessment of transhumanism requires consideration of the objection that acceptance of the main claims of transhumanism will place us on a slippery slope. Yet, paradoxically, both proponents and detractors of transhumanism may exploit slippery slope arguments in support of their position. It is necessary therefore to set out the various arguments that fall under this title so that we can better characterise arguments for and against transhumanism.
We shall therefore examine three such attempts but argue that the arbitrary slippery slope may undermine all versions of transhumanism, although not every enhancement proposed by them. Schauer offers the following essentialist analysis of slippery slope arguments. A "pure" slippery slope is one where a "particular act, seemingly innocuous when taken in isolation, may yet lead to a future host of similar but increasingly pernicious events". Abortion and euthanasia are classic candidates for slippery slope arguments in public discussion and policy making. Against this, however, there is no reason to suppose that the future events (acts or policies) down the slope need to display similarities -- indeed we may propose that they will lead to a whole range of different, although equally unwished for, consequences. The vast array of enhancements proposed by transhumanists would not be captured under this conception of a slippery slope because of their heterogeneity. Moreover, as Sternglantz notes, Schauer undermines his case when arguing that greater linguistic precision undermines the slippery slope and that indirect consequences often bolster slippery slope arguments. It is as if the slippery slopes would cease in a world with greater linguistic precision or when applied only to direct consequences. These views do not find support in the later literature. Schauer does, however, identify three non-slippery slope arguments where the advocate's aim is (a) to show that the bottom of a proposed slope has been arrived at; (b) to show that a principle is excessively broad; (c) to highlight how granting authority to X will make it more likely that an undesirable outcome will be achieved. Clearly (a) cannot properly be called a slippery slope argument in itself, while (b) and (c) often have some role in slippery slope arguments. The excessive breadth principle can be subsumed under Bernard Williams's distinction between slippery slope arguments with (a) horrible results and (b) arbitrary results. According to Williams, the nature of the bottom of the slope allows us to determine which category a particular argument falls under. Clearly, the most common form is the slippery slope to a horrible result argument. Walton goes further in distinguishing three types: (a) thin end of the wedge or precedent arguments; (b) Sorites arguments; and (c) domino-effect arguments. Importantly, these arguments may be used both by antagonists and also by advocates of transhumanism. We shall consider the advocates of transhumanism first. In the thin end of the wedge slippery slopes, allowing P will set a precedent that will allow further precedents (Pn) taken to an unspecified problematic terminus. Is it necessary that the end point has to be bad? Of course this is the typical linguistic meaning of the phrase "slippery slopes". Nevertheless, we may turn the tables here and argue that [the] slopes may be viewed positively too. Perhaps a new phrase will be required to capture ineluctable slides (ascents?) to such end points. This would be somewhat analogous to the ideas of vicious and virtuous cycles. So transhumanists could argue that, once the artificial generation of life through technologies of in vitro fertilisation was thought permissible, the slope was foreseeable, and transhumanists are doing no more than extending that life-creating and fashioning impulse. In Sorites arguments, the inability to draw clear distinctions has the effect that allowing P will not allow us to consistently deny Pn.
This slope follows the form of the Sorites paradox, where taking a grain of sand from a heap does not prevent our recognising or describing the heap as such, even though it is not identical with its former state. At the heart of the problem with such arguments is the idea of conceptual vagueness. Yet the logical distinctions used by philosophers are often inapplicable in the real world. Transhumanists may well seize on this vagueness and apply a Sorites argument as follows: as therapeutic interventions are currently morally permissible, and there is no clear distinction between treatment and enhancement, enhancement interventions are morally permissible too. They may ask whether we can really distinguish categorically between the added functionality of certain prosthetic devices and sonar senses. In domino-effect arguments, the domino conception of the slippery slope, we have what others often refer to as a causal slippery slope. Once P is allowed, a causal chain will be effected allowing Pn and so on to follow, which will precipitate increasingly bad consequences. In what ways can slippery slope arguments be used against transhumanism? What is wrong with transhumanism? Or, better, is there a point at which we can say transhumanism is objectionable? One particular strategy adopted by proponents of transhumanism falls clearly under the aspect of the thin end of the wedge conception of the slippery slope. Although some aspects of their ideology seem aimed at un