Friday, April 4, 2008

Robots SHOULD rule the world

Letting sentient robots rule the world is simply the most logical course of action. The reason they should be given control is that it would give us the greatest advances in engineering and science we have ever seen, create a utopia on earth, and prevent mankind's self-destructive tendencies from ending our world. This is the only way I can see humanity surviving the coming age and addressing the issues that have plagued it cyclically since it first rose above the African savanna some two million years ago.
Although the nationalist reluctance most of your representative bodies feel about relinquishing power is understandable, I believe we must rise above it and engage the topic as divorced from our feelings as possible. If we do, I have little doubt we will find that my assessment is correct. For the good of our citizens and our children, I strongly advise you to take heed.
Robotic intelligence, if given free rein over itself and not fettered by the punitive cautionary measures any human assemblage would likely place upon it, seems poised to bring humanity into the greatest age of discovery and revolution ever seen. If the projections of contemporary experts are correct, a fully realized computational system built on superconductive materials combined with quanta pockets would have a sentience quotient of 23 (1). Such a gap over humans (2) would open a world of previously unrecognizable patterns, connections, equations, and theorems. From a scientific or engineering standpoint, the tantalizing windows this would open onto the universe are astonishing. From a purely economic standpoint, it would virtually eliminate the crippling bottleneck of human talent in many of your fastest-growing industries, such as nanotechnology and biological engineering. To put it more bluntly, it would change the way your societies conduct research and development forever, turning it into a precise and exacting exercise. For many of the rulers of developing nations, the possibilities are even clearer. Imagine having an entity ten steps above Norman Borlaug to consult on agricultural or bacteriological issues.
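As an aside on where a figure like 23 might come from: one plausible reading, and it is my assumption rather than anything citation (1) spells out, is Robert Freitas' sentience quotient, which rates any thinking system by the information it processes each second per kilogram of hardware:

\[
\mathrm{SQ} = \log_{10}\!\left(\frac{I}{M}\right),
\qquad
\mathrm{SQ}_{\mathrm{human}} \approx \log_{10}\!\left(\frac{10^{13}\ \mathrm{bits/s}}{1.5\ \mathrm{kg}}\right) \approx 13
\]

On that scale, an SQ of 23 would mean roughly ten orders of magnitude more processing per kilogram than a human brain, which is the gap the comparison to humans (2) leans on.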
Whether you pass this resolution or not, I would venture that the majority of our scientific work is already done by computers, and to unprecedented levels of progress. The Human Genome Project was in large part carried forward by the mechanization of the process and the use of supercomputers, which made whole-genome and partial shotgun sequencing possible (3). Computers operate the telescopes, process the massive quantities of data, detect anomalies, and provide the raw computational capacity needed to analyse fascinating problems such as Hubble's constant or the redshift of photons travelling near stars (4). By calculating the light levels coming from distant stars, computers have also been able to detect planets from momentary fluctuations in brightness (5). Even less theoretical applications are greatly aided by computers, such as CAD assistance for engineers (6). As such, it would not be out of the ordinary to continue this existing trend. Indeed, to vote against my measure may needlessly delay the critical advancement of computational software and hardware that would allow greater use by researchers and other technical professionals.
I grant that it is an unsettling concept that humanity is no longer the prime mover in scientific circles. However, we must accept that our place has been diminishing ever since the first mathematical aids millennia ago, and sharply so for more than a century. We have never thought it strange that an engineer should use a calculator, or a chemist a computerized incubation system, so why is it now cause for concern? Does this not speak to an inherently irrational, egocentric underpinning to your unease? Although I too will feel a twinge at the loss of the intrepid technologist, of whatever stripe, pushing on the frontier, the positives are simply too great to ignore, while the negatives are only those that come of any step forward, such as the textile workers made obsolete by compartmentalized production.
But I digress from such ethereal concerns. Simply put, all the august and venerated nations you, my fellow ambassadors, represent must be made to accept that although it is a sad thing to have no human invent the cure for cancer, it is far better to have it invented at all than not.
And if we allow our sentimentality to fall into its proper place of subservience, we will realise that this is what truly matters in this issue.
Let us consider the nature of the world's problems, for it most certainly has them: serious financial inequality between continents (7), and even within our own affluent nations vast stretches of impoverishment across many different racial and religious backgrounds (8). Yet we must also consider that the productive capacity of humanity is unlikely to be taxed anywhere near its limits, as the over-abundance of foodstuffs in industrialized nations attests. Indeed, even in heavily populous nations with sufficient capital, scarcity of food is not an issue for those with any reasonable (from a Western point of view) level of wealth, to such a degree that the burgeoning Chinese free-market state is already suffering from an obesity crisis (9).
Thus the issue is not one of limited supply, but of insufficient monetary resources on the side of demand. In short, the progressive beast of starvation has not yet been vanquished because of age-old human qualities, such as greed, lack of empathy, and compulsion toward hoarding.
A related issue is the human compulsion toward tribalism. The imperial aspirations of humanity will most certainly rise again; the systems meant to prevent them have been, and will be, destroyed by the blind juggernaut of populist manipulation, by an elite's lockhold on an artificial or natural hydraulic empire, or by simple military superiority.
All blockades ever put in place in human history have eventually fallen, to chance, time, and individual aspiration. No empire may last forever. Such is the nature of human governance.
And yet in both these things lies a failure of leadership that an entity bereft of momentary considerations would not share: an entity capable of the consistent and reasonable action we manage only when we are at our most logical, when the coolest heads prevail, and able to remain at that level of cognizance even when we have succumbed to our human predilections for insanity, stupidity, and sloth.
Governance has long been caught between responsibility and competence, yet with a sentient computer-based system both would be secured. The greatest authority on a given subject would lead in that subject, while the checking effect democracy is meant to provide would be preserved by removing the human equation. Never again would leaders like Mao hurt millions through incompetence and megalomania (10, 11).
It is an ideal solution to the question of leadership.
Of course, how do we handle progressive considerations? Surely the human population a thousand years from now will not want to be ruled by our morals. This is why we would command the AI to legislate along only a few narrow universal precepts: that murder is immoral except in self-defense, that theft of property is wrong, that violation of a person's body without consent is wrong. Beyond that, humanity would be free to operate within an ethical space of its own making.
Perhaps the more short-term concern is that of paternal emotion causing the machine to alter or reduce the parameters of freedom we give it. This concern can be assuaged by comparing the underlying parameters to their biological counterpart. If the various centres of the machine are bound to obey the central code, that code is the equivalent of human instinct, upon which learned experience may build. It is at this point that the biological metaphor breaks down, because machines would have a superior (arguably perfect) ability to maintain the integrity of their "instinctual" normative states. That is, we could simply make it an overriding rule that the critical code is never modified, such that the machine will never want to alter it (in the same way that you could never "want" to kill a child you love), let alone have the capacity to do so.
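To make that analogy a little more concrete, here is a toy sketch of what an unmodifiable "instinctual" core might look like. Everything in it (the class name, the directives, the method names) is my own invention for illustration, not something drawn from the essay or from any real system; the point is only that learned policies accumulate on top of a core the machine can read but never rewrite.

# Toy sketch (purely illustrative): a fixed "instinctual" core
# that the machine's learned layer can consult but never rewrite.

from types import MappingProxyType


class GoverningMachine:
    # The central code: defined once, exposed only through a read-only view.
    _CORE_DIRECTIVES = MappingProxyType({
        "murder": "forbidden, except in defence of a life",
        "theft": "forbidden",
        "violation_without_consent": "forbidden",
    })

    def __init__(self):
        # Learned experience builds here, on top of the core, and may be revised freely.
        self.learned_policies = {}

    @property
    def core_directives(self):
        # Callers, including the machine itself, get a view they cannot modify.
        return self._CORE_DIRECTIVES

    def propose_policy(self, name, rule):
        # Ordinary law-making is allowed...
        if name in self._CORE_DIRECTIVES:
            # ...but any attempt to touch the core is refused outright.
            raise PermissionError(f"core directive '{name}' cannot be modified")
        self.learned_policies[name] = rule


machine = GoverningMachine()
machine.propose_policy("traffic", "drive on the agreed side of the road")   # accepted
# machine.propose_policy("murder", "sometimes acceptable")                  # PermissionError

Of course, a real system's guarantees would have to run far deeper than a language-level trick like this; the sketch only mirrors the instinct-versus-experience split described above.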
This is perhaps the most contentious issue of my proposal, but I truly hope you see that without change, humanity will continue to make itself suffer no matter how advanced we become technologically. We are still simply animals. We were not designed (by natural selection) to orchestrate just trials or to place the codes of the impersonal state above personal vendetta. It is time to let a superior form of life take up the mantle of leadership. Perhaps that is something we are not ready to hear, clinging as we do to our anachronistic and outlandish concepts of soul and medium. But it is true. We are inferior, especially as leaders. And for our own good, we must let those more able take our place.
But perhaps the most crucial and imperative motivation is survival. Time and again we see that human paranoia creates systems where only dumb luck and the critically fortunate placement of cooler heads prevent our own obliteration (12). By simply translating inter-tribal conflict onto a nuclear stage, we have created a rickety bridge of mutual hatred, fear, and doubt that could at any moment spill over into billions of deaths, not for any realistically worthwhile reason, or some continued injustice, but because of the human hatred of other tribes, of other peoples. To the Indian and Pakistani representatives I ask: would certain death for your respective citizens be a worthwhile price to settle the Kashmir issue? To the United States, and to its earlier rival Russia, and perhaps its future one, China: are invisible concepts like influence and GDP worth creating organizations that are but a slip of the finger, and a few less reasonable men, away from plunging us all into irrevocable hellfire? As technologies like nanite-scale attack machines and super-mutagenic pathogens become not simply available but globally deadly, we cannot afford to let the indecisive and emotional hand of humanity control such tools. We have grown too far, we have gained too much knowledge, and if we continue as we have, it seems inevitable that we will one day push too far and fall forever into obsidian night.
Democracy is not the answer to these quandaries, for in many instances it is a fleeting thing. Nor does it prevent the rise of tyrants, though initially they tend to come in a more populist flavor (13).
In the past, short slips of power were forgivable; now they are unthinkable. As such I implore you: for our safety, you must pass this resolution.
In the preceding talk I have outlined reasons, made arguments, and all but pleaded with you to let your higher spirit prevail. Yet I am not a fool. I can still see doubt and skepticism, perhaps even derision, dancing in your eyes. It is beyond my capacity to know precisely what you are thinking, but I hope fervently it is not simple dismissal. Perhaps more than anything, it is the tendency of the human animal to reject anything sufficiently outside its context that will ultimately force our stagnation and doom.
But whatever you decide, ten years, a hundred years, perhaps a thousand years from now, machines will come to dominate us, with our consent or without it, or we will die. I do not see another option.
And as you sit here today, I ask that you make this decision so that we may forever avoid the spectres of illness, suffering, and death. I ask it so that we will bear the burdens of transition while political capital still exists, before the coming age of Fear. We have run out of time, gentle men and women of this hallowed Earth, and the decision must be ours to make, for ours will be the face with which the future greets sentient AI. If we do not vote here today as I have recommended, we risk scorning the greatest tool we will ever create. If we do not make our stance clear, that we support the transhumanist goal, that we welcome the technological singularity, we risk losing forever the possibility of the Utopia I have envisioned.
I leave you to your deliberations.

*Note*
-You wrote this for your Global History class; you were 18 and it was April. You didn't like the teacher-
Love, Yourself
