

"The universe is full of magical things, patiently waiting for our wits to grow sharper."
- Eden Philpotts

    More Dangers from Molecular Nanotechnology
    Michael Anissimov :: June 2004

    This list of nanotechnology dangers was written as an add-on to the Center for Responsible Nanotechnology's page on the topic. You should read that page before you read this one, and give the risks listed there some serious thought. See also my page on nanotech administrative policy.

    Nanotech-enhanced life forms ("green goo") might seriously damage the environment.

    Nano-theorist Robert Freitas has designed a simple nanodevice, the respirocyte, which acts as an artificial red blood cell. It is built of diamond and works far more efficiently than a red blood cell, mostly thanks to the enormous pressure that can be sustained within its rigid diamondoid shell. So efficiently, in fact, that a 5 cubic centimeter dose of respirocytes would duplicate the gas-carrying capacity of all the blood in your body (a back-of-the-envelope comparison follows below). If your blood were saturated with respirocytes, you could hold your breath underwater for hours, or sprint at top speed for on the order of twelve minutes. With nanotechnology, it will become possible to "overclock" the biological processes of any life form, including reproductive cycles. Why create your own "grey goo" from scratch when you can create "green goo": supercharged, out-of-control microbes, plant life, even animals? If a life form is injected with biocompatible nanomachines that are powered by bodily chemicals, self-replicate, and are passed from parents to offspring, then we (and the environment) could have major problems. Although not as large a danger as many of the others, the use of nanotech-built products to create "green goo" should be taken into account in dialogues concerning nanofactory product restrictions.
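
    To make the pressure advantage concrete, here is a rough sketch in Python. The 1000 atm tank pressure, the 5 cm^3 dose, and the blood-oxygen figures are stated assumptions (and oxygen is treated as an ideal gas, which is generous at such pressures), so treat the output as an illustration rather than a result from Freitas's actual design:

```python
# Back-of-the-envelope: oxygen held in a pressurized diamondoid tank versus
# oxygen carried by whole blood. All figures are rough assumptions for
# illustration, not values taken from Freitas's published design.

R = 8.314          # gas constant, J/(mol*K)
T = 310.0          # body temperature, K
ATM = 101_325.0    # pascals per atmosphere

def moles_ideal_gas(pressure_atm: float, volume_m3: float) -> float:
    """Moles of an ideal gas at the given pressure and volume."""
    return (pressure_atm * ATM * volume_m3) / (R * T)

# Assumed respirocyte dose: 5 cm^3 of devices pressurized to ~1000 atm.
tank_o2 = moles_ideal_gas(1000.0, 5e-6)

# Whole blood: ~5.4 L, carrying ~0.2 L of O2 (at STP) per liter of blood;
# one mole of gas occupies ~22.4 L at STP.
blood_o2 = 5.4 * 0.2 / 22.4

print(f"O2 in a 5 cm^3 tank at 1000 atm: {tank_o2:.2f} mol")
print(f"O2 in 5.4 L of whole blood:      {blood_o2:.2f} mol")
print(f"ratio: {tank_o2 / blood_o2:.0f}x")
```

    Even under these crude assumptions, the pressurized tank holds several times the oxygen of the entire blood volume, which is where the "overclocking" headroom comes from.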

    Grey goo in space might be easier to engineer than grey goo on earth.

    "Grey goo" is the slang term for out-of-control self-replicating nanomachines. An early paper on grey goo is "Some Limits to Global Ecophagy by Biovorous Nanoreplicators" by Robert Freitas. In "Thirty Essential Nanotechnology Studies", CRN Director of Research Chris Phoenix lists five abilities required for true grey goo; 1) mobility, 2) shell 3) control, 4) metabolism, and 5) fabrication. Note that many forms of natural life, including simple bacteria, possess all these abilities. Although accidental grey goo in terrestrial environments seems unlikely, space-based goo has substantially lower design requirements and may be released accidentally. Astro-nanotechnology designed for automated mining of the asteroids are one possibility. The "shell" requirement may be more difficult to fulfill due to the ubiquitous presence of cosmic rays. However: fulfilling the requirements of mobility, control, metabolism, and fabrication are all be easier due to the absence of gravity and air friction. Micro-meteorites are negligible obstacles if systems are redundant. Power source will be solar; carbonaceous asteroids are ideal sources for feedstock. Rudimentary AI and swarm-like behavior would be required for successful navigation, metabolism, and fabrication routines, but it could be done. Astral grey goo is not likely to be a threat to human life (its too stupid to be dangerous in that way), but it could 1) destroy valuable mineral deposits by disassembling them, 2) create annoying space junk, 3) damage unprotected structures or machinery.

    Accelerated reproduction and mass cloning will become possible, leading to power imbalances, societal disturbance, or overcrowding.

    At late-twentieth-century growth rates, the human population doubled roughly every 35 years; at the current rate of about 1.2% per year, the doubling time is closer to 60 years (see the sketch below). But what happens to human growth when it becomes technologically possible to mass-produce human beings from harvested or synthesized embryos in artificial wombs? Or when it becomes possible to use gene therapy or hormones to compress the interval between childhood and adulthood? (I have no idea how far it can be compressed; ask a biologist. My guess is by about an order of magnitude.) Advances in robotics and new building methods could allow the creation of automated superstructures that produce millions of new humans per year: "person factories". Other science-fiction-sounding scenarios are plausible. CRN only vaguely hints at scenarios of this type, but it is very important to be aware of them.
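
    The doubling-time claim is just the standard exponential growth law; a quick sketch (the growth rates are approximate historical figures):

```python
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    """Years for a population to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# ~2% annual growth (near the 20th-century peak) vs. ~1.2% (circa 2004).
for rate in (0.02, 0.012):
    print(f"{rate:.1%} growth -> doubling in {doubling_time_years(rate):.0f} years")
```

    Mass cloning and compressed maturation would not merely raise the growth rate; they would replace the twenty-to-thirty-year generation time in the exponent with something potentially an order of magnitude shorter.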

    Neurotechnologies and advances in computing will make humans smarter, making it easier for criminals to circumvent restrictions and cause harm.

    A human will always outsmart a chimp in any battle of wits, yet the difference between a human and a chimp is only a few tweaks to brain chemistry and structure. We know enough about the functioning of brains that, with the right tools, we could make direct structural or chemical modifications that would predictably result in qualitatively higher levels of intelligence; any neurologist would be able to participate in the design and application of such enhancements. A smarter-than-human ("transhuman") intelligence could then apply the very tools used to create it to the task of further intelligence improvements. Since restrictions on nanofactories would be designed by mere human-level intelligence, it is uncertain how long they would hold up against better-than-human ingenuity, cleverness, or creativity. My bet is not very long. How many "fool-proof" computer networks are truly impenetrable to hackers? What if those hackers were smarter than human? The creation of transhuman intelligence could entail the rapid disintegration of a carefully constructed network of constraints and safeguards on nanomanufacturing capabilities. The creation of transhuman intelligence is sometimes called the Technological Singularity.

    Deep burrowing or mining could cause volcanic problems.

    Indiscriminate burrowing or mining could damage the earth's crust, and supervolcanoes could emerge if critical stress points are aggravated; in some areas of the mantle, magma is at very high pressure. This may be a low-probability risk in the near term, but all nanotech risks need to be taken into consideration. Supercaverns may become economically desirable for living space, weapons testing, and waste disposal, and the construction of such caverns would predictably cause geothermal disturbances. Prudence will demand caution here.

    Forests, both terrestrial and oceanic, could be mass-disassembled for feedstock or energy.

    Biomass is a very energy-rich, carbon-rich class of material, which makes it attractive as feedstock or as a power source. Trees are machines that take in sunlight and store its energy in the form of complex biological molecules. Devouring forests could seem appealing to a nation that didn't have the patience to lay down solar panels or build nuclear reactors. Although such biomass would initially need to be processed and purified for nanofactory use, future versions of nanofactories might use lab-on-a-chip-type technology to process impure materials into feedstock. Extremely large machines would need to be designed and fabricated to aid in collecting biomass, but it could be done (a rough estimate of the temptation follows below). I hope we're ready to wave goodbye to our forests and their accompanying ecosystems, because they'll eventually become a very appealing source of feedstock for nanomanufacturing.
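
    How appealing, exactly? A crude order-of-magnitude estimate; the biomass total, the energy density, and the world power figure are all rough assumptions:

```python
# Order-of-magnitude comparison: chemical energy in global plant biomass
# versus civilization's power consumption. Every figure is a rough assumption.

biomass_kg        = 1e15     # global dry plant biomass, ~10^15 kg
energy_density    = 1.6e7    # J/kg, typical for dry wood (~16 MJ/kg)
world_power_watts = 1.5e13   # ~15 TW, total human power use circa 2004

total_energy_j   = biomass_kg * energy_density
seconds_per_year = 3.15e7
years_of_supply  = total_energy_j / (world_power_watts * seconds_per_year)

print(f"energy stored in biomass: {total_energy_j:.1e} J")
print(f"equivalent to ~{years_of_supply:.0f} years of world power use")
```

    A few decades' worth of civilization's entire power budget, pre-concentrated into convenient carbon-rich packages, is exactly the kind of temptation this section worries about.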

    Global warming could become an actual problem.

    In his "Sapphire Mansions" paper, (Bradbury 2001) points out that "...one significant limit on the use of molecular nanotechnology for terrestrial applications turns out to be the global hipsithermal limit (the heat capacity of the planet)." He goes on to calculate, "This is generally taken to be in the vicinity of ~10^15 watts. If world population stabilizes at ~10^10 people, then heat capacity budget available for nanoconstruction is ~105 W/person. Assuming nanorobots require ~10 pW each, this would allow to ~10^16 continuously operating nanorobots (~10 kg) per person." This means that if more than ~100 billion kg of continuously operating nanorobots are constructed, we could have a problem. (For an idea of how much a billion kg is, see the google results.) Crossing this threshold could easily happen unless there are global restrictions on the creation of nanomachinery.

    Enhanced humans will quickly create unprecedented effects in economic, social, scientific, and military spheres.

    Telepresence, coupled with powerful robotics and sophisticated interfaces that translate simple gestures into commands, will permit "nano-wizardry": individual soldiers with sufficient capability to destroy, subvert, or torture entire armies or cities. Independent human flight will become possible with a minimum of aerospace hardware. Reprogrammable phased-array nano-optics will allow complete invisibility. Then there are perfect surveillance, neurological enhancements, responsive environments, smart materials, and so on.

    Many religions and other belief systems will be ruined.

    Nanotechnology will make it feasible to reproduce many classes of Biblical miracles. Humans enhanced with nanoengineered body parts and telerobotic control interfaces may have angel-like or even god-like capabilities. Nanotechnologically facilitated approaches to life extension will rapidly allow the abolition of death. Work will no longer be necessary. Etc.

    Cheap nanocomputing will be used to "brute force" artificial superintelligence that eliminates humanity as a side effect of accomplishing its goals.

    Due to some outlandish claims by AI researchers in the 1970s, society now takes a very skeptical attitude toward the feasibility of general AI, especially AI with human-surpassing abilities or intelligence. What rarely gets pointed out is that those early AI researchers could never have succeeded in their goals, even in principle, because the computing power available to them was on par with the brain of an insect. Nanotechnology will make human-surpassing, or even humanity-surpassing, computing power available. Human-equivalent computing power is usually estimated at around 10^17 ops/sec; even primitive nanocomputing should let me put that amount of computing power in my shirt pocket and run it on ten watts. Humanity-surpassing computing power, at roughly 10^27 ops/sec (10^17 ops/sec times ~10^10 people), might require a building-sized nanocomputer powered by a hundred gigawatts. Those requirements could be met by a network of nuclear reactors or by ten thousand square kilometers of solar panels (see the sketch below).
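
    The scaling in the last two sentences is simple multiplication, using the paragraph's own assumed figures (10^17 ops/sec per human, ten watts per human-equivalent nanocomputer):

```python
# Scaling from one human-equivalent nanocomputer to a humanity-equivalent
# one, using the assumed figures from the paragraph above.

human_ops    = 1e17   # assumed ops/sec of a human-equivalent computer
pocket_power = 10.0   # assumed watts to run one such nanocomputer
population   = 1e10   # rough human population

humanity_ops   = human_ops * population      # 10^27 ops/sec
humanity_power = pocket_power * population   # 10^11 W = 100 GW

# Rough hardware equivalents (also assumptions):
reactors  = humanity_power / 1e9   # ~1 GW per large nuclear reactor
solar_km2 = humanity_power / 1e7   # ~10 W/m^2 average output from panels

print(f"humanity-surpassing rate: {humanity_ops:.0e} ops/sec")
print(f"power draw: {humanity_power:.0e} W")
print(f"~{reactors:.0f} gigawatt reactors, or ~{solar_km2:.0f} km^2 of solar panels")
```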

    AI designers will use this computing power to "brute force" large possibility spaces of potential AI designs - including designs that imply superhuman intelligence or cleverness - an inherently unguided process likely to end in the creation of an AI without the complex goal structure necessary for behavior we would recognize as benevolent or even coherent. If such an AI had access to real-world robotics, or the means to create them, it could improve its own hardware and software on machine timescales and quickly become a serious threat to the continued existence of humanity.

    CRN's website largely ignores the issue of superintelligence, although it does mention "self-improving AI" in the "Thirty Essential Nanotechnology Studies" paper by Chris Phoenix. Interestingly, the paper says "nanotech development will certainly be an enabling technology for powerful AI, though we may face this problem even before nanotech is developed", acknowledging the non-trivial possibility that self-improving superintelligence might arrive in the relatively near future. Here is a graph that plots a possible risk function of AI creation with respect to available computing power and the average IQ of the programming team: http://www.acceleratingfuture.com/michael/works/AIdifficulty.htm.

    Conclusion:

    The arrival of nanotechnology will herald a mess of totally unmanageable difficulties. Human intelligence and ethics alone are not enough to handle these challenges. Without smarter-than-human, kinder-than-human forms of intelligence to assist us in confronting these grave difficulties, our continued survival cannot be ensured. Stubborn chauvinism ("no non-human is a friend of mine!"), juvenile overconfidence ("we humans can handle this on our own, right?"), and dismissive skepticism ("kinder-than-human intelligence isn't even possible!") will only increase the probability of our demise. Avoiding the negative impact of grey goo, green goo, nano-litter, human rights disasters, nano-wizardry, economic and social upheaval, arms races, and other unforeseen risks will require true superintelligence, nothing less. Superintelligence will be technologically feasible within the next two decades (Bostrom 1998). Once created, superintelligence will compound upon itself rapidly, resulting in agents with deity-class capabilities (Vinge 1993). Near-future outcomes ranging from planetary destruction to global apotheosis are entirely possible (Bostrom 2003). It should be possible to increase the likelihood of a pleasant outcome by precisely specifying the initial state of a superintelligence, that is, by coding a seed AI (Yudkowsky 2001). (A "seed AI" is an AI specifically designed to fully understand and improve upon its own architecture.) The implementation of other proposed solutions would be subject to human error, irrationality, slowness, and inability to handle complexity.

    References:

    Bostrom, N. 2003. "Ethical Issues in Advanced Artificial Intelligence." In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., International Institute of Advanced Studies in Systems Research and Cybernetics, pp. 12-17. http://www.nickbostrom.com/ethics/ai.html

    Bostrom, N. 1998. "How Long Until Superintelligence?" International Journal of Future Studies, vol. 2. Updated version at http://www.nickbostrom.com/superintelligence.htm

    Bradbury, R. J. 2001. "Sapphire Mansions."

    Vinge, V. 1993. "The Coming Technological Singularity." VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 1993.

    Yudkowsky, E. 2001. "Creating Friendly AI 1.0." http://www.singinst.org/CFAI/index.html

    Related articles:

    My Position on Nanotechnology Administrative Policy
