It’s not clear to me whether the Singularity is a technical belief system or a spiritual one.
The
Singularity–a notion that’s crept into a lot of skiffy, and whose most
articulate in-genre spokesmodel is Vernor Vinge–describes the black
hole in history that will be created at the moment when human
intelligence can be digitized. When the speed and scope of our
cognition is hitched to the price-performance curve of microprocessors,
our "prog-ress" will double every eighteen months, and then every
twelve months, and then every ten, and eventually, every five seconds.
Singularities
are, literally, holes in space from which no information can emerge,
and so SF writers occasionally mutter about how hard it is to tell a
story set after the information Singularity. Everything will be
different. What it means to be human will be so different that what it
means to be in danger, or happy, or sad, or any of the other elements
that make up the squeeze-and-release tension in a good yarn will be
unrecognizable to us pre-Singletons.
It’s
a neat conceit to write around. I’ve committed Singularity a couple of
times, usually in collaboration with gonzo Singleton Charlie Stross,
the mad antipope of the Singularity. But those stories have the same
relation to futurism as romance novels do to love: a shared jumping-off
point, but radically different morphologies.
Of
course, the Singularity isn’t just a conceit for noodling with in the
pages of the pulps: it’s the subject of serious-minded punditry,
futurism, and even science.
Ray
Kurzweil is one such pundit-futurist-scientist. He’s a serial
entrepreneur who founded successful businesses that advanced the fields
of optical character recognition (machine-reading) software,
text-to-speech synthesis, synthetic musical instrument simulation,
computer-based speech recognition, and stock-market analysis. He cured
his own type 2 diabetes through a careful review of the literature and
the judicious application of first principles and reason. To a casual
observer, Kurzweil appears to be the star of some kind of Heinlein
novel, stealing fire from the gods and embarking on a quest to bring
his maverick ideas to the public despite the dismissals of the
establishment, getting rich in the process.
Kurzweil
believes in the Singularity. In his 1990 manifesto, The Age of
Intelligent Machines, Kurzweil persuasively argued that we were on the
brink of meaningful machine intelligence. A decade later, he continued
the argument in a book called The Age of Spiritual Machines,
whose most audacious claim is that the world’s computational capacity
has been slowly doubling since the crust first cooled (and before!),
and that the doubling interval has been growing shorter and shorter
with each passing year, so that now we see it reflected in the computer
industry’s Moore’s Law, which predicts that microprocessors will get
twice as powerful for half the cost about every eighteen months. The
breathtaking sweep of this trend has an obvious conclusion: computers
more powerful than people; more powerful than we can comprehend.
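To make the sweep concrete, here's a minimal sketch in Python of what a shrinking doubling interval does to a growth curve. The parameters are invented for illustration, not taken from Kurzweil's books:

```python
# Toy model: capacity doubles, and each doubling takes less time than
# the one before it. All numbers are illustrative assumptions.

def years_to_gain(factor, first_doubling_years=1.5, shrink=0.9):
    """Years needed to multiply capacity by `factor` when each doubling
    interval is `shrink` times as long as the previous one."""
    capacity, elapsed, interval = 1.0, 0.0, first_doubling_years
    while capacity < factor:
        capacity *= 2
        elapsed += interval
        interval *= shrink  # the doubling period itself keeps shortening
    return elapsed

# A millionfold gain takes twenty doublings: thirty years at a fixed
# eighteen months apiece, but far less once the intervals shrink.
print(years_to_gain(1_000_000))  # ~13.2 years with these toy parameters
```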
Now Kurzweil has published two more books, The Singularity Is Near: When Humans Transcend Biology (Viking, Spring 2005) and Fantastic Voyage: Live Long Enough to Live Forever
(with Terry Grossman, Rodale, November 2004). The former is a
technological roadmap for creating the conditions necessary for ascent
into Singularity; the latter is a book about life-prolonging
technologies that will assist baby-boomers in living long enough to see
the day when technological immortality is achieved.
See what I meant about his being a Heinlein hero?
I
still don’t know if the Singularity is a spiritual or a technological
belief system. It has all the trappings of spirituality, to be sure. If
you are pure and kosher, if you live right and if your society is just,
then you will live to see a moment of Rapture when your flesh will
slough away leaving nothing behind but your ka, your soul, your
consciousness, to ascend to an immortal and pure state.
I wrote a novel called Down and Out in the Magic Kingdom
where characters could make backups of themselves and recover from them
if something bad happened, like catching a cold or being assassinated.
It raises a lot of existential questions, most prominently: are you
still you when you’ve been restored from backup?
The
traditional AI answer is the Turing Test, invented by Alan Turing, the
gay pioneer of cryptography and artificial intelligence who was forced
by the British government to take hormone treatments to "cure" him of
his homosexuality, culminating in his suicide in 1954. Turing cut
through the existentialism about measuring whether a machine is
intelligent by proposing a parlor game: a computer sits behind a locked
door with a chat program, and a person sits behind another locked door
with his own chat program, and they both try to convince a judge that
they are real people. If the computer fools a human judge into thinking
that it’s a person, then to all intents and purposes, it’s a person.
So
how do you know if the backed-up you that you’ve restored into a new
body–or a jar with a speaker attached to it–is really you? Well, you
can ask it some questions, and if it answers the same way that you do,
you’re talking to a faithful copy of yourself.
Sounds good. But the me who sent his first story into Asimov’s seventeen years ago couldn’t answer the question, "Write a story for Asimov’s" the same way the me of today could. Does that mean I’m not me anymore?
Kurzweil has the answer.
"If
you follow that logic, then if you were to take me ten years ago, I
could not pass for myself in a Ray Kurzweil Turing Test. But once the
requisite uploading technology becomes available a few decades hence,
you could make a perfect-enough copy of me, and it would
pass the Ray Kurzweil Turing Test. The copy doesn’t have to match the
quantum state of my every neuron, either: if you meet me the next day,
I’d pass the Ray Kurzweil Turing Test. Nevertheless, none of the
quantum states in my brain would be the same. There are quite a few
changes that each of us undergoes from day to day, and we don't closely
examine the assumption that we are the same person.
"We
gradually change our pattern of atoms and neurons but we very rapidly
change the particles the pattern is made up of. We used to think that
in the brain–the physical part of us most closely associated with our
identity–cells change very slowly, but it turns out that the components
of the neurons, the tubules and so forth, turn over in only days. I’m a completely different set of particles from what I was a week ago.
"Consciousness
is a difficult subject, and I’m always surprised by how many people
talk about consciousness routinely as if it could be easily and readily
tested scientifically. But we can’t postulate a consciousness detector
that does not have some assumptions about consciousness built into it.
"Science
is about objective third-party observations and logical deductions from
them. Consciousness is about first-person, subjective experience, and
there’s a fundamental gap there. We live in a world of assumptions
about consciousness. We share the assumption that other human beings
are conscious, for example. But that breaks down when we go outside of
humans, when we consider, for example, animals. Some say only humans
are conscious and animals are instinctive and machinelike. Others see
humanlike behavior in an animal and consider the animal conscious, but
even these observers don’t generally attribute consciousness to animals
that aren’t humanlike.
"When
machines are complex enough to have responses recognizable as emotions,
those machines will be more humanlike than animals."
The
Kurzweil Singularity goes like this: computers get better and smaller.
Our ability to measure the world gains precision and grows ever
cheaper. Eventually, we can measure the world inside the brain and make
a copy of it in a computer that’s as fast and complex as a brain, and
voila, intelligence.
Here
in the twenty-first century we like to view ourselves as ambulatory
brains, plugged into meat-puppets that lug our precious grey matter
from place to place. We tend to think of that grey matter as
transcendently complex, and we think of it as being the bit that makes
us us.
But brains aren’t that complex, Kurzweil says. Already, we’re starting to unravel their mysteries.
"We
seem to have found one area of the brain closely associated with
higher-level emotions, the spindle cells, deeply embedded in the brain.
There are tens of thousands of them, spanning the whole brain (maybe
eighty thousand in total), which is an incredibly small number. Babies
don’t have any, most animals don’t have any, and they likely only
evolved over the last million years or so. Some of the high-level
emotions that are deeply human come from these.
"Turing
had the right insight: base the test for intelligence on written
language. Turing Tests really work. A novel is based on language: with
language you can conjure up any reality, much more so than with images.
Turing almost lived to see computers doing a good job of performing in
fields like math, medical diagnosis and so on, but those tasks were
easier for a machine than demonstrating even a child’s mastery of
language. Language is the true embodiment of human intelligence."
If
we’re not so complex, then it’s only a matter of time until computers
are more complex than us. When that comes, our brains will be
modelable in a computer and that's when the fun begins. That's the
thesis of Spiritual Machines, which even includes a (Heinlein-style) timeline leading up to this day.
Now, it may be that a human brain contains n logic gates and runs at x cycles per second and stores z
petabytes, and that n and x and z are all within reach. It may be that
we can take a brain apart and record the position and relationships of
all the neurons and sub-neuronal elements that constitute a brain.
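For scale, here's a back-of-envelope version of that claim, using order-of-magnitude figures Kurzweil has used elsewhere (treat them as assumptions; estimates vary widely):

```python
# Back-of-envelope brain capacity from rough order-of-magnitude figures.
# These are assumptions for illustration, not measurements.

neurons = 1e11                      # ~100 billion neurons
connections_per_neuron = 1e3        # ~1,000 synaptic connections each
calcs_per_connection_per_sec = 200  # ~200 calculations per second each

brain_cps = neurons * connections_per_neuron * calcs_per_connection_per_sec
print(f"one brain: {brain_cps:.0e} cps")  # ~2e+16 cps

# Times ~10^10 human brains gives the ~10^26 cps figure Kurzweil
# cites later in this piece.
print(f"all of humanity: {brain_cps * 1e10:.0e} cps")  # ~2e+26 cps
```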
But
there are also a nearly infinite number of ways of modeling a brain in
a computer, and only a vanishingly small (or possibly empty) fraction of
that space will yield a conscious copy of the original meat-brain.
Science fiction writers usually hand-wave this step: in Heinlein's The
Moon Is a Harsh Mistress, the gimmick is that once the computer becomes
complex enough, with enough "random numbers," it just wakes up.
Computer
programmers are a little more skeptical. Computers have never been
known for their skill at programming themselves–they tend to be no
smarter than the people who write their software.
But
there are techniques for getting computers to program themselves, based
on evolution and natural selection. A programmer creates a system that
spits out lots–thousands or even millions–of randomly generated
programs. Each one is given the opportunity to perform a computational
task (say, sorting a list of numbers from greatest to least) and the
ones that solve the problem best are kept aside while the others are
erased. Now the survivors are used as the basis for a new generation of
randomly mutated descendants, each based on elements of the code that
preceded them. By running many instances of a randomly varied program
at once, and by culling the least successful and regenerating the
population from the winners very quickly, it is possible to evolve effective software that performs as well or better than the code written by human authors.
Indeed,
evolutionary computing is a promising and exciting field that’s
realizing real returns through cool offshoots like "ant colony
optimization" and similar approaches that are showing good results in
fields as diverse as piloting military UAVs and efficiently
provisioning car-painting robots at automotive plants.
So
if you buy Kurzweil’s premise that computation is getting cheaper and
more plentiful than ever, then why not just use evolutionary algorithms
to evolve the best way to model a scanned-in human brain such that it "wakes up" like Heinlein’s Mike computer?
Indeed, this is the crux of Kurzweil's argument in Spiritual Machines: if
we have computation to spare and a detailed model of a human brain, we
need only combine them and out will pop the mechanism whereby we may
upload our consciousness to digital storage media and transcend our
weak and bothersome meat forever.
But
it’s a cheat. Evolutionary algorithms depend on the same mechanisms as
real-world evolution: heritable variation of candidates and a system
that culls the least-suitable candidates. This latter–the
fitness-factor that determines which individuals in a cohort breed and
which vanish–is the key to a successful evolutionary system. Without
it, there’s no pressure for the system to achieve the desired goal:
merely mutation and more mutation.
But
how can a machine evaluate which of a trillion models of a human brain
is "most like" a conscious mind? Or better still: which one is most
like the individual whose brain is being modeled?
"It is a sleight of hand in Spiritual Machines," Kurzweil admits. "But in The Singularity Is Near,
I have an in-depth discussion about what we know about the brain and
how to model it. Our tools for understanding the brain are subject to
the Law of Accelerating Returns, and we’ve made more progress in
reverse-engineering the human brain than most people realize." This is
a tasty Kurzweilism that observes that improvements in technology yield
tools for improving technology, round and round, so that the thing that
progress begets more than anything is more and yet faster progress.
"Scanning
resolution of human tissue–both spatial and temporal–is doubling every
year, and so is our knowledge of the workings of the brain. The brain
is not one big neural net; the brain is several hundred different
regions, and we can understand each region, we can model the regions
with mathematics, most of which have some nexus with chaos and
self-organizing systems. This has already been done for a couple dozen
regions out of the several hundred.
"We
have a good model of a dozen or so regions of the auditory and visual
cortex, how we strip images down to very low-resolution movies based on
pattern recognition. Interestingly, we don't actually see things; we
essentially hallucinate them in detail from what we see from these low
resolution cues. Past the early phases of the visual cortex, detail
doesn’t reach the brain.
"We are getting exponentially more knowledge. We can get detailed scans of neurons working in vivo,
and are beginning to understand the chaotic algorithms underlying human
intelligence. In some cases, we are getting comparable performance of
brain regions in simulation. These tools will continue to grow in
detail and sophistication.
"We
can be confident of reverse-engineering the brain in twenty years or
so. The reason that brain reverse engineering has not contributed much
to artificial intelligence is that up until recently we didn’t have the
right tools. If I gave you a computer and a few magnetic sensors and
asked you to reverse-engineer it, you might figure out that there’s a
magnetic device spinning when a file is saved, but you’d never get at
the instruction set. Once you reverse-engineer the computer fully,
however, you can express its principles of operation in just a few
dozen pages.
"Now there are new tools that let us see the interneuronal connections and their signaling, in vivo, and in real-time. We’re just now getting these tools and there’s very rapid application of the tools to obtain the data.
"Twenty
years from now we will have realistic simulations and models of all the
regions of the brain and [we will] understand how they work. We won’t
blindly or mindlessly copy those methods; we will understand them and
use them to improve our AI toolkit. So we’ll learn how the brain works
and then apply the sophisticated tools that we will obtain, as we
discover how the brain works.
"Once
we understand a subtle scientific principle, we can isolate, amplify, and
expand it. Air goes faster over a curved surface: from that insight we
isolated, amplified, and expanded the idea and invented air travel.
We’ll do the same with intelligence.
"Progress
is exponential–not just a measure of power of computation, number of
Internet nodes, and magnetic spots on a hard disk–the rate of paradigm
shift is itself accelerating, doubling every decade. Scientists look at
a problem and they intuitively conclude that since we’ve solved 1
percent over the last year, it’ll therefore be one hundred years until
the problem is exhausted: but the rate of progress doubles every
decade, and the power of the information tools (in price-performance,
resolution, bandwidth, and so on) doubles every year. People, even
scientists, don’t grasp exponential growth. During the first decade of
the human genome project, we only solved 2 percent of the problem, but
we solved the remaining 98 percent in five years."
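The genome-project example is just compounding. A toy calculation (my numbers, not Kurzweil's) shows why the linear intuition fails:

```python
# If the fraction of a problem solved doubles every year, a decade of
# apparent crawl is followed by a sudden finish. Illustrative numbers only.

solved, year = 0.00002, 0  # start with 0.002 percent of the problem solved
while solved < 1.0:
    year += 1
    solved *= 2
    if year == 10:
        print(f"year 10: {solved:.1%} solved")  # ~2.0% -- looks hopeless
print(f"done in year {year}")  # year 16: the "remaining 98%" took six years
```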
But
Kurzweil doesn’t think that the future will arrive in a rush. As
William Gibson observed, "The future is here, it’s just not evenly
distributed."
"Sure,
it’d be interesting to take a human brain, scan it, reinstantiate the
brain, and run it on another substrate. That will ultimately happen."
"But the most salient scenario is that we’ll gradually
merge with our technology. We’ll use nanobots to kill pathogens, then
to kill cancer cells, and then they’ll go into our brain and do benign
things there like augment our memory, and very gradually they’ll get
more and more sophisticated. There’s no single great leap, but there is
ultimately a great leap made up of many small steps.
"In The Singularity Is Near,
I describe the radically different world of 2040, and how we’ll get
there one benign change at a time. The Singularity will be gradual,
smooth.
"Really, this is about augmenting our biological thinking with nonbiological thinking. We have a capacity of 1026
to 1029 calculations per second (cps) in the approximately 1010
biological human brains on Earth and that number won’t change much in
fifty years, but nonbiological thinking will just crash through that.
By 2049, nonbiological thinking capacity will be on the order of a
billion times that. We’ll get to the point where bio thinking is
relatively insignificant.
"People
didn’t throw their typewriters away when word-processing started.
There’s always an overlap–it’ll take time before we realize how much
more powerful nonbiological thinking will ultimately be."
It’s well and good to talk about all the stuff we can do with technology, but it’s a lot more important to talk about the stuff we’ll be allowed
to do with technology. Think of the global freak-out caused by the
relatively trivial advent of peer-to-peer file-sharing tools:
Universities are wiretapping their campuses and disciplining computer
science students for writing legitimate, general-purpose software;
grandmothers and twelve-year-olds are losing their life savings;
privacy and due process have sailed out the window without so much as a
by-your-leave.
Even P2P’s worst enemies admit that this is a general-purpose technology with good and
bad uses, but when new tech comes along it often engenders a response
that countenances punishing an infinite number of innocent people to
get at the guilty.
What’s
going to happen when the new technology paradigm isn’t song-swapping,
but transcendent super-intelligence? Will the reactionary forces be
justified in razing the whole ecosystem to eliminate a few parasites
who are doing negative things with the new tools?
"Complex ecosystems will always have parasites. Malware [malicious software] is the most important battlefield today.
"Everything
will become software–objects will be malleable, we’ll spend lots of
time in VR, and computhought will be orders of magnitude more important
than biothought.
"Software is already complex enough that we have an ecological terrain that has emerged just as it did in the bioworld.
"That’s
partly because technology is unregulated and people have access to the
tools to create malware and the medicine to treat it. Today’s software
viruses are clever and stealthy and not simpleminded. Very clever.
"But
here’s the thing: you don’t see people advocating shutting down the
Internet because malware is so destructive. I mean, malware is
potentially more than a nuisance–emergency systems, air traffic
control, and nuclear reactors all run on vulnerable software. It’s an
important issue, but the potential damage is still a tiny fraction of
the benefit we get from the Internet.
"I
hope it’ll remain that way–that the Internet won’t become a regulated
space like medicine. Malware’s not the most important issue facing
human society today. Designer bioviruses are. People are concerned
about WMDs, but the most daunting WMD would be a designed biological
virus. The means exist in college labs to create destructive viruses
that erupt and spread silently with long incubation periods.
"Importantly,
a would-be bio-terrorist doesn’t have to put malware through the FDA’s
regulatory approval process, but scientists working to fix bio-malware do.
"In Huxley’s Brave New World,
the rationale for the totalitarian system was that technology was too
dangerous and needed to be controlled. But that just pushes technology
underground where it becomes less stable. Regulation gives the edge of power to the irresponsible who won’t listen to the regulators anyway.
"The
way to put more stones on the defense side of the scale is to put more
resources into defensive technologies, not create a totalitarian regime
of Draconian control.
"I
advocate a one hundred billion dollar program to accelerate the
development of anti-biological virus technology. The way to combat this
is to develop broad tools to destroy viruses. We have tools like RNA
interference, discovered just in the past two years, which blocks gene
expression. We could develop means to sequence the genes of a new virus
(SARS only took thirty-one days) and respond to it in a matter of days.
"Think
about it. There’s no FDA for software, no certification for
programmers. The government is thinking about it, though! The reason
the FCC is contemplating Trusted Computing mandates"–a system to
restrict what a computer can do by means of hardware locks embedded on
the motherboard–"is that computing technology is broadening to cover
everything. So now you have communications bureaucrats, biology
bureaucrats, all wanting to regulate computers.
"Biology
would be a lot more stable if we moved away from regulation–which is
extremely irrational and onerous and doesn’t appropriately balance
risks. Many medications are not available today even though they should
be. The FDA always wants to know: if we approve this, will it turn into
a thalidomide situation that embarrasses us on CNN?
"Nobody
asks about the harm that will certainly accrue from delaying a
treatment for one or more years. There's no political weight at all;
people have been dying from diseases like heart disease and cancer for
as long as we've been alive. Attributable risks get a hundred to a
thousand times more weight than unattributable risks."
Is
this spirituality or science? Perhaps it is the melding of both–more
shades of Heinlein, this time the weird religions founded by people who
took Stranger in a Strange Land way too seriously.
After
all, this is a system of belief that dictates a means by which we can
care for our bodies virtuously and live long enough to transcend them.
It is a system of belief that concerns itself with the meddling of
non-believers, who work to undermine its goals through irrational
systems predicated on their disbelief. It is a system of belief that
asks and answers the question of what it means to be human.
It’s
no wonder that the Singularity has come to occupy so much of the
science fiction narrative in these years. Science or spirituality, you
could hardly ask for a subject better tailored to technological
speculation and drama.