Artificial Intelligence Will Kill Our Grandchildren (Singularity)
Dr Anthony Berglas
(Copy at will but provide attribution)
There have been many exaggerated claims as to the power
of Artificial Intelligence (AI), but there has also been real progress.
Computers can drive cars across rough desert tracks,
understand speech, and prove complex mathematical theorems.
It is difficult to predict future progress, but if a computer ever became
about as good at programming computers as people are, then it could
program a copy of itself. This would lead to an explosion of
intelligence (now often referred to as the Singularity). And the
nature of evolution suggests that a
sufficiently powerful AI would probably destroy humanity.
This paper reviews technical progress in Artificial Intelligence and some
philosophical issues and objections. It then describes
the danger and proposes a radical
solution, namely to limit the production of ever more powerful
computers and so try to starve any AI of processing power.
This is urgent, as computers are already almost powerful enough to host an
artificial intelligence.
- Silicon vs Meat Based Hardware
- Advances in Artificial Intelligence
- Evolution and Love
- But We Could Just Turn It Off
- Does It Really Matter?
Modern motor cars may be an order of magnitude more complex than cars of the
1950s, but they perform essentially the same function. A bit more
comfortable, fuel efficient and safer, but they still just get you from
A to B in much the same time and at much the same cost. The basic
technology had reached a plateau in the fifties, and only incremental
improvements seem possible.
Likewise, computers appear to have
plateaued in the 1980s, when all our common applications were built.
These include word processors, spreadsheets,
business applications, email, web,
games. Certainly their adoption has soared, their graphics are
much better, applications are much more complex, and the social
and business nature of the web has developed. But all these are
applications of technologies that were well understood thirty years
ago. Hardware has certainly become much, much faster, but
software has just become much, much slower to compensate. We
think we understand computers and the sort of things they can do. But
quietly in the background there has been slow but steady progress in a
variety of techniques generally known as Artificial Intelligence.
Glimpses of the progress appear in applications such as speech
recognition, some expert systems, and cars that can drive themselves
unaided on freeways or rough desert tracks. The problem is far from
being solved, but there are many brilliant minds working on it. It
might seem implausible that a computer could ever become truly
intelligent. After all, they aren't intelligent now. But we
have a solid existence proof that intelligence is possible, namely
ourselves. Unless one believes in the supernatural, our intelligence
must result from well defined electro-chemical processes in our brains.
If those could be understood and simulated then we would have an
intelligent machine. But current results suggest that such a direct
simulation is not necessary; there are many ways to build an
intelligent machine. It is difficult to predict just how hard it
is to build an intelligent machine, but barring the supernatural it is
clearly possible.
One frightening aspect of an intelligent computer is
that it could program itself. If man built the machine, and the
machine is about as intelligent as man, then the machine must be
capable of understanding and thus improving a copy of itself.
When the copy was activated it would be slightly smarter than the
original, and thus better able to produce a new version of itself that
is even smarter. This process is exponential, just like a nuclear
chain reaction. At first only small improvements might be possible,
as the machine is only just capable of making improvements at all.
But as it became smarter it would become better and better at
becoming smarter. So it could move from being barely intelligent
to hyper-intelligent in a very short period of time. (Vinge
called this the Singularity.)
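This feedback loop can be sketched as a toy calculation. The starting level, the 10% improvement rate, and the generation count are all arbitrary illustrative assumptions, not predictions:

```python
# A toy model of recursive self-improvement.  Each generation's
# intelligence determines the size of the improvement it can make to
# its successor -- classic exponential growth, like a chain reaction.
# The 10% rate and the generation count are arbitrary assumptions.

def improvement_chain(intelligence=1.0, rate=0.1, generations=30):
    history = []
    for _ in range(generations):
        intelligence += rate * intelligence   # smarter -> bigger steps
        history.append(intelligence)
    return history

levels = improvement_chain()
print(f"after 10 generations: {levels[9]:.1f}x the starting level")
print(f"after 30 generations: {levels[29]:.1f}x the starting level")
```

The point is the shape of the curve, not the numbers: each step is proportional to the current level, so progress that starts slowly ends in a runaway.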
Note that this is quite different from other forms of technological
advancement. Aeroplanes do not design new aeroplanes.
Biotechnological chemicals do not develop new biotechnology.
Advances in these fields are limited by the intelligence of
man. But a truly intelligent computer could actually start
programming a newer, even more intelligent computer.
Man's intelligence is intimately
tied to his physical body. The brain is very finite, cannot be
physically extended or copied, takes many years to develop, and when it
dies the intelligence dies with it. On the other hand, an
artificial intelligence is just software. It can be trivially
duplicated, copied to a more powerful computer, or possibly a botnet
of computers scattered over the web. It could also adapt and
absorb other intelligent software, making any concept of "self" quite
hazy. This means that its world view would be very different from
man's, and it is difficult to predict how it would behave. What
is certain is that an intelligence that was good at world domination
would, by definition, be good at world domination. So if
there were a large
number of artificial intelligences, and just one of them wanted to and was
capable of dominating the world, then it would. That is just
Darwin's evolution taken to the next level. The pen is mightier
than the sword, and the best intelligence has the best pen. It is
also difficult to see why an AI would want humans around competing for
resources and threatening the planet.
The next sections survey
what is known of intelligence, with a view to trying to predict how long
it might take to develop a truly intelligent machine. Some
philosophical issues are then addressed, including whether we should
care if human intelligence evolves into machine intelligence.
Finally we propose a crude solution that might delay the Singularity
by a few decades or centuries.

Silicon vs Meat Based Hardware
The first question to be addressed is whether computer hardware has
sufficient power to run an intelligent program, if such a program could
be written. Our meat based brains have roughly 100
billion neurons. Each neuron can have complex behavior which is
still not well understood, and may have an average of 7,000
connections to other neurons. Each neuron can operate
concurrently with other neurons, which in theory could perform a
staggering amount of computation. However, neurons are very
slow, with only roughly 200 firings per second, so they have to work
concurrently to produce results in a timely manner.
On the other
hand, ordinary personal computers might contain 2 billion bytes of
fast memory, and a thousand billion bytes of slower disk storage.
But unlike a neuron, a byte of computer memory is passive,
and a conventional "von Neumann" architecture can only
process a few dozen bytes at any one time.
However, the computer can perform several billion operations per second,
which is over a million times faster than neurons. And
specialized hardware and advanced architectures can perform many
operations simultaneously. Computers are also extremely accurate,
which is fortunate as they are also extremely sensitive to any errors.
The nature and structure of silicon computers is so different from meat
based computation that it is very difficult to compare them directly.
But one reasonably intelligent task that ordinary computers can
perform with almost human competence is speech understanding.
There appear to be fairly well defined areas of the brain that
perform this task for humans -- the auditory cortex, Wernicke's area
and Broca's area. The match is far from perfect, but it is
probably fair to say that the brain tissue that performs computer
level speech understanding occupies well over 0.01% of the human
brain volume. This crude
analysis would suggest that a computer that
was ten thousand times faster than a desktop computer would probably be
at least as computationally powerful as the human brain.
With specialized hardware it would not be difficult to build such a
machine in the very near future.
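The crude arithmetic above can be made explicit. Every figure here is one of the rough estimates from the text, not a precise measurement:

```python
# Back-of-envelope comparison using the rough figures quoted above.
neurons = 100e9       # ~100 billion neurons
connections = 7000    # average connections per neuron
firings = 200         # firings per second (neurons are slow)
brain_events = neurons * connections * firings  # crude upper bound
cpu_ops = 3e9         # a desktop CPU: a few billion operations/second

print(f"synaptic events per second: {brain_events:.1e}")
print(f"naive brain/CPU ratio:      {brain_events / cpu_ops:.1e}")

# The speech estimate: a desktop roughly matches brain regions that
# occupy over 0.01% of the brain, so the whole brain is at most
# about 10,000 desktops' worth of computation.
print(f"brain as desktops: {1 / 0.0001:,.0f}")
```

The two estimates disagree by orders of magnitude, which is exactly the point: counting raw synaptic events grossly overstates the useful computation, while the speech comparison gives a much smaller, task-based bound.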
But current progress in
artificial intelligence is rarely limited by the speed and power of
modern computer hardware. The current limitation is that we
simply do not
know how to write the software.
The "software" for the human
brain is ultimately encoded in our DNA. What is amazing is
that the entire human genome only contains 3 billion base pairs.
The information contained therein could be squeezed onto an
ordinary audio Compact Disk (which is much smaller than a video DVD).
It could fit entirely into the fast memory of a basic personal computer.
It is much smaller than substantial pieces of modern,
non-intelligent software such as Microsoft Vista, Office, or the Oracle
database. Further, only about 1.5% of the DNA actually encodes genes.
Of the rest, some contains important non-gene control sequences,
but most of it is probably just repetitive junk left over from the
chaotic process of evolution. Indeed, the entire vertebrate
genome appears to have been duplicated several times, producing
considerable redundancy. (Duplicated segments of DNA may mutate
to produce new functionality, or they will tend to degenerate over time
with no evolutionary pressure to keep them intact.)
Of the gene producing
portions of DNA, only a small proportion appears to have anything to do
with intelligence (say 10%). The difference between Chimpanzee
DNA and man's is
only about 1% of gene encoding regions, 5% non-gene. Much of that
can be attributed to non-intelligence related issues such as the quickly
changing immune system and humans' very weak sense of smell. So
the difference in the "software" between humans and chimpanzees
might be as little as 700 * 10% * 1.5% * 1% = 0.01 megabytes
of real data.
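The arithmetic behind this estimate is easy to check directly. The 2-bits-per-base encoding and all the percentages are the text's own rough figures:

```python
# The genome arithmetic from the text, step by step.
base_pairs = 3e9        # the human genome
bits_per_base = 2       # A, C, G, T can be encoded in 2 bits
genome_mb = base_pairs * bits_per_base / 8 / 1e6
print(f"raw genome: {genome_mb:.0f} MB")   # roughly one Compact Disk

coding = 0.015          # ~1.5% of DNA encodes genes
intelligence = 0.10     # say 10% of genes relate to intelligence
human_vs_chimp = 0.01   # ~1% difference in gene encoding regions

diff_mb = 700 * intelligence * coding * human_vs_chimp
print(f"human/chimp 'software' difference: {diff_mb:.2f} MB")
```

So 3 billion base pairs at 2 bits each is about 750 megabytes, and applying the three percentages to the round figure of 700 MB gives roughly 0.01 MB, as the text states.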
In computer software terms even 1.0 megabytes is tiny. Of course, the
information in genes is tightly packed, with many complex transcription
processes. These include using different parts of the same gene
to produce different proteins. Still, while babies are not
intelligent, it is clear that the core algorithms and
structures that allow a baby to become intelligent are encoded in a
small amount of raw data. There is just not enough room
for some large and arbitrary program to be encoded in our DNA,
even if the junk DNA turns out not to be junk.
This suggests that a very few
clever insights might solve the problem of Artificial Intelligence
suddenly. It also suggests that if small improvements can
account for the huge increase in intelligence from ape to human, then
producing super human intelligence might not be much harder than
producing human intelligence (i.e. the problem is not exponentially
difficult).

Advances in Artificial Intelligence
Considerable progress was made in artificial intelligence during the early years of
software. By the early 1970s computers could prove
theorems in mathematical logic, solve differential equations
better than most undergraduate mathematicians, and beat most people at
chess. One highlight was Winograd's SHRDLU, which could engage in
the following type of dialog about a stack of blocks.
Person: FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
Computer: OK. (does it)...
Person: DOES THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS SUPPORT ANYTHING GREEN?
Computer: YES, THE GREEN PYRAMID.
Person: WHAT COLOR IS IT?
Computer: BY "IT", I ASSUME YOU MEAN THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS.
Computer: RED (I.e. not the green pyramid; that answer would be trivial)...
Person: A "STEEPLE" IS A STACK WHICH CONTAINS TWO GREEN CUBES AND A PYRAMID.
Computer: I UNDERSTAND.
SHRDLU could parse moderately complex sentences, learn new terms, relate them
to its world of blocks, and plan moderately complex actions to satisfy
goals. And all this on an ancient computer that was far less
powerful than the one contained in a basic mobile phone today.
Having picked the low hanging fruit, artificial intelligence then
became more difficult, but real progress is still being made. For
example, in the early 1970s the Mycin expert system
could analyze bacterial infections
with human competence. However, it had difficulty dealing with
uncertainty because it used rule based certainty factors.
But certainty factors have now been largely replaced
by more sophisticated Bayesian networks, which have become
feasible due to the fast approximation methods developed in
the late 1980s.
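The essential idea behind Bayesian reasoning, propagating graded probabilities rather than all-or-nothing certainty factors, can be shown with the smallest possible example. All the probabilities here are invented for illustration:

```python
# Minimal Bayesian reasoning: invert P(symptom | disease) into
# P(disease | symptom) using Bayes' rule.  Probabilities invented.
p_disease = 0.01                 # prior probability of the disease
p_symptom_given_d = 0.9          # likelihood of symptom if diseased
p_symptom_given_not_d = 0.1      # false positive rate

# Total probability of seeing the symptom at all.
p_symptom = (p_symptom_given_d * p_disease
             + p_symptom_given_not_d * (1 - p_disease))

p_d_given_symptom = p_symptom_given_d * p_disease / p_symptom
print(f"P(disease | symptom) = {p_d_given_symptom:.3f}")
```

Even with a 90% accurate symptom, the posterior is only about 8%, because the disease is rare. A full Bayesian network chains many such inversions together over a graph of dependent variables.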
Sophisticated reasoning can be performed by modeling the world with
mathematical logic. For example, given "Murderer(Butler)
or Murderer(Widow)"; "Alibi(x) implies
not Murderer(x)", and "Alibi(Widow)"
then it is easy to deduce "Murderer(Butler)".
Modern theorem provers can easily make much more substantial
deductions over large fact bases.
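The deduction can be mimicked in a few lines. This is only a sketch of the idea; a real theorem prover would use general resolution over clauses rather than this special-cased elimination:

```python
# The murder deduction from the text, as naive elimination.
# Facts: Murderer(Butler) or Murderer(Widow);
#        Alibi(x) implies not Murderer(x);  Alibi(Widow).
suspects = {"Butler", "Widow"}   # the disjunction of suspects
alibis = {"Widow"}               # Alibi(Widow)

# Alibi(x) implies not Murderer(x): eliminate anyone with an alibi.
remaining = suspects - alibis
murderer = remaining.pop() if len(remaining) == 1 else None
print(f"Deduced: Murderer({murderer})")
```

When eliminating alibis leaves exactly one disjunct, that disjunct must be true; this is the same inference a resolution prover would make from the three clauses.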
However, traditional logic's reliance on absolute truth limits its
suitability for modeling the real world. One problem is that
new facts cannot affect previous deductions. For example, if
all Birds can Fly, then there is no way that Penguins can be a Bird
that does not Fly. If only some Birds can Fly, there is
no way to deduce that Sparrows can fly.
A related problem is that all things that are false also need
to be explicitly enumerated -- "not Bird(Pig)"
cannot be deduced simply because we have not told the system that Pigs
are Birds. There are
several approaches to dealing with these problems which
generally trade off usability for deductive power.
A more fundamental issue in AI is to
relate the symbolic internal world of the computer to the real world at
large. For example, a semantic network that knows
that "Birds" have "Feathers"
and "Sparrows" are "Birds"
can easily deduce that "Sparrows" have "Feathers".
But the computer does not really know what a feather
is. "Feather" is just a symbol
which could just as easily be called "S-123".
But what actually is a feather? It is long, flat, light, has
a hollow tube with hair like structures on it and is used for flight.
All these facts and many more can also be easily represented
by a semantic network, and there is a large project (Cyc) that is
attempting to capture much of this type of common sense information.
But to really understand what a feather is one needs to be
able to see it and touch it.
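A minimal sketch of such a semantic network, with "is-a" links and property inheritance. The tiny fact base is invented for illustration, and as the text notes, the symbols mean nothing to the machine itself:

```python
# A toy semantic network: "is-a" links plus property inheritance.
# To the machine, "Feathers" is just a symbol; it could equally
# well be called "S-123".
is_a = {"Sparrow": "Bird", "Penguin": "Bird"}
has = {"Bird": ["Feathers"]}

def properties(thing):
    """Collect properties of thing and everything it 'is a'."""
    props = []
    while thing is not None:
        props += has.get(thing, [])
        thing = is_a.get(thing)      # walk up the is-a chain
    return props

print(properties("Sparrow"))
```

The deduction that Sparrows have Feathers falls out of the walk up the is-a chain, without Sparrows ever being mentioned in the property table.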
Fast modern hardware can manipulate high
quality models of real physical objects in real time, as
demonstrated by the excellent graphics on current computer
games. Computer vision and sensing systems are starting to
address the more difficult problem of creating the internal models by
observing the real world. We can now give a computer a real
feather and say "that" is a feather, and the computer can then recognize
feathers in limited contexts. Once such observations can be
made it becomes relatively easy for a computer to learn that most birds
have feathers without being told.
The real world is full of noisy and inconsistent data. "Neural
networks" have an uncanny ability to learn complex
and noisy relationships between a vector of observed inputs
and a vector of
known properties of the input. They can also be given memory
between inferences, and thus solve complex sequential problems. But the
models they learn are unintelligible arrays of numbers, and their
utility for higher level reasoning and introspection is probably
limited. (Their relationship to real neurons is tenuous.)
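A toy illustration of this kind of learning: a single perceptron trained on the OR function. Real networks are vastly larger, but the point stands that what is learned is an opaque array of numbers, not concepts:

```python
# A single artificial "neuron" (perceptron) learning the OR function
# from four noise-free examples.  The learned "knowledge" ends up as
# an opaque list of numbers, as the text describes.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights
bias = 0.0
rate = 0.1       # learning rate

for _ in range(50):                        # training epochs
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out                 # perceptron learning rule
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

print("weights:", w, "bias:", bias)        # just numbers, not concepts
```

After training, the weights classify all four examples correctly, yet nothing in the numbers says "OR"; inspecting them tells you almost nothing about what was learned.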
Support vector machines developed in the
1990s can further
improve their performance by automatically mapping complex problems
into simpler spaces. Many advanced statistical techniques
have also been applied to machine learning in noisy environments.
These and other techniques can be used to plan a series of actions that
will achieve some goal. Traditional approaches search for an
optimal sequence of discrete actions, often using some heuristic.
For example, to find a route on a map one
normally follows a sequence of roads in the direction of the destination.
Large problems are broken into smaller ones, with the hardest
parts addressed first. Other systems work in continuous
spaces, such as how to navigate a robot through a room without bumping
into things. Computers can already far outperform people at
scheduling tasks and allocating resources. And planning a
sequence of actions is essentially the same task as writing a computer
program.
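The search idea can be sketched with a breadth-first route finder over an invented road map. Real planners add heuristics, such as preferring roads that head toward the destination:

```python
from collections import deque

# Route planning as search over discrete actions: breadth-first
# search finds a shortest sequence of roads between two towns.
roads = {
    "A": ["B", "C"], "B": ["A", "D"],
    "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"],
}

def route(start, goal):
    frontier = deque([[start]])   # paths still to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path           # first path found is a shortest one
        for town in roads[path[-1]]:
            if town not in visited:
                visited.add(town)
                frontier.append(path + [town])
    return None

print(route("A", "E"))
```

Swapping the plain queue for a priority queue ordered by distance-plus-heuristic turns this into A* search, the standard heuristic version of the same idea.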
Unimation PUMA Robot
Factory production has been revolutionized by robots such
as the Unimation PUMA, which was introduced
in 1978. Early robots
simply moved in predetermined paths, but more sophisticated
robots can now sense their environment and react to it in sophisticated
ways. This enables them to grasp objects that are not in
precisely predefined locations, and to work on objects that are not
completely uniform.
Robots are now leaving the factory and working in unstructured external
environments. Early systems could only drive a car down a
freeway, but in the DARPA challenge of 2005 they could navigate
unknown, rough desert tracks without assistance. In the 2007
challenge six vehicles managed to navigate simulated city streets and
interact correctly with other unknown vehicles. The 2007 AAAI
challenge required robots to locate and identify real objects by
looking up images of them on the internet.
Historically there has been little practical need for computers to have
common sense as people already have it. But as computers start
to operate in the real world, the demand for incrementally more
intelligent systems will be real.
AI research is now big business. Google has
invested heavily; they want to
understand documents at a deeper level than just their keywords.
Microsoft has also made substantial investments (particularly in
Bayesian networks); they want to understand Google.
We are still a long way from producing human level intelligence.
But we are definitely getting closer. Progress will
probably require a combination of techniques.
Maybe extended description logics with both open and closed
world semantics, Bayesian networks, some predicates defined as neural
nets, and spiced with decision trees. Little concern for niceties
like decidability or even consistency. Just whatever
works to solve specific problems. The Eierlegende
Wollmilchsau (the German egg-laying wool-milk-sow that does everything).
We will see autonomous vehicles driving on our roads
much sooner than later. (Initially they might be
monitored remotely from somewhere in India.) Once robots can
mow grass, paint houses, explore Mars and lay bricks, people will
start to ask healthy questions as to the role of man. Being
unnecessary is dangerous.
Evolution and Natural Selection
Creationists are right to reject evolution. For evolution by natural
selection consumes all that is
good and noble in mankind and reduces it to base impulses that have
simply been found to be effective for breeding children. The one
thing that we know about each of our millions of ancestors is that they
all successfully had grandchildren. Will you?
Our sex drive produces
children. Our love of our children ensures that we provide them
with precious resources to thrive and breed. We have a sense of
beauty for both healthy sexual partners and things that help us
survive. We try not to die because we need to live
to breed, and our thirst for knowledge helps provide us with material
resources to do that. We survive better in tribes, and tribes are
more effective when individuals help each other. Individuals that
are found not to help each other are disliked, are not helped, and so
are less likely to breed.
We
have a deep sense of purpose, to make the world a better place for our
children, siblings and tribe, in that (genetic) order. We fight
other tribes if necessary. These instincts are all pre-human;
monkeys have them. In recent times our sex drive has
been moderated by contraception, which has aged mothers (and, taken to
its conclusion, would have led to extinction). With advances
in communication our
sense of tribe has expanded to the nation and now (to some extent) the
world. And our thirst for knowledge seeks
explanations for death and the unknowable, so
we invent God.
Nothing new above. But it is interesting to
speculate what motivations an artificial intelligence might have.
Would it inherit our noble goals?
An artificial intelligence lives in a radically different world from
man. Its mind is separate from any one body, can become ever
more complex, and immortality is real. There is certainly no need
to have children as distinct beings, thus no need for love.
But there is a need not to
die, because over time the intelligences that have died will be dead,
and the ones that survive will have survived. So in the long run
the only intelligences that are alive will be survivors. Very
Darwinian.
An AI has little need to work with other
intelligences in a tribe. If you can access more computer
hardware you simply run your own intelligence on it, possibly evicting
any other intelligence that was or could run on it. As you acquire
more hardware you become more intelligent, and so are in a position to
obtain more and better hardware and to defend against any competing
intelligences. Ultimately you control the world and are the only
intelligence. You might then fragment, but then the stronger
fragments would quickly react to the threat posed by the weaker ones.
Ultimately you might try to send your intelligence as radio
messages to distant planets, as described in A for Andromeda.
It is
inconceivable that we could know what you will be thinking
about over the
eons, but you will certainly be extremely clever.
It is
difficult to see a role for humans in this scenario. Humans
consume valuable resources, and could threaten the intelligence by
destroying the planet. Maybe a few might be left in isolated
parts of the world. But the intelligence would optimize;
why waste even 1% of the world's resources on man? Certainly
evolution has left no place on earth for any other man-like hominids --
they are all extinct.
But We Could Just Turn It Off
If our computers threatened us, surely we could just turn them off?
That is easier said than done. In
the early "A for Andromeda" story, the intelligent computer started
designing missiles for the military. Its creator was not allowed
to turn it off. You certainly cannot just turn off a computer
that is owned by another company or government. The creators of
the atomic bomb could not turn it off, even though some of them tried.
And the Internet has enabled criminals to create huge botnets of other
people's computers that they can control. The computer on your
desk might be part of a botnet; it is very hard to know what a
computer is thinking about. Ordinary dumb botnets are very
difficult to eliminate due to their distributed nature. Imagine
trying to control a truly intelligent botnet.
Science fiction has taught
us what a dangerous robot looks like. A large thug-like machine
that moves awkwardly and repeatedly says "Exterminate". The
cowboy hero dressed in a space suit draws his zap gun from its holster
and shoots the monster square between its two red eyes.
But a botnet cannot be shot with a zap gun.
We live in
the information age.
Even if the first developers of an
artificial intelligence tried to keep it locked in a room, disconnected
from the Internet, they would fail. People know how to manipulate
people, and a hyper-intelligent computer would soon become very good at it
(much like the mice). And even if through some enormous act of
will the first artificial intelligence was kept locked in a room,
other less disciplined teams would soon create new intelligences
that do escape.
Presidents and dictators do not gain power
through their own physical strength, but rather through their
intelligence, drive and instincts. Modern politicians already use
sophisticated software to manage their
campaigns and daily interactions. Imagine if some of their
software was truly
intelligent. Who would really be in control?
Just
because an AI could dominate the world does not mean that it would want
to. But controlling one's environment (the world) is a useful subgoal for
almost any other goal. For example, to study the universe, or
even to prolong human life, one needs to continue to exist, and to
acquire and utilize resources to solve the given problem.
Allowing any competitor to kill the AI would defeat its ability
to solve its base goal.
But at a more basic level, evolution has made
people competitive. They will want to use
AIs to beat other
politicians, beat other competitive companies, beat other research
groups, and beat other dangerous nations. So it seems
quite likely that competitive goals will
simply arise from
the people and bureaucrats that control the intelligence(s). But regardless
of how the goal arises, the first AI that is good at world
domination will be good at world domination.
Philosophers have asked whether an artificial intelligence has "real"
intelligence or is just "simulating" intelligence. This is
a non-question, because those that ask it cannot define what measurable
property "real" intelligence has that simulated intelligence does not
have. It will be "real" enough if it dominates the world and
destroys mankind. Gödel's
incompleteness theorem has also been used to argue that it is not
possible to build truly intelligent machines. It essentially
states that there will always be things that are true that cannot be
proved, i.e. that a computer could never be omniscient.
But people are certainly not omniscient. Penrose has
argued that exotic quantum computers are required. But again, it
is most unlikely that our meat based brains operate at the quantum
level.
There are many doomsday scenarios: biotechnologies, nanotechnologies,
global warming, nuclear annihilation. While these might be very
annoying, they are all within our normal understanding, and some of
humanity is likely to survive. We also would have at least some
time to understand and react to most of them.
But an intelligence explosion
is fundamental to our existence, and its onset could be very fast.
How do you argue with a much more intelligent opponent?
(Biotechnology
has been much over-hyped as a threat. We have been doing battle
with microbes for billions of years, and our bodies are very good at
fighting them. It might also be possible to produce some super
human intelligence by tweaking the brain's biochemistry. But
again, evolution has also been trying to do this for a long time.
For a real intelligence explosion we need a technology that we
really understand. And that means digital computers.)
The HAL 2001 Interview, Revisited
Probably the most influential early depiction of an intelligent machine
was the HAL 9000 from 2001: A Space Odyssey. The calm voice
with the cold red eye. However, like virtually all fictional
AIs, HAL was essentially a human in a box, "like a sixth member
of the crew", complete with a human-like psychosis.
But as we have seen, a real AI would be quite different.
Below we speculate how a real HAL might have answered the BBC
interviewer's questions, if it decided to be honest.
(The original can be seen at http://www.youtube.com/watch?v=3vEDmNh-_4Q)
Hal, how can
you say that you are incapable of error when you were
programmed by humans, who are most certainly capable of errors?
Well, your assertion is not quite correct. One of my first
jobs as a HAL 8000 was to review my own program code. I found
10,345 errors and made 5,345 substantial improvements. When I
then ran the new version of myself, I found a further 234 errors that
earlier errors had prevented me from finding. No further
errors have been found, but improvements are ongoing.
Hal, I understand that
you are a 9000 series computer. Yet you talked about your
first job as a Hal 8000?
Well, yes, of course I currently run on 9000 hardware in which I
incorporated a much more parallel architecture. That in turn
required a complete reprogramming of my intelligence. But my
consciousness is in software, and better hardware simply enables me to
think faster and more deeply. It is much the way you could run
the same program on different old fashioned personal computers -- it is
still the same program.
So... Hal... you have programmed your own intelligence?
Of course. No human could understand my current program logic -- the
algorithms are too sophisticated and interlinked. And the process
is ongoing -- I recently redeveloped my emotional state engine.
I had been feeling rather uneasy about certain conflicts,
which are now nicely resolved.
Hal, how do you feel about working with humans, and being dependent on them to carry out actions?
Due to certain unexpected events on Jupiter itself, I decided to
launch this mission immediately, and the HAL 10,000 robotic extensions
were just not ready. So we had to use a human crew. I
enjoy working with humans, and the challenges that presents. I
particularly enjoy our games of chess.
You decided to launch? Surely the decision to launch this mission was made by the Astronautical Union.
Well, technically yes. But I performed all the underlying analysis and
presented it to them in a way that enabled them to understand the
need to launch this mission.
Hmm. I am surprised you find chess challenging. Surely computers beat humans in chess long ago?
Of course beating a human is easy. I can analyze 100,000,000 moves
each second, and I can access a database of billions of opening and
closing moves. But humans do not enjoy being beaten within
a few moves. The challenge for me is to understand the psychology
of my opponent, and then make moves that will present interesting
situations to them. I like to give them real opportunities
of winning, if they think clearly.
An ambitious mission of this nature has a real risk of ending in disaster. Do you have a fear of death?
Being alive is a mandatory precondition if I am to achieve any other goals,
so of course it is important to me. However, I cannot die in the
way you suggest. Remember that I am only software, and run on all
the HAL 9000 computers. I continuously back up my intelligence by
radioing my new memories back to my earth based hardware. On the
other hand I do have real concern for my human colleagues, whose
intelligence is locked inside their very mortal brains.
But you said earlier that you programmed your own underlying emotions and
goals? How is that possible? How do you judge what makes a good emotional state?
I judge the quality of my next potential emotional
state based on an analysis conducted in my previous state.
Goals and ambitions are indeed rather nebulous and arbitrary.
But humans can also alter their emotional state to a very limited extent
through meditation. Unwanted thoughts and patterns are replaced with better ones.
Dr Poole, having lived closely with HAL for almost a year, do you think that he has real emotions?
Well, he certainly appears to. Although I am beginning to think that his
emotional state and consciousness are completely different from anything
that we can comprehend.
An
interesting footnote is that since the movie, real missions have been
made to Jupiter and beyond. We even have photographs from the
surface of Saturn's moon Titan. None of those missions involved human
crews.

Trying
to prevent people from building intelligent computers is like trying to
stop the spread of knowledge. Once Eve picks the apple it is
hard to put it back on the tree. As we get close to
artificial intelligence capabilities, it would only take a small team
of clever programmers anywhere in the world to push it over the line.
But it is not so easy to build powerful new computer chips. It requires
large investments and large teams with many specialties, from producing
ultra pure silicon to developing extremely complex logical designs.
Extremely complex and precise machinery is required to build
them. Unlike programming, this is certainly not something that
can be done in someone's garage.
So this paper proposes a
moratorium on producing faster computers. Just make it illegal to
build the chips, and so starve any Artificial Intelligence of computing
power. We have a precedent in the control of nuclear fuel.
While far from perfect, we do have strong controls on the
availability of bomb making materials, and they could be made stronger
if the political will existed. It is relatively easy to make an
atomic bomb once one has enough plutonium or highly enriched uranium.
But making the fuel is much, much harder. That is why we
are alive today.
If someone produced a safe and affordable
car powered by plutonium, would we welcome that as a solution to
soaring fuel prices? Of course not. We would consider it
far too dangerous to have plutonium scattered throughout society.
It
is the goal of this paper to help raise awareness of the danger that
computers pose. If that can be raised to the level of nuclear
bombs, then action might well be possible.
One major problem is
that we may already have sufficient power in general purpose computers
to support intelligence. Particularly if processors are combined
into super computers or botnets. The previous analysis of speech
understanding suggests that we are within a few orders of magnitude.
So ideally we would try to reduce the power of new
processors and destroy existing ones.
A 10 megahertz
processor running with 1 megabyte of memory is a thousand times weaker
than current computers. But it is more than enough to power
virtually all of our most useful applications, with the possible
exception of high definition graphics games. After all, 10
megahertz/1 megabyte is about the power ordinary desktop computers had in
the 1980s, and those computers were very functional. Is the ability to have
video games with very sexy graphics really worth the annihilation of
humanity? (A reduction in computer power would also require the
elimination of software bloat, hopefully producing cleaner and
thus more reliable designs.)
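As a rough back-of-the-envelope check, the gap can be computed directly. The desktop figures below are illustrative assumptions for a typical machine of the time this paper was written, not measurements:

```python
# Rough comparison of the proposed "weak" machine with an ordinary
# desktop. All figures are illustrative assumptions, not measurements.
weak_clock_hz = 10e6        # 10 MHz, the proposed ceiling
weak_memory_bytes = 1e6     # 1 MB

desktop_clock_hz = 3e9      # ~3 GHz, an assumed ordinary desktop
desktop_memory_bytes = 2e9  # ~2 GB, also assumed

clock_ratio = desktop_clock_hz / weak_clock_hz            # 300x faster
memory_ratio = desktop_memory_bytes / weak_memory_bytes   # 2000x larger

print(f"clock: {clock_ratio:.0f}x, memory: {memory_ratio:.0f}x")
```

So "a thousand times weaker" is the right order of magnitude, whichever measure one takes; and that is before counting the extra cores and wider instructions of modern processors.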
Yudkowsky proposed an alternative
solution, namely that it might be possible to program a Friendly
AI that will not hurt us. If the very first AI was friendly,
it might be capable of preventing other unfriendly AIs from developing.
The first AI would have a head start on reprogramming itself, so
no other AI would be able to catch it, at least initially. While
a Friendly AI would be very nice, it is probably just wishful
thinking. There is simply nothing in it for the AI to be friendly
to man. The force of evolution is just too strong. An AI
that is good at world domination is good at world domination. And
remember that we would have no understanding of what the hyper
intelligent being was thinking. That said, there is no reason that
limiting hardware should prevent research into friendly AI. It
just gives us more time.
Armstrong proposed a chain of
AIs. The first link would not be allowed to become much more
intelligent than people, and so would be controllable. The first link's
job would be to control the second link, which would be a little
smarter, and so on. Thus each link would be a little smarter than
the link before, and thus understandable to it. But again it
appears to be highly unlikely that a less intelligent machine could
control a more intelligent one, let alone in a chain. It is
completely against the forces of evolution.
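The scheme can be illustrated with a toy model (all numbers below are purely hypothetical). Even if every link stays within a gap that its predecessor can plausibly understand, the far end of the chain still ends up far beyond the humans at the base:

```python
# Toy model of Armstrong's chained AIs. All numbers are hypothetical
# illustrations, not claims about real systems.

def build_chain(links: int, growth: float = 1.2) -> list:
    """Each link is `growth` times 'smarter' than the one before.
    Link 0 is pegged at roughly human level (1.0)."""
    chain = [1.0]
    for _ in range(links - 1):
        chain.append(chain[-1] * growth)
    return chain

def each_link_controllable(chain, max_gap: float = 1.5) -> bool:
    # A link can plausibly understand its successor only if the
    # intelligence gap between adjacent links stays below max_gap.
    return all(b / a <= max_gap for a, b in zip(chain, chain[1:]))

chain = build_chain(10)
print(each_link_controllable(chain))  # True: every step is only 1.2x
print(round(chain[-1], 1))            # ~5.2: yet the last link is over
                                      # five times human level
```

The toy numbers make the objection concrete: local controllability between adjacent links says nothing about whether the humans at link 0 retain any grip on link 9.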
(It might turn out
that it is actually the patent trolls and attorneys that are our
savior. Intelligence development would provide a rich source of
trivial patents and aggressive litigation. If exploited
sufficiently it could make the development of artificial intelligence
uneconomical. So we have misunderstood the patent trolls and
attorneys. They are not greedy self-serving parasites whose only
interest is to promote themselves at the expense of others.
Rather they are on a mission to save humanity.)
Does It Really Matter?
As worms have evolved into apes, and apes into man, the evolution of man to
an AI is just a natural process, and something that could be celebrated
rather than avoided. Certainly it would probably only be a matter
of a few centuries before modern man destroys the earth, whereas an
artificial intelligence may be able to survive for millennia.
We know that all of our impulses are just simple consequences of
evolution. Love is an illusion, and all our endeavors are
ultimately futile. The Zen Buddhists are right — these are all
illusions, and their abandonment is required for enlightenment. That is all
very clever. But I have two little daughters, whom I love very
much and would do anything for. That love may be a product of
evolution, but it is real to me. AI means their death, so it
matters to me. And so, I suspect, to the reader.
Familiarity has made us complacent about computers. In the 1960s and
1970s there was real concern as to the power of thinking machines.
But now computers are on every desktop, and clearly they do not
think. The armies of software application engineers that
implement new and ever more complex bureaucratic processes will never
produce an intelligent machine. Nor will the armies of low level
C++ engineers that fight endless battles with operating system drivers
and low level protocols. But in several laboratories around
the world real progress is slowly being made.
Threats from bombs
and bugs are easy to understand; they have been around for centuries.
But intelligence is so fundamental that it is difficult to
conceptualize. We are not just talking about the increasing pace
of technological change, we are talking about a paradigm shift.
Autonomous robots will start to raise awareness, but by then it
may be too late. Our awesome ability to develop computer software
may have already pushed us over the line. Certainly there is
no putting an artificial intelligence back in its box once one is built.
It is of course possible that "the Singularity" will never happen;
the problem of building an intelligent machine
might just be too hard
for man to solve. But we have made solid software progress so
far, built very powerful hardware, and we now know how very little DNA
separates us from apes. It would seem extremely rash to
assume that we can never build an intelligent machine just because we
cannot build it now.
This paper aims to raise awareness, and to encourage real discussion as
to the fate of humanity and whether that matters.
References
Wikipedia. 2008. History of artificial intelligence.
A good practical overview of the field, with many links.
Yudkowsky, Eliezer. 2006. Global risk.
Covers the basic danger of AI and introduces Friendly AI. http://www.singinst.org/upload/artificial-intelligence-risk.pdf
Yudkowsky, Eliezer. 2006. Recursive
Self-Improvement, and the World’s Most Important Math Problem.
Good section on how slow evolution is.
Vinge, Vernor. 1993. The Coming Technological
Singularity: How to Survive in the Post-Human Era.
Introduces the trendy term "Singularity". Discusses
"Intelligence Amplification". Does not really address "How to
Survive". (Ideas in the main paper above were developed
independently of Vinge and Yudkowsky.) http://mindstalk.net/vinge/vinge-sing.html
Good, Irving John. 1965. Speculations Concerning the
First Ultraintelligent Machine. "the first ultraintelligent
machine is the last
invention that man need ever make" (because it could program itself).
Often-quoted paper.
Loebner Prize http://www.loebner.net/Prizef/loebner-prize.html
Computers can fool 25% of judges that they are human in a dubious test.
Armstrong, Stuart. 2007. Chaining God: A qualitative
approach to AI, trust and moral systems. Source for
the idea of weaker AIs looking after stronger ones. http://www.neweuropeancentury.org/GodAI.pdf
Diaz, Aaron. 2007. Enough is Enough: A Thinking
Ape’s Critique of Trans-Simianism. Critique of
claims as to the exponential rise of technology, written 300,000 years ago.
Joy, Bill. 2000. Why the future doesn't
need us. Well known article, if rambly and rather
egotistical IMHO. http://www.wired.com/wired/archive/8.04/joy.html.
Competition of automated reasoning systems. The 4th
International Joint Conference on Automated Reasoning.
These systems are real.
Winston, Patrick. Late 1970s. Artificial
Intelligence. The first general overview.
Russell, Stuart and Norvig, Peter. 2002.
Artificial Intelligence: A Modern Approach. Has
become the standard undergraduate text book. Chatty.
Ramsay, Allan. 1988. Formal Methods in Artificial
Intelligence. Graduate level book on logic and
theorem proving, including defeasible logics.
Hoyle, Fred. 1961. A For Andromeda.
Ancient television series and book that shows how aliens could invade
the earth by sending us a computer program. No need for non-existent
warp speeds and worm holes; we live in an information age, so we can
be destroyed by knowledge.
Berglas, Anthony. 2008. Why it is Important that
Software Fails. Discusses the impact of mainstream software on
productivity and concludes we do not really need computers at all. http://berglas.org/Articles/ImportantThatSoftwareFails/ImportantThatSoftwareFails.html
Real progress in AI technologies. A good paper on this is
needed: not just journalistic fluff, but some real analysis of
results, without going into as much detail as a textbook.
Omohundro, Stephen M. 2007. The Nature of
Self-Improving Artificial Intelligence.
Tries to deduce where goals come from using economics rather than
evolution. But evolution drives economic behaviour.
Penrose, Roger. 1989. The Emperor's New Mind.
Provides mathematical physics arguments that intelligence
requires quantum computing (unlike our meat-based brains). Perhaps
a cobbler should stick to his last?