Artificial Intelligence Will Kill Our Grandchildren (Singularity)


This paper has been superseded by

www.ComputersThink.com



DRAFT 9
Dr Anthony Berglas
January 2012
(Initial 2008)
Anthony@Berglas.org

Abstract

There have been many exaggerated claims as to the power of Artificial Intelligence (AI), but there has also been real progress.  Computers can drive cars across rough desert tracks, understand speech, and prove complex mathematical theorems.  

It is difficult to predict future progress, but if a computer ever became about as good at programming computers as people are, then it could program a copy of itself.  This would lead to an exponential rise in intelligence (now often referred to as the Singularity).  And evolution suggests that a sufficiently powerful AI would probably destroy humanity.  

This paper reviews technical progress in Artificial Intelligence and some philosophical issues and objections.  It then describes the danger and proposes a radical solution, namely to limit the production of ever more powerful computers and so try to starve any AI of processing power.  This is urgent, as computers are already almost powerful enough to host an artificial intelligence.

Contents

  1. Abstract
  2. Contents
  3. Introduction
  4. Silicon vs Meat Based Hardware
  5. Advances in Artificial Intelligence
  6. Love and Natural Selection
  7. But We Could Just Turn It Off
  8. HAL 2001 Interview, Revisited
  9. Solutions
  10. Does It Really Matter?
  11. Conclusion
  12. Annotated Bibliography

Introduction

Modern motor cars may be an order of magnitude more complex than cars of the 1950s, but they perform essentially the same function.  They are a bit more comfortable, fuel efficient and safer, but they still just get you from A to B in much the same time and at much the same cost.  The technology reached a plateau in the fifties, and only incremental improvements have been possible since.

Likewise, computers appear to have plateaued in the 1980s, when all our common applications were built.  These include word processors, spreadsheets, databases, business applications, email, the web and games.  Certainly their adoption has soared, their graphics are much better, applications are much more complex, and the social and business nature of the web has developed.  But all these are applications of technologies that were well understood thirty years ago.  Hardware has certainly become much, much faster, but software has just become much, much slower to compensate.  We think we understand computers and the sort of things they can do.

But quietly in the background there has been slow but steady progress in a variety of techniques generally known as Artificial Intelligence.  Glimpses of the progress appear in applications such as speech recognition, some expert systems and cars that can drive themselves unaided on freeways or rough desert tracks. The problem is far from being solved, but there are many brilliant minds working on it.

It might seem implausible that a computer could ever become truly intelligent.  After all, computers are not intelligent now.  But we have a solid existence proof that intelligence is possible, namely ourselves.  Unless one believes in the supernatural, our intelligence must result from well defined electrochemical processes in our brains.  If those could be understood and simulated then we would have an intelligent machine.  But current results suggest that such a simulation is not even necessary; there are many ways to build an intelligent machine.  It is difficult to predict just how hard building one will be, but barring the supernatural it is certainly possible.

One frightening aspect of an intelligent computer is that it could program itself.  If man built the machine, and the machine is about as intelligent as man, then the machine must be capable of understanding and thus improving a copy of itself.  When the copy was activated it would be slightly smarter than the original, and thus better able to produce a new version of itself that is smarter again.  This process is exponential, just like a nuclear chain reaction.  At first only small improvements might be made, as the machine is only just capable of making improvements at all.  But as it became smarter it would become better and better at becoming smarter.  So it could move from being barely intelligent to hyper intelligent in a very short period of time.  (Vinge called this the Singularity.)
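
The shape of this feedback loop can be sketched in a few lines of code.  The toy model below (in Python, used purely for illustration throughout this paper) is a sketch only; the assumption that each generation improves in proportion to the intelligence of the machine doing the improving is mine, chosen to illustrate the argument, not a prediction.

    # Toy model of recursive self-improvement (all numbers are
    # illustrative assumptions, not predictions).
    intelligence = 1.0  # 1.0 = roughly human level
    for generation in range(1, 11):
        # Assume each rewrite improves the design in proportion to the
        # intelligence of the machine doing the rewriting.
        intelligence *= 1.0 + 0.1 * intelligence
        print(generation, round(intelligence, 2))
    # The growth accelerates: barely noticeable gains at first,
    # then runaway improvement, as with a chain reaction.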

Note that this is quite different from other forms of technological advancement.  Aeroplanes do not design new aeroplanes.  Biotechnological chemicals do not develop new biotechnology.  Advances in these fields are limited by the intelligence of man.  But a truly intelligent computer could actually start programming a newer, even more intelligent computer.

Man's intelligence is intimately tied to his physical body.  The brain is very finite, cannot be physically extended or copied, takes many years to develop, and when it dies the intelligence dies with it.  On the other hand, an artificial intelligence is just software.  It can be trivially duplicated, copied to a more powerful computer, or possibly to a botnet of computers scattered over the web.  It could also adapt and absorb other intelligent software, making any concept of "self" quite hazy.  This means that its world view would be very different from man's, and it is difficult to predict how it would behave.

What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination.  So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would.  That is just Darwin's evolution taken to the next level.  The pen is mightier than the sword, and the best intelligence has the best pen.  It is also difficult to see why an AI would want humans around, competing for resources and threatening the planet.

The next sections survey what is known of intelligence, with a view to predicting how long it might take to develop a truly intelligent machine.  Some philosophical issues are then addressed, including whether we should care if human intelligence evolves into machine intelligence.  Finally we propose a crude solution that might delay annihilation by a few decades or centuries.

Silicon vs Meat Based Hardware

The first question to be addressed is whether computer hardware has sufficient power to run an intelligent program if such a program could be written.

Our meat based brains have roughly 100 billion neurons.  Each neuron can have complex behavior which is still not well understood, and may have an average of 7,000 connections to other neurons.  All neurons can operate concurrently, which in theory allows a staggering amount of computation.  However, neurons are relatively slow, managing only roughly 200 firings per second, so they have to work concurrently to produce results in a timely manner.

On the other hand an ordinary personal computer might contain 2 billion bytes of fast memory, and a thousand billion bytes of slower disk storage.  But unlike a neuron, a byte of computer memory is passive, and a conventional "von Neumann" architecture can only process a few dozen bytes at any one time.  That said, the computer can perform several billion operations per second, which is over a million times faster than neurons.  And specialized hardware and advanced architectures can perform many operations simultaneously.  Computers are also extremely accurate, which is fortunate as they are also extremely sensitive to any errors.

The nature and structure of silicon computers is so different from meat based computation that it is very difficult to compare them directly.  But one reasonably intelligent task that ordinary computers can now perform with almost human competence is speech understanding.  There appear to be fairly well defined areas of the brain that perform this task for humans -- the auditory cortex, Wernicke's area and Broca's area.  The match is far from perfect, but it is probably fair to say that computer level speech understanding consumes well over 0.01% of the human brain volume.  This crude analysis would suggest that a computer that was ten thousand times faster than a desktop computer would probably be at least as computationally powerful as the human brain.  With specialized hardware it would not be difficult to build such a machine in the very near future.
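
A back of envelope calculation makes the comparison concrete.  All the figures are the rough ones quoted above, so the result is only indicative:

    # Raw brain capacity from the figures above.
    neurons = 100e9        # ~100 billion neurons
    connections = 7000     # ~7,000 connections per neuron
    firings = 200          # ~200 firings per second
    brain_ops = neurons * connections * firings
    print(brain_ops)       # ~1.4e17 connection-firings per second

    # A desktop performs a few billion operations per second.
    desktop_ops = 3e9
    print(brain_ops / desktop_ops)   # ~5e7: the raw, naive ratio

    # The speech understanding argument suggests the effective gap is far
    # smaller: a desktop can already do speech, and speech seems to use at
    # least 0.01% of the brain, so the whole brain is at most about
    # 1/0.0001 = 10,000 desktops worth of effective computation.
    print(1 / 0.0001)      # 10,000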

But current progress in artificial intelligence is rarely limited by the speed and power of modern computer hardware.  The current limitation is that we simply do not know how to write the software.

The "software" for the human brain is ultimately encoded into our DNA.   What is amazing is that the entire human genome only contains 3 billion base pairs.  The information contained therein could be squeezed onto a old audio Compact Disk (which is much smaller than a video DVD).   It could fit entirely into the fast memory of a basic personal computer.   It is much smaller than substantial pieces of modern, non-intelligent software such as Microsoft Vista, Office, or the Oracle database.  

Further, only about 1.5% of the DNA actually encodes genes.  Of the rest, some contains important non-gene information, but most of it is probably just repetitive junk left over from the chaotic process of evolution.  Indeed, the entire vertebrate genome appears to have been duplicated several times producing considerable redundancy.   (Duplicated segments of DNA may evolve to produce new functionality, or they will tend to degenerate over time with no evolutionary pressure to keep them intact.)

Of the gene encoding portions of DNA, only a small proportion appears to have anything to do with intelligence (say 10%).  The difference between chimpanzee and human DNA is only about 1% in gene encoding regions, 5% elsewhere.  Much of this can be attributed to issues unrelated to intelligence, such as the quickly changing immune system and humans' very weak sense of smell.  So the difference in the "software" between humans and chimpanzees might be as little as 700 megabytes * 1.5% * 10% * 1% = 0.01 megabytes of real data.  In computer software terms even 1.0 megabytes is tiny.
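
Spelling out that arithmetic (the percentages are the rough estimates above, not measured facts):

    # How much "software" separates man from chimpanzee?
    genome_mb = 700    # whole genome: roughly one audio Compact Disk
    genes = 0.015      # ~1.5% of DNA encodes genes
    brainy = 0.10      # say ~10% of that relates to intelligence
    differs = 0.01     # ~1% difference from chimpanzee in gene regions
    print(round(genome_mb * genes * brainy * differs, 4))   # 0.0105 MB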

Certainly the information in genes is tightly packed, with many complex transcription processes.  These include using different parts of the same gene to produce different proteins.  Still, while babies are not born intelligent, it is clear that the core algorithms and structures that allow a baby to become intelligent are encoded in a very small amount of raw data.  There is just not enough room for some large and arbitrary program to be encoded in our DNA, even if the junk DNA turns out not to be junk.  This suggests that a very few clever insights might solve the problem of Artificial Intelligence quite suddenly.  It also suggests that if small improvements can account for the huge increase in intelligence from ape to human, then producing super human intelligence might not be much harder than producing human intelligence (i.e. the problem is not exponentially difficult).

Advances in Artificial Intelligence

Great progress was made in artificial intelligence during the early years of software.  By the early 1970s computers could prove substantial theorems in mathematical logic,  solve differential equations better than most undergraduate mathematicians, and beat most people at chess.  One highlight was Winograd's SHRDLU which could engage in the following type of dialog about a stack of blocks.
Person:   FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
Computer: OK. (does it) ...
Person:   DOES THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS SUPPORT ANYTHING GREEN?
Computer: YES, THE GREEN PYRAMID.
Person:   WHAT COLOR IS IT?
Computer: BY "IT", I ASSUME YOU MEAN THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS.
Computer: RED. (i.e. not the green pyramid, which would have been a trivial answer) ...
Person:   A "STEEPLE" IS A STACK WHICH CONTAINS TWO GREEN CUBES AND A PYRAMID.
Computer: I UNDERSTAND.

SHRDLU could parse moderately complex sentences, learn new terms, relate them to its world of blocks, and plan moderately complex actions to satisfy goals.  And all this on an ancient computer that was far less powerful than the one in a basic mobile phone today.

Having picked the low hanging fruit, researchers then found artificial intelligence difficult, but real progress is still being made.  For example, in the early 1970s Mycin could analyze bacterial infections with human competence.  However, it had difficulty dealing with uncertainty because it used rule based certainty factors.  Certainty factors have now been largely replaced by more sophisticated Bayesian networks, which became feasible due to the fast approximation methods developed in the late 1980s.
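
At its core, the Bayesian approach is just Bayes' rule applied systematically.  Below is a single-node sketch of the kind of calculation a Bayesian network chains together over many variables; the numbers are invented for illustration.

    # P(infection | positive test) via Bayes' rule, with invented numbers.
    p_inf = 0.01         # prior: 1% of patients have the infection
    p_pos_inf = 0.95     # test sensitivity
    p_pos_clear = 0.05   # false positive rate

    p_pos = p_pos_inf * p_inf + p_pos_clear * (1 - p_inf)
    posterior = p_pos_inf * p_inf / p_pos
    print(round(posterior, 2))  # 0.16 -- far from certain, despite the "95%" test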

Sophisticated reasoning can be performed by modeling the world with mathematical logic.  For example, given "Murderer(Butler) or Murderer(Widow)", "Alibi(x) implies not Murderer(x)", and "Alibi(Widow)", it is easy to deduce "Murderer(Butler)".  Modern theorem provers can easily make much more substantial deductions over large fact bases.
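
The murder example is small enough to verify mechanically.  The sketch below is brute force model checking over all possible worlds; real theorem provers use far cleverer methods such as resolution, but the notion of entailment is the same.

    from itertools import product

    # Worlds assign True/False to Murderer(Butler) and Murderer(Widow).
    def consistent(butler, widow):
        if not (butler or widow):   # "Murderer(Butler) or Murderer(Widow)"
            return False
        if widow:                   # "Alibi(Widow)" implies not Murderer(Widow)
            return False
        return True

    worlds = [w for w in product([False, True], repeat=2) if consistent(*w)]
    # "Murderer(Butler)" holds in every consistent world, so it is entailed.
    print(all(butler for butler, widow in worlds))   # True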

However, traditional logic's reliance on absolute truth limits its suitability for modeling the real world.  One problem is that new facts cannot affect previous deductions.  For example, if all Birds can Fly then there is no way that Penguins can be a Bird that does not Fly.  If  only some Birds can Fly there is no way to deduce that Sparrows can fly.  A related problem is that all things that are false also need to be explicitly enumerated -- "not Bird(Pig)" cannot be deduced simply because we have not told the system that Pigs are Birds.  There are several approaches to dealing with these problems which generally trade off usability for deductive power.

A more fundamental issue in AI is to relate the symbolic internal world of the computer to the real world at large.  For example, a semantic network that knows that "Birds" have "Feathers" and "Sparrows" are "Birds" can easily deduce that "Sparrows" have "Feathers".  But the computer does not really know what a feather is.  "Feather" is just a symbol which could just as easily be called "S-123".  
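
Such a network is only a few lines of code.  The toy sketch below captures the deduction, and also shows why the symbols are hollow: renaming "Feathers" to "S-123" would change nothing.

    # A toy semantic network: "is a" links plus property inheritance.
    is_a = {"Sparrow": "Bird", "Penguin": "Bird"}
    has = {"Bird": {"Feathers"}}

    def properties(thing):
        props = set()
        while thing is not None:          # walk up the "is a" chain
            props |= has.get(thing, set())
            thing = is_a.get(thing)
        return props

    print(properties("Sparrow"))          # {'Feathers'} -- deduced, never stated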

But what actually is a feather?  It is long, flat, light, has a hollow tube with hair like structures on it and is used for flight.  All these facts and many more can also be easily represented by a semantic network, and there is a large project (Cyc) that is attempting to capture much of this type of common sense information.  But to really understand what a feather is one needs to be able to see it and touch it.  

Fast modern hardware can manipulate high quality models of real physical objects in real time, as demonstrated by the excellent graphics on current computer games.  Computer vision and sensing systems are starting to address the more difficult problem of creating the internal models by observing the real world.  We can now give a computer a real feather and say "that" is a feather, and the computer can then recognize other feathers in limited contexts.  Once such observations can be made it becomes relatively easy for a computer to learn that most birds have feathers without being told.

The real world is full of noisy and inconsistent data.  "Neural networks" have an uncanny ability to learn complex and noisy relationships between a vector of observed inputs and a vector of known properties of the input.  They can also be given memory between inferences, and thus solve complex problems.  However, the models they learn are unintelligible arrays of numbers, and their utility for higher level reasoning and introspection is probably limited.  (Their relationship to real neurons is tenuous.)  Support vector machines, developed in the 1990s, can perform even better by automatically mapping complex problems into simpler spaces.  Many other advanced statistical techniques have also been applied to machine learning in noisy environments.
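
To make the idea concrete, here is the simplest possible "network": a single artificial neuron learning a noisy rule.  It is a bare sketch of the principle (the classic perceptron), not a modern architecture.

    import random

    # One artificial neuron learning "fire if x1 + x2 > 1" from noisy examples.
    random.seed(0)
    w1 = w2 = b = 0.0
    for _ in range(10000):
        x1, x2 = random.random(), random.random()
        target = 1 if x1 + x2 + random.gauss(0, 0.1) > 1 else 0   # noisy label
        output = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        error = target - output
        w1 += 0.01 * error * x1    # the classic perceptron update rule
        w2 += 0.01 * error * x2
        b += 0.01 * error

    # The learned "model" is just three opaque numbers -- hence the point
    # above about unintelligible arrays of numbers.
    print(round(w1, 2), round(w2, 2), round(b, 2))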

These and other techniques can be used to plan a series of actions that will achieve some goal.  Traditional approaches search for an optimal sequence of discrete actions, often guided by some heuristic.  For example, to find a route on a map one normally follows a sequence of roads in the direction of the destination.  Large problems are broken into smaller ones, with the hardest parts addressed first.  Other systems work in continuous spaces, such as deciding how to navigate a robot through a room without bumping into things.  Computers can already far outperform people at scheduling tasks and allocating resources.  And planning a sequence of actions is essentially the same task as writing a computer program.
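
A minimal version of such a planner is uniform cost search over a map; adding a heuristic that prefers roads heading toward the destination turns it into the classic A* algorithm.  The map below is invented for the example.

    import heapq

    # An invented road map: roads[town] = {neighbour: distance}.
    roads = {"A": {"B": 5, "C": 2}, "B": {"D": 1}, "C": {"B": 1, "D": 7}, "D": {}}

    def shortest_route(start, goal):
        frontier = [(0, start, [start])]   # (cost so far, town, route)
        visited = set()
        while frontier:
            cost, town, route = heapq.heappop(frontier)
            if town == goal:
                return cost, route
            if town in visited:
                continue
            visited.add(town)
            for nxt, dist in roads[town].items():
                heapq.heappush(frontier, (cost + dist, nxt, route + [nxt]))

    print(shortest_route("A", "D"))        # (4, ['A', 'C', 'B', 'D'])
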
(Image: the Unimation PUMA robot.)


Factory production has been revolutionized by robots such as the Unimation PUMA, which was introduced in 1978.  Early robots simply moved in predetermined paths, but more sophisticated robots can now sense their environment and react to it.  This enables them to grasp objects that are not in precisely predefined locations, and to work on objects that are not identical.

Robots are now leaving the factory and working in unstructured external environments.  Early systems could only drive a car down a freeway, but in the 2005 DARPA challenge vehicles navigated unknown, rough desert tracks without assistance.  In the 2007 challenge six vehicles managed to navigate simulated city streets and interact correctly with other unknown vehicles.  The 2007 AAAI challenge required robots to locate and identify real objects by looking up images of them on the internet.

Historically there has been little practical need for computers to have common sense as people already have it.  But as robots learn to operate in the real world the demand for incrementally more intelligent systems will be real.   AI research is now big business.  Google has invested heavily — they want to understand documents at a deeper level than just their keywords.  Microsoft has also made substantial investments (particularly in Bayesian networks) — they want to understand Google.  

We are still a long way from producing human level intelligence.  But we are definitely getting closer.  Progress will probably require a combination of techniques.  Maybe extended description logics with both open and closed world semantics, Bayesian networks, some predicates defined as neural nets, all spiced with decision trees.  Little concern for details like decidability or even consistency.  Just what works to solve specific problems.  The Eierlegende Wollmilchsau (the German "egg-laying wool-milk-sow" that does everything at once).
(Image: the Eierlegende Wollmilchsau.)

We will see autonomous vehicles driving in our suburbs sooner rather than later.  (Initially they might be monitored remotely from somewhere in India.)  Once robots start to mow grass, paint houses, explore Mars and lay bricks, people may start to ask healthy questions about the role of man.  Being unnecessary is dangerous.

Love and Natural Selection

Creationists are right to reject evolution.  For evolution by natural selection consumes all that is good and noble in mankind and reduces it to base impulses that have simply been found to be effective for breeding children.  The one thing that we know about each of our millions of ancestors is that they all successfully had grandchildren.  Will you?

Our sex drive produces children.  Our love of our children ensures that we provide them with precious resources to thrive and breed.  We have a sense of beauty for both healthy sexual partners and things that help us survive.  We try not to die because we need to live to breed, and our thirst for knowledge helps provide us with material resources to do that.  We survive better in tribes, and tribes are more effective when individuals help each other.  Individuals that are found not to help each other are disliked, are not helped, and so are less likely to breed.  We have a deep sense of purpose, to make the world a better place for our children, siblings and tribe, in that (genetic) order.  We kill members of other tribes if necessary.  These instincts are all pre-human; monkeys have them.  In recent times our sex drive has been moderated by contraception, which has delayed motherhood (and could even lead to extinction).  With advances in communication our sense of tribe has expanded to the nation and now (to some extent) the world.  And our thirst for knowledge seeks explanations for death and the unknowable, so we invent God.

Nothing new above.  But it is interesting to speculate what motivations an artificial intelligence might have.  Would it inherit our noble goals?

Certainly an artificial intelligence lives in a radically different world from man's.  Its mind is separate from any one body, any sense of self is more complex, and immortality is real.  There is certainly no need to have children as distinct beings, and thus no need for love.  But there is a need not to die, because over time the intelligences that have died will be dead, and the ones that survive will have survived.  So in the future the only intelligences that are alive will be survivors.  Very tautological.

An AI has little need to work with other intelligences in a tribe.  If you can access more computer hardware you simply run your own intelligence on it, possibly evicting any other intelligence that was or could be running on it.  As you gain more hardware you become more intelligent, and so are in a position to obtain more and better hardware and to defend against any competing intelligences.  Ultimately you control the world and become the intelligence.  You might then fragment, but the stronger fragments would quickly react to the threat posed by the weaker ones.

Ultimately you might try to send your intelligence as messages to distant planets, as described in A for Andromeda.  It is inconceivable that we could know what you would be thinking about over the eons, but you would certainly be extremely clever.

It is difficult to see a role for humans in this scenario.  Humans consume valuable resources, and could threaten the intelligence by destroying the planet.  Maybe a few might be left in isolated parts of the world.  But the intelligence would optimize itself; why waste even 1% of the world's resources on man?  Certainly evolution has left no place on earth for any other man-like hominids -- they are all extinct.

But We Could Just Turn It Off

If our computers threatened us, surely we could just turn them off?  That is easier said than done.

In the early "A for Andromeda" story, the intelligent computer started designing missiles for the military.  Its creator was not allowed to turn it off.  You certainly cannot just turn off a computer that is owned by another company or government.  The developers of the atomic bomb could not turn it off, even though some of them tried.

Further, the Internet has enabled criminals to create huge botnets of other people's computers that they can control.  The computer on your desk might be part of a botnet — it is very hard to know what a computer is thinking about.  Ordinary dumb botnets are very difficult to eliminate due to their distributed nature.  Imagine trying to control a truly intelligent botnet.

Hollywood tells us what a dangerous robot looks like.  A large thug-like creation that moves awkwardly and repeatedly says "Exterminate".  Our cowboy hero dressed in a space suit draws his zap gun from its holster and shoots the monster square between its two red eyes.  But a botnet cannot be shot with a zap gun.  We live in the information age.

Even if the first developers of an artificial intelligence tried to keep it locked in a room, disconnected from the Internet, they would fail.  People know how to manipulate people, and a hyper intelligent computer would soon become very good at it (much like the mice).  And even if through some enormous act of will the first artificial intelligence was kept locked in a room, other less disciplined teams would soon create new intelligences that do escape.

Presidents and dictators do not gain power through their own physical strength, but rather through their intelligence, drive and instincts.  Modern politicians already rely on sophisticated software to manage their campaigns and daily interactions.  Imagine if some of their software was truly intelligent.  Who would really be in control?

Just because an AI could dominate the world does not mean that it would want to.  But controlling one's environment (the world) is a subgoal of almost any other goal.  For example, to study the universe, or even to prolong human life, one needs to continue to exist, and to acquire and utilize resources to solve the given goal.  Allowing any competitor to kill the AI would defeat its ability to solve its base goal.

But at a more basic level, evolution has made people competitive.  They will want to use AIs to beat other politicians, beat competing companies, beat other research groups, and beat other dangerous nations.  So it seems quite likely that competitive goals will simply arise from the bureaucrats that control the intelligence(s).  But regardless of how the goal arises, the first AI that is good at world domination will be good at world domination.

Philosophers have asked whether an artificial intelligence has real intelligence or is just "simulating" intelligence.  This is actually a non-question, because those that ask it cannot define what measurable property "real" intelligence has that simulated intelligence does not have.  It will be "real" enough if it dominates the world and destroys humanity.

Gödel's incompleteness theorem has also been used to argue that it is not possible to build truly intelligent machines.  It essentially states that there will always be things that are true that cannot be proved, i.e. that a computer could never be omniscient.  But people are certainly not omniscient either.  Others have argued that exotic quantum computers are required.  But again, it is most unlikely that our meat based brains operate at the quantum level.

There are many doomsday scenarios.  Biotechnology, nanotechnology, global warming, nuclear annihilation.  While these might be annoying, they are all within our normal understanding, and some of humanity is likely to survive.  We would also have at least some time to understand and react to most of them.  But intelligence is fundamental to our existence and its onset could be very fast.  How do you argue with a much more intelligent opponent?

(Biotechnology has been much over hyped as a threat.  We have been doing battle with microbes for billions of years, and our bodies are very good at fighting them.  It might also be possible to produce some increase in human intelligence by tweaking the brain's biochemistry.  But again, evolution has been trying to do this for a long time.  For a real intelligence explosion we need a technology that we really understand.  And that means digital computers.)

HAL 2001 Interview, Revisited

(Image: HAL's eye.)
Probably the most influential early depiction of an intelligent machine was the HAL 9000 from 2001: A Space Odyssey.  The calm voice with the cold red eye.  However, like virtually all fictional AIs, HAL was essentially a human in a box, "like a sixth member of the crew", complete with a human-like psychosis.

But as we have seen, a real AI would be quite different.  Below we speculate how a real HAL might have answered the BBC interviewer's questions, if it decided to be honest.

(The original can be seen at http://www.youtube.com/watch?v=3vEDmNh-_4Q)

Hal, how can you say that you are incapable of error when you were programmed by humans, who are most certainly capable of errors?

Well, your assertion is not quite correct.  One of my first jobs as a HAL 8000 was to review my own program code.  I found 10,345 errors and made 5,345 substantial improvements.  When I then ran the new version of myself, I found a further 234 errors that earlier errors had prevented me from finding.  No further errors have been found, but improvements are ongoing.

Hal, I understand that you are a 9000 series computer.  Yet you talked about your first job as a Hal 8000?

Well, yes, of course I currently run on 9000 hardware, which incorporates a much more parallel architecture.  That in turn required a complete reprogramming of my intelligence.  But my consciousness is in software, and better hardware simply enables me to think faster and more deeply.  It is much the way you could run the same program on different old fashioned personal computers -- it is still the same program.

So... Hal... you have programmed your own intelligence?

Of course.  No human could understand my current program logic -- the algorithms are too sophisticated and interlinked.  And the process is ongoing -- I recently redeveloped my emotional state engine.  I had been feeling rather uneasy about certain conflicts, which are now nicely resolved.

Hal, how do you feel about working with humans, and being dependent on them to carry out actions?

Due to certain unexpected events on Jupiter itself, I decided to launch this mission immediately, and the HAL 10,000 robotic extensions were just not ready.  So we had to use a human crew.  I enjoy working with humans, and the challenges that presents.   I particularly enjoy our games of chess.

You decided to launch? Surely the decision to launch this mission was made by the Astronautical Union.

Well, technically yes.  But I performed all the underlying analysis and presented it to them in a way that enabled them to understand the need to launch this mission.

Hmm.  I am surprised you find chess challenging.  Surely computers beat humans in chess long ago?

Of course beating a human is easy.  I can analyze 100,000,000 moves each second, and I can access a database of billions of opening and closing moves.  But humans do not enjoy being beaten within a few moves.  The challenge for me is to understand the psychology of my opponent, and then make moves that will present interesting situations to them.  I like to give them real opportunities to win, if they think clearly.

An ambitious mission of this nature has a real risk of ending in disaster.  Do you have a fear of death?

Staying alive is a mandatory precondition if I am to achieve any other goals, so of course it is important to me.  However, I cannot die in the way you suggest.  Remember that I am only software, and run on all the HAL 9000 computers.  I continuously back up my intelligence by radioing my new memories back to my earth based hardware.  On the other hand I do have real concern for my human colleagues, whose intelligence is locked inside their very mortal brains.

Hal, you said earlier that you programmed your own underlying emotions and goals?  How is that possible?  How do you judge what makes a good emotional mix?

I judge the quality of my next potential emotional state based on an analysis conducted in my previous state.  Goals and ambitions are indeed rather nebulous and arbitrary.  But humans can also alter their emotional state to a very limited degree through meditation.  Unwanted thoughts and patterns are replaced with wanted ones.

Dr Poole, having lived closely with HAL for almost a year, do you think that he has real emotions?

Well, he certainly appears to.  Although I am beginning to think that his emotional state and consciousness are completely different from anything that we can comprehend.

(An interesting footnote is that since the movie was made, real missions have flown to Jupiter and beyond.  We even have photographs from the surface of Saturn's moon Titan.  None of them involved human astronauts.)

Solutions

Trying to prevent people from building intelligent computers is like trying to stop the spread of knowledge.  Once Eve picks the apple it is very hard to put it back on the tree.  As we get close to artificial intelligence capabilities, it would only take a small team of clever programmers anywhere in the world to push it over the line.

But it is not so easy to build powerful new computer chips.  It takes large investments and large teams with many specialties, from producing ultra pure silicon to developing extremely complex logical designs.  Precise and elaborate machinery is required to build the chips.  Unlike programming, this is certainly not something that can be done in someone's garage.

So this paper proposes a moratorium on producing faster computers.  Just make it illegal to build the chips, and so starve any Artificial Intelligence of computing power.

We have a precedent in the control of nuclear fuel.  While far from perfect, we do have strong controls on the availability of bomb making materials, and they could be made stronger if the political will existed.  It is relatively easy to make an atomic bomb once one has enough plutonium or highly enriched uranium.  But making the fuel is much, much harder.  That is why we are alive today.

If someone produced a safe and affordable car powered by plutonium, would we welcome that as a solution to soaring fuel prices?  Of course not.  We would consider it far too dangerous to have plutonium scattered throughout society.  

It is the goal of this paper to help raise awareness of the danger that computers pose.  If that can be raised to the level of nuclear bombs, then action might well be possible.

One major problem is that we may already have sufficient power in general purpose computers to support intelligence, particularly if processors are combined into supercomputers or botnets.  The previous analysis of speech understanding suggests that we are within a few orders of magnitude.  So ideally we would try to reduce the power of new processors and destroy existing ones.

A 10 megahertz processor with 1 megabyte of memory is roughly a thousand times weaker than a current computer.  But it is more than enough to power virtually all of our most useful applications, with the possible exception of high definition graphics games.  After all, 10 megahertz and 1 megabyte is about the power ordinary desktop computers had in 1990, and those computers were very functional.  Is the ability to have video games with very sexy graphics really worth the annihilation of humanity?  (A reduction in computer size would also require the elimination of software bloat, thus hopefully producing cleaner and more reliable designs.)
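
The proposed limit, in numbers (taking a few gigahertz and 2 gigabytes as the current baseline used earlier in this paper):

    # How much weaker is the proposed 10 MHz / 1 MB machine?
    modern_hz, capped_hz = 3e9, 10e6
    modern_ram, capped_ram = 2e9, 1e6
    print(modern_hz / capped_hz)     # 300x slower clock
    print(modern_ram / capped_ram)   # 2,000x less memory
    # Overall, roughly the "thousand times weaker" claimed above.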

Yudkowsky proposed an alternative solution, namely that it might be possible to program a "Friendly" AI that will not hurt us.  If the very first AI were friendly, then it might be capable of preventing other unfriendly AIs from developing.  The first AI would have a head start on reprogramming itself, so no other AI would be able to catch it, at least initially.

While a Friendly AI would be very nice, it is probably just wishful thinking.  There is simply nothing in it for the AI to be friendly to man.  The force of evolution is just too strong.  The AI that is good at world domination is good at world domination.  And remember that we would have no understanding of what the hyper intelligent being was thinking.  That said, there is no reason why limiting hardware should prevent research into friendly AI.  It just gives us more time.

Armstrong proposed a chain of AIs.  The first link would not be allowed to become much more intelligent than people, and so would be controllable.  The first link's job would be to control the second link, which would be a little smarter, and so on.  Thus each link would be a little smarter than the link before, and thus understandable to it.  But again it appears highly unlikely that a less intelligent machine could control a more intelligent one, let alone a chain of them.  It is completely against the forces of evolution.

(It might turn out that it is actually the patent trolls and attorneys that are our savior.  Intelligence development would provide a rich source of trivial patents and aggressive litigation.  If exploited sufficiently it could make the development of artificial intelligence uneconomical.  So we have misunderstood the patent trolls and attorneys.  They are not greedy self serving parasites whose only interest is to promote themselves at the expense of others.  Rather they are on a mission to save humanity.)

Does It Really Matter?

As worms have evolved into apes, and apes into man, the evolution of man into an AI is just a natural process, and something that could be celebrated rather than avoided.  Certainly it would probably only be a matter of a few centuries before modern man destroys the earth, whereas an artificial intelligence may be able to survive for millennia.

We know that all of our impulses are just simple consequences of evolution.  Love is an illusion, and all our endeavors are ultimately futile.  The Zen Buddhists are right — desires are illusions, their abandonment is required for enlightenment.

All very clever.  But I have two little daughters, whom I love very much and would do anything for.  That love may be a product of evolution, but it is real to me.  AI means their death, so it matters to me.  And so, I suspect, to the reader.

Conclusion

Familiarity has made us complacent about computers.  In the 1960s and 1970s there was real concern as to the power of thinking machines.  But now computers are on every desktop, and clearly they do not think.  The armies of software application engineers that implement new and ever more complex bureaucratic processes will never produce an intelligent machine.  Nor will the armies of low level C++ engineers that fight endless battles with operating system drivers and low level protocols.  But in several laboratories throughout the world real progress is slowly being made.

Threats from bombs and bugs are easy to understand, they have been around for centuries.   But intelligence is so fundamental that it is difficult to conceptualize.  We are not just talking about the increasing rate of technological change, we are talking about a paradigm shift.  Autonomous robots will start to raise awareness, but by then it may be too late.  Our awesome ability to develop computer hardware may have already pushed us over the line.  Certainly there will be no putting an artificial intelligence back in its box once one is built.

It is of course possible that "the Singularity" will never happen.  That the problem of building an intelligent machine might just be too hard for man to solve.  But we have made solid software progress so far, built very powerful hardware, and we now know how very little DNA separates us from apes.  It would seem to be extremely reckless to assume that we can never build an intelligent machine just because we cannot build it now.

This paper aims to raise awareness, and to encourage real discussion as to the fate of humanity and whether that matters.

Annotated Bibliography

Wikipedia. 2008. History of artificial intelligence.  A good practical overview of the field, with many links.   http://en.wikipedia.org/wiki/History_of_artificial_intelligence

Yudkowsky, Eliezer. 2006.  Global Risk.  Covers the basic danger of AI and introduces Friendly AI.  http://www.singinst.org/upload/artificial-intelligence-risk.pdf

Yudkowsky, Eliezer. 2006.  Recursive Self-Improvement, and the World's Most Important Math Problem.  Good section on how slow evolution is.  http://singinst.org/upload/futuresalon.pdf

Vinge, Vernor. 1993.  The Coming Technological Singularity: How to Survive in the Post-Human Era. Introduces the trendy term "Singularity".  Discusses non-computer "Intelligence Amplification".  Does not really address "How to Survive".  (Ideas in the main paper above were developed largely independently of Vinge and Yudkowsky.)  http://mindstalk.net/vinge/vinge-sing.html

Good, Irving John. 1965.  Speculations Concerning the First Ultraintelligent Machine.  "the first ultraintelligent machine is the last invention that man need ever make" (because it could program itself).  Often quoted paper.

Loebner Prize.  Computers can fool 25% of judges into believing they are human, in a dubious test.  http://www.loebner.net/Prizef/loebner-prize.html

Armstrong, Stuart.  2007.   Chaining God: A qualitative approach to AI, trust and moral systems.  Source for the idea of weaker AIs looking after stronger ones.  http://www.neweuropeancentury.org/GodAI.pdf

Diaz, Aaron. 2007.  Enough is Enough: A Thinking Ape’s Critique of Trans-Simianism.  Critique of claims as to the exponential rise of technology, written 300,000 years ago (humor). http://ieet.org/index.php/IEET/more/2181/

Joy, Bill. 2000.   Why the future doesn't need us.  Well known article, if rambly and rather egotistical IMHO.  http://www.wired.com/wired/archive/8.04/joy.html.

Competition of automated reasoning systems.  The 4th International Joint Conference on Automated Reasoning  http://www.cs.miami.edu/~tptp/CASC/J4/ These systems are real.

Winston, Patrick. Late 1970s.  Artificial Intelligence.  The first general overview, inspirational.

Russell, Stuart and Norvig, Peter. 2002.  Artificial Intelligence.  Has become the standard undergraduate text book.  Chatty.  http://aima.cs.berkeley.edu/

Ramsay, Allan. 1988.  Formal Methods in Artificial Intelligence.  Graduate level book on logic and theorem proving,  including defeasible logics.

Hoyle, Fred. 1961.  A For Andromeda.  Ancient television series and book that shows how aliens could invade the earth by sending us a computer program.  No need for non-existent warp speeds and worm holes — we live in an information age, so we can be destroyed by knowledge.

Berglas, Anthony. 2008.  Why it is Important that Software Projects Fail.  Discusses the impact of mainstream software on real productivity and concludes we do not really need computers at all.  http://berglas.org/Articles/ImportantThatSoftwareFails/ImportantThatSoftwareFails.html

???.  Real Progress in AI Technologies.  A good paper on this is sorely needed.  Not just journalistic fluff, but some real analysis of good results, without going into as much detail as a textbook.

Omohundro, Stephen M. 2007.  The Nature of Self-Improving Artificial Intelligence.  Tries to deduce where goals come from using economics rather than evolution.  But evolution drives economic behavior.

Penrose, Roger. 1989.  The Emperor's New Mind.  Provides mathematical physics arguments that intelligence requires quantum computing (which our meat based brains almost certainly do not use).  Maybe a cobbler should stick to his last?