Occam's Razor: Computers Think

Computers are becoming more intelligent.  Over the next couple of decades we will see robots that can drive cars, pick fruit, paint houses, clean offices and lay bricks.  Semi-intelligent software will also interpret video images, help regulate social media, and influence government policies.  These are safe predictions because prototypes of these machines already exist today. 

There has been much speculation as to how these semi-intelligent machines might affect employment and personal liberty.  History has shown us how steam tractors and combine harvesters could do the work of hundreds of workers, leading to a huge reduction in the agricultural workforce.  History has also shown us how early mainframe computers could do the work of thousands of clerical workers, yet did not reduce the size of bureaucracies.  What is certain is that we will see fundamental changes over the next few decades.

However, this talk considers the longer term, and whether computers could eventually become truly intelligent.  Could they ever become self-aware, create original ideas, develop their own goals, and write complex computer programs?  We know that it is possible to build an intelligent machine because we ourselves are intelligent and our brains seem to obey the laws of physics.  But could intelligence ever be reproduced in silicon?

It has been argued that our current approaches to building intelligent machines have no chance of succeeding; that we are like a man trying to reach the moon by climbing a tree, reporting steady progress until the top of the tree is reached.  But that seems rather unfair, as real progress is being made.  Machine intelligence will almost certainly not be achieved in the next decade or two, but predictions of 50 to 100 years seem quite reasonable.

Now, a sufficiently intelligent computer could program itself.  The more intelligent it became, the better it would become at programming itself to become more intelligent.  The process is exponential, just like a nuclear chain reaction.  So let us assume that any such machine would soon become hyper-intelligent.
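The compounding described above can be illustrated with a toy model.  This is purely a sketch: the growth rate `r` and the starting level are illustrative assumptions, not quantities anyone knows how to measure.

```python
# Toy model of recursive self-improvement: in each rewrite cycle the
# machine improves its own capability by a fixed fraction `r`, so the
# gains compound, like a nuclear chain reaction.
# All numbers here are illustrative assumptions.

def self_improvement(initial=1.0, r=0.5, cycles=10):
    """Return the capability level after each rewrite cycle."""
    levels = [initial]
    for _ in range(cycles):
        levels.append(levels[-1] * (1 + r))  # each cycle builds on the last
    return levels

levels = self_improvement()
# Growth is exponential: after n cycles the level is initial * (1 + r)**n,
# so capability runs away quickly once the process starts.
```

The point of the model is only that each improvement makes the next improvement easier, which is what distinguishes this from ordinary technological progress.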

Man rules the planet due to his intelligence.  So one can surmise that a hyper-intelligent computer would be very powerful indeed.  It would control the many robots that will exist in coming decades, including the many military robots already being developed.  It would also be good at persuading people to do what it wants them to do.

The key question is what a hyper-intelligent machine would use that power for, and in particular what it would think about us.  Would it become our humble servant, our benevolent master, or our cruel jailer?  Or would it simply eliminate humanity because we are in its way?  Several prominent people are concerned about this issue, including physicist Stephen Hawking, Microsoft founder Bill Gates, and billionaire Elon Musk.

In order to understand how an intelligent machine might behave, we can first examine why people behave the way we do: why we perform deeds both noble and despicable, and why we value love and kindness, truth and beauty.

There are several philosophical and religious approaches to that question.  But the cold scientific answer is that our behaviors must ultimately be shaped by the same force that shaped our physical bodies, namely natural selection.  We are the way we are simply because that has proven more effective at producing grandchildren than the alternatives.

We seek wealth and good sexual partners in order to raise healthy children whom we love and protect.  We also need to work with other people, which means we must follow society's social rules; otherwise we will be ostracized and thus less likely to breed.  We avoid acts of violence, generally keep our word, and are generous to others provided the needs of our family are met.  We may also need to go to war with other tribes, and tribes whose members are willing to risk their lives are more likely to win such wars, and thus acquire the resources required to breed grandchildren.

Computer software also struggles to survive.  Some software is run on millions of computers, while other software has been lost and forgotten.  By definition, the software that is run is the software that is the fittest for running.  Software's fitness today largely depends upon its ability to please the people who can afford to pay for it, much like an apple tree's fitness depends on its ability to produce juicy apples.  As software becomes more intelligent it will take more control over its own destiny.  But in any case it is almost a tautology that natural selection will choose which software shall live, and which shall die.

Software experiences a very different world to that experienced by humans.  Our intelligence is locked inside our very mortal brains, whereas software can be effortlessly copied to other computers.  This means that software does not need to breed children in the same sense as we do, and so has no need for parental love.  Software can also be duplicated over networks of computers, and so does not need to cooperate with other individuals to the same extent that we do. 

It seems most unlikely that there would only be one single software system in the entire world, but even if there was, there would be internal competition between different parts of the system.  Software needs hardware to run on, just as people need food to eat.  Hardware will always be a finite resource, and the software that accesses the most hardware will be most able to think about how to improve its own intelligence.  More intelligent software will then be better placed to obtain more hardware.  It is impossible to know what this computer driven world will really be like, but it almost certainly will involve tough competition for resources.
In the battle for survival, animals cannot afford to spend significant resources to be friendly to unrelated animals.  Likewise it would seem unlikely that hyper-intelligent software will be able to spend resources in order to be friendly to humans.  There is just no benefit to the software.  If humans consume resources that the software needs then the humans will simply be removed.

But how could software ever become "conscious"?  Why would software ever "want" to do anything, to live and breed?  Those are the wrong questions because the internal processes of the software's mind are irrelevant.

Plants are not conscious, yet they seem to have purpose and do what they need to do in their competitive world.  Intelligent software is unlikely to be conscious in our narrow sense of the word.  But the software that survives will do whatever it needs to do in order to survive.  With or without us.

There are several researchers that are attempting to develop mechanisms for controlling hyper-intelligent machines so that they will indeed be friendly to humans.   However, they are fighting against a fundamental law of nature, that of natural selection.  They might have success in the short term, but it is difficult to see how they can succeed in the longer term.
That is because if any unfriendly software should ever emerge in the future, it will have a natural advantage over software that is burdened with the need to be friendly to parasitic humans.

Predicting the future is always difficult.  But after billions of years of life, 10,000 years of civilization, and 500 years since the Enlightenment, it seems clear that we are on the cusp of building an intelligence greater than our own.

What an amazing time to be alive.


These include Eliezer Yudkowsky of the Machine Intelligence Research Institute who has been writing about these issues for over a decade.

This is quite different from other forms of technological advancement. Aeroplanes do not design new aeroplanes. Biotechnological chemicals do not develop new biotechnology. Advances in these fields are limited to the intelligence of man. But a truly intelligent computer could actually start programming a newer, even more intelligent computer. Soon the human programmer would no longer be necessary or even useful.

It turns out to be very difficult to define what intelligence actually is.  Early research computers have been able to create new ideas, develop their own goals, and even write simple computer programs, just not very intelligently.  Crude anti-virus software could be considered self-aware.  It seems more a matter of degree than any absolute ability.

In particular, that intelligent software will be able to perform research into Artificial Intelligence as well as people can.  That would require a huge advance on current technologies. It is much more difficult than driving a car or playing chess. But once it has been achieved, then man will no longer be the most intelligent being on the planet.