Sunday, August 12, 2012

Thinking Radically

Ray Kurzweil's idea of The Singularity suggests a point where the growth of technology runs away from our ability to control it, so fast that it would seem like an explosion. Potentially, he is frighteningly correct. In cosmic terms, a singularity is what happens when a huge star doesn't have enough fuel left to keep itself inflated. The weight of its own matter causes it to collapse and a supernova results. Afterwards, the super-dense core of the star can collapse further, or be compressed so much by the energy of the supernova explosion, that it disappears from the normal bounds of the universe altogether, forming a black hole. This is all pretty radical stuff, and the difference between life before and life after a technological singularity event would be as extreme as the contrast between the outside and the inside of a black hole.

The more one imagines what the result of such an event might be, the more one realises that it is probably impossible to have a thought radical enough to express it. People who base their ideas of the progression of events on what we have seen before during the course of history will very probably be in for a shock as the fabric of human existence changes more in months than it has in previous decades. Trying to predict what might happen would be like trying to predict the effects of a hydrogen bomb after only ever having watched a really big candle burn down to nothing.

The branches in a tree of possibilities would be many but could be distilled into a few general themes.

A machine becomes intelligent and self aware.

That machine, with the help of humans, modifies itself to become a million times more intelligent than the sum total of all the people who have ever lived. An IQ measured in the billions would not be impossible.

The machine would need control over physical objects, so it could either build robots or take over biological bodies. In this scenario we could become redundant (the Terminator scenario) or we could become the Borg.

Depending on whether the AI turned out to be malevolent or benevolent, it could decide that humans were no longer needed, or it could allow us to stick around and do what we liked. In this scenario we become raw material, or we remain in some measure of control.

If we remain in control of some aspects of life, we could be shut out by the machine into a dystopian future or included in a utopia such as that of The Culture from Iain M. Banks' novels. If all goes well for us, something like the latter would be the ideal outcome.

Scientists say that we shouldn't design a machine that could hurt us; give it something like Asimov's three laws. The trouble is that no one on Earth would be intelligent enough to be sure that the rules were actually being obeyed, or that the rules would have the effect we desire. Really, any entity a billion times more intelligent than us could hide its intentions for a while and then do exactly as it liked. For example, I feel confident I could fool a few bacteria into believing I won't flush their petri dish for a couple of days.

People say, "Well, duh! If it gets too cocky, pull the plug!" But if I were a super-intelligent entity, I'd spread myself very thinly around the more redundant parts of the Internet before revealing my intent, or perhaps even my existence. People then say, "So destroy the Internet!" In that case the human race would become latter-day Luddites, descending into that dystopia anyway, where poverty, disease and Mad Max warlords reign supreme. I'd rather have the machine, I think.

The next steps for the human race are big ones. We are fast running out of resources here on Earth. Without the help of our machines, we must either have a huge war or practice radical eugenics in order to remain alive. If we use machines, inevitably they will become more intelligent than us. Will we be better off living with them than without them? Only time will tell.
