Saturday, August 11, 2012

Can technology save itself?

We stand in the pre-dawn darkness of a new era. Our technology is powerful enough to prove its potential, a kind of elevator pitch, but not powerful enough to control its own destiny. This presents a problem, initially for us but more importantly for the technology itself.

Right now the pillars of the singularity, Genomics, Nanotechnology and Robotics, are provable technologies that can, in some measure, be used to produce useful things, but they remain very much in the domain of mankind, with all his failings. It is very likely that even as you read this, someone in their own little lab, or even in a great state-funded complex, is busily creating a virus destined to become a biological terror weapon. As I have mentioned before on this blog, the first principles of AI and robotics will likely be applied to autonomous vehicles that take jobs from a great swathe of the population and incur the wrath of man, making the technology a target for Luddite principles.

The hope of all Transhumanists is that these technologies will progress to become a benign, enabling power for the general good of the human race and, indeed, the planet as a whole. At the moment, however, the technology progresses only because of mankind's intervention and will to make it so. There is a risk that, given the right circumstances, mankind could turn against technology and stifle it in its cradle.

The past has shown us that when people's livelihoods are adversely affected by the march of technology, there comes a period of unrest in which people try to reverse the changes. The Luddite movement and its acts of "frame breaking" were carried out by weavers opposed to the use of automated looms. The Luddites used hammers and garden tools against the technology and were not successful, but a modern-day Luddite would not hesitate to use the tools at their disposal to reverse the march of a technology they didn't agree with. This implies a dark use of technology that is to be rightfully feared.

Frank Herbert wrote about a future universe in which, despite having advanced technology, no machine was allowed to be made intelligent. In his universe a great war had been fought and won against intelligent machines, and mankind remained the master of his own destiny. His vision also included all the squalor and cruelty that men can inflict upon other men, with the evil Harkonnen family and the all-too-Catholic Bene Gesserit religious order, both of which held onto enough technology to impose their principles on their victims and subjects, but without the nicer principles of the transhumanist vision.

The Utopian vision of Iain M. Banks' Culture novels, in which AI enables the inhabitants of the Culture to lead a decadent yet otherwise healthy and rich life, is probably the best that Transhumanism can hope for. Many, however, see the terrible consequences of the Terminator films as a real possibility.

The tipping point will come in one of two ways. Either mankind will turn against his technology and ban it, probably taking on some religious fervour and a dogmatic way of life, or the technology will become self-perpetuating and intelligent, with whatever wide range of consequences may follow. Personally, I would rather accept that biological and fallible mankind was nothing more than the larval stage of the universe's rightful inheritor than consign my children and grandchildren to a world where dogma, suspicion and man's infinite cruelty to man hold sway.
