Tuesday, June 20, 2006



I THOUGHT ISAAC ASIMOV HAD FIXED THIS YEARS AGO?
The following report came from The Times of London via MSNBC:
"The race is on to keep humans one step ahead of robots: an international team of scientists and academics is to publish a "code of ethics" for machines as they become more and more sophisticated. Although the nightmare vision of a Terminator world controlled by machines may seem fanciful, scientists believe the boundaries for human-robot interaction must be set now - before super-intelligent robots develop beyond our control. "Security, safety and sex are the big concerns," said Henrik Christensen, a member of the Euron ethics group. How far should robots be allowed to influence people's lives? How can accidents be avoided? Can deliberate harm be prevented? And what happens if robots turn out to be sexy?" http://www.timesonline.co.uk/article/0,,2087-2230715,00.html


Clearly what’s being proposed is a modern and for-real version of The Three Laws of Robotics, formulated in 1940 by Isaac Asimov and John W. Campbell for use in science fiction, which are as follows:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I feel, however, that at least two other factors are at play here. One is Ray Kurzweil’s concept of The Singularity, when, in about 30 years’ time, machines will be self-aware and smarter than we are. The other is, of course, the logical assumption of fictional artificial intelligences like HAL 9000 and Skynet, that humanity is a worthless viral infection and should be eradicated. A sentiment with which I don’t always disagree.

The secret word is Exterminate

(The email address is still byron4d@msn.com)
