Letter Re: Artificial Intelligence

James and Hugh,

Regarding your entry on Artificial Intelligence, please see the book “Our Final Invention” by James Barrat. Our world lives with the potential for catastrophe, e.g. nuclear war, EMP attack, famine, drought, civil unrest, asteroids, etc. These are all potentially survivable occurrences. The achievement of AGI (artificial general intelligence) leading to machine superintelligence is not. You might also check out the following: http://plato.stanford.edu/entries/chinese-room/.

Hugh Replies: While I understand the reasoning behind the desire to advance artificial intelligence, I also clearly see the dangers in it. Isaac Asimov was a visionary in his “I, Robot” stories in dealing with this issue. While the ability to make decisions can clearly be advanced, even in the face of unclear logic, the morality of making any decision without guiding ethics and principles is definitely questionable. That is really the crux of our humanity: the subjugation of our decisions to our sense of right and wrong. The whole point of natural law is that there are some universal concepts that we simply recognize as “right” or “wrong”. Machines don’t and can’t know inherently what these “laws” are. At best, they can be programmed with the programmer’s ideas of the laws, but they will always be unable to infer the laws themselves.