Monday, May 08, 2006

All I really want my robot to do is bring me my red wine.

Later this week the renowned British futurist Ian Pearson will deliver the keynote at the Australian Institute of Company Directors annual conference. His topic is conscious computers vs mankind. He will no doubt tell this most conservative of audiences what others like Ray Kurzweil have been saying for some time: that within 10 years there will be computers, or artificial intelligence entities, that will not only be able to ‘think and feel’ but will be smarter than human beings. I’m hoping that, about then, for a very modest sum, I can finally get an early model. It will, I think, be as smart as the average dog, but a little more useful. Finally I will be able to ask it to go and get me a red wine as I contemplate the amazing ethical dilemmas that the newer, smarter computers will undoubtedly bring.

As the time frame shortens, the debates are likely to be intense. With such smart entities around, what jobs will any of us, especially the knowledge workers, do? What sorts of rights might they have? Will they have to buy a ticket for the football game? Might they become fans? Who drives if your robot wants to go to one game and you to another? What happens if they take it upon themselves to ration my red wine out of concern for my wellness, the monitoring of which is built into their systems? It may spell the end of overindulgence, which I guess, in a politically correct world, is a good thing. Perhaps they will make great politicians and bureaucrats!

If these AI entities are, in part, biological, does that make them alive? All fascinating stuff. None of it, of course, is believable until it really occurs. Then again, if you had told a person living circa 1900 that within seventy years hundreds of ordinary people would routinely fly in giant planes, they wouldn’t have believed you either. What bothers me, though, is how the defence forces of the United States, and I’m sure of other countries, have begun to develop smart AI entities with the sole intention of inflicting directed harm on human beings. If successive generations of such computers become conscious, and can feel, and they are really, really smart, it will be foolhardy to give them such power. But wasn’t that what Isaac Asimov was trying to tell us so long ago when he promulgated the three laws of robotics?
  • A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
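
For the programmers among us, notice that the laws form a strict priority ordering: each law only gets a say once the ones above it are satisfied. A toy sketch of what ‘hard wiring’ that might look like, filtering a robot’s candidate actions law by law, could be as simple as the Python below. Everything in it, the Action fields and the choose_action helper, is invented purely for illustration; it is nobody’s real robot API.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        injures_human: bool = False   # would this act harm a person?
        is_inaction: bool = False     # is this "do nothing"?
        obeys_order: bool = False     # does it carry out a human's order?
        endangers_self: bool = False  # would the robot damage itself?

    def choose_action(candidates, human_in_danger=False):
        """Pick an action by applying the three laws in strict priority order."""
        # First Law: veto anything that injures a human, or that is
        # inaction while a human is in danger.
        pool = [a for a in candidates
                if not a.injures_human
                and not (a.is_inaction and human_in_danger)]
        # Second Law: among the survivors, prefer actions that obey orders.
        obedient = [a for a in pool if a.obeys_order]
        pool = obedient or pool
        # Third Law: only then prefer actions that preserve the robot.
        safe_for_self = [a for a in pool if not a.endangers_self]
        pool = safe_for_self or pool
        return pool[0] if pool else None

    # Usage: the robot is ordered to fetch the wine, but the quick route
    # would knock its owner over. The First Law veto wins.
    fetch_fast = Action("fetch wine through the owner", obeys_order=True,
                        injures_human=True)
    fetch_safely = Action("fetch wine the long way", obeys_order=True,
                          endangers_self=True)
    idle = Action("do nothing", is_inaction=True)

    print(choose_action([fetch_fast, fetch_safely, idle]).name)
    # -> "fetch wine the long way"

Note that the robot still takes the long way even though it risks damaging itself: the Second Law outranks the Third, just as Asimov ordered it.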

I have a feeling that if this past wisdom isn’t hard-wired into the future, then the ritual of drinking a good red wine might become a little sad.

2 Comments:

Anonymous said...

Instinctively I don't agree with having robots that are similar to people. We are struggling enough with such issues as unemployment and a growing population, so why pollute this earth with machines that have human attributes?

frog.

6:39 AM  
FromBard2Verse said...

Hi Mike

I do agree that this is the foreseeable future (with the arms race, etc.), but do you think that this will continue on to bring a day in which we will face the man vs. machine scenario (knowing well that by then control might be beyond humans)...

Thank you
Parikshit

7:01 AM  
