
Wednesday, February 16, 2005

So much for the 1st Law

This New York Times story on military robots is pretty cool.

'The lawyers tell me there are no prohibitions against robots making life-or-death decisions,' said Mr. Johnson, who leads robotics efforts at the Joint Forces Command research center in Suffolk, Va. 'I have been asked what happens if the robot destroys a school bus rather than a tank parked nearby. We will not entrust a robot with that decision until we are confident they can make it.'

Trusting robots with potentially lethal decision-making may require a leap of faith in technology not everyone is ready to make. Bill Joy, a co-founder of Sun Microsystems, has worried aloud that 21st-century robotics and nanotechnology may become 'so powerful that they can spawn whole new classes of accidents and abuses.'

'As machines become more intelligent, people will let machines make more of their decisions for them,' Mr. Joy wrote recently in Wired magazine. 'Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage, the machines will be in effective control.'
Eventually, this is bound to happen. Either we will embrace these technologies or we will be destroyed by those who do. Yeah, it will cause problems. Every scientific advance has. (via VodkaPundit)
