Basic cell phones once dominated consumers’ lives, but we’ve now replaced them with “smart” phones that can surf the Web, play music, and download files and documents simultaneously. Samsung’s latest flagship, the Galaxy S5, can pause a video when your eyes leave the screen (Smart Pause) and scroll a web page as you read it (Smart Scroll). Some consumers deem these features unnecessary and “gimmicky,” but they show how much computing power now fits into a five-inch display, with desktop capabilities, that can be gripped in the palm of your hand.

(Image caption: If the new robot moral-decision experiment goes well, actions such as these will become typical of autonomous robots. Credit: UK Telegraph)
Google has begun building its robotic army and a robotics OS, but researchers from Rensselaer Polytechnic Institute (RPI), Tufts University, and Brown University are teaming up with the U.S. Navy on a somewhat different mission: giving autonomous robots moral decision-making skills. “The question is whether machines – or any other artificial system, for that matter – can emulate and exercise these abilities,” said Matthias Scheutz, professor at Tufts University’s School of Engineering and director of its Human-Robot Interaction Laboratory. It’s always been said that man is not a machine and a machine is not a man. If the challenge succeeds, the new study may start to prove that old idea wrong.
What is the motivation behind such a challenge? In a sentence: researchers want robots to become truly autonomous, to function on their own and interact productively with humans without remaining so “robotic.” In the movies of the ’80s, robots were remote-controlled metal bodies that did whatever you wanted when you pressed the button tied to a given function. While today’s robots are far more impressive than they were 30 years ago, in one respect they haven’t progressed much beyond the ’80s: they are still “programmed” to do exactly what humans demand.
Could robots ever make moral decisions on their own, without human aid? Strictly speaking, robots can only come to make moral decisions if humans program them to do so, and the researchers behind the study aim to add moral decision-making to the growing list of robotic capabilities. Suppose a robot serves as an army medic in a war zone, tasked with transporting seriously wounded soldiers to an aircraft. If it comes across another injured soldier along the way, will it stop to help, or will it continue its one-track task and neglect the wounded soldier? Humans cannot predict every moral dilemma that can arise in such situations, which makes the task of programming moral decision-making all the more difficult.
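To make the difficulty concrete, here is a minimal, purely hypothetical sketch of the rigid, rule-based tasking robots have today. Nothing below comes from the study itself; every name (MedicBot, Casualty, encounter) is invented for illustration:

```python
# Hypothetical sketch of a rigidly programmed medic robot.
# All names and rules here are invented for illustration; they do not
# come from the RPI/Tufts/Brown study.

from dataclasses import dataclass

@dataclass
class Casualty:
    name: str
    severity: int  # 1 (minor) .. 10 (critical)

class MedicBot:
    def __init__(self, patient: Casualty):
        # The robot's one-track task: deliver this patient to the aircraft.
        self.patient = patient

    def encounter(self, other: Casualty) -> str:
        # A naive fixed rule: never interrupt the current mission.
        # Note that 'other.severity' is never even consulted -- the rule
        # has no way to weigh a moral dilemma the programmer didn't foresee.
        return f"Continue transporting {self.patient.name}; ignore {other.name}"

bot = MedicBot(Casualty("Soldier A", severity=9))
print(bot.encounter(Casualty("Soldier B", severity=10)))
# -> Continue transporting Soldier A; ignore Soldier B
#    (even though Soldier B is more badly hurt)
```

The hard part the researchers face is replacing a fixed rule like the one in encounter() with reasoning that can weigh dilemmas nobody enumerated in advance; a longer list of if-statements only postpones the problem.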
If the researchers pull this off, I’m sure Google will give them a call sometime soon. With driverless cars already on California city streets and drones patrolling the skies, morally aware robots would be yet another tech marvel to astound us. Can you handle the thought of a robot making the same moral decisions as you?