The Office of Naval Research in the US is awarding $7.5 million in grant money over five years to university researchers to try to build robots with morals and a sense of right and wrong.
The grant, given to scientists from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown, will be used to explore ways to build an autonomous robot with a conscience, much like a human's.
"Even though today's unmanned systems are 'dumb' in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we've seen before," Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One.
"For example, Google's self-driving cars are legal and in use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake."
According to a Department of Defense directive on autonomy in weapon systems, the United States military does not allow fully autonomous robots to be used on the battlefield. Semi-autonomous robots, even in the event of degraded or lost communications, may not "autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator."
Bello said that such morally capable robots will be vital in disaster situations. A robot may, for example, have to decide whom to evacuate or treat first, a decision that would call for human-like moral judgement and ethical reasoning.
"While the kinds of systems we envision have much broader use in first response, search and rescue, and the medical domain, we can't take the idea of in-theater robots completely off the table," Bello added.
Some cutting-edge drones, including BAE Systems' batwing-shaped Taranis and Northrop Grumman's X-47B, already have a significant degree of self-direction and decision-making capability.
"One of the arguments for (moral) robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action," said Wendell Wallach, the chair of the Yale Technology and Ethics Study Group.