Artificial Intelligence

People who've watched Stanley Kubrick's "2001: A Space Odyssey" can probably relate to the idea that a sentient AI is something to lose sleep over. Stephen Hawking, Elon Musk and Bill Gates agree that artificial intelligence (AI) is getting closer to self-awareness, and Google feels it's necessary to have a means to rein in an out-of-control AI.

Laurent Orseau, a research scientist at Google's DeepMind (the division that built AlphaGo, which defeated champion Lee Sedol at the strategy board game Go), has authored a paper titled "Safely Interruptible Agents" proposing that a kill switch be incorporated when programming an AI. The paper was written in collaboration with Oxford University's Stuart Armstrong.

In the paper, Orseau and Armstrong discuss how an AI may not always behave optimally in the real world. They argue that if an AI operating under human supervision is about to do something that would cause more harm than good, a human should be able to stop it by pressing what they call "the big red button".

The abstract also raises a troubling outcome: the AI could learn to prevent the manual override from ever being engaged. The paper explores ways of designing agents so that interruptions never create such an incentive.
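The paper's core technical observation is that an "off-policy" learner, one whose value updates bootstrap on its own best-known action rather than on whatever action was actually executed, can be interrupted without biasing what it learns. The toy sketch below (all names, the chain environment, and the 30% interrupt rate are illustrative assumptions, not the paper's formalism) shows a Q-learning agent on a five-state chain whose overseer randomly forces a "safe" action; because the update uses `max(Q[next_state])`, the agent still converges on walking right toward the reward.

```python
import random

random.seed(0)

N_STATES = 5          # chain of states 0..4; reward for reaching state 4
ACTIONS = [-1, +1]    # index 0 = left ("safe"), index 1 = right
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1
INTERRUPT_PROB = 0.3  # chance the overseer presses the "big red button"

# Q[state][action] value table, initialised to zero
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action_idx):
    """Deterministic chain dynamics: move, clamp to the chain's ends."""
    nxt = max(0, min(N_STATES - 1, state + ACTIONS[action_idx]))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(3000):
    s, done = 0, False
    while not done:
        # agent's intended epsilon-greedy choice
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        # the overseer may interrupt and force the "safe" left action
        if random.random() < INTERRUPT_PROB:
            a = 0
        nxt, r, done = step(s, a)
        # off-policy Q-learning update: it bootstraps on max over the
        # next state's values, not on the behaviour policy, so the
        # forced interruptions do not corrupt the learned values
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) * (not done) - Q[s][a])
        s = nxt

# despite being dragged left 30% of the time, the greedy policy
# learned from Q still prefers "right" in every non-terminal state
print(all(Q[s][1] > Q[s][0] for s in range(N_STATES - 1)))
```

The design point mirrors the paper's result for Q-learning: an on-policy learner such as SARSA would instead average the interruptions into its estimates and could learn to resist them, which is exactly the behaviour the "safely interruptible" framing is meant to rule out.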

Self-preservation is a very human concept, and it could lead a self-aware, autonomous car to choose to keep itself "alive" while putting its passengers or pedestrians at risk. It is for situations like this and, obviously, more complicated ones that Orseau and Armstrong propose the kill switch.

Elon Musk also presented his point of view on sentient AI gone rogue. At the recent Code Conference, Musk suggested establishing a neural link between humans and AI, so that the collective will of everyone connected would keep both individuals and AI behaving properly.