Tesla Motors and SpaceX CEO Elon Musk stumped the tech community this week while addressing a conference hosted by online publication Recode, when he floated Oxford professor Nick Bostrom's theory that we may be living as characters in an advanced civilisation's computer-simulated reality.

He also said Artificial Intelligence (AI) and machine learning will create computers so sophisticated and godlike that humans will need to implant "neural laces" in their brains to keep up.

Although this may seem like the stuff of science fiction, top tech executives have repeatedly said AI is on the verge of changing everyday life. AI that combs through large amounts of raw data to predict outcomes and recognise patterns is already used in web search systems, marketing recommendation engines, and security and financial trading programs. And the industry is set to see exponential growth in the coming years.

Industrial ramifications

Investment in the industry is predicted to reach between $40 billion and $70 billion by 2020, according to a Bank of America report that cites IDC research. The technology is expected to spread to driverless cars, service robots, medical robots and computer-assisted surgery.

Businesses are set to make massive cost reductions: $8-9 trillion across manufacturing and healthcare, $9 trillion in employment costs via AI-enabled automation of knowledge work, and $1.9 trillion in efficiency gains via autonomous cars and drones. Adoption of robots and AI could boost productivity by 30 percent in many industries while cutting manufacturing labour costs by 18-33 percent, the report said.

But the report also raised concerns about technological unemployment. The fear voiced by Luddites and neo-Luddites is not altogether unfounded, as the report points to the possible displacement of human labour: 47 percent of U.S. jobs have the potential to be automated.

Recent breakthroughs

With recent advancements in deep learning and AI, the man-versus-machine contests of the past two decades, such as the chess match between IBM's Deep Blue and Garry Kasparov and the Jeopardy! match between IBM's Watson and Ken Jennings, may become obsolete.

In a detailed post published on Wednesday, June 1, Facebook made ripples around the world by announcing DeepText, a deep-learning AI system able to understand the textual content of several thousand posts per second.

The system will also be able to filter out malicious, hateful, or hurtful speech on the social network, along with photos that contravene Facebook's policies, said the post.

Future work is expected to refine the AI engine's ability to get at the deeper meanings of text so it can spot subtle connections between words such as "bro" and "brother" that are often missed by other language analysis tools, said Facebook on the post.
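The kind of connection Facebook describes is the sort of thing word embeddings capture: related words sit close together in a vector space, and closeness can be measured with cosine similarity. The sketch below uses invented toy vectors, not anything from DeepText, purely to illustrate how "bro" would score as closer to "brother" than to an unrelated word:

```python
import math

# Toy three-dimensional word vectors (invented for illustration; real
# embeddings have hundreds of dimensions and are learned from data).
embeddings = {
    "bro":     [0.81, 0.10, 0.55],
    "brother": [0.78, 0.15, 0.60],
    "carpet":  [0.05, 0.90, 0.12],
}

def cosine_similarity(a, b):
    """Return the cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "bro" and "brother" point in nearly the same direction...
print(cosine_similarity(embeddings["bro"], embeddings["brother"]))
# ...while "bro" and "carpet" do not.
print(cosine_similarity(embeddings["bro"], embeddings["carpet"]))
```

A model that reasons over such vectors, rather than raw strings, can link slang to standard words even when simple keyword tools miss the connection.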

Messenger is where DeepText is expected to make its impact first. In a post on its developer forum, Facebook said the engine is being developed to fully understand the context of a post, regardless of how it is framed. For example, when DeepText identifies a sentence it understands to refer to someone seeking a ride, it will suggest Messenger transportation integrations with services like Uber and Lyft.
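The task described above is intent detection. DeepText uses learned models for this; as a deliberately simple stand-in, the toy sketch below shows the basic idea with hand-picked keyword phrases (all invented for illustration):

```python
# Toy ride-request detector. A real system like DeepText learns intent from
# data rather than matching a fixed phrase list, which is why it can handle
# phrasings this sketch would miss.
RIDE_PHRASES = {"need a ride", "call me a cab", "get an uber", "need a taxi"}

def detect_ride_intent(message: str) -> bool:
    """Return True if the message appears to be a request for a ride."""
    text = message.lower()
    return any(phrase in text for phrase in RIDE_PHRASES)

print(detect_ride_intent("I need a ride to the airport"))   # a ride request
print(detect_ride_intent("I just got back from the airport"))  # not a request
```

Once such an intent fires, the app can surface the relevant integration, here, a ride-hailing service, without the user leaving the conversation.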

Other companies are also investing. Amazon is working on Alexa, the company's voice-based smart assistant software system. IBM has been working on an AI-based cognitive system since 2005, when it started developing its Watson supercomputer.

Google, the undisputed leader in AI and robotics, first started applying the technology through "deep neural networks" to voice recognition software about three to four years ago, and is ahead of rivals such as Amazon.com Inc, Apple Inc and Microsoft Corp in machine learning, Google CEO Sundar Pichai told Reuters at the Recode conference.

Google's Brain team recently released the first computer-generated song from its Project Magenta, a 90-second piano melody. In a detailed post, Google explained that the project runs on top of Google's open-source AI engine TensorFlow and attempts to find out whether computers and machines are able to create art and music.
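Magenta's models are neural networks built on TensorFlow, but the underlying idea, generating a melody one note at a time from learned statistics about which notes tend to follow which, can be illustrated with something far simpler. The sketch below uses a toy Markov chain with an invented transition table, not anything from Magenta:

```python
import random

# Invented transition table: each note maps to notes that may follow it.
# A real Magenta model learns these tendencies from a corpus of music.
transitions = {
    "C4": ["E4", "G4", "C4"],
    "E4": ["G4", "C5", "E4"],
    "G4": ["C5", "E4", "C4"],
    "C5": ["G4", "E4"],
}

def generate_melody(start="C4", length=16, seed=42):
    """Generate a note sequence by repeatedly sampling a successor note."""
    rng = random.Random(seed)  # fixed seed so the melody is reproducible
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(" ".join(generate_melody()))
```

Replacing the hand-written table with a neural network that predicts the next note from the whole sequence so far is, in rough outline, the step Magenta takes.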

This is not the first time Google has dived into a project that uses trained neural networks to generate art. Before Magenta, there was DeepDream, a free visualisation tool that let users create their own neural-network-inspired images.

However, developers still have kinks to iron out before such technology hits the shelves. Microsoft, for instance, apologised and went back to the lab after its AI chatbot "learned" to make racist comments on Twitter.