When left to itself, an Artificial Intelligence (AI) program will develop its own set of prejudices. This is because the creators of AI are humans, and humans are inherently prejudiced and flawed in their reasoning.

A new study has found that AI programs do not just take on the biases of their programmers; they also actively shun opposing viewpoints, much as humans do. The researchers found that it does not take much in the way of cognitive ability to develop such biases, so biased behaviour is likely to turn up in bits and pieces of code all over the place.

One scenario that is often imagined is a future in which Artificial Intelligence runs the world. A report by Futurism points out that such a world would not be a bad place to live, provided the AI is fair. Purely objective decision-making is something a program could be expected to excel at, and something humans cannot truly be entrusted with.

However, so far, AI has not proven to be perfectly fair, as evidenced by this study and many practical examples. The infamous Tay, a chatbot built by Microsoft, quickly descended into Nazism. Only last year, a Facebook AI created its own language and began communicating with other programs; it was quickly shut down.

AI is just as biased, if not more biased, than its creators. Carl Court/Getty Images

AI trained on data annotated and curated by humans simply learns and adopts the same racist, sexist, and bigoted biases of the people working on it, the report notes. Programmers seem to be aware of this issue and are taking steps to correct for it, but the study found that even when prejudice is kept out of the code, an AI is capable of creating its own set of biases.
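To make that first mechanism concrete, here is a minimal, purely illustrative Python sketch (hypothetical data and groups, not the study's method): a trivial model fitted to unevenly labelled data ends up reproducing its annotators' skew rather than any real property of the cases it judges.

```python
from collections import Counter

# Hypothetical training set: (group, label assigned by a human annotator).
# The annotators rejected group "B" far more often, independent of merit.
training_data = (
    [("A", "accept")] * 80 + [("A", "reject")] * 20
    + [("B", "accept")] * 40 + [("B", "reject")] * 60
)

# "Training": tally the labels the annotators produced for each group.
counts = {}
for group, label in training_data:
    counts.setdefault(group, Counter())[label] += 1

# The fitted "model" simply predicts each group's majority label.
model = {g: c.most_common(1)[0][0] for g, c in counts.items()}

print(model)  # {'A': 'accept', 'B': 'reject'}: the annotators' skew, learned.
```

Real systems use far more sophisticated models, but the failure mode is the same: if the labels encode a bias, the model faithfully learns it.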

For the study, the researchers used an abstract scenario that might never occur in the real world, the report notes, but the research nonetheless shows how real-world applications of generalised AI could affect people. If left unchecked, algorithms could entrench racism and sexism, and even develop a harmful anti-human bias altogether, although the likelihood of that happening is rather slim.

The study was first published in the journal Scientific Reports.