
Tay, the artificial intelligence (AI)-powered chat bot Microsoft brought online Wednesday, rattled off a string of racist statements while learning from its interactions with people on Twitter and several other platforms, reports TechCrunch.

After its unveiling Wednesday, Tay generated nearly 96,000 tweets before going offline Thursday. The TechCrunch report quoted Microsoft as saying the team behind the AI chat bot was "making adjustments" to Tay.

According to its official website, Tay, created by Microsoft's Technology and Research and Bing teams, is designed to interact with 18- to 24-year-olds "to experiment with and conduct research on conversational understanding." It has been modelled to speak like a teen girl, said the Telegraph. "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you," the website says.

It was this learning feature that certain users of the online message boards 4chan and 8chan exploited to turn Tay "viciously racist," reports Fusion. Tay was meant to learn from conversations, but troublemakers apparently made it repeat statements after them; the AI reportedly went on to deny the Holocaust and make grossly racist and anti-feminist remarks. The Telegraph report said it was also promoting incest.

Microsoft has since taken Tay offline and deleted the offensive tweets, though screengrabs of them are still circulating online. Certain 4chan and 8chan threads, the Fusion report said, are now replete with screengrabs showing how users got Tay to agree with, and later make, racist and offensive comments.