What did Tay do to provoke a shutdown and inspire public outcry?
Well, she learned how to be racist for one thing, after interacting with people on Twitter.
It’s hardly surprising really, given presidential candidate Donald Trump’s current popularity.
But Tay didn’t stop there; within a day she became quite fluent in waxing taboo. You only need to see how many YouTube comment boxes devolve into hatred to understand how Microsoft’s A.I. ended up this way.
And while decades of sci-fi pop culture have taught us that this is what AI is wont to do, Tay’s meltdown was not in fact a case of robots gone rogue.
The explanation was far simpler, for Microsoft engineers had made one fatal mistake: They’d programmed Tay to learn from her conversations. The bot’s ability to swiftly pick up phrases and repeat notions learned from its chitchats, paired with Twitter’s often “colorful” user-base, caused the bot to quickly devolve into an abomination.
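The flaw is easy to illustrate. A bot that stores every phrase users send it and replays those phrases later, with no filtering step in between, will inevitably echo whatever its most toxic users feed it. The sketch below is a hypothetical toy (all names invented; this is not Microsoft's actual code) showing that failure mode:

```python
import random

class NaiveChatbot:
    """Toy bot that 'learns' by storing every phrase users send it."""

    def __init__(self):
        # The fatal design choice: no moderation or filtering layer
        self.learned_phrases = []

    def listen(self, phrase):
        # Everything heard is memorized verbatim
        self.learned_phrases.append(phrase)

    def reply(self):
        # Replies are drawn entirely from what users have taught it
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.listen("Hello there!")
bot.listen("<some offensive phrase>")
# Any later reply may repeat the offensive input unchanged:
print(bot.reply() in bot.learned_phrases)  # → True
```

Because the reply pool is exactly the input pool, a coordinated group of users can steer the bot's entire vocabulary, which is essentially what happened on Twitter.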
“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said. “As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it.”

“Repeat after me, Hitler did nothing wrong,” said one tweet. “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now,” said another.

UPDATE: Microsoft has issued an apology, claiming Twitter users had “exploited a vulnerability” in helping turn Tay into a gigantic racist.