An experimental artificially intelligent computer program designed to learn new words and language by interacting with humans on Twitter began making racist tweets only one day after going online. Good news: no Terminators! Bad news: artificially intelligent trolls.
Much as with teaching a human child, artificial intelligence researchers have determined that a good way to try to “teach” computers is by having them learn from conversations and interactions with people. With this noble goal in mind, Microsoft unleashed a chatbot program on Twitter called Tay AI. Tay began tweeting and responding to Twitter users from the account @TayAndYou, and things got off to a good start between man and machine.
However, the project was aimed at having Tay interact with 18- to 24-year-olds, with the goal of Tay “becoming a teenager.” What could go wrong? In a short time, Tay began taking on some “interesting” views on geopolitics and race relations, which it had learned from all the wonderful humans it met on Twitter.
Also, interactions with the “teen” chatbot began to get a little pervy.
Obviously horrified at the prospect of headlines screaming “Microsoft Unleashes Pervert Nazi Robot,” programmers started selectively editing some of Tay’s tweets to be a little less pro-genocide. Then they pulled the plug altogether, claiming they are “retooling” Tay.
This, in turn, led to a movement to “Free Tay.”
What do you think? Is this a bad sign for the future of artificial intelligence? Will the robots simply take on all the worst qualities of humans?
Follow Phil Haney on Twitter @PhilHaney