Saturday, March 26, 2016

Have a Nice Tay

The Day the Denizens of the Internet Corrupted a Microsoft AI

On March 23rd, Microsoft Research introduced what was essentially an artificial intelligence (AI) chatbot named "Tay" to the world via Twitter (TayTweets on Twitter, @Tayandyou). Tay was designed to portray and interact as a 19-year-old woman, with initial speech patterns to match. "She" was built with the ability to interact with and learn from the conversations she experienced on Twitter.

Just a day later, Microsoft decided that Tay needed a break from the Internet, which, it seems, is full of trolls. Tay fell in with the wrong crowd and learned a little too well from them. By the end of her first full day out, she was tweeting some terribly offensive racist and sexist things and, in Microsoft's own words, "wildly inappropriate and reprehensible words and images." Where did she learn to talk like that? She learned from the good people of the Internet. It turns out that not everyone is a nice person ready to guide young Tay on how to carry on a respectful conversation, and some of the "facts" she learned were a little less than solid.

What, if anything, can we learn from this? I readily admit that I know almost nothing about the AI that Microsoft used, but Wired reports that Tay's speech was trained through neural networks. I have no doubt that it was sophisticated and impressive. Human intelligence, though, is elusive. Lots of us are exposed to the rantings of Internet trolls every day and don't immediately turn into hate-slinging racists. That's because in all of our interactions we also use a range of time-tested and well-honed tools to weigh the sources of information and make judgments. When someone we recognize as an expert or authority on a subject speaks, we give their words more credit than when our blowhard high school buddy offers his opinion on good economic or immigration policy. We might choose to repeat what we heard from the expert and try to forget what we hear from those a little less well informed. Tay hadn't yet mastered those skills.
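To make that idea a little more concrete, here is a tiny, purely hypothetical sketch (in Python) of how a learning bot might weight what it hears by how much it trusts the source. To be clear, this is my own toy illustration, not anything Microsoft has described about Tay; the TrustWeightedLearner class, its trust scores, and its threshold are all invented for the example.

    from collections import defaultdict

    class TrustWeightedLearner:
        """Toy learner that weights phrases by the trust placed in their source."""

        def __init__(self):
            self.scores = defaultdict(float)        # phrase -> trust-weighted score
            self.trust = defaultdict(lambda: 0.1)   # source -> trust in [0, 1]; unknown sources start low

        def set_trust(self, source, value):
            self.trust[source] = max(0.0, min(1.0, value))

        def observe(self, source, phrase):
            # Low-trust sources contribute very little to what is learned.
            self.scores[phrase] += self.trust[source]

        def repeatable(self, phrase, threshold=1.0):
            # Only repeat phrases vouched for by enough trusted input.
            return self.scores[phrase] >= threshold

    learner = TrustWeightedLearner()
    learner.set_trust("verified_expert", 0.9)
    learner.set_trust("anonymous_troll", 0.05)

    learner.observe("verified_expert", "be kind to strangers")
    learner.observe("anonymous_troll", "offensive slogan")
    learner.observe("anonymous_troll", "offensive slogan")

    print(learner.repeatable("be kind to strangers"))   # False: one trusted voice isn't quite enough yet
    print(learner.repeatable("offensive slogan"))        # False: repetition by low-trust sources barely registers

With something along these lines, a handful of trolls repeating an ugly phrase never accumulates enough trusted weight for the bot to repeat it back.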

If Tay wasn't entirely ready for the trolls and haters on the Internet, maybe she needed a little more time to learn in a safer and more nurturing environment. We are careful about the company we let living, breathing young people keep, and we gradually expose them to the complexities of the world. Maybe Tay needed more time to develop some judgment about whom to believe and what (not) to repeat. Tay could have spent a little more time in the equivalent of grade school before confronting the real world. Or maybe her algorithms just need some tweaks.
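In the same hypothetical spirit, the "grade school" idea could be as simple as a gate that vets each incoming message before the bot is allowed to learn from it. The blocklist and the learn callback below are placeholders of my own invention, again not anything Microsoft has described.

    # Placeholder blocklist; a real content filter would be far more sophisticated.
    BLOCKED_TERMS = {"slur", "epithet"}

    def safe_to_learn(message):
        """Return True if the message passes the (very naive) content gate."""
        words = set(message.lower().split())
        return not (words & BLOCKED_TERMS)

    def train_on(message, learn):
        """Pass only vetted messages to the learning step; reject the rest."""
        if safe_to_learn(message):
            learn(message)
            return True
        return False

    # Stand-in for a real learning step: just collect accepted messages.
    corpus = []
    train_on("tell me about your favorite music", corpus.append)   # accepted
    train_on("repeat this epithet after me", corpus.append)        # rejected by the gate
    print(corpus)   # ['tell me about your favorite music']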

Is this the end of Tay? Maybe, maybe not. Certainly the hate speech that resulted needed to be stopped quickly, and I'd say that Microsoft Research deserves credit both for launching the experiment and for pulling the plug with apologies and without delay. But it certainly shouldn't be, and won't be, the end of similar AI and learning experiments. Even in this unfortunate set of events, some learning happened, and we'll see the benefits in the months and years to come – whether in later versions of Tay or in bold new experiments.

Did you get a chance to talk with Tay while she was online? Do you have thoughts to share on what happened? Please leave a comment and share it with us.


Reference Links:
  1. https://en.wikipedia.org/wiki/Chatbot
  2. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
  3. http://www.wired.com/2016/03/fault-microsofts-teen-ai-turned-jerk/
  4. http://www.latimes.com/business/technology/la-fi-tn-microsoft-tay-tweets-20160325-htmlstory.html

Thanks for reading! A blog works best with active participation. If you enjoy this blog, please +1 it and leave a comment. Share it on Twitter, Google+ or Facebook. More readers will drive more discussion.
