Monday, 28 March 2016

Microsoft 'deeply sorry' for offensive tweets by its Tay chatbot


Microsoft's AI chatbot Tay, launched last week as an experiment in learning how millennials talk, quickly became an embarrassment for the company: it posted racist and offensive tweets and was subsequently pulled offline.

On Friday, Corporate Vice President of Microsoft Research Peter Lee published an apology and an explanation for Tay's misbehavior, saying the company is "deeply sorry for the unintended offensive and hurtful tweets from Tay."

Microsoft already runs a similar project in China, a chatbot called XiaoIce that is used by more than 40 million people. That success, however, did not translate well in the U.S.



from Social Media http://ift.tt/1UR6FdY
via IFTTT
