Microsoft's AI chatbot Tay, launched last week as an experiment in learning how millennials talk, quickly became an embarrassment for the company after it posted racist and offensive tweets and was subsequently pulled offline.
On Friday, Corporate Vice President of Microsoft Research Peter Lee published an apology and an explanation for Tay's misbehavior, saying the company is "deeply sorry for the unintended offensive and hurtful tweets from Tay."
Microsoft already runs a similar project in China — a chatbot called XiaoIce, used by more than 40 million people. But that success did not translate to the U.S.