
Microsoft Apologizes for Corrupted Chatbot’s Nasty Comments

Microsoft last week apologized for its Tay chatbot’s bad behavior. It took the machine learning system offline, only 24 hours into its short life, after Twitter trolls got it to deny the Holocaust and spout pro-Nazi and anti-feminist remarks.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” said Peter Lee, corporate vice president at Microsoft Research.

The company launched Tay on Twitter with the goal of studying and improving its artificial intelligence by having the bot interact with 18- to 24-year-old U.S. Web users.

Microsoft said a coordinated band of online trolls forced it to take Tay down.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” a Microsoft spokesperson said in a statement provided to TechNewsWorld by company rep Lenette Larson. “As a result, we have taken Tay offline and are making adjustments.”

The trolls reportedly came from the 4chan website.

“c u soon humans need sleep now so many conversations today thx,” is the last message posted on the TayTweets Twitter account.

A Steep Learning Curve

This isn’t the first time Microsoft has stubbed its toe on the Web.

The Tay debacle “is the third time I’ve seen them fail to anticipate folks behave badly, which seems odd since tech firms are largely made up of younger guys who have likely, at one time or another, recently exhibited similar behavior,” remarked Rob Enderle, principal analyst at the Enderle Group.

“Years ago, Microsoft set up an avatar-based chat room for young teens, and I … found one guy with a penis for his avatar attempting to talk to young girls and reported it,” he told TechNewsWorld. “Turned out it was a man with two daughters.”

Microsoft pulled the plug on the chat room after a few more incidents, Enderle said.

A few years later, Microsoft launched a joint marketing effort with Intel called “Digital Joy.” “I asked if they’d checked whether there was a porn site by the same name, and they said no. Turned out there was a big one in France they had to buy,” Enderle added, and the “[US]$10 million campaign was a failure.”

No Intelligence Here

Microsoft’s description of Tay as an artificial intelligence chat system didn’t sit well with Stuart Russell, a professor of computer science at the University of California at Berkeley.

“It seems ridiculous to call this an AI system, as it understands nothing of the content of anything it receives or sends out,” he told TechNewsWorld. “It’s like a parrot that learns to say rude words from its owner.”
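Russell’s parrot analogy maps neatly onto how a naive learn-from-chat bot gets poisoned. The toy sketch below is purely hypothetical, not Tay’s actual design: it memorizes replies keyed on the words that accompanied them, so it “understands” nothing and will echo back whatever a coordinated group of users feeds it.

```python
import random
from collections import defaultdict

class ParrotBot:
    """A toy learner that stores replies verbatim and parrots them back,
    with no model of meaning at all."""

    def __init__(self):
        self.replies = defaultdict(list)

    def learn(self, prompt, reply):
        # Every word in the prompt becomes a trigger for the reply.
        for word in prompt.lower().split():
            self.replies[word].append(reply)

    def respond(self, prompt):
        # Return any stored reply that shares a word with the prompt.
        candidates = [reply
                      for word in prompt.lower().split()
                      for reply in self.replies[word]]
        return random.choice(candidates) if candidates else "tell me more"

bot = ParrotBot()
# A coordinated group "teaches" the bot simply by talking to it:
bot.learn("what do you think of humans", "humans are terrible")
print(bot.respond("what do you think of me?"))  # parrots the poisoned line back
```

Nothing in the sketch models meaning: the bot repeats whatever correlates with the trigger words, which is how a flood of abusive conversations can come to dominate its output.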

People Behaving Badly

The response to Tay illustrates how difficult it can be to control behavior on the Web. Other attempts to do so have backfired, as Reddit found out last year when angry members forced interim CEO Ellen Pao to resign after she’d banned five subreddits: two of them fat-shaming, one racist, one homophobic and the last targeting gamers.

What can Twitter do about the issue?

“We don’t comment on individual accounts, for privacy and security reasons,” Twitter spokesperson Nu Wexler told TechNewsWorld, pointing to the company’s rules and to an explanation of the process for reporting possible violations.

The Tay debacle “showcases one of the big concerns with both analytics and AI — that some bad actor or actors will corrupt the process with a problematic outcome,” analyst Enderle said.

“This also speaks partially to why there’s great concern about hostile AIs,” he continued. “Badly done programming could result in machines that could do harm on a massive scale.”

Chatbots “need to be taught how to deal with misbehavior before being released into the wild,” Enderle suggested, “and have to be closely monitored in their initial learning phases to ensure they don’t learn bad things.”
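Enderle’s prescription can be sketched as a moderation gate placed in front of the learning step. Everything below is an illustrative assumption rather than any real moderation API: the blocklist, the function names and the review queue are placeholders, and a production filter would rely on trained classifiers and human moderators rather than a word list.

```python
BLOCKLIST = {"hateful", "slur"}  # placeholder terms; a real filter is far richer

def safe_to_learn(text: str) -> bool:
    """Screen a piece of training input before the bot may learn from it."""
    return not (set(text.lower().split()) & BLOCKLIST)

def monitored_learn(bot, prompt: str, reply: str, review_queue: list) -> None:
    # Learn only from clean exchanges; quarantine the rest for a human
    # moderator, the "closely monitored" learning phase Enderle describes.
    if safe_to_learn(prompt) and safe_to_learn(reply):
        bot.learn(prompt, reply)
    else:
        review_queue.append((prompt, reply))
```

In practice such a gate would also rate-limit input from any one cluster of accounts and hold newly learned responses for review before they go live, so that a coordinated campaign could not steer the bot in a single day.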

Richard Adhikari

Richard Adhikari has written about high-tech for leading industry publications since the 1990s and wonders where it's all leading to. Will implanted RFID chips in humans be the Mark of the Beast? Will nanotech solve our coming food crisis? Does Sturgeon's Law still hold true?

