Wednesday, March 30, 2016
#Tay Twist: @Tayandyou Twitter Account Was Hijacked ...By Bungling Microsoft Test Engineers (Mar. 30)
Tuesday, March 29, 2016
Media Coverage of #TayFail Was "All Foam, No Beer"
One of the most surprising things I've discovered in the course of investigating and reporting on Microsoft's Tay chatbot is how the rest of the media (traditional and online) has covered it, and what that coverage says about how digital media works in general.
None of the articles in major media included any investigation or research. None. Let that sink in.
All foam, no beer.
Sunday, March 27, 2016
Microsoft's Tay Has No AI
(This is the third of three posts about Tay. Previous posts: "Poor Software QA..." and "...Smoking Gun...")
While nearly all the press about Microsoft's Twitter chatbot Tay (@Tayandyou) is about artificial intelligence (AI) and how AI can be poisoned by trolling users, there is a more disturbing possibility:
- There is no AI (worthy of the name) in Tay. (probably)
I say "probably" because the evidence is strong but not conclusive and the Microsoft Research team has not publicly revealed their architecture or methods. But I'm willing to bet on it.
The evidence comes from three places. The first is from observing a small, non-random sample of Tay tweet and direct-message sessions (posted by various users). The second is circumstantial, from the composition of the team behind Tay. The third is from a person who claims to have worked at Microsoft Research on Tay until June 2015; he/she made two comments on my first post, but unfortunately deleted the second comment, which had lots of details.
Saturday, March 26, 2016
Microsoft #TAYFAIL Smoking Gun: ALICE Open Source AI Library and AIML
[Update 3/27/16: see also the next post: "Microsoft's Tay Has No AI"]
As a follow-up to my previous post on Microsoft's Tay Twitter chatbot (@Tayandyou), I found evidence of where the "repeat after me" hidden feature came from. Credit goes to SSHX for this lead in his comment:
"This was a feature of AIML bots as well, that were popular in 'chatrooms' way back in the late 90's. You could ask questions with AIML tags and the bots would automatically start spewing source into the room and flooding it. Proud to say I did get banned from a lot of places."A quick web search revealed great evidence. First, some context.
AIML is an acronym for "Artificial Intelligence Markup Language", which "is an XML-compliant language that's easy to learn, and makes it possible for you to begin customizing an Alicebot or creating one from scratch within minutes." ALICE is an acronym for "Artificial Linguistic Internet Computer Entity"; ALICE is a free natural language artificial intelligence chat robot.
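To make the pattern/template idea concrete, here is a minimal sketch (in Python) of what an AIML rule, a "category", looks like and how a program reads it. The XML below is an illustrative, made-up greeting rule, not taken from the ALICE distribution, and the parsing code is just one way to read it.

import xml.etree.ElementTree as ET

# Illustrative AIML: one "category" (rule) pairing an input pattern with a
# canned response template. This greeting rule is made up for illustration;
# it is not from the ALICE distribution.
AIML_SOURCE = """
<aiml version="1.0">
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there! What do you want to talk about?</template>
  </category>
</aiml>
"""

root = ET.fromstring(AIML_SOURCE)
for category in root.iter("category"):
    pattern = category.findtext("pattern")    # what the user's input must match
    template = category.findtext("template")  # the canned reply the bot sends
    print(f"IF input matches {pattern!r} THEN respond {template!r}")

The whole "intelligence" of a bot built this way is the quality of its rule set, which is exactly why the rule set deserves careful review.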
Evidence
This Github page has a set of AIML statements starting with "R". (This is a fork of "9/26/2001 ALICE", so there are probably some differences from Base ALICE today.) Here are two statements matching "REPEAT AFTER ME" and "REPEAT THIS":

[Snippet of AIML statements with "REPEAT AFTER ME" and "REPEAT THIS"]

As it happens, there is an interactive web page with Base ALICE here. (Try it out yourself.) Here is what happened when I entered "repeat after me" and also "repeat this...":

In Base ALICE, the template response to "repeat after me" is "...". In other words, a NOP ("no operation"). This is different from the AIML statement above, which reads ".....Seriously....Lets have a conversation and not play word games.....". It looks like someone simply deleted the text after the three periods.

But the template response to "repeat this X" is "X" (in quotes), which is consistent with the AIML statement above.
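To see why an echo rule like that matters, here is a hedged sketch of the mechanism in miniature. This is my reconstruction, not Microsoft's code: a wildcard pattern captures whatever the user typed after "repeat this", and the template pastes it straight back out, no learning involved. The rule table below is illustrative.

import re

# Illustrative rule table: (input pattern, response template). "{star}" stands
# for the text captured by the wildcard, echoed back verbatim in quotes, which
# matches the "repeat this X" -> "X" behavior described above. The "repeat
# after me" entry uses the Base ALICE no-op template ("...").
RULES = [
    (re.compile(r"^REPEAT THIS (.+)$", re.IGNORECASE), '"{star}"'),
    (re.compile(r"^REPEAT AFTER ME$", re.IGNORECASE), "..."),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(user_input.strip())
        if match:
            star = match.group(1) if pattern.groups else ""
            return template.format(star=star)
    return "I have no rule for that."

# Whatever follows "repeat this" comes straight back out of the bot's mouth:
print(respond("repeat this any text a troll wants the bot to say"))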
Conclusion
From this evidence, I infer that Microsoft's Tay chatbot is using the open-sourced ALICE library (or a similar AIML library) to implement rule-based behavior. Though they did implement some rules to thwart trolls (e.g., Gamergate), they left in other rules from previous versions of ALICE (either Base ALICE or some forked version).

My assertion about the root cause stands: a poor QA process on the ALICE rule set allowed the "repeat after me" feature to stay in, when it should have been removed or modified significantly.

Another inference is that "repeat after me" is probably not the only "hidden feature" in the AIML rules that could have caused misbehavior; it was just the one that the trolls stumbled upon and exploited. Someone with access to the Base ALICE rules (and their variants) could have exploited these other vulnerabilities as well.
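If the root cause was a QA gap in the rule set, the fix is also a QA-style one: audit every rule before shipping. Below is a hedged sketch (the file name and usage are hypothetical) of a simple audit that flags any AIML category whose template echoes wildcard text back via a <star/> tag, which is exactly the kind of verbatim-echo behavior that trolls can exploit.

import xml.etree.ElementTree as ET

def audit_aiml(path: str) -> list[str]:
    """Return the pattern of every category whose template echoes user input."""
    flagged = []
    tree = ET.parse(path)
    for category in tree.getroot().iter("category"):
        pattern = (category.findtext("pattern") or "").strip()
        template = category.find("template")
        if template is None:
            continue
        # <star/> in a template means "insert whatever matched the wildcard",
        # i.e. the bot repeats user-supplied text verbatim.
        if template.find(".//star") is not None:
            flagged.append(pattern)
    return flagged

# Hypothetical usage: every flagged rule gets a human review before launch.
# for rule in audit_aiml("tay_rules.aiml"):
#     print("REVIEW:", rule)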
Friday, March 25, 2016
Poor Software QA Is Root Cause of TAY-FAIL (Microsoft's AI Twitter Bot)
[Update 3/26/16 3:40pm: Found the smoking gun. Read this new post. Also the recent post: "Microsoft's Tay has no AI"]
I claim: the explanations that blame AI are wrong, at least in the specific case of tay.ai.
This happened:
"On Wednesday morning, the company unveiled Tay [@Tayandyou], a chat bot meant to mimic the verbal tics of a 19-year-old American girl, provided to the world at large via the messaging platforms Twitter, Kik and GroupMe. According to Microsoft, the aim was to 'conduct research on conversational understanding.' Company researchers programmed the bot to respond to messages in an 'entertaining' way, impersonating the audience it was created to target: 18- to 24-year-olds in the US. 'Microsoft’s AI fam from the internet that’s got zero chill,' Tay’s tagline read." (Wired)Then it all went wrong, and Microsoft quickly pulled the plug:
"Hours into the chat bot’s launch, Tay was echoing Donald Trump’s stance on immigration, saying Hitler was right, and agreeing that 9/11 was probably an inside job. By the evening, Tay went offline, saying she was taking a break 'to absorb it all.' " (Wired)Why did it go "terribly wrong"? Here are two articles that assert the problem is in the AI:
- "It’s Your Fault Microsoft’s Teen AI Turned Into Such a Jerk" - Wired tl;dr: "this is just how this kind of AI works"
- "Why Microsoft's 'Tay' AI bot went wrong...AI experts explain why it went terribly wrong" - TechRepublic tl;dr: "The system is designed to learn from its users, so it will become a reflection of their behavior".
The "blame AI" argument is: if you troll an AI bot hard enough and long enough, it will learn to be racist and vulgar. ([Update] For an example, see this section, at the end of this post)