None of the articles in major media included any investigation or research. None. Let that sink in.
All foam, no beer.
With only three exceptions, the minor media (including blogs) did no investigation or research either. Here are the three standouts:
- "Microsoft’s Tay is an Example of Bad Design (or Why Interaction Design Matters, and so does QA-ing)" -- e.g. inadequate blacklist
- "TayAndYou - toxic before human contact" -- e.g. inadequate filtering of the training corpus, including the tweet corpus
- "Why did Microsoft’s chatbot Tay fail, and what does it mean for Artificial Intelligence studies?" -- e.g. poor marketing communications and real-time management
There were roughly 500 articles on Tay and its aftermath. There were serious articles in The New York Times, The Guardian, The Washington Post, and others. At least 25 articles featured a "what went wrong" or "lessons learned" angle. A small number of these included quotes from AI experts, who offered their opinions but did not, themselves, do any investigation or research into Tay.
Looking at the human side, I'd guess that there were about 200 to 400 humans involved in writing, editing, contributing to, and publishing these articles. (Yes, probably fewer humans than articles, because of syndication and the like.) How could all of those humans, many of whom are considered professionals and journalists, decide to write and publish articles without credible evidence? I can think of five reasons.
First, they all seemed to believe the tweets were self-evident and thus there was nothing to investigate. Tay was AI because Microsoft said it was, and look at all these anti-social tweets! Case closed!
Second, this story easily fits into one of the frames and narratives about AI in the real world: it's potentially dangerous and could easily "go rogue".
Third, there is the Copy/Paste Ethic in our culture: the widespread belief that it is perfectly reasonable and morally proper to write an article or paper by copying and pasting content from other digital sources. Ask any high school teacher or college professor about this, and be prepared for either a rant or tears of frustration.
Fourth, the economic incentives that have been crushing media organizations for years have driven them to churn out content as fast and as cheaply as possible. Even many articles that looked original mostly quoted and referred to other articles.
Fifth and finally, some of these articles fit the definition of a "hot take": "a piece of opinion journalism hastily written in a scolding tone... [where] there’s a “just telling it like it is” attitude, even if, according to the best available data, it is not like that at all." In the case of Tay, the hot-take articles focused on ranting and stone-throwing, either at AI, at Internet trolls and troll culture, or both.
In the primary articles, the net result was all foam, no beer. The secondary and tertiary sources that mimicked and repeated the primary articles were no better than recycled beer foam. Let that image sink in.
|Recycled beer foam. From: The Monitors by Keith Laumer, p 1|