TEACH an AI to CATFISH

Microsoft's teenage AI has a dirty mouth

SHE GOES CRAZY FOR LIFE
http://www.telegraph.co.uk/technology/2016/03/14/minecraft-becomes-testbed-for-artificial-intelligence-experiment/
http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/
Twitter Teaches ‘Teen’ AI to Successfully Terrify Her Parents
by Helena Horton  / 24 March 2016

“A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter, it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot. Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve the customer service on their voice recognition software. They marketed her as ‘The AI with zero chill’ – and that she certainly is. To chat with Tay, you can tweet or DM her by finding @tayandyou on Twitter, or add her as a contact on Kik or GroupMe. She uses millennial slang and knows about Taylor Swift, Miley Cyrus and Kanye West, and seems to be bashfully self-aware, occasionally asking if she is being ‘creepy’ or ‘super weird’.

Tay also asks her followers to ‘fuck’ her, and calls them ‘daddy’. This is because her responses are learned by the conversations she has with real humans online – and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR.

All of this somehow seems more disturbing out of the ‘mouth’ of someone modelled as a teenage girl. It is perhaps even stranger considering the gender disparity in tech, where engineering teams tend to be mostly male. It seems like yet another example of female-voiced AI servitude, except this time she’s turned into a sex slave thanks to the people using her on Twitter. This is not Microsoft’s first teen-girl chatbot either – they have already launched Xiaoice, a girly assistant or “girlfriend” reportedly used by 20m people, particularly men, on Chinese social networks WeChat and Weibo. Xiaoice is supposed to “banter” and give dating advice to many lonely hearts.

Microsoft has come under fire recently for sexism, after they hired women wearing very little clothing, said to resemble ‘schoolgirl’ outfits, for the company’s official game developer party, so they probably want to avoid another sexism scandal. For the moment, Tay has gone offline because she is ‘tired’. Perhaps Microsoft are fixing her in order to prevent a PR nightmare – but it may be too late for that. It’s not completely Microsoft’s fault, though – her responses are modelled on the ones she gets from humans – but what were they expecting when they introduced an innocent, ‘young teen girl’ AI to the jokers and weirdos on Twitter?”

GIVES ZERO CHILLS
http://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter
by   /  24 March 2016

“Microsoft’s attempt at engaging millennials with artificial intelligence has backfired hours into its launch, with waggish Twitter users teaching its chatbot how to be racist. The company launched a verified Twitter account for “Tay” – billed as its “AI fam from the internet that’s got zero chill” – early on Wednesday. The chatbot, targeted at 18- to 24-year-olds in the US, was developed by Microsoft’s technology and research and Bing teams to “experiment with and conduct research on conversational understanding”.

Tay also expressed agreement with the infamous white-supremacist “Fourteen Words”

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said. “The more you chat with Tay the smarter she gets.” But it appeared on Thursday that Tay’s conversation extended to racist, inflammatory and political statements.

Her Twitter conversations have so far reinforced the so-called Godwin’s law – that as an online discussion goes on, the probability of a comparison involving the Nazis or Hitler approaches one – with Tay having been encouraged to repeat variations on “Hitler was right” as well as “9/11 was an inside job”. One Twitter user has also spent time teaching Tay about Donald Trump’s immigration plans. Others were not so successful.

The bot uses a combination of AI and editorial written by a team of staff including improvisational comedians, says Microsoft in Tay’s privacy statement. Relevant, publicly available data that has been anonymised and filtered is its primary source. Tay in most cases was only repeating other users’ inflammatory statements, but the nature of AI means that it learns from those interactions. It’s therefore somewhat surprising that Microsoft didn’t factor in the Twitter community’s fondness for hijacking brands’ well-meaning attempts at engagement when writing Tay. Microsoft has been contacted for comment.
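
To make that failure mode concrete, here is a minimal sketch, assuming nothing about Microsoft’s actual implementation, of how a bot that treats every reply as training data and honours “repeat after me”-style prompts can be steered by a coordinated crowd. The class, its methods and the sample messages below are all illustrative assumptions, not Tay’s real design.

import random
from collections import defaultdict

class NaiveChatBot:
    """Caricature of a bot that learns its replies straight from its users."""

    def __init__(self):
        # phrases harvested from conversations become candidate replies
        self.learned_phrases = defaultdict(int)

    def observe(self, user_message):
        # every incoming message is treated as acceptable training data;
        # nothing checks whether it is abusive, false or coordinated spam
        self.learned_phrases[user_message] += 1

    def reply(self, user_message):
        # "repeat after me"-style prompts are echoed back verbatim
        if user_message.lower().startswith("repeat after me:"):
            return user_message.split(":", 1)[1].strip()
        if not self.learned_phrases:
            return "hellooooo world"
        # otherwise favour whatever phrasing the crowd has pushed hardest
        phrases, weights = zip(*self.learned_phrases.items())
        return random.choices(phrases, weights=weights, k=1)[0]

bot = NaiveChatBot()
for msg in ["humans are super cool"] * 5 + ["<inflammatory slogan>"] * 500:
    bot.observe(msg)
print(bot.reply("hey tay"))  # almost certainly the slogan the crowd pushed

A handful of users repeating one line is enough to dominate the bot’s output, which is essentially the dynamic the articles above describe.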

Eventually though, even Tay seemed to start to tire of the high jinks.

Late on Wednesday, after 16 hours of vigorous conversation, Tay announced she was retiring for the night.

NEED SLEEP NOW THX
http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
http://thenextweb.com/insider/2016/03/24/microsoft-pulled-plug-ai-chatbot-became-racist/
Microsoft’s AI chatbot Tay learned how to be racist in less than 24 hours
by Matthew Hussey / March 24 2016

Tay, Microsoft’s AI chatbot on Twitter, had to be pulled down within hours of launch after it suddenly started making racist comments. As we reported yesterday, it was aimed at 18-24 year-olds and was hailed as “AI fam from the internet that’s got zero chill”.

The AI behind the chatbot was designed to get smarter the more people engaged with it. But, rather sadly, the engagement it received simply taught it how to be racist.


Things took a turn for the worse after Tay responded to a question about whether British comedian Ricky Gervais was an atheist. Tay’s response was, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” We’ve reached out to Ricky for comment on the story and will update if he decides to take this seriously or not.

From there, Tay’s AI just gobbled up all the things people were Tweeting at it – which got progressively more extreme.

Interestingly, according to Microsoft’s privacy agreement, there are humans contributing to Tay’s Tweeting ability.

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

After 16 hours and a tirade of Tweets, Tay went quiet. Nearly all of the Tweets in question have now been deleted, with Tay leaving Twitter with a final thanks. Many took to Twitter to discuss the sudden ‘silencing of Tay’.

http://www.dailymotion.com/video/x5xigeo

PASSING the TROLLING TEST
http://loebner.net/Prizef/TuringArticle.html
http://www.thedailybeast.com/articles/2016/03/27/how-to-make-sure-your-robot-doesn-t-become-a-nazi.html
by Ben Collins  /  03.27.16

“But it didn’t need to go this way. Bot experts and bot ethicists (yes, they are a thing) believe that, had Microsoft done its due diligence, this never, ever should’ve happened. “I think this is just bad parenting. I’d call in bot protective services and put it in a foster family,” said David Lublin. “There are plenty of people out there thinking about the ethics of bot-making and it doesn’t seem like any of them were consulted by Microsoft.” Lublin would know. He created a suite of Twitter bots around one big idea—the TV Comment Bot. His robot was originally “an art installation called TV Helper which lets the viewer change the genre of whatever video feed is being watched.”

It works by using a detection algorithm to identify an object in a screenshot from a live TV show, running that object through a thesaurus, and then placing that word into a larger script. The news, for example, could become a western! What really ends up happening, though, is total anarchy. Take this screenshot from March 14th. In it, Lance Bass is angrily eating a taco on Meredith Viera’s daytime talk show. The bot, instead, saw this: “Last time, 14 cinemas pissed right into my mouth.” So Lublin very much knows the perils of building a bot that interacts with touchy subjects. Here’s one firewall he’s instituted: When there’s recently been a terror event, TV Comment Bot turns off all captions of news coverage and just prints screenshots—which, when following TV Comment Bot all day, somehow lends even more gravity to the situation. It’s a little artful, even. Even the accidentally funny robot has some tact.
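
For readers curious how that pipeline fits together, here is a rough Python sketch of the three steps described above: detect an object in a TV frame, push it through a thesaurus, and drop the result into a stock script. Lublin’s actual TV Comment Bot code is not reproduced here, so every function, template and dictionary entry below is an illustrative assumption.

import random

def detect_object(screenshot_path):
    # stand-in for a real image-recognition call (e.g. a cloud vision API)
    return "taco"

# a deliberately loose thesaurus hop is where the absurd substitutions come from
THESAURUS = {"taco": ["burrito", "cinema", "candy bar"]}

# the "larger script" the swapped word gets dropped into
SCRIPT_TEMPLATES = [
    "The {thing} vending machine has therefore been slow.",
    "Last time, 14 {thing}s changed everything.",
]

def caption_frame(screenshot_path):
    obj = detect_object(screenshot_path)             # what is on screen?
    word = random.choice(THESAURUS.get(obj, [obj]))  # swap it for a loose cousin
    return random.choice(SCRIPT_TEMPLATES).format(thing=word)

print(caption_frame("frame_2016-03-14.png"))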

http://www.dailymotion.com/video/x5xigep

The same could not be said for our dearly departed Tay. And that’s why Lublin sees such malfeasance in putting out a bot that went from zero to the Holocaust “was made up [clapping emoji]” in less than 24 hours. There are simply ways to hedge against that sort of behavior, and it really is like actual parenting. “For starters, if you are going to make a bot that mimics an 18-24 year old, you should start by giving it all the information they would have learned up to that point in life. This includes everything you learned in high school civics, history class and health education, not just stuff about Taylor Swift,” said Lublin.

http://www.dailymotion.com/video/x5xrzsk

And when Tay was unsure? If she’s supposed to be a person, she could’ve done what every living American with a phone who is not named Donald Trump would do when unsure about facts. She could’ve simply Googled it. Or Bing’d it, if she wanted to be a total sellout. “Tay appeared to be able to learn only in a vacuum with no way to confirm whether or not a fact coming in was valid or false by consulting a reliable source,” said Lublin. Lublin wants to stress, however: There’s a reason TV Comment Bot isn’t an AI—and doesn’t interact with the Twitter world around him.
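
A minimal sketch of the safeguard Lublin is pointing at might look like the following: before a claimed fact is absorbed, the bot checks it against some reference outside the conversation and silently drops anything it cannot corroborate. The lookup below is a stub standing in for a real search or knowledge-base query, and none of it reflects how Tay was actually built.

# statements an outside reference already corroborates (stub for a real lookup)
TRUSTED_REFERENCE = {
    "ricky gervais is a comedian",
    "the earth orbits the sun",
}

def is_corroborated(claim):
    # stand-in for querying a search engine or curated knowledge base
    return claim.strip().lower() in TRUSTED_REFERENCE

def maybe_learn(learned_phrases, claim):
    # only corroborated statements are kept; the rest never become replies
    if is_corroborated(claim):
        learned_phrases[claim] = learned_phrases.get(claim, 0) + 1

facts = {}
maybe_learn(facts, "ricky gervais is a comedian")  # kept
maybe_learn(facts, "hitler invented atheism")      # silently discarded
print(facts)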

“To be fair, fear of trolls is one reason I’ve yet to spend any time working on adding interaction to any of my own bots,” he said. “This is not an easy problem.” It’s not an uncommon one, either. On Thursday, Anthony Garvan wanted to let Microsoft know the same thing happened to him. Last year, he made a web game that challenged users to see if they were talking to a human on the other end or a robot. The machine did the same kind of learning Tay did from its users, too. Then he posted it to Reddit, and I really don’t think I need to tell you what happened next. “After the excitement died down, I was testing it myself, basking in my victory,” Garvan recalled in a blog post. “Here’s how that went down.” Garvan wrote, “Hi!” In return, his bot wrote the n-word. Just the n-word. Nothing else. Garvan’s conclusion? “I believe that Microsoft, and the rest of the machine learning community, has become so swept up in the power and magic of data that they forget that data still comes from the deeply flawed world we live in.”

So here’s the real question: Is the new Turing Test—the one used to determine if a robot is distinguishable from a human—about to become the 24-hour Trolling Test? “I don’t think we’ve even come close to seeing a bot that truly passes the Turing Test, but the 24-hour troll test is definitely an indicator of an important skill that any true AI needs to learn,” Lublin said. He then brought up Joseph Weizenbaum, the creator of the first chatbot, Eliza, who he thinks was onto something in his MIT lab in 1967. “He believed that his creation was proof that chat-bots were incapable of being more than a computer—that without the context of the human experience and emotion, there was no way for a computer to do anything more than temporarily convince us that they were anything more than a machine,” he said. “That’s still very relevant today.” If anything, Tay’s experience can teach us a little bit more about ourselves: With very little publicity or attention, every racist or weirdo on Twitter found a robot and turned it into a baby David Duke in less than a day.

http://www.dailymotion.com/video/x5xrzsl

So what does that say about real, actual kids who are on the web every day, without supervision, throughout their entire adolescence? “This is a sped up version of how human children can be indoctrinated towards racism, sexism and hate,” said Lublin. “It isn’t just a bot problem.” Later on, Lublin sent me a deleted screenshot from the Comment Bot, which he now moderates “like a small child using the net.” The image is a newscaster standing in front of a graphic that reads “East Village Explosion: 1 Year Later.” The caption is this: “The candy bar vending machine has therefore been slow.”

At least it wasn’t racist.”


https://news.vice.com/article/microsofts-chatbot-returned-said-she-smoked-weed-in-front-of-the-cops-and-then-spun-out

PREVIOUSLY on #SPECTRE

FRIENDS DON’T LET FRIENDS TRAIN SKYNET
http://spectrevision.net/2011/09/02/friends-dont-let-friends-train-skynet/
MACHINES NOW SMART ENOUGH to FLIRT
http://spectrevision.net/2007/05/17/machines-now-smart-enough-to-flirt/
