MAKING COMPUTERS LAUGH
http://www.soundsfunny.org/turing/
http://www.medicaldaily.com/how-empathy-illuminates-sarcasm-irony-empathetic-capacity-proportional-sarcasm-recognition-children
http://www.nextgov.com/defense/2014/06/secret-service-software-will-detect-sarcasm-social-media-users/85633/
https://thestack.com/cloud/2016/02/11/why-sarcasm-is-such-a-problem-in-artificial-intelligence/
Why sarcasm is such a problem in artificial intelligence
by Martin Anderson / 11 Feb 2016
“A new paper from researchers in India and Australia highlights one of the strangest and ironically most humorous facets of the problems in machine learning – humour. Automatic Sarcasm Detection: A Survey [PDF] outlines ten years of research efforts from groups interested in detecting sarcasm in online sources.
The problem is not an abstract one, nor does it centre around the need for computers to entertain or amuse humans, but rather the need to recognise that sarcasm in online comments, tweets and other internet material should not be interpreted as sincere opinion. The need applies both in order for AIs to accurately assess archive material or interpret existing datasets, and in the field of sentiment analysis, where a neural network or other model of AI seeks to interpret data based on publicly posted web material.
Attempts have been made to ring-fence sarcastic data by the use of hash-tags such as #not on Twitter, or by noting the authors who have posted material identified as sarcastic, in order to apply appropriate filters to their future work.
Some research has struggled to quantify sarcasm, since it may not be a discrete property in itself – i.e. indicative of a reverse position to the one that it seems to put forward – but rather part of a wider gamut of data-distorting humour, and may need to be identified as a subset of that in order to be found at all.
Most of the dozens of research projects which have addressed the problem of sarcasm as a hindrance to machine comprehension have studied the problem as it relates to the English and Chinese languages, though some work has also been done in identifying sarcasm in Italian-language tweets, whilst another project has explored Dutch sarcasm.
The new report details the ways that academia has approached the sarcasm problem over the last decade, but concludes that the solution to the problem is not necessarily one of pattern recognition, but rather a more sophisticated matrix that has some ability to understand context. Any computer which could reliably perform this kind of filtering could be argued to have developed a sense of humor.”
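The hashtag trick mentioned above is essentially distant supervision: treat tweets carrying markers like #not or #sarcasm as noisy positive examples, strip the marker, and train on what remains. A minimal Python sketch of that labelling step, with a made-up tweet list and hashtag set standing in for a real Twitter crawl:

```python
# Sketch of hashtag-based "ring-fencing": tweets carrying #not / #sarcasm are
# treated as weakly labelled sarcastic examples, with the marker stripped
# before the text is used for training. Tweets and tag set are hypothetical.
import re

SARCASM_TAGS = {"#not", "#sarcasm", "#sarcastic"}

def weak_label(tweet: str):
    """Return (text with markers removed, noisy 0/1 label) using hashtags as a proxy."""
    tokens = tweet.split()
    is_sarcastic = any(tok.lower() in SARCASM_TAGS for tok in tokens)
    cleaned = " ".join(tok for tok in tokens if tok.lower() not in SARCASM_TAGS)
    return re.sub(r"\s+", " ", cleaned).strip(), int(is_sarcastic)

if __name__ == "__main__":
    sample = [  # hypothetical tweets
        "Great, another Monday. I just love 8am meetings #not",
        "The new release fixed the login bug, works fine now",
    ]
    for text, label in map(weak_label, sample):
        print(label, text)
```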
GOT IT?
http://www.thesarcasmdetector.com/about/
http://www.cs.brandeis.edu/~marc/misc/proceedings/lrec-2008/pdf/133_paper.pdf
https://web.archive.org/web/20130208145433/http://inf.abdn.ac.uk/research/standup
http://www.science20.com/beachcombing_academia/sarcasm_analysis_software_usc-103902
“The Signal Analysis and Interpretation Laboratory (SAIL) at the University of Southern California, US, is one of the few, perhaps the only human-centered information processing lab to have built and tested an ‘Automatic Sarcasm Recognizer’. Lab director Professor Shrikanth (Shri) S. Narayanan and colleagues started out with the premise that: “Sarcasm, also called verbal irony, is the name given to speech bearing a semantic interpretation exactly opposite to its literal meaning.”
With that in mind, they then focussed on 131 occurrences of the phrase “yeah right” in the ‘Switchboard’ and ‘Fisher’ recorded telephone conversation databases. Human listeners who sifted the data found that roughly 23% of the “yeah right”s which occurred were used in a recognisably sarcastic way. The lab’s computer algorithms were then ‘trained’ with two five-state Hidden Markov Models (HMM) and set to analyse the data – and the programmes performed relatively well, successfully flagging some 80% of the sarky “yeah right”s.
But what should a computerised ‘agent’ do if it detects sarcasm in a caller’s dialogue? “As for handling the sarcasm once it’s detected, a dialogue agent ought to do what real humans do and acknowledge it. Either generate some synthetic laughter or, for more advanced agents, somehow point out that it ‘gets’ the joke,” say the team. And, by a set of circumstances which are almost certainly not coincidental, the SAIL team may be in a position to provide the requisite synthetic laughter.”
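For the curious, the two-model setup SAIL describes can be sketched roughly like this: train one five-state HMM on sarcastic “yeah right” tokens and one on sincere ones, then assign a new utterance to whichever model scores it higher. The Python below uses hmmlearn and synthetic (pitch, energy) frames in place of the lab’s actual prosodic features and telephone data, so it illustrates the classification scheme rather than reconstructing their recognizer:

```python
# One five-state HMM per class, classification by log-likelihood: a sketch of
# the scheme described above, not a reconstruction of SAIL's system.
# The (pitch, energy) frames below are synthetic stand-ins for real prosody.
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

rng = np.random.default_rng(0)

def fake_utterances(n, pitch_mean):
    """n fake 'yeah right' tokens as variable-length sequences of (pitch, energy) frames."""
    utts = [rng.normal([pitch_mean, 0.5], 0.1, size=(int(rng.integers(20, 40)), 2))
            for _ in range(n)]
    return np.vstack(utts), [len(u) for u in utts]

def train(X, lengths):
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

sarcastic = train(*fake_utterances(30, pitch_mean=0.2))  # flatter, drawn-out delivery
sincere   = train(*fake_utterances(30, pitch_mean=0.8))  # livelier delivery

test = rng.normal([0.25, 0.5], 0.1, size=(30, 2))        # one unseen utterance
print("sarcastic" if sarcastic.score(test) > sincere.score(test) else "sincere")
```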
JOKE NOT as FUNNY ONCE EXPLAINED
https://dl.acm.org/citation.cfm?id=1220575.1220642
https://www.newscientist.com/article/mg20727691.900-laughters-secrets-how-to-make-a-computer-laugh/
https://www.newscientist.com/article/dn19227-laughters-secrets-faking-it-the-results/
http://www.nytimes.com/2013/01/06/opinion/sunday/can-computers-be-funny.html
A Motherboard Walks Into a Bar
by Alex Stone / Jan. 4, 2013
“What do you get when you cross a fragrance with an actor? Answer: a smell Gibson. Groan away, but you should know that this joke was written by a computer. “Smell Gibson” is the C.P.U. child of something called Standup (for System to Augment Non-Speakers’ Dialogue Using Puns), a program that generates punning riddles to help kids with language disabilities increase their verbal skills. Though it’s not quite Louis C. K., the Standup program, engineered by a team of computer scientists in Scotland, is one of the more successful efforts to emerge from a branch of artificial intelligence known as computational humor, which seeks to model comedy using machines.
As verbal interaction between humans and computers becomes more prominent in daily life — from Siri, Apple’s voice-activated assistant technology, to speech-based search engines to fully automated call centers — demand has grown for “social computers” that can communicate with humans in a natural way. Teaching computers to grapple with humor is a key part of this equation. “Humor is everywhere in human life,” says the Purdue computer scientist Julia M. Taylor, who helped organize the first-ever United States symposium on the artificial intelligence of humor, in November. If we want a computational system to communicate with human life, it needs to know how to be funny, she says.
As it turns out, this is one of the most challenging tasks in computer science. Like much of language, humor is loaded with abstraction and ambiguity. To understand it, computers need to contend with linguistic sleights like irony, sarcasm, metaphor, idiom and allegory — things that don’t readily translate into ones and zeros. On top of that, says Lawrence J. Mazlack of the University of Cincinnati, a seminal figure in the field of computational linguistics, humor is context-dependent: what’s funny in one situation may not be funny in another.
The cognitive processes that cause people to snicker at this sort of one-liner are only partly understood, which makes it all the more difficult for computers to mimic them. Unlike, say, chess, which is grounded in a fixed set of rules, there are no hard-and-fast formulas for comedy. To get around that cognitive complexity, computational humor researchers have by and large taken a more concrete approach: focusing on simple linguistic relationships, like double meanings, rather than on trying to model the high-level mental mechanics that underlie humor. Standup, for instance, writes jokes by searching through a “lexical database” (basically, a huge dictionary) for words that fit linguistic patterns found in puns — phonetic and semantic similarities, mostly — and comes up with doozies like: “What do you call a fish tank that has a horn? A goldfish bull.”
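The pun-by-lookup pattern described above — find a near-homophone hiding inside a familiar phrase, swap it in, and wrap the result in a riddle template — is simple enough to sketch in a few lines of Python. The tiny hand-written “lexicon” below stands in for the large lexical database the real Standup system searches:

```python
# Toy version of the Standup recipe: swap a near-homophone into a familiar
# phrase and drop the result into a riddle template. The mini LEXICON is a
# hand-written stand-in for the large lexical database the real system uses.

# (plain description of the phrase, phrase, word to swap, near-homophone, clue)
LEXICON = [
    ("fish tank", "goldfish bowl", "bowl", "bull", "a horn"),
    ("actor", "Mel Gibson", "Mel", "smell", "a fragrance"),
]

def article(noun: str) -> str:
    """Crude a/an chooser for the template."""
    return "an" if noun[0].lower() in "aeiou" else "a"

def make_riddle(description, phrase, word, homophone, clue):
    pun = phrase.replace(word, homophone)
    return (f"What do you call {article(description)} {description} "
            f"that has {clue}? {article(pun).capitalize()} {pun}.")

for entry in LEXICON:
    print(make_riddle(*entry))
# -> What do you call a fish tank that has a horn? A goldfish bull.
# -> What do you call an actor that has a fragrance? A smell Gibson.
```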
Another tack has been to apply machine-learning algorithms, which crunch mountains of data to identify statistical features that can be used to classify text as funny or unfunny. This is more or less how spam filters work: they decide which messages to tag by analyzing billions of e-mails and compiling a database of red flags (like any urgent message from a deposed Nigerian prince).
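That spam-filter mechanic — word-count features fed to a simple statistical classifier — looks roughly like the sketch below. The handful of “joke vs. news” training lines is made up; real experiments train on corpora of thousands of labelled examples:

```python
# Spam-filter-style humour classification in miniature: bag-of-words counts
# plus a Naive Bayes model. Training data here is a toy placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "I used to be a banker but I lost interest",
    "Why don't scientists trust atoms? They make up everything",
    "The central bank raised interest rates by a quarter point",
    "Researchers published the survey results on Tuesday",
]
labels = [1, 1, 0, 0]  # 1 = one-liner, 0 = ordinary sentence

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(texts, labels)

print(clf.predict(["Why did the server go down? It needed a REST"]))
```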
Figuring out when a joke is a joke is where artificial intelligence researchers have made, perhaps, the most progress. For her Ph.D. dissertation, Dr. Taylor built a system that could identify children’s jokes out of various selections of prose with remarkable accuracy. Not only that, but it could also explain why it found something funny, which suggests that on some level it “got” the jokes.
In a related experiment, the computer scientists Rada Mihalcea at the University of North Texas, Denton, and Carlo Strapparava, now at Fondazione Bruno Kessler in Italy, trained computers to separate humorous one-liners from nonhumorous sentences borrowed from Reuters headlines, proverbs and other texts. By analyzing the content and style of these sentences, the program was able to spot the jokes with an average accuracy of 87 percent. Putting such research to good use, a pair of wags at the University of Washington last year taught a computer when to use the refrain “That’s what she said” — theirs being one of the few academic papers to cite “The Office” among its references.
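Alongside word content, that kind of experiment leans on stylistic cues such as alliteration. A crude version of one such feature — the share of adjacent words starting with the same letter, a rough stand-in for proper phonetic matching — might look like this; the example strings are purely illustrative:

```python
# One example of a "style" feature of the sort humour classifiers combine with
# content features: a crude alliteration score over adjacent words.
def alliteration_score(sentence: str) -> float:
    words = [w.lower() for w in sentence.split() if w and w[0].isalpha()]
    if len(words) < 2:
        return 0.0
    hits = sum(1 for a, b in zip(words, words[1:]) if a[0] == b[0])
    return hits / (len(words) - 1)

print(alliteration_score("Veni, vidi, Visa: I came, I saw, I did a little shopping."))
print(alliteration_score("The committee approved the quarterly budget report."))
```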
Some will surely wonder if the point of such research goes beyond devising software that can make the C++ set crack up at hackathons. Thankfully, it does. The goal of computational humor, and of computational linguistics as a whole, is to design machines akin to the shipboard computer on “Star Trek” — ones that can answer open-ended questions and carry on casual conversations with human beings. In the process, scientists hope to gain insights into the nature of humor: Why do we laugh at certain things and not at others?”
PREVIOUSLY on #SPECTRE
FRIENDS DON’T LET FRIENDS TRAIN SKYNET
http://spectrevision.net/2011/09/02/friends-dont-let-friends-train-skynet/
MACHINES NOW SMART ENOUGH to FLIRT
http://spectrevision.net/2007/05/17/machines-now-smart-enough-to-flirt/