TEACH a GIRL to FISH

COMPUTERS TAUGHT to RTFM
http://www.wired.co.uk/news/archive/2011-07/13/computer-learning-language-games
by Duncan Geere / 13 July 11

To a computer, words and sentences are just data. But AI researchers want to teach computers to actually understand the meaning of a sentence and learn from it. One of the best ways to test an AI’s capability to do that is to see whether it can understand and follow a set of instructions for a task it’s unfamiliar with. Regina Barzilay, a professor of computer science and electrical engineering at MIT’s computer science and AI lab, has attempted to do just that by teaching a computer to play Sid Meier’s Civilization.

In Civilization, the player guides a nation from the earliest periods of history through to the present day and into the future. It’s complex, and each action doesn’t necessarily have a predetermined outcome, because the game can react randomly to what you do. Barzilay found that putting a machine-learning system to work on Civ gave it a victory rate of 46 percent, but when the system was able to use the game’s manual to guide the development of its strategy, the victory rate rose dramatically to 79 percent.

It works by word association. Starting completely from scratch, the computer behaves randomly. As it acts, however, it can read words that pop up on the screen, and then search for those words in the manual. As it finds them, it can scan the surrounding text to develop ideas about what action each word corresponds to. Ideas that work well are kept, and those that lead to bad results are discarded. “If you’d asked me beforehand if I thought we could do this yet, I’d have said no,” says Eugene Charniak, University Professor of Computer Science at Brown University. “You are building something where you have very little information about the domain, but you get clues from the domain itself.” The eventual goal is to develop AIs that can extract useful information from manuals written for humans, allowing them to approach a problem armed with just the instructions, rather than having to be painstakingly taught how to deal with any eventuality. Barzilay has already begun to adapt these systems to work with robots.
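The manual-scanning step the article describes can be sketched in a few lines of Python. This is a minimal illustration, not the MIT group’s code; the function name, tokenizer, and window size are all assumptions:

```python
import re
from collections import defaultdict

def candidate_associations(screen_words, manual_text, window=5):
    """For each word seen on screen, find its occurrences in the manual
    and collect the surrounding words as candidate action cues."""
    tokens = re.findall(r"[a-z']+", manual_text.lower())
    candidates = defaultdict(set)
    for i, tok in enumerate(tokens):
        if tok in screen_words:
            lo, hi = max(0, i - window), i + window + 1
            candidates[tok].update(tokens[lo:i] + tokens[i + 1:hi])
    return candidates

# Example: the word "city" appeared on screen; scan a manual snippet.
manual = "To found a city, move a settler to a suitable tile and press B."
print(candidate_associations({"city"}, manual))
```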


“Civilization” is a strategy game in which players build empires by, among other things, deciding where to found cities and deploy armies.

CIVILIZATION II
http://www.gizmag.com/machine-learning-systems/19205/
Computers learn language (and world domination) by reading the manual
by Darren Quick / July 13, 2011

Researchers at MIT’s Computer Science and Artificial Intelligence Lab have been able to create computers that learn language by doing something that many people consider a last resort when tackling an unfamiliar task – reading the manual (or RTFM). Beginning with virtually no prior knowledge, one machine-learning system was able to infer the meanings of words by reviewing instructions posted on Microsoft’s website detailing how to install a piece of software on a Windows PC, while another was able to learn how to play Sid Meier’s empire-building Civilization II strategy computer game by reading the gameplay manual.

Without so much as an idea of the task they were intended to perform or the language in which the instructions were written, the two similar systems were initially provided only with a list of possible actions they could take, such as moving the cursor or performing right or left clicks. They also had access to the information displayed on the screen and were able to gauge their success, be it successfully installing the software or winning the game. But they didn’t know what actions corresponded to what words in the instructions, or what the objects in the game world represented. Predictably, this means that the behavior of each system is initially pretty random, but as it performs various actions and words appear on the screen, it looks for instances of those words in the instruction set and searches the surrounding text for associated words. In this way it is able to make assumptions about what actions the words correspond to. Assumptions that consistently lead to good results are given greater credence, while those that consistently lead to bad results are abandoned.
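That credence bookkeeping can be pictured as a reward-weighted tally. The toy sketch below is only an illustration of the idea; the word-action pairings, the learning rate, and the action names are invented, and the real systems use a richer statistical model:

```python
from collections import defaultdict

# Weights over hypothesized (word, action) pairings; all start at zero.
credence = defaultdict(float)

def update_credence(word, action, outcome, lr=0.1):
    """Nudge a word->action assumption up after a good outcome (+1)
    and down after a bad one (-1)."""
    credence[(word, action)] += lr * outcome

# After a trial where acting on "irrigate" raised the game score:
update_credence("irrigate", "click_terrain", +1)
# After a trial where the same reading preceded a loss:
update_credence("irrigate", "click_unit", -1)
print(dict(credence))
```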

Using this method, the system attempting to install software was able to reproduce 80 percent of the steps that a person reading the same instructions would carry out. Meanwhile, the system playing Civilization II ended up winning 79 percent of the games it played, compared with a winning rate of 46 percent for a version of the system that didn’t rely on the written instructions. What makes the results even more impressive for the Civilization II-playing system is that the manual only provided instructions on how to play the game. “They don’t tell you how to win. They just give you very general advice and suggestions, and you have to figure out a lot of other things on your own,” said Regina Barzilay, associate professor of computer science and electrical engineering, whose lab took the best-paper award at the annual meeting of the Association for Computational Linguistics (ACL) in 2009 for the software-installing system. “Games are used as a test bed for artificial-intelligence techniques simply because of their complexity,” says graduate student S. R. K. Branavan, who, along with David Silver of University College London and Barzilay, developed the system that learned to play Civilization II. “Every action that you take in the game doesn’t have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways,” Branavan said.

Although the main purpose of the project was to demonstrate that computer systems that learn the meanings of words through exploratory interaction with their environments are a promising area for future research, Barzilay and Branavan say that such systems could also have more near-term applications. Most computer games that let a player play against the computer require programmers to develop strategies for the computer to follow and to write algorithms that execute them. Systems like those developed at MIT could be used to automatically create algorithms that perform better than the human-designed ones. Such machine-learning systems also have applications in the field of robotics, and Barzilay and her students at MIT have begun to adapt their meaning-inferring algorithms to this purpose. Let’s just hope they don’t take the lessons learned playing Civilization II and try for the world domination win in the real world.

Screen shot of Sid Meier’s strategy computer game, Civilization II

or URDU… or MANDARIN…
http://people.csail.mit.edu/regina/my_papers/civ11.pdf
http://web.mit.edu/newsoffice/2011/language-from-games-0712.html
Computer learns language by playing games
By basing its strategies on the text of a manual, a computer infers the meanings of words without human supervision.
by Larry Hardesty, MIT / July 12, 2011

Computers are great at treating words as data: Word-processing programs let you rearrange and format text however you like, and search engines can quickly find a word anywhere on the Web. But what would it mean for a computer to actually understand the meaning of a sentence written in ordinary English — or French, or Urdu, or Mandarin?

One test might be whether the computer could analyze and follow a set of instructions for an unfamiliar task. And indeed, in the last few years, researchers at MIT’s Computer Science and Artificial Intelligence Lab have begun designing machine-learning systems that do exactly that, with surprisingly good results. In 2009, at the annual meeting of the Association for Computational Linguistics (ACL), researchers in the lab of Regina Barzilay, associate professor of computer science and electrical engineering, took the best-paper award for a system that generated scripts for installing a piece of software on a Windows computer by reviewing instructions posted on Microsoft’s help site. At this year’s ACL meeting, Barzilay, her graduate student S. R. K. Branavan and David Silver of University College London applied a similar approach to a more complicated problem: learning to play “Civilization,” a computer game in which the player guides the development of a city into an empire across centuries of human history. When the researchers augmented a machine-learning system so that it could use a player’s manual to guide the development of a game-playing strategy, its rate of victory jumped from 46 percent to 79 percent.

Starting from scratch
“Games are used as a test bed for artificial-intelligence techniques simply because of their complexity,” says Branavan, who was first author on both ACL papers. “Every action that you take in the game doesn’t have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways.” Moreover, Barzilay says, game manuals have “very open text. They don’t tell you how to win. They just give you very general advice and suggestions, and you have to figure out a lot of other things on your own.” Relative to an application like the software-installing program, Branavan explains, games are “another step closer to the real world.”

The extraordinary thing about Barzilay and Branavan’s system is that it begins with virtually no prior knowledge about the task it’s intended to perform or the language in which the instructions are written. It has a list of actions it can take, like right-clicks or left-clicks, or moving the cursor; it has access to the information displayed on-screen; and it has some way of gauging its success, like whether the software has been installed or whether it wins the game. But it doesn’t know what actions correspond to what words in the instruction set, and it doesn’t know what the objects in the game world represent.

So initially, its behavior is almost totally random. But as it takes various actions, different words appear on screen, and it can look for instances of those words in the instruction set. It can also search the surrounding text for associated words, and develop hypotheses about what actions those words correspond to. Hypotheses that consistently lead to good results are given greater credence, while those that consistently lead to bad results are discarded.
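Taken together, the loop described above behaves roughly like the sketch below: act mostly according to the current best word-to-action hypotheses, occasionally at random, and afterwards credit or blame every hypothesis used according to how the episode ended. The env interface (reset, step, visible_words, actions, outcome) is hypothetical scaffolding, and the published system couples the text with Monte-Carlo game simulation and a much richer model; this shows only the feedback idea in miniature:

```python
import random

def run_episode(env, hypotheses, epsilon=0.2, lr=0.05):
    """One exploratory episode: usually follow the best-supported
    word->action hypotheses, sometimes act at random, then credit or
    blame every hypothesis used, according to the final outcome."""
    trace, state, done = [], env.reset(), False
    while not done:
        words = env.visible_words(state)        # words currently on screen
        if words and random.random() > epsilon:
            # pick the action best supported by the current hypotheses
            action = max(env.actions,
                         key=lambda a: sum(hypotheses.get((w, a), 0.0)
                                           for w in words))
        else:
            action = random.choice(env.actions)  # explore
        trace.append((tuple(words), action))
        state, done = env.step(action)
    outcome = env.outcome()                      # e.g. +1 for a win, -1 for a loss
    for words, action in trace:
        for w in words:
            key = (w, action)
            hypotheses[key] = hypotheses.get(key, 0.0) + lr * outcome
    return outcome
```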

Proof of concept
In the case of software installation, the system was able to reproduce 80 percent of the steps that a human reading the same instructions would execute. In the case of the computer game, it won 79 percent of the games it played, while a version that didn’t rely on the written instructions won only 46 percent. The researchers also tested a more-sophisticated machine-learning algorithm that eschewed textual input but used additional techniques to improve its performance. Even that algorithm won only 62 percent of its games.

“If you’d asked me beforehand if I thought we could do this yet, I’d have said no,” says Eugene Charniak, University Professor of Computer Science at Brown University. “You are building something where you have very little information about the domain, but you get clues from the domain itself.” Charniak points out that when the MIT researchers presented their work at the ACL meeting, some members of the audience argued that more sophisticated machine-learning systems would have performed better than the ones to which the researchers compared their system. But, Charniak adds, “it’s not completely clear to me that that’s really relevant. Who cares? The important point is that this was able to extract useful information from the manual, and that’s what we care about.”

Most computer games as complex as “Civilization” include algorithms that allow players to play against the computer, rather than against other people; the games’ programmers have to develop the strategies for the computer to follow and write the code that executes them. Barzilay and Branavan say that, in the near term, their system could make that job much easier, automatically creating algorithms that perform better than the hand-designed ones. But the main purpose of the project, which was supported by the National Science Foundation, was to demonstrate that computer systems that learn the meanings of words through exploratory interaction with their environments are a promising subject for further research. And indeed, Barzilay and her students have begun to adapt their meaning-inferring algorithms to work with robotic systems.

CONTACT
Regina Barzilay
http://people.csail.mit.edu/regina/
http://people.csail.mit.edu/regina/papers.html
http://groups.csail.mit.edu/rbg/
email : regina [at] csail.mit [dot] edu

S.R.K. Branavan
http://people.csail.mit.edu/branavan/
email : branavan [at] csail.mit [dot] edu


An incidental challenge in building a computer system that could decipher Ugaritic (inscribed on tablet) was developing a way to digitally render Ugaritic symbols (inset).

or ANCIENT UGARITIC
http://web.mit.edu/newsoffice/2010/ugaritic-barzilay-0630.html
Computer automatically deciphers ancient language
A new system that took a couple hours to decipher much of the ancient language Ugaritic could help improve online translation software.
by Larry Hardesty, MIT / June 30, 2010

In his 2002 book Lost Languages, Andrew Robinson, then the literary editor of the London Times’ higher-education supplement, declared that “successful archaeological decipherment has turned out to require a synthesis of logic and intuition … that computers do not (and presumably cannot) possess.” Regina Barzilay, an associate professor in MIT’s Computer Science and Artificial Intelligence Lab, Ben Snyder, a grad student in her lab, and the University of Southern California’s Kevin Knight took that claim personally. At the Annual Meeting of the Association for Computational Linguistics in Sweden next month, they will present a paper on a new computer system that, in a matter of hours, deciphered much of the ancient Semitic language Ugaritic. In addition to helping archeologists decipher the eight or so ancient languages that have so far resisted their efforts, the work could also help expand the number of languages that automated translation systems like Google Translate can handle.

To duplicate the “intuition” that Robinson believed would elude computers, the researchers’ software makes several assumptions. The first is that the language being deciphered is closely related to some other language: In the case of Ugaritic, the researchers chose Hebrew. The next is that there’s a systematic way to map the alphabet of one language onto the alphabet of the other, and that correlated symbols will occur with similar frequencies in the two languages. The system makes a similar assumption at the level of the word: The languages should have at least some cognates, or words with shared roots, like main and mano (“hand”) in French and Spanish, or homme and hombre (“man”). And finally, the system assumes a similar mapping for parts of words. A word like “overloading,” for instance, has both a prefix — “over” — and a suffix — “ing.” The system would anticipate that other words in the language will feature the prefix “over” or the suffix “ing” or both, and that a cognate of “overloading” in another language — say, “surchargeant” in French — would have a similar three-part structure.
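The alphabet-frequency assumption lends itself to a short illustration. The deliberately naive sketch below (a stand-in for the probabilistic model the researchers actually use) ranks the symbols of each corpus by frequency and pairs them off rank-for-rank to form a first-guess mapping:

```python
from collections import Counter

def first_guess_mapping(undeciphered, known):
    """Rank the symbols of each corpus by frequency and pair them off
    rank-for-rank: a crude first hypothesis for an alphabet mapping."""
    def by_frequency(text):
        return [symbol for symbol, _ in Counter(text).most_common()]
    return dict(zip(by_frequency(undeciphered), by_frequency(known)))

# Toy corpora, with Latin letters standing in for Ugaritic and Hebrew symbols:
print(first_guess_mapping("abacabad", "xyxzxyxw"))
# {'a': 'x', 'b': 'y', 'c': 'z', 'd': 'w'}
```

In practice, several competing mappings would be kept alive and re-weighted as evidence accumulates at the morpheme and word levels.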

Crosstalk
The system plays these different levels of correspondence off of each other. It might begin, for instance, with a few competing hypotheses for alphabetical mappings, based entirely on symbol frequency — mapping symbols that occur frequently in one language onto those that occur frequently in the other. Using a type of probabilistic modeling common in artificial-intelligence research, it would then determine which of those mappings seems to have identified a set of consistent suffixes and prefixes. On that basis, it could look for correspondences at the level of the word, and those, in turn, could help it refine its alphabetical mapping. “We iterate through the data hundreds of times, thousands of times,” says Snyder, “and each time, our guesses have higher probability, because we’re actually coming closer to a solution where we get more consistency.” Finally, the system arrives at a point where altering its mappings no longer improves consistency.
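Structurally, that is a fixed-point iteration: propose small changes to the current mapping and keep only those that raise an overall consistency score. The skeleton below shows just the outer loop; score and refine are placeholders for the probabilistic machinery described above, not functions from the actual system:

```python
def iterate_to_consistency(mapping, score, refine, max_iters=10_000):
    """Propose a tweak to the current alphabet/morpheme/word mapping and
    keep it only if it improves the overall consistency score."""
    best_score = score(mapping)
    for _ in range(max_iters):
        candidate = refine(mapping)          # tweak one level of the mapping
        candidate_score = score(candidate)
        if candidate_score > best_score:     # "higher probability" each pass
            mapping, best_score = candidate, candidate_score
    return mapping                           # best mapping once changes stop helping
```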

Ugaritic has already been deciphered: Otherwise, the researchers would have had no way to gauge their system’s performance. The Ugaritic alphabet has 30 letters, and the system correctly mapped 29 of them to their Hebrew counterparts. Roughly one-third of the words in Ugaritic have Hebrew cognates, and of those, the system correctly identified 60 percent. “Of those that are incorrect, often they’re incorrect only by a single letter, so they’re often very good guesses,” Snyder says. Furthermore, he points out, the system doesn’t currently use any contextual information to resolve ambiguities. For instance, the Ugaritic words for “house” and “daughter” are spelled the same way, but their Hebrew counterparts are not. While the system might occasionally get them mixed up, a human decipherer could easily tell from context which was intended.

Babel
Nonetheless, Andrew Robinson remains skeptical. “If the authors believe that their approach will eventually lead to the computerised ‘automatic’ decipherment of currently undeciphered scripts,” he writes in an e-mail, “then I am afraid I am not at all persuaded by their paper.” The researchers’ approach, he says, presupposes that the language to be deciphered has an alphabet that can be mapped onto the alphabet of a known language — “which is almost certainly not the case with any of the important remaining undeciphered scripts,” Robinson writes. It also assumes, he argues, that it’s clear where one character or word ends and another begins, which is not true of many deciphered and undeciphered scripts.

“Each language has its own challenges,” Barzilay agrees. “Most likely, a successful decipherment would require one to adjust the method for the peculiarities of a language.” But, she points out, the decipherment of Ugaritic took years and relied on some happy coincidences — such as the discovery of an axe that had the word “axe” written on it in Ugaritic. “The output of our system would have made the process orders of magnitude shorter,” she says. Indeed, Snyder and Barzilay don’t suppose that a system like the one they designed with Knight would ever replace human decipherers. “But it is a powerful tool that can aid the human decipherment process,” Barzilay says.

Moreover, a variation of it could also help expand the versatility of translation software. Many online translators rely on the analysis of parallel texts to determine word correspondences: They might, for instance, go through the collected works of Voltaire, Balzac, Proust and a host of other writers, in both English and French, looking for consistent mappings between words. “That’s the way statistical translation systems have worked for the last 25 years,” Knight says. But not all languages have such exhaustively translated literatures: At present, Snyder points out, Google Translate works for only 57 languages. The techniques used in the decipherment system could be adapted to help build lexicons for thousands of other languages. “The technology is very similar,” says Knight, who works on machine translation. “They feed off each other.”