the ACCURACY of CROWDS

or CARING WHAT OTHERS THINK
http://www.wired.com/wiredscience/2011/05/wisdom-of-crowds-decline/
Sharing Information Corrupts Wisdom of Crowds
by Brandon Keim / May 16, 2011

When people can learn what others think, the wisdom of crowds may veer towards ignorance.

In a new study of crowd wisdom — the statistical phenomenon by which individual biases cancel each other out, distilling hundreds or thousands of individual guesses into uncannily accurate average answers — researchers told test participants about their peers’ guesses. As a result, their group insight went awry. “Although groups are initially ‘wise,’ knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines” collective wisdom, wrote researchers led by mathematician Jan Lorenz and sociologist Heiko Rauhut of Switzerland’s ETH Zurich, in Proceedings of the National Academy of Sciences on May 16. “Even mild social influence can undermine the wisdom of crowd effect.”

The effect — perhaps better described as the accuracy of crowds, since it best applies to questions involving quantifiable estimates — has been described for decades, beginning with Francis Galton’s 1907 account of fairgoers guessing an ox’s weight. It reached mainstream prominence with journalist James Surowiecki’s 2004 bestseller, The Wisdom of Crowds. As Surowiecki explained, certain conditions must be met for crowd wisdom to emerge. Members of the crowd ought to have a variety of opinions, and to arrive at those opinions independently. Take those away, and crowd intelligence fails, as evidenced in some market bubbles. Computer modeling of crowd behavior also hints at dynamics underlying crowd breakdowns, with the balance between information flow and diverse opinions becoming skewed.

Lorenz and Rauhut’s experiment fits between large-scale, real-world messiness and theoretical investigation. They recruited 144 students from ETH Zurich, sitting them in isolated cubicles and asking them to guess Switzerland’s population density, the length of its border with Italy, the number of new immigrants to Zurich and how many crimes were committed in 2006. After answering, test subjects were given a small monetary reward based on their answer’s accuracy, then asked again. This proceeded for four more rounds, and while some students didn’t learn what their peers guessed, others were told. As testing progressed, the average answers of independent test subjects became more accurate, in keeping with the wisdom-of-crowds phenomenon. Socially influenced test subjects, however, actually became less accurate.

The researchers attributed this to three effects. The first they called “social influence”: Opinions became less diverse. The second effect was “range reduction”: In mathematical terms, correct answers became clustered at the group’s edges. Exacerbating it all was the “confidence effect,” in which students became more certain about their guesses. “The truth becomes less central if social influence is allowed,” wrote Lorenz and Rauhut, who think this problem could be intensified in markets and politics — systems that rely on collective assessment. “Opinion polls and the mass media largely promote information feedback and therefore trigger convergence of how we judge the facts,” they wrote. The wisdom of crowds is valuable, but used improperly it “creates overconfidence in possibly false beliefs.”
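
A toy simulation (not the authors' protocol or data, and with an invented true value and invented mixing weight) makes the diversity-collapse and range-reduction mechanism concrete: each simulated subject starts with an independent skewed guess, and the "social" group repeatedly pulls its guesses toward the group mean. Diversity shrinks without any improvement in the collective error, and the truth can end up outside the narrowed range of estimates. It does not model the independent group's improvement over rounds, only the convergence mechanism.

    import numpy as np

    rng = np.random.default_rng(0)
    truth = 198.0            # assumed true answer for a hypothetical estimation task
    n, rounds = 144, 5       # group size and number of estimation periods, as in the study

    # independent log-normal guesses: skewed, as estimates of counts tend to be
    estimates = truth * rng.lognormal(mean=0.0, sigma=0.8, size=n)

    def revise(est, social_weight):
        # one re-estimation round: each subject mixes their own guess with the
        # group mean; social_weight = 0 reproduces the no-information control
        return (1.0 - social_weight) * est + social_weight * est.mean()

    independent, social = estimates.copy(), estimates.copy()
    for _ in range(rounds - 1):
        independent = revise(independent, 0.0)
        social = revise(social, 0.5)

    for label, est in (("independent", independent), ("social", social)):
        print(f"{label:12s} collective error {abs(est.mean() - truth):7.1f}   "
              f"diversity (std) {est.std():7.1f}   "
              f"truth inside range: {est.min() <= truth <= est.max()}")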


Study participants were asked how many murders occurred in Switzerland in 2006. At the end of each round of questioning, they were given small payments for coming close to the actual answer (signified by the gray bar). At left is the range of responses among participants who received no information about others.

ABSTRACT
http://www.pnas.org/content/early/2011/05/10/1008636108.abstract
Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. Already Galton [Galton F (1907) Nature 75:7] found evidence that the median estimate of a group can be more accurate than estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their response to factual questions after having received average or full information of the responses of other subjects. We compare subjects’ convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition, in which no information about others’ responses was provided. Although groups are initially “wise,” knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The “social influence effect” diminishes the diversity of the crowd without improvements of its collective error. The “range reduction effect” moves the position of the truth to peripheral regions of the range of estimates so that the crowd becomes less reliable in providing expertise for external observers. The “confidence effect” boosts individuals’ confidence after convergence of their estimates despite lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.

CONTACT
Jan Lorenz
http://janlo.de/blog/
http://www.staff.uni-oldenburg.de/jan.lorenz/
email : jan.lorenz [at] uni-oldenburg [dot] de / math [at] janlo [dot] de

Heiko Rauhut
http://www.soms.ethz.ch/people/rauhut
email : rauhut [at] gess.ethz [dot] ch


SO DON’T ASK THEM
http://www.wired.com/wiredscience/2009/03/ecodatamining/
Crawling the Web to Foretell Ecosystem Collapse
by Alexis Madrigal  /  March 19, 2009

The Interwebs could become an early warning system for when the web of life is about to fray. By trawling scientific listservs, Chinese fish market websites, and local news sources, ecologists think they can use human beings as sensors by mining their communications. “If we look at coral reefs, for example, the Internet may contain information that describes not only changes in the ecosystem, but also drivers of change, such as global seafood markets,” said Tim Daw, an ecologist at the UK’s University of East Anglia, in a press release about his team’s new paper in Frontiers in Ecology and the Environment.

The six billion people on Earth are changing the biosphere so quickly that traditional ecological methods can’t keep up. Humans, though, are acute observers of their environments and bodies, so scientists are combing through the text and numbers on the Internet in hopes of extracting otherwise unavailable or expensive information. It’s more crowd mining than crowd sourcing. Much of the pioneering work in this type of Internet surveillance has come in the public health field, tracking disease. Google Flu Trends, which uses a cloud of keywords to determine how sick a population is, produces estimates that closely track epidemiological data from the Centers for Disease Control. Less serious projects — like this map of a United Kingdom snowstorm based on Tweets about snow — have also had some success tracking the real world.

These research efforts seem to indicate that people are good sensors, but pulling the information from what they post in human-readable formats and transforming it into quantitative models of the world is tough. The Global Public Health Intelligence Network has developed an epidemic warning system that pulls in data from news wires, web sites, and public health mailing lists. The GPHIN, which is probably the most advanced and uses highly variegated information, only picks up on about 40 percent of the 200 to 250 outbreaks that the World Health Organization investigates each year.

Nonetheless, Daw and his co-authors from the Stockholm Resilience Centre at Stockholm University say traditional ecological monitoring has its problems, too. Humans can make huge changes to ecosystems faster than the standard methods of data collection can keep up. “The challenge is that existing monitoring systems are not at all in tune with the speed of social, economical and ecological changes,” the researchers write on their blog. By looking at human data, not just fisheries and ecological readings, they think they’ll be able to detect ecosystem tipping points before they happen. “Web crawlers can collect information on the drivers of ecosystem change, rather than the resultant ecological responses,” they write. “For example, if rapidly emerging markets for high value species are known to be socio-economic drivers which lead to overexploitation and collapse of a fishery, web crawlers can be designed to collect information on rapid changes in prices, landings or investments.”
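
The price-monitoring idea in that quote is simple enough to sketch. The snippet below is a hypothetical illustration, with invented prices, an invented window and an arbitrary 50 percent threshold, of how a crawler's weekly price series for a high-value species might be scanned for the "rapid changes" the authors describe:

    # Hypothetical illustration: scan a scraped weekly price series for rapid rises.
    # The prices, the window and the threshold are all invented for this sketch.
    prices = [4.1, 4.2, 4.2, 4.5, 5.1, 6.3, 8.0, 8.2]   # price per kg, one value per weekly crawl

    window, threshold = 4, 0.5    # alert if the price rose more than 50% over 4 weeks
    for week in range(window, len(prices)):
        change = prices[week] / prices[week - window] - 1
        if change > threshold:
            print(f"week {week}: price up {change:.0%} over {window} weeks "
                  f"(possible market-driven pressure on the fishery)")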

But right now, their plans remain theoretical, and while scraping data seems easy enough, turning it into knowledge is another story. John Brownstein, a Harvard bioinformaticist and co-founder of HealthMap, which does for disease what Daw wants to do for ecology, said that applying the framework to ecology could work. “There’s no reason it can’t be done,” Brownstein said. “The only difference is that this is more difficult. The media and other sources are sensitive and fine-tuned to things like human disease. The threshold for the reporting of a mysterious disease is different from the threshold for an ecological phenomenon.” In other words, while reporters (or Tweeters) will include individual-level death data in human stories, massive die-offs or flora changes could very well go unnoticed and probably unquantified.

And even with disease data, there are serious signal-to-noise challenges. In a paper that Brownstein co-authored last week, he showed that monitoring search terms for disease indicators could have tipped officials off to a deadly outbreak of listeriosis in Canada. But spotting emergent diseases instead of ones that have already caused major damage is a more challenging proposition. “It’s so tough to figure out why people search for specific information,” he said.

[youtube=https://www.youtube.com/watch?v=B8ofWFx525s]

EXCEPT : FILTER BUBBLES
http://www.rene-pickhardt.de/algorithmic-information-filter-from-elis-parisers-ted-talks/
Google is filtering and personalizing search results

Eli points out something some people may already have noticed: if two different people search for the same thing on Google, it is very probable that their search results will be quite different. Google does this without telling the user that it is actually filtering the results based on what the algorithm thinks the user might like. According to Eli Pariser, Google uses 57 signals to determine our interests. Of course this kind of personalization has its good sides. When I am about to buy a new notebook computer, I definitely want to see different websites depending on whether I live in Germany or in the US, if only because of taxes and shipping fees, which means I am most probably interested in local stores and not in overseas shops. Still, this personalization and filtering has huge potential for serious problems. We might think we get all the information we need, but in reality we are blinded by the filters Google is using. We have no way to determine what other information is filtered out and potentially available for a certain topic. On the other hand, given the amount of information out there, we need filters and computers to help us. But the systems should be more transparent.

Facebook also filters the news stream from your friends:
I have always thought that Facebook’s huge success is strongly correlated with the fact that there is hardly any spam on Facebook and that its information economy is very smart and user friendly. Users pay a lot of attention to status updates, which makes Facebook a great place for every company to do online and viral marketing. This of course contributes to Facebook’s reach. In fact, the information architecture on Facebook is so smart that your 20,000 followers might not receive your status updates, because Facebook’s EdgeRank algorithm decides they are not relevant to your fans or friends. EdgeRank might not have 57 signals, but it still takes into consideration the factors below (a toy scoring sketch follows the list):

who your fans are friends with
what other news they like
how heavily they have interacted with you in the past
the time passed since your last status update
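
Facebook has never published the actual EdgeRank formula; the commonly cited description is a sum over "edges" of affinity times edge weight times a time decay. The sketch below is a toy version of that idea built from the factors listed above, with all names, weights and numbers invented rather than taken from Facebook:

    def toy_edgerank(affinity, edge_weight, age_hours, half_life_hours=24.0):
        # affinity: how strongly the viewer has interacted with the author in the past
        # edge_weight: how much this type of story counts (a photo more than plain text, say)
        # time decay: older updates count exponentially less
        decay = 0.5 ** (age_hours / half_life_hours)
        return affinity * edge_weight * decay

    # hypothetical status updates competing for one fan's news feed
    updates = [
        ("band photo, fan interacts often",   toy_edgerank(affinity=0.9, edge_weight=2.0, age_hours=3)),
        ("text update, fan rarely interacts", toy_edgerank(affinity=0.2, edge_weight=1.0, age_hours=3)),
        ("old tour announcement",             toy_edgerank(affinity=0.9, edge_weight=2.0, age_hours=72)),
    ]
    for label, score in sorted(updates, key=lambda u: u[1], reverse=True):
        print(f"{score:5.2f}  {label}")

In this toy ranking the stale announcement and the low-affinity update both fall well behind the fresh, high-affinity photo, which is the sense in which 20,000 fans might never see a given status update.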

Great news, isn’t it? Just compare this with my statement in a recent blog post about creating newsletters as a musician in order to communicate with your fans and not rely solely on other services like Facebook or MySpace. You don’t believe the Facebook thing? There is a video about the EdgeRank algorithm used by Facebook to determine which status updates should reach us and which shouldn’t. Feel free to have a look, and thanks to the guys from Klurig Analytics for producing such a great video resource:

57 SIGNALS
http://www.thefilterbubble.com/guessing-googles-57-signals
http://www.rene-pickhardt.de/google-uses-57-signals-to-filter/
What are the 57 signals Google uses to filter search results?

Since my blog post on Eli Pariser’s TED talk about the filter bubble became quite popular, and a lot of people seem to be interested in which 57 signals Google might use to filter search results, I decided to extend the list from my article and name the signals I would use if I were Google. It might not be 57 signals, but I guess it is enough to get an idea:

  1. our search history
  2. our location
  3. the browser we use
  4. the browser's version
  5. the computer we use
  6. the language we use
  7. the time we need to type in a query
  8. the time we spend on the search result page
  9. the time between selecting different results for the same query
  10. our operating system
  11. our operating system's version
  12. the resolution of our computer screen
  13. average number of search requests per day
  14. average number of search requests per topic (to finish a search)
  15. distribution of the search services we use (web / images / videos / real time / news / mobile)
  16. average position of the search results we click on
  17. time of day
  18. current date
  19. topics of the ads we click on
  20. how frequently we click on advertising
  21. topics of the AdSense ads we click on while surfing other websites
  22. how frequently we click on AdSense ads on other websites
  23. frequency of searches for domains on Google
  24. use of google.com or the Google Toolbar
  25. our age
  26. our sex
  27. use of the "I'm Feeling Lucky" button
  28. whether we use the enter key or the mouse to send a search request
  29. whether we use keyboard shortcuts to navigate through search results
  30. whether we use advanced search commands (and how often)
  31. whether we use iGoogle (and which widgets / topics)
  32. where on the screen we click besides the search results (and how often)
  33. where we move the mouse and mark text in the search results
  34. the number of typos while searching
  35. how often we use related search queries
  36. how often we use autosuggestion
  37. how often we use spell correction
  38. distribution of short / general queries vs. specific / long-tail queries
  39. which other Google services we use (Gmail / YouTube / Maps / Picasa / ...)
  40. how often we search for ourselves

Uff, I have to say that after 57 minutes of brainstorming I am running out of ideas for the moment. This list of signals is a pure guess based on my knowledge of and education in data mining. It may be that not a single signal I name corresponds to the 57 signals Google is actually using. In the future I might discuss why each of these signals could be interesting. But remember: as long as you have high diversity in the distribution, you are fine with any list of signals.
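
Since none of these signals are confirmed, any code can only illustrate the mechanism rather than Google's implementation. Purely as a sketch of how a handful of such signals could be folded into a personalized re-ranking, here is a toy example with invented signal names, weights and results:

    # Toy personalization sketch: fold a few guessed signals into a re-ranking score.
    # The signal names, weights and results are invented for illustration only.
    user = {"location": "DE", "language": "de", "clicks_shopping_ads": 0.8}

    results = [
        {"title": "US notebook store",     "region": "US", "lang": "en", "shopping": 1.0},
        {"title": "German notebook store", "region": "DE", "lang": "de", "shopping": 1.0},
        {"title": "Notebook review blog",  "region": "US", "lang": "en", "shopping": 0.0},
    ]

    def personalized_score(r):
        score = 0.0
        score += 1.0 if r["region"] == user["location"] else 0.0   # local stores first
        score += 0.5 if r["lang"] == user["language"] else 0.0     # language match
        score += user["clicks_shopping_ads"] * r["shopping"]       # commercial intent
        return score

    for r in sorted(results, key=personalized_score, reverse=True):
        print(f"{personalized_score(r):.2f}  {r['title']}")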

[youtube=https://www.youtube.com/watch?v=fDhsO_q7aYU]

CONTACT
Eli Pariser
http://elipariser.com/
email : epariser [at] elipariser [dot] com

PREVIOUSLY : the PANIC of CROWDS
http://spectrevision.net/2008/11/04/the-panic-of-crowds/

the FOUR CONDITIONS of CROWD-MIND HEALTH
http://kottke.org/04/07/wisdom-of-crowds
The wisdom of crowds you say? As Surowiecki explains, yes, but only under the right conditions. In order for a crowd to be smart, he says it needs to satisfy four conditions:

1. Diversity. A group with many different points of view will make better decisions than one where everyone knows the same information. Think multi-disciplinary teams building Web sites…programmers, designers, biz dev, QA folks, end users, and copywriters all contributing to the process, each with a unique view of what the final product should be. Contrast that with, say, the President of the US and his Cabinet.

2. Independence. “People’s opinions are not determined by those around them.” AKA, avoiding the circular mill problem.

3. Decentralization. “Power does not fully reside in one central location, and many of the important decisions are made by individuals based on their own local and specific knowledge rather than by an omniscient or farseeing planner.” The open source software development process is an example of effective decentralization in action.

4. Aggregation. You need some way of determining the group’s answer from the individual responses of its members. The evils of design by committee are due in part to the lack of correct aggregation of information. A better way to harness a group for the purpose of designing something would be for the group’s opinion to be aggregated by an individual who is skilled at incorporating differing viewpoints into a single shared vision and for everyone in the group to be aware of that process (good managers do this). Aggregation seems to be the most tricky of the four conditions to satisfy because there are so many different ways to aggregate opinion, not all of which are right for a given situation.

Satisfy those four conditions and you’ve hopefully cancelled out some of the error involved in all decision making: “If you ask a large enough group of diverse, independent people to make a prediction or estimate a probability, and then average those estimates, the errors each of them makes in coming up with an answer will cancel themselves out. Each person’s guess, you might say, has two components: information and error. Subtract the error, and you’re left with the information.”
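
Surowiecki's "information plus error" decomposition is easy to see in a toy simulation. Assuming the errors really are independent and unbiased, which is exactly what the diversity and independence conditions are meant to buy you, the error of the group average shrinks roughly with the square root of the group size (the numbers below are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(42)
    truth = 850                       # e.g. jelly beans in the jar

    for n in (1, 10, 100, 1000):
        # each guess = information (the truth) + an independent, zero-mean error
        guesses = truth + rng.normal(0, 200, size=(10_000, n))
        typical_error = np.abs(guesses.mean(axis=1) - truth).mean()
        print(f"group size {n:5d}: typical error of the group average {typical_error:6.1f}")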

CONTACT
James Surowiecki
http://www.newyorker.com/online/blogs/jamessurowiecki/
email : jamessuro [at] aol [dot] com

HOW CROWDS GET SMARTER
http://www.randomhouse.com/features/wisdomofcrowds/Q&A.html
Q & A with James Surowiecki

Q: How did you discover the wisdom of crowds?
A: The idea really came out of my writing on how markets work. Markets are made up of diverse people with different levels of information and intelligence, and yet when you put all those people together and they start buying and selling, they come up with generally intelligent decisions. Sometimes, though, they come up with remarkably stupid decisions—as they did during the stock-market bubble in the late 1990s. I was interested in what explained the successes and the failures of markets, and as I got further into it I realized that it wasn’t just markets that were smart. In fact, crowds of all sorts were often remarkably wise.

Q: Could you define “the crowd?”
A: A “crowd,” in the sense that I use the word in the book, is really any group of people who can act collectively to make decisions and solve problems. So, on the one hand, big organizations—like a company or a government agency—count as crowds. And so do small groups, like a team of scientists working on a problem. But I’m just as interested—maybe even more interested—in groups that aren’t really aware of themselves as groups, like bettors on a horse race or investors in the stock market. They make up crowds, too, because they’re collectively producing a solution to a complicated problem: the bets of people betting on a horse race determine what the odds on the race will be, and the choices of investors determine stock prices.

Q: Under what circumstances is the crowd smarter?
A: There are four key qualities that make a crowd smart. It needs to be diverse, so that people are bringing different pieces of information to the table. It needs to be decentralized, so that no one at the top is dictating the crowd’s answer. It needs a way of summarizing people’s opinions into one collective verdict. And the people in the crowd need to be independent, so that they pay attention mostly to their own information rather than worrying about what everyone around them thinks.

Q: And what circumstances can lead the crowd to make less-than-stellar decisions?
A: Essentially, any time most of the people in a group are biased in the same direction, it’s probably not going to make good decisions. So when diverse opinions are either frozen out or squelched when they’re voiced, groups tend to be dumb. And when people start paying too much attention to what others in the group think, that usually spells disaster, too. For instance, that’s how we get stock-market bubbles, which are a classic example of group stupidity: instead of worrying about how much a company is really worth, investors start worrying about how much other people will think the company is worth. The paradox of the wisdom of crowds is that the best group decisions come from lots of independent individual decisions.

Q: What kind of problems are crowds good at solving and what kind are they not good at solving?
A: Crowds are best when there’s a right answer to a problem or a question. (I call these “cognition” problems.) If you have, for instance, a factual question, the best way to get a consistently good answer is to ask a group. They’re also surprisingly good, though, at solving other kinds of problems. For instance, in smart crowds, people cooperate and work together even when it’s more rational for them to let others do the work. And in smart crowds, people are also able to coordinate their behavior—for instance, buyers and sellers are able to find each other and trade at a reasonable price—without anyone being in charge. Groups aren’t good at what you might call problems of skill—for instance, don’t ask a group to perform surgery or fly a plane.

Q: Why are we not better off finding an expert to make all the hard decisions?
A: Experts, no matter how smart, only have limited amounts of information. They also, like all of us, have biases. It’s very rare that one person can know more than a large group of people, and almost never does that same person know more about a whole series of questions. The other problem in finding an expert is that it’s actually hard to identify true experts. In fact, if a group is smart enough to find a real expert, it’s more than smart enough not to need one.

Q: Can you explain how a betting pool can help predict the future?
A: Well, predicting the future is what bettors try to do every day, when they try to figure out what horse will win a race or what football team will win on Sunday. What horse-racing odds or a point spread represent, then, is the group’s collective judgment about the future. And what we know from many studies is that that collective judgment is often remarkably accurate. Now, we have to be careful here. In the case of a horse race, for instance, what the group is good at predicting is the likelihood of each horse winning. The potential benefits of this are pretty obvious. If you’re a company, say, that’s trying to decide which product you should put out, what you want to know is the likelihood of success of your different options. A betting pool—or a market, or some other way of tapping into the wisdom of crowds—is the best way for you to get that information.

Q: Can you give an example of a current company that is tapping into the “wisdom of crowds?”
A: There’s a division of Eli Lilly called e.Lilly, which has been experimenting with using internal stock markets and hypothetical drug candidates to predict whether new drugs will gain FDA approval. That’s an essential thing for drug companies to know, because their whole business depends on them not only picking winners—that is, good, safe drugs—but also killing losers before they’ve invested too much money in them.

Q: You’ve explained how tapping into the crowd’s collective wisdom can help a corporation, but how can it help other entities, like a government, or perhaps more importantly, an individual?
A: Well, the same principles that make collective wisdom useful to a company make it just as useful to the government. For instance, in the book I talk about the Columbia disaster, showing how NASA’s failure to deal with the shuttle’s problems stemmed, in part, from a failure to tap into knowledge and information that the people in the organization actually had. And in a broader sense, I think the book suggests that the more diverse and free the flow of information in a society is, the better the decisions that society will reach. As far as individuals go, I think there are two consequences. First, we can look to collective decisions—as long as the groups are diverse, etc.—to give us good predictions. But the collective decisions will only be smart if each of us tries to be as independent as possible. So instead of just taking the advice of your smart friend, you should try to make your own choice. In doing so, you’ll make the group smarter.

Q: When you talk about using the crowd to make a decision, are you talking about consensus?
A: No, and this is one of the most important points in the book. The wisdom of crowds isn’t about consensus. It really emerges from disagreement and even conflict. It’s what you might call the average opinion of the group, but it’s not an opinion that everyone in the group can agree on. So that means you can’t find collective wisdom via compromise.

Q: What would Charles Mackay—the author of Extraordinary Popular Delusions and the Madness of Crowds—think of your book?
A: He would probably think I’m deluded. Mackay thought crowds were doomed to excess and foolishness, and that only individuals could produce intelligent decisions. On the other hand, a good chunk of my book is about how crowds can, as it were, go mad, and what allows them to succumb to delusions. Mackay would like those chapters.

Q: What do you most hope people will learn from reading your book?
A: I think the most important lesson is not to rely on the wisdom of one or two experts or leaders when making difficult decisions. That doesn’t mean that expertise is irrelevant, or that we don’t need smart people. It just means that together all of us know more than any one of us does.

EARLY CROWD EXPERIMENTS
http://www.randomhouse.com/features/wisdomofcrowds/audio.html
http://www.randomhouse.com/features/wisdomofcrowds/excerpt.html
by James Surowiecki

As it happens, the possibilities of group intelligence, at least when it came to judging questions of fact, were demonstrated by a host of experiments conducted by American sociologists and psychologists between 1920 and the mid-1950s, the heyday of research into group dynamics. Although in general, as we’ll see, the bigger the crowd the better, the groups in most of these early experiments—which for some reason remained relatively unknown outside of academia—were relatively small. Yet they nonetheless performed very well. The Columbia sociologist Hazel Knight kicked things off with a series of studies in the early 1920s, the first of which had the virtue of simplicity. In that study Knight asked the students in her class to estimate the room’s temperature, and then took a simple average of the estimates. The group guessed 72.4 degrees, while the actual temperature was 72 degrees. This was not, to be sure, the most auspicious beginning, since classroom temperatures are so stable that it’s hard to imagine a class’s estimate being too far off base. But in the years that followed, far more convincing evidence emerged, as students and soldiers across America were subjected to a barrage of puzzles, intelligence tests, and word games. The sociologist Kate H. Gordon asked two hundred students to rank items by weight, and found that the group’s “estimate” was 94 percent accurate, which was better than all but five of the individual guesses. In another experiment students were asked to look at ten piles of buckshot—each a slightly different size than the rest—that had been glued to a piece of white cardboard, and rank them by size. This time, the group’s guess was 94.5 percent accurate. A classic demonstration of group intelligence is the jelly-beans-in-the-jar experiment, in which invariably the group’s estimate is superior to the vast majority of the individual guesses. When finance professor Jack Treynor ran the experiment in his class with a jar that held 850 beans, the group estimate was 871. Only one of the fifty-six people in the class made a better guess.

There are two lessons to draw from these experiments. First, in most of them the members of the group were not talking to each other or working on a problem together. They were making individual guesses, which were aggregated and then averaged. This is exactly what Francis Galton did, and it is likely to produce excellent results. (In a later chapter, we’ll see how having members interact changes things, sometimes for the better, sometimes for the worse.) Second, the group’s guess will not be better than that of every single person in the group each time. In many (perhaps most) cases, there will be a few people who do better than the group. This is, in some sense, a good thing, since especially in situations where there is an incentive for doing well (like, say, the stock market) it gives people reason to keep participating. But there is no evidence in these studies that certain people consistently outperform the group. In other words, if you run ten different jelly-bean-counting experiments, it’s likely that each time one or two students will outperform the group. But they will not be the same students each time. Over the ten experiments, the group’s performance will almost certainly be the best possible. The simplest way to get reliably good answers is just to ask the group each time.

A similarly blunt approach also seems to work when wrestling with other kinds of problems. The theoretical physicist Norman L. Johnson has demonstrated this using computer simulations of individual “agents” making their way through a maze. Johnson, who does his work at the Los Alamos National Laboratory, was interested in understanding how groups might be able to solve problems that individuals on their own found difficult. So he built a maze—one that could be navigated via many different paths, some shorter, and some longer—and sent a group of agents into the maze one by one. The first time through, they just wandered around, the way you would if you were looking for a particular café in a city where you’d never been before. Whenever they came to a turning point—what Johnson called a “node”—they would randomly choose to go right or left. Therefore some people found their way, by chance, to the exit quickly, others more slowly. Then Johnson sent the agents back into the maze, but this time he allowed them to use the information they’d learned on their first trip, as if they’d dropped bread crumbs behind them the first time around. Johnson wanted to know how well his agents would use their new information. Predictably enough, they used it well, and were much smarter the second time through. The average agent took 34.3 steps to find the exit the first time, and just 12.8 steps to find it the second.

The key to the experiment, though, was this: Johnson took the results of all the trips through the maze and used them to calculate what he called the group’s “collective solution.” He figured out what a majority of the group did at each node of the maze, and then plotted a path through the maze based on the majority’s decisions. (If more people turned left than right at a given node, that was the direction he assumed the group took. Tie votes were broken randomly.) The group’s path was just nine steps long, which was not only shorter than the path of the average individual (12.8 steps), but as short as the path that even the smartest individual had been able to come up with. It was also as good an answer as you could find. There was no way to get through the maze in fewer than nine steps, so the group had discovered the optimal solution. The obvious question that follows, though, is: The judgment of crowds may be good in laboratory settings and classrooms, but what happens in the real world?
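
Johnson's "collective solution" is a majority vote at each decision point rather than an average, but the aggregation step is just as mechanical. The sketch below reconstructs the idea with a handful of invented runs (his actual maze and data are not reproduced here): tally each agent's left/right choice at every node it visited, take the majority, and read the collective path off the winners.

    from collections import Counter

    # invented runs: each agent's left/right choice at the nodes it happened to visit
    runs = [
        {"A": "L", "B": "R", "C": "L"},
        {"A": "L", "B": "L", "C": "L", "D": "R"},
        {"A": "R", "B": "R", "C": "L"},
        {"A": "L", "B": "R", "D": "R"},
        {"A": "L", "B": "R", "C": "R"},
    ]

    collective = {}
    for node in sorted({n for run in runs for n in run}):
        votes = Counter(run[node] for run in runs if node in run)
        # majority choice (Johnson broke ties randomly; this simply takes the first)
        collective[node] = votes.most_common(1)[0][0]

    print("collective choice at each node:", collective)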

II. on the UTILITY of PREDICTION MARKETS

ROGUE PROPAGANDIST TRIES to BULLSHIT MARKET
http://www.whistle-safe.org/article.pl?sid=32/07/20/0833218
http://www.cqpolitics.com/wmspage.cfm?docID=news-000002976265&referrer=js
Trader Drove Up Price of McCain ‘Stock’ in Online Market
BY Josh Rogin / Oct. 21, 2008

An internal investigation by the popular online market Intrade has revealed that an investor’s purchases prompted “unusual” price swings that boosted the prediction that Sen. John McCain will become president. Over the past several weeks, the investor has pushed hundreds of thousands of dollars into one of Intrade’s predictive markets for the presidential election, the company said. “The trading that caused the unusual price movements and discrepancies was principally due to a single ‘institutional’ member on Intrade,” said the company’s chief executive, John Delaney, in a statement released Thursday. “We have been in contact with the firm on a number of occasions. I have spoken to those involved personally.” After the internal investigation into the trading patterns, Intrade found no wrongdoing or violation of its exchange rules, according to the company. Citing privacy policies, Delaney would not disclose the investor’s identity or whether the investor was affiliated with any political campaign. According to Delaney, the investor was using “increased depth” in the Intrade market “to manage certain risks.” The action boosted the McCain prediction over its previous market value and above the levels of competing predictive-market Web sites. Pundits and politicians have used Intrade to track the fortunes of the two presidential candidates. Through the site, begun in 1999 and incorporated in Ireland, traders buy and sell “contracts” that function as stocks, allowing investors to gamble on the outcome of political, cultural, or even natural events such as the weather.

The company asserts and experts have found that the Intrade market is generally more accurate in predicting the outcome of major events than other leading indicators, including public opinion polls. But the relatively small scale of the market and its lack of outside regulation could leave the system vulnerable to unscrupulous investors, scholars of predictive markets say. Justin Wolfers, an associate professor at the University of Pennsylvania’s Wharton School of Business, said the trades in question do not follow any logical investment strategy. “Who knows who’s doing it, it’s obviously someone who wants good news for McCain,” said Wolfers, who has been following the situation closely. McCain campaign spokesman Michael Goldfarb said: “It’s always a good time to buy McCain.”

Ripple Effects
Intrade users first noticed something amiss when a series of large purchases running counter to market predictions sparked volatility in the prices of John McCain and Barack Obama contracts. The investor under scrutiny purchased large blocks of McCain futures at once, boosting their price and increasing the prediction that McCain had a greater chance of winning the presidential election. At other times, according to Intrade’s online records, blocks of Obama futures were sold — lowering the market’s prediction about Obama’s standing in the race. According to Intrade bulletin boards and market histories, smaller investors swept in to take advantage of what they saw as price discrepancies caused by the market shifts — quickly returning the Obama and McCain futures prices to their previous value. This resulted in losses for the investor and profits for the small investors who followed the patterns to take maximum advantage. The activities of the trader, dubbed the “rogue trader” on Intrade’s message boards, raised several questions. For example, the trader purchased large contracts named specifically after McCain and Obama. There were no similar-sized investments, however, in separate instruments that predict a generic Republican or Democratic presidential win — even though both sets of contracts apply to the same event, prices show. Some political news sites, such as realclearpolitics.com, prominently display Intrade’s McCain contract value but do not display the corresponding value for a Republican presidential win. Similar trading patterns were not found in competing predictive market Web sites betting on John McCain, such as the Iowa Electronic Markets or Betfair. This means the trader was paying thousands of dollars more than necessary to purchase McCain contracts on Intrade, where the price of betting on McCain was much higher. On Sept. 24, for example, Obama contracts were trading on Intrade at a price that predicts a 52 percent chance of an Obama victory. At the same time, Betfair and IEM contracts equated to about a 62 percent chance of an Obama victory, according to the political site fivethirtyeight.com. Intrade records show the trader often purchased tens or hundreds of thousands of dollars of contracts in the middle of the night, when activity was at its lowest, and in large bursts. In a three-day period from Sept. 30 through Oct. 2, four separate flurries of buying drove the price of the McCain contracts up by 3 to 5 points each. Those numbers eventually settled when the market compensated. “These movements over McCain largely occurred at times when there was no way that any useful information came out that was pro-McCain,” Wolfers said. “A profit-motivated guy wants to buy his stock in a way that would minimize his impact on the price, a manipulator wants to maximize it.”
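
A note on the percentages in that paragraph: a winner-take-all contract pays out a fixed amount if the event happens and nothing otherwise, so, ignoring fees, margin and risk premia, its quoted price can be read directly as the market's implied probability. The snippet below simply restates the Sept. 24 figures cited above in that form:

    # Binary contracts pay 1 if the event occurs and 0 otherwise, so the quoted price
    # is roughly the implied probability (ignoring fees, margin and risk premia).
    quotes = {
        "Intrade (Obama wins)":     0.52,   # 24 Sept. 2008, as reported above
        "Betfair/IEM (Obama wins)": 0.62,
    }
    for market, price in quotes.items():
        print(f"{market:26s} implied probability {price:.0%}")

    gap = quotes["Betfair/IEM (Obama wins)"] - quotes["Intrade (Obama wins)"]
    print(f"cross-market gap: {gap:.0%} (the discrepancy arbitrageurs traded against)")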

Rogue Tactics
According to Intrade, the company contacted the investor and used public and private data held by the company as part of its investigation. That included an analysis of the trades made by the investor, tracking of Internet addresses, checking physical addresses and other information. Intrade released details about its investigation in a statement on its Web site. Some Intrade users commented on the company’s message board that the trader may believe in McCain’s chances for victory, despite trends in recent public opinion polls. Indeed, bucking conventional wisdom can be a profit-making strategy. For example, David Rothschild, a researcher and Ph.D. candidate at the Wharton School, said that during the first two presidential debates, the trader bet thousands of dollars on a McCain electoral victory at the same moment that instant polls were suggesting that Obama would win. “That’s equivalent to buying a company’s stock just as negative earnings reports come out,” Rothschild said. “It is a bad investment, but may make some observers think that Mr. McCain won the debate, which, again, would be the goal of market manipulation.” Also, the trader paid a premium of 10 percent to 20 percent on every dollar traded by not placing similar bets on other Web sites, according to Rothschild’s calculations. Overall, if the trader’s motive was to influence the Intrade market, he was remarkably successful, Rothschild said. The trader’s actions helped keep the probability of Obama winning the election on Intrade about 10 percent lower than on Betfair and IEM for more than a month. “If the investor did this as an investment, not to manipulate Intrade, he is one of the most foolish investors in the world,” Rothschild said.

MARKET MANIPULATION RESEARCH
http://hanson.gmu.edu/biashelp.pdf
http://www.unc.edu/~cigar/papers/ManipIHT_June2008(KS).pdf
http://bpp.wharton.upenn.edu/jwolfers/Press/WSJcolumn/16-Market%20Manipulation%20Muddies%20Election%20Outlook.pdf

PROOF of CONCEPT?
http://www.fivethirtyeight.com/2008/09/intrade-betting-is-suspcious.html
http://freakonomics.blogs.nytimes.com/2008/10/02/manipulation-in-political-prediction-markets/#more-3145
http://www.marginalrevolution.com/marginalrevolution/2008/01/prediction-mark.html
http://www.marginalrevolution.com/marginalrevolution/2008/10/manipulation-of.html
“This is big news but not for the reasons that most people think. Although some manipulation is clearly possible in the short run, the manipulation was already suspected due to differences between Intrade and other prediction markets. As a result: “According to Intrade bulletin boards and market histories, smaller investors swept in to take advantage of what they saw as price discrepancies caused by the market shifts — quickly returning the Obama and McCain futures prices to their previous value. This resulted in losses for the investor and profits for the small investors who followed the patterns to take maximum advantage.”

This supports Robin Hanson’s and Ryan Oprea’s finding that manipulation can improve (!) prediction markets – the reason is that manipulation offers informed investors a free lunch. In a stock market, for example, when you buy (thinking the price will rise) someone else is selling (presumably thinking the price will fall) so if you do not have inside information you should not expect an above normal profit from your trade. But a manipulator sells and buys based on reasons other than expectations and so offers other investors a greater than normal return. The more manipulation, therefore, the greater the expected profit from betting according to rational expectations.

An even more important lesson is that prediction markets have truly arrived when people think they are worth manipulating. Notice that the manipulator probably doesn’t care about changing the market prediction per se. Instead, a manipulator willing to bet hundreds of thousands to change the prediction of a McCain win must think that the prediction will actually affect the outcome. And if people think prediction markets are this important then can decision markets be far behind?”
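
The Hanson and Oprea argument quoted above can be illustrated with a toy simulation (it is not their model, and all the numbers are invented): a manipulator repeatedly pushes the price away from the fundamental probability for non-informational reasons, and a trader with an unbiased estimate simply trades against the mispricing. On average that trader earns a positive profit per contract, which is the "free lunch" that draws informed money in and pulls the price back.

    import numpy as np

    rng = np.random.default_rng(1)
    p_true = 0.45                 # assumed true probability of the event
    trials = 10_000

    profits = []
    for _ in range(trials):
        # the manipulator pushes the quoted price up for reasons unrelated to information
        price = np.clip(p_true + rng.normal(0.10, 0.05), 0.01, 0.99)
        # an informed trader holds an unbiased (noisy) estimate of p_true
        estimate = np.clip(p_true + rng.normal(0.0, 0.03), 0.01, 0.99)
        side = -1 if price > estimate else 1          # sell if the contract looks rich
        outcome = 1.0 if rng.random() < p_true else 0.0
        profits.append(side * (outcome - price))      # long profit = payout - price

    print(f"informed trader's average profit per contract: {np.mean(profits):+.3f}")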

COLLECTIVE INTELLIGENCE
http://www.prokons.com/prediction-markets/faq
http://flipflopmoney.com/2008/05/11/intrade-making-money-online-not-the-usual-way/
http://www.intrade.com/
http://us.newsfutures.com/
http://www.biz.uiowa.edu/iem/
http://www.fivethirtyeight.com/
http://www.realclearpolitics.com/
http://www.hsx.com/

http://en.wikipedia.org/wiki/Prediction_market
http://en.wikipedia.org/wiki/Election_Stock_Market
http://en.wikipedia.org/wiki/Policy_Analysis_Market
http://en.wikipedia.org/wiki/Futures_market
http://en.wikipedia.org/wiki/Efficient_market_hypothesis

http://www.forecastingprinciples.com/PM/
http://www.midasoracle.org/
http://www.chrisfmasse.com/
http://www.predictionmarketjournal.com/
http://www.pmindustry.org/
http://betting.betfair.com/specials/politics-betting/prediction-markets/
http://www.dmreview.com/bissues/20070301/2600311-1.html?bir=1
http://www.ideosphere.com/
http://www.consensuspoint.com/blog/?m=200809

POSITIVE ECONOMICS
http://en.wikipedia.org/wiki/Positive_economics
http://en.wikipedia.org/wiki/Essays_in_Positive_Economics#The_Methodology_of_Positive_Economics
http://academic2.american.edu/~dfagel/Class%20Readings/Friedman/Methodology.pdf

ORGANIZING without ORGANIZATIONS
http://cyber.law.harvard.edu/interactive/events/2008/02/shirky
http://www.herecomeseverybody.org/
http://www.shirky.com/

‘HERD INSTINCT’ as ECONOMIC MODEL
http://www.forbes.com/2008/10/21/why-bubbles-economy-markets-bubbles08-cx_th_1021harford.html
Why Do Markets Create Bubbles?
BY Tim Harford / 10.21.08

Bubbles are like pornography: Everyone has his or her own opinion as to what qualifies, but it is impossible to pen a precise definition. If you wish to push the metaphor further, both are also fun for a while, if you like that sort of thing, but apt to end up making you feel deflated and embarrassed. Bubbles are also embarrassing for the economics profession. It’s not that we have no idea what causes bubbles to form, it’s that we have too many ideas for comfort. Some explanations are psychological. Some point out that many bubbles have been stoked not by markets but by governments. There is even a school of thought that some famous bubbles weren’t bubbles at all.

The psychological explanation is the easiest to explain: People get carried away. They hear stories of their neighbors getting rich and they want a piece of the action. They figure, somehow, that the price of stocks (1929) or dot-com start-ups (1999) or real estate (2006) can only go up. A symptom of this crowd psychology is that the typical investor displays exquisitely bad timing. The economist Ilia Dichev of the University of Michigan has recently calculated “dollar-weighted” returns for major stock indexes; this is a way of adjusting for investors rushing into the market at certain times. It turns out that “dollar-weighted” returns are substantially lower than “buy and hold” returns. In other words, investors flood in when the market is near its peak, tending to buy high and sell low. The herd instinct seems to cost us money. That is awkward for economists, because mainstream economic models do not really encompass “herd instinct” as a variable. Still, some economists are teaming up with psychologists and even neurophysiologists in the search for an answer.
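
Dichev's "dollar-weighted" return is the internal rate of return of the aggregate cash flows investors actually put in and take out, as opposed to the return on a dollar parked at the start. A toy example (invented index levels and flows, not Dichev's data) shows how piling in near the top drags the dollar-weighted figure below buy and hold:

    import numpy as np

    # Invented illustration, not Dichev's data: the index rises, the crowd piles in
    # near the top, and the index then falls back.
    index = [100, 150, 200, 120]      # index level at t = 0, 1, 2, 3
    invested = [1, 0, 9, 0]           # dollars the crowd puts in at each t

    # buy-and-hold: per-period return on one dollar held from t=0 to t=3
    buy_and_hold = (index[-1] / index[0]) ** (1 / (len(index) - 1)) - 1

    # dollar-weighted: the internal rate of return of the crowd's aggregate cash flows
    shares = sum(c / level for c, level in zip(invested, index))
    cash_flows = [-c for c in invested]
    cash_flows[-1] += shares * index[-1]              # everything is sold at t=3
    # solve sum(cf_t * x**t) = 0 with x = 1/(1+r)
    roots = np.roots(list(reversed(cash_flows)))
    x = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0][0]
    dollar_weighted = 1 / x - 1

    print(f"buy-and-hold return per period:    {buy_and_hold:6.1%}")
    print(f"dollar-weighted return per period: {dollar_weighted:6.1%}")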

Cambridge economist John Coates is one of them. He used to manage a Wall Street trading desk and was struck by the way the (male) traders changed as the dot-com bubble inflated. They would pump their arms, yell, leave pornography around the office and in general behave as though they were high on something. It turns out that they were: It was testosterone. Many male animals–bulls, hares, rutting stags and the like–fight with sexual rivals. The winner experiences a surge of testosterone, which makes him more aggressive and more likely to take risks. In the short run that tends to mean that winners keep winning; in the long run, they take too many risks. Dr. Coates wondered if profitable traders were also running on testosterone, and a few saliva samples later it appears that he is right. Profitable trading days boost testosterone levels and tend to encourage more risk-taking, more wins and more testosterone. When the risks didn’t pay off, the testosterone ebbed away to be replaced by a stress hormone, cortisol. The whole process seems likely to exaggerate peaks and troughs. These psychological explanations are likely to help us understand what goes on as bubbles form and how they might be prevented. Yet they make me nervous: It is too easy to blame a bubble on the mob psychology of the market when a closer look at most bubbles reveals that there is much more to the story than that.

For example, one famous early “mania” was the Mississippi bubble, in which countless investors poured their money into the Compagnie des Indes in France in 1720, and lost it. Yet there was more going on than a free-market frenzy: The government could hardly have been more closely involved. The Compagnie des Indes had effectively taken over the French Treasury and held legal monopolies on French trade with much of the rest of the world (including Louisiana–hence “Mississippi bubble”). Investors were hardly insane to think that such a political machine might be profitable, especially since the king of France personally held many of the shares. But the king sold out near the top in 1720; within two years, the Compagnie was bankrupt and its political power dismantled.

The government played its own part in the current credit crunch, too. For all the scapegoating of deregulation, thoughtful commentators also point to the Federal Reserve’s policy of cheap money, and Fannie and Freddie’s enormous appetite for junk mortgages–urged all the way by politicians trying to make credit available to poor and risky borrowers. Market psychology was part of the story, but not the whole story. The idea that ordinary people have a tendency to be caught up in investment manias is a powerful one, thanks in part to Charles Mackay, author in 1841 of the evergreen book Extraordinary Popular Delusions and the Madness of Crowds. Mackay’s most memorable example was the notorious Dutch tulip bubble of 1637, in which –absurdity!–tulip bulbs changed hands for the price of a house.

It is the quintessential case study of financial hysteria, but it’s not clear that there was ever an important tulip bubble. Rare tulip flowers–we now know that their intricate patterning is caused by a virus–were worth huge sums to wealthy Parisian gentlemen trying to impress the ladies. Bulbs were the assets that produced these floral gems, like geese that laid golden eggs. Their value was no fantasy. Peter Garber, a historian of economic bubbles, points out that a single bulb could, over time, be used to produce many more bulbs. The price of the bulbs would, of course, fall as more were cultivated. A modern analogy would be the first copy of a Hollywood film: the final copies may circulate for a few dollars, but the original is worth tens of millions. Garber points out that rare flower breeds still change hands for hundreds of thousands of dollars. Perhaps we shouldn’t be quite so sure that the tulipmania really was a mania. Economists are going to have to get better at understanding why bubbles form from a heady mix of fraud, greed, perverse incentives, mob psychology and government incompetence. What we should never forget is that underneath the apparent hysteria, there is often a cold rationality to it all.

FIELD RECORDINGS
http://betting.betfair.com/specials/politics-betting/prediction-markets/the-betfair-prof/whats-the-connection-between-a-1906-poultry-exhibition-and-t-180908.html
What’s the connection between a 1906 poultry exhibition and the 2008 US election?
by Leighton Vaughan Williams / 18 September 2008

Sir Francis Galton was an English explorer, anthropologist and scientist who was born in 1822 and died in 1911. To students of prediction markets he is best known, however, for his visit, at the age of 85, to the West of England Fat Stock and Poultry Exhibition, and what happened when he came across a competition in which visitors could, for sixpence, guess the weight of an ox. Those who guessed closest would receive prizes. About 800 people entered. Ever the scientist, he decided to examine the ledger of entries to see how clever these ordinary folk actually were in estimating the correct weight. In letters to ‘Nature’ magazine, published in March of 1907, he explained just how ordinary those entering the competition were. “Many non-experts competed”, he wrote, “like those clerks and others who have no knowledge of horses, but who bet on races, guided by newspapers, friends, and their own fancies … The average was probably as well fitted for making a just estimate of the dressed weight of the ox as an average voter is of judging the merits of most political issues”.

The results surprised him. For what he found was that the crowd had guessed (taking the mean, i.e. adding up the guesses and dividing by the number of entrants) that the ox would weigh 1,197 pounds. In fact, it weighed 1,198 pounds! The median estimate (listing the guesses from the highest to the lowest and taking the mid-point) was also close (1,207 pounds, and therefore still within 1% of the correct weight) but not as close. Some have argued that Galton himself favoured the use of the median rather than the mean, and so was double-surprised when the mean beat the median. Others have argued that the point is incidental and what this tale demonstrates about the wisdom of the crowd is more important than such a fine statistical detail. I think that both these points of view contain some merit. The power of the market to aggregate information is indeed a critically important idea. But it is also important to be able to distinguish in different contexts which measure of the ‘average’ (the mean, the median, or perhaps some other measure) is more suited to the purpose at hand.

Take the stream of opinion polls which contribute to the collective knowledge that drives the Betfair market about the identity of the next President of the United States. If five are released, say, on a given day, what is the most appropriate way of gauging the information contained in them? Should we simply add up the polling numbers for each candidate and divide by the number of polls, or should we list them from highest polling score to lowest and take the mid-point? The convention adopted by sites such as www.realclearpolitics.com is to take the mean. But is there a better measure than the mean of discerning the collective wisdom contained in the polls, and if so, what is it? The jury is still deliberating.
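
The difference the author is pointing at is easy to state in miniature. With invented poll numbers for a single candidate, one outlier poll moves the mean but not the median:

    import statistics

    polls = [48, 49, 50, 51, 62]      # five hypothetical same-day polls; one outlier
    print(f"mean:   {statistics.mean(polls):.1f}")    # pulled up by the outlier
    print(f"median: {statistics.median(polls):.1f}")  # unmoved by a single outlier

The ox figures quoted above show the other face of the same choice: at Galton's exhibition the mean (1,197 pounds) happened to land closer to the true weight than the median (1,207 pounds).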

CONTACT
Leighton Vaughan Williams
http://www.ntu.ac.uk/research/school_research/nbs/staff/61441gp.html
http://www.ntu.ac.uk/nbs/business/specialist_centres/political_forecasting.html
http://www.predictionmarketjournal.com/
email : leighton.vaughan-williams [at] ntu.ac [dot] uk


‘COLLECTIVE BEST GUESS’ CORRECT within a FURLONG
http://betting.betfair.com/specials/politics-betting/prediction-markets/the-betfair-prof/the-betfair-prof-question-how-do-you-find-a-missing-submarin-080408.html
“Question: How do you find a missing submarine? Answer: Ask the audience”
BY Leighton Vaughan Williams / 8 April 2008

During a car journey between Nottingham and Warwick the other week I was told a story about the value of crowd wisdom in turning up buried treasure. The story was that by asking a host of people, each with a little knowledge of ships, sailing and the sea, where a vessel is likely to have sunk in years gone by, it is possible with astonishing accuracy to pinpoint the wreck and the bounty within. Individually, each of those contributing a guess as to the location is limited to their special knowledge, whether of winds or tides or surf or sailors, but the idea is that together their combined wisdom (arrived at by averaging their guesses) could pinpoint the treasure more accurately than a range of other predictive tools. At least that’s the way it was told to me by an economist who was in turn told the story by a physicist friend.

To any advocate of the power of prediction markets, this certainly sounds plausible, so I decided to investigate further. Soon I was getting acquainted with the fascinating tale of the submarine USS Scorpion, as related by Mark Rubinstein, Professor of Applied Investment Analysis at the University of California at Berkeley. In a fascinating paper titled, ‘Rational Markets? Yes or No? The Affirmative Case’, he tells of a story related in a book called ‘Blind Man’s Bluff: The Untold Story of American Submarine Espionage’ by Sherry Sontag and Christopher Drew. The book tells how on the afternoon of May 27, 1968, the submarine USS Scorpion was declared missing with all 99 men aboard. It was known that she must be lost at some point below the surface of the Atlantic Ocean within a circle 20 miles wide. This information was of some help, of course, but not enough to determine even five months later where she could actually be found.

The Navy had all but given up hope of finding the submarine when John Craven, who was their top deep-water scientist, came up with a plan which pre-dated the explosion of interest in prediction markets by decades. He simply turned to a group of submarine and salvage experts and asked them to bet on the probabilities of what could have happened. Taking an average of their responses, he was able to identify the location of the missing vessel to within a furlong (220 yards) of its actual location. The sub was found. Sontag and Drew also relate the story of how the Navy located a live hydrogen bomb lost by the Air Force, albeit without reference in that case to the wisdom of crowds. Perhaps, though, that tale is too secret yet to be told! What then, I wonder, would those scientific giants, Karl Pearson and Lord Rayleigh, have made of it all? It was their correspondence, you may recall, in the pages of the scientific journal, ‘Nature’, which answered the classic query of where to find the drunk you left in a field. “Where you left him,” was the answer. Which is all very well, of course, if you were sober enough yourself to know exactly where that might have been!
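
As Rubinstein recounts it, Craven's procedure combined the experts' wagers on competing scenarios with search theory; the averaging step itself is the easy part. Here is a minimal sketch of that step with invented probabilities and positions: weight each scenario's implied location by the aggregated betting odds and take the expectation.

    # Invented scenario probabilities and offsets (nautical miles from a reference point);
    # Craven's real analysis combined expert bets with Bayesian search theory.
    scenarios = [
        # (aggregated probability, east offset, north offset)
        (0.35, 2.0, -1.5),
        (0.25, 4.5,  0.5),
        (0.20, 1.0,  3.0),
        (0.20, 3.5,  2.0),
    ]

    total = sum(p for p, _, _ in scenarios)
    east  = sum(p * e for p, e, _ in scenarios) / total
    north = sum(p * n for p, _, n in scenarios) / total
    print(f"collective best guess: {east:.2f} nm east, {north:.2f} nm north of the reference point")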

PREVIOUSLY : TANGANYIKA LAUGHTER EPIDEMIC, 1962-64
http://spectrevision.net/2009/01/03/tanganyika-laughter-epidemic-1962-64/
CROWD-SOURCING the FUTURE
http://spectrevision.net/2007/04/05/crowd-sourcing-the-future/
MECHANISM DESIGN THEORY
http://spectrevision.net/2007/10/22/mechanism-design-theory/
SWARM INTELLIGENCE
http://spectrevision.net/2007/07/16/swarm-intelligence/

SEE ALSO : FINANCIAL LITERACY
http://annalusardi.blogspot.com/
http://www.dartmouth.edu/~alusardi/media.html

GAIN CONFIDENCE
http://www.gametheory.net/dictionary/
http://www.gametheory.net/tests/
http://www.gametheory.net/games/
http://www.gametheory.net/applets/
http://www.gametheory.net/links/academic-journals.html
http://www.gametheory.net/lectures/
http://www.gametheory.net/books/
http://gambit.sourceforge.net/
http://www.strategy-business.com/library/enews
http://www.ics.uci.edu/~eppstein/cgt/
http://kuznets.fas.harvard.edu/~aroth/alroth.html
http://www.perfecteconomy.com/pg-glossary-of-terms.html
http://www-sop.inria.fr/coprin/ISDG/
http://meganmcardle.theatlantic.com/archives/2008/10/recommended_reading.php
http://www.econlib.org/
