“When confronted with threatening stimuli and predators, the crayfish responds with an innate escape mechanism called the startle reflex. Also known as tailflipping, this stereotyped behaviour involves rapid flexions of the abdominal muscles which produce powerful swimming strokes that thrust the small crustacean through the water and away from danger. In the struggle for existence, the speed of this response can mean the difference between life and death, and the crayfish has evolved an incredibly fast escape mechanism which can be initiated in well under one-hundredth of a second. This mechanism depends on a process called coincidence detection, whereby electrical impulses from sensory organs on different parts of the body arrive simultaneously at a specific location in the central nervous system. Although this reflex has been studied intensively, the mechanism by which nervous impulses arrive in synchrony at the central nervous system has remained poorly understood.”


Fear the Roller Coaster? Embrace It
by Dennis K. Berman  /   September 11, 2007

In these markets, everyone’s afraid. It’s your response to the fear that matters most. Are you going to crack up like Howard Dean in 2004? Or detach yourself, analyze and respond like Neil Armstrong in 1969? Astronauts, firefighters and soldiers train to respond to moments of duress. The rest of us are left on our own. And in most cases, the results aren’t good. We generally underestimate the true dangers arrayed against us, overweighting dramatically violent outcomes relative to more insidious ones. And in times when we lack information, we’re prone to imagine the worst, scientists say. We are only as effective as our emotions allow us to be.

Which is precisely why this current market is so daunting. Consider the unknowns still in play: The choked market for short-term corporate funding. The impossible-to-value mounds of LBO debt and equity. The daisy-chain effect between liquidating hedge funds and the broader market. It’s a far different situation than the market drop of 2001, when the downturn was spurred by the relatively simple concept that technology stocks were broadly overvalued. What’s the best way to handle all of this lingering fear? Some inspiration comes from a group of researchers who have been applying new techniques to get an answer. The researchers have begun studying professional traders as if they were chimps, even using MRI machines to divine how fear affects the brain. “We are responding from a different part of the brain when we are in the midst of calm, clear thought,” says Brett Steenbarger, a psychiatry professor at the State University of New York’s Upstate Medical University, who also trains traders and hedge-fund managers.

That area is the prefrontal cortex, what he calls the “executive” node of the brain that plans and reasons. When we are fearful, blood flows away from the area toward the motor areas of the brain – the ones that produce the fight-or-flight response. This is great if you’re confronting a saber-toothed tiger, but not so great if you’re mulling your daughter’s college fund. “You end up making decisions rashly without engaging in research and planning that you might otherwise do,” he explains. Dr. Steenbarger has found that the most important step is to get back to basics: to methodically check whether the hypothesis that got you into an investment still applies. The next step in controlling market fear may be to eliminate as much borrowing as possible, he adds. Leverage, he says, magnifies financial results and therefore emotional swings. During times of high volatility, this can become an especially dangerous trap for bad decision making.

Andrew Lo, a professor at Massachusetts Institute of Technology, has observed professional traders in their natural habitats. He’s found that there is some truth to the idea of Cool Hand Luke: veteran traders’ palms get less sweaty than novices’. After especially stressful moments, these traders return to a standard physiological baseline. The novices “are all over the map,” Dr. Lo says. He recommends two other means of coping with financial fear. The first sounds simple but is essential – training yourself to recognize fear in the first place. For example, your habit may be to avoid the markets altogether by shunning the newspaper or online stock quotes. The second approach is to prepare for a busted or volatile market,
much like an astronaut rehearsing emergency procedures. This helps neutralize the fear in your decision making, especially in those moments when it seems so easy to succumb.

That’s why it might make sense to decide ahead of time your range of responses if your portfolio loses, say, 10 percent to 20 percent of its value. Research has shown that, unsurprisingly, retail investors are usually the worst at this, adds Dr. Lo. Neil Armstrong’s cool is on vivid display in the wonderful new movie, “In the Shadow of the Moon,” about the Apollo lunar missions. In one fraught moment, Mr. Armstrong is running low on fuel as he pilots the spacecraft to the moon’s surface. The cameras pan to the smoking, sweating wonks in Mission Control. Piped in by radio, Mr. Armstrong’s voice sounds unshaken, almost blasé. His best human trait – his intellect – has subdued his most animal one – his fear. That’s been the experience of 67-year-old Lewis van Amerongen, formerly of the private-equity firm Gibbons, Green, Goodwin & van Amerongen. Having pioneered the buyout business, the firm got bogged down in the now-infamous “Burning Bed” purchase of Ohio Mattress Co. in the late 1980s. When the junk-bond market collapsed soon afterward, the bank First Boston couldn’t refinance a $457 million bridge loan and ended up owning most of the company. “Each generation has to go through it and has to emotionally experience it,” Mr. van Amerongen said in an interview. “Without that, it’s just an academic exercise.” In other words, there is no substitute for having survived other fearful experiences. The best antidote for fear just may be fear itself.


Brett Steenbarger
email : steenbab [at] aol [dot] com

Andrew Lo
email : ssalem [at] mit [dot] edu

Darwinian Investing – Dr. Andrew Lo’s market theory borrows from
neuroscience, evolution, and econometrics
by Christopher Farrell  /  February 20, 2006
“Can brain science unlock the secrets of success on Wall Street? And if so, will it transform the field of personal finance? These matters fascinate Andrew W. Lo, a finance professor at Massachusetts Institute of Technology’s Sloan School of Management and director of its Laboratory for Financial Engineering. Lo, 45, and a small band of economists are tapping into neuroscience and cognitive psychology to
better understand how investors make financial decisions. In one early experiment, he and a colleague wired up 10 traders in Boston and monitored their breathing, body temperature, perspiration, pulse rates, and muscle activity as they risked real money in the markets. While the most seasoned traders in the group remained relatively calm,
nearly everyone had sweaty palms and quickened pulses when the markets grew more volatile. “Even the best traders have significant emotional responses when they trade,” says Lo. This defies the stereotype of traders as rational, coolly analytical Vulcans of commerce. Lo’s results, along with further studies using more sophisticated magnetic-resonance imaging on traders, also undercut a dominant theory known as the efficient market hypothesis (EMH), which holds that markets aggregate information efficiently and investors form their financial expectations rationally. The reality may be much messier. Lo, who also serves as chief scientific officer at the hedge fund Alphasimplex, breaks with both EMH and behavioral economics in seeing emotions as central to survival in the market. But this is just one element in a theory Lo is developing called the Adaptive Market Hypothesis. It describes how investors use trial and error to establish rules of thumb when placing financial bets and then hone their skills amid disruptive changes. Think of the market as an ecosystem made up of hedge funds, mutual funds, retail investors, and other “species,” all competing for profit opportunities. It’s a Darwinian world where market shifts render some strategies obsolete, resulting in chances missed and money lost, says Lo. “The only way to maintain an edge is to continually innovate.”

Lo is not the first to incorporate the insights of Charles Darwin in his models. Luminaries from Joseph Schumpeter to Gary Becker explored this territory in the past. But Lo’s mingling of neuroscience, evolution, and financial econometrics is highly original. He predicts that the insights of evolutionary psychology will change individual
wealth- and risk-management techniques, right down to how people handle 401(k) portfolios or deal with declining home prices. Prepped with appropriate data from Lo’s research, a simple computer program might one day provide invaluable financial advice. You would punch in basic information, such as family status, life goals, the standard of living you would find acceptable in retirement, and the types of risks
you can or can’t tolerate. An algorithm would then tailor a portfolio for you and help you hedge against unwanted risks, such as a lost job or a wage cut. “Now, it sounds like science fiction,” says Lo. “Not in 10 years.” Sci-fi was an important influence on Lo, whose family moved from Taiwan to Queens, N.Y., when he was 5. Raised by his mother, he became an academic star. He skipped eighth grade, sped through Bronx
High School of Science and Yale University, and nabbed a PhD in economics from Harvard University at age 24. But it was Isaac Asimov’s Foundation trilogy that steered him toward financial economics. Asimov sketched out a branch of mathematics called psychohistory, whose practitioners sample the proclivities of large numbers of people, then accurately predict the future based on what they learn. Sound familiar?”

Visualizing Market Fear
by Richard L. Peterson  /  September 26, 2007

“How can you cope with market fear? Many investors consider this a crucial question. Yet it often isn’t until periods of fear and sharp market downturns that investors think, “now I know I shouldn’t sell everything, but it really hurts!” It’s at these times that the
excellent investors and traders stand out. They can muster the courage to buy in such markets, even as the financial news and media pundits are screaming, “The sky is falling!” The MarketPsych Fear Index was displayed on the Wall Street Journal’s C1 Money and Investing page a couple of weeks ago. The index helps investors visualize the fear that is affecting their judgment. Studies
show that we’re all affected by market fear, and it takes a lot of courage and experience to step back, see the fear and identify the opportunities it creates. The first step is understanding that fear is contagious. The second step is identifying where it is and how strong it is. That’s what our index enables.”

On Wall Street, Eyes Turn to the Fear Index
by Michael M. Grynbaum  /  October 20, 2008

Fear is running high on Wall Street. Just look at the Fear Index. With all those stomach-churning free falls and sharp reversals in the stock market recently, traders are keeping a nervous eye on an obscure index known as the VIX. The VIX (officially the Chicago Board Options Exchange Volatility Index) measures volatility, the technical term for those wrenching market swings. A rising VIX is usually regarded as a sign that fear, rather than greed, is ruling the market. The higher the VIX goes, the more unhinged the market looks. So how scared are investors? On Friday, the VIX rose to 70.33, its highest close since its introduction in 1993. To some experts, that suggests that the wild ride is far from over. “Right now, it’s an extremely important part of the puzzle,” Steve Sachs, a trader at Rydex Investments, said of the VIX. “It’s showing a huge amount of fear in the marketplace.”

The VIX is hardly a household name like the Dow. But lately, it has become a fixture on CNBC and other financial news outlets, with commentators often invoking an index that most of the general public was blissfully unaware of only a few weeks ago. Some traders think all the publicity has only added to the anxieties that the VIX is intended to reflect. “The VIX is a self-fulfilling prophecy,” said Ryan Larson, head equity trader at Voyageur Asset Management. “It’s almost adding to the problems.” Speaking on Thursday, when the VIX hit an intraday high of 81.17 before closing lower, he said: “You see the VIX trade north of 80, and of course the media starts to pick it up.” Mr. Larson continued, “It’s blasted on the TV, and for the average investor sitting at home, they think, oh, my gosh, the VIX just broke 80 — I’ve got to go sell my stocks.”

Put simply, the VIX measures the degree to which investors think stocks will swing violently in the next 30 days. It is calculated in real time throughout the trading day, fluctuating minute to minute. The higher the VIX, the bigger the expected swings — and the index has a good track record. It spiked in 1998 when a big hedge fund, Long-Term Capital Management, collapsed, and after the 9/11 terrorist attacks. Mr. Sachs, with some incredulity, said that the swings in the stock market have reflected the volatility implied by the VIX. “We had a 17 percent peak-to-trough trading range this week,” he said. “It should take two years under normal circumstances for the S.& P. 500 to have that type of trading range.”

The VIX had its origin in 1993, when the Chicago Board Options Exchange approached Robert E. Whaley, then a professor at Duke, with a dual proposal. “The first purpose was the one that is being served right now — find a barometer of market anxiety or investor fear,” Professor Whaley, who now teaches at the Owen Graduate School of Management at Vanderbilt University, recalled in an interview. But, he said, the board also wanted to create an index that investors could bet on using futures and options, providing a new revenue stream for the exchange. Professor Whaley spent a sabbatical in France toying with formulas. He returned to the United States with the VIX, which gauges anxiety by calculating the premiums paid in a specific options market run by the Chicago Board Options Exchange.

An option is a contract that permits an investor to buy or sell a security at a certain date at a certain price. These contracts often amount to insurance policies in case big moves in the market cause trouble in a portfolio. A contract, like insurance, costs money — specifically, a premium, whose price can fluctuate. The VIX, in its current form, measures premiums paid by investors who buy options tied to the price of the Standard & Poor’s 500-stock index. In times of confusion or anxiety on Wall Street, investors are more eager to buy this insurance, and thus agree to pay higher premiums to get it. This pushes up the level of the VIX. “It’s analogous to buying fire insurance,” Professor Whaley said. “If there’s some reason to believe there’s an arsonist in your neighborhood, you’re going to be willing to pay more for insurance.”

The index is not an arbitrary number: it offers guidance for the expected percentage change of the S.& P. 500. Based on a formula, Friday’s close of around 70 suggests that investors think the S. & P. 500 could move up or down about 20 percent in the next 30 days — an almost unheard-of swing. So the higher the number, the bigger the swing investors think the market will take. Put another way, the higher the VIX, the less investors know about where the stock market is headed. The current level shows that “investors are still very uncertain about where things will go,” said Meg Browne of Brown Brothers Harriman, a currency strategist who was keeping a close eye on the VIX as the stock market soared last Monday.
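The arithmetic in the paragraph above can be reproduced with a standard rule of thumb: the VIX is an annualized volatility quoted in percent, so scaling it by the square root of the time fraction gives the implied move over a shorter horizon. This is a back-of-the-envelope sketch (the exact CBOE methodology is more involved, and `expected_move` is an illustrative name, not an official formula):

```python
import math

def expected_move(vix, days=30):
    """Approximate expected percentage move of the S&P 500 over `days`,
    implied by a VIX level (annualized volatility in percent)."""
    return (vix / 100.0) * math.sqrt(days / 365.0)

# A VIX close near 70 implies roughly a 20% expected 30-day swing,
# matching the figure quoted in the article.
print(round(expected_move(70), 3))
```

The same scaling explains why a "normal" VIX in the teens corresponds to single-digit expected monthly swings.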

Since 2004, investors have been able to buy futures contracts on the VIX itself, providing a way to hedge against volatility in the market. Options on the VIX have been available since 2006. “You have seen more and more investors using it as an avenue toward hedging their portfolios,” said Chris Jacobson, chief options strategist at the Susquehanna Financial Group. In times of crisis, “while you’re losing your portfolio, you could make some money on the increase in volatility,” he said. Some investors are skeptical about the utility of the index. “If you’re trading the markets, you pretty much know the fear, you know the volatility. I don’t need an index to tell me there’s volatility out there,” Mr. Larson said.

Robert E. Whaley
email : whaley [at] vanderbilt [dot] edu



What caused the meltdown on Wall Street? Greed. Lax regulation. Panic. And maybe the very biological makeup of investors’ brains. Eight years ago a handful of brain scientists began using MRI scanners, psychological tests and an emerging understanding of brain anatomy to try to overturn traditional economic theories that assume people always act rationally when it comes to financial decisions. To understand the market, these researchers said, you needed to get inside people’s heads. They called their new field neuroeconomics.

If proof was needed that markets can be unpredictable, irrational and cruel, the past few weeks provided it. Bear Stearns and Merrill Lynch have been swallowed up by emergency mergers. The government has bailed out Fannie Mae, Freddie Mac and AIG. Lehman Brothers is bankrupt.  So, can these neuroeconomists shed any light on what went wrong? Surprisingly, yes. “Fear plus herding equals panic,” says Gregory
Berns, a neuroeconomist at Emory University. “You bet it’s biologically based.”

At the core of the market mess are securities that were backed by extremely risky mortgages. The theory was that slicing and dicing mortgages diluted the risk away. But the ratings agencies were being compensated by issuers of the mortgage-backed securities, and neuroeconomics says that created big problems. “You don’t get mistakes this big based on stupidity alone,” says George Loewenstein of Carnegie Mellon University. “It’s when you combine stupidity and people’s incentives that you get errors of this magnitude.”

Consider this forthcoming research by Loewenstein, Roberto Weber and John Hamman, all of Carnegie Mellon. They organized volunteers into pairs. One partner is given $10 and told to split it however he sees fit. On average, the deciding partner keeps $8 and gives away $2. Then the researchers repeat the game. This time, the decider pays an “analyst” to decide how to split the money fairly. The game continues
for multiple rounds and the decider can fire the analyst. With this change, the decider gets everything. Paying somebody else to ensure assets are divided fairly actually makes things less fair.

Colin Camerer, an economist at Caltech, blames “diffusion of responsibility” for the problems. His own research identifies another problem: Neither investors nor bankers were likely to be considering worst-case scenarios. Camerer conducted experiments in which two people engage in a negotiating game on how to split $5. But each time
they fail to come to an agreement, the value of the pot drops. The negotiators can check the total value of the money by clicking colored boxes on a computer screen. But only 10% look to see what will happen in the worst case.

To make matters worse, hedge funds were bragging about uncanny returns, making the impossible seem possible. But some studies show that these results may have been inflated by a lack of disclosure, Camerer says. Brain imaging studies show that investors as a whole get more and more used to big returns, and thus take bigger and bigger risks in a bull market–and then the bubble pops and stockholders start selling like mad.

One reason: Investors fear losing more than they look forward to winning. In a 2007 paper, researchers used MRI scans to watch the brains of people as they decided whether to take 50/50 gambles. Gains caused brains to light up in areas that release dopamine (the chemical boosted by Zoloft and Prozac); losses caused activity in those same areas to decrease. Researchers could predict what people would do based on the size of the increases.
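The asymmetry described here, losses looming larger than equivalent gains, is the loss aversion of behavioral economics. A minimal sketch of how a loss-averse agent evaluates a 50/50 gamble; the coefficient `lambda_` (a value near 2 is common in that literature) is illustrative and not a figure from the paper cited above:

```python
def accepts_gamble(gain, loss, lambda_=2.0):
    """Return True if a loss-averse agent accepts a 50/50 gamble.
    Losses are weighted lambda_ times as heavily as equivalent gains,
    so the agent accepts only when 0.5*gain outweighs 0.5*lambda_*loss."""
    return 0.5 * gain - 0.5 * lambda_ * loss > 0

print(accepts_gamble(100, 100))  # an even-money coin flip is declined
print(accepts_gamble(250, 100))  # a much larger upside is required
```

With `lambda_` = 2, the agent demands roughly twice as much upside as downside before taking a fair coin flip, which is the pattern the brain-imaging results above would predict.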

Dread, the anticipation of an expected loss, is another powerful force. Emory’s Berns has shown that people differ in how they respond to expected pain. He gave electric shocks to people in an MRI machine, and then gave them the option of either getting an intense shock immediately or a less intense shock later. People whose brains started lighting up in areas associated with pain beforehand were more likely to decide to get the pain over with. They would also have sold stock.

So what’s a regulator to do? One argument against big bailouts is moral hazard–the idea that if you bail the banks out now, future bankers will take even bigger risks. Caltech’s Camerer points out that people are naturally shortsighted. People with health insurance do spend more on care, he says, but people who rent cars don’t get in more accidents, because there are more immediate risks, like bodily harm. But so far the government’s attempts to quell the risk have just reinforced the idea that something is very wrong. If you tell somebody not to think about white elephants, Loewenstein notes, they will do exactly that.

On the other hand, putting a floor in the market for these mortgage-backed securities, as the government’s plan tried to do, could ease investor panic, says Richard Peterson of MarketPsy Capital, who is trying to put neuroeconomic research to work in a $50 million hedge fund. “Things are unknowable,” Peterson says. “That is the X factor that is causing the risk aversion to accelerate.”

George Loewenstein
email : gL20 [at] andrew.cmu [dot] edu

Colin Camerer
email : camerer [at] hss.caltech [dot] edu


The Chemical Basis of Trust
Trust is essential to healthy social interactions, but how do we decide whether we can trust strangers? An article based on research supported by the Templeton Foundation and published in the June issue of Scientific American argues that the hormone oxytocin enhances our ability to trust strangers who exhibit non-threatening signals.

The article, “The Neurobiology of Trust,” by Paul J. Zak, is based on original research with an experimental situation that the author calls the “trust game.” It is a modification of a similar game developed in the mid-1990s by the experimental economists Joyce Berg, John Dickhaut, and Kevin McCabe. The game allows test subjects to transfer their money to a stranger if they trust the stranger to reciprocate by transferring more back.

When we are trusted, Zak found, our brains release oxytocin, which makes us more trustworthy; the subjects with the highest levels of oxytocin returned the most money to their partners. Moreover, the rise in oxytocin levels, and not the absolute level, made the difference. Zak also found that subjects who inhaled an oxytocin nasal spray were more likely to trust others. Those given oxytocin transferred 17 percent more money than control subjects who inhaled a placebo. Twice as many subjects who received oxytocin gave all their cash to their partners.
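The payoff structure of the Berg–Dickhaut–McCabe design mentioned above can be sketched as follows. The tripling multiplier is the classic convention of that game; the article does not specify the exact parameters Zak used, so treat the numbers as illustrative:

```python
def trust_game_payoffs(endowment, sent, returned, multiplier=3):
    """Payoffs in a Berg-Dickhaut-McCabe style trust game.
    The investor sends `sent` out of `endowment`; the amount is
    multiplied (classically tripled) before reaching the trustee,
    who chooses how much (`returned`) to send back."""
    pot = sent * multiplier
    assert 0 <= sent <= endowment and 0 <= returned <= pot
    investor = endowment - sent + returned
    trustee = pot - returned
    return investor, trustee

# Full trust met with generous reciprocation leaves both better off
# than the $10 starting point:
print(trust_game_payoffs(10, 10, 15))
```

The structure makes the dilemma explicit: trusting grows the total pie, but the investor profits only if the trustee reciprocates, which is exactly the behavior Zak links to oxytocin.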

Oxytocin is best known as the hormone that induces labor in pregnant women. But Zak maintains that its role in the development of trust has implications for a range of important issues, from the growth of wealth in developing countries to the nature of diseases such as autism to the physiological basis of virtuous behaviors. A professor
of economics and founding director of the Center for Neuroeconomics Studies at Claremont Graduate University, Zak also serves as clinical professor of neurology at Loma Linda University Medical Center. His new book, Moral Markets: The Critical Role of Values in the Economy, was also supported by JTF, and was published by Princeton University Press this year.


Paul J. Zak
email : paul [at] pauljzak [dot] com

‘Might have been’ key in evaluating behavior
by Ruth SoRelle  /  August 2007

“What might have been,” or fictive learning, affects the brain and plays an important role in the choices individuals make – and may play a role in addiction, said Baylor College of Medicine researchers and others in a report that appeared in the Proceedings of the National Academy of Sciences. These “fictive learning” experiences, governed by what might have happened under different circumstances, “often dominate the evaluation of the choices we make now and will make in
the future,” said P. Read Montague Jr., Ph.D., professor of neuroscience at BCM and director of the BCM Human Neuroimaging Laboratory and the newly formed Computational Psychiatry Unit. “These fictive signals are essential in a person’s ability to assess the quality of his or her actions above and beyond simple experiences that
have occurred in the immediately proximal time.”

Blood flow reflects brain’s response to risk and reward
Using techniques honed in previous experiments that studied trust, Montague and his colleagues used an investment game to test the effects of these “what if” thoughts on the decisions of 54 subjects. Using functional magnetic resonance imaging (fMRI) to measure blood flow changes in specific areas of the brain, they precisely measured responses to economic gains and losses. These blood flow changes reflect alterations in the activity of nerve cells in the vicinity. In this case, the researchers measured the brain’s response to “what could have been acquired” versus “what was acquired.” This newly discovered “fictive learning” signal was measured, localized and precisely parsed from the brain’s standard reward signal, which reflects actual experience. Each subject took part in a sequential gambling task. The player makes a new investment allocation (a bet) and then receives a “snippet” of information about the market – either the market went up and the investment was a good one, or the market went down and the play was a loss. Each subject received $100 and played 10 markets, making 20 decisions about each.
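One way to picture the fictive signal described above is as the gap between the best outcome available in hindsight and the outcome actually obtained. The sketch below is an illustrative simplification of that idea, not the published regressor, and the function name is hypothetical:

```python
def fictive_error(allocation, market_return, stake):
    """Gap between the best outcome available in hindsight and the
    outcome actually obtained -- a simple stand-in for the "what could
    have been acquired" vs. "what was acquired" comparison.
    `allocation` is the fraction of `stake` invested in the market."""
    actual = allocation * stake * market_return
    # In hindsight, the best move was all-in if the market rose,
    # and staying out entirely if it fell.
    best = max(market_return, 0.0) * stake
    return best - actual

print(fictive_error(0.3, 0.10, 100))   # market rose 10%, bet only 30%: ~7 "left on the table"
print(fictive_error(0.3, -0.10, 100))  # market fell 10%: staying out would have saved ~3
```

Note the error is nonnegative here by construction: any allocation short of the hindsight-optimal one generates some regret, which is the quantity the ventral caudate activity tracked.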

Regret affects future decisions
Montague and his associates found that fictive learning – the “what might have happened” – affected the brains of the subjects and played an important role in their decisions about the game. The effect manifested as a distinct selective activation signal in a part of the brain called the ventral caudate nucleus. The emotion of regret for a path chosen or not taken can strongly influence future decision making. The fictive learning signal discovered by Montague and the team of researchers does not necessarily manifest as a conscious “feeling,” but it contributes to the brain’s computation and planning operations in a robust way that is now available to rigorous
experimental analysis in health and in diseases of the brain/mind. “We used real world market data – the crash of 1929, the bubble of the late 1990s and so on – to probe each subject’s brain response to fictive signals (what could have been) as they navigated their choices. This means we now have a kind of neural catalogue of how
famous stock market episodes affect signals in the average human brain,” said Montague. He plans to use the findings from this study to explore the balance of choices between actual and fictive outcomes.

New tool for studying addiction
“These results provide a new tool for exploring issues related to addiction,” Montague said. “For example, why does a person choose using a drug even though he or she can imagine the bad consequences that can result? We now have a way to measure quantitatively the balance between reward-seeking (like seeking a drug) and the thoughts that could intervene.”

“The brain has a well-defined system for pursuing actual rewards based on actual outcomes,” said Terry Lohrenz, Ph.D., instructor in the neuroimaging laboratory and the report’s first author. “The system is complex, but recent research has begun to dissect it in great detail. The importance of that work is that the reward guidance
signals are exactly those hijacked by drugs of abuse.” “Identifying real neural signals to fictive outcomes now positions us to understand how our more abstract thoughts – the way we contextualize or frame our experience – guide our behavior,” Lohrenz added.

P. Read Montague
email : readm [at] bcm.tmc [dot] edu

Terry Lohrenz
email : tlohrenz [at] hnl.bcm.tmc [dot] edu

Has evolution essentially bootstrapped our penchant for intellectual concepts to the same kinds of laws that govern systems such as financial markets?
by Jonah Lehrer  /  August 8, 2008

Read Montague is getting frustrated. He’s trying to show me his newest brain scanner, a gleaming white fMRI machine that looks like a gargantuan tanning bed. The door, however, can be unlocked only by a fingerprint scan, which isn’t recognizing Montague’s fingers. Again and again, he inserts his palm under the infrared light, only to get the same beep of rejection. Montague is clearly growing frustrated — “I can’t get into my own scanning room!” he yells, at no one in particular — but he also appreciates the irony. A pioneer of brain imaging, he oversees one of the premier fMRI setups in the world, and yet he can’t even scan his own hand. “I can image the mind,” he says. “But apparently my thumb is beyond the limits of science.”

Montague is director of the Human Neuroimaging Lab at Baylor College of Medicine in downtown Houston. His lab recently moved into a sprawling, purpose-built space, complete with plush carpets, fancy ergonomic chairs, matte earth-toned paint and rows of oversize computer monitors. (There are still some technical kinks being worked out, hence the issue with the hand scanner.) If it weren’t for the framed sagittal brain images, the place could pass for a well-funded Silicon Valley startup. The centerpiece of the lab, however, isn’t visible. Montague has access to five state-of-the-art fMRI machines, which occupy the perimeter of the room. Each of the scanners is hidden behind a thick concrete wall, but when the scanners are in use — and they almost always are — the entire lab seems to quiver with a high-pitched buzz. Montague, though, doesn’t seem to mind. “It’s not the prettiest sound,” he admits. “But it’s the sound of data.”

Montague, who is uncommonly handsome, with a strong jaw and a Hollywood grin, first got interested in the brain while working in the neuroscience lab of Nobel Laureate Gerald Edelman as a post-doc. “I was never your standard neuroscientist,” he says. “I spent a lot of time thinking about how the brain should work, if I had designed it.” For Montague the cortex was a perfect system to model, since its incomprehensible complexity meant that it depended on some deep, underlying order. “You can’t have all these cells interacting with each other unless there’s some logic to the interaction,” he says. “It just looked like noise, though — no one could crack the code.” That’s what Montague wanted to do. The human brain, however, is an incredibly well-encrypted machine. For starters it’s hard to even know what the code is: Our cells express themselves in so many different ways. There’s the language of chemistry, with brain activity measured in squirts of neurotransmitter and kinase enzymes. And then there’s the electrical conversation of the cortex, so that each neuron acts like a biological transistor, emitting a binary code of action potentials. Even a silent cell is conveying some sort of information — the absence of activity is itself a form of activity.

Montague realized that if he was going to solve the ciphers of the mind, he would need a cryptographic key, a “cheat sheet” that showed him a small part of the overall solution. Only then would he be able to connect the chemistry to the electricity, or understand how the signals of neurons represented the world, or how some spasm of cells caused human nature. “There are so many different ways to describe
what the brain does,” Montague says. “You can talk about what a particular cell is doing, or look at brain regions with fMRI, or observe behavior. But how do these things connect? Because you know they are connected; you just don’t know how.” That’s when Montague discovered the powers of dopamine, a neurotransmitter in the brain. His research on the singular chemical has drawn tantalizing connections between the peculiar habits of our neurons and the peculiar habits of real people, so that the various levels of psychological description — the macro and the micro, the behavioral and the cellular — no longer seem so distinct. What began as an investigation into a single neurotransmitter has morphed into an exploration of the social brain: Montague has pioneered research that allows him to link the obscure details of the cortex to all sorts of important phenomena, from stock market bubbles to cigarette addiction to the development of trust. “We are profoundly social animals,” he says. “You can’t really understand the brain until you understand how these social behaviors happen, or what happens when they go haywire.”

And yet even as Montague attempts to answer these incredibly complex questions, his work remains rooted in the molecular details of dopamine. No matter what he’s talking about — and he likes to opine on everything from romantic love to the neural correlates of the Coca-Cola logo — his sentences are sprinkled with the jargon of a neural cryptographer. The brain remains a black box, an encrypted mystery, but the transactions of dopamine are proving to be the Rosetta Stone, the missing link that just might allow the code to be broken. The importance of dopamine was discovered by accident. In 1954 James Olds and Peter Milner, two neuroscientists at McGill University, decided to implant an electrode deep into the center of a rat’s brain. The
precise placement of the electrode was largely happenstance: At the time the geography of the mind remained a mystery. But Olds and Milner got lucky. They inserted the needle right next to the nucleus accumbens (NAcc), a part of the brain dense with dopamine neurons and involved with the processing of pleasurable rewards, like food and sex.

Olds and Milner quickly discovered that too much pleasure can be fatal. After they ran a small current into the wire, so that the NAcc was continually excited, the scientists noticed that the rodents lost interest in everything else. They stopped eating and drinking. All courtship behavior ceased. The rats would just cower in the corner of
their cage, transfixed by their bliss. Within days all of the animals had perished. They had died of thirst. It took several decades of painstaking research, but neuroscientists eventually discovered that the rats were suffering from an excess of dopamine. The stimulation of the brain triggered a massive release of the neurotransmitter, which
overwhelmed the rodents with ecstasy. In humans addictive drugs work the same way: A crack addict who has just gotten a fix is no different from a rat in electrical rapture. This, then, became the dopaminergic cliché — it was the chemical explanation for sex, drugs, and rock ‘n’ roll. But that view of the neurotransmitter was vastly oversimplified. What wasn’t yet clear was that dopamine is also a profoundly important source of information. It doesn’t merely let us take pleasure in the world; it allows us to understand the world.

The first experimental insight into this aspect of the dopamine system came from the pioneering research of Wolfram Schultz, a neuroscientist at Cambridge University. He was originally interested in the neurotransmitter because of its role in triggering Parkinson’s disease, which occurs when dopamine neurons begin to die in a part of
the brain that controls bodily movements. Schultz recorded from cells in the monkey brain, hoping to find those cells involved in the production of movement. He couldn’t find anything. “It was a classic case of experimental failure,” he says. But after years of searching in vain, Schultz started to notice something odd about these dopamine
neurons: They began to fire just before the monkeys got a reward. (Originally, the reward was a way of getting the monkeys to move.) “At first I thought it was unlikely that an individual cell could represent anything so complicated,” Schultz says. “It just seemed like too much information for one neuron.” After hundreds of experimental
trials, Schultz began to believe his own data: He realized that he had found, by accident, the reward mechanism at work in the primate brain. “Only in retrospect can I appreciate just how lucky we were,” he says. After publishing a series of landmark papers in the mid-1980s, Schultz set out to decipher this reward circuitry in exquisite detail. How, exactly, did these single cells manage to represent a reward? His experiments followed a simple protocol: He played a loud tone, waited for a few seconds, and then squirted a few drops of apple juice into the mouth of a monkey. While the experiment was unfolding, Schultz was probing the dopamine-rich areas of the monkey brain with a needle that monitored the electrical activity inside individual cells. At first the dopamine neurons didn’t fire until the juice was delivered; they
were responding to the actual reward. However, once the animal learned that the tone preceded the arrival of juice — this requires only a few trials — the same neurons began firing at the sound of the tone instead of the sweet reward. And then eventually, if the tone kept on predicting the juice, the cells went silent. They stopped firing altogether.

When Schultz began publishing his data, nobody quite knew what to make of these strange neurons. “It was very, very tough to figure out what these cells were encoding,” Schultz says. He knew that the cells were learning something about the juice and the tone, but he couldn’t figure out how they were learning it. The code remained impenetrable. At the time Montague was a young scientist at the Salk Institute, working in the neurobiology lab of Terry Sejnowski. His approach to
the brain was rooted in the abstract theories of computer science, which he hoped would shed light on the software used by the brain. Peter Dayan, a colleague of Montague’s at Salk, had introduced him to a model called temporal difference reinforcement learning (TDRL), pioneered by computer scientists Rich Sutton and Andrew Barto, who both worked on models of artificial intelligence. Sutton and Barto wanted to develop a “neuronlike” program that could learn simple rules and behaviors in order to achieve a goal. The basic premise is straightforward: The software makes predictions about what will happen — about how a checkers game will unfold, for example — and then compares these predictions with what actually happens. If the prediction is right, it gets reinforced. However, if the prediction is wrong, the software reevaluates its representation of the game.
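The premise can be captured in a few lines of code. This is a toy sketch of error-driven prediction, not Sutton and Barto’s full algorithm; the learning rate and the outcomes are made-up values.

```python
# Toy prediction-error learning: the program predicts an outcome,
# compares the prediction with what actually happens, and nudges the
# prediction by a fraction of the error. (Illustrative values only.)

def update(prediction, outcome, learning_rate=0.5):
    error = outcome - prediction        # how wrong the prediction was
    return prediction + learning_rate * error

prediction = 0.0
for outcome in [1.0, 1.0, 1.0, 1.0]:    # the same event keeps happening
    prediction = update(prediction, outcome)
print(round(prediction, 4))              # prints 0.9375
```

Each pass shrinks the error by half, so the prediction converges on the outcome it keeps observing; when prediction and outcome disagree, the representation is revised rather than reinforced.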

Montague was entranced by these software prototypes. “It was just so clearly the most efficient way to learn,” he says. The problem was that TDRL remained purely theoretical, a system both elegant and imaginary. Even though computer scientists had begun to adapt the programming strategy for various practical purposes, such as running a bank of elevators or determining flight schedules, no one had found a
neurological system that worked like this. But one spring day in 1991, Dayan burst into Montague’s small office. “He was very excited and shoved these figures from some new paper in my face,” Montague remembers. “He kept on saying to me, ‘What does this look like? What does this look like?'” The figures were from Schultz’s experiments with dopamine neurons, and they showed how these cells reacted to the tone and the juice. “I thought he had faked the data,” Montague says. “Dayan was a big prankster, and I assumed he’d photocopied some of our own figures [on TDRL] just to tease me. It looked too good to be true.” Montague immediately realized that he and Dayan could make sense of Schultz’s mysterious neurons. They knew what these dopamine cells were doing; they had seen this code before. “The only reason we could see it so clearly,” Montague says, “is because we came at it from this theoretical angle. If you were an experimentalist seeing this data, it would have been extremely confusing. What the hell are these cells doing? Why aren’t they just responding to the juice?” That same day Montague and Dayan began writing a technical paper that laid out their insight, explaining how these neurons were making precise predictions about future rewards. But the paper — an awkward mix of Schultz’s dopamine recordings and equations borrowed from computer science — went nowhere. “We wrote that paper 11 times,” Montague says. “It got bounced from every journal. I came this close to leaving the field. I realized that neuroscience just wasn’t ready for theory, even if the theory made sense.”

Nevertheless, Montague and Dayan didn’t give up. They published their ideas in obscure journals, like Advances in Neural Information Processing Systems. When the big journals rejected their interpretation of monkey neurons, they instead looked at the nervous systems of honeybees, which relied on a version of TDRL when foraging
for nectar. (That paper got published in Nature in 1995.) “We had to drag the experimentalists kicking and screaming,” Montague says. “They just didn’t understand how these funny-looking equations could explain their data. They told us, ‘We need more data.’ But what’s the point of data if you can’t figure it out?” The crucial feature of these dopamine neurons, say Montague and Dayan, is that they are more concerned with predicting rewards than with the rewards themselves. Once the cells memorize the simple pattern — a loud tone predicts the arrival of juice — they become exquisitely sensitive to variations on the pattern. If the cellular predictions proved correct and the primates experienced a surge of dopamine, the prediction was reinforced. However, if the pattern was violated — if the tone sounded but the juice never arrived — then the monkey’s dopamine neurons abruptly decreased their firing rate. This is known as the “prediction error signal.” The monkey got upset because its predictions of juice were wrong. What’s interesting about this system is that it’s all
about expectation. Dopamine neurons constantly generate patterns based upon experience: If this, then that. The cacophony of reality is distilled into models of correlation. And if these predictions ever prove incorrect, then the neurons immediately readjust their expectations. The discrepancy is internalized; the anomaly is remembered. “The accuracy comes from the mismatch,” Montague says. “You learn how the world works by focusing on the prediction errors, on the events that you didn’t expect.” Our knowledge, in other words, emerges from our cellular mistakes. The brain learns how to be right by focusing on what it got wrong.
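The shift Schultz observed — dopamine firing migrating from the juice to the tone — falls directly out of this prediction-error rule. Below is a minimal TD-style simulation of the tone-then-juice trials; the three-step chain, the learning rate, and the reward value are illustrative assumptions, not Schultz’s actual recordings.

```python
# TD(0) sketch of Schultz's tone-then-juice trials. The chain
# (tone -> delay -> juice) and all parameters are illustrative.
# delta plays the role of the dopamine "prediction error signal."

def run_trials(n_trials, alpha=0.3):
    v_tone, v_delay = 0.0, 0.0
    history = []
    for _ in range(n_trials):
        # The tone arrives unpredictably, so the transition into it
        # starts from a baseline value of 0.
        d_tone = v_tone - 0.0            # surprise at the tone
        d_delay = v_delay - v_tone       # no reward yet, just waiting
        v_tone += alpha * d_delay
        d_juice = 1.0 - v_delay          # juice worth 1 finally arrives
        v_delay += alpha * d_juice
        history.append((d_tone, d_delay, d_juice))
    return history

history = run_trials(200)
print([round(d, 2) for d in history[0]])   # error at the juice: [0.0, 0.0, 1.0]
print([round(d, 2) for d in history[-1]])  # error moved to the tone: [1.0, 0.0, 0.0]
```

On the first trial only the juice is surprising; once the tone reliably predicts the juice, the error signal fires at the tone and goes silent at the reward — the same pattern Schultz saw in his monkeys.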

Despite his frustration with the field, Montague continued to work on dopamine. In 1997 he published a Science paper with Dayan and Schultz whose short title was audaciously grand: “A Neural Substrate of Prediction and Reward.” The paper has since been cited more than 1,200 times, and it remains the definitive explanation of how the brain parses reality into a set of accurate expectations, which are measured
out in short bursts of dopamine. A crucial part of the cellular code had been cracked. But Montague was getting restless. “I wanted to start asking bigger questions,” he says. “Here’s this elegant learning system, but how does it fit with the rest of the brain? And can we take this beyond apple juice?” At first glance the dopamine system
might seem largely irrelevant to the study of human behavior. Haven’t we evolved beyond the brutish state of “reward harvesting,” where all we care about is food and sex? Dopamine might explain the simple psychology of a lizard, or even a monkey sipping juice, but it seems a stretch for it to explain the Promethean mind of a human. “One of the distinguishing traits of human beings is that we chase ideas, not just primary rewards,” Montague says. “What other animal goes on hunger strike? Or abstains from sex? Or blows itself up in a cafe in the name of God?” These unique aspects of human cognition seem impossible to explain with neurons that track and predict rewards. After all, these behaviors involve the rejection of rewards: We are shrugging off tempting treats because of some abstract belief or goal.

Montague’s insight, however, was that ideas are just like apple juice. From the perspective of the brain, an abstraction can be just as rewarding as the tone that predicts the reward. Evolution essentially bootstrapped our penchant for intellectual concepts to the same reward circuits that govern our animal appetites. “The guy who’s on hunger strike for some political cause is still relying on his midbrain dopamine neurons, just like a monkey getting a treat,” Montague says. “His brain simply values the cause more than it values dinner.” According to Montague, the reason abstract thoughts can be so rewarding is that the brain relies on a common neural currency for evaluating alternatives. “It’s clear that you need some way to compare your options, even if your options come from very different categories,” he says. By representing everything in terms of neuron firing rates, the human brain is able to choose the abstract thought over the visceral reward, as long as the abstraction excites our cells more than apple juice. That’s what makes ideas so powerful: No matter how esoteric or ethereal they get, they are ultimately fed back into the same system that makes us want sex and sugar. As Montague notes, “You don’t have to dig very far before it all comes back to your loins.”

In recent years Montague has shown how this basic computational mechanism is a fundamental feature of the human mind. Consider a paper on the neural foundations of trust, recently published in Science. The experiment was born out of Montague’s frustration with the limitations of conventional fMRI. “The most unrealistic element [of fMRI experiments] is that we could only study the brain by itself,” Montague says. “But when are brains ever by themselves?” And so Montague pioneered a technique known as hyper-scanning, allowing subjects in different fMRI machines to interact in real time. His experiment revolved around a simple economic game in which getting the maximum reward required the strangers to trust one another. However, if one of the players grew especially selfish, he or she could always steal from the pot and erase the tenuous bond of trust. By monitoring the players’ brains, Montague was able to predict whether or not someone would steal money several seconds before the theft actually occurred. The secret was a brain region known as the caudate nucleus, which closely tracked the payouts from the other player. Montague noticed that whenever the caudate exhibited reduced activity, trust tended to break down.

But what exactly is the caudate computing? How do we decide whom to trust with our money? And why do we sometimes decide to stop trusting those people? It turned out that the caudate worked just like the reward cells in the monkey brain. At first the caudate didn’t get excited until the subjects actually trusted one another and garnered their separate rewards. But over time this brain area started to expect trust, so that it fired long before the reward actually arrived. Of course, if the bond was broken — if someone cheated and stole money — then the neurons stopped firing; social assumptions were proven wrong. (Montague is currently repeating this experiment with a collaborating lab in China so that he can detect the influence of culture on social interactions.) The point, he says, is that people were using this TDRL strategy — a strategy that evolved to help animals find caloric rewards — to model another mind. Instead of predicting the arrival of juice, the neurons were predicting the behavior of someone else’s brain.

A few years ago, Montague was reviewing some old papers on TDRL theory when he realized that the system, while effective and efficient, was missing something important. Although dopamine neurons excelled at measuring the mismatch between their predictions of rewards and those that actually arrived — these errors provided the input for learning — they would learn much more quickly if they could also incorporate the
prediction errors of others. Montague called this a “fictive error learning signal,” since the brain would be benefiting from hypothetical scenarios: “You’d be updating your expectations based not just on what happened, but on what might have happened if you’d done something differently.” As Montague saw it, this would be a very valuable addition to our cognitive software. “I just assumed that evolution would use this approach, because it’s too good an idea not to use,” he says.

The question, of course, is how to find this “what if” signal in the brain. Montague’s clever solution was to use the stock market. After all, Wall Street investors are constantly comparing their actual returns against the returns that might have been, if only they’d sold their shares before the crash or bought Google stock when the company first went public. The experiment went like this: Each subject was
given $100 and some basic information about the “current” state of the stock market. After choosing how much money to invest, the players watched nervously as their investments either rose or fell in value. The game continued for 20 rounds, and the subjects got to keep their earnings. One interesting twist was that instead of using random simulations of the stock market, Montague relied on distillations of data from famous historical markets. Montague had people “play” the Dow of 1929, the Nasdaq of 1998, and the S&P 500 of 1987, so the neural responses of investors reflected real-life bubbles and crashes.

The scientists immediately discovered a strong neural signal that drove many of the investment decisions. The signal was fictive learning. Take, for example, this situation. A player has decided to wager 10 percent of her total portfolio in the market, which is a rather small bet. Then she watches as the market rises dramatically in
value. At this point, the regret signal in the brain — a swell of activity in the ventral caudate, a reward area rich in dopamine neurons — lights up. While people enjoy their earnings, their brain is fixated on the profits they missed, figuring out the difference between the actual return and the best return “that could have been.” The more we regret a decision, the more likely we are to do something different the next time around. As a result investors in the experiment naturally adapted their investments to the ebb and flow of the market. When markets were booming, as in the Nasdaq bubble of the late 1990s, people perpetually increased their investments.
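The regret computation described above can be sketched in a few lines. This is an illustrative model of a betting game like the one in the experiment; the “best return that could have been” rule and the bet-update step are assumptions for the sketch, not the published analysis.

```python
# Sketch of a fictive ("what if") error signal in the investment game.
# The payoff rule and bet-update constants are illustrative assumptions.

def play_round(bet_fraction, market_return):
    """Return (actual gain, fictive error) for one round.

    bet_fraction: share of the portfolio invested (0..1)
    market_return: fractional change in the market this round
    """
    actual = bet_fraction * market_return
    # Best outcome that "could have been": bet everything if the market
    # rose, bet nothing if it fell.
    best = max(market_return, 0.0)
    fictive_error = best - actual       # regret: the gain that was missed
    return actual, fictive_error

# Regret after a rising market pushes the next bet upward -- the
# escalation observed during simulated booms.
bet = 0.10
for r in [0.08, 0.05, 0.12]:            # a small "bubble": market keeps rising
    actual, regret = play_round(bet, r)
    bet = min(1.0, bet + 0.5 * regret)  # illustrative learning rate
print(round(bet, 3))                    # prints 0.208
```

Three rising rounds push the wager from 10 percent toward 20 percent of the portfolio: the fictive error, not the actual earnings, is what drives the escalation.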

But fictive learning isn’t always adaptive. Montague argues that these computational signals are also a main cause of financial bubbles. When the market keeps going up, people are naturally inclined to make larger and larger investments in the boom. And then, just when investors are most convinced that the bubble isn’t a bubble — many of Montague’s subjects eventually put all of their money into the booming market — the bubble bursts. The Dow sinks, the Nasdaq collapses. At this point investors race to dump any assets that are declining in value, as their brain realizes that it made some very expensive prediction errors. That’s when you get a financial panic.

Such fictive-error learning signals aren’t relevant only for stock market investors. Look, for instance, at addiction. Dopamine has long been associated with addictive drugs, such as cocaine, that overexcite these brain cells. The end result is that addicts make increasingly reckless decisions, forgoing long-term goals for the sake of an intensely pleasurable short-term fix. “When you’re addicted to a drug, your brain is basically convinced that this expensive white powder is worth more than your marriage or life,” Montague says. In other words, addiction is a disease of valuation: Dopamine cells have lost track of what’s really important.

Montague wanted to know which part of the dopamine system was distorted in the addicted brain. He began to wonder if addiction was, at least in part, a disease of fictive learning. Addicted smokers will continue to smoke even when they know it’s bad for them. Why can’t they instead revise their models of reward? Last year Montague decided to replicate his stock market study with a large group of chronic
smokers. It turned out that smokers were perfectly able to compute a “what if” learning signal, which allowed them to experience regret. Like nonsmokers they realized that they should have invested differently in the stock market. Unfortunately, this signal had no impact on their decision making, which led them to make significantly less money during the investing game. According to Montague, this data
helps explain why smokers continue to smoke even when they regret it. Although their dopamine neurons correctly compute the rewards of an extended life versus a hit of nicotine — they are, in essence, asking themselves, “What if I don’t smoke this cigarette?” — their brain doesn’t process the result. That feeling of regret is conveniently ignored. They just keep on lighting up.

Montague exudes the confidence of a scientist used to confirming his hypotheses. He buzzes with ideas for new experiments — “I get bored rather easily,” he says — and his lab is constantly shifting direction, transitioning from dopamine to neuroeconomics to social neuroscience. Montague is currently consumed with questions about how people interact when they’re part of a group. “A mob or a market is not just a collection of discrete individuals,” he says. “It’s something else entirely. You would do things in a group that you would never do by yourself. But what’s happening in your brain? We’ve got all these sociological studies but no hard data.” Montague’s been warned that the project is too complicated, that social interactions are too subtle and complex to pick up in a scanner, but he’s convinced
otherwise. “If I’d listened to the naysayers,” he says, “I’d still be studying honeybees.”

Montague’s experiments take advantage of his unique fMRI setup. He has four people negotiate with one another as they decide how much to offer someone else during an investing game. While the group is bickering, Montague is monitoring the brain activity of everyone involved. He’s also infiltrated the group with a computer player that has been programmed to act just like a person with borderline personality disorder. The purpose of this particular experiment is to see how “one bad apple” can lead perfect strangers to also act badly. While Montague isn’t ready to share the results — he’s still gathering data — what he’s found so far is, he says, “stunning, shocking even…. For me the lesson has been that people act very badly in groups. And now we can see why.”

Such exuberance is well earned. In the space of a few short years, Montague has taken his theoretical model of learning — a model he borrowed from some old computer science textbooks — and shown that it’s an essential part of the human brain. He’s linked the transactions of a single neurotransmitter to a dizzying array of
behaviors, so that it’s now possible to draw a straight line between monkeys craving juice and stock market bubbles. A neurotransmitter that wasn’t supposed to matter is now our most important clue into the secret messages of the mind and the breakdown of social graces. The code hasn’t been broken. But for the first time, it’s getting cracked.

Jonah Lehrer
email : jonah.lehrer [at] gmail [dot] com



“In a fiduciary relation one person justifiably reposes confidence,
good faith, reliance and trust in another whose aid, advice or
protection is sought in some matter. In such a relation good
conscience requires one to act at all times for the sole benefit and
interests of another, with loyalty to those interests. A fiduciary
duty [1] is the highest standard of care at either equity or law. In
English common law the fiduciary relation is arguably the most
important concept within the portion of the legal system known as
equity. In the United Kingdom, the Judicature Acts merged the courts
of Equity (historically based in England’s Court of Chancery) with the
courts of common law, and as a result the concept of fiduciary duty
also became usable in common law courts. When a fiduciary duty is
imposed, equity requires a stricter standard of behavior than the
comparable tortious duty of care at common law. It is said the
fiduciary has a duty not to be in a situation where personal interests
and fiduciary duty conflict, a duty not to be in a situation where his
fiduciary duty conflicts with another fiduciary duty, and a duty not
to profit from his fiduciary position without express knowledge and
consent. A fiduciary cannot have a conflict of interest. It has been
said that fiduciaries must conduct themselves “at a level higher than
that trodden by the crowd”[2] and that “[t]he distinguishing or
overriding duty of a fiduciary is the obligation of undivided loyalty.””

“Self-dealing is the conduct of a trustee, an attorney, a corporate officer, or other
fiduciary that consists of taking advantage of his position in a
transaction and acting for his own interests rather than for the
interests of the beneficiaries of the trust, corporate shareholders,
or his clients. Self-dealing may involve misappropriation or
usurpation of corporate assets or opportunities. Michael McDonald,
Ph.D., Chair of Applied Ethics at The University of British Columbia,
provides this example: “using your government position to get a
summer job for your daughter.””

“In contrast to enlightened self-interest is simple greed or the
concept of “unenlightened self-interest”, in which it is argued that
when most or all persons act according to their own myopic
selfishness, that the group suffers loss as a result of conflict,
decreased efficiency because of lack of cooperation, and the increased
expense each individual pays for the protection of their own
interests. If a typical individual in such a group is selected at
random, it is not likely that this person will profit from such a
group ethic. Some individuals might profit, in a material sense, from
a philosophy of greed, but it is believed by proponents of enlightened
self-interest that these individuals constitute a small minority and
that the large majority of persons can expect to experience a net
personal loss from a philosophy of simple unenlightened selfishness.
Unenlightened self-interest can result in the tragedy of the commons.”

The Tragedy of the Commons
by Garrett Hardin  /  December 1968

Pathogenic Effects of Conscience
The long-term disadvantage of an appeal to conscience should be enough
to condemn it; but it has serious short-term disadvantages as well. If we
ask a man who is exploiting a commons to desist “in the name of
conscience,” what are we saying to him? What does he hear? –not only
at the moment but also in the wee small hours of the night when, half
asleep, he remembers not merely the words we used but also the
nonverbal communication cues we gave him unawares? Sooner or later,
consciously or subconsciously, he senses that he has received two
communications, and that they are contradictory: (i) (intended
communication) “If you don’t do as we ask, we will openly condemn you
for not acting like a responsible citizen”; (ii) (the unintended
communication) “If you do behave as we ask, we will secretly condemn
you for a simpleton who can be shamed into standing aside while the
rest of us exploit the commons.”

Everyman then is caught in what Bateson has called a “double bind.”
Bateson and his co-workers have made a plausible case for viewing the
double bind as an important causative factor in the genesis of
schizophrenia (17). The double bind may not always be so damaging, but
it always endangers the mental health of anyone to whom it is applied.
“A bad conscience,” said Nietzsche, “is a kind of illness.” To conjure
up a conscience in others is tempting to anyone who wishes to extend
his control beyond the legal limits. Leaders at the highest level
succumb to this temptation. Has any President during the past
generation failed to call on labor unions to moderate voluntarily
their demands for higher wages, or to steel companies to honor
voluntary guidelines on prices? I can recall none. The rhetoric used
on such occasions is designed to produce feelings of guilt in noncooperators.

For centuries it was assumed without proof that guilt was a valuable,
perhaps even an indispensable, ingredient of the civilized life. Now,
in this post-Freudian world, we doubt it. Paul Goodman speaks from the
modern point of view when he says: “No good has ever come from feeling
guilty, neither intelligence, policy, nor compassion. The guilty do
not pay attention to the object but only to themselves, and not even
to their own interests, which might make sense, but to their
anxieties” (18). One does not have to be a professional psychiatrist
to see the consequences of anxiety. We in the Western world are just
emerging from a dreadful two-centuries-long Dark Ages of Eros that was
sustained partly by prohibition laws, but perhaps more effectively by
the anxiety-generating mechanism of education. Alex Comfort has told
the story well in The Anxiety Makers (19); it is not a pretty one.

Since proof is difficult, we may even concede that the results of
anxiety may sometimes, from certain points of view, be desirable. The
larger question we should ask is whether, as a matter of policy, we
should ever encourage the use of a technique the tendency (if not the
intention) of which is psychologically pathogenic. We hear much talk
these days of responsible parenthood; the coupled words are
incorporated into the titles of some organizations devoted to birth
control. Some people have proposed massive propaganda campaigns to
instill responsibility into the nation’s (or the world’s) breeders.
But what is the meaning of the word responsibility in this context? Is
it not merely a synonym for the word conscience? When we use the word
responsibility in the absence of substantial sanctions are we not
trying to browbeat a free man in a commons into acting against his own
interest? Responsibility is a verbal counterfeit for a substantial
quid pro quo. It is an attempt to get something for nothing. If the
word responsibility is to be used at all, I suggest that it be in the
sense Charles Frankel uses it (20). “Responsibility,” says this
philosopher, “is the product of definite social arrangements.” Notice
that Frankel calls for social arrangements–not propaganda.

Mutual Coercion Mutually Agreed upon
The social arrangements that produce responsibility are arrangements
that create coercion, of some sort. Consider bank-robbing. The man who
takes money from a bank acts as if the bank were a commons. How do we
prevent such action? Certainly not by trying to control his behavior
solely by a verbal appeal to his sense of responsibility. Rather than
rely on propaganda we follow Frankel’s lead and insist that a bank is
not a commons; we seek the definite social arrangements that will keep
it from becoming a commons. That we thereby infringe on the freedom of
would-be robbers we neither deny nor regret.

The morality of bank-robbing is particularly easy to understand
because we accept complete prohibition of this activity. We are
willing to say “Thou shalt not rob banks,” without providing for
exceptions. But temperance also can be created by coercion. Taxing is
a good coercive device. To keep downtown shoppers temperate in their
use of parking space we introduce parking meters for short periods,
and traffic fines for longer ones. We need not actually forbid a
citizen to park as long as he wants to; we need merely make it
increasingly expensive for him to do so. Not prohibition, but
carefully biased options are what we offer him. A Madison Avenue man
might call this persuasion; I prefer the greater candor of the word
coercion.

Coercion is a dirty word to most liberals now, but it need not forever
be so. As with the four-letter words, its dirtiness can be cleansed
away by exposure to the light, by saying it over and over without
apology or embarrassment. To many, the word coercion implies arbitrary
decisions of distant and irresponsible bureaucrats; but this is not a
necessary part of its meaning. The only kind of coercion I recommend
is mutual coercion, mutually agreed upon by the majority of the people
affected. To say that we mutually agree to coercion is not to say that
we are required to enjoy it, or even to pretend we enjoy it. Who
enjoys taxes? We all grumble about them. But we accept compulsory
taxes because we recognize that voluntary taxes would favor the
conscienceless. We institute and (grumblingly) support taxes and other
coercive devices to escape the horror of the commons.

An alternative to the commons need not be perfectly just to be
preferable. With real estate and other material goods, the alternative
we have chosen is the institution of private property coupled with
legal inheritance. Is this system perfectly just? As a genetically
trained biologist I deny that it is. It seems to me that, if there are
to be differences in individual inheritance, legal possession should
be perfectly correlated with biological inheritance–that those who
are biologically more fit to be the custodians of property and power
should legally inherit more. But genetic recombination continually
makes a mockery of the doctrine of “like father, like son” implicit in
our laws of legal inheritance. An idiot can inherit millions, and a
trust fund can keep his estate intact. We must admit that our legal
system of private property plus inheritance is unjust–but we put up
with it because we are not convinced, at the moment, that anyone has
invented a better system. The alternative of the commons is too
horrifying to contemplate. Injustice is preferable to total ruin.

It is one of the peculiarities of the warfare between reform and the
status quo that it is thoughtlessly governed by a double standard.
Whenever a reform measure is proposed it is often defeated when its
opponents triumphantly discover a flaw in it. As Kingsley Davis has
pointed out (21), worshippers of the status quo sometimes imply that
no reform is possible without unanimous agreement, an implication
contrary to historical fact. As nearly as I can make out, automatic
rejection of proposed reforms is based on one of two unconscious
assumptions: (i) that the status quo is perfect; or (ii) that the
choice we face is between reform and no action; if the proposed reform
is imperfect, we presumably should take no action at all, while we
wait for a perfect proposal.

But we can never do nothing. That which we have done for thousands of
years is also action. It also produces evils. Once we are aware that
the status quo is action, we can then compare its discoverable
advantages and disadvantages with the predicted advantages and
disadvantages of the proposed reform, discounting as best we can for
our lack of experience. On the basis of such a comparison, we can make
a rational decision which will not involve the unworkable assumption
that only perfect systems are tolerable.

American Dream a Biological Impossibility, Neuroscientist Says
by Brandon Keim  /  October 21, 2008

What if people are biologically unsuited for the American dream? The
man posing that troubling question isn’t just another lefty activist.
It’s Peter Whybrow, head of the Semel Institute for Neuroscience and
Behavior at UCLA. “We’ve been taught, especially in America, that
happiness will be at the end of some sort of material road, where we
have lots and lots of things that we want,” said Whybrow, a 2008
PopTech Fellow and author of American Mania: When More Is Not Enough.
“We’ve set up all sorts of tricks to delude ourselves into thinking
that it’s fine to get what you want immediately.”

He paints a disturbing picture of 21st century American life, where
behavioral tendencies produced by millions of years of scarcity-driven
evolution don’t fit the social and economic world we’ve constructed.
Our built-in dopamine-reward system makes instant gratification highly
desirable, and the future difficult to balance with the present. This
worked fine on the savanna, said Whybrow, but not the suburbs: We
gorge on fatty foods and use credit cards to buy luxuries we can’t
actually afford. And then, overworked, underslept and overdrawn, we
find ourselves anxious and depressed.

That individual weakness is reflected at the social level, in markets
that have outgrown their agrarian roots and no longer constrain our
excesses — resulting in the current economic crisis, in which
America’s unpaid bills came due with shocking speed. But with this
crisis, said Whybrow, comes the opportunity to rethink how Americans
live, as individuals and as a nation, and build a country that works.
“We’re primed for doing things immediately. We’re poor at planning for
the future, unless we get into circumstances like these, where we’re
forced to think cleverly about what to do next,” he said. “In a way,
this financial meltdown is a healthy thing for us. We’ll think
intuitively again.”

Foremost among Whybrow’s targets is the modern culture of spending on
credit. “The instinctive brain is well ahead of the intellectual
brain. Credit cards promise us that you can have what you want now,
and postpone payment until later,” he said. Buying just feels good, in
a biological sense — and that instant reward outweighs the threat of
future bills. Of course, many people use credit cards to pay bills and
put food on the table, rather than buy flat-screen televisions and new
computers. “That unfortunate reality,” said Whybrow, “is produced by
an out-of-control economic system” geared toward perpetual growth.
That is no more natural a state for markets than a mall food court is
natural for individuals whose metabolic heredity treats fats and
sugars as rarities. “Once upon a time, this economic system worked.
But for the invisible hand of the free market to function, it needed
to be balanced. And that balance is gone,” he said.

Markets were once agrarian institutions, said Whybrow, which balanced
the gratification of individuals with the constraints of small
communities, where people looked their trade partners in the eye, and
transactions were bounded by time and geography. With those
constraints removed, markets have engaged in the buy-now, pay-later
habits of college kids who don’t read the fine print on their credit
card bills. “You can think about markets in the same way as
individuals who mortgaged their future — except markets did it with
other people’s money,” he said. “You end up with a Ponzi scheme
predicated on the idea that we can get something now, rather than
having to wait. And it all comes back to the same instinctual drive.”
And now that the fundamental excesses of our economy have been so
painfully exposed, with trillions of dollars vanishing from the
American economy in just a few days, we have to think about changing
both the economy and ourselves.

The answers aren’t easy, Whybrow cautioned — but they do exist. People
can think creatively about jumping from the treadmills of bad jobs and
unmeetable needs; and even if this isn’t always possible, they can
teach their children to live modestly and within their means. Urban
engineers can design cities that allow people to live and work and
shop in the same place. Governments can, at the insistence of their
citizens, provide the social safety nets on which social mobility,
stagnant for the last 50 years, is based. And we can — however much it
hurts — look to Europe for advice. “America has always believed that
it was the perfect society. When you have that mythology driving your
culture, it’s hard to look around and say, ‘Is someone else doing it
better than us?'” said Whybrow. “But you can trace the situation we’re
in to our evolutionary origins. Now that we find ourselves in the
middle of this pseudo-abundance, we’re in trouble. And the fantasy
that we can restart the American dream just isn’t true.”

Peter Whybrow
email : pwhybrow [at] mednet.ucla [dot] edu

Humankind evolved to seek rewards and avoid risks but not to invest
by Jason Zweig  /  August 23 2007

For most purposes in daily life, your brain is a superbly functioning
machine, steering you away from danger while guiding you toward basic
rewards like food, shelter and love. But that brilliant machine can
lead you astray when it comes to investing. You buy high only to sell
low. You try to time the market. You follow the crowd. You make the
same mistakes again. And again. How come?

We’re beginning to get answers. Scientists in the emerging field of
“neuroeconomics” – a hybrid of neuroscience, economics and psychology
– are making stunning discoveries about how the brain evaluates
rewards, sizes up risks and calculates probabilities. With the wonders
of imaging technology we can observe the precise neural circuitry that
switches on and off in your brain when you invest. Those pictures make
it clear that your investing brain often drives you to do things that
make no logical sense – but make perfect emotional sense. Your brain
developed to improve our species’ odds of survival. You, like every
other human, are wired to crave what looks rewarding and shun what
seems risky.

To counteract these impulses, your brain has only a thin veneer of
modern, analytical circuits that are often no match for the power of
the ancient parts of your mind. And when you win, lose or risk money,
you stir up some profound emotions, including hope, surprise, regret
and the two we’ll examine here: greed and fear. Understanding how
those feelings – as a matter of biology – affect your decision-making
will enable you to see as never before what makes you tick, and how
you can improve, as an investor.

Greed: The thrill of the chase
Why is it so hard for most of us to learn that the old saying “Money
doesn’t buy happiness” is true? After all, we feel as if it should.
The answer lies in a cruel irony that has enormous implications for
financial behavior: Our brains come equipped with a biological
mechanism that is more aroused when we anticipate a profit than when
we get one. I lived through the rush of greed in an experiment run by
Brian Knutson, a neuroscientist at Stanford University. Knutson put me
into a functional magnetic resonance imaging (fMRI) scanner to trace
my brain activity while I played a kind of investing video game that
he had designed. By combining an enormous magnet and a radio signal,
the fMRI scanner pinpoints momentary changes in the level of oxygen as
blood ebbs and flows within the brain, enabling researchers to map the
neural regions engaged by a particular task.

In Knutson’s experiment, a display inside the fMRI machine showed me a
sequence of shapes that each signaled a different amount of money:
zero ($0), medium ($1) or large ($5). If the symbol was a circle, I
could win the dollar amount displayed; if it was a square, I could
lose the amount shown. After each shape came up, between 2 and 2½
seconds would pass – that’s the anticipation phase, when I was on
tenterhooks waiting for my chance to win or lose – and then a white
square would appear for a split second.

To win or avoid losing the amount I had been shown, I had to click a
button with my finger when the square appeared. At the highest of the
three levels of difficulty, I had less than one-fifth of a second to
hit the button. After each try the screen showed how much I’d just won
or lost and updated my cumulative score. When a shape signaling a
small reward or penalty appeared, I clicked placidly and either won or
lost. But if a circle marked with the symbols of a big, easy payout
came up, I could feel a wave of expectation sweep through me. At that
moment, the fMRI scan showed, the neurons in a reflexive, or
emotional, part of my brain called the nucleus accumbens fired like
wild. When Knutson measured the activity tracked by the scan, he found
that the possibility of winning $5 set off twice as strong a signal in
my brain as the chance at gaining $1 did.

On the other hand, learning the outcome of my actions was no big deal.
Whenever I captured the reward, Knutson’s scanner found that the
neurons in my nucleus accumbens fired much less intensely than they
had when I was hoping to get it. Based on the dozens of people Knutson
has studied, it’s highly unlikely that your brain would respond much
differently. Why does the reflexive part of the brain make a bigger
deal of what we might get than of what we do get? That function is
part of what Brian Knutson’s mentor, Jaak Panksepp of Bowling Green
State University in Ohio, calls “the seeking system.”

Over millions of years of evolution, it was the thrill of anticipation
that put our senses in a state of high awareness, bracing us to
capture uncertain rewards. Our anticipation circuitry, says Paul
Slovic, a psychologist at the University of Oregon, acts as a “beacon
of incentive” that enables us to pursue rewards that can be earned
only with patience and commitment. If we derived no pleasure from
imagining riches down the road, we would grab only at those gains that
loom immediately in front of us. Thus our seeking system functions
partly as a blessing and partly as a curse. We pay close attention to
the possibility of coming rewards, but we also expect that the future
will feel better than it does once it turns into the present.

A vivid example of this is the stock of Celera Genomics Group. In
September 1999, Celera began sequencing the human genome. By
identifying each of the 3 billion molecular pairings that make up
human DNA, the company could make one of the biggest leaps in the
history of biotechnology. Investors went wild with anticipation,
driving the stock to a peak of $244 in early 2000. Then, on June 26,
Celera announced that it had completed cracking the code. How did the
stock react? By tanking. It dropped 10.2% that day and another 12.7%
the next day. Nothing had occurred to change the company’s fortunes
for the worse. Quite the contrary: Celera had achieved a scientific
miracle. So what happened? The likeliest explanation is simply that
the anticipation of Celera’s success was so intense that reality was a
letdown. Getting exactly what they wished for left investors with
nothing to look forward to, so they got out and the stock crashed.

Greed: The stuff of memories
Researchers in Germany tested whether anticipating a financial gain
can improve memory. A team of neurologists scanned people’s brains
with an fMRI machine while showing them pictures of objects like a
hammer or a car. Some images were paired with the chance to win half a
euro, while others led to no reward. The participants soon learned
which pictures were reliably associated with the prospect of making
money, and the scan showed that their anticipation circuits fired
furiously when those images appeared. Immediately afterward, the
researchers showed the participants a larger set of pictures,
including some that had not been displayed inside the scanner. People
were highly accurate at distinguishing the pictures they had seen
during the experiment and equally adept at recognizing which of those
pictures had predicted a gain.

Three weeks later the participants came back to the lab, where they
were shown the pictures again. This time people could even more
readily distinguish the pictures that had signaled a financial gain
from those that had not – although they hadn’t laid eyes on them in 21
days! Astounded, the researchers went back and re-examined the fMRI
scans from three weeks earlier. It turned out that the potentially
rewarding pictures had set off more intense activation not only in the
anticipation circuits but also in the hippocampus, a part of the brain
where long-term memories live.

The fire of expectation, it seems, somehow sears the memory of
potential rewards more deeply into the brain. “The anticipation of
reward,” says neurologist Emrah Düzel, “is more important for memory
formation than is the receipt of reward.” Anticipation has another
unusual neural wrinkle. Brian Knutson has found that while your
reflexive brain is highly responsive to variations in the amount of
reward at stake, it is much less sensitive to changes in the
probability of receiving a reward.

If a lottery jackpot was $100 million and the posted odds of winning
fell from one in 10 million to one in 100 million, would you be 10
times less likely to buy a ticket? If you’re like most people, you
probably would shrug, say “A long shot’s a long shot” and be just as
happy buying a ticket as before. That’s because, as economist George
Loewenstein of Carnegie Mellon University explains, the “mental image”
of $100 million sets off a burst of anticipation in the reflexive
regions of your brain. Only later will the analytical, or reflective,
areas calculate that you’re less likely to win than Ozzy Osbourne is
to be elected Pope. When possibility is in the room, probability goes
out the window. It’s no different when you buy a stock or a mutual
fund: Your expectation of scoring a big gain elbows aside your ability
to evaluate how likely you are to earn it. That means your brain will
tend to get you into trouble whenever you’re confronted with an
opportunity to buy an investment with a hot – but probably
unsustainable – return.
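
The gap between possibility and probability is easy to see in plain arithmetic. Here is a minimal sketch, using the jackpot and odds from the article's thought experiment, that computes the expected payout of a single ticket:

```python
# Expected payout of one lottery ticket at the two posted odds from the
# thought experiment above: a $100 million jackpot at 1-in-10-million
# versus 1-in-100-million odds.
jackpot = 100_000_000

for odds in (10_000_000, 100_000_000):
    expected = jackpot / odds  # probability of winning times the prize
    print(f"1 in {odds:>11,}: expected payout = ${expected:.2f} per ticket")
```

The reflective brain would notice that the first ticket is worth ten times the second in expectation ($10.00 versus $1.00); the reflexive brain, fixed on the mental image of $100 million, treats the two as the same bet.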

Fear: What are you afraid of?
Here are two questions that might, at first, seem silly.
1 Which is riskier: a nuclear reactor or sunlight?
2 Which animal is responsible for the greatest number of human deaths
in the U.S.? a) Alligator b) Deer c) Snake d) Bear e) Shark

Now let’s look at the answers. The worst nuclear accident in history
occurred when the reactor at Chernobyl, Ukraine, melted down in 1986.
Early estimates were that tens of thousands of people might be killed
by radiation poisoning. By 2006, however, fewer than 100 had died.
Meanwhile, nearly 8,000 Americans are killed every year by skin
cancer, commonly caused by overexposure to the sun.

In the typical year, deer are responsible for roughly 130 human
fatalities – seven times more than alligators, bears, sharks and
snakes combined. Deer, of course, don’t attack. Instead, they step in
front of cars, causing deadly collisions. None of this means that
nuclear radiation is good for you or that rattlesnakes are harmless.
What it does mean is that we are often most afraid of the least likely
dangers and frequently not worried enough about the risks that have
the greatest chances of coming home to roost.

We’re no different when it comes to money. Every investor’s worst
nightmare is a stock market collapse like the crash of 1929. In a
recent survey of 1,000 investors, the average respondent reckoned
there was a 51% chance that "in any given year, the U.S. stock market
might drop by one-third." In fact, the odds that U.S. stocks will lose
a third of their value in a given year are around 2%. The real risk
isn't that the market will
melt down but that inflation will erode your savings. Yet only 31% of
the people surveyed were worried that they might run out of money
during their first 10 years of retirement.

If we were logical we would judge the odds of a risk by asking how
often something bad has actually happened under similar circumstances.
Instead, explains psychologist Daniel Kahneman, “we tend to judge the
probability of an event by the ease with which we can call it to
mind.” The more recently it occurred or the more vivid our memory of
something like it in the past, the more “available” an event will be
in our minds – and the more probable its recurrence will seem.

Fear: The hot button of the brain
Deep in the center of your brain, level with the top of your ears,
lies a small, almond-shaped knob of tissue called the amygdala (ah-mig-
dah-lah). When you confront a potential risk, this part of your
reflexive brain acts as an alarm system – shooting signals up to the
reflective brain like warning flares. (There are two amygdalas, one on
each side of your brain.) The result is that a moment of panic can
wreak havoc on your investing strategy. Because the amygdala is so
attuned to big changes, a sudden drop in the market tends to be more
upsetting than a longer, slower decline, even if the slower decline is
greater in total.

On Oct. 19, 1987, the U.S. stock market plunged 23% – a deeper one-day
drop than the crash of ’29. Big, sudden and inexplicable, the ’87
crash was exactly the kind of event that sparks the amygdala. The
memory was hard to shake: In 1988, U.S. investors sold $15 billion
more worth of shares in stock mutual funds than they bought, and their
net purchases of stock funds didn’t recover to pre-crash levels until
1991. One bad Monday disrupted the behavior of millions of people for
years. There was something more at work here than merely investors’
individual fears. Anyone who has ever been a teenager knows that peer
pressure can make you do things as part of a group that you might
never do on your own.

But do you make a conscious choice to conform or does the herd exert
an automatic, almost magnetic, force? People were recently asked to
judge whether three-dimensional objects were the same or different.
Sometimes the folks being tested made these choices in isolation.
Other times they first saw the responses of four “peers” (who were, in
fact, colluding with the researcher).

When people made their own choices, they were right 84% of the time.
When the peer group all made the wrong choice, however, the
individuals being tested chose correctly just 59% of the time. Brain
scans showed that when the subjects followed the peer group,
activation in parts of their frontal cortex decreased, as if social
pressure was somehow overpowering the reflective, or analytical,
brain. When people did buck the consensus, brain scans found intense
firing in the amygdala.

Neuroscientist Gregory Berns, who led the study, calls this flare-up a
sign of “the emotional load associated with standing up for one’s
belief.” Social isolation activates some of the same areas in the
brain that are triggered by physical pain. In short, you go along with
the herd not because you want to but because it hurts not to. Being
part of a large group of investors can make you feel safer when
everything is going great. But once risk rears its ugly head, there’s
no safety in numbers.

Fear: Fright makes right
I learned how my own amygdala reacts to risk when I participated in an
experiment at the University of Iowa. First I was wired up with
electrodes and other monitoring devices to track my breathing,
heartbeat, perspiration and muscle activity. Then I played a computer
game designed by neurologists Antoine Bechara and Antonio Damasio.
Starting with $2,000 in play money, I clicked a mouse to select a card
from one of four decks displayed on the monitor in front of me. Each
“draw” of a card made me either “richer” or “poorer.”

I soon learned that the two left decks were more likely to produce big
gains but even bigger losses, while the two right decks blended more
frequent but smaller gains with a lower chance of big losses.
Gradually I began picking most of my cards from the decks on the
right; by the end of the experiment I had drawn 24 cards in a row from
those safer decks. Afterward I looked over the printout that traced my
spiking heartbeat and panting breath as the red alert of risk swept
through my body, even though I didn’t recall ever feeling nervous.

Early on, when I drew a card that lost me $1,140, my pulse rate shot
from 75 to 145. After a few more bad losses from the risky decks, my
body would start reacting even before I selected a card from one of
them. Merely moving the cursor over the risky decks was enough to make
my physiological functions go haywire. My decisions, it turns out, had
been driven by fear even though the “thinking” part of my mind had no
idea I was afraid. Ironically – and thankfully – this highly emotional
part of our brain can actually help us act more rationally.

When Bechara and Damasio run their card-picking game with people whose
amygdalas have been injured, the subjects never learn to avoid
choosing from the riskier decks. If told that they have just lost
money, their body doesn’t react; they can no longer feel a financial
loss. Without the saving grace of fear, the analytical parts of the
brain will keep trying to beat the odds, with disastrous results. “The
process of deciding advantageously,” concludes Damasio, “is not just
logical but also emotional.”

Jason Zweig
email : info [at] jasonzweig [dot] com / jason.zweig [at] wsj [dot] com /
intelligentinvestor [at] wsj [dot] com

Synopsis: Predictably Irrational by Dan Ariely
by George Gibson  /  September 18, 2008

[This detailed, chapter-by-chapter précis of Dan Ariely’s Predictably
Irrational: The Hidden Forces That Shape Our Decisions is a guest post
by George Gibson, a colleague of mine at Xerox. George originally
posted it on our internal blogs as a series, and I found it so much
fun to read, I asked if I could repost it on ribbonfarm. So here you
go.]

Chapter 1: The Truth About Relativity
This was clearly the most interesting of the books from my summer
reading list. Let me be clear that though I don’t buy all of the
points Dan tries to make, I find them all interesting and worthy of
thought. With any luck we can begin a real discussion of his ideas and
observations in the commentary. That means I’ll attempt (not always
successfully) to keep my opinion out of the body of this piece, and
reserve that for any commentary that might develop. The real point
here is to get you interested enough to read the book yourself.

“Most people don’t know what they want until they see it in context.”
Control the context and you can change their decisions. This chapter
is about how our decision making is skewed from what we might think
of as rational by the use of comparisons, anchor points and the magic
of “FREE!”. The Economist offered three subscription options:
* Electronic alone: $59
* Print alone: $125
* Electronic and print: $125

So what would you guess people would choose? Would anyone choose the
Print alone option forgoing a “FREE!” electronic subscription? Not
likely. So, why is it even offered? Testing with 100 Sloan School
students, 16 chose Electronic alone and 84 chose the combined
Electronic and print option. Nobody chose the Print alone option (boy,
those Sloan folks are smart, aren't they?). However, when the irrelevant
option, the one nobody chose, was eliminated, another, equally bright,
hundred Sloan students divided 68 for Electronic alone and only 32
chose Electronic and print. So…what happened here?
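
The tallies above make the decoy effect stark when converted to shares. Here is a minimal sketch, using the numbers from the chapter's two hundred-student groups as described above:

```python
# Choice shares with and without the never-chosen "Print alone" decoy,
# from the two groups of 100 Sloan students described above.
def shares(tally):
    """Convert raw choice counts into fractional shares."""
    total = sum(tally.values())
    return {option: count / total for option, count in tally.items()}

with_decoy = {"electronic": 16, "print": 0, "both": 84}
without_decoy = {"electronic": 68, "both": 32}

print(shares(with_decoy))     # the bundle dominates: 84%
print(shares(without_decoy))  # preferences flip: electronic alone wins 68%
```

Removing an option nobody chose flipped the majority from the $125 bundle to the $59 electronic subscription; the decoy's only job was to make the bundle look like a bargain.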

Ariely makes the unpleasant but often correct assertion, “Thinking is
difficult and sometimes unpleasant.” Cues that allow us to establish
the relative value of various offerings, then, reduce the required
thought effort. What the Economist offered was a no-brainer: while we
might not be certain whether the print subscription was worth more than
twice the electronic version, the combination of the two was clearly
worth more than the print version alone.

The chapter contains many examples of this effect including movie
script jokes, bread makers, houses, vacations, salaries and even
potential dates. Two general frameworks are noted as particularly
common. First, including a slightly degraded offering near the offering
you want the customer to accept increases the likelihood that he or she
will make your desired choice (read the choice of potential dating
partners section of the book for this one). Next is the
pick-the-middle-one strategy, in which three offerings are presented
with the middle one as the seller's preferred choice. Though so often
the butt of jokes, car companies provide an interesting example: what
portion of the firm's profits on most platforms comes from the
middle-of-the-line offering?

There are take-aways here for both the seller and buyer. Politely
said, you as a seller can help guide your customer through the
bewildering array of choices by providing helpful contextual
information (I'll let each of you put your cynical hat on and restate
that one for fun!). As a consumer it is helpful to understand the
framing a seller is likely to present you with and do some of that
nasty thinking work up front deciding whether or not the seller’s
preferred context and yours are the same.

Chapter 2: The Fallacy of Supply and Demand
This chapter is at the heart of Ariely’s argument. Classical economics
says that our decisions about resource allocation reflect our relative
valuation of the various investment alternatives. If I buy more wine
than cheese it’s because I derive greater utility (more than just
usefulness, by the way) from the juice of the grape. There are clear
limits, of course: I am unlikely to allow myself to starve, and will
occasionally buy cheese, especially when the price is low enough.
But the general point remains, I am willing to pay more for those
things from which I derive greater utility.

Not so says our man Dan. How much a person is willing to pay for
something is determined or at least significantly affected by a
variety of factors which have nothing to do with any benefit that he
or she derives from that purchase. Do you remember what Tom Sawyer did
with his chore, whitewashing a fence? Review that first and you’ll be
more open to these arguments.

He starts with the story of black pearls. There were essentially none
on the market so there was no objective way of establishing price.
What happened was they were shown in advertising and in Harry
Winston's tony store alongside rubies and diamonds at a very high price.
This initial association served to effectively anchor the price and
therefore, going forward, future prices were high since the initial
frame in which people were introduced to the product was among high
priced goods.

He likens this anchor price phenomenon to that of imprinting. We are
all goslings, fixed on that first object. He’s done a lot of really
neat experiments to support his point. None of them are completely
convincing but they certainly are thought provoking.

Consider, for example, some really interesting experiments suggesting
that thinking about a number – any number – before considering what
you are willing to pay for an item whose market price you do not know
actually affects what you would be willing to pay for that item. In
one of the experiments described, a group of students were asked to
write down the last two digits of their social security number before
they indicated how much they were willing to pay for a bottle of wine,
a cordless keyboard and some imported chocolates. Guess what! The
amounts they were willing to pay actually correlated with those social
security number fragments they had previously written down. I spend
this much time on this particular experiment precisely because the
results are so bizarre.

He cites a number of other experiments and observations that support
an anchoring effect unrelated to market value or derived utility. One
of the ones he cites that I have experienced
personally is the persistence of old concepts of housing value when
you move from one market to another. When Ginny and I moved here from
Dayton, Ohio fifteen years ago, we went a long time looking for a
house that cost about as much as the one we were leaving. Although
Rochester can hardly be described as a high-cost area, real estate was
roughly twice as expensive per square foot here as in Dayton. It took
us literally months to stop looking to replace our Dayton house with
one of similar price here and to adjust to the new price scale.

I won’t spoil your fun and go through all the examples and neat
concepts, “coherent arbitrariness” being among my favorites, but I
will reiterate one of his most powerful points. Knowing that it is
entirely possible that factors unrelated to the real value a
product or service creates for you may be affecting how much of that
good you consume and how much you are paying for it, be mindful.
Carefully examine your purchasing behavior and make sure you actually
believe that the money you allocate to consumption of various
offerings really advances your overall well-being more than the next
best use of those funds. So when you’re paying for your $4 cup of
coffee at Starbucks (as you know I do), revisit the fundamental
decision – should I be buying cheaper coffee at McDonald’s, or
bringing in coffee from home, or should I be drinking water, of the
free sort?

But if first anchors are so significant and long-lived, how come you
ever bought that first cup at $4, let alone the third? That first day
you walked into Starbucks wasn’t your first experience with buying
coffee. You had plenty of time to establish an anchor, backed by years
of repeated experience to reinforce it. Howard Schultz had to work
hard to make Starbucks fundamentally different from the other places
you might buy coffee – not just quantitatively but qualitatively. It
had to be unlike the other places you might stop when you wanted
coffee – you had to get something more. Starbucks’s success is the
proof, and the most recent stumble in earnings can be attributed, to
some extent, to the success of McDonald’s and Dunkin’ Donuts in
making it just about the coffee.

Chapter 3. The Cost of Zero Cost
Why We Often Pay Too Much When We Pay Nothing
“Zero is not just another price….zero is an emotional hot button – a
source of irrational excitement.”

The allure of free stuff drives us to make all sorts of irrational
purchasing decisions. “Buy 2 get 1 FREE!!” motivates a fair share of
people to buy two of something they wouldn’t otherwise have bought one
of, just to get that free thing. As you’ve picked up by now, Ariely’s
MO is to do experiments to probe economic rationality or the lack
thereof. Here the first experiment involved selling chocolate on the
MIT campus, albeit in a strange way. Limiting purchases to one per
customer, they offered a choice between a Lindt truffle and a
Hershey’s Kiss – a huge difference in quality reflected in a
substantial difference in price. The truffle sold for $0.15, half off
the bulk retail price, and the Kiss sold for $0.01. Students split on
their purchases, with 73% choosing the truffle and 27% choosing the
Kiss. Next they lowered the price of each by $0.01: the truffle was
$0.14 and the Kiss was FREE!! Now 69% of students chose the Kiss –
same price difference, same expected enjoyment from eating the
chocolate, but apparently there is an additional benefit of FREE!!

Again, my purpose here is to serve as a teaser, not to reiterate the
book – I want you to read it. Let’s just say that he did this
experiment a variety of ways and each time the proposition that FREE!!
distorts decision making was supported. He has some especially
interesting Halloween experiments and some real Amazon experience
supporting his assertion.

Chapter 4. The Cost of Social Norms:
Why We Are Happy to Do Things, but Not When We Are Paid to Do Them

I’m against wholesale quotations in reviews. So remember that this
isn’t a review; it’s meant to be a précis and teaser. This chapter
leads off with a story so compelling that I just have to present it:
You are at your mother-in-law’s house for Thanksgiving dinner, and
what a sumptuous spread she has put on the table for you. The turkey
is roasted to a golden brown; the stuffing is homemade and exactly the
way you like it. Your kids are delighted: the sweet potatoes are
crowned with marshmallows. And your wife is flattered: her favorite
recipe for pumpkin pie has been chosen for dessert.

The festivities continue into the late afternoon. You loosen your belt
and sip a glass of wine. Gazing fondly across the table at your mother-
in-law, you rise to your feet, pull out your wallet. “Mom, for all the
love you’ve put into this, how much do I owe you?” you say sincerely.
As silence descends on the gathering, you wave a handful of bills. “Do
you think three hundred dollars will do it? No, wait, I should give
you four hundred.” Please fill in the blank with what you think will
happen next.

The rest of the chapter is devoted to some experiments (of course) and
some anecdotes that describe two separate frames in which we operate:
those of social norms and those of market norms. He compiles evidence
that social norms are more effective at motivating superior
performance than are market norms. The armed forces are an interesting
example. You didn’t really think that those soldiers in Iraq and
Afghanistan were there for the pay and to save money for college did
you? Those are nice perks (well, the pay for low-rank enlisted
soldiers sometimes leaves their families in poverty), but exactly how
much money would it take for you to risk your life like that? The
actions we barely hear on the news in the car as we travel back and
forth to work, or have on as background during dinner – the acts of
courage and heroism – are not motivated by the paycheck but by the
social norms of the service. One soldier I know said that you may
join for your country but, in a firefight, you’re fighting for your
buddies. Boy, now there’s a powerful force. Powerful but, it turns
out, fragile. After the dinner above, how long do you think it would
be before the narrator’s mother-in-law went out of her way for him?

In this chapter Dan provided the results of a number of experiments
showing that there is a particularly interesting difference between
the types and performance levels of tasks that can be produced when
the reward system is governed by “social norms.” In one set of
experiments he had people perform a simple computer task. Three groups
were paid varying amounts and one group was asked to do the task as
a favor to the experimenter. Among those paid, those paid more
generally produced more, in keeping with our idea of market behavior.
Those doing the experimenter a favor, however, outperformed the
highest-paid group. You can imagine all sorts of implications. One
more interesting twist, however, was that introducing market norms
into the conversation (talking about how much some folks had been
paid) before the volunteers worked destroyed the effect.

The most interesting set of things he explored based on these tenets
was the implications for personal and firm-level behavior: what should
you, and what should a firm, leave in the arena of social norms, and
what in the realm of market norms? Will extra productivity for a firm
be most effectively produced by market or social reinforcements? How
about employee loyalty in all its manifestations? What effect will a
company making clear that its relationship with its employees is
purely financial have on the performance of that company’s employees,
and hence on the company itself? Is this an argument for a return to
paternalism? It seems unlikely. Is it at the heart of the oft-vaunted
ability of small firms to “outperform” larger ones in some aspects of
innovation? Perhaps.

Chapter 5. The Influence of Arousal
Why Hot is Hotter Than We Realize

One of the things I’ve learned about blogging, although it might not
be apparent, is that it’s a good idea to be brief. As you know, I
seldom say in two words what I can say in five, so this is an ongoing
challenge for me. This chapter, however, is one that encourages
brevity.
The central hypothesis is that arousal, of all sorts, produces a
significant distortion of decision making. Decisions made in the heat
of the moment are notoriously badly made. Think road rage, think
victory celebration, think extreme thirst or hunger (I’ve always
wondered, just how hungry the first person was that saw a lobster and
thought, “hmmmm, that looks good”). There are all sorts of states of
arousal and given that this is a book by an experimentalist at a
university you can imagine that he describes experiments using
several. ‘Nuff said. You’re simply going to have to read the book to
get the details of what junior was doing in the name of behavioral
economics to supplement the pittance his parents were forcing him to
live on.

The major assertion of this chapter is that when we’re calm and
detached, we repeatedly and significantly underestimate the effect of
altered mental states on our decision making. Of course we all know
that when we’re angry, or in love, or afraid, or hot on the trail of a
particularly desirable objective like an auction bidding war, or when
we’ve had one too many drinks, our decision making can suffer. Look at
the bad decisions made by the folks at Enron, Arthur Andersen, or any
of a host of other companies. We know altered states of many kinds can
cause us to make bad decisions, and so, forewarned, we are forearmed,
right? Not so much, it turns out. The experimental subjects in this
chapter recognized that when they were excited they would make
decisions significantly different from those they would make in the
so-called cold light of day. They were asked to predict behaviors or
choices that they expected would change in the grips of some
emotionally charged state. However, when actually provoked and
queried again, Dan’s experiments found that they consistently and
significantly underestimated the magnitude of the effect. His
prescription is a prevent defense: if you know that certain
situations can cause you to make bad decisions, don’t put yourself in
those situations. This is another example of one of the jokes that, as
you know I believe, run the universe. It goes like this.
Patient: Doctor, doctor, it hurts when I go like this.
Doctor: Don’t go like that.

I’ll let you read the details, but let’s all ask ourselves this:
knowing that we are highly likely to underestimate how much our
decision making will be changed in states of emotional turmoil of
varying sorts, how will we protect ourselves from being either patsies
of our emotions or manipulated by those willing to exploit this lever
for their gain?

Chapter 6: The Problem of Procrastination and Self-Control
Why We Can’t Make Ourselves Do What We Want to Do

Procrastination is probably the most common source of the
self-inflicted wounds you or I are likely to suffer during our lives –
not necessarily the most significant ones, but surely the greatest in
number. From the petty (the “people” door on my garage has decayed to
the point that I will have to replace it) to the profound (I kept
meaning to start saving for retirement, or dieting and exercising),
procrastination can leave nasty tracks in our lives.

Dan’s experiments here are perhaps the most limited of those he
describes in the book. Given the detail he presents, it would be hard
to judge their import had he turned up something new. Let me describe
the experiment. He compared three groups of students in a graduate
consumer behavior course. Each class was required to turn in three
papers over the course of the semester. One class was given strict and
equally spaced deadlines with a penalty for failing to meet them.
A second class was told that they could turn the papers in
anytime before the end of the semester and that there was no reward
for being early. The third class was allowed to sign up for deadlines
spaced however they liked but, having committed to those deadlines,
there was a penalty for failing to meet them. Guess which group got
the best grades?

The group given the hard deadlines took first. The group with no
deadlines took last. The interesting point is that the other group,
those with self-selected deadlines, did nearly as well as the first
group. Apparently this tool, letting them pre-commit to a performance
standard, was nearly as powerful as the externally imposed deadline.
Now, since the experimenter graded the papers, there is, of course,
some question about bias. There is also a significant degree of
randomness in this sort of grading process. In fact, a great
discussion of this is in the other book from my reading list that I
strongly recommend to you, “The Drunkard’s Walk.” If Dan’s finding
were revolutionary, then we
might have some significant reservations. Pre-commitment is, however,
a well established technique. Really want to get something done? Write
a $500 check to some campaign or social cause based organization whose
ends you strongly oppose. Then give it to a friend and say, “If I
don’t accomplish X by Y send this check to these folks.” You’d be
amazed at what you can accomplish. This whole tack is a well
established result from game theory. Now let’s talk about some of the
ways Dan pictures using it.

Huge components of our health care costs are the results of
preventable diseases. What if your insurance company withdrew $200
from your paycheck to cover the expense of a regular and complete
physical, with the understanding that you would get that money back
IFF (if and only if) you kept your appointment for all of the required
testing? Maybe that’s out on the lunatic fringe of health care
thinking, but it’s
interesting. In this and other similar situations Dan suggests both
these voluntary pre-commitment models and externally imposed
alternatives.

One of the most amusing suggestions he makes concerns spending
control,
especially credit card use. You may have heard of the ice method: some
people, to counter their impulsive use of consumer credit, put their
card(s) in a glass of water in the freezer. Thawing it (them) out
takes time, allowing that arousal we talked about last chapter to
fade. Of course there are simpler ways. Dan actually took one of
these suggestions to the executives of a major NY bank. Why, he asked,
can’t a credit card record and automatically react in accordance with
pre-committed spending patterns? When you exceed your chosen limit
(which might be spending-category specific), for instance, it would
decline further charges, or generate an email reporting your errant
ways to your spouse. He reports that the executives listened and
thought it was a good idea but never called him back. I would pay cash
for a recording of the conversation they had after he left.
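The pre-committed card Dan pitched to the bank can be sketched in a few lines of Python. Everything here – the class name, the categories, the decline message – is my own illustration of the idea, not anything from the book or a real bank API:

```python
# Minimal sketch of a "self-control" credit card: category-specific limits
# are declared in advance, and the card reacts automatically when a charge
# would exceed them. All names and behaviors here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class SelfControlCard:
    limits: dict                               # category -> pre-committed limit
    spent: dict = field(default_factory=dict)  # category -> running total

    def charge(self, category: str, amount: float) -> str:
        used = self.spent.get(category, 0.0)
        if used + amount > self.limits.get(category, float("inf")):
            # The reaction is pre-committed too: decline, or email your spouse.
            return "DECLINED (limit reached -- emailing your spouse)"
        self.spent[category] = used + amount
        return "APPROVED"

card = SelfControlCard(limits={"coffee": 20.00})
print(card.charge("coffee", 4.00))   # APPROVED
print(card.charge("coffee", 18.00))  # DECLINED (limit reached ...)
```

A real implementation would live on the issuer's side, of course; the point is only that the rule is chosen in the cold light of day and enforced in the heat of the moment.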

There are lots of other approaches to controlling procrastination, of
course. We’ll talk more about these later in the year. I confess that
this is a trait I personally fight. Let me just make two
recommendations other than pre-commitment. One is the time-management
tool suite called “Getting Things Done,” popularized by David Allen;
the other is an approach you can find described in Julie Morgenstern’s
book “Making Work Work.”

Chapter 7: The High Price of Ownership
Why We Overvalue What We Have

Do you know of anyone whose house stays on the market not just for
months but for years? How about somebody who’s been driving around
with a “For Sale” sign in their car window long enough for you to
think it might actually be an accessory? What these folks have in
common is a valuation of what they are offering that does not match
the value among the people to whom they are making the offer. These
are two quick examples, but it turns out there are lots of other ways
in which we tend to overestimate the market value of the things we
own. It’s been called the endowment effect. It might be because we
price in the positive feelings we have derived from owning the object
(we took such great family outings in that car) in ways that are
irrelevant to potential buyers. It may be that we experience parting
with the object as a loss that prices in those good feelings, and it
is well demonstrated that our tendency to avoid losses exceeds our
desire for gains, even at constant expected value. I’m not sure Dan
adds a lot new on this topic, although it is certainly a way in which
we evidence irrational consumer behavior, except an experiment based
on a rather peculiar basketball ticketing process at Duke.
Nonetheless, this is useful stuff to be reminded of from time to time.

I must admit that, although one of my nephews is on the faculty there,
I had not heard of Duke’s peculiar way of rationing basketball tickets
for important games. I won’t go through the whole thing here – it’s a
pleasure left for the reader – but suffice it to say that it’s a multi-
day process that involves camping out and jumping through the odd
hoop, and that process just gets you into a lottery for a ticket. Dan,
who did his Ph.D. there, and a colleague from INSEAD contacted folks
who had gotten tickets and those who hadn’t and tried to arrange
sales. All had demonstrated a fervent desire to go to the game by
participating in the ritual described, but while those who had gotten
tickets said they would sell them for (on average) $2,400, those who
hadn’t gotten them would only agree to pay (on average) $175.

This effect of ownership, even if it’s temporary (“FREE!! 10-day home
trial,” “return it without charge if you’re not satisfied”) or virtual
(how dare that idiot outbid me for my watch), is quite general.
Merchandisers use it to their advantage all the time. As with many of
these chapters, Dan’s point is: knowing that this effect is real,
examine your behavior when you get into these situations. By doing so
you can avoid much frustration and avoid being manipulated into making
decisions that are not really in your best interest.

Chapter 8. Keeping Doors Open
Why Options Distract Us from Our Main Objective

I’m a big options fan. I like real-options thinking and have seen it
used to generate real value in R&T environments. I was, therefore, not
wild about this title when I read it. Was there something
fundamentally wrong with my attachment to options? Well, let’s take a
couple of famous examples. The oldest comes from Sun Tzu in the
world’s oldest job application, “The Art of War,” written in the 6th
century BC. I know I’ve talked about this book before and I assure you
I will talk about it again. If you haven’t read it yet, make it next
on your list. Master Sun advises generals: “do not attack an enemy
that has his back to a hill,”

and further:

“do not thwart an enemy retreating home. If you surround the enemy,
leave an outlet; do not press an enemy that is cornered.”

Such cornered foes are too formidable. Exploiting this same dynamic,
he advises: “Throw your soldiers into positions whence there is no
escape, and they will prefer death to flight. If they will face death,
there is nothing they may not achieve. Officers and men alike will put
forth their uttermost strength. Soldiers when in desperate straits
lose the sense of fear. If there is no place of refuge, they will
stand firm. If they are in hostile country, they will show a stubborn
front. If there is no help for it, they will fight hard.”

Indeed, Xiang Yu exploited this in 210 BC, as did Cortés and,
doubtless, many more. Having crossed the Yangtze, Xiang Yu burned his
boats and had all the cooking pots destroyed. Win or die: a clear
message for the troops, one that cleared their minds and freed up the
forces that would otherwise have had to protect those assets.

As always, Dan and some colleagues ran some experiments on students.
They designed several computer games in which players had 100 “clicks”
they could use to choose one of three rooms and, once in a room,
click to get cash. Different rooms gave different pay-offs, and
generally people figured out pretty quickly which room paid the most
per click and then spent their time in that room. However, when the
game was changed such that rooms that hadn’t been visited in some
prescribed number of clicks disappeared, players would go back and
click on those rooms to keep them available, even though it cost them
(on average) 15% of their earnings.
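The logic of the game can be sketched in a short simulation. This is my own illustration, not Ariely’s experimental code: the payoffs, the 100-click budget, and the “revisit every 10 clicks or the room vanishes” rule are all assumed values chosen to show the shape of the cost, not the book’s numbers:

```python
# Illustrative simulation of the three-room game: a "focused" player always
# clicks the best room; a "door keeper" burns clicks revisiting the other
# rooms so they don't disappear. All parameters are assumptions.

PAYOFFS = {"A": 3.0, "B": 2.0, "C": 1.0}  # cents per click (assumed)
CLICKS = 100
REVISIT_EVERY = 10  # assumed rule: a room vanishes if unvisited this long

def play(keep_doors_open: bool) -> float:
    earnings = 0.0
    for click in range(CLICKS):
        if keep_doors_open and click % REVISIT_EVERY == 0:
            room = "B"  # maintenance click to keep room B alive
        elif keep_doors_open and click % REVISIT_EVERY == 1:
            room = "C"  # maintenance click to keep room C alive
        else:
            room = "A"  # otherwise exploit the best room
        earnings += PAYOFFS[room]
    return earnings

focused = play(keep_doors_open=False)  # 300.0 cents
keeper = play(keep_doors_open=True)    # 270.0 cents: 10% lost keeping doors open
```

Even with these made-up numbers, the maintenance clicks tax the door keeper’s earnings by about a tenth, which is in the neighborhood of the 15% average loss the experiments found.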

This chapter makes the point that options can serve as distraction as
well as valuable alternatives. Olympic athletes are seldom concert
violinists. Mastery and focus often turn in better results than trying
to be all things to all people. My undergraduate honors advisor was,
and likely is still, a complete success as a chemist. When, as a
graduate student, I took his advanced organic synthesis course, I felt
like I was taken to the top of a tall mountain and shown the vast
landscape of chemical synthesis. He achieved that mastery as the
result of considerable focus. “George,” he said to me at one point,
“you should be spending 80% of your waking hours at the bench.”

This sort of behavior is contrary to much of what our culture offers
us today. Our environment bombards us with variety. You can know more
and more about more with just a few clicks of a mouse. Failure to be
well-rounded is viewed as a significant deficit. On the other hand,
the person who tries to do too many things can end up never doing any
one of them well enough to have impact. Like most things, of course,
there are two ways to get this wrong. Being monomaniacal may have its
benefits but it comes at a price. Having strong family relationships,
for example, can buffer you from the occasional bad days you may
experience at work.

There is another interesting aspect of the sometimes bewildering array
of choices that confronts us: a retreat to systems in which less
choice is allowed. The power and even ascendancy of authoritarian
regimes and rigid philosophical systems are sometimes viewed as
reactions to the world’s increasing complexity. Hardly a new idea –
“Escape from Freedom,” by the philosopher Erich Fromm, is probably the
best treatment of the topic. However, Dan has an interesting slant,
pointing out that increasing the complexity of a decision makes it
more likely that decision makers will rely on external (hence
manipulable) cues.

While it would be a mistake for us to fail to exploit options thinking
and the development of options for our business and personal lives,
trying to do too many things at once is a clear route to failure.

Chapter 9: The Effect of Expectations
Why The Mind Gets What It Expects

We all know that our expectations affect our experiences. Generally,
however, since we are aware of this tendency, we “smart people” think
we can set it aside for the most part. In this chapter Dan describes a
number of experiments that he performed, as well as a number of
experiments by others, that point out just how subtle and pervasive
our expectations are.

Setting up shop in the “Muddy Charles,” the pub in MIT’s Walker
Memorial Building, he and collaborators started handing out free
samples of beer. Students were given samples of two types and then
asked to choose which of them they’d like a larger glass of. Both
beers started with the same brew, but a few drops of balsamic vinegar
were added to one. (They actually started with Budweiser, but some
folks “objected to calling Budweiser beer,” so they switched to Sam
Adams.) They measured how many people ordered each of the samples and
then asked people to describe what they thought about the new beer.
Some of these folks were not told what the difference between the two
beers was, some were told about the vinegar before they tried it, and
some were told after. Guess what happened: when they got the
information – before or after tasting – actually changed their rating
of the experimental suds. Knowing it contained vinegar beforehand
changed their described experience during the taste test.

There’s actually a lot more to this experiment, and Dan presents a
number of other experiments, including some employing functional MRI.
Here, in a version of the classic Coke-v-Pepsi challenge, it can be
demonstrated at the level of brain activity that the experience of
drinking one as opposed to the other is modified by knowledge of which
you are drinking.

By far the most interesting examples – and I really can’t bring myself
to spoil the fun you’ll have reading them – have to do with
stereotypes. Especially interesting are those dealing with groups to
whom several “conflicting” stereotypes can be applied. In these cases,
preconditioning the subjects with certain words chosen to “remind”
them of one or the other of these produced behavior that reflected the
provoked stereotype. You’ve just got to read this stuff, trust me.

Chapter 10. The Power of Price
Why a 50 Cent Aspirin Can Do What a Penny Aspirin Can’t

We all know about the placebo effect: that wild and wonderful way in
which our mind affects our perception of, and in some cases our real
experience of, the healing effects of one medication or another. It’s
sort of an extension of the last chapter’s theme: the mind gets what
it expects. You’ll remember that Dan spent a long time in a hospital
burn unit recovering from a serious accident he’d had while training
for the IDF, so you won’t be surprised that he had a lot of time to
think about the placebo effect.

As part of his investigations into the perception of pain, as a newly
minted assistant professor he bought a vise and would crush people’s
fingers in it and ask them things like:

“How much did that hurt?”
“How much would I have to pay you to let me do that to you again?”

(You just can’t make this stuff up!) In this chapter he explores some
aspects of the economic side of the placebo effect. He had
experimenters pose as representatives from a drug company. They gave
people a series of electrical shocks of varying magnitude and asked
them about the pain they experienced. Next the subjects were given a
pain reliever – well, vitamin C actually – but they were told either
that it was a new and expensive one or a cheap one. When the shocks
were repeated, guess what? Those who thought they were getting the
high-priced stuff reported that it worked pretty well, and much better
than did the folks who got the cheap stuff. (Now let’s review what
this means for the spiraling costs of US health care.) By the way, the
more recently the folks had had experience with significant pain, the
better it worked. As usual he did a number of experiments like this,
and I won’t spoil your fun.

There are two sorts of implications he explores that are worth our
thought. First, how general is this phenomenon? It certainly applies
to food and drink, to cars, to a whole lot of things. Does that mean
that we are manipulated into paying higher prices for goods that are
essentially equivalent to lower-priced alternatives? Would we be
better off if we brought this into our conscious minds as we decide
whether the most recent jeans are worth it?

The next thing he brings up is really an ethical question. He cites
several examples where surgical procedures were found to produce no
better results than sham operations. A patient who thinks he or she
received one of these surgical procedures reports just as much benefit
as someone who actually had the procedure. While the medical community
wasn’t actually intending the procedures’ benefits to derive from the
placebo effect, it turns out that’s exactly what happened. There is
also the less dramatic exploitation of the effect that many doctors
practice when they prescribe antibiotics for colds and sore throats,
the vast majority of which are viral. They prescribe, patients get
better, and the offending microbe was not at all affected by the
active ingredient. It turns out, of course, that in some instances at
least, people treated with placebos actually do get better faster than
those left untreated. There will be some Nobels given out for figuring
out exactly how that works. So the interesting ethical questions Dan
brings up are: knowing the placebo effect is real, should doctors use
it intentionally, and if so, how and when? Also, if we want to protect
people from unnecessary surgery, do we have an obligation to test
surgical procedures against sham surgery in humans?

As I have said all along, this book is worth the time, even more for
the questions than for the answers.

Chapter 11: The Context of Our Character, Part 1
Why We Are Dishonest, and What We Can Do About It

Dan starts this chapter with some interesting observations. I haven’t
independently confirmed them but I’m willing to give him the benefit
of the doubt and assume they’re right.

Total loss due to robbery:                          $525M
Average loss per robbery:                           $1,300
Total loss to robbery, burglary,
larceny-theft and auto theft:                       $16B
Workplace loss to theft and fraud:                  $600B
Loss from fraudulent insurance claims:              $24B
Underpayment of income tax (per IRS):               $350B
Fraudulent clothing returns to retail outlets:      $16B

Do we think about the people who perpetrate these crimes differently?
In our most fundamentalist moments we’d say no: a theft is a theft.
But do we actually act that way as a society? Let’s change tacks. Does
the self-concept of the guy or gal walking out of work with a package
of Post-It™ notes differ fundamentally from that of the folks speeding
away from the convenience store they’ve just knocked over? How about
the person who keeps the extra cash when they’ve been given too much
change? If we judge by how much attention and cash we pay to catch the
perpetrators, the answer seems clear. Does the amount of loss due to
the actions of people we do not generally think of as criminals mean
that many “honest” people cheat? Bring on the experiments!

Dan and a number of collaborators ran several varieties of one
experimental theme using his favorite subjects, college students from
around the country. The basic outline of the experiment is as follows.
A control group is established by giving a group of students some
timed test (50 general-information questions, 15 math problems,…) and,
allowing them no way to cheat, grading their papers and paying them
$0.10 per correct answer. The next group has to transfer their answers
from their worksheet to a grid on which the correct answers are
highlighted and write at the top of the grid sheet how many answers
they got correct. The worksheet and the grid are handed to the
experimenter, who then pays the student $0.10 for every correct answer
he or she claimed. Another group was treated the same as the previous
one but then told to shred their worksheet and grid and simply tell
the experimenter how many answers they had gotten right, at which
point the experimenter paid them as before. A final group, tested as
before, was told to shred their worksheet and grid and simply take the
correct amount of money from a jar containing about $100. Guess what
happened and write down your guess.

No, really – don’t look, guess first. The second experimental group,
which had handed in their worksheet and grid sheet, cheated by about
10%. How about the other groups? Have you got your guess recorded?

They cheated by about the same amount. Even when they could have
simply taken all the money, the students cheated just a little. And it
wasn’t the case that a few bad apples drove the results: the mean for
each group shifted but the distribution remained the same. Apparently
we are pretty good at rationalizing small amounts of dishonesty.

Now, same experiment (including the shredding of the worksheet and
the grid), but this time the experimental groups were asked to do a
little memory test or given a verbal reminder before the test. One
group was asked to write down a list of 10 books they had read in high
school, another was asked to write down as many of the Ten
Commandments as they could remember, a group at Princeton was told
that the test was governed by Princeton’s storied honor code, and yet
another was told that the tests were governed by MIT’s honor code
(there is, by the way, no such thing). Have you guessed what happened?

In all of the groups asked to remember something that reminded them of
an ethical benchmark, no one cheated. Now, these are successful
college students at some of the best schools in the country, so you’d
hope they’d had some underpinning in ethics, and of course there was
little at stake, so we could expect different results in different
groups and among these groups in different contexts; nonetheless this
experiment is striking. Simply being recently reminded that there is a
difference between ethical and unethical conduct changed their
behavior. Dan draws the comparison with the codes of conduct to which
professions of varying sorts used to subscribe. He asserts that as the
professional societies and identities have become weaker forces in the
practice of their respective crafts, we have passed that boundary he
talked about earlier, from the arena of social norms to that of market
norms, with a concomitant cost to society.

Unsurprisingly there’s a lot more to this chapter and, as always, I do
not want to spoil the fun you’ll have when you read the book yourself.
But I think that there’s enough here to provoke discussion. If most
“good, honest” people cheat a little what does that say about society
as a whole and how might we actually promote a turn to more ethical
behavior, or has that die been cast?

Chapter 12: The Context of Our Character, Part 2
Why Dealing With Cash Makes Us More Honest

The fundamental finding Dan reports here is encapsulated in the title.
He finds that, in his experiments, people are less likely to steal
cash than they are to run off with non-monetary items. Again, he cites
a number of experiments, but the sense of the lot can be summed up in
just one. When he put six-packs of Coke in MIT dorm refrigerators,
they all disappeared within 72 hours. When instead he put six
one-dollar bills in the same refrigerators, they all survived. In his
usual fashion he explored just how close to cash you had to be to see
this effect. If subjects were given tokens that they could almost
instantly exchange for cash, would cheating increase? (Yes, it turns
out.)

This is one of his most broadly provocative points. If tokens increase
cheating, how about even more abstract instruments?
* How about credit cards – lots of cheating there
* How about the anonymity granted by the net – lots of cheating
* How about stock options – lots of backdating there
* How about cooking the books – lots of cheating there

It seems really likely that Jeff Skilling and Ken Lay would never
simply have mugged folks and taken their cash, but somehow cooking the
books was OK. There is clearly, at least for some folks, a mechanism
that allows incremental dishonesty to creep in without triggering the
“If I do this I'll be a bad person” alarm.

On the whole I find this chapter a little depressing. It certainly
points out some things that, if they are truly extensible, should give
us significant reservations about the increasing abstraction of the
vessels of monetary exchange, a trend likely to continue. So I am left
at the end of this chapter with something I seldom faced in this book:
disquiet with no obvious remedy. I can more carefully examine my own
behavior and I can become more careful in my use of non-cash
instruments, but the entire chapter begs for broader experimentation.
I'll leave you with a quotation from H. L. Mencken. If you don't know
his work, dabble some in it. It's an excellent source of uncomfortable
truths.

“The difference between a moral man and a man of honor is that the
latter regrets a discreditable act, even when it has worked and he has
not been caught.”
– H. L. Mencken, ‘Prejudices: Fourth Series,’ 1924

Chapter 13: Beer and free lunches
What Is Behavioral Economics, And Where Are The Free Lunches?

There’s an old joke in economics – two economists are walking down the
street and they see a $20 bill on the ground. One begins to bend over
to reach for it, the other stops him saying, “If that were a real $20
bill someone would have already picked it up.”

OK – so there's a lot of the standard model in a nutshell. It's sort
of like that classic statement of the second law of thermodynamics:
“you can never win; you can, at best, break even.” There are several
more sets of experiments described in this chapter, of course. These
focus on restaurants, on people's behavior in ordering food and beer
(a recurring theme), and on their satisfaction with the outcomes. It
turns out that people order different things depending on whether they
are the first or the last in a group to order: the orders previously
given by group members influence what the remaining members order. You
can easily imagine at least two ways that might happen, a drive toward
conformity or a drive toward displaying uniqueness; I'm not going to
spoil your fun by describing the experiments and the details of the
results. Generally, however, it turns out that you are likely to enjoy
your selection more if you make up your own mind and stick to it.
This, then, is a source of a free lunch. Having information about the
largely subliminal processes that influence your decision making can
allow you to escape the traps such processes help us fall into. So,
order what you like and enjoy it more. The extra enjoyment is free.

Indeed the real point of the majority of this fun book is just that:
don't blindly believe that economic rationality prevails at all times.
Study real behavior, make the invisible processes visible to you and
stop being the tool of others – this is the real free lunch. Felix qui
potuit rerum cognoscere causas! (“Happy is he who could learn the
causes of things.”)

Dan Ariely
email : dan [at] predictablyirrational [dot] com / ariely [at] mit [dot] edu /
dandan [at] duke [dot] edu


RYSSDAL: Given our motives for revenge, is there a way that Congress
can shape a bill that’s going to make it acceptable to people whose
constituents really want to punish Wall Street?

ARIELY: Yes. So I think we need to include revenge in the bill. There
was discussion about capping CEO salaries, which I think went a small
way into revenge. But I think there are two ways to include revenge in
the bill. One way is to say every time we are going to nationalize
something, we are going to take the stock option of these people in
these banks, right? We will make them pay for nationalizing it. That’s
one approach. The second approach is to build into the system future
revenge. So another thing we can do is we can decide that the bill
will actually force us to create a new code of punishment for people
on Wall Street. And we have an opportunity here, with a meltdown
that’s so dramatic, that we feel that there is a need to go back and
try and reshape the whole system. And that might actually be very,
very useful in the long term.


by Daniel Kahneman

Many people think of economics as the discipline that deals with such
things as housing prices, recessions, trade and unemployment. This
view of economics is far too narrow. Economists and others who apply
the ideas of economics deal with most aspects of life. There are
economic approaches to sex and to crime, to political action and to
mass entertainment, to law, health care and education, and to the
acquisition and use of power. Economists bring to these topics a
unique set of intellectual tools, a clear conception of the forces
that drive human action, and a rigorous way of working out the social
implications of individual choices. Economists are also the
gatekeepers who control the flow of facts and ideas from the worlds of
social science and technology to the world of policy. The findings of
educators, epidemiologists and sociologists as well as the inventions
of scientists and engineers are almost always filtered through an
economic analysis before they are allowed to influence the decisions
of policy makers.

In performing their function as gatekeepers, economists do not only
apply the results of scientific investigation. They also bring to bear
their beliefs about human nature. In the past, these beliefs could be
summarized rather simply: people are self-interested and rational, and
markets work. The beliefs of many economists have become much more
nuanced in recent decades, and the approach that goes under the label
of “behavioral economics” is based on a rather different view of both
individuals and institutions. Behavioral economics is fortunate to
have a witty guru—Richard Thaler of the University of Chicago Business
School. (I stress this detail of his affiliation because the Economics
Department of the University of Chicago is the temple of the “rational-
agent model” that behavioral economists question.) Expanding on the
idea of bounded rationality that the polymath Herbert Simon formulated
long ago, Dick Thaler offered four tenets as the foundations of
behavioral economics:

Bounded rationality
Bounded selfishness
Bounded self-control
Bounded arbitrage

The first three bounds are reasonably self-evident and obviously based
on a plausible view of the psychology of the human agent. The fourth
tenet is an observation about the limited ability of the market to
exploit human folly and thereby to protect individual fools from their
mistakes. The combination of ideas is applicable to the whole range of
topics to which standard economic analysis has been applied—and at
least some of us believe that the improved realism of the assumption
yields better analysis and more useful policy recommendations.

Behavioral economics was influenced by psychology from its inception—
or perhaps more accurately, behavioral economists made friends with
psychologists, taught them some economics and learned some psychology
from them. The little economics I know I learned from Dick Thaler when
we worked together 25 years ago. It is somewhat embarrassing for a
psychologist to admit that there is an asymmetry between the two
disciplines: I cannot imagine a psychologist who could be counted as a
good economist without formal training in that discipline, but it
seems to be easier for economists to be good psychologists. This is
certainly the case for both Dick and Sendhil Mullainathan—they know a
great deal of what is going on in modern psychology, but more
importantly they have superb psychological intuition and are willing
to trust it.

Some of Dick Thaler’s most important ideas of recent years—especially
his elaboration of the role of default options and status quo bias—
have relied more on his flawless psychological sense than on actual
psychological research. I was slightly worried by that development,
fearing that behavioral economics might not need much input from
psychology anymore. But the recent work of Sendhil Mullainathan has
reassured me on this score as well as on many others. Sendhil belongs
to a new generation. He was Dick Thaler’s favorite student as an
undergraduate at Cornell, and his wonderful research on poverty is a
collaboration with a psychologist, Eldar Shafir, who is roughly my
son’s age. The psychology on which they draw is different from the
ideas that influenced Dick. In the mind of behavioral economists,
young and less young, the fusion of ideas from the two disciplines
yields a rich and exciting picture of decision making, in which a
basic premise—that the immediate context of decision making matters
more than you think—is put to work in novel ways.

I happened to be involved in an encounter that had quite a bit to do
with the birth of behavioral economics. More than twenty-five years
ago, Eric Wanner was about to become the President of the Russell Sage
Foundation—a post he has held with grace and distinction ever since.
Amos Tversky and I met Eric at a conference on Cognitive Science in
Rochester, where he invited us to have a beer and discuss his idea of
bringing together psychology and economics. He asked how a foundation
could help. We both remember my answer. I told him that this was not a
project on which it was possible to spend a lot of money honestly.
More importantly, I told him that it was futile to support
psychologists who wanted to influence economics. The people who needed
support were economists who were willing to be influenced. Indeed, the
first grant that the Russell Sage Foundation made in that area allowed
Dick Thaler to spend a year with me in Vancouver. This was 1983-1984,
which was a very good year for behavioral economics. As the Edge
Sonoma session amply demonstrated, we have come a long way since that
day in a Rochester bar.



Daniel Kahneman
email : kahneman [at] princeton [dot] edu

Sendhil Mullainathan
email : mullain [at] fas.harvard [dot] edu

Richard H. Thaler
email : richard.thaler [at] chicagogsb [dot] edu

Cass R. Sunstein
email : csunstei [at] law.harvard [dot] edu



“Libertarian Paternalism: Not an oxymoron. Libertarian paternalism is
a relatively weak, soft, and non-intrusive type of paternalism where
choices are not blocked, fenced off, or significantly burdened. A
philosophic approach to governance, public or private, to help homo
sapiens who want to make choices that improve their lives, without
infringing on the liberty of others. Addendum to skeptics: It is not a
pledge for bigger government, just for better governance.”

Richard Thaler has led a revolution in the study of economics by
understanding the strange ways people behave with their money.
by Roger Lowenstein  /  11 February 2001

It is possible that Richard Thaler changed his mind about economic
theory and went on to challenge what had become a hopelessly dry and
out-of-touch discipline because, one day, when a few of his supposedly
rational colleagues were over at his house, he noticed that they were
unable to stop themselves from gorging on some cashew nuts he’d put
out. Then again, it could have been because a friend admitted to
Thaler that, although he mowed his own lawn to save $10, he would
never agree to cut the lawn next door in return for the same $10 or
even more. But the moment that sticks in Thaler’s mind occurred back
in the 1970’s, when he and another friend, a computer maven named Jeff
Lasky, decided to skip a basketball game in Rochester because of a
swirling snowstorm. “But if we had bought the tickets already, we’d
go,” Lasky noted. “True — and interesting,” Thaler replied.

Thaler began to make note of these episodes — anomalies, he called
them — and to chalk them up on his blackboard at the University of
Rochester, where he was a young, unheralded and untenured assistant
professor. Each of these stories was at odds with neoclassical
economics as it was taught in graduate schools; indeed, each was a
tiny subversion of the prevailing orthodoxy. According to accepted
economic theory, for instance, a person is always better off with more
rather than fewer choices. So why had Thaler’s colleagues roundly
thanked him for removing the tempting cashews from his living room?
The lawn example was even more troubling. Perhaps you dimly remember
from Economics 101 that unlovely term, “opportunity cost.” The idea,
as your pointy-headed prof vainly tried to persuade you, is that
forgoing a gain of $10 to mow a neighbor’s lawn “costs” just as much
as paying somebody else to mow your own. According to theory, you
either prefer the extra time or the extra money — it can’t be both.
And the basketball tickets refer to “sunk costs.” No sense going to
the health club just because we have paid our dues, right? After all,
the money is already paid — sunk. And yet, Thaler observed, we do.
People, in short, do not behave like the pointy heads say they should.

In the ordered world of economics, this rated as a heresy on the scale
of Galileo. According to the standard or neoclassical school
(essentially a 20th-century updating of Adam Smith), people, in their
economic lives, are everywhere and always rational decision makers;
those who aren’t either learn quickly or are punished by markets and
go broke. Among the implications of this view are that market prices
are always right and that people choose the right stocks, the right
career, the right level of savings — indeed, that they coolly adjust
their rates of spending with each fluctuation in their portfolios, as
though every consumer were a mathematician, too. Since the 1970’s,
this orthodoxy has totally dominated the top universities, not to
mention the Nobel Prize committee.

Thaler spearheaded a simple but devastating dissent. Rejecting the
narrow, mechanical homo economicus that serves as a basis for
neoclassical theory, Thaler proposed that most people actually behave
like . . . people! They are prone to error, irrationality and emotion,
and they act in ways not always consistent with maximizing their own
financial well being. So serious was Thaler’s challenge that Merton
Miller, the late Nobelist and neoclassical deity, refused to talk to
him; Thaler’s own thesis adviser lamented that he had wasted a
promising career on trivialities like cashews. Most economists simply
ignored him.

But the anomalous behaviors documented by Thaler and a band of fellow
dissenters, including Yale’s Robert Shiller and Harvard’s Lawrence
Summers, Clinton’s last treasury secretary, have grown too numerous to
ignore. And the renegades, though still a minority, have embarked on a
second stage: an attempt to show that anomalies fall into recognizable
and predictable patterns. The hope is that by illuminating these
patterns, behavioral economics, as it has come to be called, will
yield a new understanding of the economy and markets. Behaviorism,
says Daniel McFadden, the recent Nobel laureate, “is a fundamental
re-examination of the field. It’s where gravity is pulling economic
theory.”

Thaler, after years of being shunned, is now a popular, highly paid
professor at the University of Chicago Graduate School of Business,
the traditional nerve center of neoclassicism. His increasing
following is owed in no small part to the fact that behaviorism,
unlike so much of economics, is fun. Although prewar economists like
John Maynard Keynes were literary artists, most writing in the field
since the 70’s has been obtuse and highly mathematical, all but
inaccessible to the lay person. By contrast, Thaler’s papers are rich
with intuitive gems drawn from sports, business and everyday life. In
one paper, he pointed out that people go across town to save $10 on a
clock radio but not to save $10 on a large-screen TV. It’s a seemingly
obvious point — and also a direct contradiction of rationalist theory.

Thaler loves pointing out that not even economics professors are as
rational as the guys in their models. For instance, a bottle of wine
that sells for $50 might seem far too expensive to buy for a casual
dinner at home. But if you already owned that bottle of wine, having
purchased it earlier for far less, you’d be more likely to uncork it
for the same meal. To an economist (a sober one, anyway) this makes no
sense. But Thaler culled the anecdote from Richard Rosett, a prominent
economist.

A thickset man of 55, Thaler has a sharp wit and a voluble ego. Many
assume that his years in the academic wilderness have made him
defensive; Thaler denies it. “The last thing I want to do is to sound
embittered about having to struggle,” he told me, easing his Audi
around Lake Michigan toward the Gothic stone campus. But Thaler
doesn’t so much debate opponents; he skewers them. The British
economist Ken Binmore once proclaimed at a seminar that people evolve
toward rationality by learning from mistakes. Thaler retorted that
people may learn how to shop for groceries sensibly because they do it
every week, but the big decisions — marriage, career, retirement —
don’t come up that often. So Binmore’s highbrow theories, he
concluded, were good for “buying milk.”

I met Thaler two days after the election, and he was already
predicting that the country would be willing to accept Bush as the
winner, because “people have a bias toward the status quo.” I asked
how “status-quo bias” affects economics, and Thaler observed that
workers save more when they are automatically enrolled in savings
programs than when they have to choose to participate by, say,
returning a form. Standard theory holds that workers would make the
most rational decision regardless.

Savings is an area where Thaler thinks he can have a big impact. Along
with Shlomo Benartzi, a collaborator at U.C.L.A., Thaler cooked up a
plan called Save More Tomorrow. The idea is to persuade employees to
commit a big share of future salary increases to their retirement
accounts. People find it less painful to make future concessions
because pain deferred is, to an extent, pain denied. Therein lies the
logic for New Year’s resolutions. Save More Tomorrow was tried with a
Chicago company, and workers tripled their savings within a year and a
half — an astounding result. “This is big stuff,” Thaler says. He is
shopping the plan around to other employers and predicts that
eventually it could help raise the country’s low savings rate.

Though Thaler, who comes across as a middling, Robert Rubin-style
Democrat, plays down the connection, such results could provide
ammunition to liberals who think government bashing has gone too far.
Since the Reagan era, a mantra for office seekers is that people know
what is best for themselves. Generally, yes; but what if not always,
and what if they err in predictable ways? For instance, Thaler has
found that the number of options on a 401(k) menu can affect the
employees’ selections. Those with a choice of a stock fund and bond
fund tend to invest half in each. Those with a choice of three stock
funds and one bond fund are likely to sprinkle an equal amount of
their savings in each, and thus put 75 percent of the total in stocks.
Such behavior illustrates “framing” — decisions being affected by how
choices are positioned. Political pollsters and advertisers have known
this for years, though economists are just coming around.
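The 401(k) result is a mechanical consequence of what behavioral economists call the 1/n heuristic: employees spread savings equally over whatever funds the menu contains, so stock exposure is set by the menu's composition rather than by any risk preference. A minimal sketch (function names and fund labels are mine, invented for illustration; the 50% and 75% splits are the article's):

```python
# The 1/n ("naive diversification") heuristic: split savings equally
# across every fund on the menu, regardless of what the funds hold.

def naive_allocation(menu):
    """Allocate an equal share of savings to each fund on the menu."""
    share = 1.0 / len(menu)
    return {fund: share for fund in menu}

def stock_share(menu):
    """Fraction of total savings that ends up in stock funds."""
    return sum(s for fund, s in naive_allocation(menu).items()
               if fund.startswith("stock"))

# One stock fund + one bond fund -> 50% in stocks.
print(stock_share(["stock_a", "bond_a"]))                        # 0.5
# Three stock funds + one bond fund -> 75% in stocks.
print(stock_share(["stock_a", "stock_b", "stock_c", "bond_a"]))  # 0.75
```

The point is that no risk preference appears anywhere in the calculation; the allocation is framed entirely by whoever designed the menu.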

Framing has big implications for the debate on privatizing Social
Security. Neoclassicists say that people should manage their own
retirement accounts, and that the more choices they have the better.
Thalerites are not so sure. “If Thaler is right, it makes the current
dogmatic antipaternalism really doubtful,” says Cass Sunstein, a
prominent legal scholar at the University of Chicago.

Thaler, who grew up in Chatham, N.J., the son of an actuary, wrote his
doctoral thesis at the University of Rochester on the economic “worth”
of a human life (public planners tackle this morbid theme frequently,
for instance, in determining speed limits). Thaler conceived a clever
method of calculation: measuring the difference in pay between life-
threatening jobs like logging and safer lines of work. He came up with
a figure of $200 a year (in 1967 dollars) for each 1-in-1,000 chance
of dying.
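The arithmetic behind that thesis figure is worth making explicit: the implied "value of a statistical life" is just the wage premium divided by the risk increment it compensates for. A back-of-the-envelope sketch in Python (the helper name is mine, invented for illustration; the $200 and 1-in-1,000 figures are the article's):

```python
# Value-of-statistical-life arithmetic as described in the article:
# workers demand an extra $200/year (1967 dollars) for each additional
# 1-in-1,000 annual chance of dying on the job.

def implied_value_of_life(wage_premium, risk_increment):
    """Wage premium per unit of mortality risk = implied value of a life."""
    return wage_premium / risk_increment

# $200 premium for a 0.001 risk increment implies $200,000 per life.
print(implied_value_of_life(200, 1 / 1000))  # 200000.0 (1967 dollars)
```

The disconnect Thaler noticed is that the same people who reveal this $200,000 figure through their wages claim they would demand a million dollars for a *new* 1-in-1,000 risk.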

Sherwin Rosen, his thesis adviser, loved it. Thaler did not. He had
been asking friends about it, and most insisted that they would not
accept a 1-in-1,000 mortality risk for anything less than a million
dollars. Paradoxically, the same friends said they would not be
willing to forgo any income to eliminate the risks that their jobs
already entailed. Thaler decided that rather than rationally pricing
mortality, people had a cognitive disconnect; they put a premium on
new risks and casually discounted familiar ones.

For a while, Thaler regarded such anomalies as mere cocktail-party
fodder. But in 1976 he happened upon the work of two psychologists,
Daniel Kahneman and the now-deceased Amos Tversky, who had been
studying many of the same behaviors as Thaler. The two had noticed a
key pattern: people are more concerned with changes in wealth than
with their absolute level — a violation of standard theory that
explained many of Thaler’s anomalies. Moreover, most people are “loss
averse,” meaning they experience more pain from losses than pleasure
from gains. This explains why investors hate to sell losers. For
Thaler, their work was an epiphany. He wrote to Tversky, who plainly
encouraged him. “He took me seriously,” Thaler recalled, “and because
of that, I started taking it seriously.”

Thaler began designing experiments to test his ideas. In one, Thaler
told lab subjects to imagine they are stranded on a beach on a
sweltering day and that someone offers to go for their favorite brand
of beer. How much would they be willing to pay? Invariably, Thaler
found, subjects agree to pay more if they are told that the beer is
being purchased from an exclusive hotel rather than from a rundown
grocery. It strikes them as unfair to pay the same. This violates the
bedrock principle that one Budweiser is worth the same as another, and
it suggests that people care as much about being treated fairly as
they do about the actual value of what they’re paying for. Although
“fairness” is generally ignored by neoclassicists, it’s probably a
reason why companies do not lower salaries when they encounter tough
times — perversely, laying off workers is considered more fair.

Thaler’s first paper on anomalies was rejected by the leading economic
journals. But in 1980, a new publication, The Journal of Economic
Behavior and Organization, was desperate for copy, and Thaler’s
“Toward a Positive Theory of Consumer Choice” saw the light of day. “I
didn’t have any data,” he admits. “It was stuff that was just true.”

The response from fellow economists was zero. But the article
eventually caught the eye of Eric Wanner, a psychologist at the Alfred
P. Sloan Foundation in New York. Wanner was itching to get economists
and psychologists talking to one another, and Thaler took the bait.
“He was the first economist who thought hard about the implications
for economics,” Wanner says. “The reaction of mainstream economists
was defensive and hostile. They considered it an attack — an
apostasy.” Wanner, who became president of the Russell Sage
Foundation, started financing behavioral economics, and Thaler became
the informal leader, organizing seminars and summer workshops. In
effect, he turned an idea into a movement. “Dick was like a taxonomist
who goes out and collects embarrassing specimens,” Wanner says. “He
learned that to get anyone to pay attention to him he had to develop a
portfolio of facts that he could be entertaining about and that
economists couldn’t sweep under the rug.”

Thaler’s most original contribution was “mental accounting” — an
extension of Kahneman and Tversky’s “framing” principle. “Framing”
says the positioning of choices prejudices the outcome. “Mental
accounting” says people draw their own frames, and that where they
place the boundaries subtly affects their decisions. For instance, a
poker player who accounts for each day separately may become bolder at
the end of a winning night because he feels he is playing with “house
money.” If he accounted for each hand separately, he would play the
first and last hands the same.

Most people sort their money into accounts like “current income” and
“savings” and justify different expenditures from each. They’ll gladly
blow their winnings from the office football pool, a “frivolous”
account, even while scrupulously salting away every penny of their
paychecks.

Thaler and a trio of colleagues went on to document that cabdrivers
stop working for the day when they reach a target level of income.
(Each day’s “account” is separate.) This means that — quite
nonsensically — they work shorter hours on more lucrative days, like
when it’s raining, and longer hours on days when fares are scarce! In
a sense, investors who pay attention to short-term fluctuations are
like those cabbies; if they toted up their stocks less frequently,
they would be better investors. Thaler went so far as to suggest to an
audience at Stanford that investors should be barred from seeing their
portfolios more than once every five years.

Such irreverence reinforced the view among economists that Thaler
could be safely ignored. His anecdotes were fuzzy science, they said,
and examples like the cabbies were easy pickings. Since there is no
way for a third party to profit from a cabbie’s mistake, it’s not
surprising that he would make one. Thaler knew the criticism had
merit, and that to be taken seriously, he had to demonstrate
irrationalities in financial markets, which are the purest embodiment
of neoclassicism. In the markets, one person’s bad decision can be
offset by someone else’s smart one. Across the markets, rationality
should reign.

Thaler set out to prove that it did not. His first effort, a 1985
paper with Werner De Bondt, his doctoral student, showed that stocks
tend to revert to the mean — that is, stocks that have outperformed
for a sustained period are likely to lag in the future and vice versa.
This was a finding that Chicago School types couldn’t ignore —
according to their theory, no pattern can be sustained, since if it
did, canny traders would try to profit from it, correcting prices
until the pattern disappeared.

Then, in 1987, Thaler was hired to write a regular Anomalies column
for a new economics journal, giving him a widespread audience among
his peers. That same year, the stock market crashed 23 percent on a
single day. Thaler could hardly have imagined better proof that the
market was not, well, perfectly rational. More economists began to
mine the data, and by the 90’s there was a rich literature of market
anomalies, documenting, for example, that people can consistently make
money on stocks that trade at low multiples of earnings, or on
companies that signal changes by doing things like hiking dividends.
Documenting anomalies became a popular pastime from Berkeley to

Thaler still has plenty of critics. The harshest one is right upstairs
from his office at Chicago, the curmudgeonly Eugene Fama, a longtime
advocate of the efficient-market school. “What Thaler does is
basically a curiosity item,” Fama snipes. “Would you be surprised that
every shopper doesn’t shop at the lowest prices? Not really. Does that
mean that prices aren’t competitive?”

Thaler periodically invites Fama into his class to present the other
side, but Fama has not returned the gesture and, indeed, sounds bitter
that behavioral finance is getting so much attention. “One question
that occurs to me,” Fama says, “is, ‘How did some of this stuff ever
get published?’” The objection raised most often, from Fama and
others, is that if Thaler is right and the market is so screwy, why
wouldn’t more fund managers be able to beat it? A variation of this
theme is that if behavioral economics, for all its intuitive appeal,
can’t help people make money, what good is it?

Thaler, actually, is a director in a California money management firm,
Fuller & Thaler Asset Management, which, according to figures it
provided, has been beating the market handily since 1992. The firm
tries to exploit various behavioral patterns, like “categorization”:
when Lucent Technologies was riding high, people categorized it as a
“good stock” and mentally coded news about it in a favorable way.
Lately, Lucent has become a “bad stock.” But Thaler, who does not get
involved in picking stocks, stops short of suggesting that investors
versed in his research can beat the market. Mispricings that spring
from anomalies are hard to spot, he says, particularly when the people
looking for them are prone to their own behavioral quirks.

If this sounds muted, it may be because Thaler is ready to declare
victory and join the establishment. The neoclassical model, he admits,
is a fine starting point; it’s misleading only when regarded as a
perfect or all-encompassing description. People aren’t crazy, he adds,
but their rationality is “bounded” by the tendencies that Kahneman,
Tversky, himself and others have studied. What he hopes is that a
future generation will resolve the schism by building behavioral
tendencies into a new, more flexible model.

For now, Thaler is still looking for new miniature applications
wherever he can find them, like on the basketball court recently.
Thaler studied games in which a team trails by 2 points, with time
left for just one shot. What to go for, 2 points or 3? A 2-point shot
succeeds about half the time, a 3-pointer about 33 percent of the
time. But since a 2-point basket would only tie the game (and force an
overtime, in which the team has a 50-50 chance of winning), going for
a 3-pointer is a superior strategy. Still, most coaches go for 2. Why?
Because it lowers the risk of sudden loss. Coaches, like the rest of
us, do more to avoid losing than they do to win. You won’t find an
explanation for that in the mechanical homo economicus of theory. But
it has everything to do with folks Thaler thinks are much more
relevant to the economy — Homo sapiens.
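The end-of-game arithmetic can be checked in a few lines (variable names are mine; the probabilities are the article's round numbers):

```python
# Down 2 points with time for one shot:
#  - go for 2: make it ~50% of the time, then win a 50-50 overtime
#  - go for 3: make it ~33% of the time and win outright
p_two, p_three, p_overtime = 0.50, 0.33, 0.50

win_going_for_two = p_two * p_overtime  # 0.5 * 0.5 = 0.25
win_going_for_three = p_three           # 0.33

print(win_going_for_two, win_going_for_three)  # 0.25 0.33
```

So the 3-pointer wins roughly a third of the time versus a quarter, yet coaches take the 2: loss aversion in a box score.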
