Predictably Rational

Posted on

Bourree Lam in The Atlantic rounds up some recent thinking on the difference between round-number pricing and the more common $X.99 style of pricing.

Of the lunch spots near my office, the chain Le Pain Quotidien's menu always demands more of my attention than others. The reason that the menu at Le Pain Quotidien is unusual isn't because they serve open-faced sandwiches or that I'm not sure what kind of cheese Fourme d’Ambert is, but rather that their prices aren't formatted like those of other shops. Organic egg frittata costs $12.00, curried chicken salad tartine is $12.25, a large cappuccino is $5.35. In a world where most prices end with ".99", Le Pain Quotidien's prices make my brain hurt.

The "undercover" economist Tim Harford (he has a book and writes column at the Financial Times by that title) has explained the theories for why prices in our world end in "9." First is something called the left-digit effect, which suggests that consumers just can't be bothered to read to the end of prices. The mind puts the most emphasis on the number on the far-left, so even though $59.99 is closer to $60, it's the "5" that registers. The other theory is that prices ending in ".99" signal a deal to consumers. In short, consumers seem to like prices that end in "9," and experiments say that pricing things this way increases purchases.

This topic is a favorite for those who like to geek out on the subtleties of heuristics and biases (a foundation of behavioral economics), but the idea that consumers are "lazy" when considering prices, that they simply don't notice that $9.99 is a lot like $10.00, seems facile.

Despite the ubiquitous "9" pricing practice, most numbers used in everyday life are whole numbers. It's not common to say, "just give me 5.27 minutes." But why do Le Pain Quotidien's prices still make my mind reel? A new study in the Journal of Consumer Research might have the answer. Researchers found that shoppers deal with pricing information differently when prices feature round numbers ("5"), as opposed to non-round ones ("4.99"). When something costs $100, consumers tend to rely on their feelings, whereas when something has an irregular price—such as $98.67—consumers have to use reason to compute whether it's a good price.

Perhaps the exact opposite effect is at work, and irregular pricing actually makes consumers "slow down" and pay more attention to the price, leading to a higher likelihood of a sale.

Are Capitalism and Altruism Compatible?

Posted on

Now there’s a light topic.

Or perhaps to state it another way, is being altruistic compatible with acting in your own self-interest? Aren’t those two things diametrically opposed?

Or to get to the crux of the matter, can a market-based approach be compatible with the goal of advancing the common good? Short answer: yes. For my stab at a long answer, see below. Of course, it depends on how you look at it.

I’ve been meaning to follow up on some of the ideas raised by the Prisoner’s Dilemma issues in the last post, so let’s start by rushing to the conclusion, with a quote from Mario Henrique Simonsen:

Moreover, as game theorists have shown, the ruthless pursuit of self-interest often results in a comparative loss for everyone. Game theorists often appeal to what is known as the Prisoner’s Dilemma. Typically, the Prisoner’s Dilemma provides an example of a situation in which two people are faced with a choice about whether to act in a self-interested way or altruistically, and the example shows that both come out ahead if both act altruistically. Peter Singer gives an interesting variation of this dilemma in The Expanding Circle. Imagine two early human hunters who are confronted with a saber tooth tiger. If the tiger chases them, the tiger will only be able to chase one of them but will have at least a ninety percent chance of catching and killing the one that is chased. If both stand their ground together, there is only a very small chance that the tiger could kill either of them. If both hunters are narrowly self-interested, they will both flee in order to save their own skin and there is a fifty-fifty chance for each hunter of being caught and killed. If, on the other hand, both are altruistic and both stay to help the other hunter, then in fact both will benefit. In some situations, in other words, individuals actually derive more benefit by not being self-interested!

Let’s build our own saber-tooth model, then. There are ten people who believe in cooperation in one village, and ten who act only on a ruthless and caricatured version of self-interest in another village, on the other side of the river. To each village, along comes a hungry saber-toothed tiger. Assume that the tiger only needs to eat one person a day to be happy. Assume that the tiger is faster than any given person. Assume that the tiger is really tough to kill, but five people could do it together if they try hard enough.

Day one. The tiger comes. In the first village, someone gets eaten because they are caught by surprise. In the other, everyone runs instantly, and the slowest is killed. In the first village, they get together; even the fastest runners agree that everyone will work together, and the next day they gang up and try to kill the tiger. They may lose another one or two, but eventually the tiger is dead. In the other village, the tiger comes each day and kills the slowest runner. Two weeks later, every single person is gone.
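To make the toy model concrete, here’s a minimal simulation sketch in Python. The village size and the tiger’s one-person-a-day appetite come from the setup above; the odds of losing an extra hunter or two, and the two-week cutoff, are my own illustrative assumptions.

```python
import random

VILLAGE_SIZE = 10   # ten people per village, as in the setup above
TIGER_DAYS = 14     # the rough two-week horizon

def cooperative_village():
    """Lose one person to the surprise attack, then gang up and kill the tiger on day two."""
    alive = VILLAGE_SIZE - 1                # day one: someone is caught by surprise
    hunt_losses = random.choice([0, 1, 2])  # "they may lose another one or two" (my guess at the odds)
    return alive - hunt_losses              # the tiger is dead; the losses stop here

def selfish_village():
    """Everyone runs every day; the tiger eats the slowest runner until nobody is left."""
    alive = VILLAGE_SIZE
    for _ in range(TIGER_DAYS):
        if alive > 0:
            alive -= 1                      # slowest remaining runner is caught each day
    return alive

if __name__ == "__main__":
    random.seed(1)
    print("cooperative village survivors:", cooperative_village())  # 7, 8, or 9
    print("selfish village survivors:    ", selfish_village())      # 0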

Aha.

So one responds: banding together is not altruism, obviously, since if we don’t do it we all die. The other village (who act only out of ruthless self-interest) would have done the same thing, the argument goes; they’d just have done it for a different reason, because it’s also in their self-interest.

But how? Nobody knew the tiger was coming back the next day. And any given person could have simply run, hoping that five others would manage to kill the tiger, thereby avoiding all risk themselves.

OK, so how do you deal with a system of rewards and penalties that is infinitely more vague and complex than this minor example? People can’t predict the future; they have to make assumptions. You cannot make an absolute case for self-interest against altruism, because you cannot absolutely define which is which.

In short, one learns how to balance altruism with self-interest. Or more accurately, one learns that rational self-interest, and hence market-based solutions, isn’t at odds with altruism, with community, or with banding together to solve common problems.

If you view the above example through the prism of markets, as an example of a market rendering judgment, the results of the invisible hand are clear.

In one group, everyone banded together, saw the oncoming existential threat to their entire community, and decided to do something about it. It didn’t necessarily require the authority of command; all it took was the collective realization that the community would live or die by tackling the problem together. In the other group, people refused to recognize the threat they faced, or argued that it wasn’t rational for them individually to expend energy to face it. They ceased to exist.

When people talk about capitalism and markets, that’s just another way of talking about incentives.

There’s no rule that says that self-interest has to be short-sighted or blind. There’s nothing magical about markets. But there is something very powerful about them: they present incentives, and with lightning speed they channel resources toward those who adapt and thrive most efficiently.

As we face upcoming existential threats (global climate change, for example), it’s comforting to remember that we’re all descendants of the first village, by definition. The second village didn’t make it.

Two Envelopes

Posted on

I was just trying to refresh my knowledge of Bayesian probability theory (don’t ask) and I came across something I hadn’t seen before: the Two Envelopes Problem. I’m a fan of the classic game theory paradoxes, the most commonly known being the Prisoner’s Dilemma.

The latter, as trite as it might seem after endless repetition, is still really a cornerstone of game theory, and the usual entree into discussions of how game theory can apply to economics and strategic financial decisions. For me, at least, it was the first introduction to the Nash Equilibrium, a state where no player can unilaterally benefit by changing strategy.

What’s so eye-opening when you dive into the Nash Equilibrium after a heavy indoctrination in efficient markets theory is the realization that a perfectly understandable system with clear rules, where each player is acting perfectly rationally, can produce an outcome that’s decidedly sub-optimal. It’s a dilemma indeed, which is why cops have been using it for centuries. No matter what strategy the other prisoner chooses, it’s always optimal to confess, even though the best joint outcome is clearly for both prisoners to keep their mouths shut.
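To make the "always confess" logic concrete, here’s a minimal sketch using the standard textbook payoff matrix; the specific prison terms are my own illustrative choice, not anything from this post. It simply checks the best response to each of the other prisoner’s moves.

```python
# Payoffs are years in prison for (me, the other prisoner), lower is better,
# indexed by (my_move, other_move). The numbers are standard textbook choices.
PAYOFFS = {
    ("quiet",   "quiet"):   (1, 1),    # both keep their mouths shut
    ("quiet",   "confess"): (10, 0),   # I stay quiet, the other confesses
    ("confess", "quiet"):   (0, 10),
    ("confess", "confess"): (5, 5),    # both confess
}

def best_response(other_move):
    """My move with the fewest years in prison, given what the other prisoner does."""
    return min(("quiet", "confess"),
               key=lambda my_move: PAYOFFS[(my_move, other_move)][0])

for other in ("quiet", "confess"):
    print(f"if the other prisoner plays {other!r}, my best response is {best_response(other)!r}")

# Confessing is the best response either way (a dominant strategy), yet mutual
# confession costs 5 years each, far worse than the 1 year each from staying quiet.
```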

Thoughtful and prudent market players acting rationally don’t always produce efficient (or Pareto-Optimal, to use the jargon) results. Interesting. We’ll have to get back to that one.

But back to the envelopes. To summarize: assume a player is given two identical envelopes, each containing a sum of money, and one envelope has twice as much as the other. The player selects one envelope and keeps whatever is in it, but as soon as they choose, and before they open the envelope, they are offered the option of switching.

Should they take the offer?

If the envelope in your hand holds X amount of money, then the other envelope holds either 0.5X or 2X, each with probability one half. Now, as any self-respecting gambler would, you run an expected value calculation and determine that switching pays off at 0.5X half the time and 2X the other half, for an expected value of 0.5(0.5X) + 0.5(2X) = 1.25X.

So switching, on average, will yield a 25% better return than not switching. Of course, once you’ve switched and have the other envelope in your hand, you start the problem over from the beginning. And now it makes sense to switch again, for exactly the same reason. And again, and again, and again.

Using math, we’ve proven that switching to the other envelope is always the better choice, no matter which envelope is in your hand. Sounds like the statistician’s version of proving that the grass is always greener.
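For what it’s worth, here’s a minimal simulation sketch of the game (the envelope amounts, the trial count, and the fixed pair of values are my own assumptions): it compares always switching against never switching over many rounds. Both strategies average out the same, which only sharpens the question of where the 1.25X argument goes wrong.

```python
import random

def play_round(switch, low=100):
    """One round: the envelopes hold `low` and `2 * low`; pick one at random, optionally switch."""
    envelopes = [low, 2 * low]
    random.shuffle(envelopes)
    chosen, other = envelopes
    return other if switch else chosen

def average_payoff(switch, trials=100_000):
    """Average winnings over many rounds for a fixed strategy."""
    return sum(play_round(switch) for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(0)
    print("never switch: ", average_payoff(switch=False))
    print("always switch:", average_payoff(switch=True))
    # Both averages land near 150, i.e. (100 + 200) / 2; the simulated game
    # shows no 25% edge, which is exactly what makes the 1.25X argument a paradox.
```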

If you’re waiting for the punch line, there isn’t one. That’s why it’s called a problem.