As Featured on Forbes
When Behavioral Economics Backfires — 3 Dangers to Avoid
If you’re reading this article, you’re likely familiar with, or at least intrigued by, behavioral economics (BE). The study of how humans really make decisions has generated a lot of press and brought social science into the mainstream through popular books such as “Nudge,” “Predictably Irrational” and “Thinking, Fast and Slow.”
At a glance, it sounds easy enough — just go to Wikipedia, choose any of the hundreds of interventions and you’ll get people to behave just the way you want them to. Wrong. At best, that won’t work; at worst, you’ll sabotage results and your good name.
With this article, you'll discover three pitfalls to avoid when using behavioral economics.
At its core, behavioral economics recognizes that people sometimes ignore rational conclusions and make choices that are not in their own best interests. If you’ve ever hit snooze instead of going to the gym, binge-watched Breaking Bad on a Sunday night or bought yet another pair of new shoes instead of saving for a rainy day, you know that even if you know better, you don’t always do better. (Or, as behavioral scientists like to say, “knowledge doesn’t equal action.”)
Pitfall #1: Bad Experimental Methods Yield Bad Results
Silicon Valley founders know that Big Data and A/B testing are important. However, in my experience, they have little to no training in running experiments. I believe that the startup culture of “iteration” and “failing fast” serves as a replacement for proper controls. That may be enough to get a company from nothing to “something that works well enough,” but many companies don’t iterate fast enough, thoroughly enough or objectively enough to learn what actually works in the long term.
For example, I have seen many surveys that don’t correct for bias in how they ask questions or collect answers, which can produce misleading data. There are A/B “tests” that neglect to define and test a single hypothesis. Most damaging of all, big, actionable conclusions are drawn from tiny, qualitative sample sizes.
Behavioral science practitioners seek to overcome these problems. Many behavioral economists are either psychologists trained in careful experimentation and rigorous data analysis or economists trained in running controlled experiments with real people.
Recognize that bad experimentation can harm your company. To do it correctly, you need to:
- Design a controlled intervention. Treat this like medical research: Give some people a sugar pill and some people the medicine so that you can isolate the variables you want to measure.
- Use the right sample size and statistical power. Every test you run requires a different calculation, but if you know one or two pieces of the puzzle, a sample size calculator can do the rest.
- Analyze the data in a meaningful way. Strive to understand why those specific variables changed people’s behavior, not just what they did.
Pitfall #2: When Nudging Goes Bad
Sludge is a selfish nudge: a behavioral intervention that benefits only the company, not the user. When behavioral economics first left academia for the real world, its practitioners wanted to use nudges to help people overcome friction and make better decisions. (This is evidenced by the many mission statements and “about us” pages of behavioral science companies.) That’s why you see so many good nudges helping people make better decisions about their finances, their health or the environment.
But the real world is also home to those with less noble goals: Some companies try to use BE for purely selfish reasons. That misuse is a bummer, and something we take seriously at my company. For example, consider how Uber got caught misusing BE to trick its drivers.
Pitfall #3: Mistaking BE For A Silver Bullet And Shooting Yourself In The Foot
The most famous BE experiments carry a lot of wow factor, but those big effects often come out of carefully controlled lab environments. Real human decision making has many more inputs and is far more subtle. Scientists do study the subtle details, but those studies aren’t sexy enough to make it onto slide decks; they are buried in journals that only people like me read.
Here’s a cautionary tale of what happens when you release a BE intervention into the world at random: To an economist, a 1-in-100 chance of winning $100 is worth the same as a sure bet that pays $1. Eager to motivate its employees, United Airlines implemented what it expected to be a great BE intervention. Instead of paying staff their regular bonuses, United pooled the money it was already paying out and introduced a lottery: A few lucky employees would win a large cash prize, a luxury car or a vacation, while everyone else would get nothing. Management assumed that framing the reward as a big prize would be more motivating than a small, regular bonus.
Not quite. Staff and unions were up in arms, and within days management had to revert to the original scheme. Here’s a BE analysis of the problem: United considered framing but failed to consider loss aversion. More specifically, it neglected a little-discussed but well-documented principle: While people prefer a lottery when they stand to lose money, they almost always choose a sure bet when they stand to gain. This holds true even when the expected value of the lottery dwarfs the value of the smaller sure thing.
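The economist’s framing in the story above is plain expected-value arithmetic. Here is a minimal sketch, using the illustrative 1-in-100-for-$100 numbers from the example rather than United’s actual bonus figures, of why the two schemes are equivalent on paper yet land very differently in practice:

```python
# Expected value of the two bonus schemes (illustrative numbers, not
# United's real figures).
p_win, prize = 1 / 100, 100   # lottery: 1-in-100 chance of winning $100
sure_bonus = 1                # sure bet: $1 paid to everyone

lottery_ev = p_win * prize    # 0.01 * 100 = 1.0
print(lottery_ev == sure_bonus)  # True: identical on paper
# ...yet in the domain of gains, people overwhelmingly prefer the
# certain payment to a lottery of equal expected value (the certainty
# effect), which is effectively what United's staff demanded.
```

The design lesson: equal expected value does not mean equal perceived value, so never swap a sure gain for a gamble without testing it first.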
There is no silver bullet. You can’t assume that behavioral tendencies apply unconditionally, and you can’t overgeneralize findings from one context to the next.
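To make the sample-size advice under Pitfall #1 concrete, here is a hedged sketch of an a-priori power calculation for a two-proportion A/B test. The function name and the 10%-to-12% conversion-rate numbers are my own illustrative assumptions, not figures from this article; the formula is the standard two-proportion z-test approximation.

```python
# Minimal a-priori sample-size sketch for an A/B test comparing two
# conversion rates (two-proportion z-test approximation). Uses only the
# Python standard library.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants needed per arm to detect a shift from p1 to p2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # desired statistical power
    p_bar = (p1 + p2) / 2        # pooled proportion under the null
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift from a 10% to a 12% conversion rate at conventional
# significance and power takes far more traffic than most quick
# "fail fast" tests ever collect:
print(sample_size_per_group(0.10, 0.12))  # roughly 3,800 users per arm
```

The point is the order of magnitude: a modest two-point lift requires thousands of users per variant, which is why tiny, uncontrolled samples so often yield conclusions that don’t replicate.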