Suppose you are deciding between several competing strategies, products, or services. The choice could be about any number of things: responses to a crisis, funds for your retirement, versions of the latest iPhone, or something else entirely. What is important is that each option comes with its own set of benefits and drawbacks, or varying levels of risk and reward. When faced with such choices, what is your decision-making process?

Well, if the last half-century of cognitive science research has taught us anything, it’s that your process is probably complicated, and even you may not really be able to explain it. In many ways, scientists don’t fully understand such processes either. But PhD student Logan Walls (Cognition & Cognitive Neuroscience) hopes his dissertation will get them closer.

To understand Walls’s project, however, we first need to learn about some known quirks of human decision making. First, consider this variation on a classic thought experiment (1): A natural disaster is approaching your town, and you must decide how to evacuate its population. Time is running out, and there are no solutions that can save everyone. Instead, you are forced to choose between two options. Pick one and remember your answer.

Scenario 1:
    Plan A-1: Guarantees that you save 3,000 lives.
    Plan B-1: Provides an 80% chance that you save 4,000 lives; otherwise (20%), no one is saved.

Now consider a slightly different scenario. A natural disaster is still approaching, and you’re still in charge. But this time, your choices are as follows: 

Scenario 2:
    Plan A-2: Guarantees that 3,000 people will die.
    Plan B-2: Provides an 80% risk that 4,000 people will die; otherwise (20%), no one dies.

So which options did you choose? As it turns out, most people pick Plan A-1 in the first scenario. People tend to go with the sure bet here. The other alternative, an 80% chance of saving an additional 1,000 lives, is not enough to outweigh the (admittedly relatively low) risk of saving no one at all. Conversely, most people pick Plan B-2 in the second scenario. They take the riskier gamble here. For most people, the idea of condemning 3,000 people to certain death feels less acceptable than taking a (relatively long) shot at saving them all.

But here’s the thing: in both scenarios, there is an answer that would, statistically speaking, save more lives on average. And in both cases, that answer is the opposite of what most people (including this author) intuitively choose. Picking Plan B-1 in the first scenario would save 3,200 lives on average (0.8 × 4,000), but it would also mean giving up the sure thing: the certainty of saving 3,000. And in the second scenario, picking the most popular option (Plan B-2) means that you would actually be condemning 3,200 people to death on average, instead of going with the “safe” bet (Plan A-2) and killing only 3,000 with certainty. In other words, in such calculated-risk scenarios, what feels right to most people is sometimes the opposite of what statistically “makes sense.”
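The expected-value arithmetic behind those statistically better answers is just the probabilities multiplied through, as a quick check shows:

```python
# Expected outcomes of the risky plans, computed directly from the
# probabilities and outcomes given in the two scenarios.
p_success = 0.8

plan_b1_expected_saved = p_success * 4000 + (1 - p_success) * 0    # lives saved
plan_b2_expected_deaths = p_success * 4000 + (1 - p_success) * 0   # deaths

print(plan_b1_expected_saved)    # 3200.0 (vs. Plan A-1's certain 3,000 saved)
print(plan_b2_expected_deaths)   # 3200.0 (vs. Plan A-2's certain 3,000 deaths)
```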

These behavioral phenomena were famously studied in 1979 by Daniel Kahneman and Amos Tversky (Tversky, incidentally, was also a Michigan PhD recipient). Their findings contradicted utility theory, the historically popular belief that people behave (mostly) rationally when making such decisions: conducting informal cost-benefit analyses and choosing options that maximize desirable outcomes. To explain those deviations, Kahneman and Tversky proposed prospect theory, which better predicts actual decisions by incorporating various behavioral quirks. For example, people tend to go with the “sure thing” in situations like the first scenario (when taking a greater risk would maximize potential gains), but they accept much greater risks in scenarios like the second (when the goal is to avoid losses). These patterns manifest in less dramatic situations as well, such as gambling or investing. Prospect theory has since become well established, and existing mathematical models for it do a good job of predicting behavior in such risky-choice scenarios.
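To make that concrete, here is a minimal sketch of how a prospect-theory model scores the two disaster scenarios. The functional forms and parameter estimates come from Tversky and Kahneman’s later (1992) cumulative version of the theory, not from this article or from Walls’s work, and the sketch glosses over details that matter with more than two outcomes:

```python
# Prospect-theory sketch: outcomes pass through an S-shaped value function,
# and probabilities are distorted by a weighting function. Parameter values
# are Tversky & Kahneman's (1992) published estimates.
ALPHA = 0.88        # curvature of the value function
LAMBDA = 2.25       # loss aversion: losses loom ~2.25x larger than gains
GAMMA_GAIN = 0.61   # probability-weighting curvature for gains
GAMMA_LOSS = 0.69   # probability-weighting curvature for losses

def value(x):
    """Subjective value of an outcome x (positive = gain, negative = loss)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

def weight(p, gamma):
    """Subjective decision weight attached to a stated probability p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Scenario 1 (gains): save 3,000 for sure vs. 80% chance of saving 4,000.
sure_gain = value(3000)
risky_gain = weight(0.8, GAMMA_GAIN) * value(4000)

# Scenario 2 (losses): 3,000 die for sure vs. 80% chance that 4,000 die.
sure_loss = value(-3000)
risky_loss = weight(0.8, GAMMA_LOSS) * value(-4000)

print(sure_gain > risky_gain)   # True: the sure thing wins for gains
print(risky_loss > sure_loss)   # True: the gamble wins for losses
```

With these standard parameter values, the model reproduces exactly the reversal described above: risk aversion for gains, risk seeking for losses.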

But other scenarios bring out different anomalies. For example, suppose you plan to buy a new TV, and you are comparing three options. The specifications and performance of each TV are similar, but each is a different size, and larger TVs cost more. You would ideally like to get as large a TV as possible, but you still want to get good value for your money. This kind of scenario is known as a multi-attribute choice. Here, two attributes play into your decision: size and cost.  

Previous research (2) has shown that perceptions of which TV is the best value vary significantly based on what other options are presented to people. In other words, the perceived value of each TV is not intrinsic or fixed; rather, it is contextual, and the associated behavioral phenomena are known as context effects. For example, when presented with three options that are relatively evenly divided across both attributes, people tend to pick the middle option (a compromise effect). Things get more complicated, and more interesting, when the relationships between the options change. If a third TV is added that is slightly worse than one of the others on both counts (a bit smaller, say, yet just as expensive), its mere presence makes the TV that beats it look better by comparison, nudging people toward that option in predictable ways (an attraction effect). Context effects are also well supported, and existing models for them can predict behaviors. In fact, companies extensively exploit context effects in real-world product design and marketing.
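A toy demonstration shows how mere context can flip a choice. It uses a simple range-normalization heuristic (each attribute is scored relative to the best and worst values currently on offer, then summed); the TVs and prices are invented, and this heuristic is only a stand-in for the published models of context effects:

```python
# Context-effect toy model: an option's score depends on the whole choice set,
# because each attribute is normalized against the range of options on offer.
def scores(options):
    """options: list of (size_inches, price_dollars) pairs.
    Returns one total score per option; bigger screens and lower
    prices score higher."""
    sizes = [s for s, _ in options]
    prices = [p for _, p in options]
    size_lo, size_hi = min(sizes), max(sizes)
    price_lo, price_hi = min(prices), max(prices)
    return [
        (s - size_lo) / (size_hi - size_lo)        # bigger is better
        + (price_hi - p) / (price_hi - price_lo)   # cheaper is better
        for s, p in options
    ]

budget = (40, 300)    # smaller, cheaper TV
premium = (50, 500)   # bigger, pricier TV
decoy = (48, 520)     # slightly smaller than `premium` yet MORE expensive

two_way = scores([budget, premium])
three_way = scores([budget, premium, decoy])

print(two_way[0] == two_way[1])     # True: head-to-head, a dead heat
print(three_way[1] > three_way[0])  # True: with the decoy, `premium` wins
print(three_way[1] > three_way[2])  # True: the decoy itself loses
```

Nothing about the budget or premium TVs changed; adding a dominated third option shifted the winner, which is the attraction effect in miniature.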

But such behavioral phenomena have mostly been studied (and modeled) individually until now. That means scientists still do not understand the underlying cognitive tendencies (if they exist) that drive those behaviors. In other words, scientists know that we behave in several interesting ways when making decisions, but they still don’t know how they are all connected on a deeper cognitive level.

That is where Logan Walls’s dissertation comes in. Walls’s project incorporates regret theory, which was proposed in 1982 by Graham Loomes and Robert Sugden (3). Regret theory posits that human decision making is heavily biased toward avoiding potential outcomes that would cause regret if they came to pass. That is, our decision making is weighted more heavily toward avoiding the most psychologically painful outcomes than toward achieving the best ones. This hyper-awareness of “the worst that could happen” causes us to make what appear to be counterintuitive choices. In the first natural disaster scenario, for example, regret theory says that people pick Plan A-1 not so much to maximize the number of lives saved; rather, they pick it because Plan B-1’s 20% possibility of saving no one is something they would have more trouble living with (even though it is relatively unlikely to occur). Conversely, in the second disaster scenario, picking Plan A-2 (with its certainty of killing 3,000 people) is the option that would create the most regret for most people. Plan B-2 is preferable because we will know that we at least tried to save everyone, even in the (likely) event that the attempt fails (and even more lives are lost).
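A rough way to see the mechanism is to score each plan by its expected regret: how much worse its outcome is, in each possible state of the world, than the best outcome available in that state, with large regrets amplified by a convex function (a key ingredient of Loomes and Sugden’s formulation). This is only a toy sketch, not Walls’s model:

```python
# Regret-theory toy sketch: compare each plan, state by state, against the
# alternative, and amplify large regrets with a convex function (d squared).
def expected_regret(outcomes, alternative, probs, amplify=lambda d: d * d):
    """Expected amplified regret of the chosen plan versus the alternative.
    outcomes/alternative: each plan's outcome in each state (higher = better).
    probs: probability of each state."""
    total = 0.0
    for chosen, other, p in zip(outcomes, alternative, probs):
        total += p * amplify(max(other - chosen, 0))  # regret only if worse
    return total

probs = [0.8, 0.2]  # Plan B's gamble pays off / fails

# Scenario 1 (lives saved; higher is better).
plan_a1 = [3000, 3000]  # sure thing
plan_b1 = [4000, 0]     # gamble
regret_a1 = expected_regret(plan_a1, plan_b1, probs)
regret_b1 = expected_regret(plan_b1, plan_a1, probs)

# Scenario 2 (deaths, encoded as negative outcomes; higher is still better).
plan_a2 = [-3000, -3000]
plan_b2 = [-4000, 0]
regret_a2 = expected_regret(plan_a2, plan_b2, probs)
regret_b2 = expected_regret(plan_b2, plan_a2, probs)

print(regret_a1 < regret_b1)  # True: Plan A-1 minimizes regret in Scenario 1
print(regret_b2 < regret_a2)  # True: Plan B-2 minimizes regret in Scenario 2
```

With a linear regret function the two plans would be ranked by expected value; it is the convex amplification of large regrets (Plan B-1’s chance of saving no one, Plan A-2’s certainty of 3,000 deaths) that reproduces the choices people actually make.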

Similarly, when selecting from various TVs (or iPhones, or cars, etc.), we are biased more toward avoiding a bad deal or a bad product than toward getting a good one. We don’t need the best thing out there, but no one wants to get ripped off or pick a product that will soon become obsolete.

While regret theory has been around for decades, a regret-theory-based mathematical model that predicts human behaviors across multiple scenarios has been elusive. Walls is hopeful that his project will help change that. He explains: “Regret Theory is well-known in its original domain: risky choice. But interestingly, we are finding that (with a few modifications) it does a good job explaining people’s choices in other domains as well!” 

Developing such a unified model would have many real-world benefits. Perhaps most obviously, it could help us better understand the decisions made in crisis scenarios—such as natural disasters or emergency medical care—as well as consumer behaviors of many kinds. Less obviously, it could also have important uses in clinical psychology, such as developing better treatments for gambling disorders.

Walls plans to defend his dissertation in 2024. Although he is currently still testing and refining his model, the numbers (so to speak) seem to be adding up so far. He is excited about the prospect of testing it with human research subjects over the coming year, as well as its potential impact on future research and our understanding of actual decision making.

“In terms of both research and applications, I think it’s really exciting that a theory based on such an intuitive idea (that we avoid making choices we might regret) can explain so much,” he says. “Models that unify our understanding of multiple types of decisions are very valuable because they can be applied to more realistic scenarios. Real life is a lot more complicated than the laboratory, and the more aspects of decision making a model can explain, the more applicable it becomes to helping people with real-world decision making.”

 

References:

1. Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458. https://doi.org/10.1126/science.7455683 https://www.science.org/doi/10.1126/science.7455683

2. Simonson, I. (1989). Choice based on reasons: The case of attraction and compromise effects. Journal of Consumer Research, 16(2), 158–174. https://academic.oup.com/jcr/article/16/2/158/1800431

3. Loomes, G., & Sugden, R. (1982). Regret Theory: An Alternative Theory of Rational Choice Under Uncertainty. The Economic Journal, 92(368), 805–824. https://doi.org/10.2307/2232669 https://www.jstor.org/stable/2232669