What does probability mean anyway?

This post summarizes, at a high level, Philip Stark and David Freedman’s article “What is the chance of an earthquake?”

If I tell you an event has a 70% chance of happening, what does that mean? Does your interpretation differ depending on the event? When we learn probability, we typically think about flipping coins, rolling dice, picking cards, or pulling balls out of urns. We also connect probability to more complicated phenomena such as the chance of rain tomorrow. But does our probabilistic intuition hold up to even more complicated events? For example, how do we interpret the following statement?
“What is the chance that an earthquake of magnitude 6.7 or greater will occur before the year 2030 in the San Francisco Bay Area? The U.S. Geological Survey estimated the chance to be 0.7 ± 0.1 (USGS, 1999).”

(USGS, 1999. Working Group on California Earthquake Probabilities, “Earthquake Probabilities in the San Francisco Bay Region: 2000-2030 – A Summary of Findings,” Open-File Report 99-517, USGS, Menlo Park, CA.)

Philip Stark and David Freedman use this example to walk us through the interpretation of the point estimate (0.7), the uncertainty estimate (± 0.1), and what probability (“chance”) means in this context.

The first interpretation of probability we typically encounter comes from games of chance: symmetry and equally likely outcomes. Consider a fair coin toss. Heads and tails must be equally likely, so the probability of tails is 1/2. The same reasoning applies to a fair die: there are six equally likely faces, so the probability of rolling a two is 1/6. If we try to apply this interpretation to the earthquake problem, we have no natural symmetry to exploit, so we must turn to other interpretations.
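
The counting behind this interpretation is simple enough to spell out. Here is a minimal sketch, in Python, of the “favorable outcomes over total outcomes” calculation for a fair die:

```python
from fractions import Fraction

# Equally likely outcomes: P(event) = favorable outcomes / total outcomes.
die_faces = [1, 2, 3, 4, 5, 6]

# Probability of rolling a two.
p_two = Fraction(sum(1 for face in die_faces if face == 2), len(die_faces))
print(p_two)  # 1/6

# Probability of rolling an even number.
p_even = Fraction(sum(1 for face in die_faces if face % 2 == 0), len(die_faces))
print(p_even)  # 1/2
```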

Another natural interpretation of probability comes from imagining many repetitions of an experiment and taking the proportion of repetitions in which a desired outcome occurs as the probability of that outcome. This is referred to as the frequentist approach. In the earthquake case, it seems nonsensical to think about repeating the years until 2030 over and over again to estimate the probability of an earthquake occurring. Again, we must look for another probability interpretation to help us make sense of the earthquake probability statement.
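
For events we can actually repeat, this interpretation is easy to demonstrate: the observed proportion settles down near the underlying probability as the number of repetitions grows. A minimal simulation (the flip counts are chosen arbitrarily):

```python
import random

random.seed(0)

# Frequentist idea: probability = the long-run proportion of repetitions
# in which the event occurs. Here the event is "heads" on a fair coin.
for n in [10, 100, 10_000, 1_000_000]:
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: proportion of heads = {heads / n:.4f}")
```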

An alternative to the frequentist approach is the Bayesian approach, where instead of imagining multiple replications of a scenario to obtain a probability, we map a probability to a degree of belief in a certain outcome. If we believe something is impossible, it has probability zero. Likewise, if we have full confidence in an event taking place, it has probability one. In the earthquake scenario, however, we want an inherent probability, not an overview of others’ opinions about how likely an earthquake is, even if those opinions come from experts.
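
To make “degree of belief” concrete: a Bayesian starts with a prior belief and updates it with data via Bayes’ rule. Here is a toy sketch; the two hypotheses and the prior weights are invented purely for illustration:

```python
# Degree-of-belief sketch: prior beliefs over two hypotheses about a coin,
# updated by Bayes' rule after observing one head. All numbers are made up.
p_heads_under = {"fair coin": 0.5, "heads-biased coin": 0.9}
prior = {"fair coin": 0.8, "heads-biased coin": 0.2}

# Observe one head: weight each hypothesis by how well it predicts the data.
unnormalized = {h: prior[h] * p_heads_under[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: w / total for h, w in unnormalized.items()}

print(posterior)  # belief shifts toward the heads-biased hypothesis
```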

Another common probability interpretation is the Principle of Insufficient Reason, which states: “If there is no reason to believe that outcomes are not equally likely, take them to be equally likely.” To apply this interpretation we must first define the set of potential outcomes. In the earthquake example, there are infinitely many time points between now and 2030 when an earthquake could occur, and how we carve those time points into a set of outcomes can change the resulting probabilities, as the sketch below shows. It seems strange that the probabilities would differ under this principle because of our definition of potential outcomes rather than because of some fundamental property of earthquakes. This probability interpretation is also found wanting.
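
The partition-dependence problem is easy to see with a toy calculation. Below, the same event (“a large quake occurs before 2030”) gets two different probabilities depending on how we list the “equally likely” outcomes; both partitions are invented for illustration:

```python
from fractions import Fraction

# Partition 1: two outcomes -- a quake happens before 2030, or it doesn't.
outcomes_coarse = ["quake before 2030", "no quake"]
p_coarse = Fraction(1, len(outcomes_coarse))  # 1/2

# Partition 2: thirty-one outcomes -- the quake falls in one of 30 years,
# or it never happens. The principle now gives a very different answer.
outcomes_fine = [f"quake in year {y}" for y in range(2000, 2030)] + ["no quake"]
n_quake = sum(1 for o in outcomes_fine if o != "no quake")
p_fine = Fraction(n_quake, len(outcomes_fine))  # 30/31

print(p_coarse, p_fine)  # same event, different probabilities
```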

Moving toward a more theoretical framework, probabilities can also be interpreted formally as mathematical probability. In this case probabilities must follow a certain set of rules. They must be non-negative (what would it mean to have a negative probability?). The probabilities of all possible outcomes must sum to one (something must happen, even if that something is “nothing happening”). And if outcomes cannot happen at the same time, then the probability that at least one of them happens is the sum of their individual probabilities. This formalism can help us in the earthquake example, but we still need some additional structure beyond these rules.
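
Stated formally, these rules are Kolmogorov’s axioms. For events A in a sample space Ω:

```latex
P(A) \ge 0, \qquad
P(\Omega) = 1, \qquad
P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i)
\ \text{ when the } A_i \text{ are mutually exclusive.}
```

Note that the axioms say nothing about what P means, which is exactly why we still need additional structure to connect them to earthquakes.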

Probability models build upon the mathematical probability rules, adding structure and helping us interpret probability as, in the words of Stark and Freedman, “just a property of a mathematical model intended to describe some features of the natural world.” We need to ensure that our model “matches” well with the phenomenon it aims to describe. Matching can be defined in frequentist terms: if we simulated from the proposed probability model many times, would the proportion of simulations in which the event occurs match the “true” probability of the event? For the earthquake example, we could design some model that explains earthquake behavior and use it to determine the probability of an occurrence. However, because large earthquakes are few and far between, we do not have much data on which to build and test the model, so we cannot ensure that its predictions match well with what might happen in reality.
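
To make “probability as a property of a model” concrete, here is a toy sketch. It is emphatically not the USGS model: it simply assumes large quakes arrive independently year to year at an invented rate, and reads the probability of the event off many simulated 30-year histories:

```python
import random

random.seed(0)

# Toy stand-in for an earthquake model (NOT the USGS model). Assume a
# magnitude-6.7+ quake hits in any given year with a fixed, invented chance.
RATE_PER_YEAR = 0.04   # hypothetical yearly chance, chosen for illustration
HORIZON_YEARS = 30     # roughly the 2000-2030 window

def at_least_one_quake() -> bool:
    """Simulate one 30-year history: does any large quake occur?"""
    return any(random.random() < RATE_PER_YEAR for _ in range(HORIZON_YEARS))

n_sims = 100_000
p_hat = sum(at_least_one_quake() for _ in range(n_sims)) / n_sims
print(f"model-implied P(quake before 2030) ~= {p_hat:.3f}")
```

Under this toy model the probability of the event is a deduction from the model’s assumptions; whether it says anything about real earthquakes depends entirely on whether those assumptions match reality.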

Now that we have these probability interpretations in mind, let’s revisit the USGS forecast for an earthquake. It seems we must place the USGS estimate in the probability model category: a mathematical model was built, and under that model the probability of an earthquake occurring was estimated as 0.7.

An important feature of the probability model interpretation is that the proposed model must match well with the phenomenon it aims to illustrate in order to be meaningful and useful. If we are unsure about how well our model matches the truth (which is most often the case), we must be sure to communicate this uncertainty.

What does the uncertainty estimate for the earthquake case mean? Here, the probability model that the USGS created for the earthquake process was simulated from multiple times (like the multiple realities suggested by the frequentist approach), and the 0.1 represents the variability across the simulations used to arrive at the point estimate of 0.7.
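
One way such a spread arises is when an input to the model is itself uncertain, so different simulation runs use different parameter values. A sketch, reusing the toy yearly-rate model from above (the uncertainty range on the rate is invented):

```python
import random
import statistics

random.seed(1)

HORIZON_YEARS = 30

def prob_quake(rate_per_year: float) -> float:
    """Closed form for the toy model: P(at least one quake in the window)."""
    return 1 - (1 - rate_per_year) ** HORIZON_YEARS

# Draw the uncertain yearly rate many times from a hypothetical range and
# see how much the implied probability moves across draws.
estimates = [prob_quake(random.uniform(0.03, 0.05)) for _ in range(10_000)]

print(f"point estimate ~= {statistics.mean(estimates):.2f}")
print(f"spread (std)   ~= {statistics.stdev(estimates):.2f}")
```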

However, Freedman and Stark point out that many more sources of uncertainty should be incorporated as well. The model itself is an imperfect representation of the true earthquake-generating process. This particular model happens to be built from smaller models of various geologic features, leaving room for imperfection in those representations as well. Some of these sub-models require input parameters that are themselves subject to uncertainty. Realistically, the true uncertainty is much larger than the ± 0.1 the USGS reports.

The big takeaways from this thought exercise are that some convenient probability interpretations cannot be used to reason about certain complex chance events, and that any probability obtained under the probability model interpretation should treat the model building itself as a source of uncertainty.

Want to learn more? Read the details in the original paper.

Thoughts? Questions? Feedback? Let me know: @sastoudt

 
