In The Improbability Principle, the renowned statistician David J. Hand argues that extraordinarily rare events are anything but. In fact, they’re commonplace. Not only that, we should all expect to experience a miracle roughly once every month.
What is chance?
The long history of the word ‘probability’, as well as its importance and the confusion that still surrounds it, are reflected by the fact that there are many other words for very closely related concepts. These include odds, uncertainty, randomness, chance, luck, fortune, fate, fluke, risk, hazard, likelihood, unpredictability, propensity, and surprise, among others. There are also other concepts that touch on similar ideas, such as doubt, credibility, confidence, plausibility, and possibility, and also ignorance and chaos.
The word ‘probable’ derives from the same Latin root as ‘approve’, ‘provable’, and ‘approbation’ (from the Latin *probare*, meaning to test or prove), and early uses carry these sorts of meanings.
I might define the probability of an event as ‘the extent to which that event is likely to happen’. Or, alternatively, as ‘the strength of belief that the event will happen’. These capture notions of uncertainty, and convey the meaning that a highly probable event is likely to happen and an improbable event is unlikely to happen.
‘Chance’ is another word that people often use in place of ‘probability’. Technically the chance of an event means the same as the ‘probability’ of that event, but chance is often used as a less formal alternative, and is seldom associated with a numerical value. We talk of the chance of it raining, and so on.
Why do improbable events happen?
Borel’s law says that we simply should not expect (sufficiently) improbable events to happen. But we’ve seen countless examples of situations where such events have happened, and the Improbability Principle tells us that events we regard as highly improbable occur because we got things wrong. If we can find out where we went wrong, then the improbable becomes probable.
- Imagine I have a cloth bag which I tell you contains 1 black marble and 999,999 white ones (it’s a big bag). You reach your hand in and, without being able to see the color, draw out one marble. And you see it’s black.
Clearly the chance of this happening is very small: it’s literally one in a million. You might think that this probability is sufficiently small for Borel’s law to apply: it shouldn’t happen. But, despite Borel’s law, you get a black marble. In such cases, this typically means that we’ve failed to take account of something which would lead to a greater chance of you drawing out a black marble. Perhaps I lied when I told you the number of black marbles in the bag.
- But now suppose I tell you (truthfully) that I have two bags: one contains a million marbles, one of which is black and the others all white; the other also contains a million marbles, 999,999 of which are black and one white. You blindly reach your hand into one of the bags and pull out a marble. It’s black. The question is, do you think that bag was the one which contained only one black marble or the bag which contained 999,999 black marbles?
I hope you’ll agree that since it’s more likely that the second bag would yield a black marble, that’s the bag you’d choose.
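The two-bag intuition can be checked with a quick simulation. This is a minimal sketch, not from the book: the bag labels, the trial count, and the random seed are my own choices, and I assume the bag compositions described above (one black in a million versus 999,999 black in a million).

```python
import random

N = 1_000_000
# Assumed compositions: bag "A" has 1 black marble in a million,
# bag "B" has 999,999 black marbles (and 1 white) in a million.
p_black = {"A": 1 / N, "B": (N - 1) / N}

random.seed(42)  # for reproducibility
trials = 100_000
black_draws = 0   # times we drew a black marble
from_b = 0        # of those, how many came from bag B

for _ in range(trials):
    bag = random.choice(["A", "B"])          # pick a bag blindly
    if random.random() < p_black[bag]:       # draw one marble
        black_draws += 1
        if bag == "B":
            from_b += 1

# Among draws that produced a black marble, virtually all came from bag B.
print(f"Fraction of black draws from bag B: {from_b / black_draws:.4f}")
```

Roughly half the trials use bag B and nearly all of those yield black, while bag A almost never does, so essentially every black draw traces back to bag B, matching the intuition that a black marble points to the mostly-black bag.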
Balancing the probability of getting the observed outcome if one of the proposed explanations is true against the probability if the other is true is a fundamental principle underlying statistical methods. We look at the data, and calculate the probability that it could arise from each of the competing explanations. The explanation which has the greatest probability of having produced the observed data will be the explanation in which we have the most confidence. Statisticians call this the law of likelihood: we prefer the explanation that is more likely to have produced the observed data.
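Applied to the marble example, the law of likelihood reduces to comparing two numbers. A minimal sketch, assuming the same bag compositions as above (the variable names are mine):

```python
N = 1_000_000

# Probability of drawing a black marble under each competing explanation:
lik_one_black = 1 / N            # bag with a single black marble
lik_mostly_black = (N - 1) / N   # bag with 999,999 black marbles

# The likelihood ratio says how much better one explanation accounts
# for the observed black marble than the other.
ratio = lik_mostly_black / lik_one_black
print(f"Likelihood ratio: {ratio:,.0f} to 1 in favour of the mostly-black bag")
```

The ratio is 999,999 to 1, so the law of likelihood overwhelmingly favours the explanation that the draw came from the bag with 999,999 black marbles.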