My colleague Eric Lawrence sends this along:

bq. In today’s Wall Street Journal, Nate Silver of 538.com makes the case that most people are “horrible assessors of risk.” A large academic literature in psychology and elsewhere supports his argument. Psychological biases and the inability to manipulate numbers both contribute to the difficulty of risk assessment. Adding in probability only compounds the problem, as probability can be tricky even on seemingly straightforward problems.

bq. This trickiness can even trip up skilled applied statisticians like Nate Silver. This passage from his piece caught my eye:

bq. “The renowned Harvard scholar Graham Allison has posited that there is greater than a 50% likelihood of a nuclear terrorist attack in the next decade, which he says could kill upward of 500,000 people. If we accept Mr. Allison’s estimates—a 5% chance per year of a 500,000-fatality event in a Western country (25,000 casualties per year)—the risk from such incidents is some 150 times greater than that from conventional terrorist attacks.”

bq. Here Silver makes the same mistake that helped lay the groundwork for modern probability theory. The idea that a 5% chance per year implies a 50% chance over 10 years suggests that in 20 years we are certain there will be a nuclear attack. But as the popular bumper sticker says, statistics means never having to say you are certain. The problem is analogous to the one that confounded the Chevalier de Méré, who consulted his friends Pascal and Fermat, who in turn derived several laws of probability. That’s the short version; a longer version was recently published as a monograph. A simple die roll shows why the logic is wrong. The probability of rolling a 6 on any one roll is 1/6, but it does not follow that the probability of rolling a 6 in 6 rolls is 1. To follow the laws of probability, you need to account for the chance of rolling two 6s, three 6s, and so on; the probability of at least one 6 in 6 rolls works out to 1 − (5/6)^6, or about .665.
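bq. A quick numerical sketch of the die-roll point, comparing the naive "add up the per-roll chances" logic against the correct complement calculation:

```python
# Probability of at least one 6 in n rolls of a fair die.
# The naive approach multiplies the per-roll chance by n, which
# wrongly exceeds 1 once n > 6; the correct approach takes the
# complement of "no 6 in any of the n rolls".
for n in (1, 6, 12):
    naive = n * (1 / 6)            # wrong: unbounded as n grows
    correct = 1 - (5 / 6) ** n     # right: approaches 1, never reaches it
    print(f"n={n:2d}  naive={naive:.3f}  correct={correct:.3f}")
```

For n = 6 the correct probability is about .665, not 1, and even for n = 12 it is still below 1.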

bq. So how can we solve Silver’s problem? The simplest way turns the problem around and solves for the probability of not having a nuclear attack. Preserving the structure of yearly probabilities and the decade range, the problem becomes P(no nuclear attack in ten years) = .5 = p^10, where p is the probability of an attack not occurring in a given year. After we muck about with logarithms and such, we find that p = .933, which in turn implies that the annual probability of an attack is .067.
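bq. The same calculation in a few lines, solving .5 = p^10 for the annual no-attack probability p and taking its complement:

```python
# Given a 50% chance of no attack over a decade, the annual
# probability p of no attack satisfies p**10 = 0.5, so
# p = 0.5**(1/10); the annual attack probability is 1 - p.
p_no_attack_decade = 0.5
p = p_no_attack_decade ** (1 / 10)  # annual probability of NO attack
q = 1 - p                           # annual probability of an attack
print(f"p = {p:.3f}, annual attack probability = {q:.3f}")
```

This recovers the figures in the text: p ≈ .933 and an annual attack probability of about .067, rather than the .05 implied by simply dividing 50% by ten.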

bq. But does that make a difference? In absolute terms, the difference between .05 and .067 is less than .02. In relative terms, however, our revised annual risk is a third larger. Making risk assessments can be tricky. Deciding what to do with the risk assessments poses a much more formidable problem.
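bq. The absolute and relative comparisons can be checked directly:

```python
# Compare the naive annual rate (5%) with the corrected one (~6.7%).
naive = 0.05
corrected = 1 - 0.5 ** (1 / 10)
abs_diff = corrected - naive        # absolute difference, under .02
rel_increase = corrected / naive - 1  # relative increase, about a third
print(f"absolute difference = {abs_diff:.4f}")
print(f"relative increase   = {rel_increase:.3f}")
```

A small absolute gap can still be a large relative one, which is why the two framings in the paragraph above both matter.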