
Which pandemic model should you trust? Here’s how to evaluate them.

Don’t rely on just one.

May 6, 2020

The Institute for Health Metrics and Evaluation (IHME) model has been crucial in debates over the novel coronavirus. It is relied on by the White House and other policymakers. Yet Vox.com recently published an article arguing (in the headline) that “This coronavirus model keeps being wrong. Why are we still listening to it?”

More recently, there has been intense political debate over three models that have been discussed within the White House: the IHME model, a different model being developed by Johns Hopkins University researchers, and a “cubic” model developed by Kevin Hassett and the Council of Economic Advisers. These models make wildly different predictions: The Hopkins model suggests that coronavirus cases will surge to 200,000 a day, with 3,000 daily deaths by June 1, while the cubic model predicts that deaths will go to zero by the middle of May.

So how do you make sense of these different models and their different predictions? The best way to evaluate models is to ask four questions. First, how accurate has the model been so far? Second, how do scientists come up with the projections? Third, how might people's responses to the model undermine its predictions? And fourth, how can you best use these models?

The IHME model’s predictions have changed — but that’s to be expected

To understand model accuracy, start with the IHME. The IHME (like other models of the pandemic) produces graphs of the projected numbers of cases and fatalities. For June 1, the current IHME model's best estimate is that there will be 890 fatalities, and that 95 percent of the time the number of fatalities will fall somewhere between 250 and 2,750 (the "confidence interval"). Both the best estimate and the confidence interval provide useful information: they tell us that we should expect about 900 fatalities but prepare for upward of 2,750.

These predictions can change quickly: In just one week, the model’s best estimate for June 1 fatalities increased twentyfold, while its worst-case scenario jumped about twelvefold.

That said, the IHME can be quite impressive. On March 25, the IHME model’s best estimate for fatalities on May 1 was 1,282, with a range between 551 and 2,570. The reported number was 2,343. The actual number lies within the confidence interval (barely), and the point estimate is off by half. Good luck being that accurate without a model.


Different models have different assumptions

Predictive models are either curve-fitting models or microfoundation models. A curve-fitting model, like the early IHME model and Hassett's cubic model, simply finds the curve that best fits the data, without theorizing about how the contagion spreads. The difficulty is that the number of cases in a pandemic follows an "S" curve: the count grows slowly at first, picks up steam as the contagion spreads in earnest and then slows again. Finding the best-fitting S shape is much harder than fitting a straight line, especially without much data, which is one reason the confidence interval is so big. (There is a lot of uncertainty.)
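To make the curve-fitting idea concrete, here is a minimal sketch in Python of fitting a logistic "S" curve to a short run of cumulative case counts. The data, starting guesses and parameter names are made up for illustration; they are not the IHME's actual data or method.

import numpy as np
from scipy.optimize import curve_fit

# Logistic ("S") curve: K is the eventual total, r the growth rate, t0 the midpoint.
def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cumulative case counts for days 0 through 9 (illustrative only).
days = np.arange(10)
cases = np.array([12, 20, 33, 52, 80, 115, 155, 195, 230, 255], dtype=float)

# With so few points, the fitted parameters -- and any confidence interval built
# from the covariance matrix pcov -- are highly uncertain.
params, pcov = curve_fit(logistic, days, cases, p0=[300.0, 0.5, 5.0], maxfev=10000)
K, r, t0 = params
print(f"estimated eventual total K={K:.0f}, growth rate r={r:.2f}, midpoint t0={t0:.1f}")

Refit the same curve after adding or dropping a couple of days of data and the estimated eventual total moves noticeably, which is why early curve-fit projections come with such wide ranges.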

Models with microfoundations instead try to capture how the contagion spreads, by looking at who is susceptible to being infected, who is exposed to the infection, who gets infected (and can pass it on), and who is recovered (and hopefully immune). You can use these models to figure out how social networks, transport systems, and differences between individuals might affect the spread of disease, but more complicated models rely on more assumptions, which they may get wrong.
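Here is a minimal sketch of that susceptible-exposed-infected-recovered (SEIR) logic in Python. The parameter values are assumptions chosen for illustration, not estimates from real data.

# Simple discrete-time SEIR simulation; every number here is illustrative.
N = 1_000_000                                # population size
beta, sigma, gamma = 0.4, 1 / 5.0, 1 / 10.0  # transmission, incubation and recovery rates
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0        # start with 10 infectious people

for day in range(1, 181):
    new_exposed = beta * S * I / N    # susceptibles infected by contact with infectious people
    new_infectious = sigma * E        # exposed people who become infectious
    new_recovered = gamma * I         # infectious people who recover (or die)
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_recovered
    R += new_recovered
    if day % 30 == 0:
        print(f"day {day:3d}: currently infectious ~ {I:,.0f}, recovered ~ {R:,.0f}")

Lower the transmission rate beta -- to stand in for people staying home -- and the projected peak shrinks and shifts, which is exactly the kind of what-if question a pure curve-fitting model cannot answer.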

One reason Hassett’s “cube” model has been criticized is that it makes a very strong prediction — that the number of deaths will rapidly shrink to zero — which seems contrary to how pandemics work in real life. Curve-fitting can be dangerous without understanding the process underlying the curve.

Models can undermine themselves

It’s hard to get curve-fitting and microfoundation models right. Worse, they can lead to self-defeating prophecies. If a model makes an extreme prediction and everyone believes it, then they may change their behavior in ways that undermine the model. If everyone had believed a pessimistic model in March that predicted 2.5 million U.S. deaths, they might have sheltered at home, dramatically lowering deaths from contagion. If they believed (as perhaps they did) the IHME model, which predicted only 65,000 fatalities, they might not have taken the threat seriously enough, leading to many more deaths than the model predicted.
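To see the feedback in a toy form, take the SEIR sketch above and assume, purely for illustration, that people cut their contacts sharply whenever current infections cross an alarming threshold. The more frightening the trajectory, the more behavior changes, and the further reality falls below the original projection.

# Toy behavioral feedback added to the SEIR sketch above (all numbers illustrative).
N = 1_000_000
sigma, gamma = 1 / 5.0, 1 / 10.0
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
deaths_per_resolved = 0.01   # assumed fatality share among resolved cases, illustrative

for day in range(1, 181):
    beta = 0.4 if I < 2_000 else 0.1   # people cut contacts once infections look scary
    new_exposed = beta * S * I / N
    new_infectious = sigma * E
    new_recovered = gamma * I
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_recovered
    R += new_recovered

print(f"implied deaths after 180 days: {deaths_per_resolved * R:,.0f}")

Run the same simulation with beta fixed at 0.4 and the implied toll is many times higher; the alarming forecast that prompted people to change their behavior ends up looking "wrong" in hindsight.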

This helps explain why models aren’t becoming more accurate over time. Not only may they make the wrong assumptions, but they may also lead people to behave in ways that undermine the model’s assumptions. That is why the IHME decided to supplement its curve-fitting model with a second model that tracks and estimates the proportion of people who are susceptible, exposed, infected and recovered.

You should use many models

All this doesn’t mean that we should stop using models, but that we should use many of them. We can continue to improve curve-fitting and microfoundation models and combine them into hybrids, which will improve not just predictions, but also our understanding of how the virus spreads, hopefully informing policy.

Even better, we should bring different kinds of models together into an “ensemble.” Different models have different strengths. Curve-fitting models reveal patterns; “parameter estimation” models reveal aggregate changes in key indicators such as the average number of people infected by a contagious individual; mathematical models uncover processes; and agent-based models can capture differences in people’s networks and behaviors that affect the spread of diseases. Policies should not be based on any single model — even the one that’s been most accurate to date. As I argue in my recent book, they should instead be guided by many-model thinking — a deep engagement with a variety of models to capture the different aspects of a complex reality.
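As a minimal sketch of the ensemble idea in Python, take point forecasts from several made-up models and combine them. The numbers below are hypothetical stand-ins, not forecasts from the models discussed above.

# Hypothetical June 1 fatality forecasts from different kinds of models.
forecasts = {
    "curve_fitting": 900,
    "seir": 1500,
    "agent_based": 1200,
}

# The simplest ensemble: an unweighted average of the point forecasts.
ensemble = sum(forecasts.values()) / len(forecasts)
print(f"ensemble forecast: {ensemble:.0f} fatalities")

# The spread across models is often more informative than any single number.
print(f"range across models: {min(forecasts.values())} to {max(forecasts.values())}")

Real forecasting ensembles weight models by past performance and combine full predictive distributions rather than single numbers, but even this crude average is usually more robust than betting everything on one model.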


Scott E. Page is a professor of political science, of complex systems and of economics at the University of Michigan, where he is also the Leonid Hurwicz collegiate professor; the John Seely Brown Distinguished University Professor of Complexity, Social Science and Management; and the Williamson Family Professor of Business Administration.