Now that we’re writing about baseball . . . what do you think of Phil Price’s observation about All-Star games from 1965 to the present (with N indicating National League wins and A indicating American League wins):
The “T” indicates a tie (in 2002): unlike regular games, there is no requirement that the All-Star Game continue until somebody wins, and pitchers are reluctant to pitch too many innings and potentially hurt themselves.
I was born into an era in which the National League won every game. Now, the American League wins (or, at least, doesn’t lose) every game. This is happening in a sport where even bad teams beat good teams occasionally, so it’s really mystifying. It would be possible to explain a small edge for one league or the other that persists for a few years — the league with the best pitcher will have an advantage, for example, and that pitcher can play year after year — but these effects can’t come close to explaining the long runs in favor of one league or the other. Predicting next year’s winner to be the same as this year’s winner would have correctly predicted 80% of the games in my lifetime — and that’s if we pretend the National League won the tie game in 2002. (If we pretend the American League won it, it’s 84%.)
What would be a reasonable statistical model for baseball All-Star games, and why isn’t it something close to coin flips?
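One way to see why coin flips don’t cut it: under independent fair flips, the “predict the same winner as last year” rule should be right about 50% of the time, not 80%. A quick simulation makes the point — this is just an illustrative sketch, and the number of games (`n_games = 50`, roughly the 1965-to-present span) is my assumption, not a figure from the post:

```python
import random

def repeat_accuracy(seq):
    """Fraction of games correctly predicted by 'same winner as last year'."""
    hits = sum(a == b for a, b in zip(seq, seq[1:]))
    return hits / (len(seq) - 1)

random.seed(1)
n_games = 50       # assumed: approximate count of All-Star games, 1965 to present
n_sims = 100_000

# Simulate many coin-flip histories and score the repeat-last-year predictor.
accs = [repeat_accuracy([random.random() < 0.5 for _ in range(n_games)])
        for _ in range(n_sims)]

mean_acc = sum(accs) / n_sims
tail = sum(a >= 0.80 for a in accs) / n_sims
print(f"mean accuracy under coin flips: {mean_acc:.3f}")
print(f"P(accuracy >= 0.80):            {tail:.5f}")
```

The mean accuracy comes out near 0.5, and the probability of hitting 80% or better by luck alone is tiny (for 49 consecutive pairs it is a binomial tail event several standard deviations out). So any plausible model needs strong year-to-year persistence — something like a slowly varying league-strength parameter — rather than independent near-even games.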