Regarding the consistency myth video: Very informative video! I found it surprising at first and would not have thought that to be the case, but now I think it makes sense. I think part of the push-back from people is that the type of “Fargo consistency” talked about in the video doesn’t fit well with traditional ideas of consistency. But what really is "consistency"? Take these two statements:
(1) Higher rated players tend to be more consistent than lower rated players.
(2) Higher rated players have the same Fargo performance variability as lower rated players.
Some may believe that only one of these statements can be true. I believe both can be true as long as we differentiate between the two ideas of "consistent." In fact, (2) may imply (1) if a suitable definition of "consistency" in statement (1) is pinned down.
Intuitively:
Take 2 players: Player 1 (Fargo 800) and Player 2 (Fargo 400), and have them each play many games against the ghost or some common third player. Give both players the same detrimental nudge to their “consistency” by making sure each misses an extra 1 out of 6 balls. (Something like: before every shot, a die is thrown, and if it shows 6 spots, then the next shot is a miss, by definition.) In other words, give both players the same absolute change to “consistency”. (This means I am defining consistency in terms of a rate of missing balls, an arguable point.)
Using “Fargorate performance” as the measuring stick, we should expect a more catastrophic effect on the 800 player than on the 400 player, due to the differing relative effects this nudge has on each player’s (vastly different) natural error rates. (Whether or not this seems intuitive, a very crude math analysis shows it is expected. It is as though Fargorate views this nudge with the naked eye for the 400-level player, but through a 16x magnification lens for the 800-level player, figuratively making a mountain out of a molehill.)
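To make that intuition concrete, here is a minimal simulation sketch of the thought experiment. Everything in it is my own crude assumption rather than anything from the video or from FargoRate's actual model: the per-shot make probabilities (0.95 for the 800, 0.70 for the 400) are illustrative guesses, and "a game" is modeled as running 9 balls against the ghost without missing. The nudge is implemented literally as the die roll described above.

```python
import math
import random

# Assumed, illustrative per-shot make probabilities -- not derived from real Fargo data.
PLAYERS = {"800-level (assumed p_make = 0.95)": 0.95,
           "400-level (assumed p_make = 0.70)": 0.70}

def makes_shot(p_make, nudged):
    """One shot. With the nudge, a die roll of 6 forces a miss regardless of skill."""
    if nudged and random.randint(1, 6) == 6:
        return False
    return random.random() < p_make

def wins_rack_vs_ghost(p_make, nudged, balls=9):
    """Crude ghost model: win the rack only by making all 9 balls without a miss."""
    return all(makes_shot(p_make, nudged) for _ in range(balls))

def log2_odds_of_winning(p_make, nudged, n_games=200_000):
    """Play many racks; return win/loss odds on a log2 scale (1 unit = 100 Fargo-like points)."""
    wins = sum(wins_rack_vs_ghost(p_make, nudged) for _ in range(n_games))
    return math.log2(wins / (n_games - wins))

for label, p_make in PLAYERS.items():
    drop = log2_odds_of_winning(p_make, nudged=False) - log2_odds_of_winning(p_make, nudged=True)
    print(f"{label}: performance drop from the nudge ≈ {100 * drop:.0f} Fargo-like points")
```

Under these made-up numbers, the identical 1-in-6 nudge costs the 800-level player noticeably more Fargo-like points than the 400-level player, which is the direction argued above; the exact sizes depend entirely on the assumed make probabilities and the crude ghost model.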
Therefore, if (1) were false, we should expect you (Mike Page) to be reporting that highly rated players have more variability in Fargorate performance, because of this "making mountains out of molehills" effect. But you are not. You are reporting that variability is constant, and this supports (1).
Crude math seems to support this as well. If we imagine Fargo rating as determined by playing games against a 500-level opponent, then Fargo rating is (up to an offset and sign) proportional to the logarithm of the odds of losing a game to this opponent, since a 100-point gap corresponds to a factor of 2 in those odds. Deviations in Fargo performance would then be (approximately) proportional to the observed deviation in these odds, divided by the odds. For a 400-level player you'd have a quotient with denominator 2 (he loses to the 500 at 2-to-1 odds). For an 800-level player you'd have a quotient with denominator 0.125 (he loses at 1-to-8 odds). So if these two quotients show the same variability (as the data seems to indicate), then the numerators must have differing variances, with the one corresponding to the 800-level player being smaller.
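Here is the same arithmetic as a rough numeric sketch. The relation R = 500 − 100·log2(odds of losing to a 500) is my paraphrase of the "log of the odds" assumption above (it just encodes that a 100-point gap means 2:1 odds), not FargoRate's published formula.

```python
import math

def odds_of_losing_to_500(rating):
    # Encodes "100 points = a factor of 2 in odds": a 400 loses at odds 2, an 800 at odds 0.125.
    return 2 ** ((500 - rating) / 100)

def rating_sensitivity(rating):
    # Magnitude of dR/d(odds) for R = 500 - 100*log2(odds), i.e. (100 / ln 2) / odds:
    # how far a small absolute wobble in the odds moves the measured performance.
    return (100 / math.log(2)) / odds_of_losing_to_500(rating)

for r in (400, 800):
    print(f"{r}: odds of losing = {odds_of_losing_to_500(r):.3f}, "
          f"rating points per unit change in odds ≈ {rating_sensitivity(r):.0f}")
```

The odds come out to 2 and 0.125, so the same small absolute change in the odds moves the 800's performance about 16 times as far as the 400's (2 / 0.125 = 16), which is the magnification mentioned earlier. If performance variability is nonetheless the same, the 800's odds (the numerator) must be wobbling far less in absolute terms.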
The best part of what the video suggests is that it permits consistency to have some grounding by being quantifiable and empirically justified. In my mind it's OK to accept that (1) and (2) are compatible with each other as long as it is understood that the two notions of consistency refer to different things, with the traditional notion of consistency referring to something vague and subjective. Vague, subjective, and even incorrect ideas can have lots of traction simply because they've been around a long time.