Fiddling around with Mike Page's Fargo Ratings made me wonder whether anyone has done any analysis of the role of variance ("luck", if you will) in billiards. That is, how much inherent fluctuation there is in the results of pool games (or other cue-sport disciplines, for that matter). Obviously there's more luck involved in, say, 9-ball than in chess, but compared to, say, poker, it has much less variance. (Poker is obviously a skill game too; it just has far larger variance.)
As an example, consider two players whose Fargo Ratings differ by 100 points. In Mike's system, this means the worse player wins a frame (a game, I guess I should call it) with probability 1/3 and the better player with probability 2/3. In a race to six, the score should therefore come out around 6-3 for the better player on average.
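To make the race-to-6 numbers concrete, here is a small sketch (my own, not part of the Fargo system) that computes the exact probability of every final score under the simplest possible model: each game is an independent coin flip that the favourite wins with probability p = 2/3.

```python
from math import comb

def score_probs(p, race_to=6):
    """Probability of each final score (favourite, underdog) in a race,
    assuming independent games won by the favourite with probability p.
    The winner takes the last game; comb() counts the orderings of the
    loser's k games among the earlier ones."""
    probs = {}
    for k in range(race_to):  # k = games won by the losing side
        probs[(race_to, k)] = comb(race_to - 1 + k, k) * p**race_to * (1 - p)**k
        probs[(k, race_to)] = comb(race_to - 1 + k, k) * (1 - p)**race_to * p**k
    return probs

probs = score_probs(2 / 3)
match_win = sum(v for (a, b), v in probs.items() if a == 6)
```

Under this model the favourite wins the match about 87.8% of the time. Interestingly, the single most likely score is 6-2 (probability about 0.205), slightly ahead of 6-3 (about 0.182), even though the long-run game ratio is 2:1.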
If the variance is very high, in practice some matches would be won outright by the worse player and correspondingly more would be won comfortably by the better player, but on average the result would still come out around 6-3 for the better player. In contrast, if the variance is very low, most match results would cluster right around 6-3: some matches would end 6-2 and some 6-4, but it would be very unlikely for the worse player to win.
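One way to picture "extra" variance beyond the coin-flip model is to let the per-game win probability itself wobble from match to match (form, table conditions, rolls). This Monte Carlo sketch is purely my own illustration: it compares the upset rate with a fixed p against one where p is drawn with Gaussian noise around the same mean. The noise level 0.15 is an arbitrary assumption for demonstration.

```python
import random

def sim_upset_rate(base_p, skill_sd, n_matches=20_000, race_to=6, seed=1):
    """Estimate how often the worse player wins a race to `race_to`,
    when the favourite's per-game win probability is drawn fresh each
    match from a clamped Gaussian around base_p."""
    rng = random.Random(seed)
    upsets = 0
    for _ in range(n_matches):
        p = min(0.99, max(0.01, rng.gauss(base_p, skill_sd)))
        a = b = 0
        while a < race_to and b < race_to:
            if rng.random() < p:
                a += 1
            else:
                b += 1
        if b == race_to:
            upsets += 1
    return upsets / n_matches
```

With a fixed p = 2/3 the upset rate sits near 12%; adding match-to-match noise pushes it noticeably higher, which matches the intuition that more underlying variance means more wins for the worse player.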
What I'm looking for is some way to describe or quantify the variance of pool. Take some pro tournament as an example. I think it's fair to say there is luck involved. You can't win by luck alone, of course, but in a match between two pretty equally skilled players, luck can sometimes be a significant factor: both players can break the rack equally well, yet only one of them ends up with a shot on the lowest ball in 9-ball, and so forth.
I haven't thought this through yet, so I'm not sure I'm making sense, but I was wondering whether anyone else has thought about this and perhaps written about it too. I'm not quite sure what the exact question is that I'm asking, but I feel like an analysis of the variance could yield some interesting comparisons, say between disciplines, or of how the number of frames in a match changes the variance.
Edit: I guess I could just use Fargo Ratings, determine the standard deviation of a single game from real results, and go from there.
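A starting point for that edit, assuming the commonly cited Fargo relation that game odds double every 100 rating points (I'm taking that relation on faith here): a single game is then a Bernoulli trial, so its standard deviation follows directly from the win probability.

```python
from math import sqrt

def game_win_prob(rating_diff):
    """Per-game win probability for the higher-rated player, under the
    assumption that Fargo game odds double every 100 rating points."""
    return 1 / (1 + 2 ** (-rating_diff / 100))

def game_sd(rating_diff):
    """Standard deviation of a single game's outcome (a 0/1 Bernoulli
    trial): sqrt(p * (1 - p))."""
    p = game_win_prob(rating_diff)
    return sqrt(p * (1 - p))
```

For example, a 100-point gap gives p = 2/3 and a per-game standard deviation of about 0.47; evenly matched players (p = 1/2) hit the maximum of 0.5. Comparing these model values against the spread observed in real match data would be one way to measure the extra variance the post is asking about.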