Updated FargoRates are out

How closely matched they are is immaterial really. What is important is what percentage of the time people won compared to what percentage of the time FargoRate predicted they would win.

Another measure for an individual in a tournament would be their total match scores compared to the predicted total match scores. For example, given the strengths of someone's opponents, he might be expected to have a 53% games-won percentage in total but actually had 59% (with a total of 80 games played). That would be well within the normal variation of a single-tournament performance.
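For anyone who wants to sanity-check that kind of single-tournament swing, here is a rough back-of-the-envelope sketch (my own illustration, treating the 80 games as independent trials at the predicted rate):

```python
from math import sqrt

# Numbers from the example above: a predicted 53% games-won rate
# vs. an observed 59% over 80 games.
expected_p = 0.53
observed_p = 0.59
games = 80

# Treating games as independent, the standard deviation of the
# games-won percentage is sqrt(p * (1 - p) / n).
sd = sqrt(expected_p * (1 - expected_p) / games)
z = (observed_p - expected_p) / sd

print(f"std dev of games-won %: {sd:.3f}")                    # ~0.056
print(f"observed result is {z:.1f} std devs above expected")  # ~1.1
```

A 59% result lands only about one standard deviation above the 53% expectation, which is why it is unremarkable.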
 
How closely matched they are is immaterial really. What is important is what percentage of the time people won compared to what percentage of the time FargoRate had predicted they would win.

Yes

Suppose Henry plays an opponent 20 points under him.

Predicting Henry will win a race to 5 is the same as predicting the blackjack dealer WON'T BUST showing a 6.
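To put rough numbers on the analogy (my own arithmetic, using FargoRate's stated convention that a 100-point gap means 2:1 odds per game): a 20-point edge works out to roughly a 53-54% game and a ~58% race to 5, in the same ballpark as the commonly quoted ~58% chance the dealer doesn't bust showing a 6.

```python
from math import comb

def game_win_prob(rating_diff):
    """Per-game win probability for the higher-rated player,
    using FargoRate's stated 100 points = 2:1 game odds."""
    return 1.0 / (1.0 + 2.0 ** (-rating_diff / 100.0))

def race_win_prob(p, n):
    """Chance of reaching n game wins before the opponent,
    with independent games won at probability p."""
    # Sum over the opponent's possible final score k = 0..n-1.
    return sum(comb(n - 1 + k, k) * p**n * (1 - p)**k for k in range(n))

p_game = game_win_prob(20)          # ~0.535 per game
p_race = race_win_prob(p_game, 5)   # ~0.58 for a race to 5
print(f"per game: {p_game:.3f}, race to 5: {p_race:.3f}")
```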
 
The probability of winning given two players' Fargo ratings is calculated for an even race. The algorithm should therefore be able to estimate a "fair" handicapped match. Whatever the actual score ends up being, we should be able to gauge the correctness of the prediction by whether a match ends up hill-hill (or close enough) in the in-silico predicted fair match.

I agree with this, with one caveat. Like many others, you seem to be thinking in terms of FargoRate attempting to predict individual match outcomes. It does not. What it tells you is that for any particular matchup with no spots involved, if it were played 100 times, player X would win 27 of them and player Y would win 73 of them, for example. So if player X happens to win the match we are looking at, we can't say FargoRate got this one wrong because it said player Y was favored but player X won; FargoRate specifically said player X was going to win 27 out of 100 times, and this just happened to be one of those 27.

The same thing applies if you use FargoRate to determine what the actual score in a particular set length would be. Let's say FargoRate says that if player X and player Y played 100 races to 9, they would average out with player Y winning by an average score of 9-6. We couldn't then look at a single match in the tournament between players X and Y where the final score was 9-1 in player Y's favor and say "well, FargoRate didn't even get it close, it was 5 games off in its prediction" because it was 9-1 instead of 9-6. The thing is, FargoRate might have their skills pegged perfectly, but some matches are going to be 9-1, some are going to be closer to the predicted 9-6, and some are even going to have player X winning; they are just going to average out to 9-6, and we have to remember that anything can happen in one match that may or may not be close to what is going to happen over time.
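To see how wide that spread really is, here is a small sketch (my own illustration, assuming independent games with the favorite winning 60% of them, which is the rate a "typical" 9-6 score implies):

```python
from math import comb

def score_distribution(p, n=9):
    """Final-score distribution for a race to n, assuming independent
    games that the favorite wins with probability p."""
    dist = {}
    for k in range(n):
        dist[(n, k)] = comb(n - 1 + k, k) * p**n * (1 - p)**k    # favorite wins n-k
        dist[(k, n)] = comb(n - 1 + k, k) * (1 - p)**n * p**k    # underdog wins n-k
    return dist

dist = score_distribution(0.60)   # 60% per game
favorite_wins = sum(v for (a, b), v in dist.items() if a == 9)
print(f"favorite takes the race: {favorite_wins:.2f}")              # ~0.80
print(f"P(9-6) = {dist[(9, 6)]:.3f}, P(9-1) = {dist[(9, 1)]:.3f}")  # ~0.12 vs ~0.04
```

Under that assumption no single score shows up much more than about 13% of the time, lopsided scores like 9-1 still happen a few percent of the time, and the underdog takes the race outright about 20% of the time.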

FargoRate predicts what will happen on average over time, and individual matches have far too much variance in them to be used to assess the accuracy of FargoRate. But if you can somehow average the results of many matches to eliminate all that variance, then you have something you can compare to what FargoRate predicts for that same group of matches, and you can then see how accurate it is or is not.

For example, if there were a whole bunch of matches where FargoRate predicted one of the players to be favored to win 70% of the time, we could use what actually happened on average in all those matches to compare (assuming there are enough of them). If the favored player only won 45% of the time, we might conclude that FargoRate wasn't very accurate, but if the favored player won 75% of those matches, we could conclude that FargoRate was amazingly accurate. There are other ways to make useful comparisons as well, but it can never be with individual matches because of all the variance inherent in them; it has to be with large enough groups of matches that are somehow averaged together to eliminate that variance.
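Here is a sketch of that kind of grouped comparison (the data here are hypothetical; the predictions would come from FargoRate and the outcomes from recorded matches):

```python
from collections import defaultdict

def calibration_table(matches, bin_width=0.05):
    """matches: (predicted_favorite_win_prob, favorite_actually_won) pairs.
    Groups matches into probability bins and compares predicted vs. observed."""
    bins = defaultdict(lambda: [0, 0])          # bin -> [match count, favorite wins]
    for prob, won in matches:
        b = round(prob / bin_width) * bin_width
        bins[b][0] += 1
        bins[b][1] += int(won)
    for b in sorted(bins):
        n, wins = bins[b]
        print(f"predicted ~{b:.2f}   observed {wins / n:.2f}   ({n} matches)")

# Hypothetical usage, with predictions from the ratings and outcomes
# from actual tournament results:
# calibration_table([(0.70, True), (0.70, False), ...])
```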
 
Another measure for an individual in a tournament would be their total match scores compared to the predicted total match scores. For example, given the strengths of someone's opponents, he might be expected to have a 53% games-won percentage in total but actually had 59% (with a total of 80 games played). That would be well within the normal variation of a single-tournament performance.

This is another perfect example of what I was talking about where you have to group matches and somehow "average" them to eliminate the variance inherent in individual matches. What you propose is another of the very useful comparisons for testing the accuracy of FargoRate.
 
Fargorate is the real deal. It is much like the Elo rating system in chess, which has been an absolute gold standard across that game.

It won't predict if a player will choke on a certain shot.
It won't predict if they'll slop in the 9 ball when attempting a combination shot.
It won't predict if the 7 ball will skid on them.

All it does is reflect a player's overall skill level based on their performance against their peers over time.

Could there be an instance in which a lower-rated player is a favorite over a higher-rated player? Certainly, if we had reason to believe that their performance in the future would exceed their performance in the past. This would mean that the player had IMPROVED, and as a result their fargorate would increase based on their new level of performance. For example, we knew that Can Salim was playing much stronger when he played Vilmos than his historical rating suggested. This didn't mean Fargorate was 'wrong', it simply meant that his skill level had increased. Guess what, I'm sure his fargo rating will be on the rise as he plays at his new skill level over a number of matches.

So yes, anytime that someone improves or drops in skill level it will take time for the fargorate to reflect this. This is based on results, not reading tea leaves.

Overall Fargorate does reflect the skill someone brings to the table. If someone is rated higher than Earl, it doesn't mean that they have a backer and will put up 100K on a 10 foot table with no side pockets tomorrow. What it DOES mean is that they have historically performed better against their competition than Earl has. This is how we measure skill, because there is no way to objectively weight some racks to mean more than others.

I guess this could 'burn' someone that played very inconsistently but hit a huge high gear when they were really inspired. But again, fargorate would still reflect this overall performance. You'd just have to know the history of that player, and then it would be on you as a gambler if you wanted to bet on them catching their gear. Using the Earl example, maybe his highest gear could get there. Do you want to bet on him? I sure don't. Unless the money I was going to lose was worth the entertainment value of watching him bark at the audience every time he missed a kick shot.

The only concern for fargo is how they capture their data. If we put garbage in, we'll get garbage out. I know that I played a tournament a year ago that I won against good players and my rating didn't change. I'm guessing it was off the radar. But Mike does a remarkable job with this, particularly among top players that play major events and with league players. Technology has grown to where it's easier and easier to capture this information and fargo has grown to become more mainstream as well. I'd already give this an A, and I think this will be A+ in no time flat.

What I would like to see is an option to 'drill down' on a player's rating and see the database of games they've played. For example, if I clicked on my name it would be nice to see which tournaments were included, etc. My rating has dipped in the last few months, and I'm not sure if it's because I've performed poorly lately, because good results from years ago are losing weight, or because fargorate is inflating as the top players continue to improve. It would be cool to better understand why I am where I am.

But as I said, this is the best thing that's ever happened to pool rankings and I'm amazed at how Mike made his vision a reality. Great job Mike. I'd give you a fargorating of 850 for coming up with rating systems, best in show. Although I'm sure Earl could've done better.
 
For example, if there were a whole bunch of matches where FargoRate predicted one of the players to be favored to win 70% of the time, we could use what actually happened on average in all those matches to compare (assuming there are enough of them). If the favored player only won 45% of the time, we might conclude that FargoRate wasn't very accurate, but if the favored player won 75% of those matches, we could conclude that FargoRate was amazingly accurate. There are other ways to make useful comparisons as well, but it can never be with individual matches because of all the variance inherent in them; it has to be with large enough groups of matches that are somehow averaged together to eliminate that variance.

I provided a one-match example. Given that matches are independent of each other, the approach applies just as well to combining every in-silico predicted fair match and assessing what percentage of matches end up hill-hill or very close. When the number of data points (matches) is sufficiently large, the variance from individual matches should be minimal.
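As a rough illustration of how fast that variance dies off (my own numbers, assuming independent matches and using ~20% as the chance a 50/50-per-game race to 9 reaches hill-hill), the noise in the observed percentage shrinks like 1/sqrt(N):

```python
from math import sqrt

def standard_error(p, n):
    """Standard error of an observed fraction over n independent matches."""
    return sqrt(p * (1 - p) / n)

# If roughly 20% of predicted-fair races to 9 should reach hill-hill,
# the noise in the observed percentage shrinks as matches accumulate:
for n in (10, 100, 1000):
    print(f"{n:>5} matches: +/- {standard_error(0.20, n):.3f}")
# prints roughly 0.126, 0.040, 0.013
```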
 
Fargorate is the real deal. It is much like the Elo rating system in chess, which has been an absolute gold standard across that game.

It won't predict if a player will choke on a certain shot.
It won't predict if they'll slop in the 9 ball when attempting a combination shot.
It won't predict if the 7 ball will skid on them.

All it does is reflect a player's overall skill level based on their performance against their peers over time.

Could there be an instance in which a lower-rated player is a favorite over a higher-rated player? Certainly, if we had reason to believe that their performance in the future would exceed their performance in the past. This would mean that the player had IMPROVED, and as a result their fargorate would increase based on their new level of performance. For example, we knew that Can Salim was playing much stronger when he played Vilmos than his historical rating suggested. This didn't mean Fargorate was 'wrong', it simply meant that his skill level had increased. Guess what, I'm sure his fargo rating will be on the rise as he plays at his new skill level over a number of matches.

So yes, anytime that someone improves or drops in skill level it will take time for the fargorate to reflect this. This is based on results, not reading tea leaves.

Overall Fargorate does reflect the skill someone brings to the table. If someone is rated higher than Earl, it doesn't mean that they have a backer and will put up 100K on a 10 foot table with no side pockets tomorrow. What it DOES mean is that they have historically performed better against their competition than Earl has. This is how we measure skill, because there is no way to objectively weight some racks to mean more than others.

I guess this could 'burn' someone that played very inconsistently but hit a huge high gear when they were really inspired. But again, fargorate would still reflect this overall performance. You'd just have to know the history of that player, and then it would be on you as a gambler if you wanted to bet on them catching their gear. Using the Earl example, maybe his highest gear could get there. Do you want to bet on him? I sure don't. Unless the money I was going to lose was worth the entertainment value of watching him bark at the audience every time he missed a kick shot.

The only concern for fargo is how they capture their data. If we put garbage in, we'll get garbage out. I know that I played a tournament a year ago that I won against good players and my rating didn't change. I'm guessing it was off the radar. But Mike does a remarkable job with this, particularly among top players that play major events and with league players. Technology has grown to where it's easier and easier to capture this information and fargo has grown to become more mainstream as well. I'd already give this an A, and I think this will be A+ in no time flat.

What I would like to see is an option to 'drill down' on a player's rating and see the database of games they've played. For example, if I clicked on my name it would be nice to see which tournaments were included, etc. My rating has dipped in the last few months, and I'm not sure if it's because I've performed poorly lately, because good results from years ago are losing weight, or because fargorate is inflating as the top players continue to improve. It would be cool to better understand why I am where I am.

But as I said, this is the best thing that's ever happened to pool rankings and I'm amazed at how Mike made his vision a reality. Great job Mike. I'd give you a fargorating of 850 for coming up with rating systems, best in show. Although I'm sure Earl could've done better.

Fantastic post. I would like to add that any lack of data (or of as much data as we would like) is not FargoRate's fault, but the fault of tournament directors and league operators who don't submit it, and of all the players who don't put some pressure on them to submit it or at least request it.

Also, another possible reason that your rating could drop even if you haven't played any new games or have not played below your current rating is if the rating of some of your past opponents has dropped. If that 675 you beat turns out to not really be a 675 after all and he is now rated as a 620, then you are no longer getting as much "credit" for beating him and FargoRate has to adjust you down as well to reflect that your win over a 620 is a lot less impressive and doesn't take as much skill as your win over the "675" did.
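For anyone curious about the mechanics, here is a toy illustration of how a past opponent's re-rating pulls your own number down (this is not FargoRate's actual optimization, just a bare-bones maximum-likelihood fit against made-up opponents and scores):

```python
from math import log

def game_win_prob(diff):
    """Per-game win probability at a given rating difference (100 pts = 2:1 odds)."""
    return 1.0 / (1.0 + 2.0 ** (-diff / 100.0))

def log_likelihood(rating, record):
    """record: list of (opponent_rating, games_won, games_lost) tuples."""
    total = 0.0
    for opp, won, lost in record:
        p = game_win_prob(rating - opp)
        total += won * log(p) + lost * log(1.0 - p)
    return total

def best_rating(record, lo=300, hi=900):
    # Crude grid search; the real system solves for everyone's rating jointly.
    return max(range(lo, hi), key=lambda r: log_likelihood(r, record))

before = [(675, 7, 5), (640, 6, 6)]   # made-up opponents and scores
after  = [(620, 7, 5), (640, 6, 6)]   # same games, first opponent re-rated down
print(best_rating(before), best_rating(after))   # the fitted rating drops too
```

In this toy example, with half of the games coming against the re-rated opponent, the fitted rating drops by roughly half of his 55-point drop.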
 
I provided a one-match example. Given that matches are independent of each other, the approach applies just as well to combining every in-silico predicted fair match and assessing what percentage of matches end up hill-hill or very close. When the number of data points (matches) is sufficiently large, the variance from individual matches should be minimal.

Sounds like we agree. The confusion is that in the post I responded to, you only talked about using a single match to compare to what FargoRate would predict and made no mention of needing a sufficiently large number of them to reduce the inherent variance of individual matches, so it was misleading, at least to me. Totally agree with what you just wrote though.
 
It's something we can quite easily do. We have Fargo ratings for a number of players before and after the recent US Open 9-ball, and we have the results. We can measure the distributions of "correctness" based on these ratings and their adjusted post-tournament ratings. If we continue to collect these data points, the proposed question of whether Fargo ratings are approximately accurate can be answered in a statistically meaningful way.
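One standard way to score that accumulated data is a Brier score on the pre-tournament predictions (a minimal sketch; the match list here is hypothetical):

```python
def brier_score(predictions):
    """predictions: (predicted_win_prob, actually_won) pairs.
    Lower is better; always guessing 50/50 scores 0.25."""
    return sum((p - int(won)) ** 2 for p, won in predictions) / len(predictions)

# Hypothetical usage with pre-tournament ratings and actual results:
# print(brier_score([(0.73, True), (0.55, False), ...]))
```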
 
Again I agree. Not sure if you have seen it, but this is something Mike has already done a number of times, using a number of different methods, and each time FargoRate was very accurate. I hope he continues to do this from time to time in the future as well, but it appears that some people are going to trust their own "feelings" and how things "seem" to them regardless of the data proving them wrong. Some people are just ignorant like that and have no use for any of that silly evidence and proof crap when it is in conflict with their beliefs and would require having to admit to themselves that they were wrong.
 
I have publicly criticized FargoRate numerous times, but I never once questioned its accuracy. It's mathematically sound, and I have seen the implementation in USAPL and thought it was genuinely accurate. I'm just a little bit confused about the delay for BCAPL. From an outsider's perspective, I'm not sure why it is difficult.
 
So what does the top 100 list really tell us? My bet is, if you stop 1,000,000 people at random on the streets of America, the only pool player they will mention is "The Black Widow", Jeanette Lee! So who is number one? You know my answer. Fargo be damned.

Lyn
It is a little off topic, but your J. Lee statement is right on the money!
She was the only player any of my co-workers knew before I started teaching them a few more.

But would Simeng last to day 4 in a major men's open tournament?
Can you give me her best finish in a men's open tournament?

There's a big difference between torturing Karen in round 2 of the China women's open with alternating break and a Magic Rack, and torturing Earl on day 4 of a 128-player field in a Mosconi Cup ranking event.

You are wise to the game of pool so you know CONTEXT is extremely important to the discussion anytime you talk pool.

Example: "I just beat the 9-ball ghost"

Could be either "I just beat the 9-ball ghost on a Valley bar box in my basement" or "I just beat the 9-ball ghost on the 10-foot Diamond table at the Derby while being live streamed."

Now there is a difference because of context.
I see your point.
But after reading through the whole thread, I think it's the best thing going for a ratings system.
 
Mine is a bit off, but most of my rating comes from leagues, and I don't try there as hard as in tournaments. My rating is 20-40 points lower than some others I can play even with or beat.

Honest question: how do you know the guys that you “can play even with or beat” are “trying hard”?
 
I've known them for quite a while and know how they play, not just against me. I've played against them in several events as well as many times in our local weekly tournament, not just a few times when they may not have been on their best game.
 
I have publicly criticized FargoRate numerous times, but I never once questioned its accuracy. It's mathematically sound, and I have seen the implementation in USAPL and thought it was genuinely accurate. I'm just a little bit confused about the delay for BCAPL. From an outsider's perspective, I'm not sure why it is difficult.

We are very close to releasing LMS for a wide variety of league formats.

I believe our database of games is now the second largest for any head-to-head sport/activity, behind only chess, and we are two thirds of the way to that total and closing fast.

Our daily global optimization is very much a serious "big data" application and is more sophisticated than what is done in any other sport/activity of which I am aware, including chess. There is a cluster of computers in the cloud working on this. Some of these machines are dedicated to parsing and processing incoming game data from leagues and tournaments, and most are used in the optimization process and are designed to run in parallel as much as possible. Thus we can scale the number of machines up to deal with ever-increasing numbers of games and players.
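As a side note for the technically curious, here is a toy sketch (in no way FargoRate's actual code) of why that optimization parallelizes well: the log-likelihood of the whole game database is a sum of independent per-game terms, so chunks of games can be scored on separate workers or machines and the results added up.

```python
from concurrent.futures import ProcessPoolExecutor
from math import log

def chunk_log_likelihood(chunk):
    """Score one chunk of games; each game is (winner_rating, loser_rating)."""
    total = 0.0
    for winner, loser in chunk:
        p = 1.0 / (1.0 + 2.0 ** (-(winner - loser) / 100.0))  # 100 pts = 2:1 odds
        total += log(p)
    return total

def total_log_likelihood(game_chunks):
    # Chunks are independent, so they can be scored on separate
    # worker processes (or machines) and the results summed.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(chunk_log_likelihood, game_chunks))
```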

We are fully integrating all of this directly into a versatile league management system.

These are things that have never been done before, so we should all, imo, be at least a little circumspect in deciding whether it is difficult.
 
Mike, I apologize if my response sounded harsh. My work involves designing machine learning and AI algorithms to crunch through terabytes, and at times petabytes, of genomic data in the cloud, mostly using services like AWS Batch and EMR. While I know nothing about LMS development, if you do run into technical issues, please feel free to reach out.
 
Cool... Thank you...
 
[...] I have no proof but I have a feeling fargo is starting to fall apart.

LOL... You'd better get the word out, because the number of people wasting their time seems to be growing pretty steadily...
 

Attachments: website365.png, fairmatch365.png