Fargo Rating Algorithm Question

FeelDaShot

If you’re the top rated player in Fargo, when does the system decide to increase your rating, as opposed to decreasing everyone else’s rating?

For example, Josh Filler is at 842 and Fedor Gorst is at 838. These are the two highest ratings. If Filler starts playing above his average, it seems like the system would have two options:
1. Increase Filler above 842
2. Decrease Fedor below 838 and lower all other ratings in the system accordingly.

Which option is best and why?
 
The performance difference shows up in the games you win against other players and also in how those other players do. If you are an 800 and play another 800, the percentage you change would differ if you win 9-2 versus 9-8. If you are an 800 but another 800 you are even with starts to beat players that are 820, you are also going to slowly creep up, although that would change once you actually play that other player and lose, or lose by more than expected.
The system looks at both players and adjusts them both a bit; the higher player may go down and the lower player may go up, until there are enough games to average out the swings and the change is tiny. Those two in your example may move by about half a point per match even on a blowout like 9-0. You would need several such wins for one or the other to drop to the same level.

Think of the Fargo ratings as continental drift or the gravity effect of the moon. It's always in motion but slowly and goes back and forth without settling down on a single static point.
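The update dynamic described above can be sketched with an Elo-style approximation. To be clear about what is assumed: Fargo's actual algorithm is a maximum-likelihood fit over each player's whole game history, not an incremental per-match update, so the `nudge` function and its `k` factor here are illustrative inventions. The one piece taken from FargoRate itself is the scale, where a 100-point gap means 2-to-1 odds per game.

```python
def expected_game_share(rating_a: float, rating_b: float) -> float:
    """Expected fraction of games A wins: a 100-point gap means 2:1 odds."""
    return 1.0 / (1.0 + 2.0 ** ((rating_b - rating_a) / 100.0))

def nudge(rating_a: float, rating_b: float,
          wins_a: int, wins_b: int, k: float = 0.5):
    """Move both ratings toward the observed result (illustrative only)."""
    games = wins_a + wins_b
    surprise = wins_a - expected_game_share(rating_a, rating_b) * games
    return rating_a + k * surprise, rating_b - k * surprise

# Two even 800s: a 9-2 win moves the ratings more than a 9-8 win,
# matching the point above about 9-2 versus 9-8.
after_blowout = nudge(800, 800, 9, 2)   # surprise = 9 - 5.5 = +3.5
after_hillhill = nudge(800, 800, 9, 8)  # surprise = 9 - 8.5 = +0.5
```

Note that in this sketch the winner's gain equals the loser's loss, which is why the swings average out over enough games rather than compounding.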
 
I think Mike mentioned a few years back he was "anchoring" the top pros, so the top of the scale does not move much. That said, the top has definitely moved in the 8 years Fargo has been out. I think the separation between top pro and top local superstars (Shortstop/Open player) has increased.

Here is an early Fargo chart from 2015, for example:
 
More likely the anchoring is happening at the midpoint of the scale, not the top. Either way, as the skill distribution of the FargoRate population has dramatically shifted with increasing numbers of league players entering the system, this has the effect of inflating the ratings of the folks at the top unless the anchoring level is adjusted.
 
More likely the anchoring is happening at the midpoint of the scale, not the top. Either way, as the skill distribution of the FargoRate population has dramatically shifted with increasing numbers of league players entering the system, this has the effect of inflating the ratings of the folks at the top unless the anchoring level is adjusted.
I see a lot of new names in the pro events, so it's not clear to me that the average is being diluted by league players.

I vaguely recall that Mike said something about the anchoring method. There are lots of reasonable ways to anchor the ratings. One way would be to make the average rating of the top 200 players 800.

The FargoRate FAQ page says that fewer than 20 players are rated over 800. That must have been written a while ago, since there are now 40 (counting Dechaine and not counting players from China).
 
"I see a lot of new names in the pro events"

My league has 200+ players, most of whom are in the 250-450 range, and nearly all of whom got added in a huge batch a couple years ago. This has been happening all over the country as BCA and CSI link more closely with FargoRate.

Adding new pros from Australia, China and the Philippines will do a bit to counteract this but their site now says there are 299,000 players in the system. 300-400 missing pros will be swamped by tens of thousands of new league players.

"One way would be to make the average rating of the top 200 players 800."

Something like this is a great idea.
 
Adding new pros from Australia, China and the Philippines will do a bit to counteract this but their site now says there are 299,000 players in the system. 300-400 missing pros will be swamped by tens of thousands of new league players.
...
It is possible that the new players are proportional to the corresponding categories. That is, there were 50,000 rated 4xx and 20 rated 8xx, and since then we have picked up 50,000 more 4xx and 20 more 8xx. Without detailed statistics -- where is that Dr. Page when you need him? -- we don't know.
 
It is possible that the new players are proportional to the corresponding categories. That is, there were 50,000 rated 4xx and 20 rated 8xx, and since then we have picked up 50,000 more 4xx and 20 more 8xx. Without detailed statistics -- where is that Dr. Page when you need him? -- we don't know.
Possible, though it seems likely that before the BCA LMS automatic link-up most Fargo players and scores were from tournaments that attract more serious players. Few 200 and 300 level players put their hat in the ring for the kind of tournaments that find their way into FargoRate, while many are happy to play league nights.
 

I knew we had discussed this recently... Between 2018 and 2022, the average and median ratings both dropped by about 15 points due to the influx of league players.
 
I see a lot of new names in the pro events, so it's not clear to me that the average is being diluted by league players.

I vaguely recall that Mike said something about the anchoring method. There are lots of reasonable ways to anchor the ratings. One way would be to make the average rating of the top 200 players 800.

The FargoRate FAQ page says that fewer than 20 players are rated over 800. That must have been written a while ago, since there are now 40 (counting Dechaine and not counting players from China).
[...]

We're in the process of building a system to test a few different new approaches to anchoring the ratings, so we'll have more to say when we're further along. To date, we've had a defined list of active players whose average rating we prevent from changing. We're confident there has been some upward drift, but we really don't know how much.

If we take the 46,000 players who two years ago had an established rating, we can see how the average rating of THOSE players has changed over two years. They had an average rating of 491.3 two years ago and 494.0 now.

If we look at the top several hundred players from 3-4 years ago and look at what those players' ratings are now, we see they've risen 1.5 points per year or so.

There is no effect from the changing distribution from lots of new lower-level players being added. We could add a million new 300-level players and it wouldn't make any difference.

Over the past 4 years the number of established players in the 400s has gone from 7,500 to 21,000, quite a bit bigger increase than the number of 800+ players. At the high end, I think the increase is a combination of
A. finding new players
B. increasing level of play
C. upward drift
We don't know the relative weight of these, but we think none is negligible.
 
[...]

We're in the process of building a system to test a few different new approaches to anchoring the ratings, so we'll have more to say when we're further along. To date, we've had a defined list of active players whose average rating we prevent from changing. We're confident there has been some upward drift, but we really don't know how much.

If we take the 46,000 players who two years ago had an established rating, we can see how the average rating of THOSE players has changed over two years. They had an average rating of 491.3 two years ago and 494.0 now.

If we look at the top several hundred players from 3-4 years ago and look at what those players' ratings are now, we see they've risen 1.5 points per year or so.

There is no effect from the changing distribution from lots of new lower-level players being added. We could add a million new 300-level players and it wouldn't make any difference.

Over the past 4 years the number of established players in the 400s has gone from 7,500 to 21,000, quite a bit bigger increase than the number of 800+ players. At the high end, I think the increase is a combination of
A. finding new players
B. increasing level of play
C. upward drift
We don't know the relative weight of these, but we think none is negligible.

How is the median player performing these days with all of the additional league add-ins?
 
[...]

...
C. upward drift
We don't know the relative weight of these, but we think none is negligible.

@mikepage I have a question about the upward drift, especially of the pro players. My theory is that due to them being so competitive and the nature of how Fargo does ratings, they will keep climbing. I remember when there was only one or maybe two 800-level players at one point.
Here is what I think is happening; let me know if it's correct. Say SVB is an 800 and Filler beats him 9-4; now Filler is an 805 since he beat SVB. SVB then beats Filler, so Fargo ups SVB to 807. Filler then beats SVB, and the Fargo math looks at that: SVB is 807, Filler won, so he must be higher, and Filler is now an 809. Repeat a bunch of times and slowly the line creeps upward.
 
@mikepage I have a question about the upward drift, especially of the pro players. My theory is that due to them being so competitive and the nature of how Fargo does ratings, they will keep climbing. I remember when there was only one or maybe two 800-level players at one point.
Here is what I think is happening; let me know if it's correct. Say SVB is an 800 and Filler beats him 9-4; now Filler is an 805 since he beat SVB. SVB then beats Filler, so Fargo ups SVB to 807. Filler then beats SVB, and the Fargo math looks at that: SVB is 807, Filler won, so he must be higher, and Filler is now an 809. Repeat a bunch of times and slowly the line creeps upward.
Wouldn't the opposite be true? When Filler beat SVB, wouldn't SVB go down? This should counterbalance to keep the effect that you describe from happening.
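Whether trading wins can ratchet both players upward depends on whether the update is zero-sum. Here is a small simulation under an assumed Elo-style zero-sum rule (a simplification; Fargo actually refits whole game histories): the points one player gains are exactly the points the other loses, so two players trading 9-4 wins oscillate around each other but their combined rating stays fixed.

```python
def nudge(ra: float, rb: float, wins_a: int, wins_b: int, k: float = 0.5):
    """Zero-sum Elo-style update: whatever A gains, B loses."""
    games = wins_a + wins_b
    p_a = 1.0 / (1.0 + 2.0 ** ((rb - ra) / 100.0))  # A's expected game share
    delta = k * (wins_a - p_a * games)
    return ra + delta, rb - delta

filler, svb = 800.0, 800.0
for _ in range(50):                              # trade 9-4 wins repeatedly
    filler, svb = nudge(filler, svb, 9, 4)       # Filler wins 9-4
    svb, filler = nudge(svb, filler, 9, 4)       # SVB wins 9-4
# The two ratings bounce back and forth, but their sum stays at 1600:
# under a zero-sum rule, trading wins cannot push the pair upward together.
```

Any joint upward creep would therefore have to come from somewhere else, such as results against the rest of the field or how the overall scale is anchored.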
 
If you’re the top rated player in Fargo, when does the system decide to increase your rating, as opposed to decreasing everyone else’s rating?

For example, Josh Filler is at 842 and Fedor Gorst is at 838. These are the two highest ratings. If Filler starts playing above his average, it seems like the system would have two options:
1. Increase Filler above 842
2. Decrease Fedor below 838 and lower all other ratings in the system accordingly.

Which option is best and why?
The algorithm itself has no answer to this question. It is equally content with either. It is also equally content putting Filler at 1000 and Gorst at 995, or whatever, with everyone else up 147 points. This is because the algorithm cares only about rating differences, like the gap between Filler and Gorst. We have to answer this question ourselves after the algorithm is done each day.

If we want the numbers to represent an actual skill level, then (1) is the more sensible choice. It is more likely that Filler--one person--was underrated before (or changed his skill level) than it is that many people were overrated (or changed their skill levels).

Here are a couple potential ways to do it and a comment:

(1) Shift the ratings each day so that the average established rating--currently at 480.2 for 75,000 players--stays the same.

As lower skilled players enter the system at greater rates (our expectation), "480.2" and every other rating will slowly represent less skill today than it did yesterday.

(2) Shift the ratings each day so that the average of the top 200 (or some other number) of players remains fixed.

This has the opposite drift as (1). As new top players --e.g., from Philippines or China-- enter the system, the average skill of the top 200 goes up. If we keep the average rating fixed, a given rating will represent more skill today than it did yesterday.

(3) Make a list of the top 200 players at the beginning of the year and require that the average rating of those 200 players remain fixed throughout the year. Then at the beginning of the next year create a new list (for which some of the previous year's 200 will be gone and some new members will come on) and do the same thing.

(4) Do (3) with a new list each month instead of each year.

(5) Do (3) with a new list each day instead of each year.

(6) Do (3), (4), or (5) with 2,000 or 20,000 or some other number rather than 200.

(7) For any of (3), (4), (5), or (6), should it be the average rating that is fixed or the average variance-weighted rating that is fixed?

(8) Or maybe this is all barking up the wrong tree and we should instead focus on the average rating or weighted average rating of people who DON'T play staying fixed.
 
If there is any pinning, I vote for the elite players. Maybe the average of the top 10, not even top 100. It would be more consistent over time. A bunch of bangers entering the rating system should not alter the absolute scale of what the numbers mean, imo. (yes, I know the numbers are relative only by design).

Take the top 10 today versus the top 10 of 1990, and I think it would be quite close. But the top 100 today versus the top 100 of 1990 might vary a lot more, as the players then were less organized, with fewer tournaments (more gambling). Also, there may not have been as many pros, since most top pros were USA-based whereas now it's fully international.
 
I recently played in a tournament and was surprised to learn that I had a Fargo Rating, as I don't recall ever playing in a tournament under my full name, and I don't think the league I play in submits info to Fargo (many of the players in the league don't have a Fargo rating when I looked them up).

How did Fargo establish a rating for me then?
 