XC Ranking Perspective (Tully Runners Article)



Perspective on Cross Country Speed Ranking

by Bill Meylan (Tully Runners)

(draft:  August 13, 2002)

Based upon recent e-mails and conversations, some additional clarification is obviously required to explain my XC speed rankings.  The term "ranking" is causing the problem ... by definition, "ranking" means placing in a particular order or position.  However, in some sports parlance, "ranking" has an implication of subjective ordering ... for example, college football and basketball polls are based upon writers' opinions of "who is better than who".  My speed ranking lists are NOT my opinion of "who is better than who" ... they are simply lists of "who has run faster than who".

My speed ranking lists are analogous to track & field leaderboards ... There are only two differences between track leaderboards and my lists:
(1) track leaderboards use a single race result (the fastest time for an individual) ... my lists use multiple race results (combined to form a single race time for an individual).
(2) track leaderboard times are absolute times (the actual times seen on a stopwatch) ... my lists use relative times (actual times adjusted to a standard XC race course).
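As a rough illustration of point (2), a "relative" time can be thought of as an actual stopwatch time plus a per-course adjustment that normalizes everything to a standard XC course.  The course names and adjustment values below are hypothetical examples of my own, not the actual Tully Runners adjustments:

```python
# Sketch: adjusting actual race times to a standard XC course.
# The courses and second-offsets here are invented for illustration only.

COURSE_ADJUSTMENT_SEC = {
    "Standard Course": 0.0,   # the reference course
    "Hilly Park": -25.0,      # slow course: subtract time to normalize
    "Fast Flats": 12.0,       # fast course: add time to normalize
}

def relative_time(actual_time_sec: float, course: str) -> float:
    """Convert an actual stopwatch time into a standard-course relative time."""
    return actual_time_sec + COURSE_ADJUSTMENT_SEC[course]

# A 17:00 (1020 s) run on the slow "Hilly Park" course normalizes to 995 s.
print(relative_time(1020.0, "Hilly Park"))
```

Two runners who never raced on the same course can then be placed on one leaderboard by comparing their relative times instead of their raw times.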

Hopefully, this real-life example will give some perspective on leaderboards.  I maintain my own track leaderboards for distance races ... at the end of the 2002 outdoor season, there were 28 runners on the boys Section III 1600 meter leaderboard between 4:38 and 4:44 ... so only 6 seconds separated 28 runners ... placing just these runners in a list, somebody is "ranked" #1 and somebody else is "ranked" #28 ... actual race results show several runners near the bottom of the list had beaten runners near the top of the list (some on more than one occasion).  In any group of runners separated by just a few seconds of time difference, some will take turns beating each other in specific races ... however, some runners will run faster than others (as shown on the stopwatch) and will be placed higher on the leaderboard ... and nobody seems to complain about this in track.

The most common complaint I receive about my XC rankings is: "why am I ranked below a certain runner I have beaten??" ... My XC rankings during the XC season are based solely on speed (how fast runners have run races) ... head-to-head competition is not considered ... just like track leaderboards.  Please remember ... there is only one "event" in cross country, so everybody (literally hundreds of runners) is sorted onto the same leaderboard ... and many runners are very closely rated in terms of speed (just like the 1600 meter leaderboard example above).


How do I combine multiple XC race times to get a single race time rating for an individual?

Near the end of the season (for sectionals and States only), I perform a "Monte Carlo" simulation as described in my first article on computer ranking in cross country ... During the season (when fewer results are available) I use a simple statistical weighting method ... I have tried various weighting methods, and it really doesn't make that much difference.  Initially, the computer retrieves all speed ratings for an individual runner ... Then:

(1) the computer analyzes the speed ratings as a consecutive series ... low ratings that are inconsistent with the series are excluded ... so bad races are tossed out and do not count (exclusions require at least three race results)

(2) the computer then identifies the following (depending on number of results):
 .... the highest speed rating
 .... the seasonal average speed rating
 .... the most recent speed rating
 .... the average of the most recent two and/or three speed ratings
 .... the maximum and minimum of the most recent speed ratings
A variety of similar numbers can also be identified (doesn't really matter).

(3) A "weighting factor" is then applied to each of the identified ratings above ... just like getting a school grade as in 50% of your final grade comes from your final exam and 25% comes from your mid-term exam and 25% from your homework or quizzes.  How I weight the ratings depends upon the the number of race results ... as long as any reasonable combination of the identified ratings are applied, it doesn't make much difference!



Developed and maintained by Bill Meylan