Based on a variety of recent e-mails sent to me, I
think I need to post some general explanations regarding my rankings and
speed ratings ... There seem to be some misconceptions as to what they
are and how they are formulated.
The general overall process is as follows:
(1) I get the results for a meet.
(2) I evaluate the results to determine how
fast or slow a race is compared to other races and a baseline race (an
average race at SUNY Utica) ... this is called a race adjustment or
correction ... it is a specific number of seconds ... the same
adjustment applies to everyone in the race.
(3) I take the actual race times from the
meet and add or subtract the number of seconds in the adjustment ... for
example, if the actual race time is 18:00 and the adjustment is +5
seconds, the adjusted race time becomes 18:05
(4) The race data is uploaded into a master
database ... it includes name, grade, school, section/class, race, date,
place, actual time, adjusted time.
(5) The adjusted time is converted to a
speed rating with a simple conversion that does not change the adjusted
time ... it just converts it to a number that is easier to compare ...
one speed rating point equals three seconds ... the scale starts at
26:00 which equals zero ... any adjusted time slower than 26:00 becomes
a negative speed rating.
(6) To generate the individual rankings, a
computer program (that I wrote) opens the master database, calculates an
overall rating for each runner, and outputs a file that is uploaded
directly into a Microsoft Excel spreadsheet ... the spreadsheet is
sorted into the exact ranking lists that are posted on the web-site.
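The arithmetic in steps (3) and (5) can be sketched in a few lines ... the function names here are mine for illustration, not part of the actual program ... the scale facts (1 point = 3 seconds, 26:00 = 0) are taken straight from the text:

```python
def adjusted_time(actual_seconds: float, adjustment_seconds: float) -> float:
    """Step (3): the same adjustment (in seconds) applies to every runner."""
    return actual_seconds + adjustment_seconds

def speed_rating(adj_seconds: float) -> float:
    """Step (5): 26:00 (1560 s) maps to rating 0; each 3 s faster is +1 point.
    Adjusted times slower than 26:00 come out negative."""
    return (26 * 60 - adj_seconds) / 3.0

# The example from the text: 18:00 actual with a +5 second adjustment.
adj = adjusted_time(18 * 60, 5)        # 1085 seconds = 18:05
rating = speed_rating(adj)
print(adj, round(rating, 1))           # 1085 158.3
```

Note that the conversion is just a change of units ... sorting by rating and sorting by adjusted time give the same order.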
Subjective Judgment ... the ONLY
subjective judgment involved in the process is determining the
adjustment time ... for a minority of races, some "best estimates" are
necessary ... but for the majority of races, the determination is
strictly statistical in nature, meaning the determination is done for me
in a semi-automated process and I just grab the result.
Misconception ... "The race speed is
determined from the fastest runners in a race" ... This is totally
incorrect! ... The top runners in a race are intentionally excluded from
the race adjustment determination ... one of the statistical methods
used to determine the adjustment is based on groups of runners ... I
want to identify groups of runners in the race which correspond to
classes I call "normal above average" and "typical average runners" ...
the theory behind the evaluation is that "average runners" in NY (and
most NY sections) are approximately equal in ability and speed to
"average runners" in other NY sections, states or regions ... IF
I can identify the "average runners" in a race, it becomes possible to
equate them to my speed rating scale ... Likewise, using the "reference
runner" method to determine adjustments, the focus is on the individuals
in the groups above because they comprise the greater number of runners.
Because adjustments focus on the average runners, in races at shorter
distances (such as 2.5-mile at VCP), the top runners can lose several
rating points because the spread between runners is less ... For some
reason, many people think I assign a rating to the fastest runner in a
race and calculate downwards - and that's backwards from what really happens.
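One way to picture the group-based idea ... this is a hypothetical sketch under my own assumptions, not the actual semi-automated procedure ... exclude the front-runners, compare a mid-pack group against what those same runners would typically be expected to run, and take the typical gap as the adjustment:

```python
from statistics import median

def estimate_adjustment(race_seconds, expected_seconds, skip_top=5):
    """Hypothetical group-based adjustment estimate.

    race_seconds: finish times in this race, fastest first.
    expected_seconds: what each of those runners would normally run
        (a stand-in input; the real process works differently).
    The top runners are excluded, matching the text.
    """
    gaps = [exp - act for act, exp in
            zip(race_seconds[skip_top:], expected_seconds[skip_top:])]
    # Positive gap means the mid-pack ran faster than usual (fast course),
    # so seconds get ADDED to normalize, as in the +5 example above.
    return median(gaps)
```

Using the median of a group rather than any single runner keeps one unusually good or bad day from skewing the whole race's adjustment.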
Misconception ... "The guy from
TullyRunners is biased toward Section 3 runners" ... Much of
this seems to be coming from Section 2 ... In a somewhat ironic twist,
coaches and runners from other sections think I'm biased towards Section
2 (in both ratings and pre-season previews) ... Well, I need to confess
- There really is a Bias ... The statistics are biased ...
Statistics are driven by the number of data points, and because I
commonly get complete results for Section 3 races (and I enter
deeper results for Section 3 runners), the master database is
"lop-sided" with Section 3 entries compared to other sections ... And
because Jonathan Broderick does a great job with results for Section 2,
I'm able to enter results for many more Section 2 runners ... Some
sections around the State have very minimal results available by
comparison, so some runners from those sections are not in my database
simply because results of their performances are not available ... NOTE:
for runners outside Section 3, my database is generally limited to the
better runners (fast enough to make the ranking lists) or runners from
teams expected to be in the State Meet (it is not all-inclusive).
Misconception ... "The rankings are one
guy's opinion of who-is-better-than-whom" ... This one really annoys
me the most ... The rankings are simply a mathematical model of how fast
runners have been running relative to each other ... They do NOT
indicate how fast somebody might run in the future - they indicate what
runners have done (past tense) ... I make absolutely no judgment calls
that place one individual in front of another because I think he-or-she
is better.
As noted on the web-page, the rankings are similar to a track
leaderboard ... they are ranked by speed alone ... On a track
leaderboard, you can beat another runner multiple times, but if the
other runner has run a faster time, the other runner will be ranked
ahead of you until you run faster.
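A toy illustration of that leaderboard rule, with hypothetical runners and times ... only each runner's best time matters, never the head-to-head record:

```python
# "Ranked by speed alone": each runner's times in seconds (made-up data).
# Suppose A beat B in their one head-to-head race, but B has the faster best.
runners = {"A": [272, 275], "B": [270, 281]}

best = {name: min(times) for name, times in runners.items()}
ranking = sorted(best, key=best.get)   # fastest best time first
print(ranking)                          # ['B', 'A']
```

B stays ahead of A on the leaderboard until A runs faster than 4:30, regardless of who beat whom.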
The ranking lists are not precise measurements ... They are
approximations ... The idea is to get a general idea of how fast
runners are relative to each other.
Consider this ... during outdoor track the ONLY event is the One
Mile ... Everybody runs just the mile race ... Think about the
leaderboard for that ... After some separation amongst the very top
runners, the leaderboard will get very clogged with many runners running
similar times ... There could be fifty runners statewide between 4:40 and
4:45 ... Somebody will be ranked #1 in that group and somebody #50 ...
Is there really that much difference?? ... The ranking number itself is
useless in this scenario (just like in my XC lists when there is a large
grouping of runners within 3 or 4 points of each other).
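The arithmetic behind that thought experiment is worth spelling out ... the 4:40-to-4:45 band is only 5 seconds wide, and applying the same 3-seconds-per-point scale used for the XC ratings, that is under 2 rating points separating runner #1 from runner #50:

```python
# Width of the hypothetical mile leaderboard band, in seconds and points.
band_seconds = (4 * 60 + 45) - (4 * 60 + 40)   # 4:45 minus 4:40
points = band_seconds / 3.0                     # 3 seconds per point
print(band_seconds, round(points, 2))           # 5 1.67
```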