An Alternative UROY Ranking System

Wow, there sure has been a lot of “complaining” on various blogs since the 2010 UROY results came out earlier this week. It’s interesting that people take this so seriously, including me. Isn’t it the races that actually count, not some subjective ranking comparing Bay Area trail vs Ohio road, Colorado rocky mountains vs California track meets, competitive races vs cherry-picked ones, lots of races vs an American record, real ultras (100-milers) vs those pseudo-ultra 50Ks? Consider the top three women. How the heck does one compare Tracy Garneau’s (#1) year, which included only three races but all victories at well-known events (HURT, AR, WS); to Meghan Arbogast’s (#2) year, in which she finished 9 races, including WS (2nd behind Tracy), TNF (6th), and the World 100K (5th), and won the MUC; to Ellie Greenwood’s (#3) year, which included 7 finishes with a big victory at the World 100K (ahead of Meghan) but many smaller, lesser-known Canadian races? Head-to-head, both Tracy and Ellie beat Meghan (Ellie beat her twice). Tracy and Ellie never competed together (unless they both DNF’ed at a race together?). Tracy didn’t complete a race after WS. Meghan and Ellie ran for more months, and both ran marathons, Meghan’s faster than Ellie’s. These diverse years are not easy to compare. Do I wish Meghan had won? Yes, because she is my friend, but the other two also had awesome years and I applaud all of them.

So how are these rankings determined? According to the release at Ultrarunning Magazine, “a panel of 18 race organizers from all regions of North America submitted ballots this year.” The panel of 18 is anonymous, so we’ll never know who they are. I don’t think there is anything wrong with the voting; in fact, I kinda like that the voters are anonymous so people can’t lobby or bribe them for votes. But maybe there is a better way. Yep, I think we could take some of the subjectivity out of it.

I’m not that big a football fan, but with the Oregon Ducks making it to the BCS Championship Bowl (is that redundant?) I was definitely paying attention to the weekly BCS rankings this season. For several weeks we were ranked #1, but we eventually settled at #2 even though we never lost a regular-season game. The BCS rankings, if you don’t know, combine two human polls (Harris, Coaches) with the computerized rankings, which are actually six different ranking algorithms.

Can you see where I’m going with this? Ultrarunning is already set up to go to a BCS-style ranking with Ultrasignup, the center of the ultrarunning universe. Come on, everybody has looked at their ranking at Ultrasignup at least once. While the Ultrasignup ranking is cumulative, and as you get older you get to watch it drop lower and lower (I actually just finished a race and got a ranking higher than my cumulative ranking, the first time in 5 years!), I’m sure the engineers at Ultrasignup could come up with a system that only includes the current calendar year and is much more involved than a simple comparison to the winner’s time. The computer algorithm could keep track of things like depth of field (strength of schedule in BCS world), weather conditions, course records, how many years a race has been run, etc. It could also keep track of those annoying DNFs and DNSs, which quickly disappear since they aren’t recorded anywhere except via an exclusion from AJW’s Christmas card list. With this new system, if you sign up for a race and don’t get a finish, you get a DNxmas, and those take big points off your rank. It could also include a difficulty-of-entry component: races that fill up and don’t allow elites to enter via other means could actually increase a runner’s rank, as long as they tried to enter.

Just like the BCS, we’d still have the human polls, and they would be conducted on AJW’s blog. The monthly results would be sent directly to Ultrasignup and factored in accordingly. I can envision an updated real-time UROY hot list so we all can know at all times who the leading runners are. For a small fee, a runner could click a button and the computer could spit out a list of suggested upcoming races that are likely to increase his/her rank. Oh, it could be good.
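To make the idea a little more concrete, here is a toy sketch in Python of what such a blended score might look like: a computer component built from current-year results, weighted by depth of field and docked for every DNxmas, averaged with a human-poll score. Every name, weight, and the 50/50 blend below is my own assumption for illustration only; it is not how Ultrasignup (or the BCS) actually computes anything.

```python
# Toy sketch of a BCS-style UROY score (hypothetical, not Ultrasignup's system):
# a computer component built from current-year results plus a human poll,
# with a penalty for every DNF/DNS ("DNxmas").
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Result:
    finish_time: Optional[float]  # seconds; None means a DNF or DNS
    winner_time: float            # winning time for the runner's gender that day
    field_strength: float         # e.g., number of top-10 UROY runners in the field

def computer_score(results: List[Result]) -> float:
    """Average of winner-time/finish-time ratios, weighted by field strength,
    minus a flat penalty for each DNF/DNS."""
    total, weight, penalties = 0.0, 0.0, 0
    for r in results:
        if r.finish_time is None:
            penalties += 1                    # the dreaded DNxmas
            continue
        w = 1.0 + r.field_strength            # deeper fields count for more
        total += w * (r.winner_time / r.finish_time)
        weight += w
    base = total / weight if weight else 0.0
    return max(0.0, base - 0.02 * penalties)  # 2% off per DNxmas (arbitrary)

def uroy_score(results: List[Result], human_poll: float) -> float:
    """Blend the computer score with a human-poll score in [0, 1], BCS style."""
    return 0.5 * computer_score(results) + 0.5 * human_poll
```

The 2% penalty and the even split between humans and computers are pure placeholders; a real system would also need the weather, course-record, and difficulty-of-entry components described above.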

Yeah sure, the computerized rankings would likely generate lots of complaints, too, but at least humans wouldn’t be the targets of the complaints.  It would be those damn computers at Ultrasignup.

10 Comments

  1. Good points, even if your tongue is half in your cheek. All three women had great performances and yes, unless they’re going head-to-head in more than a couple races, we are essentially left to compare apples to oranges.

    For the record, I count eight finishes for Ellie if you include the World 100k Championships along with all those smaller, lesser-known Canadian races like the Death Race, in which she beat the previous men’s record and finished second overall to Hal Koerner. She also “finished” the Orcas Island 50k, the small and lesser-known Washington race, and likely would have set another record there had she not gone off course and been DQed (these smaller races and their lesser-known course markings!).

    Ellie’s finally going to get a shot to show what she’s got this year by attending the bigger, well-known races like Western States, Comrades, and American River. Whatever happens, she should be fun to watch!

    • @Paps, well, ultrasignup shows six ultras excluding Worlds, but including Harriers Elk/Beaver (7 finishers), Scorched Sole, and Run for the Toad??? Regardless, it is going to be fun watching Ellie this year at AR, Comrades, and States. Hope she rocks them.

  2. With regard to making the UROY voters known and the possible subsequent side issues, like bribes and so on… I think that’s a ‘better’ ‘problem’ to have than no transparency. Today nobody knows who is voting, why they vote, or how (as best I can tell). From reading various well-known runners’ questions and feedback on the results, it appears to me there are no dictated metrics, guidance, or weighting system. Yes, although your tongue is half in your cheek, you make great points that need to be chewed on.

  3. Hey Craig, thanks for bringing up the topic. Rankings tend to be one of the more controversial topics on UltraSignup; I receive plenty of suggestions on how to make the numbers better. Although most suggestions are great ideas, there are downsides to most of them. The current system eliminates subjectivity by simply averaging your past performances compared to the fastest person that ran that day (gender specific). Many people complain about the lack of age adjustment, rightfully so. This is an area I would like to address this year. Instead of basing the number off the fastest overall finisher, I would like to create a new age-adjusted rank that bases the average off of the fastest person in the runner’s age group. This would be in addition to the current rank. The problem with this new rank is that many ultras have fewer than 100 participants, and when you divide up the numbers into age categories, you really water down the meaning. An alternative would be to base the age rankings off of the “All Time Fastest” age-group winner. This would be more meaningful, but people may complain about weather factors and/or subtle course modifications over the years. Not sure if I want to open up that can of worms. Maybe I could create 5 variations of the ranks so people can pick and choose which one they prefer? One could be a “distance-specific” rank that would give you an idea of how well a runner performs in a 50K vs. a 100-miler.

    • @Mark Gilligan, those are all great ideas. But really, anything that helps an old guy whose ranking just continues to drop would be good for the ego and enthusiastically applauded 🙂

      A strength-of-field ranking would also be cool to see. You could make use of race rankings similar to what Ultrarunning Magazine puts out in March, where the number of top-ten UROY runners in a particular race is summed up to give an indication of the quality of the field. The higher the quality of the field, the higher the rank of the runner’s performance. Of course, that creates a chicken-or-egg situation: do you rank the race first or the runners first?

      As a first step, it shouldn’t be too difficult to have a current-year ranking in addition to the cumulative ranking and just continue to use a runner’s time relative to the winner’s time (a rough sketch of that idea follows the comments). It doesn’t discriminate between a runner who won a single Fat Ass and someone like Tracy who won three big races, but it’s a start.

  4. Are we leaving the “I just love to run” Chuck Jones mentality behind? Great discussion, but I hoped I would never see “BCS” and “ultrarunning” mentioned in the same breath :) Other than seeing a true 100-mile trail championship on US soil come to fruition, where anyone who feels they have a chance at the title can get in in any given year, I hope the algorithmic rankings stay in the sports that aren’t mano-a-mano to begin with. -Koz

  5. I tried to do an automated ranking this year, using a variety of weighting algorithms: average percentage place in field, further weighting by how many were in each race, the total sum of each of these to take into account how many races were run, and several others.

    None of my algorithms produced a result that came anywhere close to my intuitive feeling for who were the best runners. This is not to say that someone couldn’t invent an algorithm that would be better (I have a couple ideas I want to try next year), only that it is a lot harder than it looks.

    In fact, I think there is not a unique answer to the question, because it involves subjective opinions about how much you value, for instance, performance in a few big-name races versus performance in many lower-profile races (as you point out succinctly in your article, Craig).

    This is why John Medinger solicits opinions from a number of race organizers – to sample a wide range of the quality vs. quantity spectrum.
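For anyone curious, here is a minimal sketch of the current-year version of the simple time-vs-winner averaging discussed in the comments above. The function name, the tuple-based input format, and the example times are all made up for illustration; this is not Ultrasignup’s actual code.

```python
# Minimal sketch of a current-year ranking: average your finish time against
# the fastest gender-specific time from each race, restricted to one calendar
# year. Input format and example numbers are hypothetical.
from datetime import date

def current_year_rank(results, year):
    """results: iterable of (race_date, finish_time_sec, winner_time_sec) tuples."""
    ratios = [winner / finish
              for race_date, finish, winner in results
              if race_date.year == year and finish > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

# Example: two 2010 finishes, roughly 110% and 125% of the winners' times.
print(current_year_rank([(date(2010, 6, 26), 7.5 * 3600, 6.8 * 3600),
                         (date(2010, 10, 2), 10.0 * 3600, 8.0 * 3600)],
                        year=2010))  # prints ~0.85
```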
