Who’s Got the Talent and Who’s Doing What With It? — Part One

This is the first in a series by a friend of the site, whom we'll call MB. Hope you enjoy:

As avid college football fans, we find that three of the topics we most enjoy discussing (arguing about) are “Which team is the best?”, “Which teams have the most talent?” and “Which team gets the most out of its talent?” The answers to these questions are almost by definition subjective, but we have developed a fact-based, “objective” system (really a set of systems) to measure and compare both on-field performance and talent levels for college football teams and conferences.

In an effort to keep this series of posts uncluttered, we will explain in detail, at the end of the post, the “formulas” for all the calculations we reference. We want to make it clear that we recognize these systems are not perfect…they have flaws, flaws that we will point out. Still, we believe these systems are very accurate and do a good job of putting objective, quantitative measures on subjective college football characteristics. More importantly, we believe this data provides a great basis for discussion among knowledgeable and reasonable college football fans, and we are always open to suggestions on how to improve the systems. In this series, we will discuss performance, talent and performance vs. talent (i.e., who gets the most/least out of their talent).

Summary

The data and analyses below will show that talent is critical for elite success in college football. Over the last decade, the vast, vast majority of national title contenders have had elite talent. Oklahoma in 2000, Nebraska in 2001 (although one could argue the Huskers should not have been a title contender) and potentially Boise State in 2010 are the rare examples of teams without elite talent competing for titles. However, collecting elite talent is not enough…in fact, the data will show that once programs reach the very top end of talent levels, performance starts to decline (we intend to do a more in-depth analysis of this counterintuitive pattern in the future).

Our takeaway from this is that coaching is also very important. There are several examples of programs that achieved high levels of success without top talent: Virginia Tech, Boise, Utah, Oregon St, Oregon and West Virginia among them. Some programs achieve this by finding “diamond in the rough” recruits; others do it by designing and implementing systems that can be effective with less talent. Those teams excel at getting the most out of their talent.

However, the programs that balance assembling great talent (not necessarily the best talent in the country) with outstanding coaching are the ones that consistently compete for titles. That was USC from 2002-2004: good talent, not crazy talent, and great coaching. From 2005-2008, USC’s talent advantage over the rest of college football grew to unbelievable heights, yet performance failed to keep up and the program got less out of its talent.

Conversely, there are programs that consistently aggregate good/great talent but rarely, if ever, “convert” that talent into performance on the field. Tennessee, Cal, Notre Dame and Wisconsin are among the teams with legit NFL talent and little to show for it. UCLA and South Carolina consistently show up in the recruiting rankings, but don’t get performance on the field and don’t “convert” their recruiting talent into NFL talent. These are all key components in putting together a successful program over the long run. None of these conclusions are shocking, but we believe the application of fact-based, objective analyses and ranking systems brings the facts into a clear light, away from the bias and subjectivity that are rampant in the media and among fans. We hope you find the analyses helpful and interesting.

Performance

The bottom line in college football is how a team performs on the field. To that end, we will start by ranking which teams have performed the best on the field (not talent, not potential, but purely on-field performance) and then discuss the various ways those teams achieved their success (recruiting talent, developing talent, coaching acumen, etc.). We will describe our ranking system in detail at the end, but it is based on the Sagarin composite rankings, adjusted to add points for national championships, top 10 finishes, wins over ranked teams and BCS performance. Here are the top 25 teams of the decade based on our system:

Top 25 Teams of the Decade, Table 1

Rank / Team / Points

1. USC 164.6
2. Oklahoma 151.4
3. Florida 147.3
4. Texas 136.5
5. LSU 136.1
6. Ohio St 132.5
7. Miami 127.7
8. Georgia 105.5
9. Virginia Tech 105.4
10. Alabama 102.3
11. Florida St 102.1
12. Auburn 95.1
13. Oregon 94.3
14. Boise State 93.9
15. West Virginia 91.5
16. Nebraska 90.7
17. Michigan 90.1
18. Tennessee 89.2
19. Penn St 88.6
20. Oregon St 87.7
21. Iowa 87.4
22. Utah 86.1
23. Texas Tech 83.5
24. TCU 82.9
25. Louisville 82.4

Not too many surprises here, particularly at the top. Oklahoma finishes second despite having only one national title in the decade…ahead of Florida and LSU, which each won two. The combination of no poor seasons (worst finish was 17th in Sagarin), the second-most wins in the decade (and the most wins vs. top 10 competition) and four trips to the BCS title game propels the Sooners to #2.

It is a close call (essentially a tie) between Texas and LSU for fourth. LSU won two national titles in the decade compared to one for Texas, but LSU also had three finishes outside the top 20 (Sagarin) while Texas had none. Eleven more wins (and a higher winning percentage vs. top 10 and top 30 opponents) plus a title game loss were just enough for Texas to edge LSU.

There were also close battles between Georgia and Virginia Tech for eighth and between Alabama and Florida St for 10th. Alabama’s 2009 national title allowed the Tide to skyrocket up the board and sneak into the top 10. Boise and West Virginia finish ahead of traditional powers Michigan, Tennessee and Penn St. Utah and TCU also make the list from outside the BCS conferences. Two SEC teams in the top 5, four in the top 10 and six in the top 20…very impressive. Only six programs compiled 100 wins in the decade: Boise (112), Texas (110), Oklahoma (110), Ohio St (102), USC (102) and Florida (100). Of the teams knocking on the 100-win door, it was interesting to see TCU there with 95 wins.

Next come the top 25 single-season teams. USC leads the way with six, followed by Florida with four and Miami, LSU and Ohio St with three each. The 2000 Miami team is the highest-ranked team not to win a title (it did not even play in the title game), finishing ahead of 2002 national champion Ohio St. 2008 Florida is the highest-ranked one-loss team, and 2007 LSU is the highest-ranked two-loss team.

Top 25 Single Season Teams 2000-2009, Table 2

Rank / Team / Points

1. 2005 Texas 123.7
2. 2001 Miami 123.4
3. 2004 USC 120.4
4. 2000 Oklahoma 119.1
5. 2009 Alabama 117.8
6. 2008 Florida 115.5
7. 2003 LSU 114.7
8. 2006 Florida 113.4
9. 2003 USC 111.3
10. 2007 LSU 110.1
11. 2000 Miami 110.0
12. 2002 Ohio St 109.0
13. 2000 Florida St 108.3
14. 2005 USC 108.2
15. 2001 Florida 108.0
16. 2002 USC 105.1
17. 2006 USC 104.1
18. 2002 Miami 104.1
19. 2009 Florida 104.0
20. 2008 USC 103.4
21. 2005 Ohio St 103.1
22. 2008 Oklahoma 102.9
23. 2004 Auburn 101.8
24. 2006 LSU 101.8
25. 2006 Ohio St 101.4

Please note that the point totals for multiple-year analyses (i.e., Table 1) and single-year analyses (i.e., Table 2) are not comparable. We will explain in detail below, but the main point is that in order to maintain the appropriate weighting for the bonus points applied to the Sagarin score, multiple-year calculations use the average Sagarin score over the period rather than the sum.
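As a quick illustration of that weighting point, here is a minimal sketch in Python (the numbers are hypothetical, not from our data):

    # Hypothetical two-season example. Summing the Sagarin scores instead of
    # averaging them would shrink the relative weight of the fixed-size bonus
    # points as more seasons were added.
    sagarin_by_year = [90.0, 88.0]  # composite Sagarin score for each season
    bonus_points = 12.5             # e.g., one title (10.0) + one BCS win (2.5)

    multi_year_score = sum(sagarin_by_year) / len(sagarin_by_year) + bonus_points
    print(multi_year_score)  # 101.5 -- the bonuses carry the same weight as in a single season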

The other side of the coin is the list of worst performing teams during the decade:

Worst 10 BCS Programs 2000-2009, Table 3

Rank / Team / Points

1. Duke 44.9
2. Indiana 53.2
3. Baylor 53.9
4. Vanderbilt 57.5
5. Rutgers 59.5
6. Syracuse 60.9
7. Mississippi St 61.9
8. Illinois 62.3
9. Iowa St 64.3
10. Kentucky 65.3

Duke was the worst-performing team from a BCS conference this decade…by a WIDE margin. As we will see below, Duke had three of the bottom four single-season performances of the decade (and four of the bottom seven). Since we were so complimentary of the SEC above, it is only fair to point out that the SEC had three of the worst programs of the decade. The Big 12, Big 10 and Big East each had two teams on the list. It is amazing that Illinois makes this dubious list given that the Illini played in two BCS games during the decade! Outside of those two seasons, Illinois’ performance score was 55.5, with years of 11, 10, 9, 8, 7 and 7 losses. The Pac 10 had no teams on the list…the only BCS conference that can make that claim. Note that this list only includes teams that were in a BCS conference for all years of the decade.

For the sake of completeness, below are the worst single season performances of the decade…

Worst 25 Single Season Teams 2000-2009, Table 4

Rank / Team (Record) / Points

1. 2001 Rutgers (2-9) 44.0
2. 2001 Duke (0-11) 46.6
3. 2006 Duke (0-12) 47.5
4. 2000 Duke (0-11) 47.9
5. 2002 Rutgers (1-11) 48.1
6. 2000 Wake Forest (2-9) 51.6
7. 2005 Duke (1-10) 52.7
8. 2002 Kansas (2-10) 52.7
9. 2008 Washington St (2-11) 53.7
10. 2000 Baylor (2-9) 53.8
11. 2003 Temple (1-11) 54.4
12. 2003 Baylor (3-9) 54.6
13. 2009 Washington St (1-11) 55.2
14. 2008 Washington (0-12) 55.2
15. 2003 Indiana (2-10) 55.4
16. 2000 Rutgers (3-8) 55.4
17. 2007 Minnesota (1-11) 55.7
18. 2002 Baylor (3-9) 55.9
19. 2008 Indiana (3-9) 56.2
20. 2003 Illinois (1-11) 56.5
21. 2001 Cal (1-10) 56.5
22. 2005 Syracuse (1-10) 57.1
23. 2001 Vanderbilt (2-9) 57.2
24. 2007 Syracuse (2-10) 57.6
25. 2007 Duke (1-11) 57.7

As discussed above, Duke really shines on this list! Other repeat offenders include Rutgers and Baylor (three times each) and Washington St, Indiana and Syracuse (twice each).

We can also look at these performance rankings by conference:

Average Performance Ranking by Conference 2000-2009, Table 5

Rank / Conference / Points

1. SEC 79.7
2. Pac 10 78.8
3. Big 12 78.2
4. ACC 76.5
5. Big 10 76.2
6. Big East 76.0

No surprise to see the SEC at the top of the list, but it is interesting to see the Pac 10 in second…ahead of the Big 12 and well ahead of the ACC, Big 10 and Big East. While the Big 12 has two great teams leading the way, the bottom half of the conference drags it down (21 of the 100 worst BCS teams of the decade are from the Big 12). The Pac 10 has had few elite teams outside of USC this decade (only seven non-USC teams in the top 100), but has had consistent strength in the middle of the conference (only nine teams in the bottom 100, the fewest of any conference). It is important to note that these rankings credit each team to the conference it was in at the time. For example, the Big East gets credit for Miami from 2000-2003…which happens to be when the Canes were the best team in college football. If one were to look at the Big East today, under the current alignment, the conference’s performance would lag even more. We do note that the Big 10 has fallen quite a bit behind the Big 12 and Pac 10 this decade.

Come back tomorrow on HP when we look at talent levels and recruiting rankings for the decade…


31 Responses to Who’s Got the Talent and Who’s Doing What With It? — Part One

  1. AUman76 June 16, 2010 at 5:00 pm #

    Does USCheaters 2004 total in clude Auburn’s 10 points they should have? How bout the overall decade totals too? Might wanna re calculate the figures and adjust em a tad now? The whole sagrin system is based on his opion of what a team is worth. Then he throws in a lil fancy math and wa la….another system of judgement not facts. He clearly places values based on names of teams not just performance. Last I checked Auburn played a much tougher schedule and defeated more top 10 and top 25 teams in 2004 but still got screwed in all the polls. Both human and puter nerds f’d it up cause they wanted to see a certain matchup. Well they ngot what rose colored glassed usually get you…FOOLED! They selected the wrong team and they still do. Bammer’s 2009 team barely defeated the worst AU defense in history yet are considered better than a team with 3 1st round NFL players in the same backfield? So there you have it… the same ol bullshit being seen through the same ol’ rose colored glasses. The 2004 AU team was the most well rounded college team of the decade and reached the potentioal that even theso called experts had thought thety would just the previous two seasons. They experts had piss poor timing as did my Tigers. Nope I don’t likes the establishment . Fer theys likes to be greedy and don’t play fair with some kids.

  2. Heismanpundit June 16, 2010 at 5:54 pm #

    You are right. It was the same old stuff through the same old rose colored glasses.

    Yours.

  3. mb June 16, 2010 at 6:13 pm #

    actually, Auburn played the 60th-toughest schedule in 2004 (Sagarin) while Oklahoma played the 13th-toughest and USC played the 7th-toughest. Prior to the bowl game, Auburn did not play a team that finished the season in the top 10 (final Sagarin), while Oklahoma was 1-0 vs top 10 teams (final Sagarin) and USC was 2-0. All three programs were 4-0 vs top 30 teams prior to the bowl game.

  4. Anonymous June 17, 2010 at 3:58 am #

    Where are the formulas? Please post the spreadsheet including the raw numbers, or these comparisons are meaningless.

  5. mb June 17, 2010 at 9:41 am #

    Heismanpundit will be posting all the formulas at the end. All of the data sources, math and rationale will be described in detail.

  6. ME June 17, 2010 at 10:24 am #

    sharpe ratio would be a nice analytic to see as well….

  7. Anonymous June 17, 2010 at 1:34 pm #

    Let us know when you make the spreadsheet available… until then this just looks like more USC homerism. It’s a lot like telling us that unnamed sources think Bush should keep his Heisman despite cheating and lying.

  8. mb June 17, 2010 at 3:57 pm #

    Here are the Sharpe ratios for 2000-2009, calculated using the 20th-ranked Sagarin score for each year as the benchmark. We took each year’s performance metric (calculated as in Table 1) and subtracted the benchmark from it for each year 2000-2009 (each year’s “differential”). The Sharpe ratio is (average of the differentials)/(stdev of the differentials). A short code sketch follows the list below.

    Sharpe Ratio 2000-2009
    1 Oklahoma 1.35
    2 Texas 1.06
    3 USC 1.01
    4 Ohio St 0.93
    5 Florida 0.89
    6 Georgia 0.81
    7 Virginia Tech 0.74
    8 LSU 0.73
    9 Miami 0.46
    10 Florida St 0.40
    11 Auburn 0.16
    12 Oregon 0.14
    13 Boise 0.12
    14 Alabama 0.03
    15 Nebraska (0.02)
    16 Michigan (0.03)
    17 Tennessee (0.07)
    18 Oregon St (0.09)
    19 West Virginia (0.12)
    20 Penn St (0.15)
    21 Iowa (0.21)
    22 Texas Tech (0.24)
    23 Utah (0.29)
    24 TCU (0.36)
    25 Louisville (0.39)
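    For anyone who wants to replicate it, here is a minimal sketch of that calculation in Python (the inputs are placeholders; the real metrics and benchmarks come from the data behind Table 1 and the yearly Sagarin lists):

        # Sharpe-style ratio as described above: the yearly differential is the
        # team's single-year performance metric minus the 20th-ranked Sagarin
        # score for that year, and the ratio is mean/standard deviation.
        from statistics import mean, stdev

        def sharpe_ratio(metrics, benchmarks):
            # metrics: the team's performance metric for each year 2000-2009
            # benchmarks: the 20th-ranked Sagarin score for each of those years
            diffs = [m - b for m, b in zip(metrics, benchmarks)]
            return mean(diffs) / stdev(diffs)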

  9. mb June 17, 2010 at 4:13 pm #

    Ten Worst Sharpe Ratios 2000-2009 (BCS Only)

    1 Indiana (3.32)
    2 Baylor (3.11)
    3 Duke (2.68)
    4 Vandy (2.60)
    5 Michigan St (1.81)
    6 Miss St (1.74)
    7 Northwestern (1.70)
    8 Arizona (1.57)
    9 Pitt (1.56)
    10 Syracuse (1.55)

  10. SEChater June 18, 2010 at 9:43 am #

    gotta love that the SEC homers are coming out of the woodwork because the team of the decade won 2 AP national championships, played in 7 BCS games, and had 7 straight 11-win seasons while playing a 13-game schedule. i guess if i had a false sense of self i would be pissed too.

  11. AUman76 June 18, 2010 at 9:27 pm #

    hey mb how many teams were rated in the top 10 when we beat em? Seems I read it was 5? Maybe we were the reason why they didn’t finish as high? Just who did USC beat that did or has ever done anything special before or since 2004? Matter of fact what the hell has USC done so special since 2004? They couldn’t close out Texas in 05 and then they couldn’t even beat the likes of Stanford? Yeah…great teams lose to Stanford don’t they? Only on the leftist coast do we get this kind of thought process. Hell fire…daOSU has been in more crystal ball games than them thar rubbers. And now y’all gotta give back that one. lol What’s worse knowing you belonged in the 2004 BCS title game but being left out or winnin it then havin to give it back cause you got caught cheatin? The shame of being left out belongs to the media and coaches the shame of being a cheater speaks for itself, huh? Spew all the stats and formula BS you wanna but the ghame is played on the field not in some nerds computer. By the way every one doing a computer formula decides the value of a teams schedule bases solely on his opion of said teams and teams played. It’s nothin more than a mathmatical opinion. Even those opinions are like asholes…they stink too! Is Cali gonna boycott UA and ASU if the new immagration law passes? If so there’s at least two losses for all Cali teams via forefiet. Can’t do business with any companies in the state then state Uni’s sure as hell shouldn’t play em in sports. Only consistant and equal application of the new proposal to boycott businesses in Arizona are fair don’tca think? That means the winner of the ASU-UA game will win the pac1. Hey just a formula I’m tossin at you. No dumber than what Sagarin spews.

  12. Anonymous June 21, 2010 at 8:17 pm #

    Still waiting for the spreadsheet… or is the analysis so flawed that it won’t stand up to the light of day?

  13. mb June 22, 2010 at 9:33 am #

    As I said earlier, Heismanpundit will be posting all the formulas for each table presented at the end. It will include the data sources, the math and the rationale for each. If you have a specific question on a specific formula, I will do my best to answer it.

  14. Anonymous June 22, 2010 at 2:32 pm #

    Presenting the basis and supporting data long after the results makes no sense, and it doesn’t foster trust in the methods or encourage meaningful discussion. But I guess I’ll join in this backwards guessing game:

    1. If the analysis details and spreadsheet will stand up in the light of day, why wait?

    2. You seem to be using Sagarin ratings and then overlaying data from Sagarin ratings on top. How can you justify this dependent use of data and what is your basis for emphasizing certain elements of Sagarin more than Sagarin does?

    3. How many conference championship points did you give USC? Boise St? Oklahoma? Florida? Ohio St?

    4. Since you awarded points for BCS wins, I assume you also awarded points for winning conference championship games. Is this true?

    5. Did you account for conference size or strength when awarding points for conference championships?

    6. Do you consider BCS championships?

    7. How do you handle vacated wins and vacated championships?

    8. Do you penalize teams at all for cheating or being put on probation?

    9. Please write out the formula that you use and the raw data for the top 10 so others might evaluate your approach in greater detail.

  15. HP June 22, 2010 at 2:41 pm #

    We did it this way because it was an extraordinarily long study and we didn’t want the debate over methodology to hinder the debate over the results until all the results were shown.

    By the tenor of your questions, you seem to have some sort of issue with the results so far. I’m not sure why you would think that sanctions have some effect on results on the field and talent levels, unless you have some sort of agenda.

  16. mb June 22, 2010 at 3:12 pm #

    1. We were concerned that a bunch of detailed formulas would make the posts difficult to read and bog them down. All the formulas will be posted.

    2. We use Sagarin as a base for our rankings because he does an outstanding job with SOS and MOV. We chose to place more emphasis on truly elite performance than Sagarin does. Most of this emphasis, however, is based on either Sagarin OUTPUT or third-party output (BCS), as opposed to double counting input. For example, Sagarin does not give bonus points for finishing #1 or winning a title. We think it is important to give added emphasis to elite performance. A team winning a national title one year and finishing 5th the next should definitely be ranked ahead of a team finishing 3rd two years in a row (both average a 3rd-place finish). Just our opinion.

    3. None. Conference championship points are not awarded. However, BCS appearances are rewarded, so conference championships (for the BCS conferences) are indirectly rewarded: 10 points for a BCS/AP title, 3 points for a BCS title game loss, 2.5 points for a BCS win, 1 point for a BCS loss.

    4. No. Sagarin does incorporate that extra win into his formula as well as SOS.

    5. No points awarded for conference championships. One of the great things about using Sagarin is that he incorporates SOS heavily in his rankings. Those programs that play the best schedule are rewarded by Sagarin. My system further rewards it by giving bonus points for wins over ranked teams.

    6. Yes. See #3 above.

    7. Not factored in.

    8. No

    9. Our performance metric is weighted towards outstanding performance. We believe that elite performances should greatly outweigh middle-of-the-road performance. The base for our calculation is Sagarin, who we believe does a very good job (not perfect) with his computer rankings. By starting with Sagarin, we can capture strength of schedule, margin of victory and overall wins and losses. We then add emphasis to Sagarin for outstanding performance.
    The performance points are calculated as follows: average Sagarin composite ranking points over the time frame (2000-2009 in this case) PLUS bonus points for top 10 finishes (final Sagarin) PLUS bonus points for BCS performance PLUS bonus points for wins over top 10 and top 30 teams (final Sagarin) MINUS points for losses to non-top 30 teams (final Sagarin). Bonus points for top 10 finishes are awarded in this manner: 5 points for 1st, 4.5 points for 2nd, 4 points for 3rd…down to 0.5 points for 10th. BCS performance points are awarded in this manner: 10 points for a BCS/AP championship, 3 points for a loss in the BCS championship game, 2.5 points for a regular BCS game win and 1 point for a BCS game loss. Bonus points for ranked wins are awarded as follows: 0.5 points for each win over a top 10 team and 0.25 points for each win over a top 30 team (not double counting the top 10 wins). 0.25 points are deducted for each loss to a non-top 30 team.
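    In code form, the calculation is just a handful of additions. Here is a minimal sketch in Python (the season record structure is ours for illustration; the actual spreadsheet may organize the inputs differently):

        # Performance score as described above. A single-season score passes one
        # season; a multiple-year score averages the Sagarin component across
        # seasons while the bonus points accumulate.
        BCS_POINTS = {"bcs_ap_title": 10.0,    # BCS/AP championship
                      "title_game_loss": 3.0,  # loss in the BCS title game
                      "bcs_win": 2.5,          # regular BCS game win
                      "bcs_loss": 1.0}         # regular BCS game loss

        def finish_bonus(final_rank):
            # 5 points for finishing 1st in final Sagarin, 4.5 for 2nd ... 0.5 for 10th
            return 0.5 * (11 - final_rank) if 1 <= final_rank <= 10 else 0.0

        def performance_score(seasons):
            avg_sagarin = sum(s["sagarin"] for s in seasons) / len(seasons)
            bonus = 0.0
            for s in seasons:
                bonus += finish_bonus(s["final_rank"])
                bonus += sum(BCS_POINTS[r] for r in s.get("bcs_results", []))
                bonus += 0.5 * s["top10_wins"]         # wins over final top 10 teams
                bonus += 0.25 * s["top30_wins"]        # top 30 wins, excluding top 10 wins
                bonus -= 0.25 * s["non_top30_losses"]  # losses to non-top 30 teams
            return avg_sagarin + bonus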

    Here are the inputs for the top 10 teams (only the final top 10 are included):

    Average Sagarin Score 2000-2009

    OK 90.9
    USC 90.9
    Texas 90.8
    Florida 89.6
    Ohio St 87.3
    LSU 86.9
    Miami 86.7
    VT 86.1
    Georgia 85.5
    Alabama 81.1

    BCS Bonus Points

    USC 33.0
    Florida 26.0
    LSU 25.0
    Ohio St 24.5
    OK 23.5
    Texas 18.0
    Miami 18.0
    Alabama 11.0
    Georgia 6.0
    VT 4.5

    Final Ranking Bonus Points

    USC 31.5
    OK 26.5
    Florida 21.0
    Texas 20.0
    Miami 17.0
    LSU 16.0
    Ohio St 14.0
    VT 11.0
    Alabama 8.5
    Georgia 8.0

    Top 10 Win Bonus Points

    OK 6.0
    USC 5.5
    Florida 5.0
    Miami 4.5
    Ohio St 3.5
    Texas 3.0
    LSU 3.0
    Georgia 2.0
    Alabama 2.0
    VT 1.5

    Top 30 Win Bonus Points (exc Top 10 above)

    Florida 6.8
    LSU 6.5
    USC 6.3
    Texas 6.0
    OK 5.8
    Georgia 5.8
    Miami 5.3
    Ohio St 4.8
    VT 4.8
    Alabama 3.8

    Non Top 30 Loss Penalty Points

    Florida (1.0)
    OK (1.3)
    Texas (1.3)
    LSU (1.3)
    Ohio St (1.5)
    Georgia (1.8)
    USC (2.5)
    VT (2.5)
    Miami (3.8)
    Alabama (4.0)

  17. Anonymous June 22, 2010 at 3:56 pm #

    HP –
    How can we evaluate results without knowing the methods? Real evaluation requires knowing the methods, assumptions and underlying data. Typically you have to provide enough detail for others to reproduce your results or you won’t be taken seriously. Providing the spreadsheet with equations is a quick way to do this. If you’re worried about bandwidth, provide the supporting detail for the top 10!

    You yourself have indicated that some of the numbers were surprising. Are they surprising because of calculation errors, analysis flaws or because they capture a fundamental truth? We’re a week into this and we still don’t have the basic info we need to answer such questions!

    P.S. I think it’s funny that it only took you 9 minutes to respond to my post once you saw legitimate questions about USC’s standing.

  18. Heismanpundit June 22, 2010 at 4:25 pm #

    Don’t give yourself too much credit. I just happened to be on the site when I saw your comment.

    I thought people from the South were supposed to be gentlemen…

  19. Anonymous June 22, 2010 at 6:45 pm #

    MB –

    Not sure how you justify folding AP rankings into BCS rankings. Don’t see how you can argue conference championship games shouldn’t count while simultaneously making BCS Bonus Points the single most influential metric. And how does 2-5 Oklahoma end up with nearly the same BCS Bonus Points as 4-1 Florida and 4-0 LSU?

    You have weighted BCS Bonus Points and Final Ranking Points much more heavily than the other metrics. For BCS Bonus Points you have a whopping 28.5 point difference between #1 and #10. That’s ten times the influence of two of the metrics and three times the influence of the Sagarin ratings themselves. In fact you could base your analysis completely on BCS Bonus Points and Final Ranking Bonus Points and end up with the same top 7 teams.

    Since HP still refuses to provide us with the equations, the best I can do with this data is illustrate how reduced skewing would change the results. I have replaced the skewed scores with the ranks of the scores. This is a common technique for dealing with data that is not normally distributed. It allows each metric to be equally weighted with the same mean and standard deviation.

    Root Mean Squared Ranking of MB’s 6 metrics:
    1. Florida 2.58
    2. OK 3.16
    3. USC 3.29
    4. Texas 4.42
    5. LSU 4.56
    6. Ohio St 5.83
    7. Miami 6.53
    8. Georgia 8.03
    9. VT 8.57
    10. Alabama 9.21

    So even when you lump in AP rankings, ignore probations, ignore vacated wins, and ignore conference championship games, HP’s beloved USC still only manages 3rd place.
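    For reference, the rank-combination step is trivial to reproduce in Python (rank 1 = best in each metric, so a lower root mean square is better). Using Florida’s ranks from the six component tables above:

        # Root-mean-squared rank across the six metrics.
        from math import sqrt

        def rms_rank(ranks):
            return sqrt(sum(r * r for r in ranks) / len(ranks))

        # Florida among the final top 10: 4th in average Sagarin, 2nd in BCS
        # bonus, 3rd in final ranking bonus, 3rd in top 10 win bonus, 1st in
        # top 30 win bonus, 1st (smallest) in non-top 30 loss penalty points.
        print(round(rms_rank([4, 2, 3, 3, 1, 1]), 2))  # 2.58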

  20. mb June 22, 2010 at 8:30 pm #

    You must be confused, or maybe we did not portray the numbers clearly enough. Not only did we give you the EXACT formula, we gave you every single component of the formula that sums to the total. Please go back and see that the math works and sums to the total.

    You must have skimmed the post and not read it carefully. If you had, you would have seen the scale for BCS points. Go back and look at it and you will see the math. You may notice that Oklahoma played in 7 BCS games, including 4 BCS title games.

    Yes, we weight BCS performance and top 10 finishes more heavily…that is what makes an elite season, in our opinion. What is more important than BCS performance and top 10 finishes? You can’t honestly tell me you think we should weight losses to non-top 30 teams exactly the same as BCS performance? Or weight top 30 victories the same as top 10 finishes? Should a team that wins 2 games against top 30 teams but goes 3-9 get the same points (0.5) as a team that goes 10-2 and finishes 10th in the country, both having played the same schedule? Give me a break…that is what your root mean square does.

    Also, if you read carefully, you would see we only included the 10 teams that made the final top 10. That does not necessarily mean those are the rankings in each category. For example, Alabama did not finish 10th overall in average Sagarin…they finished 10th out of the final 10.

    We don’t include AP rankings except when the AP champion is different than the BCS champion.

    This analysis was designed to be as objective as possible and to generate dialogue among college football fans. It’s obvious you don’t think this formula is objective. We would invite you to design a different formula to measure college football performance for the decade…measuring 80 teams (all BCS conference teams plus all non-AQ teams that have been ranked in any final poll in the decade). The formula needs to work for single seasons and multiple seasons. Post your data…we look forward to reviewing it. Thanks for your interest.

  21. Anonymous June 23, 2010 at 2:35 pm #

    This analysis is a disaster.

    The only reason to pretend AP #1 can replace BCS #1 is to give USC 10 points they didn’t earn in 2003. Special rules that benefit only one team hurt your credibility, especially when you post on HP’s website and the team you choose to favor is USC.

    You say differences among the top 10 teams are less important than overall differences. I disagree. People care most how a ranking system differentiates between similar strength teams. You have a skewed system that relies almost completely on 2 of 6 metrics. You give Oklahoma 11 points just for losing BCS games. That’s more benefit than any team gets for 4 of your metrics! And USC gets a massive and unjustifiable 33 BCS Bonus Points despite only winning one BCS championship (which they have vacated for cheating).

    Losing games to opponents not in the top 30 is much more damaging than winning games against top 10 opponents. I think an equal weight of your metrics is more reasonable than having some count 10 times as much as others. But if you really don’t think a metric is that important then don’t include it. It is silly to weight BCS Bonus Points 3 times as heavily as Sagarin ratings, especially since you acknowledge Sagarin ratings as a cornerstone with several key subcomponents.

    I appreciate your response and apologize for not recognizing what you wrote as the complete formula… I had understood from previous posts that you and HP were not going to provide the formulas until the end.

    Here are just a few nonsensical results based on your ranking method:

    1. Beating #8 is better than beating #11, #12, #13, #14, #15, and #16 combined.

    2. Losing a BCS championship helps a team more than beating all teams ranked 11 thru 20!

    3. In your ranking point system, a team can offset 4 losses outside the top 30 by beating #9.

    4. Beating #1 is 20 times better than beating #11.

    5. You get bonus points for losing BCS games, but don’t get any credit for winning conference championship games.

  22. mb June 23, 2010 at 3:41 pm #

    Again, you must be confused, as your facts are completely wrong. Also, as I said before, feel free to come up with a better system and post all the data. Here are some responses:

    1. Check the math on the top 10. The Sagarin ranking counts for 68% (on average) of the overall score. I have no idea where you are getting the math to claim “it is silly to weight BCS Bonus Points 3 times as heavily as Sagarin ratings.” You are just flat wrong on the math. By the way, BCS points count for 14% of the overall score on average (across the top 10), topping out at 20%.

    2. Oklahoma gets BCS points FOR MAKING IT TO THE BCS TITLE GAME, which is a huge accomplishment. It is a reward for AN OUTSTANDING SEASON. Yes, they lost 4 title games, but the system rewards them for an outstanding overall season…more heavily than Sagarin…by design.

    3. I have no idea what you are talking about when you say “beating #8 is better than beating #11, #12, #13, #14, #15, and #16 combined.” You are just wrong. Beating #8 is worth 0.5 points and beating #11 through #16 is worth 1.5 points combined (0.25 each). Just wrong. And a team that loses 4 games outside the top 30 loses 1 point and only gets 0.5 points back by beating #9.

    You are spending way too much time on the top 10 and top 30 wins (and non-top 30 losses)…these are a small percentage of the formula…AND ARE ONLY INCREMENTAL TO SAGARIN. A team that lost to 4 non-top 30 teams would get crushed in Sagarin to begin with…my formula crushes them just a little more. Remember, on average (across the top 10) Sagarin is 68% of the score.

    Beating #1 is worth 0.5 BONUS POINTS and beating #11 is worth 0.25 bonus points…NOT 20X the points. Also, this is incremental to Sagarin…Sagarin will have already rewarded those teams.

    Any team that wins a CCG goes to a BCS game and is thus guaranteed points, so winning a CCG is counted in Sagarin and is given indirect bonus points.

    It is clear you don’t understand the math or the formula. Your statements are just flat wrong. You don’t have to like the formula, and I look forward to seeing you come up with a better one.

  23. Anonymous June 24, 2010 at 10:36 am #

    I still don’t understand the logic of giving large bonuses to teams for losing games… especially when the bonuses exceed what teams earn for winning a non-title BCS game.

    You don’t understand how metrics influence your ranking system and I’ll try to explain once again. The mean values of each component don’t cause a team to rise or fall relative to others. It is the variation in numbers that’s relevant. Among the top 10 teams: Sagarin Ratings vary by 9.8 with a standard deviation of 3.1. BCS Bonus Points vary by 28.5 with a standard deviation of 9.3. This is why I say your BCS Bonus Points have 3 times the influence on your rankings. It’s telling that you claim my findings about the top 10 are wrong because I don’t have data you refuse to provide. I don’t buy this excuse, and the fact that you can eliminate all but 2 metrics and get an identical top 7 supports my claim that two flawed metrics dominate your rankings.
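    Those spread figures are easy to check against the component tables posted above (Python, sample standard deviation):

        # Range and sample standard deviation of two of the six components
        # across the final top 10, using the numbers posted above.
        from statistics import stdev

        sagarin = [90.9, 90.9, 90.8, 89.6, 87.3, 86.9, 86.7, 86.1, 85.5, 81.1]
        bcs_bonus = [33.0, 26.0, 25.0, 24.5, 23.5, 18.0, 18.0, 11.0, 6.0, 4.5]

        print(round(max(sagarin) - min(sagarin), 1), round(stdev(sagarin), 1))        # 9.8 3.1
        print(round(max(bcs_bonus) - min(bcs_bonus), 1), round(stdev(bcs_bonus), 1))  # 28.5 9.3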

    I misunderstood what you were saying about rankings and assumed you got inflated points for beating a top 10 team rather than for being a top 10 team. This has its own serious flaws, in that you’re taking one small element of Sagarin and inflating it so it influences the ratings much more than the original Sagarin ratings themselves. And then you superimpose another BCS bonus on top that is also closely tied to the rankings (with a silly exception that gives a massive bonus to USC and no bonus to any of the other 244 teams).

    Just think how much confusion could have been avoided by providing all the supporting data up front.

  24. AUman76 June 25, 2010 at 10:33 pm #

    Very astute obsevations Anonomous. You know there system better than they do. This type of math is exactly why certain named schools always get the nod over teams that actually played tougher schedules on the field. Sure I’ll always be pissed about what happened to my 2004 Tigers but it has happened to other teams also. I’ve yet to see why Boise State or Utah can’t get into the big show? They seems to do very well when invited to a BCS game. Truth is there’s ain’t a hairs different in the top 25 teams talent wise. It’s mostly luck with injuries and team chemistry that decides the winners and losers. Yeah USC always has highly ranked recruiting classes but how many Crystal balls do they have now? ZERO! Any computer hack good with numbers and formulas can come up with a “System” to rank anything. He or she can also tip the scales to favor whichever team or region so desired. It’s also easy to toss out crap like above when no real facts about how you actually work out the equation are exposed. Looks more like another ranking theory instead of a definitive indicator of quality teams. Oh well, USC has filed an appeal. Maybe these guys can come up with a formula to determine the odds of the Trojans beating the NCAA’s ruling?

  25. Anonymous June 27, 2010 at 11:29 am #

    Thanks AUman. It would be interesting to have a discussion about a reasonable set of criteria and put together a system that makes sense, but that’s not going to happen on this website.

  26. AUman76 June 28, 2010 at 11:18 pm #

    Ain’t that the trut, Anon? I like the site but there’s never been a doubt about the USC connections. I also have enough common sense to know USC has ans always will be a mainstay in college football. Especially on the west coast. The truth also is they are the only contender that will ever challenge for the BCS title from the pac1. The Texas snub was USC’s saving grace. Had they joined the pac16 the days of a one horse conference was over and so too would be the BS overrated hype which is USC football.
    So far as the numbers games on this site? You’re right…there will never be a fair and accountable formula used nor presented for obsevation. Just a lot of “but you don’t understand” will continue to thrive. Ole HP ain’t a bad guy and actually tosses out a good read or two from time to time. I think this mb is a sagarin desciple. lol Sagarin seems to be the west coast and ESPN’s favorite slide rule guy and they place way to much emphisis on his opinions. And opinions are all he really gives us. Noone not me, not you not anyone can accurately determine a true strength of schedule. No matter how many numbers we feed into a machine …that machine can not see the eb and flow and true closes of a contest. It also can’t know when a team completely dominates another squad despite the closeness of the final score or stats. Defense is also overlooked far too often. You seem to know your fingues so why not come up with a system that doesn’t award brownie points based on a schools name but becaused they were earned on the field! Give it a try then run it by us on here. Hell…it has to be as accurate as what we’ve seen so far and any explaination spelled out in any seblalence (spelling don’t have spell check so now some folks can question my education. Truth is I’m over 50 and to damned tired and lazy to look up the correct spellin. lol) of detail would be an improvement into the insite of how you came to your conclusions.

  27. Anonymous July 25, 2010 at 10:48 am #

    Most wins in the decade:
    1. Boise State 112
    2. Texas 110
    3. Oklahoma 110
    4. Ohio State 102
    5. Florida 100
    6. LSU 99
    7. Virginia Tech 99
    8. Georgia 98
    9. TCU 95
    10. Miami 92

    How is it that a team that had to vacate its only BCS title and didn’t make the top 10 in wins is ranked #1? Maybe the cheating Trojans dole out massive bonus points for probations.
