Even before Melo went down, it seemed you couldn't read an article, talk to a co-worker, or turn on the radio without hearing all about how Syracuse wasn't the best team in their region, and how Boeheim has a history of bowing out early in the tournament. Despite three Final Four appearances and a national championship, you'd think Jim Boeheim had never had NCAA tournament success, and that every year Syracuse rides into the tournament on a stellar regular season only to fall flat on its face in the first couple of rounds (Hi, Pitt!). You might also think the Orange are the only team ever to be upset in the tournament, or to fail to make the Final Four after being seeded atop a region.
For me, what is so upsetting about this is not the unnecessary hyperbole, not the reality that these sports pundits are wrong, and not the fact that these types of comments are often accompanied by Doug Gottlieb. What really gets to me is that these broad conclusions are drawn without any attempt to look at tournament history in a systematic way.
There is an easy way to find out whether Syracuse typically fails to live up to tournament expectations, or whether other teams typically fare better. It requires an analysis that took me a total of 35 minutes and could have been done by any journalist who gets paid to actually write about sports.
Assessing Expectations - Creating a 'Success Score'
My method for creating a tournament 'success score' was simple. Starting in 1985 (the year the tournament expanded to 64 teams), I collected the seed each team received (#1-#16), the round that seed would predict the team to exit the tournament after, and the round the team actually exited. I did this for Syracuse and for a set of teams I considered the most successful over the past 27 years.
I coded the round of 64 as "round 1" and counted up from there, so that the championship game counted as "round 6". Teams seeded #9-#16 were expected to exit after round 1, seeds #5-#8 after round 2, seeds #3 and #4 after round 3, the #2 seeds after round 4, and the #1 seeds after round 5. Teams that win or lose the championship game (round 6), by this coding, automatically exceed expectations.
To calculate the actual 'success score' -- a measure of the degree to which a team met or exceeded expectations -- the expected round of defeat was subtracted from the actual round of defeat. The resulting value could hypothetically range from -4 to +5, with '0' indicating a team met expectations exactly. For instance, a #2 seed losing in the first round would receive a '-3' for that year, while a #3 seed winning the national championship would receive a '+3' (sound familiar?). Looking at all years from 1985 to 2011 shows us how programs have done over time. Note: if a team did not make the tournament, no data were collected for that year.
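For anyone who wants to replicate this at home, the scoring rule above boils down to a few lines of code. This is just a sketch of my method; the seed-to-expected-round mapping and the two examples are taken straight from the description above, and any function names are my own.

```python
def expected_exit_round(seed: int) -> int:
    """Round a team is expected to exit, based on its seed (1-16)."""
    if 9 <= seed <= 16:
        return 1
    if 5 <= seed <= 8:
        return 2
    if seed in (3, 4):
        return 3
    if seed == 2:
        return 4
    if seed == 1:
        return 5
    raise ValueError(f"invalid seed: {seed}")

def success_score(seed: int, actual_exit_round: int) -> int:
    """Actual exit round minus expected exit round; 0 means expectations met.

    The championship game is round 6; a team that wins it is coded as
    exiting in round 6, so finalists always exceed expectations.
    """
    return actual_exit_round - expected_exit_round(seed)

# The two examples from the text:
print(success_score(2, 1))  # #2 seed loses in the first round -> -3
print(success_score(3, 6))  # #3 seed wins the championship    -> +3
```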
Flopping and Not-Flopping
Below are the results for each team. The first column is the mean of all the success scores. The next two are the mode and median of those scores. The final column is an aggregate total: the sum of all of a team's scores. Each provides some detail about how teams have fared:
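Each column of the table is a standard summary statistic over a program's yearly scores. As a quick sketch, here is how they would be computed for one team; the scores below are made-up illustrations, not any team's actual data.

```python
from statistics import mean, median, mode

# Hypothetical per-year success scores for one program (not real data)
scores = [0, -1, 2, 0, 0, -3, 1]

print(f"mean:      {mean(scores):.2f}")  # average score per appearance
print(f"mode:      {mode(scores)}")      # most common outcome
print(f"median:    {median(scores)}")    # middle outcome
print(f"aggregate: {sum(scores)}")       # total over all appearances
```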
[Table: Team Success Scores, 1985-2011]
How did Syracuse do? Looking at the data, Syracuse lands squarely in the middle of the pack. Their aggregate 'success score' is -4, which would suggest that over 27 years, Syracuse has come up just under expectations. But looking across teams, we see that every team but UConn has come up under expectations by the aggregate measure. Furthermore, the modal and median outcome for Syracuse is to exactly meet expectations, and the mean, at -0.18, is not substantively different from '0'. On the whole, this suggests that despite some notable flops, Syracuse typically meets expectations, and has had almost as many surprisingly successful tournament appearances as disappointments. As we will see, this is true for all of these teams.
Winners and Losers? Looking across the table, we see that all of the teams are pretty similar. Other than UConn, each team, in the aggregate, has come up just under expectations. In fact only UConn and Michigan State have any claim to typically doing better than expected. UConn's success is driven by their run to the championship last year (and the fact that they have the fewest tournament appearances among the group), while Michigan State's reflects the success the program has had since Tom Izzo took over (the numbers are far worse before then).
Kansas is notably the "worst" team of the bunch due to a couple of early-round flame-outs this decade -- though they are by no means bad. Duke, by some measures, is the second worst, but also has the only modal score above '0'. This reflects the fact that Duke was the least predictable team in the bunch, often succeeding well beyond expectations but just as often flopping tremendously. Dukies can blame this on their routinely high seeding in the tournament: Duke has had an average seed of 2.2, and no other team has had an average better than 2.5. Those high expectations make some level of failure inevitable over the course of 27 years.
What can we learn from this exercise? Well, first, it seems the selection committee does a great job with seeding. Teams typically meet expectations. For all the griping every March about seeding decisions, over the last 27 years it seems that, at least with these teams, the selection committee has done well.
More to the point, Syracuse is no more or less of a disappointment than other top programs across the country. If any team should be singled out, it's Kansas, but even they have done quite well. Ultimately, what this analysis teaches us is that sports writers do not care about the truth. I have spent, at best, an hour of my time on this fanpost, and was able to provide a much more balanced look at teams and tournament expectations than I would have by working off my memory and running my mouth. If only the guys and gals getting paid put in this moderate level of effort.