NCAA Tournament Primer

Everything you need to know about the NCAA tournament, and how the teams are selected and seeded. If you have something you want addressed or clarified, contact us. Also see: Info on this season's tournament (including current bracket); and CHN's NCAA Tournament History/Almanac.

Historical Timeline - Relevant CHN Articles

2022-23
  • PROCESS CHANGE - 3-on-3 OT now counts as a 0.66 / 0.33 split.
2021-22
  • PROCESS CHANGE - With universal 3-on-3 OT implemented, RPI now calculates OT wins as 0.55 of a win, 0.45 of a loss.
2020-21
2013-14
  • PROCESS CHANGE - Home-Road Weighting, Quality Win Bonus Implemented, Record vs. TUC criterion removed
2012-13
2011-12
  • PROCESS CHANGE - Record vs. Common Opponents is now calculated by averaging the winning percentage against each individual opponent, instead of an overall win pct.
2010-11
  • TUC is once again a team with RPI of .500 or better
2008-09
  • Regionalization - Exploring the potential impact of NCAA mandating strict travel requirements.
  • PROCESS CHANGE - Must have .500 or better record to be considered
2007-08
2006-07
  • Breaking Pairwise Ties - Analysis of bracket, and explanation of Pairwise ties, vis-a-vis UMass/Maine scenario
  • PROCESS CHANGE - Numerous tweaks: Change in RPI weighting; Only top 25 in RPI is a TUC; TUC criteria only counts after 10-games vs. TUC minimum; AQs not automatically TUCs ("Bentley rule")
2005-06
2004-05
2003-04
2002-03
2000-01
  • Second automatic bid, for conference's regular-season champion, is eliminated.
1999-2000
1996-97

The Basics

Number of teams: 16

Number of regionals: 4

How regional sites are determined: Regional sites are pre-determined by the men's ice hockey committee. Cities/arenas make bids, and the committee selects the locations in advance. (Future sites)

How the teams are selected: There are six automatic bids that go to the conference tournament champions of the six Division I conferences. The remaining 10 teams are selected based upon their ranking under the NCAA's objective system of Pairwise Comparisons.

The selection committee: The men's ice hockey committee (hereafter referred to as "The Committee") is made up of one representative from each of the six conferences. Terms are four years long and end in August of the year listed. The committee currently includes:

  • Hockey East - Jeff Schulman, chair (Vermont AD, 2024)
  • Big Ten - Josh Richelew (Michigan sports admin., 2027)
  • Atlantic - Rick Gotkin (Mercyhurst coach, 2026)
  • CCHA - Bob Daniels (Ferris State coach, 2026)
  • ECAC - Timothy Troville (Harvard assoc. AD, 2026)
  • NCHC - Scott Sandelin (Minnesota Duluth coach, 2026)

How conferences earn automatic bids: The NCAA mandates that a conference receives an automatic bid to the NCAA tournament if it has existed for at least two years and has at least six teams. There is no mandate on how a conference should award its automatic bid, though almost every conference in every sport awards it to the winner of that conference's postseason tournament. For a time, the hockey committee gave two automatic bids -- to the regular-season and conference tournament champions -- but did away with that practice in 2000-01, in part because the NCAA said it wasn't allowed, and in part because the added autobid earned by the MAAC (now Atlantic Hockey) reduced the available at-large slots.

Pairwise Comparison System

Origins: In the early 1990s, the NCAA Men's Ice Hockey Committee instituted a system designed to objectively compare teams to each other. The methodology evolved over time, becoming more and more precise until reaching its current form. The criteria used have also fluctuated somewhat over time; three are currently used.

The criteria: The current criteria for comparing one team to another consist of:

  1. Ratings Percentage Index (RPI), adjusted for a variety of factors (see below)
  2. Record vs. Common Opponents
  3. Head-to-Head record

The most notable change to the selection criteria over the years has been the reduction in the number of criteria and a heavier reliance on RPI (plus adjustments). There was once a "Record in Last 16 (or 20) Games" component. In addition, as of 2013-14, the Record vs. TUC criterion was removed and replaced with a sliding-scale "Quality Win Bonus."

: Each "Team Under Consideration" (TUC) is compared to every other "Team Under Consideration" (see below for TUC definition), using the three criteria. Within each "comparison," one point is awarded for winning each criterion. One point is also awarded for each head-to-head win. The team with the most "criteria points" at the end of this process, wins that comparison. If the comparison ends in a tie, it's broken by determining which team has the better RPI. This procedure is repeated for every possible TUC pair. The final number represented in the Pairwise Comparison Rating chart is the amount of "comparison wins" (PCWs) each team has.

Selecting the field from the chart: Using the chart, the teams are listed in order on the basis of most comparisons won. After taking out the teams that qualify automatically (by virtue of winning their conference tournament), the remaining top teams are then selected to fill out the 16-team field. If there is a tie in the total number of comparisons won, that tie is also broken by comparing the two teams' RPI. (Note: That method of breaking ties is not outlined anywhere, and has simply been ascertained through experience and observation. Likewise, the ordering of teams in the chart -- based on total comparisons won -- is also not outlined anywhere. Other methods have been used in the past that, while practically amounting to the same thing, are not exactly the same. See below, and this article.)
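
Extending that sketch, the chart itself could be built by running every TUC pairing through the comparison above and counting up the comparison wins, with ties in the totals again broken by RPI. The data structures remain hypothetical.

    from itertools import combinations

    def pairwise_chart(teams, h2h):
        """teams: list of dicts as above; h2h: {(winner name, loser name): wins}."""
        wins = {t["name"]: 0 for t in teams}
        for a, b in combinations(teams, 2):
            winner = compare(a, b,
                             h2h.get((a["name"], b["name"]), 0),
                             h2h.get((b["name"], a["name"]), 0))
            wins[winner["name"]] += 1
        # Order by comparison wins (PCWs), breaking ties on RPI
        rpi = {t["name"]: t["rpi"] for t in teams}
        return sorted(wins, key=lambda name: (wins[name], rpi[name]), reverse=True)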

History: To understand this further, it's important to know the history of the system. There came a time when the hockey community decided it wanted to take subjectivity out of the process, and the Pairwise Comparison system was born. Originally, the system was designed as a way to objectively compare teams that were close in RPI, i.e. "on the bubble" of getting into the tournament. Once that bubble was ascertained by the committee (a subjective process in a sense, but not practically), the committee checked the individual comparisons among those teams and figured out who was "winning the comparisons" against each other. It was only third-party sources -- after learning of this methodology's details -- that originally totaled up all the "comparison wins" and presented them in a chart in ranking form. This kind of chart was ultimately popularized by U.S. College Hockey Online, the first Internet-only college hockey media organization, which went online in 1996. Some time over the next seven years, life imitated art -- in other words, the committee's methods morphed, and it began to actually utilize the chart, as is, without doing any micro-observation of the individual comparisons. (See: article and article.)

Pairwise - Definitions

Team Under Consideration (TUC): As of 2013-14, the Record vs. TUC criterion has been removed, effectively making every team a TUC. Prior to that, a team under consideration was one with an RPI of .500 or higher. There were other definitions in the past, such as "top 25 RPI teams." A team was once made a TUC by winning its conference tournament and becoming an automatic qualifier, but that has not been the case since 2006.

Ratings Percentage Index (RPI): The RPI was created by the NCAA in the late 1970s, originally to help the basketball selection committee. It's a method of adjusting for the varying strengths of schedule of the different teams. The number is computed from the following three components:

  1. A team's own winning percentage (25%)
  2. The average of the team's opponents' winning percentages (21%)
  3. The average of the team's opponents' opponents' winning percentages (54%)

Originally, the RPI was weighted 25-50-25, as it was in men's basketball. At one time, hockey experimented with making a team's own winning percentage comprise 35% of the RPI, which worked OK when there were just four conferences that were generally comparable. But it wound up tilting the RPI too much in favor of strong teams from weak conferences -- particularly with the advent of "mid-major" conferences such as Atlantic Hockey (1999) and College Hockey America (2001) -- so the composition of the RPI was returned to 25-50-25. As of 2006, the RPI weights were changed to their current composition (25-21-54).
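
As a worked illustration, the base formula with the current weights is below. The winning percentages fed into it are assumed to already reflect the home/road and overtime adjustments described next, and the example figures are invented.

    def rpi(own_pct, opp_pct_avg, opp_opp_pct_avg):
        """Ratings Percentage Index with the current 25-21-54 weighting."""
        return 0.25 * own_pct + 0.21 * opp_pct_avg + 0.54 * opp_opp_pct_avg

    # A team at .700 whose opponents average .550, and whose opponents'
    # opponents average .520, comes out at roughly .5713.
    print(round(rpi(0.700, 0.550, 0.520), 4))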

Home/road weighting: For purposes of calculating a final RPI, games are weighted based upon whether they are home or road games. Road wins and home losses are weighted by a factor of 1.2, while home wins and road losses are weighted by 0.8. Unlike in basketball, all components of the RPI are weighted this way. This weighting system was introduced in 2013-14.
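
A minimal sketch of how that weighting might apply to a team's own winning percentage -- each game is scaled in both the wins column and the games-played column. The schedule below is invented.

    HOME_WIN, ROAD_WIN, HOME_LOSS, ROAD_LOSS = 0.8, 1.2, 1.2, 0.8

    def weighted_win_pct(home_wins, road_wins, home_losses, road_losses):
        weighted_wins = HOME_WIN * home_wins + ROAD_WIN * road_wins
        weighted_games = (weighted_wins
                          + HOME_LOSS * home_losses
                          + ROAD_LOSS * road_losses)
        return weighted_wins / weighted_games

    # A 10-5 team that did most of its winning at home gets slightly less
    # credit (.6571) than its raw .667 winning percentage would suggest.
    print(round(weighted_win_pct(home_wins=7, road_wins=3, home_losses=2, road_losses=3), 4))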

Overtime weighting: As of 2021-22, when college hockey went to 3-on-3 overtime across the board, the Committee decided to weight those games differently, initially counting an OT win as 0.55 of a win and 0.45 of a loss. As of 2022-23, an OT win counts as two-thirds (0.667) of a win and one-third (0.333) of a loss.
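
Taken on its own (ignoring the home/road factors above), the overtime credit under the current split might be tallied like this: the OT winner is credited with two-thirds of a win and the OT loser with the remaining one-third. The record below is invented for illustration.

    REG_WIN, OT_WIN, OT_LOSS, REG_LOSS = 1.0, 2 / 3, 1 / 3, 0.0

    def win_pct_with_ot(reg_wins, ot_wins, ot_losses, reg_losses):
        games = reg_wins + ot_wins + ot_losses + reg_losses
        wins = REG_WIN * reg_wins + OT_WIN * ot_wins + OT_LOSS * ot_losses
        return wins / games

    # A team that is 10-5 overall, but with three of those wins in 3-on-3
    # overtime, is credited with 9 "wins" rather than 10 -- a .600 percentage.
    print(round(win_pct_with_ot(reg_wins=7, ot_wins=3, ot_losses=0, reg_losses=5), 4))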

: A "Quality Win Bonus" was added for the 2013-14 season. For any win against the top 20 of the RPI, a team is awarded "bonus points" on a sliding scale from 1-20. In other words, a team is given a .050 RPI bonus for defeating the No. 1 team, sliding down to .0025 bonus for defeating the 20th team. The total bonus for the season is divided by the amount of games played (weighted for home-road), to give a final bonus figure. There was previously a more vague bonus system, which applied to wins against non-league teams in the Top 15 of RPI. That lasted from 2003-07 before being eliminated.

"Bad win" adjustment: A flaw of the RPI is that it can potentially decrease if a good team defeats a poor team. In order to compensate for this, if a team's victory would otherwise lower its RPI, that game is removed from the formula. This originally applied only to conference tournament games, but as of 2006-07 it was modified to include all games.
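
Conceptually, the adjustment amounts to something like the sketch below, where rpi_without() stands in for a full recalculation of the team's RPI with certain games excluded; the exact order of operations isn't published, so this is only illustrative.

    def prune_bad_wins(win_game_ids, rpi_without):
        """rpi_without(excluded_ids) -> the team's RPI with those games removed."""
        excluded = []
        for game in win_game_ids:
            # Drop a victory whenever keeping it would leave the team worse off
            if rpi_without(excluded + [game]) > rpi_without(excluded):
                excluded.append(game)
        return excluded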

Record vs. Common Opponents: As of 2012-13, the Record vs. Common Opponents criterion is not a straight win-loss percentage. Instead, a win-loss percentage is computed against each individual common opponent, and those percentages are then averaged together. This helps smooth out situations where, for instance, one team beat up on the same opponent four times while the other team in the comparison was only 1-0 against that opponent. Under the old method, 4-0 vs. 1-0 was a big difference; under the new method, both count as 1.000.
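
A sketch of that calculation; ties, where they still exist, are counted as half a win here, which is an assumption of this illustration.

    def common_opp_pct(results_by_opponent):
        """results_by_opponent: {opponent: (wins, losses, ties)}"""
        pcts = []
        for wins, losses, ties in results_by_opponent.values():
            games = wins + losses + ties
            pcts.append((wins + 0.5 * ties) / games)
        return sum(pcts) / len(pcts)

    # Team A swept Opponent X four times but split with Opponent Y;
    # Team B went 1-0 and 2-0 against the same two clubs.
    team_a = {"Opp X": (4, 0, 0), "Opp Y": (1, 1, 0)}
    team_b = {"Opp X": (1, 0, 0), "Opp Y": (2, 0, 0)}
    print(common_opp_pct(team_a))  # (1.000 + 0.500) / 2 = 0.75
    print(common_opp_pct(team_b))  # (1.000 + 1.000) / 2 = 1.0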

Effect on other criteria: The various RPI tweaks have no bearing on the Record vs. Common Opponents and Head-to-Head criteria. In other words, home/road weighting, OT weighting, etc., are not factored into Record vs. CO and H2H. In those cases, a win is a win, and a loss is a loss.

Seeding Process

Guiding principles: There have generally been two sacrosanct philosophies when it comes to the seeding process: 1. Teams that are hosting a regional must be placed in that region; 2. Avoid first-round (and, if possible, second-round) matchups between teams from the same conference. Other factors, such as maximizing gate revenue and limiting travel, have become de-emphasized since the tournament went from 12 to 16 teams in 2003.

How the seeds are determined: Since the advent of the objective system of Comparisons, there has always been a step-by-step methodology for determining the seeds. But since going to a 16-team tournament, the methodology has become highly straightforward. At one time, the emphasis was more upon individual comparisons; now, the Pairwise Comparison chart, as described above, is used to rank the teams in a straight 1-16. (Note: This methodology is not outlined in the Ice Hockey manual; it has simply become the practice of the committee over time -- and was determined by the media via observation.) The teams are then grouped into four "bands" of four, with teams 1-4 given No. 1 seeds (Band 1), 5-8 given No. 2 seeds (Band 2), 9-12 given No. 3 seeds (Band 3), and 13-16 given No. 4 seeds (Band 4). Ties in the number of team-to-team comparisons won were, at one time, broken by looking at the individual comparisons among the teams in question. Now such a tie is generally broken by simply looking at the RPI.
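
The banding step itself is simple enough to show directly; the team names below are placeholders.

    def bands(pwr_order):
        """pwr_order: the 16 selected teams in Pairwise order, best first."""
        return {band + 1: pwr_order[band * 4:(band + 1) * 4] for band in range(4)}

    teams = [f"Team {i}" for i in range(1, 17)]
    for band, group in bands(teams).items():
        print(f"Band {band} (No. {band} seeds): {group}")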

Placing the No. 1 seeds: The No. 1 seeds are ranked 1-2-3-4, and then placed, in that order, in the region as close to home as possible.

Placing the remaining teams: For the remaining teams, the current practice no longer favors geography, but instead places a strong premium upon maintaining a "serpentine" order, i.e. 1 vs. 16, 2 vs. 15, 3 vs. 14, etc., with the second round set up to preserve, if possible, a 1 vs. 8, 2 vs. 7, 3 vs. 6, 4 vs. 5 setup. The committee will mix and match teams within bands in order to preserve the two sacrosanct principles mentioned above, but will not move teams outside their band. Generally speaking, in order to avoid an intra-conference matchup, the committee prefers flip-flopping the No. 3 seeds within their band to different regionals, as opposed to the No. 2 seeds. Either way would work, but it has usually chosen the former.
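
A sketch of that serpentine starting point, before any within-band swaps for hosts or conference conflicts; the teams are represented simply by their 1-16 Pairwise rank.

    def serpentine_bracket(pwr_order):
        """Return four regionals, each as [(1 vs. 4 seed), (2 vs. 3 seed)]."""
        regionals = []
        for i in range(4):
            one_seed = pwr_order[i]        # overall Nos. 1-4
            two_seed = pwr_order[7 - i]    # overall Nos. 8-5, reversed
            three_seed = pwr_order[8 + i]  # overall Nos. 9-12
            four_seed = pwr_order[15 - i]  # overall Nos. 16-13, reversed
            regionals.append([(one_seed, four_seed), (two_seed, three_seed)])
        return regionals

    # First-round pairings: 1 vs. 16 and 8 vs. 9, 2 vs. 15 and 7 vs. 10, etc.
    for region in serpentine_bracket(list(range(1, 17))):
        print(region)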

Frozen Four matchups: The regional winners that will face each other in the national semifinals (the Frozen Four semis) are pre-determined prior to the start of the tournament, under the assumption that the four No. 1 seeds will advance. The region of the No. 1 overall seed is matched with the region of the No. 4 overall seed, and the same for Nos. 2 and 3. This holds even if the No. 1 seeds are eliminated in the regionals.

Issues

Seeding strictly by the numbers: The Pairwise -- and KRACH, for that matter -- is not precise enough for the committee to confine itself so strictly to a 1-16 ordering of the teams based upon it. It's a good method for selecting teams, because an objective system, even if flawed, at least eliminates the problems with subjectivity. But in seeding, there's no need to be so locked into the numbers when they are so close; the sample sizes are too small for that. Even KRACH, though a pure method of ranking teams based on past results, cannot tell you with certainty that team No. 8 is better than team No. 9. While ordering the teams 1-16 is a nice conceptual starting point, the committee should not consider itself hamstrung by it. Even the concept of placing teams in "bands" of four -- where teams can be shuffled within the band, but not moved to a different band -- seems unnecessary. There is no logical reason why it's OK to flip-flop teams 9 and 12, but not OK to flip-flop teams 8 and 9, if necessary. (See: article)

Whole season vs. recent performance: Which brings us to the classic argument ... Should the season be judged as a whole, or should more weight be given to the end of the season, or to conference tournaments, for example? The Committee, and the hockey community as a whole, decided to remove the "Last 16" criterion from the Pairwise many years ago, not so much because it disagreed with the idea philosophically, but because the "Record in Last 16" was so skewed by strength of schedule. But there are some who believe the season should simply be judged as a whole, period. On a more fundamental level, should we be relying upon Pairwise components that have such small sample sizes? For example, "Record vs. Common Opponents" is often based upon a game or two. Perhaps it's better to live with this for the sake of factoring in things that are worth factoring in -- such as Head-to-Head and Common Opponents. Others will argue: just use the RPI to compare teams (or, better yet, KRACH), and bring in the other criteria only when the RPI (or KRACH) is very close.

The "bad win" adjustment: While it's true that a team's RPI can go down for winning a game against a bad team, and while it's true that this illustrates a flaw in the RPI concept, the practice of removing that game from the formula in order to compensate is logically flawed. The RPI is meant to be taken as a whole -- a snapshot of the entire season once it's over. It's only because of the publicity given to it by media organizations such as this one that anyone even notices the daily fluctuations in the RPI. (Consider, too, that bad teams' RPIs go up when they lose to good teams.) That the RPI is a flawed method is apparent on many levels, but the way to compensate is to use a different method of adjusting for strength of schedule (like KRACH), not to bastardize a flawed one.

Overruling the criteria: It's one thing to flip seedings around for a compelling reason. It's another thing to flip seedings around by subjectively ignoring the Pairwise criteria. This is what the committee did in 2005, for the first time. Even though Colorado College won its comparison with Denver, and therefore was second on the Pairwise chart to Denver's third, the committee decided to switch them because Denver won the head-to-head matchup, 3-2, including the WCHA title game. This hardly makes sense when the committee's own rules state that head-to-head is just one of four criteria. It seems like a small change, but it opens a huge can of worms that should scare anyone who believes we should be avoiding smoke-filled rooms, and anyone who believes the whole season matters. Why have the comparison system if the committee can simply decide to overrule it? (See: article)

: We think so. (See: article and article)

Objectivity vs. common sense: No matter the flaws in this system, or any system, it has been generally agreed upon by the hockey community that it is better than allowing committee members free rein to make subjective decisions. Even introducing just the appearance of bias is not worth the grief. At least this way, no matter the flaws, the system is out in the open and teams know what they have to do to make the tournament. Some have argued that the committee should be allowed subjectivity in cases that scream for it. For example, say a team loses its star goalie for 15 games, doesn't do well, but then the goalie comes back, the team plays great, and it still gets a No. 4 seed. Should the committee be able to move it up? After all, the basketball committee takes those kinds of things into consideration. The problem is, once you start opening things up like that, you don't know where it ends. Even the decision of how far to take it is subjective in and of itself. Everyone's definition of "common sense" is different.

Host sites: This leads to a firestorm every year, particularly from those who don't know about the philosophy. Cries of favoritism come from the masses when such-and-such team again gets to play its NCAA games at, or near, its home arena.

Whether everyone agrees with this process for selecting and seeding the teams or not, the methodology is well-defined and transparent. There is no subjectivity in the selection process, other than the pre-determined subjectivity of which criteria are used. The selections are not based upon polls. They are not based upon the whim or opinion of any committee member. ... There are many common misconceptions -- for example, that teams which win their respective conference tournaments will (or should) get preferential treatment in seeding, or that teams that are playing well down the stretch will get preferential treatment. Those wins are simply factored into the process naturally, and are not given any special weight. Whether or not they should be is a matter of debate. But as it stands, they don't.
