
The NCAA Tournament officially starts tonight, 3/17/2026, with the First Four (play-in) games. For the last few seasons we have been grading how the committee did in selecting and seeding the field, and the exercise below is similar. To grade the committee, we aren't comparing them to how we think they should have seeded the field; rather, we are seeing how consistently they applied their own preferences throughout the bracket. Consistency is measured by correlating the final S-curve to a weighted-model ranking and playing around with the inputs until the closest correlation (highest R²) is found.
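For readers who want to see the mechanics, here is a minimal sketch of that fitting procedure. The team-sheet ranks below are made-up stand-ins rather than the real 2026 data, and a blunt random search stands in for the manual "play around with the inputs" step, so the printed weights and R² are purely illustrative.

```python
import numpy as np

# Hypothetical inputs: a committee S-curve rank (1-68) for each of the 68 teams,
# plus that team's rank in each of the four team-sheet components.
rng = np.random.default_rng(0)
s_curve = np.arange(1, 69)
component_ranks = np.column_stack(
    [rng.permutation(np.arange(1, 69)) for _ in range(4)]
)

def r_squared(weights, ranks, target):
    """R^2 between the weighted-model ranking and a given S-curve."""
    blended = ranks @ weights                     # weighted composite score per team
    model_rank = blended.argsort().argsort() + 1  # turn scores back into a 1-68 ranking
    return np.corrcoef(model_rank, target)[0, 1] ** 2

# Crude stand-in for the manual tuning: try many random weight combinations
# and keep the one with the highest R^2 against the committee's S-curve.
best_weights, best_r2 = None, -1.0
for _ in range(20_000):
    w = rng.random(4)
    w /= w.sum()                                  # weights sum to 1 (i.e., 100%)
    r2 = r_squared(w, component_ranks, s_curve)
    if r2 > best_r2:
        best_weights, best_r2 = w, r2

print(best_weights, best_r2)  # implied component weights and the best-fit R^2
```

With the real team sheets in place of the random ranks, the best-fit weights are the percentages reported for each component below.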
The committee uses team sheets to help it compare the various teams it reviews. The team sheet contains quite a bit of information, which we’ve divided into four different components. These components are essentially:
- Schedule strength
- Raw winning percentage
- Success by Quadrants
- Metrics
Within these components, sub-components are present. Each component below shows the overall weight the committee inherently applied to the 2026 bracket, and each is explored in more detail based upon what the committee members are looking at when they sit down to select and seed the field of 68 each March.
Schedule Strength (1.0%)
The committee looks at overall SOS but also sees each team’s conference and non-conference SOS ranks. The overall SOS is most important, but the committee has also historically looked at non-con SOS as it is within a team’s control to play a tournament-worthy non-con SOS. For 2026, the schedule component was relatively small (1%), but it did have some predictive power.
Raw Winning Percentage (2.9%)
This component is unique in two ways. First, it doesn't have sub-components; it's just a team's overall record (in D-1 games). Second, it arguably should be 0%. Yes, the 1-seeds will have higher raw winning percentages than teams lower down on the S-curve, but this should be accounted for by other components.
Miami (OH)'s 28-1 record and inclusion in the field (S-curve ranking of 44) help draw this number up. At 2.9% the number is low enough that we aren't too concerned that it factored into the committee's decision making, but any higher and we would start getting worried.
Success by Quadrants (30.9%)
The team sheet, as well as the source website used to run our calculations, shows each prospective team's results by Quadrant, from Quad 1 to Quad 4. Pundits will often compare "blind resumes," setting one team's Quad 1 results against another's to argue for the inclusion of a certain team over another. While many data analysts have argued against the usage of quadrants, the committee very much sees them and does weight them.
The sub-components we used this year were three-fold. First was Q1 results (both overall wins and winning percentage). Second was Q1+Q2 results (again, both overall wins and winning percentage). Third was the effect of bad losses (Q3 and Q4). This approximates how the committee likely looks at these numbers: they are impressed with both quantity and quality in terms of the top wins, but for the easier games they mostly just want to see teams avoiding bad losses.
Interestingly for 2026, the committee was far more impressed with Q1+Q2 results (26.4%) than just Q1 results alone (4.5%). Bringing in Q2 games at a higher weight helped teams like UConn (7-3 in Q1 but 18-4 in Q1+Q2) and Iowa State (8-7 in Q1 but 18-7 in Q1+Q2).
Lastly, the bad loss sub-component was incorporated as a multiplier on the first two sub-components. We toggled this multiplier on and off to see if it had an effect; it did, so we kept it as a sub-component.
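As a rough illustration of how that fits together, the sketch below scores the quadrant component for a single team. The 4.5% and 26.4% defaults are the 2026 sub-component weights discussed above; the way wins and winning percentage are blended, and the size of the bad-loss penalty, are purely illustrative choices rather than the exact formula.

```python
def quadrant_score(q1_w, q1_l, q2_w, q2_l, q3_q4_losses,
                   w_q1=0.045, w_q12=0.264, bad_loss_penalty=0.03,
                   use_bad_loss_multiplier=True):
    """Blend Q1 and Q1+Q2 results, then discount for bad (Q3/Q4) losses.

    w_q1 and w_q12 default to the 2026 sub-component weights; the penalty
    size and the wins-plus-percentage blend are illustrative only.
    """
    q1_pct = q1_w / max(q1_w + q1_l, 1)
    q12_w, q12_l = q1_w + q2_w, q1_l + q2_l
    q12_pct = q12_w / max(q12_w + q12_l, 1)

    # Quantity and quality of top wins: raw win counts plus winning percentage.
    q1_piece = w_q1 * (q1_w + q1_pct)
    q12_piece = w_q12 * (q12_w + q12_pct)

    score = q1_piece + q12_piece
    if use_bad_loss_multiplier:
        # Bad losses shrink the whole component rather than adding a sub-weight.
        score *= (1 - bad_loss_penalty) ** q3_q4_losses
    return score

# Example: a team that is 7-3 in Q1 and 18-4 in Q1+Q2 with no bad losses.
print(quadrant_score(q1_w=7, q1_l=3, q2_w=11, q2_l=1, q3_q4_losses=0))
```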
Computer Metrics (65.2%)
The computer rankings made up nearly 2/3 of the input the committee considered. This is up from last season (55.7%) and the season before (58.5%).
We divided the metrics into the two buckets the committee essentially uses: efficiency metrics and resume metrics. The efficiency metrics are NET, KenPom, Torvik, and BPI; the resume metrics are WAB, SOR, and KPI. The weights lean toward the NCAA's internal metrics (NET and WAB), but every number listed on the team sheet is included, since those are what the committee members see.
The split between efficiency metrics and resume metrics was 32.1% and 33.1% respectively. The committee looked at both elements: how much a team won (the resume side) and how convincingly it won (the efficiency side). Arguably the resume metrics should be weighted far higher than the efficiency metrics, but the two are quite correlated with one another anyway, so this isn't so bad.
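A small sketch of how the seven team-sheet numbers might roll up into the two buckets is below. The within-bucket weights here are our own illustrative guesses (tilted toward NET and WAB as described above), and the example team's ranks are hypothetical; only the 32.1%/33.1% bucket weights come from the fit.

```python
# Illustrative within-bucket weights only: heavier on the NCAA's own metrics
# (NET, WAB), but every team-sheet metric still contributes something.
EFFICIENCY_WEIGHTS = {"NET": 0.40, "KenPom": 0.20, "Torvik": 0.20, "BPI": 0.20}
RESUME_WEIGHTS = {"WAB": 0.40, "SOR": 0.30, "KPI": 0.30}

def bucket_rank(metric_ranks, weights):
    """Weighted average of a team's ranks within one bucket (lower is better)."""
    return sum(weights[m] * metric_ranks[m] for m in weights)

# Example team sheet (ranks are hypothetical).
team = {"NET": 8, "KenPom": 6, "Torvik": 9, "BPI": 11, "WAB": 18, "SOR": 15, "KPI": 20}
efficiency = bucket_rank(team, EFFICIENCY_WEIGHTS)
resume = bucket_rank(team, RESUME_WEIGHTS)

# The two buckets then enter the overall model at roughly 32.1% and 33.1%.
metric_component = 0.321 * efficiency + 0.331 * resume
print(efficiency, resume, metric_component)
```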
Correlation to S-Curve
The R² was 0.9694 this season, better than 2025 (0.9489) and 2024 (0.8793). Compared to recent iterations, the 2026 committee did a good job. Let's look at where it was off the most.
Teams Over-seeded by the Committee
Given the preferences the committee showed, these three teams ended up with better spots on the S-curve than they likely should have had:
Illinois (10 on the S-curve, 16 on the weighted-model)
The Illini have very strong efficiency metrics (8 in NET, 6 in KenPom), but their Q1 results (7-8) and resume metrics (18 in WAB) leave much to be desired. The difference was worth a seed line, moving them from a 4 to a 3. How to handle strong teams with poorer resumes is always going to be a question the committee has to deal with, but here it seems like they over-borrowed from the efficiency-metric side compared to how they did so elsewhere.
Kentucky (25 on S-curve, 29 on weighted-model)
Mark Pope's team got the top 7-seed instead of the top 8-seed, helping the Wildcats avoid a potential R32 matchup against an elite 1-seed (not that their second-round game against Iowa State would be easy). Aside from its strong SOS component (6th overall), Kentucky was hovering around the last 7-seed/first 8-seed spot. They may have gotten a name-brand boost as well. Not egregious, but nevertheless helpful for UK.
Texas (42 on S-curve, 48 on weighted-model)
This one may be the most controversial. Not only was it a six-spot difference, it also vaulted the Longhorns into the bracket ahead of San Diego State (45 on the weighted model) and Oklahoma (46). The Longhorns are 17-14 (plus one non-D1 win), a record that includes a Q3 loss, against a decent SOS (23rd overall). Their combined resume metrics placed them 48th, below even Auburn (44th). While nearly every bracketologist got this pick correct, it is odd that Texas got into the field so easily.
Teams Under-seeded by the Committee
Just as three teams were over-seeded, we found three under-seeded teams for 2026:
Vanderbilt (17 on S-curve, 10 on the weighted-model)
The Commodores knocked off Florida in the SEC Tournament and finished as runners-up to Arkansas, but were only given the final 5-seed spot. Others have noticed how indefensible it is for Vandy to be a 5-seed. Looking at the weighted model, the only components where Vandy ranked worse than a top-12 team were the lightly weighted overall winning percentage (29) and SOS (20), and even there they weren't an egregious outlier. KU might have gotten a break to get a 4-seed, because objectively Vanderbilt should have been in front of the Jayhawks.
Utah State (33 on S-curve, 25 on the weighted-model)
Utah State was not happy with its seeding when it was announced on Sunday. It is common to hear complaints this time of year, so at first we weren't especially sympathetic to the Aggies, but we're now in complete agreement with them. Not only should Utah State have been a 7-seed instead of a 9, they should have been comfortably on the 7-line. As with Kentucky's promotion above, the difference isn't negligible: the second-round game is more winnable against a 2-seed than a 1-seed (Arizona is the 1-seed in Utah State's potential path). Historically speaking, since 1985 the winner of the 7/10 game has gone on to win its R32 game 54 times, a Sweet 16 success rate of 34%. The winner of the 8/9 game, who almost always plays the 1-seed, reaches the Sweet 16 only 15% of the time.
In terms of the components, Utah State had a low SOS ranking (89), and its Q3 loss didn't help. Still, an overall Q1+Q2 record of 13-5 to go along with decent metrics (30 in efficiency, 29 in resume) should have placed it higher on the S-curve than it was.
San Diego State (49 on S-curve, 45 on the weighted-model)
It was only four spots, but the Aztecs have a case for inclusion in March Madness based on how the committee seeded the field elsewhere. They didn't have any exceptional metric or component, but cumulatively they were within the top 45 teams. SDSU was 9-10 in Q1+Q2 games, which hurt, but the teams in the First Four (Texas was 7-13, SMU was 9-13, NC State was 11-12) were likewise not very impressive. While it wouldn't have gotten them in, having SDSU ahead of Oklahoma (19-15) and Auburn (17-16), the first two teams out, would have better recognized the Aztecs' 21-11 record against the much worse records of those power-conference teams.
Bracket Matrix
Just as with last season, the Bracket Matrix was more consistent than the committee itself, even though the weighted model's preferences were tuned to fit the committee rather than the Matrix. We got an R² of 0.9780 when we compared the Bracket Matrix's S-curve to the weighted model that best fit the actual S-curve. The Bracket Matrix consensus had Vandy as a 4-seed (with more 3-seeds than 5-seeds among the various mock brackets) and an overall ranking of #14. It also had Utah State (#30 overall) a bit higher than the official committee did.
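Mechanically, this is the same check as before. Continuing the first sketch above (so rng, component_ranks, best_weights, and r_squared are already defined), and using a hypothetical stand-in for the Bracket Matrix consensus ranking:

```python
# Hypothetical stand-in for the Bracket Matrix consensus S-curve (1-68).
bracket_matrix_s_curve = rng.permutation(np.arange(1, 69))

# Score the Matrix's ordering against the weighted model that was fit to the
# committee's S-curve; with the real data this came out to 0.9780 vs. 0.9694.
bm_r2 = r_squared(best_weights, component_ranks, bracket_matrix_s_curve)
print(bm_r2)
```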
The consensus bracket also had all 68 teams correct (or, more precisely, all 37 at-larges). Auburn was in 16% of the mock brackets, the highest share for any team that missed the field. Texas had the lowest inclusion rate of any team in the field according to the Bracket Matrix, but it was still in 80% of mocks on Selection Sunday.
In total, there was very little controversy compared to most years. The committee was more consistent than it had been in prior years, though its preferences did shift some. The biggest misses were in seeding, and even there they only really affected a few teams.
