The Ontario SchoolReach provincial championship whittles roughly 40 teams down to three national invites. To manage the largest field of any SchoolReach event, teams are split into pools that play round-robins among themselves, with (usually) the top two of the eight teams in each group moving on to the playoffs.
The composition of a pool can play a significant role in how far a team progresses in the tournament. There are good-faith efforts to balance the pools, but historically, with no other background information to go on, organizers had to rely on reputation (and geographical separation) to form them. This often led to strange results, such as two 2013 national invites coming from the same preliminary group. Ideally, and with more information, teams would be sorted so that they earn a final rank appropriate to their performance.
But I can’t solve that for now. What I can do is look back, thanks to the data I’ve collected from past tournaments. I occasionally get asked (or hear complaints) about how teams don’t get a fair shot during provincials, either through losing a playoff spot to a “weaker” team or having to deal with a group of death. I took a look at some numbers.
The analysis is based on teams that had at least 10 appearances at Ontario provincials since 1999. Results from 2003-05 are excluded from the averages because I don’t have pool composition for those tournaments (just points and ranks). 18 schools fit the bill, including most of the modern “usual suspects” for national qualifications.
First up is a team’s average rank against their average round-robin points per game. See figure 1, and excuse the crowded labels in places; some teams are close together. There is an unsurprising relationship – teams that finish well scored more points to get there. There are four teams that are at least a full standard deviation from the linear trend:
- UTS earns more points than necessary to get their rank. They are also limited by being unable to go below 1, even though they would fit closer to a theoretical rank of “0”.
- Lisgar gets the round-robin points to justify an average rank in the 1-3 range. However, they have a history of stumbling in the playoffs, especially the televised ones, which gives them a lower final rank than their seed would suggest.
- I will get back to Leaside in a later graph. In the early years, the team scored UTS-esque point tallies. In their later years, they had schedule benefits. Their mid-years are excluded (2003-05).
- Assumption earns fewer points than expected. As will be seen later, my past assumption (pun intended) that they get easy draws is false. Instead, they probably earn lots of close wins in the prelims, operating on razor-thin margins of victory to often land on the better side of the playoff bubble.
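The outlier test used above can be sketched in a few lines. This is a minimal illustration, assuming a simple least-squares fit of average rank on average points per game, with a team flagged when its residual is at least one standard deviation from the trend line; the team names and numbers below are invented placeholders, not the actual tournament data.

```python
from statistics import mean, stdev

def flag_outliers(teams):
    """teams: list of (name, avg_ppg, avg_rank) tuples.
    Fit a least-squares line rank ~ ppg, then flag any team whose
    residual is at least one standard deviation of the residuals."""
    xs = [t[1] for t in teams]
    ys = [t[2] for t in teams]
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    cutoff = stdev(residuals)
    return [t[0] for t, r in zip(teams, residuals) if abs(r) >= cutoff]

# Hypothetical numbers for illustration only -- not the real averages.
sample = [
    ("Team A", 420, 1.5),
    ("Team B", 380, 4.0),
    ("Team C", 350, 6.0),
    ("Team D", 330, 8.0),
    ("Team E", 300, 6.5),
]
print(flag_outliers(sample))  # -> ['Team D', 'Team E']
```

With these made-up points, Team D finishes worse than its scoring predicts and Team E finishes better, so both fall more than one standard deviation from the trend.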
Next is the comparison of rank and strength of schedule. The relationship is not as strong, but teams with better ranks usually have an easier schedule. This is expected for balanced pools – the top teams in the pools face teams weaker than them, while the bottom teams face opponents stronger than them. Unfortunately, we don’t have information-based balance, so we are starting to see some outliers:
- Leaside is on the low side of this chart. They were getting statistically significantly easier schedules than their rank would suggest. However, I believe I can explain this – Leaside made the provincial final in all (and only) the three excluded years. Leaside was extremely good in the 2003-05 period. They were also a very strong prelim team before that, but would slip in the playoffs. For their remaining active years (consecutively until 2009), they probably benefited from reputation placing them as expected pool winners, but they never made playoffs again after the 2005 run. If the 2003-05 results could be added, they would have a higher average rank with probably not too much change in SOS.
- Lisgar appears low, but is within a standard deviation. As mentioned before, their average rank is worse than expected because of historical poor playoff performance.
- The cluster of Oakville-Trafalgar, Waterloo, and Westdale have a right to gripe. They face statistically significantly tougher schedules than their results would justify – Westdale is almost two standard deviations from the trend. OTHS is particularly surprising: they had good results in the missing years (thanks to University Challenge celebrity Eric Monkman), but don’t appear to have been given a “boost” from that reputation; they seem to be put in pools under the assumption they don’t do well. Westdale’s tough luck was also looked at in an earlier post when I posited (incorrectly) that Hamilton teams in general suffered from bad schedules.
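For readers unfamiliar with the strength-of-schedule numbers quoted here: the post doesn’t spell out the formula, but a common construction (and the one I’d assume from values like Westdale’s 1.15) is the average of a team’s pool opponents’ points per game divided by the field-wide average, so SOS above 1 means a tougher-than-average draw. A sketch under that assumption, with invented numbers:

```python
from statistics import mean

def strength_of_schedule(team, pool, ppg):
    """Assumed SOS: mean opponent PPG divided by field-wide mean PPG.
    A value above 1 indicates a tougher-than-average pool draw.
    team: team name; pool: names in that team's pool;
    ppg: mapping of every team name in the field to points per game."""
    opponents = [p for p in pool if p != team]
    field_avg = mean(ppg.values())
    return mean(ppg[o] for o in opponents) / field_avg

# Invented numbers for illustration only.
ppg = {"A": 420, "B": 380, "C": 350, "D": 330, "E": 300, "F": 280}
pool = ["A", "B", "C", "D"]
print(round(strength_of_schedule("D", pool, ppg), 3))  # -> 1.117
```

Team D draws the three highest-scoring teams in this toy field, so its SOS comes out well above 1.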
The last graph, comparing SOS and PPG, could be summarized as how teams cope with the schedule they’ve been dealt. Strength of schedule loosely represents pool strength and any potential imbalance, so teams with PPGs above the trend are punching above their weight to overcome a bad draw. A few teams are outliers:
- Westdale still stands out (OTHS and Waterloo draw closer to the trend in this analysis). Their single greatest mountain to climb was the 2013 pool: they had a 5-2 record, their second-best PPG ever relative to the set, and a final rank of 11th, all while dealing with two nationals-bound teams and a third team that also got into the playoffs. Westdale also, incredibly, made the playoffs in 2009 with a 1.15 SOS. Westdale often got the worst schedules, but they made every effort to get something out of them.
- Assumption is the outlier on the low end. I don’t wish to suggest that they are a low-effort team, though. They get schedules that are roughly fair for what is expected of them, but the first analysis suggested that they just don’t pick up large margins of victory.
- UTS is also an outlier. They appear to have an easy slate of opponents, but they are still performing better than their schedule would expect. UTS has had a few years with tough pools (including the 2013 one mentioned earlier) while still consistently putting up points – they have qualified for nationals four times with a preliminary SOS greater than 1. Organizers (unintentionally) throw tough teams at UTS, and they still prevail.
So there are some data to ponder. I’m sure there are some less-frequent teams that also struggle or get an easy break, but the teams highlighted here should have enough sample size to stand out. Use your own results to see how your team compares to these provincial regulars.