## The R-Value

The points you gave me, nothing else can save me, SOS

Several of my posts have referenced the “R-value”. I think most people realize it is some sort of statistical measure of a team’s strength, but they are confused by either its derivation or interpretation. I am long overdue on clarifying this.

Primarily, the R-value is a mechanism to rank teams who all played the same questions, but did not necessarily play each other. The two most useful applications for this are the Ontario regional-to-provincial and the Ontario provincial prelim-to-playoff qualification systems. Both have a large number of teams that need to be condensed to a small fraction of top teams that would proceed to a higher level, and they all played (roughly) the same questions.

A mechanism exists for this purpose in the US. National Academic Quiz Tournaments’ college program has a couple hundred university teams compete in regional tournaments, all vying to qualify for 64 spots in their national championship (across two divisions). The regional tournaments are all played on the same set of questions. Originally, NAQT used an undisclosed “S-value” to statistically determine which teams, beyond regional winners, deserved a spot in the national championship. With regional hosts providing stats promptly, NAQT could quickly analyze the results and issue qualification invitations a few days after the regional tournaments. Prior to the 2010 season, Dwight Wynne proposed a modified, fully transparent formula so that all teams could verify their values were correct. NAQT adopted it, naming the mechanism the “D-value” in Dwight’s honour. In 2015, the Academic Competition Federation introduced their “A-value” for national qualification, which largely follows the D-value formula.

The R-value is a D-value modified for SchoolReach; the “R” stands for “Reach” or “Reach for the Top”. SchoolReach results typically lack the detailed answer-conversion information available in quizbowl, so the R-value depends only on total points and strength of schedule. I also added two modifications that I will get to later.

The R-value asks: “How does a team compare to a theoretical average team playing on the question set?” It is answered in the form of a percentage; if a team has an R-value of 100%, they were statistically average for the field. A step-by-step process to get there:

Note: my primitive embedding of LaTeX in WordPress is used below. It may not render in your browser.

• First, calculate all teams’ round-robin points-per-game (RRPPG). All games which occur in a round-robin system are included, even if a team plays another team multiple times. Playoffs, tiebreaking games, and exhibition matches are excluded. If certain games are known to be “extended” (for example, double-length), that is reflected in the “RR games” total.
• $RRPPG=\frac{RRPts}{RRG}$
• With the RRPPGs known, determine each team’s round-robin opponent average PPG (RROppPPG). This is the average of the PPGs of each opponent a team played, double- or triple-counting where appropriate if they faced each other multiple times. Note: this is not the same as a team’s average points against, a statistic that is not used in this analysis.
• $RROppPPG=\frac{RRPPG_{opp_1}+RRPPG_{opp_2}+\dots+RRPPG_{opp_n}}{RRG}$
• The question set’s average points is also needed. This covers all pools and all sites where the questions were used, for the purpose of the ranking. I determine this average from total RR points and total RR games, so larger sites with more games do have a larger influence on the set average.
• $SetPPG=\frac{\sum{RRPts}}{\sum{RRG}}$
• Strength of schedule (SOS) is a factor to determine how strong a team’s opponents were compared to facing an average set of opponents for the field. A value above 1 indicates a tougher than average schedule; below 1 is a lower than average schedule. In reasonably balanced pools, it is typical to have top teams below 1 and bottom teams above 1 – a top team doesn’t play itself, but its high point tally contributes to the total of one of its weaker opponents. Also, by comparing across multiple pools/sites, SOS can give an overview of how strong a pool/site was.
• $SOS=\frac{RROppPPG}{SetPPG}$
• Now for the biggest leap: the points a team earned must be modified to account for how strong its schedule was. Racking up 400 PPG is far more difficult against national contenders than against novices. Adjusted RRPPG multiplies points by the SOS factor – a tougher schedule gives a team a higher adjusted point total. This adjusted value theoretically represents a team’s PPG if they faced a slate of average teams. Note: this value is not shown in result tables.
• $RRPPG_{adj}=RRPPG \times SOS$
• This value is suitable on its own for ranking. However, I add an extra step of normalizing for the set, so I can compare across years. Earning 400 PPG is far more difficult when the set average is 200 than when it is 300. For example, the late ’90s/early ’00s had much higher set point totals than today (owing to different formats), and normalization is needed to compare historical teams of that era to today’s. The calculated result is the raw R-value, which I convert to a percentage for easier comprehension of how different from average a team is.
• $Rval_{raw}=\frac{RRPPG_{adj}}{SetPPG} \times 100\%$
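
For the curious, the whole raw-value pipeline can be sketched in a few lines of Python. This is a toy illustration only (the function name, input format, and scores below are made up for the example), not my actual process:

```python
from collections import defaultdict

def raw_r_values(games):
    """games: list of (team_a, team_b, points_a, points_b) round-robin
    results. Returns each team's raw R-value as a percentage."""
    pts = defaultdict(int)    # RRPts: total round-robin points per team
    opps = defaultdict(list)  # opponents faced, repeats included
    for a, b, pa, pb in games:
        pts[a] += pa
        pts[b] += pb
        opps[a].append(b)
        opps[b].append(a)

    # RRPPG = RRPts / RRG (RRG = number of round-robin games played)
    ppg = {t: pts[t] / len(opps[t]) for t in pts}

    # SetPPG = total RR points / total RR team-games, so bigger sites
    # carry more weight in the set average
    set_ppg = sum(pts.values()) / sum(len(o) for o in opps.values())

    rvals = {}
    for t in pts:
        # RROppPPG: average opponent PPG, double-counting rematches
        opp_ppg = sum(ppg[o] for o in opps[t]) / len(opps[t])
        sos = opp_ppg / set_ppg   # strength of schedule
        adj = ppg[t] * sos        # schedule-adjusted PPG
        rvals[t] = adj / set_ppg * 100
    return rvals
```

Feeding in a site’s full round-robin results produces the same raw values as working through the formulas by hand; a team that scores exactly the set average against an average schedule lands at 100%.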

Raw R-value is the number I use for most comparison purposes. In earlier posts, I tried to show some examples of how this statistic is useful for predicting future performance (especially playoffs) and analyzing outlier results. If the R-value is to be used for any sort of qualification system, however, it needs to account for the universally accepted idea that it is most important to win games. Almost all tournaments base final ranks primarily on winning (either in playoffs or just prelim results). A team with a low raw R-value that finishes ahead of a team with a high R-value deserves qualification just as much as (if not more than) the teams below it in the standings. The actual R-value is then calculated based on NAQT’s system (quoting from their D-value page):

After the raw values are computed, they are listed in order for each [site] and a correction is applied to ensure that invitations do not break the order-of-finish at [a site]. Starting at the top of each [site], each team is checked to see if it finished above one or more teams with higher D-values. If it did, then that team and every team between it and the lowest team with a higher D-value are given the mean D-value of that group and ranked in order by their finish.

Let’s say a site winner had a raw R-value of 120%, while the runner-up they upset in the final finished with a raw R-value of 140%. Under this adjustment, both teams end up with the mean, 130%, as their true R-value. The winner receives a boost for finishing above one or more stronger teams, while the lower teams receive a penalty for not reaching their “potential”. The true R-values are then compared across pools/sites for qualification purposes; if tied teams straddle the cutoff for qualification, invites are issued in order of rank at the tournament.
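
In code form, NAQT’s correction amounts to the following (a sketch with a made-up function name; the input is raw values listed in order of finish, and the output keeps that order):

```python
def naqt_correction(raw):
    """raw: raw values in order of finish (index 0 = 1st place).
    Returns the corrected values; teams sharing a group mean are
    still ranked among themselves by order of finish."""
    vals = list(raw)
    i = 0
    while i < len(vals):
        # lowest-finishing team below i with a higher raw value
        j = max((k for k in range(i + 1, len(vals)) if raw[k] > raw[i]),
                default=None)
        if j is None:       # order of finish not violated here
            i += 1
            continue
        # average the whole group from i down to j, then skip past it
        mean = sum(raw[i:j + 1]) / (j - i + 1)
        for k in range(i, j + 1):
            vals[k] = mean
        i = j + 1
    return vals
```

For instance, raw values of [120, 140] in finish order both become 130, matching the winner/runner-up example above.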

I do deviate slightly from this formula, though. It is possible, but rare, for the top-ranked team in an averaged group to end up with a lower R-value for finishing higher than a stronger team (e.g. 1st 120%, 2nd 80%, 3rd 130%; all teams get 110%). I don’t believe this should ever happen. If it does, I modify the averaging with this algorithm:

• First, follow the NAQT algorithm
• If the first team in the averaged group ends up with an R-value lower than their raw R-value, drop the last team from the group (the lowest-finishing team with a higher raw R-value than the first team)
• Move the group’s end up one rank and attempt the R-value average again. Repeat until the first team improves upon their raw R-value.
• Continue the NAQT algorithm with the next team after the new set of averaged teams
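
A Python sketch of the full modified averaging (again with a made-up function name; the input is raw R-values in order of finish, and the behaviour when no group at all can improve the first team – they simply keep their raw value – is an assumption on my part):

```python
def modified_correction(raw):
    """raw: raw R-values in order of finish (index 0 = 1st place)."""
    vals = list(raw)
    i = 0
    while i < len(vals):
        # NAQT step: the group initially runs down to the
        # lowest-finishing team with a higher raw value
        higher = [k for k in range(i + 1, len(vals)) if raw[k] > raw[i]]
        if not higher:
            i += 1
            continue
        j = higher[-1]
        # Modification: shrink the group one rank at a time until the
        # first team's averaged value is no worse than its raw value
        while j > i and sum(raw[i:j + 1]) / (j - i + 1) < raw[i]:
            j -= 1
        if j == i:   # no workable group; keep the raw value (assumed)
            i += 1
            continue
        mean = sum(raw[i:j + 1]) / (j - i + 1)
        for k in range(i, j + 1):
            vals[k] = mean
        i = j + 1
    return vals
```

Under my reading, the 1st 120%, 2nd 80%, 3rd 130% example works out so that the first team keeps 120% while the latter two average to 105%; no team is punished for winning.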

Look at the 2016 Ontario Provincials results for an example. Woburn had a very high raw R-value (131.8%), but finished very low (22nd). Under the basic D-value algorithm, 4th-placed London Central would have joined the big set of teams all the way down to Woburn, and ended up with a decrease in their R-value, thanks to the many intermediary teams with low raw R-values. Instead, Woburn was ignored, and the next-lowest team with a higher raw R-value (Hillfield at 132.9%) was tested. Again, this would drop Central’s R-value because of the low value for intermediary Marc Garneau. It is only an average with 5th-placed Waterloo that allows Central to improve on their raw result. From this, the algorithm goes to the next “unaveraged” team, Marc Garneau, who starts the group all the way down to Woburn because they earn a slight R-value boost. 6th through 22nd end up with a final R-value of 110.6% each.

And that’s how you get the R-value. The math isn’t that complicated, but it does require detailed number-crunching, especially for the opponent PPG step. Until more thorough result reporting occurs in SchoolReach, it is probably the best analysis that can be done with the information available. Thankfully, it is a fairly reliable metric for team performance, and I hope to show some examples in future posts.

Keeping PACE with current events.

The Partnership for Academic Competition Excellence (PACE) held their National Scholastic Championship (NSC) over the weekend. Unlike other major players in quiz tournaments, PACE is a registered non-profit that has a membership of coaches and former players. The NSC is their one tournament of the year (and a major fundraiser), while the rest of the year is outreach and assistance in the US.

Canadian teams have attended ~~3~~ 5 NSCs. Lisgar attended in 2011, finishing 28th of 60 teams. White Oaks attended in 2016, finishing 81st of 96 teams. This past weekend, Lisgar sent two teams, including one that was fresh off their Reach for the Top championship. The “B” team, consisting of one of the champions and three additional players, did well in their second-phase bracket and ultimately finished 78th of 96 teams. Lisgar A had an excellent opening morning (losing only to the eventual second place team), but struggled in their second phase and placed 22nd.

Edited to add: I had poor memory and missed Lisgar in 2013 and Waterloo CI in 2015. I went back to check that I didn’t miss, say, one of the Alberta teams, but I think all the appearances are covered.

Lisgar A’s result is very good in the context of Canadian teams. The American circuit is far more robust and competitive than the scene up north – Lisgar probably played fewer quizbowl games pre-nationals than some teams played tournaments! Colonel By’s 21st-place finish at the 2015 NAQT HSNCT remains the high-water mark, unless you count a 1988 exhibition match in which a team from Earl Haig defeated the NAC-winning team. Nevertheless, Lisgar did well in a tough schedule that saw them face 3 of the eventual top 4 teams over the course of the opening day.

The tournament itself was won by Detroit Catholic Central A. Combined Saturday/Sunday results are found here. Reach could also take a hint from how quickly (i.e. live) the YouTube stream of the 2nd-place match, all-star game, and closing awards went up (note: a true final did not occur because DCC A finished two clear wins ahead of the rest of the field).

Congratulations to Lisgar, along with all those teams from the US & Singapore!

A post-script: I, amusingly, have already set aside the tag “pace” for the team from Richmond Hill. The day PACE goes to PACE…

## 2017 Nationals Results

Lights! Camera! Inaction!

Last weekend, UTS and the University of Toronto hosted the 49th* national championship of Reach for the Top. 16 teams from seven provinces had a full day of round-robin competition before vying for the title in the playoffs.

*Reach claimed it was the 51st, but only 49 championship seasons have occurred due to the stoppages after the CBC era.

The full results are uploaded here. Lisgar CI claimed their third national title in a close 460-410 final over the University of Toronto Schools; it was an anticipated clash of titans and a rematch of the provincial title which UTS won. My rundown of the teams, in order of rank (note: for this tournament, I broke standings ties by round-robin seed):

• Auburn Drive (16th). Nova Scotia’s clubs were greatly hindered by job action this year; only five teams went to provincials. Let’s hope this year was only a blip and that teams can have more support next year. As for this particular team, I never saw them until their final consolation game. They kept close with SJHS in the first half and won an excruciatingly long shootout, but saw their tournament end there.
• Rundle College (15th). Schedule quirkiness meant I saw Rundle for 9 of the 15 preliminary games and became their unintended fan club. As a surprise invite from their fourth-place finish in Alberta, expectations were not high. Their top player will return next year, so hopefully they can build from this experience for another shot at Nationals.
• St. Paul’s (14th). I think there was a different line-up between provincials and nationals, because the Manitoba champs fell short of the other teams from their province. Hopefully this means the school has a large pool of players to choose from, and can re-assemble for another provincial title run next year.
• Collingwood (13th). This rank will simply go down as “oops”. They temporarily disappeared after one of their consolation losses and defaulted a win to lower-seeded Saint John. They would have been in contention for the consolation bracket title otherwise.
• Saint John (12th). They got a lucky break from Collingwood, but ended up fourth of the final four consolation teams. They will be in tough to qualify for nationals next year, because the competition for second in New Brunswick is very tight. Interestingly, I only ever saw them win: in the round-robin over Rundle and the playoffs over Auburn Drive.
• Marianopolis (11th). This team was a bit weaker than some CEGEP teams of the past, but they pulled one of the few upsets in the playoffs with a second-round consolation win over Renert.
• St. John’s-Ravenscourt (10th). Just getting to nationals was impressive: Horton’s drop-out meant that SJR organized a team trip from Winnipeg less than a week before the tournament. They didn’t show too much rust and even managed to beat their provincial champions in the round-robin!
• Renert (9th). Renert & Co. did improve from last year. Their highlight was either almost defeating KV or getting the most games of any team by taking the long route to get to the consolation bracket title. A tournament MVP came from this team (I don’t mention names due to a blog-wide policy of keeping players anonymous).
• Old Scona (8th). Unlike their provincial compatriots, Old Scona did pull off a win over KV, but fell back to eighth seed by the end of Saturday. Eighth seed unfortunately meant an early match with UTS, where even a 480-240 loss to them would be considered a good result.
• Sir Winston Churchill (7th). They seemed, on paper at least, to be the strongest team from BC, even though Collingwood beat them in both the provincial final and the round-robin match (they won the play-in match over Collingwood, though). They led UTS going into the final snapout of their match, but couldn’t pull off what would have been the biggest upset of the tournament. Their mix of ups and downs averaged them out to the middle of the pack.
• Templeton (6th). For a team’s first appearance at Nationals (either in a long while or ever – not sure), they did very well. They almost beat Martingrove in the round robin and finished as the highest-placing BC team. Considering that they were nowhere even on the provincial scene before this, they would certainly be the “most improved” team. A tournament MVP came from this team.
• Kelvin (5th). R-values suggest the 5th-8th place teams have razor-thin differences in strength between them, but Kelvin got the wins. Kelvin was on my radar as the Manitoba runners-up, but I didn’t expect them to get as high as fifth. Well done, though I didn’t get to see them play. A tournament MVP came from this team.
• Kennebecasis Valley (4th). I think that even before the tournament began, KV was destined for fourth: they weren’t quite up to the level of the Ontario teams but were definitely better than anyone else. They got within 30 points of Lisgar in the round-robin, but a loss to Old Scona almost cost them a playoff bye. The semifinal match against UTS was very impressive, though. They capitalized on UTS’ mistakes and frustrations to keep within 20 points late in the match, and nearly gave the favourites their first loss of the tournament. With good players returning, I would not be surprised to see a late-round rematch next year – perhaps even in the final for once!
• Martingrove (3rd). Like KV, Martingrove seemed set for their final position as a step behind Lisgar and UTS. A mere 250-230 loss to UTS in the round-robin gave the Ontario champs a little scare, and they easily handled Templeton in the quarterfinal. The semifinal wasn’t pretty though: a poor run during the team scrambles sapped any momentum they had and allowed Lisgar’s MVP alone to earn more points than them. Nevertheless, they were part of the good camaraderie among the Ontario teams and hopefully they’ll show up at more tournaments next year in their quest to keep their Nationals attendance streak alive.
• UTS (2nd). UTS was my pick as the strongest team on paper, despite what the R-value said. They swept the round-robin while rarely fielding their true A-team; it cost them a few extra points, but who needs points for seeding when you’re 15-0? Unfortunately, the team was mired in production difficulties in both of their final-day matches. They were not on top of their game and only narrowly beat KV before taking the loss in the final. I think the delays and frustrations ate away at them, but they also had to deal with Lisgar’s MVP getting a second wind on the last day. It was a very good final match, and they had a great season overall (including a 6-1 record across formats over Lisgar A). They should be just as strong next year, so best of luck for another title run!
• Lisgar (1st). Best ever regional result. Best ever provincial (round-robin) result. 2nd best ever national (round-robin) result. Analytically, this title is not a surprise. Realistically, it was anything but. The team played sick (barely getting to the stage) and they entered the playoffs knowing they had taken nothing but losses to UTS all year. There was not a lot of optimism among them for the final morning. However, that semifinal was a much-needed boost in confidence and set them up for a stellar (minus the 25-minute delay) final. Who needs shootout wins anyway? While Lisgar’s MVP (also selected as a tournament MVP) returns, no one else does, so this was expected to be Lisgar’s last chance for a while. They got the title, and now they can go back to their regularly-scheduled programming of quizbowl.

A great tournament by all the teams. The matches I saw both in-room and on-stage were great to watch, even if I ended up rooting for the Rundle underdog half the time.

The tournament organization was pretty good. Logistics has never been a problem for Reach, and for their price tag, you expect the perks and efficiencies. Games were on time, staff were prepared, food was ready, and results were prompt. Sunday’s stage games were also well done, even if there was an audience of just me and the coaches at times.

Unfortunately, Monday was a mess. The small change from SchoolReach to Reach for the Top – filming the event – made a world of difference, and for the worse. Floodlights blew the breakers in the first game and wrecked a buzzer. The need to announce players on the replacement buzzers forced a “pay no attention to the man behind the curtain” scene where a person hid behind a banner to identify the light that went off. Pre-games became fidgety with ridiculous insert shots of buzzing, applause, and phony reactions. Games stopped twice when camera SD cards filled. Most notoriously, the delay of “reviewing the tapes” (rather than using a method tried and tested south of the border: leniency on vowel pronunciation) dragged the final into a 90-minute affair. UTS’ auditorium is not a television studio. Reach needs to either get back to a real TV production or start embracing less intrusive broadcast options, like Twitch or YouTube streaming. Disrupting players for the sake of pretty video (that will never come to light) shows a complete disregard for what should be the most important part of these tournaments – the academic performance of the players. UTS was definitely compromised by the production, and while I don’t dispute the title, I think we were robbed of an even better final. I know some changes will be underway at Reach, and I hope the Monday routine is part of that.

The final gala was good, though. Much more concise and meaningful than some of the provincial galas.

But I shouldn’t let my rambling detract from recognizing the most important things: the players who all showed excellent skill, teamwork, and friendliness; the coaches who coordinate not only the management of a strong team but the logistics of getting them to events; and the volunteer staff who keep the games going in a timely manner.

To those graduating, best wishes for your post-secondary pursuits (hey, try quizbowl). For those returning, good luck in your title run!

Finally, a blog note: this is, obviously, the end of the Reach season. For the off-season, my priorities are an explanation of the R-value (and follow-ups with old provincial analysis), a look at some historical games, and an updated Reach champion ranking, which I last did in 2015. Stay tuned!