## Reach champion rankings, 2017 part 2

The early modern era

Today I begin ranking the Reach championship-winning clubs for the modern SchoolReach era. The rankings from the CBC era are found here.

The SchoolReach subscription program began right after the final CBC episodes of 1985. Schools enrolled to get sets of questions that were used either for intramural/interschool tournaments or local TV productions. Ontario teams were very active in these “lost” years. By 1988, a graduated regional/provincial/national system was re-established, with the help of coaches like Eric Stewart (BC), Chris Zarski (AB), Patricia Beecham-Cooper (ON), and Hans Budgey (NS). A few tournaments had television coverage, but the national championships of the early 1990s were held off-air.

The teams for today’s set of rankings come from this part of the modern era. The clubs ranked 13-18 all had their one national title in the ’80s or ’90s and none returned for another nationals appearance (as far as I can tell). Most are inactive now.

Part 1 of the modern era rankings (sorry, I can’t make a numbered list start at 13):

• 13. Bell (ON)
• 14. Frontenac (ON) [up 2]
• 15. St. Joseph’s (ON) [down 1]
• 16. Earl Haig (ON) [down 1]
• 17. Tagwi (ON) [up 1]
• 18. Memorial Composite (NS) [down 1]

All of these teams were in the bottom 6 in 2015, but here’s my reasoning for the shuffles:

Frontenac had the single most dominant year of any of these teams. Their 1999 provincials R-value of 175% is not fully verified (derived from margins of victory rather than raw points), but was the best on record until Lisgar this year. At nationals, they beat 1990s national regulars Saunders 600-410 in the final; that remains both the highest championship-winning score and the highest combined score in a final. Frontenac deserves a little boost, but not as high a spot as Bell, who sustained provincial appearances into the 21st century.

The Tagwi-Memorial swap is minor. Originally, Memorial had the edge because of their follow-up victory over the NAC champs from the U.S., an opportunity Tagwi never got. That was yet another disappointment for the Tagwi champs, who also never received the Reach trophy because it was stuck in legal-ownership limbo between Kate Andrews HS and the reincarnated SchoolReach program. Anyway, I have now given Tagwi the slight edge because their club remained active far longer than Memorial’s.

Next time, I’ll review the 7th to 12th place clubs. That cluster of teams, who mostly had their success near the turn of the millennium, will see the most change.

## Reach champion rankings, 2017 part 1

The CBC era

I first ranked the Reach for the Top championship-winning clubs in 2015. I summarized the list here. They were split into CBC and modern eras, because there are significant differences in how clubs approached and prepared for the competition. I will be revisiting the rankings this summer.

I’ll emphasize that the lists are about championship-winning clubs. A school needs at least one title to be ranked, regardless of how many consecutive final appearances they have. Clubs are ranked instead of individual teams: I try to take in a school’s whole body of work, rather than determine whether the 1973 team was better than the 2003 team. Because they earned a title in each era, Cobequid Educational Centre appears twice.

Part 1 is the CBC era list. Very little has changed, because the era is over and I don’t get much new information. Without further ado:

1. Lorne Jenken (AB)
2. Gonzaga (NL)
3. Vincent Massey (Etobicoke, ON) [up 1]
4. Oak Bay (BC) [up 1]
5. O’Leary (AB) [down 2]
6. Hillcrest (ON) [up 1]
7. Glenlawn (MB) [down 1]
8. Cobequid (NS)
9. Roland Michener (ON)
10. Kelvin (MB)
11. Dakota (MB)
12. Banting Memorial (ON)
13. Central Peel (ON)
14. Queen Elizabeth (NS)
15. Deloraine (MB)
16. Neil McNeil (ON)
17. River East (MB)
18. Kate Andrews (AB)
19. Rideau (ON)

There are only minor changes. Here’s my rationale:

O’Leary has dropped. They have 2 known final appearances: the 1972 win and the 1974 loss. The 1972 win was the largest paradigm shift in the CBC era, and one of the most important ever. O’Leary is credited with the idea of practicing all year to lead up to a tournament. Unfortunately, they were quickly outclassed by their provincial rival Lorne Jenken at their own game. I think my earlier impressions placed too much emphasis on their innovation without considering the hard reality of their sparse results. Interestingly, their drop benefits Oak Bay, who pretty much solely represented British Columbia in all the years before 1972 and didn’t make use of a practice model.

Hillcrest is up slightly. I’ve had the opportunity to see more Ontario provincial games from the 1970s/80s. Hillcrest appears more frequently, but they lose out by not being the top (southern) Ontario team of a particular year. Their 2 final appearances are better than Glenlawn’s. Glenlawn had more national tournament appearances, but not the high finishes. I think I gave Glenlawn too much credit as the “highest-scoring final winner”; that title has since been lost to the discovery of Frontenac’s 600-410 victory.

For curiosity’s sake, I would place Dryden (the three-peat silver medallists of the 1970s) in the 7-9 range. I would consider them better than fellow northern Ontarians Roland Michener, who got their multiple national appearances in the relatively easier 1980s.

The upcoming modern era rankings, which I will split into three parts, will see a bit more change. Some of that is due to results since 2015, while other changes come from discoveries from the 1990s. These rankings will show up later in the summer.

## The 1979 National Final

Back when J.R. wasn’t shot yet.

This week, I’m trying something a little different. Thanks to people who have saved old tapes, some Reach for the Top games are available online. Today, I’ll give commentary on the 1979 National Final game.

The 1979 national tournament took place in Montréal and was broadcast by CBC. Bill Guest was the host, and Paul Russell was one of the judges.

The 1979 Final pitted the northern Ontario champion, Dryden HS, against the southern Ontario champion, Banting Memorial HS. Dryden HS, from Dryden, is no stranger to the final – the team and their (I assume) captain Brad lost the 1977 and 1978 finals. They’d be eager to break that “slump”, and got to the final by defeating Lorne Jenken (AB) in the quarters and Cobequid (NS) in the semis (both Reach champs). Banting, from Alliston, is less experienced on the national stage, but benefited from a weaker draw that only saw Gonzaga (NL) as a real threat. The database page for the 1979 tournament is here.

Note: video of this match was uploaded by 1978 champion Dino Zincone here, but beware that it is a Flash video with a bloated file size and might not be safe for all browsers.

Questions 1-8 are assigned to one player at a time (with no bounceback to the other team). The Russian literature category leads to a lot of Pushkin guesses, and teams end it tied 20-20.

Team scrambles were slightly different then. The scramble was worth 5 points, and there were four questions exclusive to the winning team. Brad made an anticipatory buzz during “what is the capital of…” and correctly assumed the reader would continue with “…Ethiopia”. The exclusive questions were much more difficult, but Dryden got 20 of the 40 available points on Eritrean independence and the Ogaden War. 45-20 Dryden.

The next four questions were audio samples of artists up for Junos that year. Banting swept it to take the lead. Brad responded by 40-ing the “What am I” about polo. 85-60 Dryden.

Banting tidies up on questions about medical terms, then Eric casually answers “asbestos” for a team scramble (no mention of health effects…). By question 28, the score is 125-85 Banting.

Four visual questions about 20th century art go mostly dead, including one to identify the artist when the signature is in view…

Another batch of eight assigned questions. This set, about anagramming phrases into names, is also done differently: the first players of each team compete on the buzzer to answer two questions, followed by the next two players, and so on. Brad nails both of his and helps Dryden narrow the gap to 115-145 at the ad break.

Banting has the edge on the snappers after the break, but Jim (Dryden) solves math sequences and Brad almost sweeps a set of questions on Montréal’s bridges. Banting is barely holding on to a 195-185 lead.

A list question is next. It’s an interesting twist on the modern version. There are many more answers available, but the first person to buzz earns just 5 points per answer. A player from the second team can then buzz to earn 10 points for any remaining answers. Might make for some odd tactics – do you let a weaker team go first and hope they only answer 2 or 3, or do you rush in and try to exhaust the list for fewer points? Anyway, neither happened here for this list of the nine muses: Brad gets one for 5 points, and Paul gets one for 10 points. What a letdown.

The deflation may have shifted “momentum” in Banting’s favour. They make quick work of a team scramble about kinetic energy to give themselves a nice cushion for the endgame. Brad picks up 30 points between the classical music and religious books categories, but they enter the final snapper round with a Banting lead of 250-220.

Brad destroys the buzzer during the snappers. Figuratively, of course: there is no doubt that his buzzer was still functional at the end of the game. Brad buzzed in first in all but one of the 16 snappers… and only got five. Meanwhile, Banting collectively earned six snappers while buzzing in second each time. Final score, Banting Memorial 310, Dryden 270. Banting is the 1979 Reach for the Top national champion.

Analysis of this game comes down to one thing: Brad’s buzzing. Brad’s trigger-happy finger probably cost the game; over the course of the match, Banting picked up 185 points by buzzing second to Brad. That’s more than half their score! Dryden’s team was incorrect on 39 of their 64 buzzes, though some of those were guesses at the end of the question. Banting, meanwhile, was much more calm on the buzzer (20 incorrect of 52) and didn’t let Dryden pick up any points from second buzzes. A little more discipline probably could have swung three questions (and the title) Dryden’s way. I thought Dryden would have picked up more experience from their past tournaments, but instead we see the heartbreak of losing three straight finals. Neither team really impressed me with their knowledge base: Brad’s pickups on Ethiopian wars and Montréal roadworks were good, but both teams left a lot of questions dead that probably would have been taken by stronger teams from earlier in the decade. Based on scores, Cobequid was possibly the strongest team in the field, but I haven’t been able to see the match where Dryden eliminated them.

Both finalists disappeared from the national scene after this match. Banting played a bit into the SchoolReach era, but no longer competes. Dryden ended up losing northern Ontario titles to Roland Michener and Renfrew over the rest of the CBC era, and isolation from any major urban centres probably stopped them from subscribing to SchoolReach. Among other teams in the tournament, Gonzaga, Lorne Jenken, and Oak Bay had all won titles before, while Cobequid would go on to win two years later (and also in 2005).

I hope this was an interesting look at “old” Reach. I will probably do this again, considering the decent number of games out there and the time to fill in the offseason.

## The R-Value

The points you gave me, nothing else can save me, SOS

Several of my posts have referenced the “R-value”. I think most people realize it is some sort of statistical measure of a team’s strength, but they are confused by either its derivation or interpretation. I am long overdue on clarifying this.

Primarily, the R-value is a mechanism to rank teams who all played the same questions, but did not necessarily play each other. The two most useful applications for this are the Ontario regional-to-provincial and the Ontario provincial prelim-to-playoff qualification systems. Both have a large number of teams that need to be condensed to a small fraction of top teams that would proceed to a higher level, and they all played (roughly) the same questions.

A mechanism exists for this purpose in the US. National Academic Quiz Tournaments’ college program has a couple hundred university teams compete in regional tournaments, all vying to qualify for 64 spots in their national championship (across two divisions). The regional tournaments are all played on the same set of questions. Originally, NAQT used an undisclosed “S-value” to statistically determine which teams, beyond regional winners, deserved a spot in the national championship. With the cooperation of regional hosts providing stats promptly, NAQT could quickly analyze the results and issue qualification invitations a few days after the regional tournaments. Prior to the 2010 season, Dwight Wynne proposed a modified formula that was made transparent so all teams could verify their values were correct. NAQT adopted it, naming the mechanism the “D-value” in honour of Dwight. In 2015, the Academic Competition Federation introduced their “A-value” for national qualifications, which largely followed the D-value formula.

The R-value is a D-value modified for SchoolReach. The “R” stands for “Reach” or “Reach for the Top”. SchoolReach results typically lack the detailed answer conversion information available in quizbowl, so the R-value is dependent on total points and strength of schedule. I also added 2 modifications that I will get to later.

The R-value asks: “How does a team compare to a theoretical average team playing on the question set?” It is answered in the form of a percentage; if a team has an R-value of 100%, they were statistically average for the field. A step-by-step process to get there:

Note: my primitive embedding of LaTeX in WordPress is used below. It is possible it may not appear in your browser.

• First, calculate all teams’ round-robin points-per-game (RRPPG). All games which occur in a round-robin system are included, even if a team plays another team multiple times. Playoffs, tiebreaking games, and exhibition matches are excluded. If certain games are known to be “extended” (for example, double-length), that is reflected in the “RR games” total.
• $RRPPG=\frac{RRPts}{RRG}$
• With the RRPPGs known, determine each team’s round-robin opponent average PPG (RROppPPG). This is the average of the PPGs of each opponent a team played, double- or triple- counting where appropriate if they faced each other multiple times. Note: this is different from a team’s average points against, which is a different statistic that is not used in this analysis.
• $RROppPPG=\frac{RRPPG_{opp_1} +RRPPG_{opp_2} +...+RRPPG_{opp_n}}{RRG}$
• The question set’s average points is also needed. This covers all pools and all sites where the questions were used for the purpose of the rank. I determine this average through total RR points and total RR games, so larger sites that have more games do end up with a larger influence on the set average.
• $SetPPG=\frac{\sum{RRPts}}{\sum{RRG}}$
• Strength of schedule (SOS) is a factor to determine how strong a team’s opponents were compared to facing an average set of opponents for the field. A value above 1 indicates a tougher than average schedule; below 1 is a lower than average schedule. In reasonably balanced pools, it is typical to have top teams below 1 and bottom teams above 1 – a top team doesn’t play itself, but its high point tally contributes to the total of one of its weaker opponents. Also, by comparing across multiple pools/sites, SOS can give an overview of how strong a pool/site was.
• $SOS=\frac{RROppPPG}{SetPPG}$
• Now for the biggest leap: the points a team earned must be modified to account for how strong its schedule was. Racking up 400 PPG is far more difficult against national contenders than against novices. Adjusted RRPPG multiplies points by the SOS factor – a tougher schedule gives a team a higher adjusted point total. This adjusted value theoretically represents a team’s PPG if they faced a slate of average teams. Note: this value is not shown in result tables.
• $RRPPG_{adj}=RRPPG \times SOS$
• This value is suitable on its own for ranking. However, I add an extra step of normalizing for the set, so I can compare across years. Earning 400 PPG is far more difficult when the set average is 200 compared to a set average of 300. For example, the late ’90s/early ’00s had much higher set point totals than today (through different formats), and a normalization is needed to compare historical teams of that era to today. The calculated result is the raw R-value, which I convert to a percentage for easier comprehension of how much different from average a team is.
• $Rval_{raw}=\frac{RRPPG_{adj}}{SetPPG} \times 100\%$
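The steps above can be condensed into a short script. Here is an illustrative Python sketch (not my actual spreadsheet); the `results` structure, mapping each team to a list of (opponent, points scored) round-robin games, is an assumed input format, not an official SchoolReach export:

```python
def raw_r_values(results):
    """Compute raw R-values (as percentages) from round-robin results.

    `results` maps each team to a list of (opponent, points_scored)
    tuples, one per round-robin game. Repeat opponents appear once per
    meeting, which handles the double-counting automatically.
    """
    # Round-robin points per game (RRPPG) for every team.
    rrppg = {team: sum(p for _, p in games) / len(games)
             for team, games in results.items()}

    # Set average PPG: total RR points over total RR games, so larger
    # sites carry proportionally more weight.
    set_ppg = (sum(p for games in results.values() for _, p in games)
               / sum(len(games) for games in results.values()))

    r_values = {}
    for team, games in results.items():
        # Opponent average PPG, then SOS, adjusted PPG, and raw R-value.
        opp_ppg = sum(rrppg[opp] for opp, _ in games) / len(games)
        sos = opp_ppg / set_ppg              # strength of schedule
        adj_ppg = rrppg[team] * sos          # schedule-adjusted PPG
        r_values[team] = adj_ppg / set_ppg * 100
    return r_values
```

In a toy three-team round robin (A beats B 300-200, A beats C 400-100, B beats C 250-150), A comes out at 112.5% with an SOS below 1, matching the pattern noted above: the top team doesn’t play itself, so its schedule looks weaker than average.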

Raw R-value is the number I use for most comparison purposes. In earlier posts, I tried to show some examples of how this statistic is useful for predicting future performance (especially playoffs) and analyzing outlier results. If R-value is to be used for any sort of qualification system, however, it needs to account for the universally-accepted idea that it is most important to win games. Almost all tournaments use final ranks based primarily on winning (either in playoffs or just prelim results). A team with a low (raw) R-value that finishes ahead of a team with a high R-value deserves qualification just as much as (if not more than) the teams below them in the standings. The actual R-value is then calculated, based on NAQT’s system (quoting from their D-value page):

After the raw values are computed, they are listed in order for each [site] and a correction is applied to ensure that invitations do not break the order-of-finish at [a site]. Starting at the top of each [site], each team is checked to see if it finished above one or more teams with higher D-values. If it did, then that team and every team between it and the lowest team with a higher D-value are given the mean D-value of that group and ranked in order by their finish.

Let’s say a site winner had a raw R-value of 120% and the runner-up had a final upset while finishing with a raw R-value of 140%. Under this adjustment, both teams end up with the mean, 130%, for their true R-value. The winner receives a boost for finishing above one or more stronger teams, while the lower teams receive a penalty for not reaching their “potential”. The true R-values would then be compared across pools/sites for qualification purposes; if tied teams straddle the cutoff for qualification, invites are issued in order of rank at the tournament.

I do deviate slightly from this formula, though. It is possible, but rare, for the top-ranked team in such an average to end up with a lower R-value for finishing higher than a stronger team (e.g. 1st 120%, 2nd 80%, 3rd 130%; all teams get 110%). I don’t believe this should ever happen. If it does, I modify the averaging by this algorithm:

• First, follow the NAQT algorithm
• If the first team in the averaging has their R-value lower than their raw R-value, ignore the last team (which has a higher raw R-value than the first team)
• Proceed to the team one rank above the formerly-last team and attempt the R-value average again. Repeat until the first team improves upon their R-value.
• Continue the NAQT algorithm with the next team after the new set of averaged teams
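In code form, my reading of this modified averaging looks something like the following Python sketch (raw R-values listed in order of finish, index 0 = 1st place; an illustration, not NAQT’s or my actual implementation):

```python
def true_r_values(raw):
    """Order-of-finish correction, with the tweak that the first team
    of an averaged group never ends up below its raw R-value."""
    vals = list(raw)
    i = 0
    while i < len(raw):
        # Lowest-finishing team below i with a higher raw R-value.
        end = i
        for j in range(i + 1, len(raw)):
            if raw[j] > raw[i]:
                end = j
        # Average the group, shrinking it from the bottom until the
        # first team does not lose value; if it shrinks to nothing,
        # the first team simply keeps its raw value.
        while end > i:
            mean = sum(raw[i:end + 1]) / (end - i + 1)
            if mean >= raw[i]:
                for k in range(i, end + 1):
                    vals[k] = mean
                break
            end -= 1
        # Continue with the next team after the (possibly empty) group.
        i = end + 1
    return vals
```

Under this reading, the example above (1st 120%, 2nd 80%, 3rd 130%) comes out as 120%, 105%, 105%: the winner is untouched, while the second and third teams are averaged together. The simpler two-team case from earlier (winner 120%, runner-up 140%) still reproduces the plain NAQT result of 130% apiece.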

Look at the 2016 Ontario Provincials results for an example. Woburn had a very high raw R-value (131.8%), but finished very low (22nd). Under the basic D-value algorithm, 4th-placed London Central would have joined the big set of teams all the way down to Woburn, and ended up with a decrease in their R-value, thanks to the many intermediary teams with low raw R-values. Instead, Woburn was ignored, and the next-lowest team with a higher raw R-value (Hillfield at 132.9%) was tested. Again, this would drop Central’s R-value because of the low value for intermediary Marc Garneau. It is only an average with 5th-placed Waterloo that allows Central to improve on their raw result. From this, the algorithm goes to the next “unaveraged” team, Marc Garneau, who starts the group all the way down to Woburn because they earn a slight R-value boost. 6th through 22nd end up with a final R-value of 110.6% each.

And that’s how you get the R-value. The math isn’t that complicated, but it does require detailed number-crunching, especially for the opponent PPG step. Until more thorough result reporting occurs in SchoolReach, it is probably the best analysis that can be done with the information available. Thankfully, it is a fairly reliable metric for team performance, and I hope to show some examples in future posts.

## 2017 PACE NSC

Keeping PACE with current events.

The Partnership for Academic Competition Excellence (PACE) held their National Scholastic Championship (NSC) over the weekend. Unlike other major players in quiz tournaments, PACE is a registered non-profit that has a membership of coaches and former players. The NSC is their one tournament of the year (and a major fundraiser), while the rest of the year is outreach and assistance in the US.

Canadian teams have attended ~~3~~ 5 NSCs. Lisgar attended in 2011, finishing 28th of 60 teams. White Oaks attended in 2016, finishing 81st of 96 teams. This past weekend, Lisgar sent two teams, including one that was fresh off their Reach for the Top championship. The “B” team, consisting of one of the champions and three additional players, did well in their second-phase bracket and ultimately finished 78th of 96 teams. Lisgar A had an excellent opening morning (losing only to the eventual second place team), but struggled in their second phase and placed 22nd.

Edited to add: I had poor memory and missed Lisgar in 2013 and Waterloo CI in 2015. I went back to check that I didn’t miss, say, one of the Alberta teams, but I think all the appearances are covered.

Lisgar A’s result is very good in the context of Canadian teams. The American circuit is far more robust and competitive than the scene up north – Lisgar probably played fewer quizbowl games pre-nationals than some teams played tournaments! Colonel By’s 21st-place finish at the 2015 NAQT HSNCT still remains the high-water mark, unless you count a 1988 exhibition match in which a team from Earl Haig defeated the NAC-winning team. Nevertheless, Lisgar did well in a tough schedule that saw them face 3 of the eventual top 4 teams over the course of the opening day.

The tournament itself was won by Detroit Catholic Central A. Combined Saturday/Sunday results are found here. Reach could also take a hint from how quickly (i.e. live) the YouTube stream of the 2nd-place match, all-star game, and closing awards got uploaded (note: a true final did not occur because of the 2-win clearance of DCC A over the rest of the field).

Congratulations to Lisgar, along with all those teams from the US & Singapore!

A post-script: I, amusingly, have already set aside the tag “pace” for the team from Richmond Hill. The day PACE goes to PACE…

## 2017 Nationals Results

Lights! Camera! Inaction!

Last weekend, UTS and the University of Toronto hosted the 49th* national championship of Reach for the Top. 16 teams from seven provinces had a full day of round-robin competition before vying for the title in the playoffs.

*Reach claimed it was the 51st, but only 49 championship seasons have occurred due to the stoppages after the CBC era.

The full results are uploaded here. Lisgar CI claimed their third national title in a close 460-410 final over the University of Toronto Schools; it was an anticipated clash of titans and a rematch of the provincial final, which UTS won. My rundown of the teams, in order of rank (note: for this tournament, I broke standings ties by round-robin seed):

• Auburn Drive (16th). Nova Scotia’s clubs were greatly hindered by job action this year; only five teams went to provincials. Let’s hope this year was only a blip and that teams can have more support next year. As for this particular team, I never saw them until their final consolation game. They kept close with SJHS in the first half and won an excruciatingly long shootout, but saw their tournament end there.
• Rundle College (15th). Schedule quirkiness meant I saw Rundle for 9 of the 15 preliminary games and became their unintended fan club. As a surprise invite from their fourth-place finish in Alberta, expectations were not high. Their top player will return next year, so hopefully they can build from this experience for another shot at Nationals.
• St. Paul’s (14th). I think there was a different line-up between provincials and nationals, because the Manitoba champs fell short of the other teams from their province. Hopefully this means the school has a large pool of players to choose from, and can re-assemble for another provincial title run next year.
• Collingwood (13th). This rank will simply go down as “oops”. They temporarily disappeared after one of their consolation losses and defaulted a win to lower-seeded Saint John. They would have been in contention for the consolation bracket title otherwise.
• Saint John (12th). They got a lucky break from Collingwood, but ended up fourth of the final four consolation teams. They will be in tough to qualify for nationals next year, because the competition for second in New Brunswick is very tight. Interestingly, I only ever saw them win: in the round-robin over Rundle and the playoffs over Auburn Drive.
• Marianopolis (11th). This team was a bit weaker than some CEGEP teams of the past, but they pulled one of the few upsets in the playoffs with a second-round consolation win over Renert.
• St. John’s-Ravenscourt (10th). Just getting to nationals was impressive: Horton’s drop-out meant that SJR organized a team trip from Winnipeg less than a week before the tournament. They didn’t show too much rust and even managed to beat their provincial champions in the round-robin!
• Renert (9th). Renert & Co. did improve from last year. Their highlight was either almost defeating KV or playing the most games of any team by taking the long route to the consolation bracket title. A tournament MVP came from this team (I don’t mention names due to a blog-wide policy of keeping players anonymous).
• Old Scona (8th). Unlike their provincial compatriots, Old Scona did pull off a win over KV, but fell back to eighth seed by the end of Saturday. Eighth seed unfortunately meant an early match with UTS, where even a 480-240 loss to them would be considered a good result.
• Sir Winston Churchill (7th). They seemed, on paper at least, to be the strongest team from BC, even though Collingwood beat them in both the provincial final and the round-robin match (they won the play-in match over Collingwood, though). They led UTS going into the final snapout of their match, but couldn’t pull off what would have been the biggest upset of the tournament. Their mix of ups and downs averaged them out to the middle of the pack.
• Templeton (6th). For a team’s first appearance at Nationals (either in a long while or ever – not sure), they did very well. They almost beat Martingrove in the round robin and finished as the highest-placing BC team. Considering that they were nowhere even on the provincial scene before this, they would certainly be the “most improved” team. A tournament MVP came from this team.
• Kelvin (5th). R-values suggest the 5th-8th place teams have razor-thin differences in strength between them, but Kelvin got the wins. Kelvin was on my radar as the Manitoba runners-up, but I didn’t expect them to get as high as fifth. Well done, though I didn’t get to see them play. A tournament MVP came from this team.
• Kennebecasis Valley (4th). I think that even before the tournament began, KV was destined for fourth: they weren’t quite up to the level of the Ontario teams but were definitely better than anyone else. They got within 30 points of Lisgar in the round-robin, but a loss to Old Scona almost cost them a playoff bye. The semifinal match against UTS was very impressive, though. They capitalized on UTS’ mistakes and frustrations to keep within 20 points late in the match, and nearly gave the favourites their first loss of the tournament. With good players returning, I would not be surprised to see a late-round rematch next year – perhaps even in the final for once!
• Martingrove (3rd). Like KV, Martingrove seemed set for their final position as a step behind Lisgar and UTS. A mere 250-230 loss to UTS in the round-robin gave the Ontario champs a little scare, and they easily handled Templeton in the quarterfinal. The semifinal wasn’t pretty though: a poor run during the team scrambles sapped any momentum they had and allowed Lisgar’s MVP alone to earn more points than them. Nevertheless, they were part of the good camaraderie among the Ontario teams and hopefully they’ll show up at more tournaments next year in their quest to keep their Nationals attendance streak alive.
• UTS (2nd). UTS was my pick as the strongest team on paper, despite what the R-value said. They swept the round-robin while rarely fielding their true A-team; it cost them a few extra points, but who needs points for seeding when you’re 15-0? Unfortunately, the team was mired in production difficulties in both of their final-day matches. They were not on top of their game and only narrowly beat KV before taking the loss in the final. I think the delays and frustrations ate away at them, but they also had to deal with Lisgar’s MVP getting a second wind on the last day. It was a very good final match, and they had a great season overall (including a 6-1 record across formats over Lisgar A). They should be just as strong next year, so best of luck for another title run!
• Lisgar (1st). Best ever regional result. Best ever provincial (round-robin) result. 2nd best ever national (round-robin) result. Analytically, this title is not a surprise. Realistically, it was anything but certain. The team played sick (barely getting to the stage) and they entered the playoffs knowing they had taken nothing but losses to UTS all year. There was not a lot of optimism among them for the final morning. However, that semifinal was a much-needed boost in confidence and set them up for a stellar (minus the 25-minute delay) final. Who needs shootout wins anyway? While Lisgar’s MVP (also selected as a tournament MVP) returns, no one else does, so this was expected to be Lisgar’s last chance for a while. They got the title, and now they can go back to their regularly-scheduled programming of quizbowl.

A great tournament by all the teams. The matches I saw both in-room and on-stage were great to watch, even if I ended up rooting for the Rundle underdog half the time.

The tournament organization was pretty good. Logistics has never been a problem for Reach, and for their price tag, you expect the perks and efficiencies. Games were on time, staff were prepared, food was ready, and results were prompt. Sunday’s stage games were also well done, even if there was an audience of just me and the coaches at times.

Unfortunately, Monday was a mess. The small change from SchoolReach to Reach for the Top – filming the event – made a world of difference, for the worse. Floodlights blew the breakers in the first game and wrecked a buzzer. The need to announce players on the replacement buzzers forced a “pay no attention to the man behind the curtain” scene where a person hid behind a banner to identify the light that went off. Pre-games became fidgety with ridiculous insert shots of buzzing, applause, and phony reactions. Games stopped twice when camera SD cards filled. Most notoriously, the delay of “reviewing the tapes” (rather than the tried-and-tested method south of the border: leniency on vowel pronunciation) dragged the final into a 90-minute affair. UTS’ auditorium is not a television studio. Reach needs to either get back to a real TV production or start embracing less intrusive broadcast options, like Twitch or YouTube streaming. Disrupting players for the sake of pretty video (that will never come to light) shows a complete disregard for what should be the most important part of these tournaments – the academic performance of the players. UTS was definitely compromised by the production, and while I don’t dispute the title, I think we were robbed of an even better final. I know some changes will be underway at Reach, and I hope the Monday routine is part of that.

The final gala was good, though. Much more concise and meaningful than some of the provincial galas.

But I shouldn’t let my rambling detract from recognizing the most important things: the players who all showed excellent skill, teamwork, and friendliness; the coaches who coordinate not only the management of a strong team but the logistics of getting them to events; and the volunteer staff who keep the games going in a timely manner.

To those graduating, best wishes for your post-secondary pursuits (hey, try quizbowl). For those returning, good luck in your title run!

Finally, a blog note: this is, obviously, the end of the Reach season. For the off-season, my priorities are an explanation of the R-value (and follow-ups with old provincial analysis), a look at some historical games, and an updated Reach champion ranking, which I last did in 2015. Stay tuned!

## 2017 Nationals brief results

At least the table’s up.

I’ll be brief because of my quick turnaround time after leaving Toronto, but I will provide more information on the weekend.

16 teams from across Canada descended on Toronto last weekend to determine the Reach for the Top national champion. Teams played a full round-robin, followed by championship and consolation playoffs.

Lisgar CI defeated the University of Toronto Schools 460-410 in the final. The full preliminary table, as well as all playoff games, have been added to the database. This is Lisgar’s third national title.

Congratulations to all of the teams and participants in the National championships. It represented a year of hard work for everyone, and all of the teams should be proud of their performances. Best of luck in the pursuits that lie ahead!