This is Reach Scores, a place to find out about, well, “Reach” scores.
Many Canadian high school students have played in, watched, or at least heard of “Reach for the Top”. It began as a CBC television program in the 1960s, left the airwaves, and returned as the classroom-based “SchoolReach”. Lists of alumni and champions are readily available on the web.
But what about scores? Unlike, say, high school sports, scores of Reach games are not easy to find. The Reach for the Top organization publishes results of tournaments they manage (Nationals and some provincial championships), but the pickings are slim for regional or historical results. Every now and then you’ll find a YouTube video or a news archive of a single game from a tournament, but it is often in the context of a school tooting its horn, rather than providing a larger scope of gameplay and competition for a region or time period.
I’d like to change that. I have already uploaded some tournament results on a wiki-style site (use this index of tournaments as a start) for historical interest. People ought to be able to find results without needing to dig through an internet or library archive, and I hope to make it more convenient for them. I will use this blog for updates, including planned recordings of games. I will probably also add analysis (such as my “R-value”) and opinion. Contributions, results or otherwise, are welcome through the contact methods listed on this site.
Here ends the ranking of Reach championship-winning clubs. The rankings from the CBC era are here, while the first two parts of the modern era are here and here.
The top six teams are no strangers to national championships. Between them, they account for 15 titles and 26 final appearances, and you have to go back to the 2000 Merivale-Ridley final to find a title match that didn’t feature at least one of these teams. These are the teams that have dominated the new millennium.
Here are the top six, with not much change from 2015:
1. University of Toronto Schools (ON)
2. Kennebecasis Valley HS (NB)
3. St. George’s School (BC)
4. Lisgar CI (ON) [up 3]
5. London Central SS (ON) [down 1]
6. Martingrove CI (ON)
I don’t think UTS at #1 is a surprise to anyone. UTS participated in Reach as far back as the CBC era, but they really broke out at the 2000 Ontario Provincials with a sudden run to the semifinals after some absence. Their first peak from 2001-04 saw four consecutive provincial titles and two national titles, all while putting up dominant playing statistics that would be tops until very recently. After that, they continued picking up provincial titles and a few final appearances in a relative “lull”, then nabbed two more titles this decade to cement themselves as the winningest club. A new coach hasn’t slowed down the team; they cruised through most of Nationals this year and are expected to do well in the years to come.
KV is firmly at #2, thanks to another title they picked up last year. Three titles and a further three runner-up finishes would be enough to rest on, but they are also incredibly consistent. KV ended Fredericton’s long domination of New Brunswick in 2004, missed 2005 due to job action, and has qualified for Nationals every year since. In their 13 National appearances, they have finished in the top half of the field every time. Not even UTS can claim this consistency, though KV still has quite a bit to go to catch another team…
St. George’s remaining at #3 might be more surprising. They have mostly faded from national contention after the retirement of their coach who guided them in the ’90s and ’00s, though they still pick up some appearances as a BC representative. What keeps them ranked high is 20 National appearances (I can’t verify ’89 & ’90, which would add to that total). St. George’s was the team to beat in Reach until UTS took on that mantle. Their coach often lamented that the only thing that held them back from even more titles was Ontario’s grade 13 and all those old students dominating the ’90s (to be fair, Fredericton was often a better non-Ontario team than St. George’s during that period). I will still place St. George’s ahead of the remaining teams, but the next set are in a good position to rise further.
Lisgar, Central, and Martingrove are close to a coin flip. Lisgar has the most titles but is the least consistent, Central has been steady for a decade, while Martingrove has both history and an impressive consecutive appearance streak going (for an Ontario team). Here are National stats for the three:
Lisgar CI: 3 titles (’08, ’15, ’17); 5 National appearances (& ’11, ’16)
London Central SS: 2 titles (’07, ’09); 4 finals (& ’12, ’14); 5 National appearances (& ’16)
Martingrove CI: 6 National appearances (& ’93, ’13, ’15-17)
Each of the three leads a different category. Martingrove, at first glance, appears a step below, but their strength in ’90s Ontario alongside Saunders shouldn’t be discounted (it is possible they also qualified in ’89 or ’90). Lisgar has made the biggest jump forward; when these rankings were last done, they trailed Central in every category. For now, I think the order is fair, but even just next year’s results could alter the positions of all three.
And that concludes my Reach for the Top champion rankings. I try to find old results to help boost the reputations of clubs, but if you have further information, I’ll be glad to use it for later updating (like I did to elevate Saunders). I hope you enjoyed them!
I’m continuing the ranking of Reach championship-winning clubs. The rankings from the CBC era are found here, while the first part of the modern era is found here.
This next set of clubs mostly had their highlights around the turn of the millennium, 1995-2005. Sandy Stewart, the founder of the SchoolReach program, had retired by this point, but the Reach program was in good shape: subscriptions were at their peak, provincial and national championships returned to television, and new question styles (like shootouts and relays) were introduced. Reach alumni from the 1990s started establishing university clubs at Queen’s, Western, and Waterloo, though Reach failed in their early-2000s attempt to launch a university subscription program. The Reach circuit, as a whole, may not have had as much top-end strength as today, but it had a healthier population.
Part 2 of the modern era rankings:
7. Saunders (ON) [up 5]
8. Gloucester (ON) [down 3]
9. Fredericton (NB) [up 1]
10. Cobequid (NS) [down 2]
11. Merivale (ON) [down 2]
12. Woburn (ON) [down 1]
The biggest change of the whole list is Saunders’ rise. While Saunders’ four Nationals appearances in the five Thorsley years is impressive, I toned down their ranking before by attributing it all to the strength of one player. I was mistaken. Like the 1990 Oilers, Saunders could find success again without their star, and finished the 1999 final with one of the highest losing scores ever. There was clearly a good foundation to that club, and they deserved to be higher than originally placed. Unfortunately, they have been pretty much dormant this century, so that stops them from getting higher.
Gloucester drops because of the rise of other teams. Gloucester was probably the best program in Canada for the span of years I mentioned earlier, using different player compositions in all their National appearances. However, with the club inactive, they will continue to drop as other teams achieve success.
Fredericton gets a slight boost from my awareness of three straight finals, 1994-1996. I knew about their long dominance of New Brunswick, but taking it to the Ontario teams in an era of Ontario’s 5-year high schools is impressive (the St. George’s coach of the time claimed his school would have had many more titles had Ontario stopped at grade 12). Fredericton is still around, and may rise again.
Cobequid, Merivale, and Woburn all drop as other teams rise. Cobequid has come down from their ’00s peak, while Merivale and Woburn can’t crack Nationals despite some provincial playoff success. None of these teams should be at risk of falling below last week’s inactive clubs, though.
The final installment comes next week. You can deduce who is in the top six, but I’ll reveal my ranks and reasoning then. The remaining top teams are all active, all have Nationals success, and most got their break in the top-heavy years of the new millennium. Stay tuned!
Today I begin ranking the Reach championship-winning clubs for the modern SchoolReach era. The rankings from the CBC era are found here.
The SchoolReach subscription program began right after the final CBC episodes of 1985. Schools enrolled to get sets of questions that were used either for intramural/interschool tournaments or local TV productions. Ontario teams were very active in these “lost” years. By 1988, a graduated regional/provincial/national system was re-established, with the help of coaches like Eric Stewart (BC), Chris Zarski (AB), Patricia Beecham-Cooper (ON), and Hans Budgey (NS). A few tournaments had television coverage, but national championships were done off-air in the early 1990s.
The teams for today’s set of rankings come from this part of the modern era. The clubs ranked 13-18 all had their one national title in the ’80s or ’90s and none returned for another nationals appearance (as far as I can tell). Most are inactive now.
Part 1 of the modern era rankings (sorry, I can’t make a numbered list start at 13):
13. Bell (ON)
14. Frontenac (ON) [up 2]
15. St. Joseph’s (ON) [down 1]
16. Earl Haig (ON) [down 1]
17. Tagwi (ON) [up 1]
18. Memorial Composite (NS) [down 1]
All of these teams were in the bottom 6 in 2015, but here’s my reasoning for the shuffles:
Frontenac had the single most dominant year of any of these teams. Their 1999 provincials R-value of 175% is not fully verified (derived from margins of victory rather than raw points), but was the best on record until Lisgar this year. At nationals, they beat national regulars (for the 1990s) Saunders 600-410 in the final; that is the highest championship-winning score and the highest combined score in a final. Frontenac deserves a little boost, but not as high as Bell, who could sustain some provincials appearances into the 21st century.
The Tagwi-Memorial swap is minor. Originally, Memorial had the edge because of their follow-up victory over the NAC champs from the U.S., an opportunity Tagwi never got. That was another disappointment for the Tagwi champs, who also never received the Reach trophy because it was stuck in legal ownership limbo between Kate Andrews HS and the reincarnated SchoolReach program. Anyway, I have now given Tagwi the slight edge because their club remained active far longer than Memorial’s.
Next time, I’ll review the 7th to 12th place clubs. That cluster of teams, who mostly had their success near the turn of the millennium, will see the most change.
I first ranked the Reach for the Top championship-winning clubs in 2015. I summarized the list here. They were split into CBC and modern eras, because there are significant differences in how clubs approached and prepared for the competition. I will be revisiting the rankings this summer.
I’ll emphasize that the lists are about championship-winning clubs. A school needs at least one title to be ranked, regardless of how many consecutive final appearances it has. Clubs are ranked rather than individual teams: I try to take in a school’s whole body of work, rather than determine whether the 1973 team was better than the 2003 team. Because they earned a title in each era, Cobequid Educational Centre appears twice.
Part 1 is the CBC era list. Very little has changed, because the era is over and I don’t get much new information. Without further ado:
1. Lorne Jenken (AB)
2. Vincent Massey (Etobicoke, ON) [up 1]
3. Oak Bay (BC) [up 1]
4. O’Leary (AB) [down 2]
5. Hillcrest (ON) [up 1]
6. Glenlawn (MB) [down 1]
7. Roland Michener (ON)
8. Banting Memorial (ON)
9. Central Peel (ON)
10. Queen Elizabeth (NS)
11. Neil McNeil (ON)
12. River East (MB)
13. Kate Andrews (AB)
There are only minor changes. Here’s my rationale:
O’Leary has dropped. They have 2 known final appearances: the 1972 win and the 1974 loss. The 1972 win was the largest paradigm shift of the CBC era, and one of the most important results in Reach history. O’Leary is credited with the idea of practicing all year to lead up to a tournament. Unfortunately, they were quickly outclassed at their own game by their provincial rival Lorne Jenken. I think my earlier impressions placed too much emphasis on their innovation without weighing the hard reality of so few results. Interestingly, their drop benefits Oak Bay, who pretty much solely represented British Columbia in the years before 1972 and didn’t use a practice model.
Hillcrest is up slightly. I’ve had the opportunity to see more Ontario provincial games from the 1970s and ’80s. Hillcrest appears frequently, but they often lose out by not being the top (southern) Ontario team of a particular year. Their 2 final appearances are better than Glenlawn’s; Glenlawn had more national tournament appearances, but not the high finishes. I think I gave Glenlawn too much credit as the “highest-scoring final winner”; that title has since been lost to the discovery of Frontenac’s 600-410 victory.
For curiosity’s sake, I would place Dryden (the three-peat silver medallists of the 1970s) in the 7-9 range. I would consider them better than fellow northern Ontarians Roland Michener, who got their multiple national appearances in the relatively easier 1980s.
The upcoming modern era rankings, which I will split into three parts, will see a bit more change. Some of that is due to results since 2015, while other changes come from discoveries from the 1990s. These rankings will show up later in the summer.
This week, I’m trying something a little different. Thanks to people that have saved old tapes, some Reach for the Top games are available online. Today, I’ll give commentary on the 1979 National Final game.
The 1979 national tournament took place in Montréal and was broadcast by CBC. Bill Guest was the host, and Paul Russell was one of the judges.
The 1979 Final pitted the northern Ontario champion, Dryden HS, against the southern Ontario champion, Banting Memorial HS. Dryden HS, from Dryden, is no stranger to the final – the team and their (I assume) captain Brad lost the 1977 and 1978 finals. They’d be eager to break that “slump”, and got to the final by defeating Lorne Jenken (AB) in the quarters and Cobequid (NS) in the semis (both Reach champs). Banting, from Alliston, is less experienced on the national stage, but benefited from a weaker draw that only saw Gonzaga (NL) as a real threat. The database page for the 1979 tournament is here.
Note: video of this match was uploaded by 1978 champion Dino Zincone here, but beware that it is a Flash video with a bloated file size and might not be safe for all browsers.
Questions 1-8 are assigned to one player at a time (with no bounceback to the other team). The Russian literature category leads to a lot of Pushkin guesses, and teams end it tied 20-20.
Team scrambles were slightly different then: the scramble was worth 5 points, with four follow-up questions exclusive to the winning team. Brad made an anticipatory buzz during “what is the capital of…” and correctly assumed the reader would continue with “…Ethiopia”. Their exclusive questions were much more difficult, but they got 20 of the 40 points on Eritrean independence and the Ogaden War. 45-20 Dryden.
The next four questions were audio samples of artists up for Junos that year. Banting swept it to take the lead. Brad responded by 40-ing the “What am I” about polo. 85-60 Dryden.
Banting tidies up on questions about medical terms, then Eric casually answers “asbestos” for a team scramble (no mention of health effects…). By question 28, the score is 125-85 Banting.
Four visual questions about 20th century art go mostly dead, including one to identify the artist when the signature is in view…
Another batch of eight assigned questions. This set, about anagramming phrases into names, is also done differently: the first players of each team compete on the buzzer to answer two questions, followed by the next two players, and so on. Brad nails both of his and helps get Dryden back to a 115-145 score at the ad break.
Banting has the edge on the snappers after the break, but Jim (Dryden) solves math sequences and Brad almost sweeps a set of questions on Montréal’s bridges. Banting is barely holding on to a 195-185 lead.
A list question is next. It takes an interesting twist from the modern version. There are many more answers available, but the first person to buzz earns just 5 points per answer. A player from the second team can then buzz to earn 10 points for any remaining answers. Might make for some odd tactics – do you let a weaker team go first and hope they only answer 2 or 3, or do you rush in and try to exhaust the list for fewer points? Anyway, neither happened here for this list of the nine muses: Brad gets one for 5 points, and Paul gets one for 10 points. What a letdown.
The deflation may have shifted “momentum” in Banting’s favour. They make quick work of a team scramble about kinetic energy to give themselves a nice cushion for the endgame. Brad picks up 30 points between the classical music and religious books categories, but they enter the final snapper round with a Banting lead of 250-220.
Brad destroys the buzzer during the snappers. Figuratively, of course: there is no doubt that his buzzer was still functional at the end of the game. Brad buzzed in first in all but one of the 16 snappers… and only got five. Meanwhile, Banting collectively earned six snappers while buzzing in second each time. Final score, Banting Memorial 310, Dryden 270. Banting is the 1979 Reach for the Top national champion.
Analysis of this game comes down to one thing: Brad’s buzzing. Brad’s trigger-happy finger probably cost the game; over the course of the match, Banting picked up 185 points by buzzing second to Brad. That’s more than half their score! Dryden was incorrect on 39 of their 64 buzzes, though some of that was end-of-question guessing. Banting, meanwhile, was much calmer on the buzzer (20 incorrect of 52) and didn’t let Dryden pick up any points from second buzzes. A little more discipline probably could have swung three questions (and the title) Dryden’s way. I thought Dryden would have picked up more experience from their past tournaments, but instead we see the heartbreak of losing three straight finals. Neither team really impressed me with their knowledge base: Brad’s pickups on Ethiopian wars and Montréal roadworks were good, but both teams left a lot of questions dead that probably would have been taken by the stronger teams from earlier in the decade. Based on scores, Cobequid was possibly the strongest team in the field, but I haven’t been able to see the match where Dryden eliminated them.
Both finalists disappeared from the national scene after this match. Banting played a bit into the SchoolReach era, but no longer competes. Dryden ended up losing northern Ontario titles to Roland Michener and Renfrew over the rest of the CBC era, and isolation from any major urban centre probably stopped them from subscribing to SchoolReach. Among other teams in the tournament, Gonzaga, Lorne Jenken, and Oak Bay had all won titles before, while Cobequid would go on to win two years later (and again in 2005).
I hope this was an interesting look at “old” Reach. I will probably do this again, considering the decent number of games out there and the time to fill in the offseason.
The points you gave me, nothing else can save me, SOS
Several of my posts have referenced the “R-value”. I think most people realize it is some sort of statistical measure of a team’s strength, but they are confused by either its derivation or interpretation. I am long overdue on clarifying this.
Primarily, the R-value is a mechanism to rank teams who all played the same questions, but did not necessarily play each other. The two most useful applications for this are the Ontario regional-to-provincial and the Ontario provincial prelim-to-playoff qualification systems. Both have a large number of teams that need to be condensed to a small fraction of top teams that would proceed to a higher level, and they all played (roughly) the same questions.
A mechanism exists for this purpose in the US. National Academic Quiz Tournaments’ college program has a couple hundred university teams compete in regional tournaments, all vying to qualify for 64 spots in their national championship (across two divisions). The regional tournaments are all played on the same set of questions. Originally, NAQT used an undisclosed “S-value” to statistically determine which teams, beyond regional winners, deserved a spot in the national championship. With the cooperation of regional hosts providing stats promptly, NAQT could quickly analyze the results and issue qualification invitations a few days after the regional tournaments. Prior to the 2010 season, Dwight Wynne proposed a modified formula made transparent so all teams could verify their values were correct. NAQT adopted this, and named the mechanism the “D-value” in honour of Dwight. In 2015, the Academic Competition Federation introduced their “A-value” for national qualifications, which largely followed the D-value formula.
The R-value is a D-value modified for SchoolReach. The “R” stands for “Reach” or “Reach for the Top”. SchoolReach results typically lack the detailed answer conversion information available in quizbowl, so the R-value is dependent on total points and strength of schedule. I also added 2 modifications that I will get to later.
The R-value asks: “How does a team compare to a theoretical average team playing on the question set?” It is answered in the form of a percentage; if a team has an R-value of 100%, they were statistically average for the field. A step-by-step process to get there:
Note: my primitive embedding of LaTeX in WordPress is used below; it may not render in your browser.
First, calculate all teams’ round-robin points-per-game (RRPPG). All games which occur in a round-robin system are included, even if a team plays another team multiple times. Playoffs, tiebreaking games, and exhibition matches are excluded. If certain games are known to be “extended” (for example, double-length), that is reflected in the “RR games” total.
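In symbols (my shorthand, not any official notation), for a team $i$:

\[ \mathrm{RRPPG}_i = \frac{\text{total RR points scored by } i}{\text{RR games played by } i} \]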
With the RRPPGs known, determine each team’s round-robin opponent average PPG (RROppPPG). This is the average of the RRPPGs of each opponent a team played, double- or triple-counting where appropriate if they faced each other multiple times. Note: this is not the same as a team’s average points against, which is not used in this analysis.
The question set’s average points is also needed. This covers all pools and all sites where the questions were used for the purpose of the rank. I determine this average through total RR points and total RR games, so larger sites that have more games do end up with a larger influence on the set average.
Strength of schedule (SOS) is a factor to determine how strong a team’s opponents were compared to facing an average set of opponents for the field. A value above 1 indicates a tougher than average schedule; below 1 is a lower than average schedule. In reasonably balanced pools, it is typical to have top teams below 1 and bottom teams above 1 – a top team doesn’t play itself, but its high point tally contributes to the total of one of its weaker opponents. Also, by comparing across multiple pools/sites, SOS can give an overview of how strong a pool/site was.
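Written out in the same shorthand, these three steps are:

\[ \mathrm{RROppPPG}_i = \frac{1}{g_i} \sum_{j \in \mathrm{opp}(i)} \mathrm{RRPPG}_j, \qquad \mathrm{SetPPG} = \frac{\sum_k \text{RR points}_k}{\sum_k \text{RR games}_k}, \qquad \mathrm{SOS}_i = \frac{\mathrm{RROppPPG}_i}{\mathrm{SetPPG}} \]

where $g_i$ is the number of RR games team $i$ played and $\mathrm{opp}(i)$ lists its opponents, with repeats for multiple meetings.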
Now for the biggest leap: the points a team earned must be modified to account for how strong its schedule was. Racking up 400 PPG is far more difficult against national contenders than against novices. Adjusted RRPPG multiplies points by the SOS factor – a tougher schedule gives a team a higher adjusted point total. This adjusted value theoretically represents a team’s PPG if they faced a slate of average teams. Note: this value is not shown in result tables.
This value is suitable on its own for ranking. However, I add an extra step of normalizing for the set, so I can compare across years. Earning 400 PPG is far more difficult when the set average is 200 than when it is 300. For example, the late ’90s/early ’00s had much higher set point totals than today (through different formats), and normalization is needed to compare the historical teams of that era to today’s. The calculated result is the raw R-value, which I convert to a percentage for easier comprehension of how far from average a team is.
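The adjustment and normalization steps, in the same shorthand:

\[ \mathrm{AdjRRPPG}_i = \mathrm{RRPPG}_i \times \mathrm{SOS}_i, \qquad R^{\mathrm{raw}}_i = \frac{\mathrm{AdjRRPPG}_i}{\mathrm{SetPPG}} \times 100\% \]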
Raw R-value is the number I use for most comparison purposes. In earlier posts, I tried to show how this statistic can predict future performance (especially playoffs) and flag outlier results. If the R-value is to be used for any sort of qualification system, however, it needs to account for the universally-accepted idea that winning games matters most. Almost all tournaments base final ranks primarily on winning (either in playoffs or just prelim results). A team with a low raw R-value that finishes ahead of a team with a high raw R-value deserves qualification just as much as (if not more than) the teams below it in the standings. The actual R-value is then calculated, based on NAQT’s system (quoting from their D-value page):
After the raw values are computed, they are listed in order for each [site] and a correction is applied to ensure that invitations do not break the order-of-finish at [a site]. Starting at the top of each [site], each team is checked to see if it finished above one or more teams with higher D-values. If it did, then that team and every team between it and the lowest team with a higher D-value are given the mean D-value of that group and ranked in order by their finish.
Let’s say a site winner had a raw R-value of 120% and the runner-up had a final upset while finishing with a raw R-value of 140%. Under this adjustment, both teams end up with the mean, 130%, for their true R-value. The winner receives a boost for finishing above one or more stronger teams, while the lower teams receive a penalty for not reaching their “potential”. The true R-values would then be compared across pools/sites for qualification purposes; if tied teams straddle the cutoff for qualification, invites are issued in order of rank at the tournament.
I do deviate slightly from this formula, though. It is possible, but rare, for the top-ranked team in this average to end up with a lower R-value for finishing higher than a stronger team (e.g: 1st 120%, 2nd 80%, 3rd 130%; all teams get 110%). I don’t believe this should ever happen. If it does, I modify the averaging by this algorithm:
1. First, follow the NAQT algorithm.
2. If the first team in the averaged group ends up with an R-value lower than their raw R-value, ignore the last team of the group (which has a higher raw R-value than the first team).
3. Move up to the team one rank above the formerly-last team and attempt the average again. Repeat until the first team improves upon their raw R-value.
4. Continue the NAQT algorithm with the next team after the new set of averaged teams.
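For the programming-inclined, here is a sketch of the averaging with my tweak. The function name and input format are mine, purely for illustration; `raw` is a list of raw R-values in order of finish, 1st place first.

```python
def final_r_values(raw):
    """Apply the NAQT-style averaging to raw R-values listed in order of
    finish (index 0 = 1st place), with the tweak described above: shrink
    the averaged group from the bottom whenever the mean would drop the
    group's top team below its own raw value."""
    final = list(raw)
    n = len(raw)
    i = 0
    while i < n:
        # all lower-finishing teams with a higher raw value than team i
        higher = [k for k in range(i + 1, n) if raw[k] > raw[i]]
        if not higher:
            i += 1
            continue
        j = max(higher)  # lowest-finishing such team: the group runs i..j
        # tweak: ignore teams off the bottom until team i improves
        while j > i and sum(raw[i:j + 1]) / (j + 1 - i) < raw[i]:
            j -= 1
        if j == i:  # no group left to average; move to the next team
            i += 1
            continue
        mean = sum(raw[i:j + 1]) / (j + 1 - i)
        final[i:j + 1] = [mean] * (j + 1 - i)
        i = j + 1  # continue after the averaged group
    return final

# The earlier example: a site winner at 120% with a 140% runner-up
print(final_r_values([120, 140]))      # both end up at 130
# The case I don't want: 120/80/130 leaves the winner at 120, not 110
print(final_r_values([120, 80, 130]))  # [120, 105.0, 105.0]
```

With the tweak, the winner in the second case keeps their 120% while the remaining inversion (80% ahead of 130%) still gets averaged away.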
Look at the 2016 Ontario Provincials results for an example. Woburn had a very high raw R-value (131.8%), but finished very low (22nd). Under the basic D-value algorithm, 4th-placed London Central would have joined the big set of teams all the way down to Woburn, and ended up with a decrease in their R-value, thanks to the many intermediary teams with low raw R-values. Instead, Woburn was ignored, and the next-lowest team with a higher raw R-value (Hillfield at 132.9%) was tested. Again, this would drop Central’s R-value because of the low value for intermediary Marc Garneau. It is only an average with 5th-placed Waterloo that allows Central to improve on their raw result. From this, the algorithm goes to the next “unaveraged” team, Marc Garneau, who starts the group all the way down to Woburn because they earn a slight R-value boost. 6th through 22nd end up with a final R-value of 110.6% each.
And that’s how you get the R-value. The math isn’t that complicated, but it does require detailed number-crunching, especially for the opponent PPG step. Until more thorough result reporting occurs in SchoolReach, it is probably the best analysis that can be done with the information available. Thankfully, it is a fairly reliable metric for team performance, and I hope to show some examples in future posts.
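To make the raw-R-value number-crunching concrete, here is a small Python sketch of the steps above. This is my own illustrative code, not the exact tool I use, and the team names and scores are invented:

```python
from collections import defaultdict

def raw_r_values(games):
    """games: list of (team_a, points_a, team_b, points_b) round-robin
    results. Returns {team: raw R-value, as a percent of the set average}."""
    points = defaultdict(int)   # total RR points per team
    played = defaultdict(int)   # RR games per team
    opps = defaultdict(list)    # opponents faced, with repeats
    for a, pa, b, pb in games:
        points[a] += pa; points[b] += pb
        played[a] += 1; played[b] += 1
        opps[a].append(b); opps[b].append(a)

    rrppg = {t: points[t] / played[t] for t in points}
    # Set average from total points over total games, so larger sites
    # carry more weight, as described above
    set_ppg = sum(points.values()) / sum(played.values())

    values = {}
    for t in rrppg:
        opp_ppg = sum(rrppg[o] for o in opps[t]) / len(opps[t])  # RROppPPG
        sos = opp_ppg / set_ppg               # strength of schedule
        adjusted = rrppg[t] * sos             # adjusted RRPPG
        values[t] = 100 * adjusted / set_ppg  # normalized, as a percent
    return values

# A made-up three-team round robin
games = [("A", 400, "B", 200), ("A", 350, "C", 150), ("B", 300, "C", 250)]
print(raw_r_values(games))  # A ≈ 111.6%, B ≈ 95.0%, C ≈ 82.6%
```

Note that A, the top team in this toy field, ends up with an SOS below 1 (about 0.82), exactly the pattern described earlier: its own high scores inflate its opponents’ averages but not its own.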
The Partnership for Academic Competition Excellence (PACE) held their National Scholastic Championship (NSC) over the weekend. Unlike other major players in quiz tournaments, PACE is a registered non-profit that has a membership of coaches and former players. The NSC is their one tournament of the year (and a major fundraiser), while the rest of the year is outreach and assistance in the US.
Canadian teams have attended 5 NSCs. Lisgar attended in 2011, finishing 28th of 60 teams. White Oaks attended in 2016, finishing 81st of 96 teams. This past weekend, Lisgar sent two teams, including one that was fresh off their Reach for the Top championship. The “B” team, consisting of one of the champions and three additional players, did well in their second-phase bracket and ultimately finished 78th of 96 teams. Lisgar A had an excellent opening morning (losing only to the eventual second-place team), but struggled in their second phase and placed 22nd.
Edited to add: I had poor memory and missed Lisgar in 2013 and Waterloo CI in 2015. I went back to check that I didn’t miss, say, one of the Alberta teams, but I think all the appearances are covered.
Lisgar A’s result is very good in the context of Canadian teams. The American circuit is far more robust and competitive than the scene up north – Lisgar probably played fewer quizbowl games pre-nationals than some teams played tournaments! Colonel By’s 21st-place finish at the 2015 NAQT HSNCT still remains the high-water mark, unless you count a 1988 exhibition match in which a team from Earl Haig defeated the NAC-winning team. Nevertheless, Lisgar did well in a tough schedule that saw them face 3 of the eventual top 4 teams over the course of the opening day.