Calculated Reactions: The effect of interconference play on team rankings

Earlier this week, we introduced football power rankings, based in part on RPI and SRS, two widely used measures of team performance. (Okay, RPI is a little more widely used.)

The problem is all these OUA teams are at the top: Western, McMaster, and Ottawa all in the non-Laval top 3? Windsor at 8?

This isn't right. It's also based on just two or three games per team, and you're bound to get weird results. But we're not about to expand CIS football to 82 games, so we have to deal with it.

How can we do that?

One way is to introduce interconference results. If one conference consistently beats the other, then the teams in the losing conference won't rank so high anymore.

But we don't have a lot of preseason games to work with, and some of them are of questionable quality (fourth-stringers playing, quarterbacks not "live"), so I am loath to include any of them.

Still, it's worth knowing what happens when a game is played that crosses the previously-uncrossed interconference chasm.


If you count just the regular-season and (intra-)conference playoffs, the best teams last year, by RPI, were Western (.673), Ottawa (.637), Laval (.620), and McMaster (.617). Looks kind of like this year, eh? By SRS, the best were Laval (+25.1), Western (+14.5), Saskatchewan (+10.0), and Ottawa (+9.8).

When you add one interconference result — namely, Laval's 13-11 Uteck Bowl win over Western — here's what happens to the RPI:

  • Laval goes up 19 points, Western goes down 1. Makes sense. But neither team moves enough to change its overall ranking.
  • Every other OUA team loses 5-8 points, since they all played Western and the Mustangs now have a worse record.
  • Note that Western didn't lose as many points, because their decline in winning percentage was offset by the fact that they played a very good team. Losing to McGill would have been worse.

This is all very sensible. The marginal effect of one game on the RPI is basically as we expect.
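To make that marginal effect concrete, here is a minimal sketch of an RPI calculation using the common 25/50/25 weighting (winning percentage, opponents' winning percentage, opponents' opponents' winning percentage). The weighting, and the team names in the example, are assumptions for illustration; the exact CIS formula may differ.

```python
from collections import defaultdict

def rpi(games, team):
    """Basic RPI: 0.25*WP + 0.50*OWP + 0.25*OOWP.

    games is a list of (winner, loser) tuples. The 25/50/25
    weighting is the common NCAA-style formula, assumed here.
    """
    opponents = defaultdict(list)  # team -> opponents faced (with repeats)
    for w, l in games:
        opponents[w].append(l)
        opponents[l].append(w)

    def wp(t, exclude=None):
        # winning percentage, optionally ignoring games against `exclude`
        w = sum(1 for a, b in games if a == t and b != exclude)
        l = sum(1 for a, b in games if b == t and a != exclude)
        return w / (w + l) if w + l else 0.0

    opps = opponents[team]
    owp = sum(wp(o, exclude=team) for o in opps) / len(opps)
    oowp = sum(sum(wp(oo) for oo in opponents[o]) / len(opponents[o])
               for o in opps) / len(opps)
    return 0.25 * wp(team) + 0.50 * owp + 0.25 * oowp
```

Appending a single result, say an interconference bowl game, and recomputing reproduces exactly the kind of shifts listed above: the winner's WP term rises, and every team that played the loser sees its OWP term dip slightly.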

But SRS does not handle things the same way:

  • Laval loses 4.2 points, falling to +20.9, even though they won. Western goes up 4.4 to +18.9, even though they lost.
  • In fact, every team that played Laval, or had an opponent who played Laval, dropped 4.2, and every one of Western's opponents saw that same 4.4 increase.

This kind of makes sense when you look at it more closely. Before the Uteck, Laval's rating (+25.1) was 10 or 11 points better than Western's (+14.5); in other words, our best estimate was that Laval would win by about 10. But after the game, SRS looks at the two-point margin and says, "Aha! Laval was not in fact 10 points better, or else they would have won by 10!" And so the two ratings are adjusted to reflect that result (20.9 minus 18.9 is, after all, two points).
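For reference, SRS is conventionally defined so that each team's rating equals its average point margin plus the average rating of its opponents. A minimal iterative solver, sketched under the assumption of a connected, non-bipartite schedule (the published ratings may be computed differently):

```python
from collections import defaultdict

def srs(games, iterations=500):
    """games: list of (team_a, team_b, margin), margin = a's points minus b's.

    Fixed point: rating = own average margin + average opponent rating.
    Ratings are mean-centered each pass, so zero means "average team".
    Assumes a connected, non-bipartite schedule so the iteration settles.
    """
    results = defaultdict(list)  # team -> list of (opponent, margin)
    for a, b, m in games:
        results[a].append((b, m))
        results[b].append((a, -m))

    ratings = {t: 0.0 for t in results}
    for _ in range(iterations):
        new = {t: sum(m + ratings[o] for o, m in gl) / len(gl)
               for t, gl in results.items()}
        mean = sum(new.values()) / len(new)
        ratings = {t: r - mean for t, r in new.items()}
    return ratings
```

Running it on one conference's schedule, then appending a single crossover game and rerunning, shows the behaviour described above: every rating on one side of the crossover moves relative to every rating on the other.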

The problem is that this one result is all SRS has to judge the relative quality of those conferences, so the adjustment is applied to every team in them, including the AUS. This wouldn't happen if we had more than one game to work with; even three aren't enough, since the OUA teams still benefit the most after including the Mitchell Bowl and Vanier Cup.

Thankfully, there are other sports with more extensive interconference play...


Again we'll do the same thing: a before and after comparison of the ranking systems. Our "before" is once again all regular-season games and conference playoffs.

Entering the Final 8, Carleton was the No. 1 team in SRS, and No. 2 in RPI behind Cape Breton. Concordia was 3rd in RPI but 15th in SRS.

At first, we'll include just the Carleton-Concordia game, a 73-66 Ravens win, to confirm the same thing happens:

  • RPI gives more credit to Carleton and some OUA teams, and takes points away from the Q teams. Concordia, like the football Mustangs, doesn't get penalized that much for losing to the best.
  • SRS gives 8.3 points to every Q team and takes 2.0 away from every OUA team. Again, this is the over-correction: before the game, Carleton (+20.5) was considered to be about 17 points better than Concordia (+3.2). But they only beat them by seven. So 10 points, or so, had to be reallocated: 8 and change to Concordia and 2 away from Carleton.
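A quick sanity check on those numbers, using the ratings and score quoted above:

```python
# ratings from the Carleton-Concordia example above
carleton_before, concordia_before = 20.5, 3.2
predicted_margin = carleton_before - concordia_before  # 17.3

actual_margin = 73 - 66  # the Ravens won by 7, not 17

# after SRS reallocates, the gap between the two teams
# should match the observed margin
carleton_after = carleton_before - 2.0   # 18.5
concordia_after = concordia_before + 8.3  # 11.5
gap = carleton_after - concordia_after
print(round(gap, 1))  # 7.0, equal to the actual margin
```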
The Final 8 is only, well, a few games (seven that matter), so SRS still can't estimate properly, but RPI can: Carleton, TWU, and Saskatchewan each gain quite a bit. Reasonable.

If we include all 144 exhibition games along with the Final 8, things are much more believable. Winnipeg and Saint Mary's, who had respectable nonconference performances, gained more than two points in SRS; UQAM, Bishop's, and Queen's, who did not, each lost more than four. This is good news: it means that SRS can "settle down" once the list of interconference games is large enough. Here, we have 144 games among 42 teams, or three or four games per team.

Wrapping it up

This is all a very long-winded way of saying that SRS is too volatile to handle interconference games until it has a large number of them. Which is why it might be a bad idea to use it in football, where we have hardly any of them ... after all, do you really want to move every OUA team down 25 points and every Q and AUS team up 25? Because that's what would happen based on McGill's 37-14 "win" over Ottawa.

So, until every CIS football team plays three games against teams outside its conference, SRS will consider all conferences equal in quality right up until the bowl games. At that point, one game can throw everything into disarray.