It’s tough to be an NHL general manager evaluating goalies these days. With advances in equipment and the technical focus on the position, it’s increasingly difficult to tell the difference between a guy who can play in the NHL and one who will hear his name chanted by rabid fans late into the spring. With the amount of salary cap money being committed to goaltending, it’s ever more important to guess correctly. Most advanced player statistics evaluate performance over large sample sizes for the purpose of determining value, and big-picture goaltending analytics are evolving with the same purpose in mind.
For years, goalie statistics were based on team wins and losses, goals-against per game, and save percentage. Save percentage is a simple calculation: saves made divided by shots on goal faced. For example (statistics via war-on-ice), Pekka Rinne of the Nashville Predators played 64 games last season. He faced 1807 shots and made 1667 saves, allowing 140 goals for a save percentage of .923, 7th in the league. (He’s good.)
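The calculation itself is trivial; here’s a quick sketch in Python using the Rinne figures above (all numbers come from the text, nothing else is assumed):

```python
def save_percentage(saves: int, shots: int) -> float:
    """Save percentage: saves made divided by shots on goal faced."""
    return saves / shots

# Pekka Rinne, per the figures above
shots_faced = 1807
saves_made = 1667
goals_against = shots_faced - saves_made          # 140
sv_pct = save_percentage(saves_made, shots_faced)
print(f"{sv_pct:.3f}")                            # -> 0.923
```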
The problem with save percentage has always been that not all goals allowed are the result of a goalie’s specific actions. Total shots faced is almost completely out of the goalie’s control and will vary with the relative quality of the teams involved. And, clearly, some scoring chances are more dangerous than others.
There are emerging statistics that try to account for this “degree of difficulty.” As Greg Balloch of InGoal Magazine recently explained, an adjusted save percentage statistic separately weights a goaltender’s performance on shots from “low,” “medium,” and “high” danger zones, and extrapolates the data to a 60-minute 5-on-5 scenario in order to eliminate situational and special-teams variations (e.g. penalty kills). This is an improvement, since it incorporates, to some degree, the performance of the defense in front of the goaltender. For example, Pekka Rinne’s high-danger-zone save percentage last year was .852, and his adjusted save percentage of .935 kept him within the top 10 goaltenders in the league in this category as well. This type of statistic also makes some comparative analysis possible; Paul Campbell recently compared current Maple Leafs goalies Jonathan Bernier and James Reimer using advanced analytics.
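For illustration only, here’s one way a danger-zone-weighted save percentage could be computed. This is a simplified sketch of the general idea, not the actual formula Balloch describes; the zone weights and shot counts below are invented, not real data.

```python
# Hypothetical sketch: weight per-zone save percentages by a fixed
# league-average shot mix, so a goalie's number isn't skewed by how
# many easy or hard shots his particular defense happens to allow.
# The mix and the sample counts are made up for illustration.
LEAGUE_SHOT_MIX = {"low": 0.50, "medium": 0.30, "high": 0.20}

def adjusted_save_pct(saves_by_zone: dict, shots_by_zone: dict) -> float:
    adjusted = 0.0
    for zone, weight in LEAGUE_SHOT_MIX.items():
        zone_sv_pct = saves_by_zone[zone] / shots_by_zone[zone]
        adjusted += weight * zone_sv_pct
    return adjusted

# Invented sample season: excellent on low-danger shots, weaker up close
shots = {"low": 800, "medium": 500, "high": 300}
saves = {"low": 784, "medium": 470, "high": 256}
print(f"{adjusted_save_pct(saves, shots):.3f}")  # -> 0.943
```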
Even these adjusted calculations, though, are flawed as a comparative goalie evaluation, precisely because they don’t account for specific game variables such as power plays, penalty kills, and end-of-game empty-net scenarios. (To be fair, many situational variations are addressed in separate metrics.) Performance in these situations may very well be what separates the truly elite goaltenders from the very, very good ones. For example, a goaltender who can excel against high-danger shots on the penalty kill is more valuable than one who shows excellent 5-on-5 statistics but sags when shorthanded.
Save percentages, even adjusted ones, can be misleadingly equivalent. For example, who had a better game? A goalie who faced 33 shots, gave up 3 goals, including 2 in the 3rd period, and lost, or the opposing goalie who faced 22 shots, several of which were from high danger zones on a couple of power plays, gave up 2 goals, and won? By the numbers, the performances were essentially identical: both stopped 90.9 percent of the shots they faced. Most coaches would take the game-winner.
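You can check the near-identical numbers in that example directly:

```python
# Two very different games, same raw save percentage
games = {"losing goalie": (33, 3), "winning goalie": (22, 2)}  # (shots, goals)
for label, (shots, goals) in games.items():
    sv_pct = (shots - goals) / shots
    print(f"{label}: {sv_pct:.3f}")  # both print 0.909
```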
Adjusted save percentage does acknowledge that not all shots are created equal, which is a good thing. What it doesn’t address, though, is that not all goals are equal either. A single goal, given up at the end of a game with a one-goal lead, is clearly more costly to a team than one given up in the second period with a three- or four-goal lead. A goal allowed on a seemingly harmless shot can be psychologically devastating to a team, particularly in the playoffs.
Game flow is also underrepresented. Say two goalies face the same number of shots in a game and give up the same number of goals. One opposing team might have carried a significant possession advantage, with deep cycling and side-to-side play behind the net, while the other generated its shots on shorter possessions and quick counterattacks. The workload for the two goalies is completely different, irrespective of shot totals.
Don't get me wrong. I like the new analytics. In fact, I see adjusted save percentage as an excellent indicator of basic competence at a given level of competition. Anyone significantly below the league average is unlikely to have a long NHL career. A goalie able to sustain a level above the average over the course of an entire season would likely be considered an upper echelon starting goaltender.
Ultimately, though, these numbers are just that, numbers. If Pekka Rinne had allowed 30 more goals last season (fewer than one for every two games he played) he would have been about 54th in the league in save percentage, and we would be wondering what all the fuss was about. In fact, he has yet to duplicate his regular season success in the playoffs. Jonathan Quick, on the other hand, is not overly impressive when judged by statistical analysis, yet would be at the top of a very short list of goalies most coaches would give the net in game 7 of the Stanley Cup Final. There’s more to it than the numbers. And no, it’s not just being “calm” or “aggressive,” or positioning “outside the blue paint,” as the TV folks would lead us to believe.
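The arithmetic behind that Rinne hypothetical checks out (shot and save totals from earlier in the piece; the league-rank figure is the article’s claim, not computed here):

```python
shots_faced, saves_made = 1807, 1667   # Rinne's totals from earlier
extra_goals = 30                       # the hypothetical from the text
hypothetical_sv_pct = (saves_made - extra_goals) / shots_faced
print(f"{hypothetical_sv_pct:.3f}")    # -> 0.906, down from .923
```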
At the end of last season, I started what should probably be called an overanalysis series, in which I studied single save sequences, or goals allowed, in obsessive detail. On each individual play or sequence, I found, there was something to learn about the goaltender involved. I criticized Ben Bishop for errors in his skating instincts, and Andrei Vasilevskiy for being overaggressive, which reduced their chances of making saves on specific high danger chances. I praised Corey Crawford for his ability to make an outstanding, technically sound recovery save, but saw that poor rebound control on what should have been an easy save was what allowed the chance in the first place. I commented on Frederik Andersen’s inexperience in the Western Conference Final, and marveled at the technical brilliance of Carey Price during a milestone regular season game that I attended with my family.
This “eye test” is what I enjoy the most about watching goalies. (That, and the pad and mask designs!) There are myriad technical factors that aren’t quantifiable, but can be observed. For example, is the goalie’s positioning style suited to one defensive system more than another? Is the goalie a competent puck-handler who can neutralize an opponent’s forecheck and allow aggressive defense of zone entries at the blue line? Does he have a movement tendency or positional flaw that might prevent him from sustaining his high level of performance year to year, or during the playoffs? Or conversely, does his technical base predict continued or increasing success?
My personal preference is to obsess over these technical and aesthetic details of a goalie’s performance, and let someone else do the math, keeping in mind that both are necessary. It’s nice to have confidence that a goaltender will perform consistently, like Rinne or Tuukka Rask of the Bruins, especially if he’s going to get paid a good chunk of money. What we all want to know is whether he will perform at critical moments of important games. And, let's be real, what we really want to know is if he can get it done in the playoffs when he gets the chance, like Jonathan Quick or Henrik Lundqvist, or even Corey Crawford.
That’s ultimately my point here. We need the big picture provided by analytics, because large sample size performance metrics can help determine who belongs in an NHL net and who doesn’t. We also need the small picture provided by detailed analysis of technique, because the truly elite goalies prove their value when small sample sizes are all that matter.