Big plays are the 3-pointers of football

The only way to prevent them is by preventing ANY kind of successful play.

In my first book, Study Hall: College Football, Its Stats and Its Stories (now available on Kindle, too!), I wrote the following passage in a chapter called “Coaches vs. Stats” (Pg. 138):

And perhaps most interestingly of all, Mack Brown initiated a long-term statistical study of … “about 70-some-odd variables” to figure out what truly impacts the college game. The study revealed that big plays (and therefore yards per play) were enormous factors, even bigger than expected. ... This study actually led to a change in philosophy for some; perhaps it is worth it to take risks and go for big plays despite the consequences?

There are interesting stats, and there are useful stats. Knowing that big plays matter is interesting. Figuring out how that can actually impact your approach to the game, however, is useful.

From a down-and-distance perspective, when do big plays happen?

If we were to define a big play as a rush gaining at least 12 yards or a pass gaining at least 16 — one of many definitions coaches use — here’s the national frequency of big plays for all of FBS football in 2016 by down and distance, along with the national success rate:

The success rate changes pretty dramatically based on down and distance. Aside from third-and-1, though, big play rates barely change at all.
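Just to make the definition concrete, here is that big-play cutoff as a function. This is a minimal sketch: the thresholds are the ones named above, and the play representation is hypothetical.

```python
def is_big_play(play_type: str, yards_gained: int) -> bool:
    """A big play, per the definition above: a rush gaining at
    least 12 yards or a pass gaining at least 16."""
    if play_type == "rush":
        return yards_gained >= 12
    if play_type == "pass":
        return yards_gained >= 16
    return False

# Toy usage: big-play rate over a (hypothetical) list of plays.
plays = [("rush", 3), ("pass", 22), ("rush", 14), ("pass", 5)]
rate = sum(is_big_play(t, y) for t, y in plays) / len(plays)
print(f"Big play rate: {rate:.1%}")  # 50.0% on this toy sample
```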

Granted, part of this is a scale thing — if you change your big play rate by just two percentage points, from, say, 11 percent to 13 percent, that’s one to two more big plays per game on the roughly 70 plays a team runs, and that could make an immense difference in your win probability.

Still, the ranges of expected efficiency and expected explosiveness based on down and distance are different animals.

(By the way, consider that wording a harbinger. The next step in adjusting my S&P+ ratings, probably after this coming season, will be taking expected success into account instead of simple success rate. But that’s a different post.)

Since I put out my Five Factors post in January 2014, my view on big plays has changed dramatically. Here’s what I said in the original post:

Big plays are probably the single most important factor to winning football games. The team more adept at creating numbers advantages and getting a guy into the open field, or the team that can simply outman its opponent and win one-on-one battles will almost certainly generate more big plays and win more games.

Nothing is more demoralizing than giving up a 20-play, 80-yard, nine-minute drive. But unless your team is Navy, that doesn't happen too often. Defensive coaches often teach their squads the concept of leverage — prevent the ball-carrier from getting the outside lane, steer him to the middle, make the tackle, and live to play another down. It is the bend-don't-break style of defense, and it often works because if you give the offense enough opportunities, they might eventually make a drive-killing mistake, especially at the collegiate level. If you allow them 40 yards in one play, their likelihood of making a drive-killing mistake plummets.

All of this is still basically true. But it goes back to interesting vs. useful. Every coach knows you should make more big plays than your opponent, just as every coach knows you should commit fewer turnovers, too. How?

If big plays can happen on any down, at a reasonably similar rate, then tell me where I’m wrong in saying this:

The key to explosiveness is efficiency. The key to making big plays is being able to stay on the field long enough to make one.

It doesn’t happen often, but one of the most fun things about getting into advanced stats has been the chance for a single revelation or a single article to completely change the way you look at a sport. There’s a nerd adrenaline rush to it that defies description.

I had watched plenty of baseball, and played with plenty of baseball stats, before I first read Voros McCracken’s musings about pitching and defense and how little control a pitcher actually has over the batting average on balls in play (BABIP) against him. But once you read it, you realize how many things you were looking at incorrectly.

I had nearly the same “How could I have not seen this?” moment back in 2012, when Ken Pomeroy began breaking down 3PT% defense. In a series of posts, he showed that about the only thing a defense controls is the opponent’s number of 3-point attempts; once the ball’s in the air, randomness largely takes over.

With few exceptions, the best measure of three-point defense is a team’s ability to keep the opponents from taking 3’s. This is what the [Rick] Majerus defense did, and fortunately for Billiken fans, Jim Crews is carrying on that tradition as SLU is currently allowing the nation’s second-fewest 3PA%.

When people say that advanced-stats users are a bunch of nerds, I can only think the people that don’t use them are the real dorks. Nobody with any knowledge of the game would talk about free throw defense using opponents’ FT% as if it was a real thing, yet we’ll hear plenty of references to three-point defense in that way from famous and respected people. Of course, both free-throw defense and three-point defense exist, but it’s much better measured in the amount of shots taken and not in the noisy world of the percentage of shots an opponent makes.

His series on this is very much worth your time whether you like basketball or not.

I had a miniature Eureka! moment of my own in the weeks following my Five Factors post. From my first dalliances with creating my own measures 10 years ago, my S&P formula was based on two relatively simple concepts:

  • Success rate, which defines every play as a success or non-success based on the following criteria: gaining 50 percent of your necessary yardage on first down, 70 percent on second down, and 100 percent on third or fourth down. I have always viewed it as an on-base percentage for football, more or less.
  • Points per play (PPP), which is based on the equivalent point values each play produces. Every yard line can be given an expected point value — as in, if you’re on your own 10, you can expect to score X.X points per possession on average (there are many ways of deriving those values) — which means that every play can have an approximate point value based on where it started and where it finished.

S + P, get it? The main problem here is that the P is heavily dependent on the S. A comment on an old Varsity Numbers piece at Football Outsiders crystallized that for me.

One way of measuring this that might be useful is PPP per successful play. That might more directly get at the key question - when you have successful plays, are they REALLY successful, or just a little successful?

The name IsoPPP is super awkward; a coach has recommended that I change it to “explosiveness” for clarity’s sake, and I might. But it’s not the hardest thing to explain: it’s my old PPP measure, but it isolates the successful plays. It boils all of football down to two questions: How frequently are you successful? And when you’re successful, just how successful are you?
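With the definitions all on the table, here is a minimal sketch of both measures in code. The success criteria are the ones from the list above; the equivalent-point value for each play (eq_pts) is assumed to come from whatever expected-points model you prefer, so it is treated as a given input here.

```python
def is_success(down: int, distance: int, yards_gained: int) -> bool:
    """Success per the definition above: 50 percent of needed yardage
    on first down, 70 percent on second, 100 percent on third/fourth."""
    needed = {1: 0.5, 2: 0.7}.get(down, 1.0) * distance
    return yards_gained >= needed

def success_rate(plays) -> float:
    """plays: list of (down, distance, yards_gained, eq_pts) tuples."""
    return sum(is_success(d, dist, yds) for d, dist, yds, _ in plays) / len(plays)

def iso_ppp(plays) -> float:
    """Old PPP, but isolated to the successful plays: the average
    equivalent-point value of the successes only."""
    successes = [p for p in plays if is_success(p[0], p[1], p[2])]
    return sum(p[3] for p in successes) / len(successes) if successes else 0.0
```

Success rate answers the first question; IsoPPP answers the second.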

With IsoPPP, I have a way of basically stripping apart my main efficiency component from my main explosiveness component. That’s exciting. And as it turns out, one of those two components is pretty damn random.

Go back and look at that first KenPom post I shared above. This was the chart that blew my mind a bit:

[Chart: split-half comparison of opponents’ 3P% and 3PA%, via Ken Pomeroy]

Here’s what you’re looking at:

I took last season’s conference-only data for every team and split it into two halves. Then I compared each team’s opponents’ 3P% between the first half and second half of the conference season. I did the same for opponents’ three-point attempt percentage (their percentage of field-goal attempts that are from three-point range). The following plots of that data should make it clear that opponents’ three-point accuracy is largely out of a team’s control.

One pretty easy and telling test you can run on data is what you could call the bucket test. Toss half the data into one bucket and half into the other, and see how similar they are. A reliable measure will produce pretty similar numbers in both buckets. An unreliable one won’t. Your opponent’s 3PT%, then, is extremely unreliable.

So are big plays.

I did the same sort of test for Success Rate and IsoPPP. I looked at conference games from the 2016 FBS season, tossing half of a team’s conference games in one bucket and half in the other and looking at the correlations.

Now, these correlations aren’t going to be as strong as those on the basketball side, in part because of sample size. Most teams play only eight conference games, and at most, including conference title games, you only play 10. So that’s basically half of the college basketball sample size and a pittance compared to pro basketball or baseball. One strange game in either bucket, and your numbers get blurry in a hurry.
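Here is a minimal sketch of that bucket test, under an assumed input format: a hypothetical table with one row per team-game in conference play, carrying each game’s success rate and IsoPPP. Alternate each team’s games between the two buckets, average each bucket, and correlate across teams.

```python
import pandas as pd

# Hypothetical per-game data: one row per team-game in conference play.
games = pd.DataFrame({
    "team":         ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "success_rate": [0.44, 0.48, 0.41, 0.46, 0.37, 0.35,
                     0.40, 0.33, 0.43, 0.39, 0.45, 0.42],
    "iso_ppp":      [1.28, 1.45, 1.17, 1.39, 1.22, 1.08,
                     1.31, 1.12, 1.35, 1.18, 1.42, 1.25],
})

def bucket_test(df: pd.DataFrame, stat: str) -> float:
    """Alternate each team's games into two buckets and return the
    correlation of the bucket averages across teams."""
    df = df.copy()
    df["bucket"] = df.groupby("team").cumcount() % 2
    means = df.pivot_table(index="team", columns="bucket", values=stat)
    return means[0].corr(means[1])

for stat in ("success_rate", "iso_ppp"):
    r = bucket_test(games, stat)
    print(f"{stat}: r = {r:.2f}, r^2 = {r*r:.2f}")
```

A reliable stat will show a strong correlation between buckets; a noisy one won’t.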

With that in mind, though, this small-sample data still tells a pretty clear story, one that I’d been catching onto for a while.

Efficiency has a decent correlation

Bucket 1 and bucket 2 were reasonably similar in the success rate department.

That’s reasonably similar to Pomeroy’s look at 3-point attempts above. The R² value isn’t quite as strong, but again, I’m betting sample size plays a role in that. This suggests that the go-to efficiency measure is pretty reliable.

Explosiveness has almost no correlation

Your IsoPPP average in bucket 1 bore almost no resemblance to your IsoPPP average in bucket 2.

There is the slightest of correlations there, but looking at data in this way leads one to believe that you don’t have much control over the size of your successful plays. The only thing you really have control over is the success.

In the last 3PT% post linked above, written some months later, Pomeroy took another approach to illustrating the randomness involved in 3-point shooting.

What I’ve done is taken both the top 20 and bottom 20 teams in opponents’ 3P% as of 12/4 each of the last four seasons, and recorded their three-point defense from December 5th onward. All figures are averages for the 20 teams in each group.

[O]n average over the rest of the season, there will be roughly a 1-2 percent difference between the teams currently in the top and bottom 20 of opponents’ 3-point percentage. We’re talking about a difference of one make every two or three games between the best and worst groups. Without more analysis, one can’t say precisely how much skill a team has at influencing its opponents’ three-point percentage, but there’s a fair amount of evidence it’s not much, and even end-of-season figures are the result of significantly more noise than skill.

Since I am nothing if not an ideas thief — most of my work has basically been an attempt to take advantage of college football being last in the analytics race — let’s do the same thing for Success Rate and IsoPPP.

For both 2015 and 2016, let’s look at the averages for the top 10 and bottom 10 teams (offense and defense) in each category seven weeks through the season, then let’s look at the same teams’ performance over the rest of the year.
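Here is a minimal sketch of that procedure, again assuming a hypothetical one-row-per-team-game table, this time with a week number attached.

```python
import pandas as pd

def persistence_test(games: pd.DataFrame, stat: str,
                     n: int = 10, cutoff_week: int = 7) -> dict:
    """Average `stat` for the top-n and bottom-n teams through the
    cutoff week, then for those same teams over the rest of the season.
    games: one row per team-game with 'team', 'week', and stat columns."""
    early = games[games["week"] <= cutoff_week].groupby("team")[stat].mean()
    late = games[games["week"] > cutoff_week].groupby("team")[stat].mean()
    top, bottom = early.nlargest(n).index, early.nsmallest(n).index
    return {
        "top_early": early[top].mean(),
        "top_late": late.reindex(top).mean(),
        "bottom_early": early[bottom].mean(),
        "bottom_late": late.reindex(bottom).mean(),
    }
```

Run once per season and per stat (success rate, then IsoPPP), this is the procedure behind the tables below; for defenses, lower is better, so flip the sort.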

First half of season vs. second half on offense

Offense                               Top 10 thru Wk 7   Bottom 10 thru Wk 7   Difference
Avg. Success Rate (first 7 of 2016)   52.0%              35.1%                 16.9%
Avg. Success Rate (rest of 2016)      49.3%              37.0%                 12.3%
Avg. Success Rate (first 7 of 2015)   51.5%              33.0%                 18.5%
Avg. Success Rate (rest of 2015)      48.7%              33.6%                 15.1%
Avg. IsoPPP (first 7 of 2016)         1.47               1.09                  0.38
Avg. IsoPPP (rest of 2016)            1.32               1.23                  0.09
Avg. IsoPPP (first 7 of 2015)         1.54               1.06                  0.48
Avg. IsoPPP (rest of 2015)            1.33               1.19                  0.14

This was a hard table to label, but here’s what you’re seeing:

  • Through seven weeks of 2016, the top 10 teams in success rate had combined for an average success rate of 52.0%. Over the rest of 2016, the teams in that bucket produced 49.3%. A little bit of regression to the mean, but not a ton. In 2015, the same exercise produced 51.5% through seven weeks and 48.7% thereafter.
  • Through seven weeks of 2016, the bottom 10 teams in success rate had combined for an average success rate of just 35.1%. Over the rest of 2016, they averaged 37.0%. In 2015, it was 33.0% and 33.6%, respectively. Minimal change. This suggests that success rate is a pretty stable offensive measure overall.

Through seven weeks last year, eight teams had produced a success rate of at least 50 percent, led by Washington at 55.7 percent. Among those, seven had at least a 48 percent success rate the rest of the way, and four were over 50 percent.

[Photo: Jake Browning and the Washington offense started 2016 efficiently and finished efficiently. At least until they played Alabama. Kyle Terada-USA TODAY Sports]

Meanwhile, nine teams had produced a success rate under 36 percent, “led” by Buffalo at 33.8 percent. Only three of those nine produced a success rate over 37 percent the rest of the way: South Carolina (39.2 percent), UMass (39.0 percent), and SMU (38.9 percent). Since the national average was 43.1 percent, that means the best in this bunch still remained well below average.

  • On the other hand, the top 10 teams in IsoPPP through seven weeks of 2016 had produced an average IsoPPP of 1.47; from that point forward, they produced an average of just 1.32. In 2015, it was 1.54 and 1.33, respectively.
  • The bottom 10 teams in IsoPPP through seven weeks of 2016 had produced an average IsoPPP of 1.09; from that point forward, they produced an average of 1.23, just 0.09 points behind what had been the top 10. In 2015, it was 1.06 and 1.19, respectively.

Depending on where you are on the field, the difference between IsoPPP averages of 1.47 and 1.09 is about the difference between a 15-yard gain and a 21-yard gain. The difference between 1.32 and 1.23 is the difference between an 18-yarder and a 19-yarder.

Through seven weeks last fall, eight teams had an IsoPPP average of at least 1.45; only two were above 1.35 the rest of the way. On the flip side, seven offenses had an IsoPPP average of 1.10 or worse. Five improved to at least 1.26 over the second half of the season.

IsoPPP is a dramatically unstable measure, and this suggests that while there is a difference between the most and least explosive offenses in the country, it isn’t a dramatic one. Your success rate will remain similar throughout the season, but your IsoPPP will regress sharply toward the mean.

Is it the same story on defense? Pretty much.

First half of season vs. second half on defense

Defense                               Top 10 thru Wk 7   Bottom 10 thru Wk 7   Difference
Avg. Success Rate (first 7 of 2016)   29.8%              48.3%                 -18.5%
Avg. Success Rate (rest of 2016)      34.4%              48.0%                 -13.6%
Avg. Success Rate (first 7 of 2015)   29.3%              50.9%                 -21.6%
Avg. Success Rate (rest of 2015)      34.2%              47.9%                 -13.7%
Avg. IsoPPP (first 7 of 2016)         1.07               1.51                  -0.44
Avg. IsoPPP (rest of 2016)            1.23               1.40                  -0.17
Avg. IsoPPP (first 7 of 2015)         1.05               1.53                  -0.48
Avg. IsoPPP (rest of 2015)            1.26               1.37                  -0.11

The best success rate defenses midway through both 2015 and 2016 regressed toward the mean a bit more than the best offenses did, but the worst defenses barely changed at all. Meanwhile, the range in IsoPPP shrank from huge to small, just as it did with offense.

Efficiency is everything in college football. Explosiveness is too random to rely on without efficiency.

Unless you’re Penn State, anyway.