Using the charting data from every Big Ten football game in the 2025-26 season, excluding garbage time and games against FCS opponents, we can make granular comparisons of how each team performed. The strongest correlates of scoring strength that survive regardless of opponent quality — that is, what teams bring with them into the postseason against opponents of comparable quality, unclouded by the ups and downs of the scoreboard — are the core fundamentals of per-play efficiency: success rate, yards per play, and explosiveness in rush and pass performance. I’ve laid out these metrics for each Big Ten team in my last two articles (part 1 is here, part 2 is here).
This article examines the “spanner in the works” – the factor most likely to keep a team’s drive efficiency from matching its per-play efficiency: negative play rate. That is, when a team produces scoring drives less (or more) often than its per-play efficiency suggests it should, the reason is almost always that it has a problem going backwards too much … or, for teams that are more drive efficient than they seem like they should be, that they’re very good at avoiding going backwards.
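To make that concrete, here’s a minimal sketch of the kind of check described above, using invented numbers rather than my actual charting data: fit drive efficiency against per-play efficiency, then see whether the leftover is explained by negative play rate.

```python
# Minimal sketch with hypothetical team-level numbers (not the article's data):
# does the part of drive efficiency that per-play efficiency can't explain
# line up with negative play rate?
from statistics import linear_regression, correlation

yards_per_play = [6.3, 5.9, 5.1, 4.8, 5.5]       # per-play efficiency proxy
points_per_drive = [2.9, 2.4, 1.7, 1.9, 2.1]     # drive efficiency
neg_play_rates = [0.10, 0.12, 0.15, 0.11, 0.14]  # negative plays / meaningful snaps

# Expected drive efficiency from per-play efficiency alone
slope, intercept = linear_regression(yards_per_play, points_per_drive)
residuals = [actual - (slope * ypp + intercept)
             for ypp, actual in zip(yards_per_play, points_per_drive)]

# With these illustrative numbers the correlation comes out strongly negative,
# which is the pattern described above
print(f"residual vs. negative play rate: {correlation(residuals, neg_play_rates):+.2f}")
```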
There are dozens of kinds of negative plays in football, too many to list effectively. The important distinction, which retrodictive testing against hundreds of millions of individual datapoints in the global datapool supports, is that a play or penalty losing even a single yard has a demonstrable effect on playcalling — both for the offense and the defense — for the rest of the sequence, far more than a zero-gain play such as an incomplete pass.
I put each negative play type into one of nine subcategories, with a quick tagging sketch after the list (later, I’ll cluster these into three supercategories for useful discussion):
- tackle for loss on designed rush
- sack / QB scramble for lost yardage on designed pass play
- backwards pass (typically screens that are blown up)
- bad snap / botched backfield exchange (unforced by defense)
- liveball advantage foul (mostly holding, plus OPI, illegal blocks, a few others)
- fumble (forced by defense)
- interception
- procedural foul (mostly false starts, plus illegal formations, other presnaps)
- deadball / safety foul (mostly unsportsmanlike, some personal fouls away from the play)
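For readers who like the bookkeeping spelled out, here’s a minimal version of that taxonomy as code; the tag names and the one-tag-per-snap convention are my shorthand for illustration, not a literal dump of the charting schema.

```python
# Nine negative play subcategories, tagged per meaningful snap (None = clean snap).
# Tag names are illustrative shorthand, not the actual charting schema.
SUBCATEGORIES = {
    "tfl_designed_rush",     # tackle for loss on a designed rush
    "sack_or_neg_scramble",  # sack / QB scramble for lost yardage
    "backwards_pass",        # blown-up screens and other backwards passes
    "bad_snap_exchange",     # unforced bad snap / botched backfield exchange
    "liveball_foul",         # holding, OPI, illegal blocks, a few others
    "fumble",                # forced by the defense
    "interception",
    "procedural_foul",       # false starts, illegal formations, other presnaps
    "deadball_foul",         # unsportsmanlike, away-from-play personal fouls
}

def negative_play_rate(tags: list[str | None]) -> float:
    """Share of meaningful snaps that ended in any negative play."""
    return sum(t is not None for t in tags) / len(tags)
```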
Most offenses generate around 650 meaningful snaps in a season, though pace of play, a late postseason run, avoiding an FCS game, and a disproportionate number of games that go to the wire or turn into blowouts can all affect the final number; this past year in the Big Ten the range swung by more than a hundred snaps in both directions. To keep comparisons on-point, I’ll list all statistics in this article on a rate rather than absolute count basis.
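Concretely, that rate basis, read against the FBS norms quoted in the next paragraph, is how every figure below should be interpreted; the counts in this sketch are placeholders, not any particular team’s 2025 numbers.

```python
# Rate basis used throughout: negative plays divided by meaningful snaps, then
# read against FBS norms (quoted in the next paragraph). Placeholder counts only.
FBS_MEDIAN, FBS_STDEV = 0.126, 0.024

def standardized_rate(negative_plays: int, meaningful_snaps: int) -> tuple[float, float]:
    """Return (negative play rate, distance from the FBS median in standard deviations)."""
    rate = negative_plays / meaningful_snaps
    return rate, (rate - FBS_MEDIAN) / FBS_STDEV

rate, z = standardized_rate(negative_plays=88, meaningful_snaps=650)
print(f"{rate:.1%} of snaps, {z:+.2f} st.dev vs. the FBS median")
```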
Over a decade and a half of charting conferences around the country, I’ve found that the FBS median for negative play rate is about 12.6% of snaps with a standard deviation of 2.4 percentage points, and the eighteen Big Ten teams in 2025 fit the norm almost exactly, with a 12.64% median and a standard deviation of precisely 2.40 points. Here’s how each team did overall:
While tackles for loss in the designed rushing game are the biggest single subcategory, they also seem to be “baked into the cake”, with a certain fraction — about 3.5% of all plays — simply being accepted, and with very little variance between schemes, conferences, or years, though there are occasional wild outliers. In other words, beyond avoiding being one of the rare horrible TFL rate teams, there’s not much an offense can do to change its TFLs; they’re just a baseline component of its negative play rate.
The two remaining large subcategories do have substantial variation, however: sacks and procedurals. QB escapability and the willingness to throw the ball away combine to bring the sack rate down substantially, whereas QBs who are either statues in the pocket or keep holding the ball outside of the pocket even when there’s no play to be made can drive the sack / negative-scramble rate significantly above average. There’s a lot going on in the procedural subcategory (an entire separate article could be written about which coaching staffs are more and less effective at mechanical operations), but in terms of variance it essentially boils down to how good offensive lines are at avoiding false starts – with a swing from one every other game for the best lines to two or three every single game for the worst.
The remaining subcategories are usually under one percent apiece, and the pattern is that for a team to move the needle on its overall rate here, it either needs to be very bad at two of them (for example, throwing a lot of interceptions and committing a lot of deadball fouls) or a bit better than usual across the board. It’s very, very unusual to find a team so extremely good or bad at any one subcategory other than sacks, TFLs, or procedurals that it meaningfully changes its overall negative play rate, simply because those plays don’t happen often enough for that to be so.
Although they don’t happen as frequently as sacks, there is one other aspect of QB play with statistically significant variance and a major impact on game outcomes: the interception rate. (Luck plays some role in this – from charting I count inadvisable throws that should have been picked off as “interceptable”; the rate at which these are actually intercepted is always less than 100% and can vary from game to game. In the Big Ten in 2025, however, luck in this matter was fairly consistent, so we can take the interception rate at basically face value.) Precisely because interception rate tracks the individual QB, and in turn the game circumstances particular to each QB, ball security is a variable and modelable factor … astute readers will have noted these discussions in my opponent previews, and only for QBs with higher-than-baseline interception rates.
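Here’s the bookkeeping behind that parenthetical as a quick sketch; the counts are invented for illustration, not charted figures.

```python
# Separate the QB's decisions (interceptable throws) from the outcome (actual
# interceptions); the gap between the two is where luck lives. Invented counts.
def interception_ledger(interceptable: int, intercepted: int, attempts: int) -> dict[str, float]:
    return {
        "interceptable_rate": interceptable / attempts,  # QB decision quality
        "interception_rate": intercepted / attempts,     # what the box score shows
        "conversion": intercepted / interceptable,       # always under 100%, varies game to game
    }

print(interception_ledger(interceptable=24, intercepted=13, attempts=380))
```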
With the subcategories described, let’s cluster them into supercategories to differentiate Big Ten teams’ reasons for performing differently than their per-play efficiency would predict. The first and most obvious supercategory is turnovers; the second is non-live penalties reflecting discipline, meaning procedurals plus deadball and safety fouls (the last two bullet points on the list above); and the third is everything else, which amounts to the offense simply playing badly enough from scrimmage, in one way or another, that it lost yardage.
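Continuing the tagging sketch from earlier, the clustering is just a mapping from the nine tags to the three supercategories; again, the labels are mine.

```python
# Map each subcategory tag to its supercategory (labels are illustrative shorthand)
SUPERCATEGORY = {
    "fumble": "turnovers",
    "interception": "turnovers",
    "procedural_foul": "discipline",
    "deadball_foul": "discipline",
    # everything else: the offense lost yardage from scrimmage
    "tfl_designed_rush": "scrimmage",
    "sack_or_neg_scramble": "scrimmage",
    "backwards_pass": "scrimmage",
    "bad_snap_exchange": "scrimmage",
    "liveball_foul": "scrimmage",
}

def supercategory_rates(tags: list[str | None]) -> dict[str, float]:
    """Per-supercategory share of meaningful snaps (None = clean snap)."""
    counts = {"turnovers": 0, "discipline": 0, "scrimmage": 0}
    for tag in tags:
        if tag is not None:
            counts[SUPERCATEGORY[tag]] += 1
    return {name: n / len(tags) for name, n in counts.items()}
```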
In turnovers, there are a number of clear correlations at the extremes – some of the best ball security teams, like Indiana and Ohio State, were also the lowest overall negative play rate teams with the best-performing offenses, and vice versa for Northwestern, Purdue, and Wisconsin, while many others clustered in the middle with turnover rates aligned to their overall scores:
However there are some interesting deviations as well. Minnesota’s first-year QB took care of the ball very well, a good indicator for his future. Rutgers had a very low turnover and overall negative play rate and winds up orthogonal to this project, part of their season-long baffling issue of having an excellent offense in every metric between the 20s, only to fall apart in the redzone. Three teams with poor overall negative play rates due to sloppy play from scrimmage — Michigan State, UCLA, and Washington — nonetheless had good ball security. Four teams were the other way around, with good scores overall mainly due to escapable QBs who’d throw the ball away and/or o-lines without false start problems, but with significant ball security issues – those were Iowa, Maryland, Michigan, and USC.
Next let’s look at the “discipline” supercategory, which represents procedural penalties plus deadball and safety fouls. One thing to note here is that there was some daylight between this Big Ten dataset and the global FBS dataset, with the conference median coming in half a percentage point lower thanks to Big Ten officials’ well-documented laxity:
Four teams are notable here for being relatively low in discipline penalties — MSU, Nebraska, Purdue, and Wisconsin — precisely because their overall negative play rates and their offenses in general were so poor … meaning, their play from scrimmage was all the worse for not being “assisted” by the o-line false starting or players getting in fights constantly. Maryland and USC have significantly higher discipline foul rates than their overall negative play rates would suggest, but these are in line with their historical norms given their typical o-line quality, and the differential is just an artifact of how effective their scrambling QBs were at avoiding negative plays. UCLA and UW also having spectacularly high discipline foul rates is unsurprising given historical norms; they didn’t show the large differential that Maryland and USC did because their scrambling QBs didn’t throw the ball away, as we’ll discuss shortly. The outlier in this year’s chart is Oregon, which for the first time in a generation had an offensive line that committed fouls at an above- rather than below-average rate.
Now for the play from scrimmage chart. This mostly tracks the overall negative play rate chart, simply because the bulk of all negative plays come from this supercategory, though there are some exceptions:
Northwestern and Purdue get special mention, as their negative plays from scrimmage are merely average but their overall negative play rates are very poor – that’s because, as mentioned earlier, Purdue was amazingly terrible at ball security, while Northwestern was pretty bad at both ball security and discipline fouls. Penn State and Rutgers are the other way around: towards the higher end of the midrange in scrimmage play, but kept to the lower end of the midrange overall because they did so well on ball security and procedurals.
Otherwise, what the extremes are basically tracking is the truism that sacks are a quarterback stat – while protections vary, they’ll all give up pressure some of the time (and protections weren’t great in the Big Ten this year, with several historical stalwarts like Ohio State, Oregon, Penn State, and Wisconsin having subpar lines), and ultimately QBs decide what happens once pressure gets through – escape to make a play, get rid of the ball harmlessly … or not. Indiana, Maryland, and USC had QBs who fit the former description, while MSU, UW, and Wisconsin had the latter (UCLA’s QB was a wild mix, while Nebraska split the season between two types of QB due to an injury).
Finally, some notes on individual subcategories in 2025 with interesting outliers.
* Oregon and Wisconsin shared the dubious distinction of being the only teams with over 1% of meaningful plays featuring bad snaps or botched exchanges unforced by the defense. These rates were far higher than any other Big Ten team’s in 2025 and far above FBS norms; typically the rate is about 0.4%, and the third-highest conference team in 2025 was at 0.31%.
* The biggest variance was in the failed backwards pass subcategory, with multiple teams (Illinois, Indiana, Rutgers, UCLA) having essentially none, and more than half of the conference having about 1.5% of all meaningful plays end this way. There doesn’t appear to be any real driver here other than schematic choice – some teams run a lot of screens and some don’t, and those that do risk a certain percentage getting blown up.
* Liveball penalties, particularly holding fouls, had in previous seasons followed a U-shape, punishing the worst offenses and, perversely, the best offenses. In 2025 this stopped being the case and instead these fouls just seemed to be called arbitrarily; I am at a loss as to what game Big Ten officials are playing.
* Above I mentioned that, throughout the global FBS dataset, there are occasional large outliers in the TFL subcategory, which is otherwise a fairly stable baseline for all offenses across time and conferences. The Big Ten had one such extreme TFL outlier in 2025: Michigan State. The regression engine automatically produced this chart, and I intended to save it for my Summer preview of the Spartans, but it’s too striking not to share now: