By Joe Landers
Click Here if you missed the Introduction to the Series
While I track individual performance along the offensive line over the course of each season, I’ve only been doing so for seven seasons. Believe it or not, this is not long enough to draw reliable conclusions. Only two classes (2009 and 2010) have at least six seasons in the league and that’s just not enough data. The preliminary data that I do have seems to back up my hypothesis that every position along the offensive line takes considerably longer to develop than any other position on the field. Unlike every other position we’ve seen so far, my hunch was that we would see the peak season for Centers, Guards, and Tackles pushed out beyond Year 4.
With the limited data I do have, it looks like the peak season for:
- Centers is between Year 5 and Year 7.
- Guards is between Year 3 and Year 7.
- Tackles is somewhere between Year 2 and Year 8.
Even with the medians, that still puts us at Year 6 for Centers, Year 5 for Guards, and Year 5 for Tackles – all further out than any other position. No surprise.
It sure looks like the NFL Owners knew what they were doing when they reached an agreement with the NFLPA making four years the default length of a drafted rookie contract. As you’ve seen, most positions have their average best year between Year 2 and Year 4. In fact, 66.8% of the 10,390 players who qualified for the study had their best season before Year 4. By the time drafted rookies hit their second contract, their best year is likely behind two-thirds of them. The outlying third are generally easy to see (e.g., Manning, Adrian Peterson, Calvin Johnson) and, from a business perspective, it is financially feasible to pay these outliers a premium. The ideal would be to pay a premium only to the outliers. As many of you may have determined while reading, the ideal is not always achievable.
This model is far from perfect. A number of things concern me about its reliability and how it can be interpreted: tenure scaling, formation variability, correlating ROI depreciation, isolated application, and replacement value.
First, the population needs to be looked at on a graduated scale for tendencies – my threshold was players who played at least three years.
I had a valid reason. (We’ve all seen the stat that the average NFL player lasts only three years in the league. The denominator in that three-year average comprises every player who drew a paycheck. I narrowed it down to just the players who contributed on the field for three seasons – not the practice squad, not IR, not NFI, not some other reserve designation for a full season. They had to contribute for three years.) Even with the cutoff at three years of productivity, are the tendencies different for those who played at least four years, five, six, seven, eight, or nine?
The problem here is time and the availability of performance data. Although my production data goes back to 2005 for all players and to 1998 for the majority, the population of quarterbacks who played nine seasons is so small that the data is not terribly statistically significant. Over time, trends will become more obvious. Perhaps the trends we see with ten years of data will not vary at all from data collected after twenty years; maybe they’ll be markedly different. I just don’t know.
Having set the minimum number of productive years at three, my suspicion is that the data is naturally skewed towards that third year, though not as drastically as setting the minimum at zero. If the minimum were set at five years, would the data skew towards that fifth year? I can say I started out looking at all players, whether they had any productivity or not. Essentially, the minimum was set at zero. The result was that all positions skewed towards the first year as being the most productive.
Intuitively, that didn’t seem right. Setting the minimum at three at least distributes the peak season away from Year 1. With my data, it’s incredibly time-consuming to test the theory that the best season tracks the minimum, so this is the first potential shortcoming of the study. Someone with a more efficient data structure could test it more readily.
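To make the concern concrete, here’s a minimal Python sketch of the test I’d like to run – the player names and season scores below are invented placeholders, not my actual data. The idea: filter players by a minimum number of productive seasons, find each player’s peak year, and watch whether the distribution of peaks shifts as the cutoff rises.

```python
from statistics import median

# Toy dataset: player -> per-season production scores (Year 1, Year 2, ...).
# All values are invented for illustration.
players = {
    "A": [70, 85, 90, 88, 80],           # peak in Year 3
    "B": [60, 75, 80, 95, 85, 82],       # peak in Year 4
    "C": [88, 70, 65],                   # peak in Year 1
    "D": [50, 60, 72, 74, 90, 85, 70],   # peak in Year 5
}

def peak_years(players, min_years):
    """Peak season (1-indexed) for each player with at least min_years of production."""
    return [
        scores.index(max(scores)) + 1
        for scores in players.values()
        if len(scores) >= min_years
    ]

# Compare how the median peak moves as the minimum-tenure cutoff rises.
for cutoff in (0, 3, 5):
    peaks = peak_years(players, cutoff)
    print(f"min {cutoff} years: peaks={sorted(peaks)}, median={median(peaks)}")
```

If the median peak climbs in lockstep with the cutoff, the three-year minimum is doing more of the work than the players’ actual development curves.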
Second, my performance data is not as accurate as I’d like it to be.
Without going into painful detail, each position should be graded differently for each action on the field. For example, a fumble recovery by a tight end in his own end zone with his team holding a 3-point lead (resulting in a safety and his team still holding a 1-point lead, versus the opponent recovering the fumble, scoring a TD, and the TE’s team losing the lead) should be valued higher than a fumble recovery on 2nd down at the opponent’s 45-yard line in the 1st Quarter of a tied game.
A corner like Richard Sherman knocking down a game-winning pass to Michael Crabtree on the last play of the game is worth more than knocking down a 3-yard out at the 40-yard line on 2nd down in the 2nd Quarter of a 3-3 game. Actions need variable value based on game situation to be valued appropriately. I don’t know anyone who does this today in fantasy land or in the real world, but it’s what needs to happen to assign accurate values. Formation variability needs to be taken into account as well.
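As a rough illustration of what situational valuation might look like – the weights and thresholds below are entirely invented, not anyone’s actual grading system – a play’s raw value can be scaled by a leverage multiplier built from game context:

```python
# Illustrative sketch only: scale a play's raw grade by game situation so a
# game-deciding pass breakup counts for more than the same action in a
# low-leverage spot. Every weight and threshold here is an invented example.
def situational_value(raw_value, quarter, seconds_left_in_quarter,
                      score_margin, yards_to_own_end_zone):
    leverage = 1.0
    # Late-game, one-score situations carry more leverage.
    if quarter >= 4 and abs(score_margin) <= 8:
        leverage += 1.0
    # The final two minutes of a one-score game carry even more.
    if quarter >= 4 and seconds_left_in_quarter <= 120 and abs(score_margin) <= 8:
        leverage += 1.0
    # Plays backed up near a team's own end zone swing points more often.
    if yards_to_own_end_zone <= 10:
        leverage += 0.5
    return raw_value * leverage

# Sherman-style last-play breakup vs. a routine 2nd-quarter breakup:
clutch = situational_value(1.0, quarter=4, seconds_left_in_quarter=5,
                           score_margin=-6, yards_to_own_end_zone=82)
routine = situational_value(1.0, quarter=2, seconds_left_in_quarter=400,
                            score_margin=0, yards_to_own_end_zone=60)
```

The same pass breakup grades three times higher in the clutch scenario than in the routine one – which is the whole point of situational grading.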
Due to some data acquisition limitations, I haven’t been able to achieve that level of detail. PFF, FO, and a number of other organizations keep detailed grading records for every player. Would it all even out in the end? Maybe so. There’s a chance even the most specific performance data (even data vetted by team position coaches) would arrive at the same macro-level results I have, but I need to concede this potential shortcoming in my desire to be precisely accurate.
Third, part of the value of this data is in understanding whether, for example, Von Miller is going to be worth a 5-year, $100m contract as a 6th-year linebacker.
The odds are that he’s seen his peak season and that the ROI on his productivity will only decline from 2016 on. If someone like Michael Ginnitti from @spotrac had similar year-by-year trending data, I’d be curious to see whether the peak ROI year mirrors the peak years by position in this study. Maybe there’s no point to such a comparison, since every player who has his peak year in his first four years (contract terms limited by the new CBA) will have an outlandish ROI compared to his performance in the years covered by his 2nd contract (Year 5 and beyond). Nevertheless, it’s a correlation I’d like to be able to see, prove, and understand relative to these results.
Fourth, using this model alone to predict future performance for rookies, RFAs, or UFAs would be flawed.
If I’ve learned anything over the last seventeen years of attempting to predict NFL personnel performance, it’s that there is no single model or individual metric that predicts success. Each model provides valuable insights that need to be taken in context to paint a clear picture. For the record, my other models are:
- The Relevance of the Combine
- Is It Really In the Water?
- Best Offensive Line Development
- Sudden Impact – Rookie Impact by Position
- The Importance of Day One
- Over Veteran Average (OVA) – Predicting Rookie Success
- Draft Domination versus NFL Success
If the Best Season model were used alone as a compass, Tom Brady would’ve been let go after the 2007 season. He’s an outlier; he’s a Hall-of-Famer. To let go of Brady, Charles Woodson, J.J. Watt, or Adrian Peterson because they’ve passed the average Best Season for their position would be to suggest that they are not future Hall-of-Famers, and it would suggest there’s someone available who’s going to perform better – which takes me to my last point.
Lastly, while a player like Dez Bryant may have his best season behind him, who was Dallas going to find that was better than Dez?
Cam Newton may never exceed his performance in 2015, but that doesn’t mean that there’s someone better to take his place. Drew Brees’ best season was in 2011, but there hasn’t been anyone better for New Orleans. Aaron Rodgers? 2011. No one better for Green Bay. Brady, 2007. None better for the Pats. Sometimes teams have to pay non-HOFers like future HOFers simply because there’s no one available who’s even close to the player in question. This is probably the most salient point illustrating why this model should not be used alone.
In sum, there’s something to understanding the average peak season by position. There’s something to acknowledging that players who are great in 2015 will not be great four years from now. In general, every position has a peak and then a declining value over time – a depreciation model, if you will. At the micro level, individual players have peaks and subsequent declines as well. Understanding the risk, establishing clear expectations (going into a 5yr/$100m deal knowing it will carry Von Miller through Years 7-11 after tagging him in 2016), and gaining some degree of predictability with players and positions are all intended uses of this model.
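The depreciation idea can be sketched in a few lines. The rise and decay rates below are placeholders, not the study’s actual figures; only the Year 6 center peak is borrowed from the tentative medians above:

```python
# Minimal sketch of the "depreciation model" idea: expected value rises
# toward a peak year, then declines. The rise/decay rates are invented
# placeholders for illustration; only the Year 6 peak comes from the
# tentative center medians discussed in the article.
def expected_value(year, peak_year, rise=0.25, decay=0.15):
    """Relative expected production (peak season = 1.0) for a given career year."""
    if year <= peak_year:
        return max(0.0, 1.0 - rise * (peak_year - year))
    return max(0.0, 1.0 - decay * (year - peak_year))

# A Year-6-peaking center's curve across a ten-year career:
curve = [round(expected_value(y, peak_year=6), 2) for y in range(1, 11)]
print(curve)
```

A team negotiating a second contract is, in effect, buying the right-hand side of this curve – which is exactly why knowing where the peak sits matters.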
Here’s to hoping this premise is acknowledged and taken light years beyond where I’ve taken it with this study in terms of applicability.