Do you ever think about how Over Expected metrics (CPOE, RYOE, etc.) are just model error terms?
And then what does it mean to assign 100% of the model error to a single omitted variable (QB, RB, etc.)?
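To make that concrete, here's a toy sketch of what an Over Expected stat is mechanically: the mean residual between outcomes and a model's predictions. The numbers and the "model" here are made up, not a real expected-completion model — the point is just that every omitted variable (QB skill included) and all the irreducible noise land in the same bucket.

```python
def over_expected(actuals, predictions):
    """Mean residual: everything the model can't explain, noise included."""
    residuals = [a - p for a, p in zip(actuals, predictions)]
    return sum(residuals) / len(residuals)

# Hypothetical attempts: actual completion (1/0) vs modeled completion probability.
completions = [1, 0, 1, 1, 0]
x_comp = [0.7, 0.4, 0.9, 0.6, 0.4]

cpoe = over_expected(completions, x_comp)
print(round(cpoe, 2))  # 0.0 -- mean residual; "the QB's skill" by assumption only
```

Calling that residual "CPOE" is the modeling choice the thread is questioning: the arithmetic doesn't know whether the leftover is the QB or just unmodeled noise.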
I think it's fine for something like CPOE, which is fairly stable after one season of performance
Basically a QB's past CPOE tells us something about their future CPOE, so it probably measures something about their ability
But I really don't know what to make of RYOE, which is super unstable year to year (min 50 attempts)
To me this says the model's error term has a lot more in it than the RB can explain, so we shouldn't call RYOE an RB stat unless we've got a really big sample
Like, even if we only look at RBs with >1000 career carries, yards per attempt and RYOE have a correlation north of 0.60
A single game of RYOE might as well be YPA
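The stability check being described is just a year-over-year correlation: line up each player's metric in season N against season N+1 and compute Pearson's r. The data below is synthetic, purely to show the computation — high r suggests the metric tracks something persistent about the player, near-zero r suggests it's mostly model noise.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation via population covariance / (sd_x * sd_y)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# Synthetic: each index is one player's OE metric in back-to-back seasons.
season_n  = [0.04, -0.02, 0.01, 0.03, -0.05]
season_n1 = [0.03, -0.01, 0.00, 0.04, -0.04]

print(round(pearson(season_n, season_n1), 2))  # 0.96 -- a "stable" metric
```

Run the same computation on real RYOE season pairs and, per the thread's claim, you'd see a much weaker r than CPOE shows.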
@CoachCClement
I'm guilty of this too, but we should try not to do it
There is one tweet specifically I'm thinking of that is just egregious confirmation bias using an OE metric
@greerreNFL
Nobody talks about these choices. With FG kickers it makes sense, but in baseball, for example, we only assign variance to variables that demonstrate it's theirs.
@greerreNFL
Your statement is absolutely true. RYOE & CPOE are aggregate residuals of a model.
We are still working through the bridge between our current CNN architecture and what we believe to be a possible solution (mixed effects).
Then there is the temporal aspect (change over time).
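The mixed-effects idea, very roughly: instead of handing a player 100% of their residual, treat player effects as random and shrink small-sample estimates toward zero. Here's a toy shrinkage sketch — the factor `k` is a stand-in for the noise-to-player variance ratio that a real mixed-effects model would estimate from the data, not something you'd hardcode.

```python
def shrunken_effect(residual_mean, n_plays, k=100):
    """Empirical-Bayes-style shrinkage: the player only keeps the share of
    their mean residual that their sample size can support. k is a made-up
    placeholder for noise_variance / player_variance."""
    return residual_mean * n_plays / (n_plays + k)

print(round(shrunken_effect(0.5, 50), 3))    # 0.167 -- 50 carries: mostly noise
print(round(shrunken_effect(0.5, 1000), 3))  # 0.455 -- 1000 carries: mostly signal
```

Same raw RYOE per carry, very different credited skill: that's the baseball-style "prove the variance is yours" logic applied to the rushing model's residuals.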