trainwreckog wrote:rapm is measuring the same thing apm measures, but is just doing a better job of it right?
APM and RAPM are both based on a regression over the individual game snippets (stints), so in that sense they are trying to "measure" the same thing: how much a player changes the outcome of a possession.
APM uses OLS (ordinary least squares), while RAPM uses ridge regression. Even though there are far more game snippets than players in the dataset (more equations than variables), the problem is ill-posed, because players who share the floor a lot are nearly collinear. In that situation ridge regression will give a better prediction than OLS (a mathematically proven result). But the ridge penalty also introduces a bias, which acts like an anchor, drawing everyone toward the mean.
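To make that shrinkage concrete, here is a minimal numpy sketch on randomly generated, purely hypothetical stint data (none of the numbers come from real games): as the ridge penalty grows, the spread of the estimates collapses, i.e. everyone gets pulled toward the league-average player.

```python
import numpy as np

# Toy sketch with made-up data: ridge pulls all estimates toward zero
# (the "average player"), and more so the larger the penalty.
rng = np.random.default_rng(0)
n_stints, n_players = 2000, 60

X = rng.integers(0, 2, size=(n_stints, n_players)).astype(float)  # who was on the floor
true_impact = rng.normal(0.0, 2.0, n_players)                     # assumed "true" impact
y = X @ true_impact + rng.normal(0.0, 10.0, n_stints)             # noisy stint margins

XtX, Xty = X.T @ X, X.T @ y
beta_ols = np.linalg.solve(XtX, Xty)                              # APM-style (no penalty)
print("OLS spread:", beta_ols.std())

for lam in (100.0, 1000.0, 10000.0):                              # RAPM-style penalties
    beta_ridge = np.linalg.solve(XtX + lam * np.eye(n_players), Xty)
    print(f"ridge spread (lambda={lam:g}):", beta_ridge.std())    # shrinks as lambda grows
```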
The better prediction comes partly from resolving the issues with multicollinearity and partly from that regression to the mean.
Calculating APM and RAPM comes down to matrix algebra, which makes it pretty simple to handle the thousands of equations that go into one season (each game snippet, 5-man unit vs. 5-man unit, is one equation).
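As a rough illustration of that setup (the stint format, player names and penalty value here are invented for the example, not taken from any real dataset): each snippet becomes one row of the design matrix, +1 for the home five, -1 for the away five, a home-court column, and the scoring margin as the target.

```python
import numpy as np

# Hypothetical input format: (home_five, away_five, margin_per_100_possessions)
stints = [
    (["H1", "H2", "H3", "H4", "H5"], ["A1", "A2", "A3", "A4", "A5"], +6.0),
    (["H1", "H2", "H3", "H4", "H6"], ["A1", "A2", "A3", "A4", "A6"], -3.0),
    # ... thousands more rows over a full season
]

players = sorted({p for home, away, _ in stints for p in home + away})
idx = {p: i for i, p in enumerate(players)}

X = np.zeros((len(stints), len(players) + 1))   # last column = home-court advantage
y = np.zeros(len(stints))
for r, (home, away, margin) in enumerate(stints):
    for p in home:
        X[r, idx[p]] = +1.0                     # on the floor for the home team
    for p in away:
        X[r, idx[p]] = -1.0                     # on the floor for the away team
    X[r, -1] = 1.0                              # home-court indicator
    y[r] = margin

lam = 2000.0                                    # ridge penalty; lam = 0 gives plain APM
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
ratings = dict(zip(players, beta[:-1]))         # per-100-possession impact estimates
```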
trainwreckog wrote:if you look at the lakers point differential as a team in the years 2008-2010, it is much higher then their point differential in 2002-2004, so there is a bigger pie to calculate kobe's slice from in 2008-2010.
Because a regression is used, the derived coefficients are by construction adjusted for the strength of teammates, the strength of opponents and home-court advantage. It is also a misunderstanding of the method to believe that an overall lower team point differential automatically leads to a smaller RAPM value; that is not the case. A player may simply have played with weaker teammates against better opponents on average, which lowers the scoring margin but not necessarily his RAPM/APM figure. In essence, there isn't really a "pie" which grows bigger or smaller.
trainwreckog wrote:this could easily not be the reason.. i'm just taking a shot at it... i had never looked up rapm before.. from what i can tell, there is a ton of error without proper sample sizes, which can take years to accumulate (since there is often not big sample sizes of minutes for specific 5-man units vs. other specific 5-man units).
For RAPM a full season is usually sufficient to get a good picture, but that holds more for the overall value than for the separate offensive and defensive values. The issue comes with the interpretation of the numbers: a player doesn't have to be a good defender to post a positive defensive value. He may simply be adequate defensively while his great offense allows the coach to surround him with better defenders, which improves his defensive number. There is also a connection between offense and defense; for example, a player who helps his team avoid turnovers reduces the opponent's fast-break and secondary-break opportunities, which are usually more efficient than normal halfcourt sets. A similar effect can occur with a strong defender who disrupts offensive sets and forces turnovers that end up as steals, which in turn lead to more efficient offensive opportunities.
What we can take from those numbers is how much a player's presence changed the outcome of a possession, on offense, on defense and overall. We can't say how that was achieved; for that we would need the "eye test" and the boxscore to come to a meaningful conclusion. But we can still say, with reasonable certainty, by how much the outcome was/is changed (APM leans more toward "was", RAPM more toward "is").
Also, such numbers may not be directly comparable between seasons (for various reasons, including sample size). To normalize them, I suggest dividing by the standard deviation before starting a comparison, as sketched below. In essence, a player making a 1-sigma difference in one season is as valuable as a player making a 1-sigma difference in another season.
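A minimal sketch of that normalization, assuming each season's RAPM values are available as a simple player-to-value mapping (a hypothetical format, just for illustration):

```python
import numpy as np

def standardize(season_rapm):
    """Convert one season's RAPM values (player -> rating) to z-scores."""
    vals = np.array(list(season_rapm.values()))
    mu, sigma = vals.mean(), vals.std()
    return {p: (v - mu) / sigma for p, v in season_rapm.items()}

# A player at +1 sigma in one season and +1 sigma in another is treated as
# equally far above the league's typical impact, even if the raw spreads differ.
```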