dougthonus wrote:
League Circles wrote:
That's the whole point. It doesn't need to be adjusted or to account for other factors, because, in the cases I'm referring to, it correctly identifies that the situation is a non-problem to begin with. That doesn't mean improvement isn't possible by taking the guy out of the role, but it compels you to look at other, unrelated changes first when you're deciding what to change, which is always the situation. Unlike composite and estimated metrics, it sometimes establishes beyond any doubt that you are doing very, very well with a guy in a role. Yes, that may be due to other factors, but that's irrelevant. Don't prioritize fixing what's working super well, even if you don't understand exactly why it's working so well. Start with the things that clearly aren't working well.
I'm not sure what to say from a mathematical/statistical perspective; what you're saying is objectively incorrect and absolutely not how this works. There is no reason to think that raw, non-regressed numbers that ignore important known sources of variance are better than versions of those numbers that account for them. If you want to believe that's how it works, there isn't much else to discuss.
I'm not saying one is better than the other. They can be used for different things. Adjusting the data surely gives you a better reflection of how good or bad a player is than the raw data does. But because basketball isn't baseball, that isn't always what you're trying to determine.
I think you think that I'm saying something much more complicated than I am. Here's a hypothetical extreme example that shows the simplicity of what I'm noting:
Let's say the 2010 Bulls were still doing well but weren't #1 in the league, and the coach was trying to decide what to tinker with. Suppose Keith Bogans had terrible adjusted numbers, indicating that he was hurting the team, while Korver and Brewer had better adjusted numbers, suggesting they might be better in an expanded role (and suppose, contrary to fact, that Keith actually played more than they did). If the coach looked at the raw numbers and saw that we were destroying teams while Keith was on the court (with the starters), it would be silly to start by removing him from that role, regardless of how bad he might be individually. That doesn't mean another player couldn't improve the level of destruction, but it does establish that removing him from that role would be solving a problem that doesn't exist. In this hypothetical, the team is by definition playing worse when he's off the floor. The coach would then be well advised to consider tweaking the unit that is doing worse before the unit that is doing better. Perhaps that means trying Kurt Thomas or John Lucas more with the second unit instead of benching Keith. It's just a simple way to identify problems, or rather to eliminate non-problems from focus, as a first step in decision making.
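The raw on/off split in the hypothetical above is simple arithmetic. A minimal sketch, with every number invented for illustration (these are not real 2010-11 Bulls figures):

```python
# Raw on-court net rating: points scored minus points allowed, per 100
# possessions, split by whether a player was on the floor.  No adjustment
# for teammates or opponents -- that's the whole point of the raw view.

def net_rating(points_for, points_against, possessions):
    """Raw net rating per 100 possessions."""
    return 100.0 * (points_for - points_against) / possessions

# Made-up season totals while a hypothetical role player is on vs. off the court.
on_court = net_rating(points_for=2150, points_against=1980, possessions=2000)
off_court = net_rating(points_for=2900, points_against=2850, possessions=3000)

print(f"on:  {on_court:+.1f} per 100")   # on:  +8.5 per 100
print(f"off: {off_court:+.1f} per 100")  # off: +1.7 per 100
```

With a split like this, the raw data says the lineups with the player are outscoring opponents heavily, whatever his individual adjusted numbers claim.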
I'm sure you know more about this kind of analysis than I do, but I also know more than most casual fans. What I'm trying to get at is something like an initial-value problem. I think (but might be wrong) that the weakness of adjusted data is that it starts from the wrong initial value: the individual metrics of the other players on the floor (opponents and teammates), which essentially treats them like baseball players instead of basketball players. Really, the initial value should be team performance, because that's what the game is defined by. To be clear, I'm definitely not saying that raw data is a more useful metric overall than adjusted data.
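For contrast with the raw view, adjusted plus-minus style metrics are typically built by regressing stint margins on player indicators, usually with a ridge penalty to tame noise. A toy sketch under that assumption (the four players and the stint data are entirely invented):

```python
import numpy as np

# Toy adjusted plus-minus: regress each stint's scoring margin on indicators
# for which players were on the floor.  All data below is made up.
players = ["A", "B", "C", "D"]

# One row per stint: +1 if the player was on one side, -1 on the other,
# 0 if off the floor.  y is that stint's margin per 100 possessions.
X = np.array([
    [ 1,  1, -1, -1],
    [ 1, -1,  1, -1],
    [-1,  1,  1, -1],
    [ 1,  1,  1, -1],
], dtype=float)
y = np.array([6.0, 2.0, -1.0, 8.0])

lam = 1.0  # ridge penalty: shrinks noisy per-player estimates toward zero
beta = np.linalg.solve(X.T @ X + lam * np.eye(len(players)), X.T @ y)

for name, b in zip(players, beta):
    print(f"{name}: {b:+.2f}")
```

Note that the regression's unknowns are per-player coefficients, which is exactly the "treats them like individual baseball players" framing: team performance (the stint margin) is decomposed into individual contributions rather than taken as the starting point.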
I know I probably didn't communicate this well, but the bottom line is that for some uses, much more limited but perfectly accurate and relevant data is more useful than metrics that are better for overall decision making and for more purposes, especially player comparison.