hankscorpioLA wrote: I don't want to crap all over this, but I also don't really understand what the point is.
If you are trying to determine the "real" value of a particular stat then the way to do it is through analysis of the data.
I like using blocks as an example because you can easily see how different outcomes affect the value of a blocked shot. In ascending order of value, you have:
1) a blocked shot that goes back to the opposing team - this likely presents them with a fairly easy opportunity to score
2) a blocked shot that goes out of bounds - although the opposing team keeps possession, they now have to run an inbounds play, often with little time left on the shot clock (also a factor)
3) a blocked shot that goes out of bounds off an opposing player - your team gains possession, though play stops for the inbound
4) a blocked shot that results in a change of possession with the ball still live - these are the most likely to result in fast-break opportunities and easy buckets.
Now you look at each of these scenarios and figure out how often they result in points for or against.
So you start with a blocked shot - call it worth 2 points, because it prevents the opponent from scoring. But if the block goes back to the opposing team and you find that they score 70% of the time afterward, then that kind of block is "really" only worth 2 - (0.7 × 2) = 0.6 points saved. On the flip side, if the block goes to your team and you score 70% of the time, then that block is worth 2 + (0.7 × 2) = 3.4 points.
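The arithmetic above can be sketched in a few lines. This is just an illustration of the approach - the 70% follow-up scoring rates are the hypothetical figures from the post, not measured values, and a made basket is assumed to be worth a flat 2 points:

```python
# Expected value of a block, conditioned on who recovers the ball afterward.
SHOT_VALUE = 2  # points prevented by the block itself (assumed 2-point attempt)

def block_value(p_opponent_scores_after=0.0, p_own_team_scores_after=0.0):
    """Net expected points from a block, given what happens next.

    Pass p_opponent_scores_after if the opponent recovers the ball,
    or p_own_team_scores_after if the blocking team recovers it.
    """
    saved = SHOT_VALUE                                  # the shot that was prevented
    conceded = p_opponent_scores_after * SHOT_VALUE     # expected points given back
    gained = p_own_team_scores_after * SHOT_VALUE       # expected points added
    return saved - conceded + gained

# Block recovered by the opponent, who then scores 70% of the time:
print(round(block_value(p_opponent_scores_after=0.7), 2))   # 0.6
# Block recovered by the blocking team, which then scores 70% of the time:
print(round(block_value(p_own_team_scores_after=0.7), 2))   # 3.4
```

With real play-by-play data you would replace the hard-coded probabilities with observed frequencies for each of the four outcome types.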
This is just an example, but I would argue that this is the kind of approach that is needed.
Perhaps rather than soliciting opinions you could try to crowdsource this analysis.
What you did was basically what I was looking for: different perspectives. I know what you are getting at, but there are flaws in your theory as well. For example, if you block a shot, it usually means the opposing team has already gotten the ball inside. When that happens, the probability of scoring on that possession needs an adjustment, because they were already more likely to score than if they had not been able to penetrate.
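The adjustment described here amounts to measuring a block against the scoring probability of the possession it actually interrupted, not a league-average possession. A minimal sketch, where both probabilities are hypothetical placeholders rather than measured league data:

```python
# A block saves the points expected on THAT possession. If the shooter has
# already penetrated, the baseline expectation is higher, so the block saves
# more than the generic league-average figure suggests.
SHOT_VALUE = 2  # assumed 2-point attempt

def points_saved(p_score_given_situation):
    """Expected points a block saves, given the scoring probability
    of the specific possession it interrupted."""
    return p_score_given_situation * SHOT_VALUE

P_SCORE_AVERAGE = 0.50            # hypothetical: average half-court possession
P_SCORE_AFTER_PENETRATION = 0.62  # hypothetical: ball already inside

print(points_saved(P_SCORE_AVERAGE))            # baseline estimate
print(points_saved(P_SCORE_AFTER_PENETRATION))  # higher - the inside block saves more
```

The point is that conditioning on the situation changes the credit a block gets, which is exactly the kind of loophole the reply is pointing out.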
Another thing to consider: when you are using advanced analysis and you get a large enough sample size (a couple of years should be fine), the different types of blocks will level themselves out to some degree, so a single overriding exponent will basically do the trick.
I am not trying to argue with you at all; I am just saying that certain quantification techniques have loopholes in their logic. Someone may find a hole in my theory too.
I was just hoping a discussion would start so I could cherry-pick some of the good points that were made and fold them into my overall vision.