BorisDK1 wrote: Ripp wrote: Ah, I don't mean in some abstract way. Like, given a team Ortg and Drtg (not in the Oliver sense, in the basketballvalue.com sense), I can get a pretty good prediction of how many games the team will win next year (Google "Pythagorean wins" if you haven't seen this before.) It isn't perfect, but it does a solid job...that is why Hollinger and some of these other guys can pretty quickly forecast how many wins a team will get fairly early into the season (of course, Hollinger uses more complicated stuff that builds on this basic idea, but you get the point.)
Can one do the same with PDSS? How do I take the ratings for each player produced by PDSS and then produce an estimate of what the team Drtg will be next season? Or for the last 60 games of the current season, given that I've looked at the first 22? You can do that pretty well with variants of on/off...can you do the same with PDSS? In particular, if I cannot...then who cares what PDSS says?
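The Pythagorean idea referenced in the quote above can be sketched in a few lines. This is a minimal illustration, not anyone's exact model: the exponent 14 is one commonly used value for the NBA (published versions range from roughly 13.9 to Hollinger's 16.5), so treat it as an assumption.

```python
# Pythagorean expected wins from a team's offensive and defensive
# ratings (points scored/allowed per 100 possessions).
# Exponent 14 is an assumption; published NBA values vary (~13.9-16.5).
def pythagorean_wins(ortg: float, drtg: float,
                     games: int = 82, exp: float = 14.0) -> float:
    win_pct = ortg ** exp / (ortg ** exp + drtg ** exp)
    return games * win_pct

# A team scoring 110 and allowing 105 per 100 possessions projects
# to the high 50%s in winning percentage (roughly 54 wins over 82 games).
print(round(pythagorean_wins(110.0, 105.0), 1))
```

The point of the formula is exactly the one made above: it turns a team-level efficiency differential into a win projection that can be checked against actual results.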
Individual pythagorean wins with PDSS? Sure. Oliver actually has a Net Points formula estimating points allowed with the estimated DRat, which will only be more accurate when the exact totals are known. I'm not happy with the way it's working right now in some ways because I didn't track 3FGA, but next year that shouldn't be a problem.
As far as predicting team DRat? This team next year? No. Too much flux in personnel, too many unknowns, too many still unresolved questions. I don't think any system is going to accurately predict this team's defensive performance 1) until more things get resolved, namely, Calderon's future and 2) until we see this team at least play against non-summer league competition.
I'll discuss what I mean at the end of this post.
What "second term" are you talking about? Are you talking about the estimated DRat yet again? The two are not the same formula...build a bridge, and get over it.
Focus on the equation for the PDSS based DRat. You have a first term that is the same for every player on a team. You have a second term that is a function of Stop% (among other variables), and thus varies from player to player.
As I stated, both of these Oliver DRats involve adding a TEAM DRat number to some other number (the second term in your equation, I don't want to c/p it again) that corresponds solely to an individual. Like, if you want to compute this PDSS Drtg number, you first compute Stop% for Amir, then one for Jose Calderon. If Amir's Stop% is 100%, this second term will be a big negative number (say, -8 or -9.) If Jose's were, say, 0%, this second term is a big positive number (say +8 or +9.)
Yet depending on the overall average defensive performance of the Raps, the Drtgs for both players might be very high.
My point is that this second term has a pretty natural interpretation as how a player impacts his team defense (either he improves it or worsens it.) But if I understand you correctly, it isn't important if he improves or worsens the defense, but what the final defensive number is? Or am I misunderstanding you?
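The two-term structure being argued about can be sketched as code. The shape below (team baseline plus a Stop%-driven adjustment, weighted by 0.2) follows the commonly cited Basketball on Paper formulation of individual defensive rating; the input numbers are purely illustrative, not from any real team.

```python
# Sketch of Oliver's individual defensive rating decomposition:
# a team term identical for everyone, plus an individual term that
# varies with the player's Stop%. Inputs here are made up.
def individual_drtg(team_drtg: float, stop_pct: float,
                    d_pts_per_scposs: float) -> float:
    # First term: the same for every player on the team.
    team_term = team_drtg
    # Second term: negative for a high Stop% (player improves on the
    # team baseline), positive for a low Stop% (player worsens it).
    individual_term = 0.2 * (100 * d_pts_per_scposs * (1 - stop_pct)
                             - team_drtg)
    return team_term + individual_term

# With a team allowing ~2.1 points per opponent scoring possession:
good_stopper = individual_drtg(110.0, stop_pct=0.60, d_pts_per_scposs=2.1)
weak_stopper = individual_drtg(110.0, stop_pct=0.35, d_pts_per_scposs=2.1)
print(round(good_stopper, 1), round(weak_stopper, 1))  # ~104.8 and ~115.3
```

Note that both final ratings ride on top of the team term: on a team with a bad defensive baseline, even the good stopper's rating stays high in absolute terms, which is exactly the interpretive question raised above.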
Are assists reliable in the NBA? Then PDSS compiled by a competent and informed person is equally reliable.
Err, but what people usually do is to compile raw counting stats like assists, blocks, rebounds, etc. You instead are not presenting the raw counting stats you kept track of, but a formula involving your new counting stats. Just because I think assists are reliable in the NBA doesn't mean I necessarily think PER is reliable, or WS, or WP, etc.
Like I said earlier, I like the raw counting stats you kept, but don't have much personal confidence in this particular statistic involving those raw counting stats.
Nope; Neil's answer didn't answer the OP's question, because there isn't much of a difference between the possession estimates for guys who play on the same team...typically minutes are a very good proxy for this. The poster's question was, how the hell can we have a team where ALL of the major big-minute guys have great Oliver offensive and defensive ratings, yet have a terrible team? His point is that there is very likely something wrong with the formula.
Neil answered that question perfectly, by pointing out that you can't divvy these things up by minutes, but by individual possessions used. You evidently missed out on the fact that he's talking about summing the individual possessions, and not team possessions. I'd suggest that's due to unfamiliarity with Oliver's methodology.
The 6 or 7 guys who lead a team in minutes played almost certainly consume most of the defensive and offensive possessions. If those 6 or 7 guys all have Ortgs much higher than their Drtgs, and the team has a bad Ortg/Drtg differential (and thus not very many wins), then something is wrong. Saying that some guys who aren't playing very many minutes are the ones chewing up all the possessions is not a good explanation for this discrepancy. Think at a very intuitive level about what you are saying: we have a team where the top 6 or 7 guys in minutes played ALL have an Ortg much higher than their Drtgs. These are the top dogs on the team, the guys leading the team in minutes played.
Yet the overall team as a whole has an Ortg less than the Drtg...substantially so, in fact. And you are saying that this is fine, because if we weighted instead by usage, then it would normalize out?
Offensive Rating without usage isn't a meaningful metric. A guy can have an ORat of 138.2 - doesn't mean he's a good offensive player or that he was on a good offensive team. He may have just been extremely efficient on minuscule usage. You seem to think that the offensive rating is indicating team efficiency while he's out there: not true. The individual offensive rating measured Points Produced / 100 Individual Possessions. I think this is yet another case of you not being conversant in these metrics and making some basic mistakes of fact.
The top 6 or 7 guys in minutes played on a team will represent the lion's share of the possessions consumed. I'm more than familiar with the metrics...in fact, familiar enough with them that I can step back, see if they pass basic smell tests, and check that they make intuitive sense.
Individual ORat does not equal on-court team offensive efficiency. Please be clear on that. That's why that article was so flawed, because it tried to use the two concepts interchangeably. Similarly, the DRat (either estimated or PDSS) is not just a measure of on-court team defensive rating while a player is out there: it includes team defensive performance, but it's basketball's equivalent of an Earned Run Average.
I understand that. But how do I go from these individual ORats to the team ORat? That is my point...ultimately, if we want to check that the model we've built works well, it sure would be nice if it matched what actual game results are (e.g., team Ortg, team Drtg, same quantities for lineups, and finally Wins and Losses.)
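One way to frame the reconciliation being asked about: to a first approximation, the team ORtg should come out as the possession-weighted average of the individual ORtgs (each player's points produced per 100 individual possessions), not the minutes-weighted average. The sketch below uses invented players and numbers purely to show that the two weightings can disagree, which is the crux of the disagreement above.

```python
# Illustrative only: made-up (individual ORtg, possessions used,
# minutes played) tuples for four players.
players = [
    (115.0, 1200, 2800),
    (108.0, 1500, 2600),
    (120.0,  600, 2400),
    ( 95.0,  900, 1100),
]

total_poss = sum(p for _, p, _ in players)
total_min  = sum(m for _, _, m in players)

# Possession-weighted average: the aggregation Oliver's framework
# implies, since ORtg is points produced per individual possession.
poss_weighted = sum(o * p for o, p, _ in players) / total_poss
# Minutes-weighted average: the naive aggregation the OP had in mind.
min_weighted  = sum(o * m for o, _, m in players) / total_min

print(round(poss_weighted, 1), round(min_weighted, 1))
```

When high-usage players are less efficient than low-usage ones (as in the fake data above), the possession-weighted team figure comes out lower than the minutes-weighted one, so a roster full of shiny individual ORtgs can still aggregate to a mediocre team ORtg.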
I understand the formula, and read the justification listed on page 204 (but do not have access to that of Chapter 3, or Appendix 1, where I guess he justifies it further.) The thing is, there are many such formulas one can come up with that have the exact same properties, that trade off between the opposing team's ability to shoot and their ability to grab offensive rebounds. Why is this particular one the right way to go? Why not Fmwt squared? Why is his particular way to trade off between the two variables the correct one?
Why not?
Honestly, you're trying to apply a universal negative here and it's not going to work. Prove there's something flawed in it, or move on.
That is not how statistical methodology works. You justify different quantities you use, not say, "Well, what else would you use?" Ideally you justify it by showing that nothing else makes sense, and moreover there are independent ways to show that any other choice leads to a bad outcome. In practice, you find some sort of weaker justification. And if you cannot do this, then you say that the choice is a bit ad hoc and arbitrary.
What do you mean, "what objective reference"? What objective reference is any basketball metric compared to? It's a measure of individual defensive performance based on the outcome of every play during the course of a basketball game. From that, you can easily develop a stop%, DPoss% and ultimately a defensive rating. Does it communicate a lot more directly about players than peripheral analysis? IMO, yes.
It's as meaningful as any other efficiency-based individual metric for any of the functions you name.
No. I don't trust or believe, for example, Berri's Wins Produced, because you cannot use it to predict what is going to happen in games. Things like PER and +/- based approaches can be used that way...that is why I have confidence that they have some value.
This is the point I'm making....if you have a formula and have some quantity that you cannot justify, then why can't someone else take your formula and change the value 10 in your formula to a billion, and claim his formula is better than yours? How do you show him that yours is right (or at least, better), and his is wrong?
It is easy to do this with variants of PER and +/-....you try to predict the number of games won at the end of the season by your favorite team (or better yet, all NBA teams.) You can do this successfully with some techniques (i.e., within some margin of error), but less successfully with certain other techniques.
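The validation procedure described above can be sketched as a scoring loop: take each metric's predicted wins per team, compare to actual wins, and rank metrics by mean absolute error. The (predicted, actual) pairs below are invented for illustration, not real forecasts.

```python
# Score a metric's season-win predictions by mean absolute error.
# The prediction/actual pairs are fabricated for illustration.
def mean_abs_error(pairs):
    return sum(abs(pred - actual) for pred, actual in pairs) / len(pairs)

metric_a = [(54, 56), (41, 38), (30, 33)]  # hypothetical metric A's forecasts
metric_b = [(60, 56), (50, 38), (20, 33)]  # hypothetical metric B's forecasts

print(round(mean_abs_error(metric_a), 2), round(mean_abs_error(metric_b), 2))
```

Under this test, the metric with the smaller error (A, in the fake data) is the one you'd say has demonstrated predictive value; a metric whose errors are consistently large, within no reasonable margin, fails the test regardless of how plausible its formula looks.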