Is PDSS a good way to analyze the Raptors defense?
Re: Is PDSS a good way to analyze the Raptors defense?
-
FluLikeSymptoms
- Retired Mod

- Posts: 19,115
- And1: 8,718
- Joined: Nov 26, 2004
- Location: TBD
Re: Is PDSS a good way to analyze the Raptors defense?
Here's a stat: 1 of the 3 number crunchers on this board understands the game of basketball and the value of spreadsheet scouting.
Re: Is PDSS a good way to analyze the Raptors defense?
- ranger001
- Retired Mod

- Posts: 26,938
- And1: 3,752
- Joined: Feb 23, 2001
-
Re: Is PDSS a good way to analyze the Raptors defense?
thesciencedroppa wrote:Here's a stat: 1 of the 3 number crunchers on this board understands the game of basketball and the value of spreadsheet scouting.
Here's another one. 100% of the people who make up stats have a 100% margin of error.
Re: Is PDSS a good way to analyze the Raptors defense?
- dacrusha
- RealGM
- Posts: 12,696
- And1: 5,418
- Joined: Dec 11, 2003
- Location: Waiting for Jesse Ventura to show up...
-
Re: Is PDSS a good way to analyze the Raptors defense?
ranger001 wrote:thesciencedroppa wrote:Here's a stat: 1 of the 3 number crunchers on this board understands the game of basketball and the value of spreadsheet scouting.
Here's another one. 100% of the people who make up stats have a 100% margin of error.
So, you're saying you're wrong as well?
"If you can’t make a profit, you should sell your team" - Michael Jordan
Re: Is PDSS a good way to analyze the Raptors defense?
- ranger001
- Retired Mod

- Posts: 26,938
- And1: 3,752
- Joined: Feb 23, 2001
-
Re: Is PDSS a good way to analyze the Raptors defense?
I could be wrong, but a 100% margin means I could also be right.
Re: Is PDSS a good way to analyze the Raptors defense?
- chsh22
- Analyst
- Posts: 3,252
- And1: 1
- Joined: Aug 22, 2007
- Location: The Watcher
Re: Is PDSS a good way to analyze the Raptors defense?
Ripp wrote:Ah, but the key point you are missing is that AVERAGE performance can be determined. And that this is usually one of the best predictors of future performance. This is why the coin-flipping example is so apt....if you can determine the parameter P to great accuracy, you'll learn a tremendous amount about the future average behavior of the coin-flipping sequence.
Heck, if in 1000 post-up plays run against player X, only 200 of them are successful, what is likely to be the outcome of the next post-up play run against that player? Or an even more nuanced question would be, what is the expected number of points the other team will generate by posting up on player X?
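For concreteness, the estimator in that quoted example amounts to something like the sketch below; the counts are invented, and this is just the plain averaging idea, not anyone's actual model:

```python
# Toy version of the quoted estimator: with invented play-by-play counts,
# estimate the success rate and expected points per post-up against player X.

plays = 1000          # post-up plays run at player X (hypothetical)
successes = 200       # plays that ended in a score (hypothetical)
points_scored = 430   # total points generated on those plays (hypothetical)

p_hat = successes / plays              # estimated P(success) for the next post-up
pts_per_play = points_scored / plays   # expected points the opponent generates per post-up

# Rough 95% confidence interval on p_hat (normal approximation)
se = (p_hat * (1 - p_hat) / plays) ** 0.5
print(f"P(success) ~ {p_hat:.3f} +/- {1.96 * se:.3f}")
print(f"Expected points per post-up ~ {pts_per_play:.2f}")
```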
Except average performance is a terrible predictor when you are ignoring so many variables. In your example, if Shaq is actively defending Oliver Miller for the first 1000 post-up plays, and then the rules change and say he can't body up or impede Miller's movement to the basket, how valuable is the data from the first 1000 post-ups going to be?
Let's take Steve Nash as an example. Were any stats able to predict that he had the capacity to be a two-time MVP after his first four seasons?
There are simply too many variables for stats to be useful as a predictor. This is even more true for defense because on defense it's rare that a single player is 100% responsible for defending.
Look at what rule changes have done for certain types of point guards.
You can get rid of some of the matchup issues by looking at a team's performance instead of an individual's, but it definitely doesn't get rid of them.
If Jose ONLY played at the same time Bargnani does, and Bargnani only when Jose does, then yes, what you say is true...the two players become coupled, and it would be impossible to distinguish them. But given a sufficient amount of observations of them not together on the floor, you can determine a lot. You can sort of think about it in terms of linear algebra and systems of equations, if you like....but it is pretty intuitively true w/o really delving into math.
What you're missing is that defense is a team-generated thing, and that trying to isolate or compare Bargnani won't show anything of value because the actual skill of the entire team at D has varied so greatly over the measured timeframe.
Looking at individual defensive numbers for one season helps how? When taken in isolation there is no information on how a good defender will look vs a bad one. Especially when so much of the game revolves around switches / specific defensive strategies for specific star players.
I mean, this is a hypothesis of yours that may or may not be true.
Really? I'd say that the results of everything you guys are showing pretty well back up the concept that the situation is too complex to measure, let alone build statistics off of.
But...what if they do act as predictors? Hollinger nails the season records pretty accurately using his techniques. This Voulgaris guy makes a lot of money gambling on games using his custom system, which seems to use variants of APM also. Ultimately, being able to understand average performance tells us a lot about future performance...
There is no point saying it cannot be done, when people are doing it right now.
You're talking complete team versus individual. Hollinger's player rating system is, by his own admission, offensively biased.
What's the threshold on his accuracy of prediction with PER or Pythagorean W/L? How many games off do you have to be to decide it's a failed prediction?
This past year, he figured we were going to go 35-47. Is 40-42 a failed prediction?
How about in 06-07, when he predicted 33 wins and we got 47?
Sometimes he gets it fairly right, but every year there are a number of teams (between 3 and 7 - more lately) that he just gets what I would consider "wrong" -- more than 3-4 games of swing.
I don't see what real value there is in pursuing a largely defense-oriented stat other than as an exercise. If that's all it is, cool. But that's not how this discussion has led me to believe PDSS or the other methods introduced here are being billed.
theonlyeastcoastrapsfan wrote:If you were going to give the raps board an enema, you'd stick the tube in this thread.
Re: Is PDSS a good way to analyze the Raptors defense?
-
Ripp
- General Manager
- Posts: 9,269
- And1: 324
- Joined: Dec 27, 2009
Re: Is PDSS a good way to analyze the Raptors defense?
chsh22 wrote:Except average performance is a terrible predictor when you are ignoring so many variables. In your example, if Shaq is actively defending Oliver Miller for the first 1000 post-up plays, and then the rules change and say he can't body up or impede Miller's movement to the basket, how valuable is the data from the first 1000 post-ups going to be?
So a "rule change" is different from "ignoring so many variables." A lot of those sorts of time-varying effects can be dealt with by making your averaging process time-dependent as well. For example, if in the coin-flipping example the probability of coming up heads slowly changes with time, I can place greater emphasis on more recent data than on older data. So you'd have to come up with a bit more complicated a case than "ignoring so many variables"...some case where we believe the average performance over the past X units of time or possessions (appropriately weighted to take into account effects like you bring up, if necessary) will not describe the average performance over the future. Yes, in cases like that, any sort of estimator is very likely to fail...but at the same time, most humans or experts will almost certainly fail in that case too.
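The kind of time-weighting described here can be as simple as an exponential decay on older observations. A minimal sketch with a synthetic drifting coin (the decay factor and the drift are arbitrary choices for illustration, not a recommended setting):

```python
import random

# Minimal sketch of time-weighted averaging: a "coin" whose P(heads) drifts
# over time, estimated with an exponential-decay weight so recent flips count
# more than old ones.

random.seed(0)
decay = 0.99
flips = [random.random() < (0.3 + 0.4 * t / 2000) for t in range(2000)]  # P(heads) drifts 0.3 -> 0.7

weighted_num = weighted_den = 0.0
for t, heads in enumerate(flips):
    w = decay ** (len(flips) - 1 - t)   # newest observation gets weight 1.0
    weighted_num += w * heads
    weighted_den += w

print(f"unweighted average:     {sum(flips) / len(flips):.3f}")      # lags the drift (~0.5)
print(f"time-weighted estimate: {weighted_num / weighted_den:.3f}")  # tracks the current P(heads)
```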
Let's take Steve Nash as an example. Were any stats able to predict that he had the capacity to be a two-time MVP after his first four seasons?
Were any experts able to predict this either? Was he himself able to predict this? It seems pretty silly to try to discredit statistics because something happened once that has never before happened in history (a borderline All-Star PG with a bad back becomes an MVP-caliber player in his 30s). Is this an indictment of statistics, in your mind? If so, let us agree to disagree.
There are simply too many variables for stats to be useful as a predictor. This is even more true for defense because on defense it's rare that a single player is 100% responsible for defending.
Look at what rule changes have done for certain types of point guards.
Again, you keep asserting this, but it isn't necessarily true. "Rule changes" are something different from "too many variables"...the former is relatively easy to deal with, the latter is essentially just an abstract criticism of yours that may or may not have merit.
You can get rid of some of the matchup issues by looking at a team's performance instead of an individual's, but it definitely doesn't get rid of them.
Err, what?
If Jose ONLY played at the same time Bargnani does, and Bargnani only when Jose does, then yes, what you say is true...the two players become coupled, and it would be impossible to distinguish them. But given a sufficient amount of observations of them not together on the floor, you can determine a lot. You can sort of think about it in terms of linear algebra and systems of equations, if you like....but it is pretty intuitively true w/o really delving into math.
What you're missing is that defense is a team-generated thing, and that trying to isolate or compare Bargnani won't show anything of value because the actual skill of the entire team at D has varied so greatly over the measured timeframe.
Again, this is an opinion of yours, that may or may not be valid. I mean, we all agree that defense is a function of the entire team, or more specifically, that lineup. So comparing different lineups DOES tell you something about their defensive abilities...your opinion is that time-varying effects, noise, bias, sample size etc hides the useful information, but again, this is just an opinion of yours.
Looking at individual defensive numbers for one season helps how? When taken in isolation there is no information on how a good defender will look vs a bad one. Especially when so much of the game revolves around switches / specific defensive strategies for specific star players.
I mean, this is a hypothesis of yours that may or may not be true.
Really? I'd say that the results of everything you guys are showing pretty well back up the concept that the situation is too complex to measure, let alone build statistics off of.
Who are these "you guys"? And how have you drawn that conclusion?
But...what if they do act as predictors? Hollinger nails the season records pretty accurately using his techniques. This Voulgaris guy makes a lot of money gambling on games using his custom system, which seems to use variants of APM also. Ultimately, being able to understand average performance tells us a lot about future performance...
There is no point saying it cannot be done, when people are doing it right now.
You're talking complete team versus individual. Hollinger's player rating system is, by his own admission, offensively biased.
How on earth does one verify an individual based stat if you don't check if the individuals combine in ways that make sense, and are consistent with the actual team performance? Ultimately, the point of any rating system (individual or team) must be to accurately predict team performance, no?
I'm not saying Hollinger's system is the best, I'm saying that he is doing a pretty good job despite only really considering scoring, assists, blocks, rebounds and steals. Like, even his fairly simple approach does a surprisingly decent job...there are lots of ways to improve his model.
What's the threshold on his accuracy of prediction with PER or Pythagorean W/L? How many games off do you have to be to decide it's a failed prediction?
This past year, he figured we were going to go 35-47. Is 40-42 a failed prediction?
How about in 06-07, when he predicted 33 wins and we got 47?
Sometimes he gets it fairly right, but every year there are a number of teams (between 3 and 7 - more lately) that he just gets what I would consider "wrong" -- more than 3-4 games of swing.
Well, I guess that is up to you...you'd have to define your thresholds for success/failure. Clearly setting the threshold at getting the W/Ls of ALL the teams correct is almost certainly too ambitious. For me, something on the order of an average error of 3 or 4 wins averaged over the 30 teams in the league is a pretty decent overall job. But maybe for you, something more stringent is required...i.e., a procedure only does a good job if its MAXIMUM error is less than say 4 wins. Of course, that then raises the question of whether any such procedure exists, and whether such a stringent criterion is too strong. Ultimately, if I have a technique that can predict who will and will not get into the playoffs with good accuracy, that is fairly valuable, even if it makes some mistakes here and there.
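To make the two thresholds being discussed concrete (average error across the league versus worst-case error for any single team), here is a tiny sketch with invented win totals:

```python
# The two thresholds in question: average error over the league vs. worst-case
# error for any one team. Win totals below are invented for illustration.

predicted = {"TOR": 35, "BOS": 55, "CLE": 60, "NYK": 30, "CHI": 44}
actual    = {"TOR": 40, "BOS": 50, "CLE": 61, "NYK": 29, "CHI": 41}

errors = [abs(predicted[team] - actual[team]) for team in predicted]
print(f"mean absolute error: {sum(errors) / len(errors):.1f} wins")  # the "average error" criterion
print(f"maximum error:       {max(errors)} wins")                    # the stricter "max error" criterion
```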
A Tolkienesque strategy war game made by me: http://www.warlords.co
Re: Is PDSS a good way to analyze the Raptors defense?
- chsh22
- Analyst
- Posts: 3,252
- And1: 1
- Joined: Aug 22, 2007
- Location: The Watcher
Re: Is PDSS a good way to analyze the Raptors defense?
Ripp wrote:So a "rule change" is different from "ignoring so many variables." A lot of those sorts of time-varying effects can be dealt with by making your averaging process time-dependent as well. For example, if in the coin-flipping example the probability of coming up heads slowly changes with time, I can place greater emphasis on more recent data than on older data. So you'd have to come up with a bit more complicated a case than "ignoring so many variables"...some case where we believe the average performance over the past X units of time or possessions (appropriately weighted to take into account effects like you bring up, if necessary) will not describe the average performance over the future. Yes, in cases like that, any sort of estimator is very likely to fail...but at the same time, most humans or experts will almost certainly fail in that case too.
Since you seemed to pounce like a cat on the "rule change", which I provided as one specific example, how about any of the following:
- Player progression / evolution in skill.
- Coaching and Management changes.
- "Outside" influences (friends, family)
- Health and Age
- Team changes (encompassing the above for all other players that our player is playing with).
- Teammate changes (trades, drafting, etc.).
All of these change year to year, at least. I'm sure there's other variables I haven't considered.
Were any experts able to predict this either? Was he himself able to predict this? It seems pretty silly to try to discredit statistics because something happened once that has never before happened in history (a borderline All-Star PG with a bad back becomes an MVP-caliber player in his 30s). Is this an indictment of statistics, in your mind? If so, let us agree to disagree.
This is exactly my point. There are enough of these cases to make the stats somewhat less useful than they could be if all players performed statically.
I think you're taking this as an attack on statistics. I'm not by any means doing so. I'm trying to point out that sometimes as a tool it doesn't really provide anything useful. Understanding just what your statistics are actually showing, and how relevant they are is rather important and often forgotten.
Math is an incredibly powerful discipline that requires a very keen understanding of what it is you're trying to accomplish. To use an idiom from Programming, Garbage In, Garbage Out.
Again, this is an opinion of yours, that may or may not be valid. I mean, we all agree that defense is a function of the entire team, or more specifically, that lineup. So comparing different lineups DOES tell you something about their defensive abilities...your opinion is that time-varying effects, noise, bias, sample size etc hides the useful information, but again, this is just an opinion of yours.
Just so you're aware, this particular tactic comes across as fairly juvenile. In most constructive discussion there shouldn't really be a need to preface every paragraph with "IN MY OPINION" to make that clear. I don't expect you to do the same, so please give me that courtesy.
Your opinion is obviously the opposite to mine, however do you have any evidence that yours is more correct? I would put forth pretty much everything presented in this thread as evidence that the effects of the other variables render the data less than valuable. Keep in mind, this is how we ended up discussing whether Bargnani was one of our better defenders in the first place.
What would your counterarguments be? Is there a specific margin of error that is acceptable in this particular problem domain? 20%? 40%?
[...]
Well, I guess that is up to you...you'd have to define your thresholds for success/failure. Clearly setting the threshold at getting the W/Ls of ALL the teams correct is almost certainly too ambitious. For me, something on the order of an average error of 3 or 4 wins averaged over the 30 teams in the league is a pretty decent overall job. But maybe for you, something more stringent is required...i.e., a procedure only does a good job if its MAXIMUM error is less than say 4 wins. Of course, that then raises the question of whether any such procedure exists, and whether such a stringent criterion is too strong. Ultimately, if I have a technique that can predict who will and will not get into the playoffs with good accuracy, that is fairly valuable, even if it makes some mistakes here and there.
Ultimately I think a lot of people overrate statistics in sports. Sure it can be good enough some of the time, or maybe even good enough much of the time.
There's that old saying of "that's why they play the games" specifically because sports are by nature unpredictable due to human fallibility. For instance, Tiger Woods goes out and plays like the Amateur side of a Pro-Am.
I guess my issue is that I see a lot of people develop stats or try and use stats to answer questions (myself included) but the trick is to understand that massive grains of salt are required when trying to hammer them into use as a predictor.
theonlyeastcoastrapsfan wrote:If you were going to give the raps board an enema, you'd stick the tube in this thread.
Re: Is PDSS a good way to analyze the Raptors defense?
-
Ripp
- General Manager
- Posts: 9,269
- And1: 324
- Joined: Dec 27, 2009
Re: Is PDSS a good way to analyze the Raptors defense?
chsh22 wrote:Ripp wrote:So a "rule change" is different from "ignoring so many variables." A lot of those sorts of time-varying effects can be dealt with by making your averaging process time-dependent as well. For example, if in the coin-flipping example the probability of coming up heads slowly changes with time, I can place greater emphasis on more recent data than on older data. So you'd have to come up with a bit more complicated a case than "ignoring so many variables"...some case where we believe the average performance over the past X units of time or possessions (appropriately weighted to take into account effects like you bring up, if necessary) will not describe the average performance over the future. Yes, in cases like that, any sort of estimator is very likely to fail...but at the same time, most humans or experts will almost certainly fail in that case too.
Since you seemed to pounce like a cat on the "rule change", which I provided as one specific example, how about any of the following:
- Player progression / evolution in skill.
- Coaching and Management changes.
- "Outside" influences (friends, family)
- Health and Age
- Team changes (encompassing the above for all other players that our player is playing with).
- Teammate changes (trades, drafting, etc.).
All of these change year to year, at least. I'm sure there's other variables I haven't considered.
If it wasn't clear, it wasn't the rule change I was focusing on. Like I said, there is a basic tradeoff between how quickly some underlying variable we are trying to study changes and how much data is required to detect this change. We want to use as much data as we can in our prediction/evaluation, but the time-varying aspects you bring up make the older data less reliable. But there are pretty easy techniques to deal with this in statistics...time-weighting the observations is just the most obvious and popular one I know of.
Were any experts able to predict this either? Was he himself able to predict this? It seems pretty silly to try to discredit statistics because something happened once that has never before happened in history (a borderline All-Star PG with a bad back becomes an MVP-caliber player in his 30s). Is this an indictment of statistics, in your mind? If so, let us agree to disagree.
This is exactly my point. There are enough of these cases to make the stats somewhat less useful than they could be if all players performed statically.
I think you're taking this as an attack on statistics. I'm not by any means doing so. I'm trying to point out that sometimes as a tool it doesn't really provide anything useful. Understanding just what your statistics are actually showing, and how relevant they are is rather important and often forgotten.
Math is an incredibly powerful discipline that requires a very keen understanding of what it is you're trying to accomplish. To use an idiom from Programming, Garbage In, Garbage Out.
I mostly agree...but none of what you've said is really controversial. Yes, you have to ensure that the model you build does a good job of understanding the NBA as a whole. At the same time, some mistakes here and there are not in my opinion evidence that the algorithm is doing a poor job...most algorithms that one is likely to come up with (either statistical ones, or the wisdom of coaches, etc) are unlikely to be infallible.
Again, this is an opinion of yours, that may or may not be valid. I mean, we all agree that defense is a function of the entire team, or more specifically, that lineup. So comparing different lineups DOES tell you something about their defensive abilities...your opinion is that time-varying effects, noise, bias, sample size etc hides the useful information, but again, this is just an opinion of yours.
Just so you're aware, this particular tactic comes across as fairly juvenile. In most constructive discussion there shouldn't really be a need to preface every paragraph with "IN MY OPINION" to make that clear. I don't expect you to do the same, so please give me that courtesy.
No, like...this isn't just a dismissal of your point. Let me explain more simply. We both agree that team defense is a function of many variables, one of which includes Player X's defense, right?
You are of the opinion that the other variables (defensive capabilities of other players, noise, coaching, etc, etc) are far more significant variables, and thus make it quite difficult to extract individual defensive data, correct?
Well, I'm of the opinion that in certain cases this is not true, that you can learn a lot...depending on the magnitude of a player's defensive impact. Here is a small example: if an individual player has zero impact, then yes, it might be difficult to detect things one way or another. But if their impact is say 1 billion, then it will dominate those other variables, and it should be easy for us to detect this.
However, this example is not proof or evidence one way or another...it is just my opinion on how things should work. It sounds like a nice plausible explanation, but might not be how things work in reality. Similarly, your opinion sounds reasonable...but again is ultimately just an opinion. There is no reason for us to believe either opinion one way or another until we do some testing...
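One way to make that intuition concrete is a toy simulation: points allowed per stint is noise plus the player's impact when he is on the floor, and the question is whether the on/off gap stands out against the noise. Every parameter below is invented, purely to illustrate the point about magnitude:

```python
import random
import statistics

# Toy model of the point about magnitude: points allowed per stint is noise
# plus the player's impact when he is on the floor. All parameters invented.

def measured_on_off_gap(player_impact, noise_sd=6.0, stints=500, seed=1):
    rng = random.Random(seed)
    on = [rng.gauss(105, noise_sd) - player_impact for _ in range(stints)]   # with the player
    off = [rng.gauss(105, noise_sd) for _ in range(stints)]                  # without him
    return statistics.mean(off) - statistics.mean(on)

print(f"tiny true impact (0.2):  measured gap ~ {measured_on_off_gap(0.2):.2f}")
print(f"large true impact (8.0): measured gap ~ {measured_on_off_gap(8.0):.2f}")
# A large impact dominates the noise and shows up clearly; a tiny one is
# swamped by it, which is exactly where the two opinions diverge.
```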
Your opinion is obviously the opposite to mine, however do you have any evidence that yours is more correct? I would put forth pretty much everything presented in this thread as evidence that the effects of the other variables render the data less than valuable. Keep in mind, this is how we ended up discussing whether Bargnani was one of our better defenders in the first place.
What would your counterarguments be? Is there a specific margin of error that is acceptable in this particular problem domain? 20%? 40%?
It depends really on what your criterion for "correctness" is. For me, I'm interested in closely predicting wins over the course of an actual NBA season. I implemented a variety of techniques on my computer (Adjusted +/-, Statistical +/-, my own improved version of Statistical +/-, and a dummy estimator that says that the home team wins by 3 points).
I break the 1230 games in the regular season into two chunks...chunk one is of size 820 games, and chunk two is of size 410. My own algorithm (which can be viewed as a generalization of both SPM and APM) only has access to the play-by-play results of those 820 games, as well as end-of-season box scores (Yes, I know that it would be better if I used box score information only available up to the 820th game of the season, but the end-of-season totals are easy to grab.)
So I give APM, SPM, and my own algorithm access to the above data; each then spits out player ratings, which can be combined to predict the performance of lineups, and thus the outcome of actual games. So then I see what each algorithm has to say about the final margin of victory and the winner for each of the 410 games that the algorithms did not see.
Keep in mind that this is a very important aspect...the algorithms have absolutely no information about those 410 games (well, aside from the information indirectly leaked by using end-of-season box scores. I admit this is a problem conceptually, but it is easy to fix and should not hurt the predictive power too much. And to be fair, APM doesn't use box score information anyway, so this isn't relevant for that particular algorithm.)
Anyway, so the question is what fraction of games does each technique get right? Surprisingly enough, the dummy estimator that just says the home team wins gets roughly 60% of games correct. APM gets roughly 68% correct. SPM, roughly 71%. My technique, roughly 74%.
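For the curious, the shape of that retrodiction test is roughly the following. This is only a sketch with placeholder data structures and a crude stand-in for the actual rating model, not the real APM/SPM implementations:

```python
from typing import Dict, List, Tuple

# Sketch of the retrodiction test described above: fit ratings on the first 820
# games, predict the winners of the last 410, and compare to a "home team wins"
# dummy. Data handling and the rating model are crude placeholders.

Game = Tuple[List[str], List[str], int]   # (home players, away players, home margin)

def fit_ratings(train: List[Game]) -> Dict[str, float]:
    # Stand-in for a real APM/SPM regression: rate each player by the average
    # margin of his games, split evenly five ways. Only here to keep the sketch
    # runnable; the real thing solves a regularized regression on stint data.
    totals: Dict[str, float] = {}
    counts: Dict[str, int] = {}
    for home, away, margin in train:
        for player, sign in [(p, +1) for p in home] + [(p, -1) for p in away]:
            totals[player] = totals.get(player, 0.0) + sign * margin / 5
            counts[player] = counts.get(player, 0) + 1
    return {p: totals[p] / counts[p] for p in totals}

def predict_margin(ratings: Dict[str, float], home: List[str], away: List[str],
                   home_adv: float = 3.0) -> float:
    return home_adv + sum(ratings.get(p, 0.0) for p in home) \
                    - sum(ratings.get(p, 0.0) for p in away)

def evaluate(games: List[Game]) -> None:
    train, test = games[:820], games[820:]                     # the 820/410 split
    ratings = fit_ratings(train)
    model_hits = sum((predict_margin(ratings, h, a) > 0) == (m > 0) for h, a, m in test)
    dummy_hits = sum(m > 0 for _, _, m in test)                # "home team wins" baseline
    print(f"model: {model_hits / len(test):.0%}   home-team dummy: {dummy_hits / len(test):.0%}")
```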
So ultimately, it depends on what you think. If you don't like my criterion (predicting winners of games), then you might not be impressed with the results until you see how it does on your own favored criterion. It is also possible that you think a computer algorithm that predicts the winner 74% of the time is not particularly impressive. But for me, I feel that this is a pretty good approach for understanding various aspects of the NBA.
However, even if you are skeptical, I'll point out that there are a lot of obvious ways to improve my algorithm (incorporate data from say the 5 previous years, incorporate Synergy defensive results, use more interesting and realistic modeling assumptions beyond simple linear regression, [e.g., player pairs, triplets, etc].)
Ultimately, approaches like APM and mine are just the beginning; there are plenty of ways to improve them...but the point is that predicting 74% of the games correctly imo indicates a model that is actually "learning", rather than just regurgitating.
A Tolkienesque strategy war game made by me: http://www.warlords.co
Re: Is PDSS a good way to analyze the Raptors defense?
- ranger001
- Retired Mod

- Posts: 26,938
- And1: 3,752
- Joined: Feb 23, 2001
-
Re: Is PDSS a good way to analyze the Raptors defense?
Are you saying that last year you were able to predict the future results of all NBA games at 74% accuracy? Or is it some kind of backwards algorithm where you were looking at past data?
Re: Is PDSS a good way to analyze the Raptors defense?
-
Ripp
- General Manager
- Posts: 9,269
- And1: 324
- Joined: Dec 27, 2009
Re: Is PDSS a good way to analyze the Raptors defense?
No, retrodiction (http://en.wikipedia.org/wiki/Retrodiction)...testing the performance of your algorithm on past data. I did this a few months ago with the 2006-2007 NBA regular season, partitioning the season into 820 and 410 games, building the algorithm on the block of 820, and then performance testing on the block of 410.
A Tolkienesque strategy war game made by me: http://www.warlords.co
Re: Is PDSS a good way to analyze the Raptors defense?
- ranger001
- Retired Mod

- Posts: 26,938
- And1: 3,752
- Joined: Feb 23, 2001
-
Re: Is PDSS a good way to analyze the Raptors defense?
Well, it's my experience with stats that you can predict anything in the past if you tweak the equations enough. But really the proof is whether you can do it with future data.
Are you going to show us how your technique does this year with future predictions?
Re: Is PDSS a good way to analyze the Raptors defense?
-
Ripp
- General Manager
- Posts: 9,269
- And1: 324
- Joined: Dec 27, 2009
Re: Is PDSS a good way to analyze the Raptors defense?
^---Ah, but I'm (almost) entirely blind to those 410 games. Like, if the 820th game ends on January 11th of that season, then the remaining 410 games are predicted only using data available before January 11th of that year (I say "almost" since I'm using end-of-season cumulative box scores, not just the league-wide player box scores up until January 11th. But I'm pretty sure this is not going to be a significant issue in affecting the predictive power.)
I plan on writing up the methodology and making it publicly available online once it is finished, but it is still a few weeks away from being ready.
A Tolkienesque strategy war game made by me: http://www.warlords.co
Re: Is PDSS a good way to analyze the Raptors defense?
- ranger001
- Retired Mod

- Posts: 26,938
- And1: 3,752
- Joined: Feb 23, 2001
-
Re: Is PDSS a good way to analyze the Raptors defense?
OK, but you're still using the first 820 games of a season that has already been played. The question is whether you can repeat that accuracy this year, making predictions after 820 games rather than retrodictions.
Re: Is PDSS a good way to analyze the Raptors defense?
-
Ripp
- General Manager
- Posts: 9,269
- And1: 324
- Joined: Dec 27, 2009
Re: Is PDSS a good way to analyze the Raptors defense?
So your point is that accurately retrodicting the last 410 games of some other season (while giving myself only access to the first 820) doesn't mean that I'll accurately predict the performance for say the 2010-2011 season (either giving myself access to 820 games again, or using previous years data as a surrogate)?
If that is your point, I agree, one doesn't necessarily follow from the other. But for me, doing well in retrodiction suggests that the model understands what is going on, and is pretty likely to do well at other tasks. But as you say, it makes sense to test it more broadly.
A Tolkienesque strategy war game made by me: http://www.warlords.co
Re: Is PDSS a good way to analyze the Raptors defense?
- ranger001
- Retired Mod

- Posts: 26,938
- And1: 3,752
- Joined: Feb 23, 2001
-
Re: Is PDSS a good way to analyze the Raptors defense?
Doing well in retrodiction means that your model works well for the 2nd half of the 2006-07 season. Given the underlying assumptions of the model, though, there should be no reason it would not work for other seasons, and also in a predictive sense for the upcoming season. So if this works for the future it's a strong case to prove your point, i.e. that team performance is directly related to the sum of individual performances.
I might add though that I'm thinking I could come up with something that is also predictive if I just add up points per game of each team member and subtract points allowed. Not sure what percentage I'd get but if you can achieve a 74% prediction rate I think that will beat the bookies.
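That naive baseline would look something like the sketch below; the net ratings and the 3-point home-court bump are invented numbers, just to show the shape of the idea:

```python
# Sketch of the naive baseline suggested here: rate each team by its season
# scoring margin and pick the side with the better margin plus a home-court
# bump. The ratings and the 3-point edge are invented numbers.

HOME_EDGE = 3.0  # rough home-court advantage in points (assumption)

net_rating = {"TOR": -1.5, "BOS": 4.2}   # points scored minus points allowed per game, hypothetical

def pick_winner(home: str, away: str) -> str:
    margin = net_rating[home] - net_rating[away] + HOME_EDGE
    return home if margin > 0 else away

print(pick_winner("TOR", "BOS"))  # -> "BOS": Boston's edge outweighs home court here
```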
Re: Is PDSS a good way to analyze the Raptors defense?
- Courtside
- RealGM
- Posts: 19,460
- And1: 14,205
- Joined: Jul 25, 2002
Re: Is PDSS a good way to analyze the Raptors defense?
Whether the roster carries over is the single biggest variable you have to account for, I think. With so many moving parts and changing roles, I'm not sure any of your models would work better than intuition for the upcoming season.
Maybe let a few posters into the predictions with their educated guesses and see how they do in comparison. Just for fun, maybe even add a girlfriend or two to make guesses at random, too. It's often some know-nothing who picks teams by name or color that wins office pools, isn't it?
Re: Is PDSS a good way to analyze the Raptors defense?
-
Ripp
- General Manager
- Posts: 9,269
- And1: 324
- Joined: Dec 27, 2009
Re: Is PDSS a good way to analyze the Raptors defense?
ranger001 wrote:Doing well in retrodiction means that your model works well for the 2nd half of the 2006-07 season. Given the underlying assumptions of the model, though, there should be no reason it would not work for other seasons, and also in a predictive sense for the upcoming season. So if this works for the future it's a strong case to prove your point, i.e. that team performance is directly related to the sum of individual performances.
Well, I wouldn't go so far as to say I truly believe that...all I'd be willing to say is that team performance can be well approximated by a sum of individual performances. As an analogy, think about when you learn about the Taylor Series in calculus...the function e^{x} is well approximated for small x by 1+x+x^2/2.
Also, you can go beyond this "individuality" assumption of APM and its variants by adding terms for pairs, triplets, etc. of players. But I've not done any experiments myself to see how much this boosts predictive power.
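Spelling the analogy out: the second-order Taylor truncation of e^x sits next to the "sum of individuals" truncation of team margin, with the pair/triplet terms playing the role of the higher-order corrections (the second expression just restates the analogy, not a fitted model):

```latex
e^{x} \;=\; 1 + x + \tfrac{x^{2}}{2} + O(x^{3}) \;\approx\; 1 + x + \tfrac{x^{2}}{2}
\quad \text{for small } |x|,
\qquad
\text{team margin} \;\approx\; \sum_{i} r_{i} \;+\; \sum_{i<j} r_{ij} + \cdots
```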
I might add though that I'm thinking I could come up with something that is also predictive if I just add up points per game of each team member and subtract points allowed. Not sure what percentage I'd get but if you can achieve a 74% prediction rate I think that will beat the bookies.
Yeah, I'd be curious to hear from someone who actually gambles successfully on NBA games. I don't know anything about gambling myself, so am not really sure if this 74% is good or bad. For all I know, they have procedures which do 90%+. And obviously if you have a great algorithm, you are not likely to share any details about its inner workings, since there is money involved.
A Tolkienesque strategy war game made by me: http://www.warlords.co
Re: Is PDSS a good way to analyze the Raptors defense?
-
Fairview4Life
- RealGM
- Posts: 70,291
- And1: 34,109
- Joined: Jul 25, 2005
-
Re: Is PDSS a good way to analyze the Raptors defense?
74% is basically unheard of in gambling. If you're hitting at 74%, you are getting rich. Fast. 53% or something is the break-even point, assuming all your plays are for the same $ amount at standard -110 odds. If you start varying it, you can afford to lose a little more if you're still hitting your bigger plays. If you can actually predict the outcomes of games at a 74% clip before they happen, and do it consistently over a long period of time (not just some random one-year fluke), you are going to go to sleep every night on a bed of money surrounded by many beautiful women. Possibly in an annex of some sort.
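For reference, the arithmetic behind that break-even figure at standard -110 pricing (risk 110 to win 100):

```latex
% Expected profit per bet is zero when
100\,p \;-\; 110\,(1 - p) \;=\; 0
\quad\Longrightarrow\quad
p \;=\; \tfrac{110}{210} \;\approx\; 52.4\%,
% hence the "53% or something" figure; hitting 74% clears it by a mile.
```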
9. Similarly, IF THOU HAST SPENT the entire offseason predicting that thy team will stink, thou shalt not gloat, nor even be happy, shouldst thou turn out to be correct. Realistic analysis is fine, but be a fan first, a smug smarty-pants second.
Re: Is PDSS a good way to analyze the Raptors defense?
- darth_federer
- Retired Mod

- Posts: 29,060
- And1: 922
- Joined: Apr 12, 2009
- Contact:
Re: Is PDSS a good way to analyze the Raptors defense?
Fairview4Life wrote:74% is basically unheard of in gambling. If you're hitting at 74%, you are getting rich. Fast. 53% or something is the break-even point, assuming all your plays are for the same $ amount at standard -110 odds. If you start varying it, you can afford to lose a little more if you're still hitting your bigger plays. If you can actually predict the outcomes of games at a 74% clip before they happen, and do it consistently over a long period of time (not just some random one-year fluke), you are going to go to sleep every night on a bed of money surrounded by many beautiful women. Possibly in an annex of some sort.
How are you guys doing this prediction thing?

Profanity wrote:This is why I question a Canadian team in our league. it's a govt conspiracy trina to sell all our milk to Russia. They let the raptors participate to not let canadians demand crossing taxes. it will backfire one day.
Re: Is PDSS a good way to analyze the Raptors defense?
-
Ripp
- General Manager
- Posts: 9,269
- And1: 324
- Joined: Dec 27, 2009
Re: Is PDSS a good way to analyze the Raptors defense?
Fairview: I thought the bookies ask you to bet against a point spread, not just predict W/L? If they did W/L only, they'd need to make the money offered for betting on the Lakers in Lakers vs. Clippers pretty small.
Or alternatively, if you want to have equal money offered for both choices, adjust the spread to where the chance of each side covering is 50% (e.g., Lakers favored by 7 points over the Clippers).
A Tolkienesque strategy war game made by me: http://www.warlords.co