Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- sansterre
- Bench Warmer
- Posts: 1,312
- And1: 1,816
- Joined: Oct 22, 2020
Hello everyone! I’m Sansterre; thanks so much for having me! So, what is this project?
This is *not* an attempt to make an authoritative list of the best NBA teams of the shot clock era. Something like that is an indirect byproduct of the project, but that isn’t the actual goal. However much I love my formula, it almost certainly cannot compete with the collaboration of great minds that this site facilitates. Instead this project is intended as a learning tool (mostly for my learning frankly, but I can’t see why others wouldn’t appreciate it as well). Basically, I built a formula designed to rank teams historically. And then I implemented the formula ruthlessly and made a Top 100 from that. There is *zero* deviation from the formula. If I think Team A should be higher than Team B, but the formula has Team B higher, tough. But why?
Because, the goal isn't to be right (though that would be nice). The goal is to create an a priori system (based on presumably reasonable premises) and then apply it without restraint. The benefit of this is that it cuts through any pre-conceived ideas that I (or anyone) might have. If the formula says that a team is #25 when “we all know” that they were Top Ten . . . maybe “we all know” wrong. I explicitly *want* some teams to pop higher than we thought they would so we can go, “Huh, you know, I hadn't thought about it, but maybe that team was objectively better than I gave them credit for.” And if a team shows up lower than we'd think, I want that to be a chance for us to re-examine our thinking. Our brains apply certain heuristics to how we think of the best teams ever. My formula applies different heuristics. Both of them are selectively dumb. The goal of this project is first and foremost to use a different heuristic to perhaps help us to reexamine our own. So to be clear: I am *not* asserting that my formula is right. It's just an icebreaker, a conversational tool.
Because I don’t just want to post a list one through a hundred and then pay my tab, walk out the saloon doors, mount my horse and ride off into the sunset. Where’s the fun there? I want this to be a genuine exploration of all of these teams. I want to cover their stats, but also to talk about their team makeup, their history, etc. I basically want to write several pages on each team so that, by the end of the project, even if you disagree with the ranking (and you will) I (and hopefully you) will come out with a better understanding and deeper appreciation of the best teams in history. That’s the goal anyways; you guys can tell me how I’m doing as I post. I intend to post these one at a time, ideally once a day, because who doesn’t love a good countdown?
How does the formula work? Good question.
So, most everyone knows of SRS. SRS is basically a margin-of-victory system adjusted for quality of opponent. So a +5 SRS team, on average, beats a league average team by five points, but loses to an all-time great team by 5 points. I love SRS. But here’s the problem. It stops at the regular season.
Why is that a big deal? Because the playoffs are a fundamentally different environment. History is littered with players whose performance fundamentally changes in the playoffs, whether for the better (Hakeem, Jordan, LeBron, etc) or for the worse (Malone, Robinson, Harden, etc). And for that matter, teams often play very differently. Many teams simply wait until the playoffs to turn on the jets (the '01 Lakers, '16-17 Cavs, '18 Warriors and '95 Rockets are some of the biggest examples) while others seem to hit a wall in the playoffs. So the 2017 Warriors beat the 2017 Cavs (+2.87 SRS) in the Finals . . . but does that mean they beat a merely above-average team? Heck no! We know that the '17 Cavs (playoff edition) were considerably better. So how can we account for that?
Basically, I updated SRS as the playoffs progress, sort of like Elo ratings. I start with the regular season as the baseline, and then after a series is concluded the formula looks at the SRS of your opponent, what your margin of victory (or loss) was, how many games the series was (because more games equals better sample size) and then adjusts your SRS accordingly. The game-weighting (regular season vs playoffs) is designed so that, by the time you've played in the Finals, your Overall SRS is about 65% playoffs and 35% regular season (I'd love to say that this number is the product of thorough study, but I just eyeballed it - maybe due to be changed in version 2.0).
This has a bunch of ramifications. First, lockout-shortened seasons (1999, 2012) are more playoff-weighted because of the lower number of regular season games. Second, the formula (for Overall SRS) doesn't care about games won (or even whether you won the series); it's purely driven by MoV. This leads to weird results where you can win a series but be outscored by 5 points a game (looking at you, first round 2018 Cavs) and the formula will straight-up punish you for that weak showing, despite having won the series. This creates some discordance between the formula's take and our own, because the SRS part of the formula doesn't know who won. This may seem weird, but I think it's important. SRS is more predictive than wins in the regular season; I don't understand why it would suddenly be less reflective of team quality in the playoffs. The better team *can* lose a playoff series; why not reward them for being the better team? The third ramification is that your opponent quality is based on the team *when you played them*, not how they eventually finished. So the 2018 Rockets are considered to have lost to a +8.7 SRS Golden State team (+5.8 in the regular season, then a +11.7 series and a +12.66 series), not the +15.7 SRS team that they were through the playoffs. Part of this is because you still want to root things in the regular season (because sample size) and part of it is that if you retroactively adjust this crap, where do you stop? Upsides and downsides, it is what it is.
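A quick sanity check on that 65/35 claim: each playoff game ends up weighted seven times as heavily as a regular season game (I disclose the exact formula in my replies further down the thread), so here's a minimal sketch in Python, assuming a Finals team plays roughly 20 playoff games:
Code:
# Each playoff game counts 7x a regular season game (see the OSRS
# formula later in this thread), so the playoff share of Overall SRS
# grows with every playoff game played.
def playoff_share(playoff_games, rs_games=82, weight=7):
    return weight * playoff_games / (rs_games + weight * playoff_games)

print(round(playoff_share(20), 2))  # 0.63 - roughly the 65/35 split described above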
I’ve made two adjustments to this SRS-driven formula. The first is to reward teams for advancing in the playoffs. It’s not an enormous bump, but the formula likes teams that move forward over teams that don’t. I didn’t want the bump to be too big because, generally, the team that wins is the team that (SRS-wise) played better, so you really don’t need too much of a bump (because the SRS-side is already handling a lot of that). The second adjustment is for the competition-level of the league, by which I mean the standard deviation in Overall SRS (which is my combo regular season / playoffs SRS). The purpose of this is a bit more twitchy.
Sometimes the level of competition in a league drops. Sometimes this is driven by expansion (adding more teams decreases the average level of quality for a time) and sometimes it is driven by tanking. But either way, different years/eras have different amounts of horrible teams. In 2015, 10% of the league had an SRS of -8 or worse. In 1976, 0% of the league had an SRS worse than -3. Can you really look at a +6 SRS team in 1976 (which doesn’t get to beat up on crap-tastic rosters) and say that they’re definitely worse than a +8 SRS team in 2015? I don’t know that I could. So I want a degree of compensation here. Part of what makes teams in the last 15 years so good (by SRS) is the increase in teams tanking, and I don’t really want them to be rewarded for that. So I take standard deviation into account.
But I don't make it the whole thing. I tried that, and the problem is that a team that was way above a very tight league (the 1976 Golden State Warriors were about +6.5 in a league that was insanely close to average besides them) grades out identical to a murder team in a more stratified era (say, the 2018 Warriors). I think the standard deviation angle is worth taking into account, but there's no universe where I'm okay with the '76 Warriors and the '18 Warriors being considered comparable. So it's a bump, like winning a series. So those are the components: 1) Overall SRS (adjusted through the playoffs) most of all, with 2) how close to the championship you got and 3) your OSRS standard deviation above the mean being included as adjustments on the OSRS baseline. That's the system, for better or for worse.
How does it shake out? The decades for the Top 100 broke down pretty intuitively:
1950s: 2
1960s: 8
1970s: 11
1980s: 18
1990s: 15
2000s: 20
2010s: 24
2020: 2
The low number of teams from the 50s and 60s is mostly because there simply weren't that many teams back then. The 90s are unusually low because, aside from the Bulls (who account for six of those fifteen teams), the decade honestly didn't have that many strong team seasons.
As far as rounds advanced to, the top 100 is pretty intuitive:
Knocked out in the 2nd round: 3
Knocked out in the Conference Finals: 19
Knocked out in the NBA Finals: 23
Won the Championship: 55
Trust me, those three teams that were knocked out in the 2nd round were all *really good*. Why almost as many teams from the Conference Finals as from the Finals? Because, remember, twice as many teams get knocked out in the Conference Finals as in the Finals - so the percentage of Finals losers that made this list is twice as high as the percentage of Conference Finals losers. And as for over half the list being Champions, that shouldn't surprise anyone.
And yet. We're covering from 1955 to 2020, which means that there have been 66 Champions, and only 55 made the list, which means that eleven didn't make the cut. Every single one of those teams came up short in some key way, whether it was lackluster playoff performance (despite winning every round), really low regular season performance, or both. To some extent, again, this is meant to be a bit predictive: "If they played the season again, which team would we expect to be the best?" And sometimes teams won that simply weren't that dominant.
Breakdown by Franchises:
Celtics: 19
Lakers: 17
Spurs: 8
Bulls & Warriors: 6
Bucks & Pistons: 5
Cavs, Heat & Suns: 4
Blazers & Thunder/Sonics: 3
76ers, Jazz, Knicks, Magic, Mavericks & Rockets: 2
Bullets, Kings, Nuggets & Raptors: 1
Pretty intuitive, within reason. You can find fault with it, but I think this is a fairly reasonable breakdown. Anyhow, without further ado, number 100! (I'll post the individual articles in separate threads).
100. The 1991 Los Angeles Lakers
99. The 2015 Cleveland Cavaliers
98. The 1975 Washington Bullets
97. The 1988 Detroit Pistons
96. The 1990 Phoenix Suns
95. The 2008 Los Angeles Lakers
94. The 2018 Houston Rockets
93. The 1995 Houston Rockets
92. The 2009 Orlando Magic
91. The 2019 Golden State Warriors
90. The 2010 Boston Celtics
89. The 2005 Detroit Pistons
88. The 1976 Golden State Warriors
87. The 2006 Miami Heat
86. The 1985 Boston Celtics
85. The 1989 Phoenix Suns
84. The 2002 Sacramento Kings
83. The 1986 Los Angeles Lakers
82. The 1969 Boston Celtics
81. The 2011 Miami Heat
80. The 1966 Boston Celtics
79. The 1973 Los Angeles Lakers
78. The 2007 Phoenix Suns
77. The 1981 Milwaukee Bucks
76. The 1989 Los Angeles Lakers
75. The 1996 Seattle SuperSonics
74. The 1992 Portland Trail Blazers
73. The 2012 San Antonio Spurs
72. The 1982 Los Angeles Lakers
71. The 1980 Boston Celtics
70. The 1959 Boston Celtics
69. The 1957 Boston Celtics
68. The 2000 Los Angeles Lakers
67. The 1974 Boston Celtics
66. The 1980 Los Angeles Lakers
65. The 2009 Denver Nuggets
64. The 1997 Utah Jazz
63. The 1984 Los Angeles Lakers
62. The 2000 Portland Trail Blazers
61. The 1962 Boston Celtics
60. The 1990 Detroit Pistons
59. The 1974 Milwaukee Bucks
58. The 1960 Boston Celtics
57. The 1982 Boston Celtics
56. The 2012 Oklahoma City Thunder
55. The 1964 Boston Celtics
54. The 2008 Boston Celtics
53. The 2005 Phoenix Suns
52. The 2010 Los Angeles Lakers
51. The 1993 Chicago Bulls
50. The 1984 Boston Celtics
49. The 1977 Portland Trail Blazers
48. The 1973 New York Knicks
47. The 2020 Boston Celtics
46. The 1981 Boston Celtics
45. The 1970 New York Knicks
44. The 1965 Boston Celtics
43. The 2017 Cleveland Cavaliers
42. The 2006 Dallas Mavericks
41. The 2011 Dallas Mavericks
40. The 2020 Los Angeles Lakers
39. The 2004 Detroit Pistons
38. The 2009 Cleveland Cavaliers
37. The 2003 San Antonio Spurs
36. The 2013 Miami Heat
35. The 1996 Utah Jazz
34. The 2002 Los Angeles Lakers
33. The 1961 Boston Celtics
32. The 2010 Orlando Magic
31. The 2019 Toronto Raptors
30. The 2005 San Antonio Spurs
29. The 2016 Oklahoma City Thunder
28. The 1989 Detroit Pistons
27. The 2007 San Antonio Spurs
26. The 2016 Golden State Warriors
25. The 2019 Milwaukee Bucks
24. The 1972 Milwaukee Bucks
23. The 2016 San Antonio Spurs
22. The 1983 Philadelphia 76ers
21. The 2013 San Antonio Spurs
20. The 1972 Los Angeles Lakers
19. The 1998 Chicago Bulls
18. The 2012 Miami Heat
17. The 1999 San Antonio Spurs
16. The 2016 Cleveland Cavaliers
15. The 1967 Philadelphia 76ers
14. The 1997 Chicago Bulls
13. The 1992 Chicago Bulls
12. The 1987 Los Angeles Lakers
11. The 2009 Los Angeles Lakers
10. The 1985 Los Angeles Lakers
9. The 2015 Golden State Warriors
8. The 2001 Los Angeles Lakers
7. The 2014 San Antonio Spurs
6. The 1986 Boston Celtics
5. The 2018 Golden State Warriors
4. The 1991 Chicago Bulls
3. The 1971 Milwaukee Bucks
2. The 1996 Chicago Bulls
1. The 2017 Golden State Warriors
"If you wish to see the truth, hold no opinions."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- KobesScarf
- Veteran
- Posts: 2,855
- And1: 604
- Joined: Jul 17, 2016
I can already tell I won't like the list if you think the 2010s has more top 100 teams than the 70s and 80s.
11 teams from the 70s seems absurd - just the Knicks, Celtics, Bucks, Bullets and Lakers from the early/mid 70s are the most loaded teams of all time and should already account for more than 11 entries, never mind the rest of the decade.
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- sansterre
- Bench Warmer
- Posts: 1,312
- And1: 1,816
- Joined: Oct 22, 2020
KobesScarf wrote:I can already tell I won't like the list if you think the 2010s has more top 100 teams than the 70s and 80s.
11 teams from the 70s seems absurd - just the Knicks, Celtics, Bucks, Bullets and Lakers from the early/mid 70s are the most loaded teams of all time and should already account for more than 11 entries, never mind the rest of the decade.
That's an understandable position. I myself had several double-takes where a modern team graded out comparable to a historical team in a way that seemed unbelievable. But when I looked at the objective evidence I could see where it came from. Just remember that the NBA in the 2010s has almost twice as many teams as the 1970s did. Assuming a comparable distribution of quality, we'd expect the 2010s to have almost twice as many great teams as the 70s, purely because there are more teams around to be good.
Remember, this is formula-driven. It doesn't know anything about aesthetics, or which teams have had how many books written about them. Win a lot of games in the playoffs by a lot of points against above league-average competition and you'll do well on this list. If you don't do that, you don't really show up.
1970 to 1974 combine for 8 teams on the list, which is a damned good representation for a league with only 17 or so teams. The problem is that, between 1974 and 1979, the league was very even with few standouts (by the numbers), so only three teams from that part of the decade made it in.
"If you wish to see the truth, hold no opinions."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
-
- Senior Mod
- Posts: 52,780
- And1: 21,719
- Joined: Mar 10, 2005
- Location: Cali
sansterre wrote:Hello everyone! I’m Sansterre; thanks so much for having me! So, what is this project?
This is *not* an attempt to make an authoritative list of the best NBA teams of the shot clock era. Something like that is an indirect byproduct of the project, but that isn’t the actual goal. However much I love my formula, it almost certainly cannot compete with the collaboration of great minds that this site facilitates. Instead this project is intended as a learning tool (mostly for my learning frankly, but I can’t see why others wouldn’t appreciate it as well). Basically, I built a formula designed to rank teams historically. And then I implemented the formula ruthlessly and made a Top 100 from that. There is *zero* deviation from the formula. If I think Team A should be higher than Team B, but the formula has Team B higher, tough. But why?
Because, the goal isn't to be right (though that would be nice). The goal is to create an a priori system (based on presumably reasonable premises) and then apply it without restraint. The benefit of this is that it cuts through any pre-conceived ideas that I (or anyone) might have. If the formula says that a team is #25 when “we all know” that they were Top Ten . . . maybe “we all know” wrong. I explicitly *want* some teams to pop higher than we thought they would so we can go, “Huh, you know, I hadn't thought about it, but maybe that team was objectively better than I gave them credit for.” And if a team shows up lower than we'd think, I want that to be a chance for us to re-examine our thinking. Our brains apply certain heuristics to how we think of the best teams ever. My formula applies different heuristics. Both of them are selectively dumb. The goal of this project is first and foremost to use a different heuristic to perhaps help us to reexamine our own. So to be clear: I am *not* asserting that my formula is right. It's just an icebreaker, a conversational tool.
I like this approach particularly when it's made clear up front. Ben Taylor (ElGee 'round these parts) did something similar with his Back Picks 40 evaluating player careers. He wasn't posting his opinion on player comparisons, he was trying to make a "good vanilla" that you can use as a starting point for your own perspective.
Sounds like you're doing the same.
Re: countdown. A pretty tried and true approach in general, though we'll have to see how it drives discussion. Very few people have a set belief about how high the '91 Lakers should be on a ranked list, so discussion may come as natural comparisons emerge.
Getting ready for the RealGM 100 on the PC Board
Come join the WNBA Board if you're a fan!
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- SideshowBob
- General Manager
- Posts: 9,061
- And1: 6,262
- Joined: Jul 16, 2010
- Location: Washington DC
Am excited to see this!
I imagine you will see discussion ramp up once you get into the top 50.
But in his home dwelling...the hi-top faded warrior is revered. *Smack!* The sound of his palm blocking the basketball... the sound of thousands rising, roaring... the sound of "get that sugar honey iced tea outta here!"
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
-
- Senior
- Posts: 581
- And1: 263
- Joined: Jul 17, 2014
I don't post all that much, but wanted to chime in and say I'm excited to read this.
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- eminence
- RealGM
- Posts: 16,729
- And1: 11,564
- Joined: Mar 07, 2015
Just wanted to say that the write-ups so far are absolutely excellent and will keep me coming back for more.
I bought a boat.
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
-
- RealGM
- Posts: 29,599
- And1: 24,920
- Joined: Aug 11, 2015
-
So far the list and write-ups are great, and I don't expect them to get any worse as we move on. I'm glad to see some underrated contenders from various eras.
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- eminence
- RealGM
- Posts: 16,729
- And1: 11,564
- Joined: Mar 07, 2015
Just realized Mikan's Lakers won't be on here.
I bought a boat.
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- WestGOAT
- Veteran
- Posts: 2,591
- And1: 3,506
- Joined: Dec 20, 2015
sansterre wrote:Hello everyone! I’m Sansterre; thanks so much for having me! So, what is this project?
Basically, I built a formula designed to rank teams historically. And then I implemented the formula ruthlessly and made a Top 100 from that. There is *zero* deviation from the formula. If I think Team A should be higher than Team B, but the formula has Team B higher, tough. But why?
Do you mind disclosing the exact formula? I'm curious to see the details of it and how you weigh specific components.
sansterre wrote:Because, the goal isn't to be right (though that would be nice). The goal is to create an a priori system (based on presumably reasonable premises) and then apply it without restraint. The benefit of this is that it cuts through any pre-conceived ideas that I (or anyone) might have. If the formula says that a team is #25 when “we all know” that they were Top Ten . . . maybe “we all know” wrong. I explicitly *want* some teams to pop higher than we thought they would so we can go, “Huh, you know, I hadn't thought about it, but maybe that team was objectively better than I gave them credit for.” And if a team shows up lower than we'd think, I want that to be a chance for us to re-examine our thinking. Our brains apply certain heuristics to how we think of the best teams ever. My formula applies different heuristics. Both of them are selectively dumb. The goal of this project is first and foremost to use a different heuristic to perhaps help us to reexamine our own. So to be clear: I am *not* asserting that my formula is right. It's just an icebreaker, a conversational tool.
Cool! I like this relatively unbiased approach.
sansterre wrote:So, most everyone knows of SRS. SRS is basically a margin-of-victory system adjusted for quality of opponent. So a +5 SRS team, on average, beats a league average team by five points, but loses to an all-time great team by 5 points.
Why 5 points? Are you implying all-time-great teams have an SRS that hovers around +10?
sansterre wrote: Why is that a big deal? Because the playoffs are a fundamentally different environment. History is littered with players whose performance fundamentally changes in the playoffs, whether for the better (Hakeem, Jordan, LeBron, etc) or for the worse (Malone, Robinson, Harden, etc). And for that matter, teams often play very differently. Many teams simply wait until the playoffs to turn on the jets (the '01 Lakers, '16-17 Cavs, '18 Warriors and '95 Rockets are some of the biggest examples) while others seem to hit a wall in the playoffs. So the 2017 Warriors beat the 2017 Cavs (+2.87 SRS) in the Finals . . . but does that mean they beat a merely above-average team? Heck no! We know that the '17 Cavs (playoff edition) were considerably better. So how can we account for that?
This is why I'm not fully convinced yet of using the opposition's regular-season DRtg as baseline for calculating a team's relative ORTg in the playoffs, despite realizing we have to correct for the quality of opposition somehow.
sansterre wrote: Basically, I updated SRS as the playoffs progress, sort of like Elo ratings. I start with the regular season as the baseline, and then after a series is concluded the formula looks at the SRS of your opponent, what your margin of victory (or loss) was, how many games the series was (because more games equals better sample size) and then adjusts your SRS accordingly. The game-weighting (regular season vs playoffs) is designed so that, by the time you’ve played in the Finals, your Overall SRS is about 65% playoffs and 35% regular season (I'd love to say that this number is the product of thorough study, but I just eyeballed it - maybe due to be changed in version 2.0).
Interesting! How did you decide on relative weights of 2/3 for playoffs and 1/3 for regular season when a team reaches the Finals? Why not 50/50, or other weights?
sansterre wrote: This has a bunch of ramifications. First, lockout-shortened seasons (1999, 2012) are more playoff-weighted because of the lower number of regular season games. Second, the formula (for Overall SRS) doesn't care about games won (or even whether you won the series); it's purely driven by MoV. This leads to weird results where you can win a series but be outscored by 5 points a game (looking at you, first round 2018 Cavs) and the formula will straight-up punish you for that weak showing, despite having won the series. This creates some discordance between the formula's take and our own, because the SRS part of the formula doesn't know who won. This may seem weird, but I think it's important. SRS is more predictive than wins in the regular season; I don't understand why it would suddenly be less reflective of team quality in the playoffs. The better team *can* lose a playoff series; why not reward them for being the better team? The third ramification is that your opponent quality is based on the team *when you played them*, not how they eventually finished. So the 2018 Rockets are considered to have lost to a +8.7 SRS Golden State team (+5.8 in the regular season, then a +11.7 series and a +12.66 series), not the +15.7 SRS team that they were through the playoffs. Part of this is because you still want to root things in the regular season (because sample size) and part of it is that if you retroactively adjust this crap, where do you stop? Upsides and downsides, it is what it is.
I think it's really cool that you address the potential limitations of extrapolating conclusions from your formula!
sansterre wrote: I’ve made two adjustments to this SRS-driven formula. The first is to reward teams for advancing in the playoffs. It’s not an enormous bump, but the formula likes teams that move forward over teams that don’t. I didn’t want the bump to be too big because, generally, the team that wins is the team that (SRS-wise) played better, so you really don’t need too much of a bump (because the SRS-side is already handling a lot of that). The second adjustment is for the competition-level of the league, by which I mean the standard deviation in Overall SRS (which is my combo regular season / playoffs SRS). The purpose of this is a bit more twitchy.
So how exactly did you make these two adjustments?
sansterre wrote: Sometimes the level of competition in a league drops. Sometimes this is driven by expansion (adding more teams decreases the average level of quality for a time) and sometimes it is driven by tanking. But either way, different years/eras have different amounts of horrible teams. In 2015, 10% of the league had an SRS of -8 or worse. In 1976, 0% of the league had an SRS worse than -3. Can you really look at a +6 SRS team in 1976 (which doesn’t get to beat up on crap-tastic rosters) and say that they’re definitely worse than a +8 SRS team in 2015? I don’t know that I could. So I want a degree of compensation here. Part of what makes teams in the last 15 years so good (by SRS) is the increase in teams tanking, and I don’t really want them to be rewarded for that. So I take standard deviation into account.
I like that you take standard deviations into account; it definitely beats simply calculating differences based on a simple average. You can definitely be more confident in claiming (significant) differences. It seems like a lot of extra work though!
sansterre wrote:But I don't make it the whole thing. I tried that, and the problem is that a team that was way above a very tight league (the 1976 Golden State Warriors were about +6.5 in a league that was insanely close to average besides them) grades out identical to a murder team in a more stratified era (say, the 2018 Warriors). I think the standard deviation angle is worth taking into account, but there's no universe where I'm okay with the '76 Warriors and the '18 Warriors being considered comparable. So it's a bump, like winning a series.
I think this further reinforces how relatively "meaningless" the regular season can be. Based on my gut feeling this is especially the case in the more modern era, with teams coasting and even superstars coasting (LeBron, Kawhi), but maybe this was also true for teams further in the past.
sansterre wrote:So those are the components: 1) Overall SRS (adjusted through the playoffs) most of all, with 2) how close to the championship you got and 3) your OSRS standard deviation above the mean being included as adjustments on the OSRS baseline. That’s the system, for better or for worse.
So how do these bumps exactly affect the formula you conceived?
I decided to read this post first before addressing the Glossary you made in the specific ranking topics. I think I will make another post about that later. The concepts of Heliocentrism, Wingmen and Depth are pretty intriguing!
spotted in Bologna
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- mojomarc
- Retired Mod
- Posts: 16,816
- And1: 971
- Joined: Jun 01, 2004
- Location: Funkytown
I applaud this "big data" approach. As a specialist in advanced analytics, I find this near and dear to my heart. The write-ups have been great from the handful I've read. Really looking forward to seeing more!
Great questions by WestGOAT, though. I would love to see the algorithm and understand the reasons for certain weightings/factors. Since you are taking a formula and applying it ruthlessly, the area for fan debate is exactly these weights. Maybe after you're done with the 100 you can share some details and we can play with alternate rankings?
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- sansterre
- Bench Warmer
- Posts: 1,312
- And1: 1,816
- Joined: Oct 22, 2020
WestGOAT wrote:(full post quoted above)
Exact formula:
I don’t really mind handing out the formula, especially since I’m already thinking about the 2.0 changes:
Your OSRS after any given series is as follows:
(Regular Season SRS * Regular Season Games + the sum over each playoff series of (series SRS * series games * 7)) / (Regular Season Games + 7 * total playoff games)
So let’s take the ‘95 Rockets after beating the Jazz:
(2.32 * 82 games + (12.75 SRS * 5 games * 7)) / (82 + 5 * 7)
So they started off with a 2.32 SRS in the regular season, but with a dominant +12.75 SRS performance they raised themselves to +5.44 OSRS.
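Here's that running update as a minimal Python sketch (the per-series number - the +12.75 above - is the opponent-adjusted SRS equivalent of the series, computed separately from your series MoV and your opponent's quality, per the description earlier):
Code:
def overall_srs(rs_srs, rs_games, series_list, weight=7):
    # series_list holds one (series_srs, games) pair per playoff series,
    # where series_srs is the opponent-adjusted SRS equivalent of that series
    num = rs_srs * rs_games
    den = rs_games
    for series_srs, games in series_list:
        num += series_srs * games * weight  # each playoff game counts 7x
        den += games * weight
    return num / den

print(round(overall_srs(2.32, 82, [(12.75, 5)]), 2))  # 5.44 - the '95 Rockets above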
The Final Calculation (that is used to rank the teams overall) is made of three parts: OSRS, the “Result” and the Standard Deviations.
Result is just one to five, based on the round you lost in. A "2" is a semifinal exit, a "4" is losing the NBA Finals, a "5" is winning a championship. And this means that teams in seasons with fewer rounds are basically credited with advancing for free - if your bye puts you in the Conference Finals, that's an automatic "3" at minimum. For the purposes of the calculation, getting one round farther is worth 0.7 additional OSRS.
This was a hard value to come up with. The problem is that, normally, you don't need a bump for advancing, because you probably posted a positive MoV in your win (the winning team tends to outscore the losing team). So, in theory, the team with the best OSRS should have won the Finals anyways (and they often do). So we don't need the Result coefficient to reward teams for winning - it's to try and compensate for teams that won with a negative MoV, or lost with a positive MoV. Ultimately, advancing a round is worth . . . it's been a while, but it's about the equivalent of a 2-3 point MoV for the round. So winning a series with a -2.5 MoV is considered a wash (this is from memory as far as the details, but hopefully you get the idea).
And then that total (OSRS + 0.7 * Result) is multiplied by the square root of the team's OSRS standard deviations above zero. This makes the standard deviation really powerful . . . but not too powerful. And honestly my barometer was the '76 Warriors, who had a really low OSRS but a really high STD. I just jiggered with the formula until they fell about where I thought was fair (which was about #89, good enough to safely make the list but not much better). The Russell Celtics are another good example; if Standard Deviations aren't a big deal, his teams come out looking pretty weak, even though they were obviously and consistently the best in the league. It was a challenge, because that STD coefficient changes the results a lot: the higher it is, the more it punishes teams in extreme years (1972 and 2016 for example), and the lower it is, the more it punishes teams in really competitive years. I eyeballed it. I won't pretend like it was given to me on stone tablets or anything.
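Put together, the final number has roughly this shape (a sketch, reading "standard deviations above zero" as OSRS divided by the league's standard deviation of OSRS, since league-average OSRS is zero by construction):
Code:
import math

def final_score(osrs, result, league_sd):
    # result: 1-5 by the round you reached (5 = champion), worth 0.7 OSRS per round
    z = osrs / league_sd  # OSRS standard deviations above the league mean of zero
    # sqrt(z) only makes sense for above-average teams, which is all
    # a top-100 list ever considers
    return (osrs + 0.7 * result) * math.sqrt(z)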
10 SRS for great teams:
It’s totally arbitrary, a flaw of the decimal system. Here are the #1, #10, #25 and #50 SRSs for regular season and postseason from my sheet:
RSRS: +11.92 / +10.01 / +7.97 / +6.76
PSRS: +19.45 / +14.49 / +13.07 / +10.76
So at the intersection of lower sample size and the general tendency that the best teams get better in the playoffs, PSRSs are way higher than RSRSs for top teams. So a +10 SRS in the regular season is all-time great, while a +10 playoff SRS is probably 60-70th all-time. Interestingly (or alarmingly), the R^2 between RSRS and PSRS (on this list) is really low, around 6.2%, which is to say that RSRS is a really bad predictor of PSRS. A lot of this is small sample size noise I'm sure, but some of it is also that certain teams seem to be able to hit the jets in the playoffs in a way that others can't. And, within reason, the skills that lead to regular season success aren't the same as the skills that lead to postseason success. Or this is all noise and nothing means anything, but that's no fun so I'm sticking to assuming that it's legit.
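For anyone who wants to reproduce that R^2, it's just the squared correlation over the list's (RSRS, PSRS) pairs - a minimal sketch, assuming numpy and the hundred pairs loaded into two arrays:
Code:
import numpy as np

def r_squared(rsrs, psrs):
    # squared Pearson correlation between regular season and playoff SRS
    r = np.corrcoef(rsrs, psrs)[0, 1]
    return r ** 2

# r_squared(rsrs_values, psrs_values) -> ~0.062 for the teams on this list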
Flaws of ORating/DRating from reg season in playoffs:
I completely agree. In version 2.0 of the sheet/formula I hope to upgrade the ORating and DRating through the playoffs the exact same way I do OSRS.
⅔ weight for playoffs:
Completely arbitrary. But let's face it, most of what we talk about when we refer to great teams is postseason dominance. High-RSRS teams that struggle in the playoffs get almost zero love, while low-RSRS, high-PSRS teams show up as clutch (nobody complains that the '01 Lakers weren't that great just because their regular season SRS was so low). So, one might argue, you'd be better off making it *all* PSRS and ignoring the regular season. There are two problems with this:
1. Small playoff sample sizes mean that one team could have a monster series or two and suddenly show up as an all-time great team. This is less of a problem for teams that play four rounds, but how do you evaluate teams that only made the semis? Or heck, teams that played before 1980 (who played fewer playoff series altogether)? You need some way to stabilize for how few games you're judging on; I don't feel comfortable looking at 10-20 games and saying "based on these 10-20 games, here are the best teams ever".
2. I'm not okay with the 1996 Bulls being ranked lower than the 2001 Lakers. This is absolutely arbitrary, but I feel like the Bulls dominating from pole to pole deserve some credit over a team that was only decent until the playoffs.
So for both of these reasons, I feel like the regular season needs to be featured in the calculation. Why two-thirds? I just made it up. I'm hoping in 2.0 to run some regressions to see what values are the most predictive, but ⅔ gave me results that met the smell test, and that was good enough for me.
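For what it's worth, that ~65/35 split falls straight out of the 7x per-game playoff weight. A quick back-of-envelope check (the ~5.5 games per round is my round number, not from the sheet):

rs_games = 82
playoff_games = 4 * 5.5            # four rounds at roughly 5.5 games each
weighted_playoffs = playoff_games * 7
share = weighted_playoffs / (rs_games + weighted_playoffs)
print(f"Playoff share of OSRS for a Finals team: {share:.0%}")  # -> 65%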
Extra work for standard deviations:
Nowhere near as much as you’d think. The sheet already auto-calculates the OSRS of every team in the season; adding one cell that runs the standard deviation is no effort, especially since the formulas are copied into every year, and only the data changes.
Heliocentrism/Wingmen/Depth:
I just thought it was a cool idea. Which teams were led by one star? Which teams were carried by their bench? It's only as good as BBR's WAR, but it's still really cool. I love that it points out how deep the 2012-2016 Spurs were, while also showing how thin the Jazz were after Malone, Stockton and Hornacek. As a substantive tool it's fairly specious, but as a quick-glance storytelling tool, it's quite nice.
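Roughly, the idea reduces to something like this (a guess at the splits, since the exact cutoffs aren't given here: the top player's share of team WAR for Heliocentrism, the next two players' share for Wingmen, everyone else for Depth):

def war_shares(player_wars):
    # Split a team's BBR WAR among its best player (Heliocentrism),
    # the next two (Wingmen), and everyone else (Depth).
    ranked = sorted(player_wars, reverse=True)
    total = sum(ranked)
    return (ranked[0] / total,
            sum(ranked[1:3]) / total,
            sum(ranked[3:]) / total)

# Made-up WAR line, for illustration only:
print([round(s, 2) for s in war_shares([12.0, 7.5, 6.0, 3.0, 2.5, 1.0])])
# -> [0.38, 0.42, 0.2]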
In all likelihood I'm going to throw up a post asking for contributions to designing a 2.0 formula, but that'll be a ways from now.
"If you wish to see the truth, hold no opinions."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- WestGOAT
- Veteran
- Posts: 2,591
- And1: 3,506
- Joined: Dec 20, 2015
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
sansterre wrote:WestGOAT wrote:Do you mind disclosing the exact formula? I'm curious to see the details of it and how you weigh specific components.
Cool! I like this relatively unbiased approach.
Why 5 points? Are you implying all-time great teams have an SRS that hovers around 10?
This is why I'm not fully convinced yet of using the opposition's regular-season DRtg as the baseline for calculating a team's relative ORtg in the playoffs, despite realizing we have to correct for the quality of opposition somehow.
Interesting! How did you decide on relative weights of 2/3 for the playoffs and 1/3 for the regular season when a team reaches the Finals? Why not 50/50, or some other split?
I think it's really cool that you address the potential limitations of extrapolating conclusions from your formula!
So how exactly did you make these two adjustments?
I like that you take standard deviations into account; it definitely beats simply calculating differences from a simple average, and you can be more confident in claiming significant differences. It seems like a lot of extra work, though!
I think this further reinforces how relatively "meaningless" the regular season can be. My gut feeling is that this is especially true in the modern era, with teams and even superstars coasting (LeBron, Kawhi), but maybe it was true of teams in the past as well.
So how exactly do these bumps affect the formula you conceived?
I decided to read this post before addressing the Glossary you made in the specific ranking topics; I'll make another post about that later. The concepts of Heliocentrism, Wingmen and Depth are pretty intriguing!
Exact formula:
I don’t really mind handing out the formula, especially since I’m already thinking about the 2.0 changes:
Your OSRS after any given series is as follows:
(Regular Season SRS * Regular Season Games + (SRS eq for each series * the number of games of that series * 7)) / (Regular Season Games + 7 * the number of playoff games)
…
Fascinating stuff! I haven't gone through this post in depth yet (which it definitely deserves), but I think I more or less get how you incorporate standard deviations. I just think it's quite tedious to go year-by-year (did you use Basketball-Reference?) and collect each team's SRS and other advanced stats for more than 60 years, not to mention getting the data to calculate playoff SRS for the different series. I like playing around with numbers and visualizing them, so I'm really enjoying your posts. I've actually been trying to learn how to scrape data from the web, and I decided to learn by extracting all kinds of data from Basketball-Reference; it's quite a learning experience since I'm a noob with Python.
I'll go through your post in more detail later, but definitely do make a topic on potential contributions to designing your 2.0 formula; maybe I and others can pitch in as well!

spotted in Bologna
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
-
- Bench Warmer
- Posts: 1,312
- And1: 1,816
- Joined: Oct 22, 2020
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
WestGOAT wrote:Fascinating stuff! I just think it's quite tedious to go year-by-year (did you use Basketball-Reference?) and collect each team's SRS and other advanced stats for more than 60 years, not to mention getting the data to calculate playoff SRS for the different series.
I'll go through your post in more detail later, but definitely do make a topic on potential contributions to designing your 2.0 formula; maybe I and others can pitch in as well!
The building of the very first sheet was a little time-consuming. Once I had it, I:
1) went to the year on BBR,
2) put in the regular season SRS for every team, and
3) went playoff series by playoff series and put in:
a) the SRS of the opponent for each team,
b) the number of games, and
c) the margin of victory.
From there the sheet does all the OSRS and standard deviation calculations, and any team that I thought had a shot at the Top 100 gets its OSRS, Result and STD copied into the Master List.
So putting in a full season (once I got the hang of it) didn't take more than ten minutes or so. And while that makes entering 65 years take a fair while, it's also like watching each season for the first time, in a weird way. So I kind of enjoyed it.
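For the curious, here's roughly what one season's entries reduce to in Python (a minimal sketch; the numbers are invented, and the 7x weight is the one from the formula above):

PLAYOFF_WEIGHT = 7
rs_srs, rs_games = 4.0, 82
series_entries = [
    # (opponent SRS, games, series MoV) - invented numbers
    (3.5, 5, 6.0),    # first round
    (6.0, 7, 2.0),    # conference semis
    (8.0, 6, -1.0),   # conference finals (outscored, but say they won anyway)
]

num, den = rs_srs * rs_games, rs_games
for opp_srs, games, mov in series_entries:
    num += (mov + opp_srs) * games * PLAYOFF_WEIGHT  # series SRS-equivalent
    den += games * PLAYOFF_WEIGHT
print(f"OSRS through three rounds: {num / den:+.2f}")  # -> +6.47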
I don't make any claims to this being the end-all and be-all or anything. I just wanted a loosely objective way to rank NBA teams all-time and this is the best I could come up with. Plus it gave me the chance to do these fun write-ups, which I've really enjoyed and from which I've learned a lot!
"If you wish to see the truth, hold no opinions."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- eminence
- RealGM
- Posts: 16,729
- And1: 11,564
- Joined: Mar 07, 2015
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
Still loving the project, really respect your dedication!
Curious: if you had used MOV in place of SRS, which pre-shot clock teams would've made the list? I expect the '49/'50 Lakers; do any other squads' metrics line up well enough?
I bought a boat.
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
-
- Bench Warmer
- Posts: 1,312
- And1: 1,816
- Joined: Oct 22, 2020
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
eminence wrote:Still loving the project, really respect your dedication!
Curious: if you had used MOV in place of SRS, which pre-shot clock teams would've made the list? I expect the '49/'50 Lakers; do any other squads' metrics line up well enough?
Glad you're enjoying it!
I've never checked; it's totally possible. But a team would really need to go nuts to pry themselves into the list without the benefit of opponent adjustments.
I'll poke around and see what I can see.
"If you wish to see the truth, hold no opinions."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- eminence
- RealGM
- Posts: 16,729
- And1: 11,564
- Joined: Mar 07, 2015
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
sansterre wrote:eminence wrote:Still loving the project, really respect your dedication!
Curious: if you had used MOV in place of SRS, which pre-shot clock teams would've made the list? I expect the '49/'50 Lakers; do any other squads' metrics line up well enough?
Glad you're enjoying it!
I've never checked; it's totally possible. But a team would really need to go nuts to pry themselves into the list without the benefit of opponent adjustments.
I'll poke around and see what I can see.
I was imagining still doing an adjustment: say a +6 MOV team and a +4 MOV team meet in the playoffs and Team A wins by 3 ppg, they'd get a +7 rating for the series. Obviously a little rougher than SRS and ORtg numbers, but not so far off either, I think.
The early Lakers I just assume would be in; they absolutely wrecked teams.
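In code, the adjustment I mean is basically one line (a quick sketch of the example above):

def series_rating(series_mov, opponent_mov):
    # Your per-game margin in the series plus the opponent's pre-series MOV.
    return series_mov + opponent_mov

print(series_rating(3.0, 4.0))  # the +6 team beating the +4 team by 3 -> 7.0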
I bought a boat.
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
-
- Bench Warmer
- Posts: 1,371
- And1: 1,121
- Joined: May 12, 2018
-
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
Saving this to my bookmarks page. I love that you're doing this, OP!
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
-
- Bench Warmer
- Posts: 1,312
- And1: 1,816
- Joined: Oct 22, 2020
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
The master list is now in the correct order (regarding the '20 Lakers and '11 Mavericks) but the teams themselves are still in the wrong posts. I apologize for the confusion. At some point I'll move the teams around in their articles.
"If you wish to see the truth, hold no opinions."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
"Trust one who seeks the truth. Doubt one who claims to have found the truth."
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
- SideshowBob
- General Manager
- Posts: 9,061
- And1: 6,262
- Joined: Jul 16, 2010
- Location: Washington DC
-
Re: Sansterre's Top 100 Teams of the Shot Clock Era - Masterlist
Hmm, are we looking at 3 teams from 2016 in the top 30?
SAS drops too early. OKC/GSW/CLE all make it?
OKC boosted by defeating ~10 SRS SAS and then taking ~11 SRS GSW to 7.
GSW boosted by defeating ~10 SRS OKC and then going to 7 with ~10-15 SRS CLE.
CLE boosted by massacring lowly East teams (and solid TOR) and then taking 12 SRS GSW to 7.
I remember positing this before the playoffs that year. In retrospect I underrated CLE/OKC considerably - I'm still impressed at GSW's WCF performance.
But in his home dwelling...the hi-top faded warrior is revered. *Smack!* The sound of his palm blocking the basketball... the sound of thousands rising, roaring... the sound of "get that sugar honey iced tea outta here!"