The-Power wrote:
On a related note: while I personally agree that the discussion is much more important than the ranking, I believe we should not underestimate the value of having a tangible output and closure at the end of a project. First, it keeps people motivated (whether we like it or not, I would contend that's a pretty indisputable part of human nature). Second, it receives attention and puts the board on the map (not everyone will care about it, but I do believe it offers clear value, for example by attracting new people who end up being valuable contributors for years).
So while I do believe a ‘book club’-project just focused on discussion could very much be fun, I don't see it replacing the existing projects or filling the needs that put these projects into existence in the first place. It also seems much smaller in nature to me (which is not a bad thing by any means; but it's different).
Two ideas off the top of my head I thought I'd share:
1) Instead of having concrete rankings, we could find a way to run projects that end up with ranges. I'm not sure how this would best work in practice, but the idea of ranking ranges instead of specific rankings (as Ben Taylor, among others, practices it) has a certain appeal to me. It's not only more reflective of reality but also might help take out some of the animosity between, and obsession by, some posters when not every thread has one winner and everyone else loses (or so it seems to be interpreted all too often). It still gives you tangible output at the end of the day, even if it might feel less satisfying to those who like a neat order.
2) Instead of creating a list of rankings, the community could make an effort to create ‘scouting reports’ for a set of players (either for their careers, or for peak seasons), possibly selected based on the results from previous projects, in a collaborative effort to describe and summarize the players. That would be a very valuable resource for a lot of different people, I can imagine, and it could be a ton of fun to participate in. In some ways, it also carries the spirit of the ‘book club’ idea – just in a somewhat different way. Once again, I'm not sure how this could best work in practice with a larger group when it comes to the output but I'll just put the idea out here (and come to think of it, it's perhaps not that much different from what Ben Taylor did for his greatest peaks project(s) – just with a lot more participants and without producing videos).
In the end, what matters most is having a core group of dedicated people that are willing to carry the project – of whichever nature – to its fruition. Those people's voices (and I very much do not include myself here as I have not been nearly active enough for a while) should be heard the most on here. If you have an idea you are committed to and enough other people are excited as well, the PC board should be happy to host and facilitate the project under its banner.
It might be more work for the organizer, but I like the idea of including ranges as part of the standard output of a ranking project like Greatest Peaks/Careers. For example, after collecting everyone's rankings for some group of players we might:
- set the rank order based on the mean rank of each player, and set ranges based on, say, 1 or 2 standard deviations in the ranks
- set the rank order based on the median rank of each player, and set ranges based on the 25th/75th or 10th/90th percentiles in the ranks
A pro of using the mean and standard deviation is that it captures information from outlier voters (e.g. if someone ranks Jordan 6th or Kobe 25th or whatever).
A pro of the median and percentiles is that they make us less sensitive to outliers and thus to voter manipulation (e.g. if a minority of voters conspires to get Jordan ranked lower, their 'true' opinion of Jordan would be low anyway... the median doesn't care whether they rank him 4th, 5th, 6th, or 20th as long as they're a minority of voters).
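To make the two aggregation options concrete, here's a minimal sketch using only the standard library. The ballot data, player names, and range choices (±1 standard deviation vs. the 25th/75th percentiles) are purely illustrative, not anything the project has settled on; note how one protest ballot drags Player C's mean rank up while barely moving the median.

```python
import statistics

# Hypothetical ballots: ranks[player] = the rank each voter gave that player.
ranks = {
    "Player A": [1, 1, 2, 1, 2, 1],
    "Player B": [2, 3, 1, 2, 1, 2],
    "Player C": [3, 2, 3, 3, 3, 20],  # one outlier/protest ballot
}

def mean_summary(votes):
    """Order by mean rank; range = mean +/- 1 standard deviation."""
    m = statistics.mean(votes)
    s = statistics.stdev(votes)
    return m, (m - s, m + s)

def median_summary(votes):
    """Order by median rank; range = 25th-75th percentile (IQR)."""
    q1, med, q3 = statistics.quantiles(votes, n=4)
    return med, (q1, q3)

# Sort players by median rank and print both summaries side by side.
for name, votes in sorted(ranks.items(),
                          key=lambda kv: statistics.median(kv[1])):
    med, (lo, hi) = median_summary(votes)
    mean, (mlo, mhi) = mean_summary(votes)
    print(f"{name}: median {med:.1f} (IQR {lo:.1f}-{hi:.1f}), "
          f"mean {mean:.1f} (range {mlo:.1f}-{mhi:.1f})")
```

With these numbers, Player C's median stays at 3 while the mean climbs toward 6, which is exactly the robustness-to-outliers trade-off described above.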
To get an accurate understanding of the standard deviation or percentile range, we'd likely want to give people more than a Top 3 every thread. Giving people only a top 3 isn't a great way to get a sense of the mean/median or the range when there are more than 3 candidates who might get voted in. Still, we probably don't want everyone to just give their Top 100 ranking right off the bat... that would be impractical and would not allow for as thorough a discussion. Not sure of the best way to accomplish this.
One option is to vote in candidates in batches or tiers. We have people discuss as before, then rank their top X of the players still available (say their Top 5, 7, or 10 remaining players), then induct the top few players onto the list based on the mean/median from those votes (say the top 3 or 5), then start the next thread with the next set of remaining players.
Giving people a list longer than three allows us to get a sense of the uncertainty range for each player voted in (we could potentially combine these votes with votes from the prior/next threads to get the proper range for players who sit right between tiers). But selecting players in reasonably small groups/batches still allows for feasible discussion of a small subset of comparable players.
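The batch procedure described above could be sketched roughly like this. Everything here is hypothetical (the function names, batch sizes, the rule of treating unranked players as tied just below the ballot, and the toy ballots); it's one possible reading of the process, not a settled design.

```python
import statistics

def run_batches(ballot_fn, pool, ballot_size=7, induct_per_round=3):
    """Repeatedly collect Top-`ballot_size` ballots over the remaining pool,
    then induct the `induct_per_round` players with the best median rank.
    Players left off a ballot are treated as tied at rank ballot_size + 1
    (an assumed convention, chosen for simplicity)."""
    final_order = []
    remaining = list(pool)
    while remaining:
        # ballot_fn stands in for a round of thread discussion + voting;
        # it returns one ordered list per voter.
        ballots = ballot_fn(remaining, ballot_size)
        medians = {}
        for player in remaining:
            votes = [b.index(player) + 1 if player in b else ballot_size + 1
                     for b in ballots]
            medians[player] = statistics.median(votes)
        inducted = sorted(remaining, key=lambda p: medians[p])[:induct_per_round]
        final_order.extend(inducted)
        remaining = [p for p in remaining if p not in inducted]
    return final_order

# Toy example: five voters who all happen to agree on alphabetical order.
players = ["A", "B", "C", "D", "E"]
fake_ballots = lambda pool, k: [sorted(pool)[:k]] * 5
print(run_batches(fake_ballots, players))  # -> ['A', 'B', 'C', 'D', 'E']
```

The same per-round votes could also feed the range calculations (standard deviations or percentiles) so that each inducted batch comes with uncertainty bands, which is the part a plain tier system would lose.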
This might be convoluted, and there might be a voting system that already exists that allows for ranges. I like the idea of ranges, but am just trying to fill in the details for how it would work in practice.
Edit: it's worth mentioning that a simpler version of this is to just elect players in tiers of size X, and not vote within the tiers. That's certainly easier to implement as a voting system... but less interesting and informative than ranges.