cdel00 wrote: Whoa, some strange logic in this thread.
1) The result of the next event in a series of random events is still random and NOT influenced by the past (Ripp). You can flip the coin a googol times, and the next time you flip it there is still a 50/50 chance of heads.
2) Increasing the sample size does not reduce noise; it actually increases it. Each instance of an event has noise, so when you sample the event you also sample the noise in that event, and as you sample many events you sample many occurrences of noise. What you are hoping for with a large sample size is that the noise is evenly balanced and cancels itself out, but that is not fact, it is hope. The only way to remove noise is to account for it in each sample and remove it at the source.
I have more to say, but I must go now; I hope to post more later.
1. I didn't say the coin had a 50% chance of coming up heads. It has probability P of coming up heads, and it is your task to estimate that probability to high accuracy. This is something that can be easily done.
2. But again, if you are trying to estimate some functional of the underlying distribution (in this case, the chance of coming up heads), then more noisy samples help: the per-flip noise averages out, and the error of the estimate shrinks as the sample grows.
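To make that concrete, here's a minimal simulation sketch in Python (purely illustrative; the "true" probability p_true and the sample sizes are made-up numbers). The sample average of noisy flips homes in on P, and its standard error shrinks roughly like 1/sqrt(n):

```python
import random

# Hypothetical biased coin: the true probability of heads, unknown to the estimator.
p_true = 0.57
random.seed(0)

for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < p_true for _ in range(n))
    p_hat = heads / n                                # sample average = estimate of P
    std_err = (p_hat * (1 - p_hat) / n) ** 0.5       # standard error, ~ 1/sqrt(n)
    print(f"n={n:>9}: estimate={p_hat:.4f}  (true={p_true}, std err~{std_err:.4f})")
```

Each individual flip is just as "noisy" as ever; it's the average over many flips whose error keeps shrinking.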
The key point you seem to be missing is that I'm not trying to predict exactly what the sequence of coin flips will be, but what the AVERAGE behavior will be.
To make the analogy with basketball: I'm not interested in determining exactly whether a player was good or bad in one specific play, but in what their AVERAGE performance is. Of course, if in 90% of plays they are good defensively, it is very suggestive.
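To put a rough number on "very suggestive" (with made-up figures: say 90 good defensive plays out of 100 observed), the same averaging idea gives a quick sketch:

```python
from math import sqrt

# Hypothetical observation: 90 "good" defensive plays out of 100 watched.
good, n = 90, 100
p_hat = good / n
std_err = sqrt(p_hat * (1 - p_hat) / n)

# Approximate 95% confidence interval for the player's average defensive quality.
low, high = p_hat - 1.96 * std_err, p_hat + 1.96 * std_err
print(f"estimated good-play rate: {p_hat:.2f}, 95% CI ~ ({low:.2f}, {high:.2f})")
```

With those numbers the interval sits well above 50%, which is exactly the sense in which the noisy individual plays still tell you something solid about the average.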