Anyone any good at statistics?
I’ve set up a database which picks a prize winner each month using a random number generator, which I randomise (re-seed) each time before use.
As I was concerned about whether it was truly random, I set up a test and ran the selector 250,000 times (I know, but it seemed like a good idea), each time selecting a number between 1 and 211 (the current membership).
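For reference, the test was roughly equivalent to the Python sketch below (the real selector runs inside the database, so this is only an illustration of the logic, not the actual code):

```python
import random
from collections import Counter

NUM_MEMBERS = 211     # current membership
NUM_DRAWS = 250_000   # number of test selections

# Pick a "winner" (a number from 1 to NUM_MEMBERS) NUM_DRAWS times and tally the counts.
counts = Counter(random.randint(1, NUM_MEMBERS) for _ in range(NUM_DRAWS))

average = NUM_DRAWS / NUM_MEMBERS   # 1184.83...
print(f"average: {average:.2f}")
print(f"highest count: {max(counts.values())}")
print(f"lowest count:  {min(counts.values())}")
```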
The results showed each number being selected 1,184.83 times on average, which is what I would expect. However, the highest count was 1,269 and the lowest 1,086 (I don’t really know enough about standard deviations, but the figure came out at 34.878).
I must admit I was rather surprised at how wide the spread was; I had assumed that, having run so many selections, the maximum and minimum would be much closer to the average (but I know nothing about stats). As it stands, the most frequently selected number came up about 17% more often than the least frequently selected one.
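In case it helps anyone answer, this is my rough (and quite possibly wrong) attempt at working out what spread a fair selector ought to give, on the assumption that each number’s count follows a binomial distribution:

```python
import math

NUM_MEMBERS = 211
NUM_DRAWS = 250_000

p = 1 / NUM_MEMBERS                 # chance of any one number on a single draw
expected_mean = NUM_DRAWS * p       # about 1184.83
# If the counts are binomial, the standard deviation should be sqrt(n * p * (1 - p)).
expected_sd = math.sqrt(NUM_DRAWS * p * (1 - p))

print(f"expected mean: {expected_mean:.2f}")
print(f"expected standard deviation: {expected_sd:.2f}")   # about 34.3
print("observed standard deviation: 34.878")
```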
Is this in line with what would be expected statistically?
Since the monthly prize is reasonably large, I need to be confident that the selection is fair.