I have always been a little skeptical of polls. Having taken a statistics course in college, I knew that you could sort of skew numbers to make them say what you wanted. So, I figured that the person publishing the results simply used the numbers that supported their opinion best.
That was before I discovered something about polls. If you’ve ever listened to or read the results carefully, you’ve noticed that they usually carry a note saying they have a margin of error of “+/-3%.” That percentage can vary a bit, but three seems to be the most common number. I always assumed it meant that if the pollsters asked the same question of any other group of people who fit their criteria (Americans, Indiana farmers, UCLA students, etc.), they’d get essentially the same results—maybe 3% more or 3% fewer would give those answers.
That’s not what it means.
What that “+/-3%” margin of error actually means is that, among an identical sampling, the percentage could vary by 3% either way. This might not seem like a big distinction, but think about it. What if their sampling isn’t Americans but Americans who are registered with a particular political party? What if their sampling isn’t Indiana farmers but Indiana farmers who received corn subsidies? What if their sampling isn’t UCLA students but UCLA students who failed calculus? Depending on the questions they asked, that might make a huge difference.
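To see where that “+/-3%” figure comes from, here is a minimal sketch in Python. It assumes a hypothetical population in which 52% of people hold some opinion, polls 1,000 of them at a time (a typical national poll size), and repeats the poll many times. The sample size, the 52% figure, and the number of repeated polls are all made-up values for illustration. Note that every simulated poll draws from the same population—which is exactly the point: the margin of error says nothing about whether that population was the right one to sample.

```python
import random
import math

# Hypothetical population: 52% hold some opinion (an assumed value).
TRUE_P = 0.52
SAMPLE_SIZE = 1000   # a typical national poll size

def run_poll(true_p, n):
    """Poll n randomly chosen people; return the observed proportion."""
    hits = sum(1 for _ in range(n) if random.random() < true_p)
    return hits / n

# Repeat the same poll many times on the same population.
results = [run_poll(TRUE_P, SAMPLE_SIZE) for _ in range(2000)]

# The textbook 95% margin of error for a proportion:
moe = 1.96 * math.sqrt(TRUE_P * (1 - TRUE_P) / SAMPLE_SIZE)
print(f"margin of error: +/-{moe:.1%}")   # roughly +/-3% for n = 1000

# Fraction of the repeated polls landing within that margin of the true value:
inside = sum(1 for r in results if abs(r - TRUE_P) <= moe) / len(results)
print(f"share of polls within the margin: {inside:.0%}")
```

Run it and about 95% of the simulated polls land within roughly 3 points of the true value—but only because every poll sampled the same population. If the population itself is skewed (corn-subsidized farmers rather than farmers in general), the polls will cluster just as tightly around the wrong number.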
Now I’m even more skeptical of polls.