by Jason Marshall
In today’s article, we’re wrapping up our introductory series on fundamental statistics by talking about how knowledge of statistical quantities like the mean and standard deviation can help you understand the significance of the latest political polling results.
But first, the podcast edition of this article was sponsored by GoToMeeting. With this meeting service, you can hold your meetings over the Internet and give presentations, product demos, and training sessions right from your PC. For a free, 45-day trial, visit GoToMeeting.com/podcast.
Should You Believe the Results of All Political Polls?
Should you believe the results of every political poll you see reported in the news? The short and simple answer is: “no.” For me, there are two reasons for this. First, being skeptical by nature, I tend not to believe much of what I see until I can verify it for myself. Some people have agendas that they’d like to steer you and me toward, and I generally try to make sure that doesn’t happen to me blindly. The second reason is a bit less conspiratorial: even though most polls are conducted properly, the results are often reported improperly, usually because the reporter doesn’t understand the statistical nature of the poll. But since you’ve learned how to calculate mean values and can answer the question “What are the range and standard deviation?”, you now know everything you need to decipher poll results and decide for yourself whether or not to believe them.
Why and How are Polls Conducted?
Let’s start by talking briefly about how polls are conducted and why they’re taken in the first place. Polls are used to figure out the opinions and preferences of the entire population without having to ask every single person what they think. In other words, the goal is to poll a subset of the entire population (this subset is called a sample) and come up with an answer that is representative of what the population as a whole believes. The most important factor in creating an accurate poll is to come up with a sample that represents the diversity of the entire population. It must be chosen carefully so as not to overrepresent any one group.
That is exactly what it means when you hear reporters say that something is a “scientific poll.” There is indeed a science to choosing an unbiased sample, and polls that employ this science have a much better chance of yielding accurate results. On the other hand, polls taken at news websites, for example, are decidedly unscientific since the population taking the poll is self-selected and is therefore completely biased. In other words, only people who visit that website (and who probably have certain common beliefs) will take the poll—so it cannot be a fair representation of the entire population. Any such self-selected unscientific poll (which many news websites are all too eager to post and report on in an effort to raise viewer involvement) is essentially meaningless—the results are simply too biased to give valuable information about the entire population.
How are Poll Results Reported?
Now that we know how to check whether a poll has the potential to be meaningful (that is, whether it’s scientific), let’s move on to figuring out whether it actually is. That’s right: the fact that a poll has the potential to be meaningful does not necessarily mean that it will give a conclusive result. Let’s take the simple example of a poll measuring the support for two presidential candidates. The result of such a poll is typically reported by giving the percentage of the population supporting candidate A, the percentage supporting candidate B, possibly the percentage of the population that is undecided, and the all-important margin of error.
What is the Sampling Error?
First of all, notice that I said the poll result gives the percentage of the population supporting each candidate, not the percentage of the sample. That’s the whole point of the poll, after all—to figure out what the entire population is thinking. But it’s important to keep in mind that the pollster questioned only a portion of the population, and that even though she may have conducted a well thought-out and scientific poll, she could have gotten slightly different results if a slightly different portion of the population had been questioned. That sampling error is precisely the origin of the margin of error you see reported alongside polls, and it should not be ignored!
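To see what sampling error looks like in practice, here’s a minimal Python sketch (my own illustration, not part of the original article): it polls several different random samples drawn from a made-up population in which exactly 46% of voters support a candidate, using an assumed sample size of 1,000 people per poll.

```python
import random

# Hypothetical "true" population: 46% of all voters support the candidate.
true_support = 0.46
sample_size = 1000  # assumed number of people questioned in each poll

random.seed(1)

# Poll five different random samples of the same population.
for poll in range(1, 6):
    responses = [random.random() < true_support for _ in range(sample_size)]
    polled_support = sum(responses) / sample_size
    print(f"Poll {poll}: {polled_support:.1%} support")
```

Each simulated poll lands near 46%, but rarely exactly on it; that poll-to-poll scatter is the sampling error.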
What Does the Margin of Error in Polls Mean?
The margin of error is typically reported as a plus-or-minus percentage. For example, the margin of error might be ±3%. But what does this mean? Well, let’s imagine that support for our imaginary presidential candidate A is polled to be 42% and support for B is polled at 46%, with a margin of error of ±3%. This means that the pollster is confident that if an election were held measuring the actual level of support across the entire population, candidate A would receive anywhere between 39% and 45% of the vote (that is 42% – 3% and 42% + 3%), and candidate B would receive anywhere between 43% and 49% (that is 46% – 3% and 46% + 3%).
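As a quick sanity check of that arithmetic, here’s a small Python sketch using the article’s numbers (42%, 46%, and a ±3% margin of error):

```python
def poll_interval(support_pct, margin_pct):
    """Return the (low, high) range implied by a poll result and its margin of error."""
    return support_pct - margin_pct, support_pct + margin_pct

margin = 3.0  # the reported margin of error, in percentage points
for name, support in [("Candidate A", 42.0), ("Candidate B", 46.0)]:
    low, high = poll_interval(support, margin)
    print(f"{name}: between {low:.0f}% and {high:.0f}%")
```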
Just how confident are pollsters with this margin of error? Well, the statistical margin of error reported with poll results is typically what’s called the 95% confidence interval. That means that if the pollster created and polled 100 different samples of the population, the margin of error would be expected to capture the true population value in about 95 of those 100 cases. In other words, the 95% confidence interval will contain the true value 95% of the time. While that’s a lot, keep in mind that the 95% confidence interval will not contain the true value in 1 out of every 20 polls.
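If you’d like to see that “95 out of 100” behavior for yourself, here’s a rough simulation. It’s only a sketch built on assumptions that aren’t in the article: a made-up true support level of 46%, a sample size of 1,000, and the standard approximation that a 95% margin of error for a polled proportion is about 1.96 standard errors.

```python
import math
import random

true_support = 0.46   # assumed true population support (not from the article)
sample_size = 1000    # assumed number of people questioned per poll
num_polls = 1000      # how many simulated polls to run

random.seed(2)
hits = 0
for _ in range(num_polls):
    # Simulate one poll: the fraction of sample_size random voters who say "yes."
    p_hat = sum(random.random() < true_support for _ in range(sample_size)) / sample_size
    # Standard 95% margin of error for a proportion: roughly 1.96 standard errors.
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    if p_hat - margin <= true_support <= p_hat + margin:
        hits += 1

print(f"The margin of error captured the true value in {hits / num_polls:.0%} of polls")
```

Running it typically reports a hit rate close to 95%.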
How to Know if Political Poll Results are Significant Or Not
Now, let’s go back to our original question: should you trust the results of a poll? Well, let’s return to our imaginary presidential poll in which candidate A received the support of 42% of the population and candidate B received the support of 46%, with a margin of error of ±3%. This means that candidate A could have as much as 45% support and candidate B could have as little as 43%. So while the poll seems to indicate a 46% to 42% lead for candidate B, it cannot actually be used to determine who is really leading, no matter how many times pundits claim otherwise, because the margin of error is too big.
If, however, the margin of error were smaller (something like ±1.5% instead of ±3%, which could be achieved by polling a significantly larger sample of the population), then candidate B’s 4% lead would actually be significant, since (with 95% confidence) candidate B would have a minimum of 44.5% support compared to candidate A’s maximum of 43.5%. The general rule of thumb is that the lead must be at least twice the margin of error to be significant. The quick and dirty tip is therefore to be sure to pay attention to margins of error; without them, poll results are essentially meaningless.
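Here’s that rule of thumb written as a tiny helper function (again, just an illustrative sketch using the article’s numbers):

```python
def lead_is_significant(support_a, support_b, margin):
    """Rule of thumb from the article: the lead must be at least twice the margin of error."""
    return abs(support_a - support_b) >= 2 * margin

# The article's two scenarios:
print(lead_is_significant(42.0, 46.0, 3.0))   # False: a 4-point lead with a ±3% margin is not significant
print(lead_is_significant(42.0, 46.0, 1.5))   # True: the same lead with a ±1.5% margin is significant
```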
Wrap Up
That’s all the math we have time for today. And that’s all the time we’re going to take to talk about statistics too—for now, at least. In upcoming episodes, we’ll be heading back to math basic training to talk some more about math fundamentals.
Thanks again to our sponsor this week, GoToMeeting. Visit GoToMeeting.com/podcast and sign up for a free 45-day trial of their online conferencing service.
Please email your math questions and comments to..............You can get updates about the Math Dude podcast, the “Video Extra!” episodes on YouTube, and all my other musings about math, science, and life in general by following me on Twitter. And don’t forget to join our great community of social networking math fans by becoming a fan of the Math Dude on Facebook.
Until next time, this is Jason Marshall with The Math Dude’s Quick and Dirty Tips to Make Math Easier. Thanks for reading, math fans!