
Sunday, September 21, 2008

Election Polls: Part 6: Recap


Who's winning in the polls? What do the polls mean?

What should you consider in evaluating for yourself what these polls mean?

This is the last of a six-part series in which we looked at poll variables to help you sort through the garbage to find the gold nuggets in the polling world.

Here are the top 5 tips:

1. Who? Be sure to look at who participated in the poll. Is it a random poll or a self-selected sample?

2. When? Is the poll more than three days old? Have any intervening events occurred since it was conducted that might change the result (e.g., a financial crash)?

3. How? Are the questions as worded biased, leading, or complex? Do they use double-negatives?

4. Race. Does race matter? Be aware of the possible Bradley Effect: inflated poll numbers for a minority candidate because people won't disclose their racial biases in a poll. Poll numbers do not necessarily reflect how participants will actually vote once they are alone in the booth.

5. Size and margin of error. As a general rule, samples of 1,000 or more are considered statistically reliable, with a margin of error of about plus or minus 3%. The smaller the sample, the higher the margin of error; the larger the sample, the lower the margin of error. A sample of 50 people showing a 55% to 45% lead can vary significantly, giving the "leader" a possible range of 40%-70% and the "loser" a range of 30%-60%. Those two ranges overlap across 20 points, making this kind of result extremely unreliable, especially in a "close" race. In contrast, a 1,000-person sample showing 55% to 45% means the leader has 52%-58% and the other candidate 42%-48%. That can fairly be interpreted as a lead.

Bonus Tip: TV call-in polls and Internet polls are inherently unreliable.
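The range arithmetic in tip 5 can be sketched in a few lines of Python. This is a rough illustration using the rounded margins of error from this series, not a rigorous significance test:

```python
def candidate_range(share, moe):
    """Plausible true support given a poll share and margin of error (pct points)."""
    return (share - moe, share + moe)

def ranges_overlap(a, b):
    """True if the two candidates' plausible ranges overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# 50-person sample, ~15-point margin of error (rounded figure from tip 5)
leader = candidate_range(55, 15)   # (40, 70)
trailer = candidate_range(45, 15)  # (30, 60)
print(ranges_overlap(leader, trailer))  # True: too close to call

# 1,000-person sample, ~3-point margin of error
leader = candidate_range(55, 3)    # (52, 58)
trailer = candidate_range(45, 3)   # (42, 48)
print(ranges_overlap(leader, trailer))  # False: a real lead
```

If the two ranges overlap, the poll can't distinguish the candidates; if they don't, the lead is meaningful.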

By evaluating these few variables, you know more clearly how to interpret poll results for yourself. And you won't be misled by the latest polls trying to tell you who's going to win the election.

And please, don't ever, ever let poll predictions keep you from casting your vote. It's not over until it's over. Vote!!

Subscribe to Kimberlie Ryan's Working Wellness

Election Polls: Part 5: Size and Margin of Error


When judging the accuracy of polls you see, be sure to look at the size of the poll and the margin of error. In polling lingo, that's MOSE (margin of sampling error).

For you pretentious types, you can pull that out at your next cocktail party. As in, "because the MOSE of the last Gallup Poll was plus or minus three, it's a pretty good reflection of the current mood of the country." Now let's go through the basics.

Pollsters have figured out the mathematical model for accuracy of poll results based on the size of the poll. Here's the general breakdown of the margin of error by sample size:

Sample 50 people, plus or minus 15% margin of error

Sample 500 people, plus or minus 6% margin of error

Sample 1000 people, plus or minus 3% margin of error

Sample 5000 people, plus or minus 1% margin of error
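These rounded figures track the standard textbook approximation for a 95% confidence level, where the margin of sampling error is about 1.96 × √(p(1−p)/n), at its widest when support is split 50/50. Here's a quick Python sketch assuming that formula; the exact values (roughly 14, 4.4, 3.1, and 1.4 points) come out a bit tighter than the rounded figures above, since published margins are often conservative round numbers:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Approximate 95% margin of sampling error, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (50, 500, 1000, 5000):
    print(f"n={n:5d}: +/- {margin_of_error(n):.1f} points")
```

Note the square root: to cut the margin of error in half, you need four times as many people, which is why large samples get expensive fast.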

Many state polls use 500-person samples. This means that if a state poll shows Obama and McCain at 50/50, each could actually be anywhere between 44% and 56%, which is not really a dead heat.

A 1,000-person sample is standard among national polls, but at a cost of $50,000, it's too expensive for many state polls. There's nothing wrong with using smaller polls, but they won't give the most statistically accurate results. Always look at the methodology and the margin of error.

The Associated Press has guidelines for reporting these polls. If the difference between the candidates is more than twice the margin of sampling error, reporters can say a candidate is "leading." In a 1,000-person poll, that usually means 7 or more percentage points. If the difference is 3-6 points, the candidates are "close." If the candidates are 0-2 points apart, they are considered "tied." But remember, if the sample is only 50 people, there is a 15-point margin of error, so a 50/50 result doesn't really tell you the candidates are tied.

Today's Gallup poll shows Obama leading McCain by 4 points: Obama at 49% and McCain at 45%. This is based on a 2,720-person sample, with three-day rolling averages of responses to interviews, so the margin of error is about plus or minus 2%. Since the 4-point difference reaches twice the margin of error, it is reasonable to report Obama as in the lead under the AP guidelines.
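The AP rule of thumb can be sketched as a small function. The thresholds below are one interpretation of the guideline as described above; how to treat a gap exactly at a boundary is a judgment call:

```python
def ap_label(share_a, share_b, moe):
    """Classify a race per the AP rule of thumb: 'leading' if the gap
    exceeds twice the margin of error, 'close' if it's between one and
    two margins, 'tied' if it's within one margin."""
    gap = abs(share_a - share_b)
    if gap > 2 * moe:
        return "leading"
    elif gap >= moe:
        return "close"
    return "tied"

# Today's Gallup numbers: with the unrounded margin (~1.9 points for a
# 2,720-person sample), Obama's 4-point gap clears the 2x bar.
print(ap_label(49, 45, 1.9))   # leading
print(ap_label(50, 50, 15))    # tied (but in a 50-person poll, meaningless)
print(ap_label(48, 45, 3))     # close
```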

But hopefully these numbers are not skewed by the Bradley Effect we reviewed in the last post. That would happen if biased poll participants either declined to answer or lied (to the pollsters or to themselves). Sometimes people answer polls differently than they actually vote when alone in the booth.

Test your knowledge so far. Here's the link to the Gallup poll from today: Gallup Daily: Obama Leading McCain by 4 Points. Be sure to look for the who, the when, and the MOSE. You will see that the sample is of registered voters, so you know from previous posts that it may not capture new voters or possible swing voters. Look at the "Survey Methods" section to evaluate.

Now you know just enough to be dangerous - like the pollsters!


Election Polls: Part 4: Race Views and Polling


Do people report their racial biases in polls? Pollsters hypothesize that poll participants who are more likely to vote based on race are less likely to answer polls.

Why? They call it the "social desirability bias." Bigots might not want to admit their true feelings to others, or even to themselves, for fear of being judged harshly for their racial biases. Even Rush Limbaugh says he is just joking when he calls Mexicans "stupid" and tells them to "go back to their countries." He just can't admit that he is a bigot. In the same way, some people responding to polls might not admit that they won't vote for Obama because he is black, or half-black, or half-white. That's why some of them wear hoods. They don't want to be seen. Some hockey-moms don't wear hoods, but they might as well. So they just don't answer the questions at all, or if they do, they don't answer honestly.

Pollsters call this the "Bradley Effect." The effect is named for black Democrat Tom Bradley, who lost the California governor’s race in 1982 even though he was way, way ahead in the polls. Some think this happened to Obama in the New Hampshire Democratic presidential primary. The polls had him running ahead of Hillary Clinton by up to 13 points. Yet, when the returns came in election night, Obama lost by three points.

When looking at poll results claiming to report on participants' racial views, remember that the Bradley Effect might be slanting the results. People just don't want to report their bigoted views honestly. Imagine that. Under this theory, even if Obama holds a large lead in polls on race issues, the actual vote may look different, since bigots usually don't self-identify in polls.

What about those large poll leads anyway? What should you know about margins of error and poll sizes? Stay tuned.

Sources: John Ridley, Rush Limbaugh Hates Mexicans (But in a Funny Way)!, http://www.huffingtonpost.com/john-ridley/rush-limbaugh-hates-mexic_b_127902.html (accessed Sept. 21, 2008); John Nichols, Did the Bradley effect beat Obama in New Hampshire?, http://www.thenation.com/blogs/campaignmatters?pid=268328 (accessed Sept. 21, 2008)

Election Polls: Part 3: Question Wording


"When did you stop beating your wife?" Known as a loaded question, it implies that you were indeed beating your wife. Answering "I haven't" isn't good either, because you have accepted the premise of the question (unless you actually beat your wife, in which case, you should get immediate help).

In polling situations, when you're looking at poll results, you should carefully evaluate the words of the question asked. Some questions have an obvious bias, and the answers may not accurately reflect the whole picture.

Leading questions are statements disguised as questions. They may make the poll participant feel that only one response is legitimate. For example, "You think it is important to value life, don't you?" That's a leading question, as opposed to an "open-ended question" like "What is your view on the value of life?"


You also can see that the question may depend on how the respondent interprets the words "value of life." If asked further, some may say they value quality of life (maybe so-called right to die advocates), while others may say life above all else (maybe so-called pro-life/anti-choice advocates). They could both answer the leading question in the same way (yes, it is important to value life), while their fundamental beliefs may be opposite from each other.

Another thing to look for, if you can, is the order of the questions. Sometimes pollsters ask a question that primes a topic first. For example, the first question might be about Iraq, like "Do you agree that the US should withdraw from Iraq?" The follow-up question might then be, "What do you see as the biggest problem facing our country today?" The tendency might be to answer "Iraq," since it's already on the poll participant's mind. So if you see wildly diverging poll results, see if you can find out the order of the questions asked.

Also watch for double negatives and overly complex questions. A good example of a double negative can be found in this law: "Unless prohibited by treaty, no person shall be discriminated against by the Department of Defense." Confusing. Here's another: "Do you not want no additional tax increases?" Taken literally, the two negatives cancel: "I do not want no tax increases" actually means "I want some tax increases," so neither a "yes" nor a "no" answer is clear. Watch for double negatives.

Also watch for questions that are too complex: "Do you want the candidate to vote for subsidies of all non-governmental financial institutions for the next three years unless the financial institutions implement internal reporting structures to which all financial analysts should report before the institution of the plan to subsidize all qualifying non-governmental institutions until such time as Congress approves otherwise unless earlier determined by a statutorily authorized governmental body or the voters by referendum vote or other determination as left to the discretion of Congress or the voters?" You can see the potential problems with a poll question like this.

So, when you're evaluating the credibility of a poll result, look at the words of the question to determine whether it would be easy to confuse the issues by using biased language, leading questions, double negatives, or complex sentence structures. Some might say, K.I.S.S.

Interested in finding out about how racial bias might influence polling results? Stay tuned!

Photo Source: Hay Kranen, Wikimedia Commons, http://commons.wikimedia.org/wiki/Image:Question_mark_3d.png (accessed Sept. 21, 2008)


Election Polls: Part 2: Poll Timing


In our last post, we examined who is polled and how that impacts the credibility of the poll. If you'll remember, polls are more reliable when they are random from a defined population, like random digit telephone dialing. Poll results are less reliable when the poll participants select themselves, like Internet or TV call-in polls.


You'll also want to watch for when the poll occurred. According to pollster Claudia Deane, you should think "dead fish and relatives." You know, the 3-day rule? In the current political climate, some poll results can grow rancid even faster.

Polls can spoil quickly, for many reasons. Intervening events can change the way people think. Imagine polls taken on Friday, before the announcement of the financial meltdown on Saturday. That event alone made a lot of people cranky. You can see how such a change of mood could undercut earlier poll results.

Another big intervening event will be the debates. People could change their opinions after the debates, starting next Friday night, which is why poll results from this week might not hold up next week.

Of course, there will be post-debate polls. Even then, people change their minds after a couple of days. They watch the news, think about their current situations, and the story develops and evolves. In the Gore/Bush debates, many of the polls reported Gore as the winner the first day, but a few days later the polls reported a change of opinion toward Bush.

When you're looking at poll results, check out both the "methodology" box to see how they selected the sample, and when the poll occurred. Three days can be a good rule for large-scale polls, depending on the questions and any intervening events. Then ask yourself whether any significant events have occurred since the poll that would impact the results you are seeing.

Polls can be enlightening, if you know what they are actually telling you. If you think about who is participating and when the poll occurred, you are well on your way to putting these election polls into perspective. Are you ready to consider another important factor - the words of the question? Stay tuned.


Election Polls: Part 1: Selection of Poll Participants


I'll admit it - election polling was a mystery to me. I hear one poll say Obama is ahead. Another says McCain. Another says they're in a dead heat.

How do we know which to believe? How do they determine who's ahead at any given moment?

So I decided to take a class, or rather a Webinar from Poynter/NewsU, called Understanding and Interpreting Polls in the 2008 Election. I highly recommend it. In the meantime, I'll share a few gold nuggets with you.

One of the first things you should know when trying to interpret a poll is how the participants were selected.

The best way to ensure an accurate result is to take a random sample from a specific population. If it's not random, it's not mathematically sound. The polling industry standard is RDD, or random digit dialing: a telephone poll where numbers are not taken from a list but are dialed at random. Since nearly everyone has a telephone, the method excludes few enough people that it can still be statistically accurate.
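The idea behind RDD can be shown with a toy Python sketch: the numbers come from a random-number generator, not from any directory or list, so every number has a chance of being selected. (Real RDD samples valid area codes and exchanges and handles unassigned numbers; the format and area code below are made-up simplifications.)

```python
import random

def random_digit_dial(n, area_code="555", seed=None):
    """Toy RDD sketch: generate n phone numbers at random rather than
    drawing them from any list."""
    rng = random.Random(seed)  # seed only for reproducible demos
    return [f"{area_code}-{rng.randint(200, 999)}-{rng.randint(0, 9999):04d}"
            for _ in range(n)]

sample = random_digit_dial(5, seed=1)  # five random numbers to dial
```

Contrast this with drawing from a registration list, which can only ever reach the people already on the list.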

Since women are statistically more likely to answer the phone, sometimes you will have someone ask for the man of the house, or even ask for someone born in a particular month. This is the pollster's effort to even out the possible bias if mostly women were to participate because they simply answered the phone. So they're not just being sexist!

Another method for selecting poll participants is registration sampling. This is when the list is derived from registered voters. This can give a good picture, but may exclude the all-important swing voters or new voters.

Both of these methods are called probability polling, because they draw random samples from defined populations.

Another alternative for selecting poll participants is non-probability sampling. This includes self-selected poll participants. For example, if you're watching Lou Dobbs on CNN, he sometimes has call-in polls. While this can give a reliable sample of Lou Dobbs watchers, it does not give an accurate measure of the whole country. Someone who is watching Lou Dobbs, who cares enough to pick up the phone and self-selects to participate is not representative of the voters who do not watch CNN or who instead are watching Jerry Springer or All My Children.

Be wary of these TV call-in polls (on any channel) for predicting the mood of the country - they are not statistically reliable.

What about Internet polls? While they are fast, easy, and private, pollsters do not yet know how to get a random sample of Internet addresses. They cannot make them up, as they can with phone numbers (imagine all the crazy e-mail addresses you know), and there is no consolidated directory of all Internet addresses. Another downfall of Internet polls: pollsters do not know who is actually participating - it could even be a kid! Again, Internet polls usually involve self-selection, which means they are not probability polls and do not give accurate measures.

When you're looking at poll results, they usually have a little box somewhere on the page that shows the Methodology. Take a look to see if you can tell how the poll participants were selected, and take that into account when assessing the credibility of the poll numbers you are reading.

Next we'll explore other important factors in understanding polls, like timing and state polls v. national polls. Stay tuned!

Source: Poynter/NewsU, Claudia Deane: Understanding and Interpreting Polls in the 2008 Election, http://www.newsu.org/courses/course_list.aspx (accessed Sept. 18, 2008)
