For a poll to predict an election, the sample has to reflect the actual demographics of the electorate. Right now there are a lot more Dem voters than GOP ones, so you have to reflect that in the sampling. This poll just gave Republicans equal to slightly greater weight, and, shock horror, wouldn't you know it, it showed far less Biden support. Most respectable polls have more Dems than Republicans in their sample for this reason.
You might recall that during the Romney campaign there was a site called "unskewed polls," which "corrected the skewing of the polls" by reducing the Dem share of the sample to the level of the GOP. Naturally, those unskewed polls predicted that Romney would win. He didn't.
I haven't read this article, but a quick search popped it up, so it might explain it better than I can:
https://www.gq.com/story/dont-be-an-ama ... l-sleuther
So You Want to Unskew the Polls
Don’t go into the crosstabs. Do talk about “nonresponse bias.”
By Mike Goodman
July 28, 2020
Our modern obsession with presidential polling dates back, of course, to 2008, when Nate Silver’s aggregation method reassured anxious liberals that Obama really was going to win. The seeds of another obsession—critiquing the validity of said polls—also sprouted during that election, with debates about polls conducted via cell phone vs. landline (the lack of the former, it was suggested, accounted for Obama not being even further ahead). But it wasn’t until 2012, when a man named Dean Chambers introduced the term “unskewed” to the electoral lexicon via a (now defunct) website devoted to proving that the polls were wrong about Mitt Romney, that the pastime of poll debunking really took off. The 2012 polls were not wrong—but the 2016 polls were (or, rather, as any data scientist would tell you, they said that Hillary was more likely to win than Trump, and we know how that turned out), and as a result nobody trusts Joe Biden’s 8% lead circa July 2020. As polling analysis has become more sophisticated, so have the ways people talk themselves into believing those polls are wrong. Here’s how to do it properly, and how not to.
Don’t unskew the polls. No, seriously, don’t. Polls are imprecise measurements. They have margins of error for a reason. They aren’t “right” or “wrong”: They’re snapshots of a moment, little bite-size pieces of information.
Do look at polls in more context. I haven’t persuaded you, have I? Fine. If you must start meddling, you can make polls more useful by viewing them in conjunction with other polls—that is, adding the context of other little bite-size pieces of information—and then if you’re really ambitious, do some math to create fancy advanced polling averages, or models that take those averages and turn them into probabilities. But none of that involves looking at a single poll and deciding it’s wrong and then hunting for the proof in the guts of the thing.
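(As a rough sketch of what that "context" might look like in practice, here is a toy polling average that weights each poll by sample size and recency. The polls, the square-root weighting, and the ten-day half-life are invented illustrations, not any aggregator's actual method.)

```python
import math

# Toy polling average: weight each poll by sample size and recency.
# All polls, sample sizes, and the decay half-life are made up.
polls = [
    # (days ago, sample size, Biden lead in points)
    (2,  1000, 8.0),
    (5,   800, 10.0),
    (9,  1200, 6.0),
    (20,  600, 12.0),
]

HALF_LIFE_DAYS = 10  # assumed: older polls count for half as much every 10 days

def poll_weight(days_ago, n):
    recency = 0.5 ** (days_ago / HALF_LIFE_DAYS)
    return recency * math.sqrt(n)  # sqrt(n) so one huge sample doesn't dominate

total_w = sum(poll_weight(d, n) for d, n, _ in polls)
average = sum(poll_weight(d, n) * lead for d, n, lead in polls) / total_w

print(f"weighted polling average: Biden +{average:.1f}")
```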
Don’t go into the crosstabs. One of the most common ways that amateur poll sleuthers go awry is by delving into the crosstabs of a single poll (where information about responses by race, gender, age, etc. is housed) in order to declare that it sampled the wrong amounts of different demographics. Hardcore Bernie Sanders supporters during the 2020 Democratic primary, for example, argued that polls were failing to capture his support because the crosstabs showed not enough young people were being interviewed.
This sounds smart, but it is, fundamentally, not how polls work. Polling companies don’t simply ask a random sampling of people their opinion, then write it up and call it a day. When it comes to elections, first they ask somewhere between 300 and 5,000 people their opinion, and then they weight those opinions based on categories so that the final numbers reflect the demographics of the population being polled. Get too many old fogeys picking up the phone? They have less weight in the final average.
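(To make the weighting step concrete, here is a minimal sketch of reweighting on a single variable, age group. The shares and support numbers are invented for illustration; real pollsters weight on several variables at once, often via raking.)

```python
# Minimal illustration of post-stratification weighting by one
# variable (age group). All figures are made up for the example.

sample_share     = {"18-34": 0.15, "35-64": 0.45, "65+": 0.40}  # who picked up the phone
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # census / voter-file targets
support_biden    = {"18-34": 0.60, "35-64": 0.52, "65+": 0.45}  # support within each group

# Weight for each group = target share / sample share.
# Over-represented groups (the old fogeys) get weights below 1.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Unweighted topline: respondents counted exactly as they came in
unweighted = sum(sample_share[g] * support_biden[g] for g in sample_share)

# Weighted topline: each group counts according to its population share
weighted = sum(sample_share[g] * weights[g] * support_biden[g] for g in sample_share)

print(f"group weights: {weights}")
print(f"unweighted Biden support: {unweighted:.1%}")
print(f"weighted Biden support:   {weighted:.1%}")
```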
Sometimes polls will have few enough respondents in a given crosstab that they don’t even list the results and instead throw up an n/a (not applicable). An unskewer uninitiated in the ways of polling might think this meant that nobody in that category was interviewed, but that’s not the case. Most pollsters don’t share results when a very small number of people in a given crosstab are reached because the margin of error climbs so high, but that doesn’t mean that overall they aren’t weighted correctly in the poll.
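(The "margin of error climbs so high" point is just ordinary sampling error applied to the crosstab's small n. A rough sketch, using the standard 95% normal approximation as an assumption rather than any pollster's exact method:)

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Full poll vs. a thin crosstab slice
for n in (1000, 400, 40):
    print(f"n={n:4d}: +/- {margin_of_error(n):.1%}")

# n=1000 -> ~3.1 points, n=400 -> ~4.9 points, n=40 -> ~15.5 points,
# which is why pollsters often suppress results for very small subgroups
```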
Crosstabs are dangerous things. It’s best to steer clear.
Do examine whether a poll is weighted by education. One place where polling methodology is in flux is around weighting by education. It’s always been the case that people with a higher level of education are more likely to answer the phone. For most of polling history this didn’t matter, since education levels were not particularly predictive of how a person would vote. That changed in 2016 when, all else being equal, people with higher levels of education became more likely to vote for Hillary Clinton and those with lower levels became more likely to vote for Donald Trump. Still, not all polls (especially statewide ones) weight by education, and ones that don’t tend to overstate Democratic support by roughly three percentage points. So if a poll seems too good (or bad) to be true, check to see if it’s weighted by education.
Don’t talk about shy Trump voters. The myth of the shy Trump voter will not die. Because his victory was so unlikely (despite polls that seemed to indicate that maybe the race was closer than it seemed), one popular explanation is that there must be voters out there who like Trump but lie to pollsters about it. The only problem with this nice-sounding theory is that there is zero evidence for it. The reality of 2016 is that the polls were a little bit off thanks to the educational weighting issue, and there were a lot of undecided voters heading into election day. By a large margin, they decided to vote for Trump. That’s it.
Do talk about “nonresponse bias” (but carefully). Polls are snapshots of the current moment, and that means that they can be moved by short-term events. Historically, for example, each candidate has gotten a polling bump during their party’s presidential convention. Why is that? Well, it’s possible that undecided voters just tune in that week, start paying attention, and like what they see enough to hop on the bandwagon. Another possibility is that during the Republican convention it’s harder for pollsters to get Democrats to pick up the phone, and vice versa—this would be nonresponse bias.
Identifying and dealing with nonresponse bias is a tricky subject, and there isn’t widespread agreement on the best way to handle it. Most pollsters simply don’t worry about it and let the polling chips fall where they may. Some weight by party in order to make sure that the number of respondents reflects the number of Republicans and Democrats they expect to see in their sample. The differing approaches really can produce differing results. It’s possible that big swings in public polling can reflect nonresponse bias, but it’s also possible they can capture real changes in people’s opinions; differentiating between the two is the challenge. The good news for the unskewer is, this means you can tut-tut about nonresponse bias without really getting called on it!
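(For the pollsters who do weight by party, the mechanics are the same kind of reweighting sketched above, just with party ID as the target. A toy example, with every number invented, of how the same raw responses can produce different toplines depending on whether you weight by party:)

```python
# Toy example: how weighting by party ID changes a topline.
# All shares are invented to illustrate the mechanism, not real data.

# Hypothetical raw sample during a Republican convention week:
# Democrats answer the phone less often, so they're under-represented.
sample_share  = {"Dem": 0.28, "Rep": 0.37, "Ind": 0.35}
target_share  = {"Dem": 0.33, "Rep": 0.30, "Ind": 0.37}  # pollster's expected party mix
support_biden = {"Dem": 0.92, "Rep": 0.06, "Ind": 0.50}

# Unweighted: the convention-week nonresponse shows up as a dip for Biden
unweighted = sum(sample_share[g] * support_biden[g] for g in sample_share)

# Party-weighted: reweight each group to the expected party mix
weighted = sum(target_share[g] * support_biden[g] for g in target_share)

print(f"unweighted Biden support:     {unweighted:.1%}")
print(f"party-weighted Biden support: {weighted:.1%}")
```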
Do examine the wording of questions. Electoral polling is relatively straightforward, because it mostly involves simple questions about who you’ll vote for. There’s a lot more room for interpretation when it comes to other types of poll results. That’s because rather than critiquing how well the poll is measuring something, you can critique what, exactly, the poll is measuring. For example, there have been a number of polls showing Medicare for All is very popular, topping out at 69% support. However, further polling complicates matters. The Kaiser Family Foundation found that many people who voice their support for the proposal believe, incorrectly, that they would be able to keep their private insurance; that support eroded dramatically when questions phrased the issue differently.
People’s opinions are complicated and oftentimes opaque, even to themselves. There’s only so much that even good faith polling can do to capture what people believe. And quite often polls are, in fact, giving people new information and then asking their opinions on it. So if you find it hard to believe that a lot of people have firm opinions on “cancel culture,” as a recent Politico/Morning Consult poll claimed, you have a lot of room to debate those findings.
Do check what a poll actually says. In late February, just as coronavirus was first becoming a story, a marketing firm released a poll that got written up in major publications across the internet, with the claim that 38% of people now said they wouldn’t drink Corona beer because of the coronavirus. The poll said no such thing: Instead, it found that 38% of beer drinkers said they wouldn’t drink Corona, regardless of the circumstances. Some unscrupulous marketing emails and a failure to actually follow up and look at the poll itself led to a news cycle totally divorced from the actual findings.
Don’t sue the pollster. The Trump administration recently threatened to sue CNN over a poll showing Joe Biden leading by 14% nationally; CNN naturally declined to retract. And thus the unskew-the-polls movement reaches its absurdist apogee.