Is Biden in for a landslide? How to read the 2020 presidential election polls
“Ignore the national polls.”
It has become the t-shirt worthy motto of the 2020 U.S. Presidential election. Don’t dwell on Joe Biden’s polling advantage, we’re told: it means little more than Hillary Clinton’s sizable lead in 2016 did. It reflects a share of the popular vote and, as we all now know, the popular vote does not decide the outcome of a presidential election. To understand which way the White House might go, look at the individual states instead, especially those most likely to swing in November…
It’s all sensible advice, though it immediately makes the matter of analysis more complicated. It pushes us deeper into the weeds of what constitutes a “reliable poll”, a question that becomes especially pronounced once we move from the national to the state level. As polling guru Nate Silver explains in his 2012 book, The Signal and the Noise: “The further down the ballot you go, the more volatile the polls tend to be: polls of House races are less accurate than polls of Senate races, which are in turn less accurate than polls of presidential races. Polls of primaries, also, are considerably less accurate than general election polls.”
But let’s begin with what’s easy to say about 2020. There are thought to be six key swing states come November: Arizona, Florida, Michigan, North Carolina, Pennsylvania, and Wisconsin. They were all carried by Trump in 2016 but Biden currently leads polls in all six (though Florida looks more like a tie).
Indeed, a recent surge in favour of Biden means that even more states might be liable to swing, with his now cash-rich campaign moving $6.2 million into ad spending in Texas alone, which is almost outrageous. Add in other states where there are historically close battles – Iowa and Ohio both went red in 2016, with Hillary Clinton taking Colorado, Minnesota, Nevada, New Hampshire, and Virginia – and you begin to see the full width of the battle line. This is why Trump’s campaign, starved of cash, is struggling to mount a defence, let alone think about offence.
Yet, just to make matters even more confusing, no two states are entirely alike. Each state is weighted and given a set number of votes to be carried into the Electoral College in December. The presidency is decided by these votes: the candidate who reaches 270 of the 538 on offer wins the White House even if, as with Trump in 2016 and George W. Bush in 2000, fewer people voted for them nationwide. This is why a big state like Florida, with its 29 votes, dominates the news. However, if Biden can take a few of the smaller states, even Florida looks less vital. With Florida (or Texas) added on top, we’re heading into landslide territory.
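To make the arithmetic concrete, here is a rough sketch in Python using the 2020 electoral-vote allocations for the six swing states, starting from the 232-vote total Clinton carried in 2016. The numbers are illustrative, not a forecast:

```python
# Illustrative Electoral College arithmetic (2020 allocations).
# The six swing states below were all carried by Trump in 2016;
# 270 of the 538 electoral votes wins the presidency.
SWING_STATES = {
    "Arizona": 11,
    "Florida": 29,
    "Michigan": 16,
    "North Carolina": 15,
    "Pennsylvania": 20,
    "Wisconsin": 10,
}

DEMOCRATIC_BASE = 232  # electoral votes of states Clinton carried in 2016


def outcome(flipped):
    """Democratic electoral-vote total if `flipped` swing states change hands."""
    total = DEMOCRATIC_BASE + sum(SWING_STATES[s] for s in flipped)
    return total, total >= 270


# Florida alone is not enough...
print(outcome(["Florida"]))  # (261, False)
# ...but the three Rust Belt states are.
print(outcome(["Michigan", "Pennsylvania", "Wisconsin"]))  # (278, True)
```

Which is the point about smaller states: three modest Rust Belt wins outweigh Florida’s 29 votes on their own.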
But all that assumes the polls are right.
And herein lies the problem. New polls come out daily and often present conflicting pictures of the electorate. Which do we believe? The answer is: probably none of them. Belief is a projection of faith and we shouldn’t be in the faith business. What we’re looking for is a prediction rooted in maths. A poll doesn’t indicate that something will happen. It is probabilistic. It indicates the chances that something might happen. It’s easy to confuse the two.
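A quick sketch of the difference, with invented poll numbers: treat a hypothetical 52% poll result with a ±3-point margin of error not as a prediction but as a range of possible outcomes, and simulate how often the candidate actually wins:

```python
import random

random.seed(2020)  # reproducible illustration

# Hypothetical poll: candidate at 52% of the two-party vote, with a
# ±3-point margin of error (95% level, so standard deviation ≈ 3/1.96).
POLL_SHARE = 0.52
SD = 0.03 / 1.96

# A poll is probabilistic: draw many "true" vote shares consistent with
# the poll and count how often the candidate clears 50%.
trials = 100_000
wins = sum(random.gauss(POLL_SHARE, SD) > 0.5 for _ in range(trials))
print(f"Chance of winning: {wins / trials:.0%}")
```

Run it and the candidate wins roughly nine times in ten: clearly favoured, yet still losing in a meaningful share of plausible worlds. That is the gap between “might” and “will”.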
Indeed, the history of polls around elections is the history of this kind of statistical hubris: people who thought they’d figured out the secret for one election, only to come a cropper the next. Naturally, the human brain tends to prefer prediction over probability. The former is sexy, the latter nerdish. The very first political straw poll was taken for the 1824 presidential election and proved hugely popular after correctly anticipating Andrew Jackson’s lead in the popular vote (a lead that still didn’t win him the presidency: the House of Representatives handed it to John Quincy Adams).
Soon, polling became formalised in newspapers, and The Literary Digest was one of the first to earn a reputation for accuracy by correctly anticipating the outcomes of the 1920, 1924, 1928, and 1932 elections. Then, in 1936, it got the result spectacularly wrong, and George Gallup established his reputation as the new go-to psephologist by correctly predicting Roosevelt’s re-election.
Little has changed over the years. Every election cycle, we laud or condemn the pollsters, faddishly believing that some new upstart company now holds the secret to future elections. In 2016, Rasmussen Reports boasted that it was the most accurate pollster and was soon being quoted at every rally by an excited President, who declared his love for the firm that kept reporting his numbers were high.
Yet its success was no more than that of a gambler who strikes gold once and feels infallible. By 2018, CNN was reporting that Rasmussen was the least accurate of all the pollsters, and some analysts refused to include it in their data because of its perceived Republican bias. As Silver puts it, “sometimes polls that have a crappy methodology are gonna get lucky”.
Beyond how we perceive the polls, there is the hard science of the methodologies. First, we have problems with sampling: choosing people who genuinely reflect the population. The problem here is that bias is impossible to avoid, even when it comes to finding a random sample. Without getting into the deep maths of the problem, even random numbers generated by computers aren’t random but pseudo-random.
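A two-line demonstration of that point: seed a computer’s random number generator twice with the same value and it will obediently reproduce the exact same “random” sequence:

```python
import random

# "Random" numbers from a computer are pseudo-random: a deterministic
# sequence fully determined by its starting seed.
random.seed(42)
first_run = [random.randint(1, 100) for _ in range(5)]

random.seed(42)  # same seed...
second_run = [random.randint(1, 100) for _ in range(5)]

print(first_run == second_run)  # True: ...same "random" numbers
```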
Yet, even if you could produce a random sample, perhaps picking numbers out of a telephone directory to sample the public, you’re introducing a different kind of bias. You’ve assumed that telephone owners represent the electorate. More specifically, telephone owners who are around in the middle of the day or aren’t too busy to answer a pollster’s questions. This tends to skew results in favour of older, retired, and wealthier voters.
To make it more contemporary, this is the phenomenon we saw during last year’s U.K. general election when Twitter users reported the youth of the nation were marching to their polling stations to put Jeremy Corbyn into power. It might have been convincing until you remembered that Twitter isn’t the electorate. Twitter is inherently biased towards a certain kind of voter. It does not reflect reality.
The answer that most pollsters have to this discrepancy is “weighting”, which means recognising the inherent bias of the sample and adjusting it so that it more closely matches the demographics of the entire electorate. This is why polls increasingly talk about “likely voters”: it filters out the noise of that sizable portion of the possible electorate who might hold a view but are unlikely to influence the result. Weighting also addresses demographic differences that might leave some groups under- or over-represented, which is why some polls mention that they conduct surveys in Spanish or reach out to voters without college degrees.
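A toy sketch of what weighting does, with invented demographic numbers: a phone sample that over-represents older voters is re-balanced to the electorate’s actual composition before the headline figure is reported:

```python
# A toy example of weighting; every number here is invented.
# Suppose a phone sample over-represents voters aged 65+.
sample_share = {"under_65": 0.20, "over_65": 0.80}      # who answered the phone
population_share = {"under_65": 0.40, "over_65": 0.60}  # actual electorate
support = {"under_65": 0.60, "over_65": 0.45}           # candidate support per group

# The raw (unweighted) result reflects the biased sample...
raw = sum(sample_share[g] * support[g] for g in support)

# ...while weighting re-balances each group to its population share.
weighted = sum(population_share[g] * support[g] for g in support)

print(f"raw: {raw:.0%}, weighted: {weighted:.0%}")  # raw: 48%, weighted: 51%
```

In this invented case the adjustment flips a losing raw number into a winning weighted one, which is exactly why the choice of weights is itself a source of pollster-to-pollster disagreement.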
The result is that some pollsters have more robust methodologies than others and, thankfully, we have Nate Silver’s site, FiveThirtyEight, to point those out. Silver made his name by correctly calling 49 out of 50 states in the 2008 U.S. presidential election and has, ever since, been a valuable analyst not just of the polling data but of the polling methodologies.
Silver’s site helpfully rates the polling companies, with only six earning the highest mark (A+): Marist College, Monmouth University, ABC News/The Washington Post, Siena College/The New York Times Upshot, Selzer & Co, and Muhlenberg College.
Notable omissions from the top tier are YouGov (B) and Gallup (B), as well as the President’s favourite poll, Rasmussen, which earns only a C+ rating.
What does this tell us? Well, that not all polls (and pollsters) are equal, but also that we should be cautious when thinking about polling. Silver himself cautions against political punditry and getting “lost in the narrative”. “Politics”, he says, “may be especially susceptible to poor predictions precisely because of its human elements: a good election engages our dramatic sensibilities.”
Yet so too is it easy to get lost in the polls. It’s akin to that trend among bad GPs who only diagnose through blood results. No matter what your ailment, this kind of doctor doesn’t diagnose the patient. They diagnose the blood. The same can be true of anybody trying to understand politics through polling. The political organism is complex, and polls are a compelling but lazy way to understand it.
This is what happened in 2016, when it was fairly obvious that the polls weren’t accurately conveying the different levels of enthusiasm for the two candidates and certainly didn’t reflect the mechanics of the peculiar Electoral College system, in which the winner of the popular vote could come second and the loser could become the 45th president.
Rather, polls should be part of a holistic approach to understanding elections. They reflect opinion but those opinions are held by people who are often hard to tabulate. In 2020, the opinion polls reflect neither voter suppression nor grassroots efforts to turn out the vote. They don’t reflect the anger or fear of the electorate, nor the degree to which the narratives can turn quickly.
They certainly don’t measure the funds available to the campaigns in the final weeks of the election, nor do they provide a probability for an FBI Director announcing a probe into a candidate’s emails on the eve of the election or the cohesive quality of the punch cards that produced “hanging chads” in Florida in 2000.
The best we can say, for the moment, is that Biden is “likely” to win but that doesn’t mean that he “will” win. Even a tossed coin can sometimes land on its edge.