Not for the first time, the pollsters got it wrong. Far from being a sweeping win for the left, the first round of Brazil’s presidential elections was much closer than expected, with the country’s far-right president significantly outperforming predictions.
With almost all votes counted on Monday, Jair Bolsonaro’s veteran leftist rival, Luiz Inácio Lula da Silva, had secured 48.3%, while the populist incumbent was just five percentage points behind on 43.3%, a much narrower margin than most pre-election estimates.
The final two polls of the campaign, released on Saturday by some of Brazil’s most respected pollsters, Ipec and DataFolha, had shown Lula tantalisingly close to avoiding a second-round runoff with 50% or more of first-round votes, excluding blank and spoiled ballots.
Both predicted Lula would emerge with a 13- or 14-point margin over Bolsonaro, who was projected to win just 36% or 37% of the vote. Other late polls also placed the leftist within the margin of error of outright victory in the first round.
So what went wrong? Few surveys, to be fair, were more than a couple of points off in projecting Lula’s final score. But many badly underestimated Bolsonaro’s. Why did so many pollsters fail to capture the level of the far-right figurehead’s support?
Political polling is, notoriously, an uncertain business. Notable recent failures include Britain’s 2015 parliamentary election, when almost 50% of all polls over the six-week campaign showed Labour ahead, but the Conservatives won by seven points.
The next year, while half of all Brexit campaign polls had the vote to leave ahead, none of the British Polling Council’s seven members correctly forecast the final result (though several were within the margin of error). Remain was consistently overestimated.
Also in 2016, US pollsters called the popular vote right, with scores that were well within their margins of error – but failed to accurately predict the swing-state votes that would end up propelling Donald Trump into the White House.
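The “margin of error” invoked in these post-mortems is a standard sampling statistic. As a rough sketch (assuming a simple random sample, which real polls only approximate, and a 95% confidence level), it can be computed from the reported vote share and the sample size:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical national poll of 2,000 respondents showing a candidate on 48%
moe = margin_of_error(0.48, 2000)
print(f"+/- {moe * 100:.1f} percentage points")  # roughly +/- 2.2 points
```

A two-point margin on each candidate’s score means a lead needs to be roughly four points before it is safely outside the combined error band, which is why several of the 2016 US national polls counted as “right” despite the surprise result.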
There have also been notable successes. In 2017, French pollsters predicted the four leading candidates in the presidential election first round would score 24%, 22%, 20% and 19%. All four predictions proved accurate to within a percentage point.
And more recently, Italy’s pollsters turned in a respectable performance, underestimating the final 26% winning score of the Brothers of Italy leader, Giorgia Meloni, by little more – on average – than one percentage point.
It is the failures, however, that people remember. How do they happen? Pollsters work using samples of voters whose raw responses are weighted to make them as representative as possible. Any errors, therefore, tend to lie in the sample selection method, or the statistical adjustments applied afterwards, or both.
“Occasionally there’s a specific reason, like a very low turnout or a very late swing,” said Anthony Wells, head of European political and social research at the pollster YouGov. “Almost always, though, if there’s a big error, it’s to do with the sample.”
Typically, Wells said, this amounts to “not controlling for demographics that have become significant for this election”. All samples “contain some skew”, he said; the most obvious – age, gender, social class – are routinely controlled for.
“The problems arise when the sample is skewed in a way we didn’t expect, and haven’t adjusted for,” he said. In the Brexit referendum, for example, UK pollsters have concluded they went wrong largely because they failed to weight sufficiently for education.
US pollsters came to essentially the same conclusion with their 2016 election, realising that voters without college degrees – who ended up turning out for Trump in huge numbers – had been badly underrepresented in state polls in particular.
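The correction Wells describes amounts to post-stratification: raw responses are reweighted so that the sample’s demographic shares match known population shares for a variable, such as education, that turns out to correlate with the vote. A minimal sketch, with invented categories, population shares and vote counts purely for illustration:

```python
from collections import Counter

# Hypothetical raw sample: each respondent has an education level and a vote.
sample = (
    [("degree", "A")] * 30 + [("degree", "B")] * 20 +
    [("no_degree", "A")] * 15 + [("no_degree", "B")] * 35
)

# Known (hypothetical) population shares for the weighting variable.
population = {"degree": 0.35, "no_degree": 0.65}

# Post-stratification weight: population share / sample share for each group.
counts = Counter(group for group, _ in sample)
n = len(sample)
weights = {g: population[g] / (counts[g] / n) for g in population}

# Compare raw and weighted vote shares.
for candidate in ("A", "B"):
    raw = sum(1 for _, v in sample if v == candidate) / n
    weighted = sum(weights[g] for g, v in sample if v == candidate) / n
    print(f"{candidate}: raw {raw:.3f}, weighted {weighted:.3f}")
```

Here degree-holders are overrepresented in the sample (50% versus 35% of the population), so their responses are weighted down and the no-degree responses weighted up, shifting the headline numbers. The catch, as the Brexit and 2016 US failures show, is that pollsters can only reweight on variables they have thought to collect and adjust for.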
In Brazil’s election, Andrei Roman of the pollster AtlasIntel told Bloomberg, many samples overrepresented poor voters, who generally support Lula. Partly, that was because Brazil has not carried out a census since 2010.
Moreover, while polling firms adjust for “shy” far-right supporters unwilling to tell the truth about their voting intentions, many Bolsonaro voters, like many Trump voters, may just have refused to respond, seeing polls as part of a “fake news establishment” – and leaving pollsters unable to reach a large part of the electorate.
Ultimately, pollsters stress, polling remains as much an art as a science, necessitating hard judgments not just about how different kinds of people respond to polls, but how they will end up actually voting. “We are constantly running to catch up,” said Wells. “Every election, there will be something different.”