Debate: Reporting pre-election polls: it is less about average Jane and Joe, and more about polarized Karen and Kevin

2024 marks the year of elections, with at least 64 countries holding national or regional elections, collectively representing almost 50% of the total world population (Ewe, 2023; Meakem, 2024). Politicians, political parties, think tanks and market research institutes monitor weekly (if not daily) voting polls to gauge election candidates’ performance. They track the effectiveness of electoral campaigns in convincing citizens to cast their votes in support of a candidate’s ideology, vision, manifesto, strategic plan or governance programme. Voting polls also serve as a valuable tool to assess the impact of scandals surrounding election candidates (whether initiated by unscrupulous opponents or not) on a candidate’s public image (Pereira & Waterbury, 2019; Rienks, 2023; Von Sikorski et al., 2020), and offer easy topics for grateful news outlets.

What is wrong with today’s voting polls?

Typically, poll data are presented as averages or percentages: ‘23% of voters would choose Candidate ABC, while … ’. However, only a few news articles delve into the sampling approach and sample size. Even fewer provide insight into variation in responses or confidence intervals. Almost none explain what all of this means for the validity and reliability of poll results: for example, in predicting actual election outcomes, forming personal opinions on voting choices, or strategizing post-election coalitions and parliamentary majorities. For critical readers and researchers, it is therefore a Sisyphean challenge to uncover more information, through often inaccessible websites, about sample proportions, applied weighting and the exact survey questions asked. Furthermore, when additional information is available, it is seldom complete, and journalists often overstep, drawing conclusions from poll data beyond what can reasonably be inferred. Unfortunately, the impossibility of validating journalists’ claims about poll data is a widespread issue for many news outlets, including major institutions in Austria, Belgium, Germany, the Netherlands, the USA and the UK (the six countries whose voting poll data the authors follow).

But why would journalists, who increasingly assess their own journalistic quality by social media clicks, shares and ‘likes’, be critical and nuanced about the data they report? After all, surprisingly high or low poll values for a particular party or candidate provide tempting content for clickbait titles, while adding critical nuance to such uncontextualized numbers would essentially mean acknowledging that readers are misled by those very titles. A journalist from a prominent Belgian news outlet told us: ‘Issues of reliability and confidence intervals are too complex for the average person; people are not interested in these details’. But is it not the responsibility of a journalist to clarify that a difference of, for instance, less than 5% between two election candidates, derived from a sample comprising under 0.05% of the voter population, does not warrant a (clickbait-titled) newspaper article?
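The arithmetic behind this question can be made concrete with a minimal sketch, assuming a simple random sample and a normal approximation; the candidate shares and sample size below are illustrative, not taken from any real poll:

```python
import math

def moe_diff(p1, p2, n, z=1.96):
    """95% margin of error for the difference between two candidates'
    shares estimated from the same simple random sample."""
    # Variance of (p1 - p2) for two multinomial shares includes a
    # covariance term: Var = (p1 + p2 - (p1 - p2)**2) / n
    var = (p1 + p2 - (p1 - p2) ** 2) / n
    return z * math.sqrt(var)

# Illustrative numbers: candidates polling at 28% and 24% in a
# sample of 1,000 respondents (roughly 0.05% of a voter
# population of two million).
p1, p2, n = 0.28, 0.24, 1000
lead = p1 - p2
moe = moe_diff(p1, p2, n)
print(f"observed lead: {lead:.1%}, 95% margin of error: +/-{moe:.1%}")
# The 4-point 'lead' is smaller than the margin of error,
# so the gap is indistinguishable from sampling noise.
```

With these numbers the margin of error on the difference is about 4.5 percentage points, larger than the reported 4-point gap: exactly the situation the rhetorical question describes.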

We must not be naïve. The true impact of reporting poll data in national media lies not in the data itself, but in the substantial amplification effect it has on public opinion, particularly among undecided voters (Kuha, 2022). Through the dissemination of poll results, politicians gain additional opportunities for campaigning, including giving interviews, ridiculing opponents, or exploiting the collective confirmation bias and bandwagon effect inherent in uncontextualized data reporting. This multiplication mechanism is well understood by politicians and journalists. For example, in Austria, a former chancellor and his political entourage are currently under legal scrutiny for intentionally using poll data biased in favour of his persona to generate momentum within his party, eventually influencing national elections (Bennhold, 2021). In Belgium, several politicians resigned in 2022 after continued critique initiated by a national poll that was sponsored by two major news outlets and given massive media attention (by these and other news outlets), despite repeated critical comments by several political scientists that this disproportionate and unnuanced attention stood in strong contrast to the available poll data (Abbeloos, 2022; De Maeseneer, 2022).

And even if journalists could be persuaded to be more cautious about generating excitement over poll data, these data are typically reported as aggregated percentages and averages. Aggregated evaluations, however, fail to provide insight into the extent of differences in citizens’ opinions and their compatibility with a workable post-election solution. For example, an extreme right-wing party might ‘win’ elections by becoming the largest party in a multi-party parliamentary democracy (as in the national elections in the Netherlands in 2023). However, if a substantial proportion of citizens who did not vote for this party strongly opposes a coalition with it, viable solutions become challenging in post-election negotiations. Eventually, the spread in opinions entails that parties and candidates, to form government majorities, must make substantial post-election compromises that do not align with the reasons their respective voters had for supporting them initially.

Likewise, an average trust or satisfaction score of, for instance, five out of ten may indicate (when there is not much variation in opinions) that all citizens more or less agree that government or party performance has been mediocre. In such a scenario, there is room for improvement and the situation can be changed with an approach directed towards all citizens, or at least a substantial majority of them. In a polarized society, however, the same score may result from a substantial proportion of citizens being (very) satisfied, while a smaller but still substantial proportion is extremely dissatisfied. Consequently, government and party strategies for improving trust and satisfaction need diversified approaches towards specific groups in society.
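How identical averages can hide very different opinion landscapes is easy to illustrate with a small sketch; the two score samples below are invented for illustration only:

```python
from statistics import mean, stdev

# Two hypothetical samples of 0-10 satisfaction scores with the
# same average but very different shapes (illustrative data).
consensus = [4, 5, 5, 5, 6, 5, 4, 6, 5, 5]   # everyone rates 'mediocre'
polarized = [9, 9, 0, 9, 0, 9, 0, 9, 0, 5]   # satisfied block vs. angry block

for name, scores in [("consensus", consensus), ("polarized", polarized)]:
    print(f"{name}: mean = {mean(scores):.1f}, "
          f"standard deviation = {stdev(scores):.1f}")
```

Both samples average exactly 5.0, yet the standard deviation (about 0.7 versus 4.5) reveals that the second electorate calls for the diversified strategies described above, not a single approach aimed at everyone.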

What should we do about it?

To properly assess the relevance of poll data and contribute to depolarization rather than polarization, a more profound understanding is needed of the differences and variance in citizens’ opinions. We therefore formulate the following recommendations:

  • Provide more details on sampling, research methods and data quality assurances, even when perceived as complex.

  • Formulate conclusions and claims judiciously (aligning them with the sample size, survey design and analysis methods), even if it makes an article less ‘breaking’.

  • Report with greater detail and nuance regarding variations in the poll data and provide details on confidence intervals. Articulate the level of uncertainty of claims made using the available data.

  • Make the raw poll data, complete survey and research protocol publicly available. This is good practice in the scientific community. With the impact that national media have, it should soon become good practice for journalists too.

  • For policy-makers, public administrators and politicians, we recommend engaging carefully with poll data and public reactions to it. Only nuanced reactions and careful decisions can contribute to depolarization and continuing stable policy-making and public services.

  • For researchers, we recommend staying vigilant and contributing actively to a constructive debate on good data (reporting) practice, both within the scientific community and far beyond its boundaries.
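As a concrete illustration of the recommendation to report confidence intervals, a headline percentage can be published with its uncertainty attached. The sketch below uses the Wilson score interval and assumes a simple random sample; the 23% share and n = 800 are illustrative numbers, not from any real poll:

```python
import math

def wilson_ci(p_hat, n, z=1.96):
    """Wilson score 95% confidence interval for a poll proportion."""
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n
                         + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(0.23, 800)
print(f"'23% of voters would choose Candidate ABC' (n = 800) is better "
      f"reported as: between {lo:.0%} and {hi:.0%} (95% confidence interval)")
```

For these numbers the interval spans roughly 20% to 26%, a six-point band that a reader can weigh against any claimed lead over a rival candidate.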

Disclosure statement

No potential conflict of interest was reported by the author(s).

References