
Illustration: Marcelle Louw for GIJN

Revised Elections Guide for Investigative Reporters: Political Messaging and Disinformation

Editor’s Note: This reporting guide has been updated and revised for the 2024 election cycle. It was originally published in 2022 and the previous version of this chapter can be read here.

A useful starting point for combating election disinformation in the current global landscape is adopting a transparent and unapologetic position: that journalism does, and should, take sides on the issue of democracy, and serves as one of its guardians.

At a minimum, experts say, this basic position helps to counter claims of party-political bias, and it sharpens the choice of which bad actors and which kinds of disinformation to investigate: those designed to undermine democratic rights.

One report on election disinformation by the Reuters Institute for the Study of Journalism at the University of Oxford identified a strategy shared by independent newsrooms in at-risk democracies like the Philippines (Rappler), India (The Quint), and South Africa (Daily Maverick): “a common sense of democratic mission associated with accountability journalism.”

These newsrooms have chosen to pay less attention to the “antics” and divisive statements by populist candidates and to focus instead on investigating topics with a direct impact on communities, and on the early identification of those deception campaigns with the potential to do the most damage.

Another structural strategy is for newsrooms to publicly commit to investigate the bad actors behind disinformation campaigns — both to attract sources and whistleblowers, and to put healthy pressure on reporters to dig beyond the falsehoods. For instance, for 2024 — in a model that involves AI tools, and which could be replicated in other countries — the UK-based Bureau of Investigative Journalism has launched an Influence Operations project dedicated not only to revealing efforts to dishonestly manipulate voters, but also to exposing the people behind them.

One great example of a bad actors-focused election disinformation investigation was the collaborative Digital Mercenaries cross-border project, in which newsrooms from 16 countries, including Argentina, Venezuela, Nicaragua, and Mexico, exposed a network of commercial consultants dedicated to deceiving voters throughout Latin America. Co-ordinated by the Latin American Center for Investigative Journalism — or CLIP, by its Spanish acronym — the series showed how influence “mercenaries” exploit xenophobic fears and partisan hatred to mislead, and used follow-the-money techniques to show how their strategies are often exported across borders. The project noted that these so-called dark PR professionals “like to think of themselves as strategists, but they seem more like televangelists of formulas to manipulate” — and cautioned that they sometimes have more influence on voting choices than politicians themselves.

Social media platforms have, of course, grown into powerful domains of political messaging in elections. Their impact runs from the positive, such as boosting young and disenfranchised voter engagement, to the frivolous, like passing around candidate memes, to misleading paid advertisements and coordinated disinformation campaigns, to nativist or partisan platforms that welcome hate speech. Meanwhile, several major social media platforms have gutted their already inadequate safeguards and hate speech watchdog teams.

Disinformation is funded and coordinated because it has the proven power to swing national elections, and to power new anti-democratic laws that can ensure future skewed election wins. A classic study of the US 2016 election by Ohio State University found that 4% of likely Hillary Clinton supporters — a decisive margin — were dissuaded from voting for her due to wildly false “news” stories. These included headlines like “Clinton approved weapons sales to Islamic jihadists, ‘including ISIS’” — a story believed by 20% of former Barack Obama voters.

Major New Disinformation Threats to Monitor

A New Level of Foreign Interference

Illustration: Marcelle Louw for GIJN

While several elections have suffered isolated foreign attacks in recent years, digital security groups and pro-democracy think tanks have warned that established autocracies — including Russia, Iran, and China — will likely use the major election calendars of 2024 and 2025, and their generally chaotic politics, as a historic opportunity “to discredit democracy as a global model of governance.”

Experts told The New York Times that many campaigns will likely echo a recent Russian influence project called Doppelgänger, which “cloned” 17 trusted media brands with similar domain names, designs, social media profiles, and search redirect programs, and used AI tools to generate fake articles and posts. (EU DisinfoLab researchers used tools such as the Meta Ad Library, CrowdTangle, and open source domain infrastructure tools to track the Russian network.) Another model is represented by China’s “Spamouflage” operation, which uses commercial ads as camouflage for targeted political messages — and which responded to big-tech moderation crackdowns by easily pivoting to small social platforms and forums.

Meanwhile, the most common form of both foreign and domestic cyber attack is the DDoS (distributed denial of service) assault against campaign websites and independent election commissions, often timed to election deadlines like voter registration and polling days.

Tip: For unverified but sometimes powerful leads on foreign influence ops, check updates from volunteer troll farm watchdog groups like antibot4navalny.

Deepfakes and AI Speech Impersonations

Although, thankfully, sophisticated “deepfake” videos and images did not disrupt recent elections as media manipulation experts had feared, a lower-tech form of disinformation known as “cheapfakes” has frequently sought to deceive voters in several countries. These include editing the closed captions on real video clips of candidates to add false or outrageous statements, so that voters viewing the clips on their phones without audio might believe the falsehoods.

However, those experts now fully expect deepfakes, particularly AI-generated voice impersonations, to threaten elections in many countries in 2024, due to the new, easy availability of artificial intelligence tools. Their fears of impact were underlined by a recent study by University College London, which found that humans cannot detect speech deepfakes — in which a person’s voice is cloned by a machine learning tool to state a fabricated message — 27% of the time.

Days before Slovakia’s election in October 2023, the viral spread of deepfake audio that simulated the voice of the leader of the pro-Western Progressive Slovakia party — and which featured a discussion of rigging the election — was linked to a narrow victory for a pro-Russia party. According to a recent report by Infosecurity Magazine: “From the US presidential race to European and Indian polls, the world must brace itself for a year of unprecedented cyber threats to its electoral systems.” The report also quoted one cybersecurity expert who warned of a new tactic in 2024: that disinformation experts will use these techniques to target key human influencers who have already amassed large audiences, “and intercept and interrupt their followers, and create false narratives that will heighten tensions and cause the political bases to become more divided.”

The Myth of Consequential In-Person Voter Fraud

It’s hard to think of a disinformation topic more widespread, transferable, and baseless — and, indeed, successful — than the claim that tens of thousands, or even millions, of individual voters risk criminal prosecution to vote multiple times, or successfully impersonate fellow citizens at the polls. For instance, research by the Brennan Center for Justice has placed the incidence rate of impersonation fraud in the US at between 0.0003% and 0.0025%, making a voter statistically more likely to be struck by lightning. The notion that it changes election results is more far-fetched still. Yet — from Brazil to the US and Myanmar — populists and their messaging allies, and even juntas, have built effective disinformation campaigns around this myth that intimidate minority and immigrant communities, and that have generated new voting restriction rules that do, in fact, rob thousands of citizens of their rights, and which can change results.

Tip: Use data-based graphics and effective metaphors to demolish the myth. For instance: imposing voter restrictions to counter in-person voter fraud is like abolishing car seat belts because a handful of drivers were trapped by jammed seat belt buckles after crashes in the course of a year.

‘Flooding the Zone’ with Confusion

Journalists are familiar with the threat posed by conspiracy theories — but numerous nations are now suffering from a deluge of what some researchers call “conspiracy theory without the theory.” This refers to a powerful messaging tactic once described by former Trump advisor and far-right provocateur Steve Bannon as “flooding the zone with shit.” The goal here is to erode the public’s trust in media and democratic institutions by overwhelming people with a barrage of claims and conspiracies without bothering to offer even fake or anecdotal evidence. Pushed by partisan supporters and amplified by social media algorithms that prioritize angry speech over facts, this kind of “naked assertion messaging” represents a hidden threat to democracy everywhere, according to experts, because it “imposes its own reality through repetition.” (See this investigation by Rolling Stone that shows how Trump himself became the chief source and amplifier of wild, easily disprovable claims about critical election data infrastructure in the US.)

Branko Čečen, director of the Center for Investigative Journalism of Serbia (CINS), warns that an alarming number of voters in the Balkans region have fallen under the thrall of completely groundless online conspiracy theories. Čečen notes that this flood of naked claims has generated political apathy in other citizens, who have “just retreated.”

Tip: Relentlessly repeat the evidence-based truth of an issue prominently, or high up, in published investigations, and use data infographics wherever possible.

Meanwhile, state-run and partisan traditional media corporations continue to be the dominant misinformation amplifiers for autocrats and populist candidates. In places like Africa and India, these elements have manufactured false narratives via complex and disciplined messaging networks. The growing consolidation and monopolization of privately owned TV channels by allies of authoritarian leaders drown out opposition voices in countries like Serbia and Poland. As the Media & Journalism Research Center points out, there has been a far greater research focus on social media disinformation than on the election impacts of partisan media ownership. As a result, the center has launched a searchable data gathering project called Decoding the Power Play: Media and Elections in 2024, which already features some useful background information and aims to “uncover the connections between media corporations and political entities” across 40 nations holding elections in 2024.

Thanks to data sharing and tracking, online campaign advertising can now be precisely micro-targeted to demographic subgroups. According to global fact-checking nonprofit First Draft News, the resulting “patchwork of laws and a laissez-faire approach to political advertising has resulted in a lack of accountability in political campaigns, and the spread of falsehoods, mis- and disinformation.”

While fact-checking organizations generally do an admirable job of exposing online falsehoods and identifying manipulated images, the role of tracking the individuals behind election disinformation or dirty tricks is often left to investigative journalists. They’ve become the primary source of answers to critical questions about election messaging, such as: Who is driving the political messaging online? What conversations do they trigger among which voters? Where does the funding for the candidates come from?

For reporters in established democracies under siege from disinformation and anti-democratic forces, Sarah Blaskey, an investigative reporter with the Miami Herald, offers this insightful suggestion: Democracies are much more fragile than you think, so cover your election as if you are a foreign correspondent.

In this chapter, we share useful techniques for tracking political conversations, campaign advertisements, and disinformation narratives online, as well as tips for finding the sources of these campaigns based on interviews with some of the world’s leading media manipulation experts. We also share a list of “dirty tricks” that journalists need to watch out for and explain how to reveal the culprits or questionable laws behind them.

A quick note on terms: This chapter uses the terms misinformation, which is broadly defined as false information that is spread regardless of whether there is intent to mislead, and disinformation or “influence operations,” which refers to the publishing of deliberately misleading or biased information, manipulated narratives or facts, and propaganda.

Messaging Trends to Watch

First Draft News has flagged three main tracks for election disinformation:

1.  Disinformation intended to discredit candidates and parties.

2.  Information operations intended to disrupt the voting process and discourage participation, like misleading voters about the times and places to vote.

3.  Falsehoods designed to undermine public confidence in the results.

Claire Wardle, co-founder of First Draft News, warns that the “weaponization of context” — in which genuine content is willfully distorted — is the most persuasive form of disinformation in elections. Check out AFP’s new, two-part online course — “How to Tackle Disinformation During Elections” — which is presented in French, English, Spanish, and Portuguese, and is designed to help investigative reporters digging into campaigns around the world in 2024.

The recent rise of generative AI bots and other easy-to-access artificial intelligence apps presents an alarming threat to electorates: a single activist can now flood the internet with dozens of pieces of AI-generated content that credibly masquerade as well-sourced newsroom stories or original videos relevant to the election.

Political campaigns are also increasingly using new technologies to get around rules to protect voters from phone harassment and intimidation, and can use anonymous mass text campaigns to urge supporters to swarm targeted events or voting stations. For instance, the US Federal Communications Commission bans the use of “autodialing” to text political messages to citizens. In response — rather than hire armies of volunteers to manually dial all the numbers — some campaigns are using semi-automated, peer-to-peer partisan texting platforms to bombard people with unsolicited mass messages.

Similarly, Indian Prime Minister Narendra Modi’s Bharatiya Janata Party has built a massive army of volunteers who are primed to repeatedly share pre-written campaign messages and themes with their friends and family. An analysis by the Africa Center for Strategic Studies found rampant disinformation election campaigns across numerous countries on that continent. “The coordinated disinformation we have uncovered is just the tip of the iceberg,” explained Tessa Knight, a South Africa-based researcher with The Atlantic Council’s Digital Forensic Research Lab (DFRLab). “It is expanding as governments and political figures learn to manipulate social media algorithms through fake, duplicated, and coordinated content production.”

Techniques for Tracking Political Messaging

Given the information deluge involved with elections, experts in the field try to organize and automate their monitoring as much as possible. “When following people driving the conversation, the important thing is to let algorithms do the work — so when you follow one account, Instagram will automatically recommend a handful of other, related accounts,” says digital media manipulation expert Jane Lytvynenko.

Find where the political conversations are happening. Every country, region, and ideological group has its preferred social media and messaging platforms — WhatsApp dominates in southern Africa and many parts of Latin America; in China, it’s WeChat; among right-wing groups, it’s largely Telegram or VK; and in the Philippines, it is almost entirely Facebook. So, it’s crucial to identify the key platforms and messaging apps in your election landscape at the outset.

Find out where the political conversations are happening online in your country. Image: Shutterstock

“The first thing that’s really important is to understand where conversations are happening, and the trends,” explains Lytvynenko. “For instance, we see a huge uptake of Telegram in countries like Brazil, but it’s dwindling in the US. But be aware that Facebook groups continue to be a huge vector for mis- and disinformation.”

Discover what voters are trying to find out. Use the filters within the Google Trends tool to learn what voting communities are searching for, and to spot sudden accelerations of interest in an election topic.
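
For reporters comfortable with a little scripting, the unofficial pytrends library (not affiliated with Google) can pull Trends data programmatically. Below is a minimal sketch, assuming pytrends is installed via pip; the keywords, region, and timeframe are hypothetical placeholders to adapt to your own election landscape.

```python
# Minimal sketch using the unofficial pytrends wrapper (pip install pytrends).
# Keywords, region, and timeframe are hypothetical examples, not recommendations.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["voter ID", "postal vote"], timeframe="today 3-m", geo="GB")

interest = pytrends.interest_over_time()  # interest scores (0-100) over time
print(interest.tail())

related = pytrends.related_queries()      # "top" and "rising" queries per keyword
print(related["voter ID"]["rising"])
```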

Ask the right questions on disinformation, in sequence. In her disinformation chapter for GIJN’s Investigating Digital Threats guide, Lytvynenko says: “The initial question every reporter should be asking is whether they’re looking at a single incident or a wide-scale attempt at manipulation… There are several indicators and questions that can help here: when the accounts were created, when the content is shared, who amplified the content on different platforms, and what are the commonalities in the content itself? Timing can also be telling — has some of the content been shared within minutes or even seconds from accounts with similar characteristics?”

Minimize harm when describing the falsehood. Says Lytvynenko: “Put the accurate information in the headline. In the body of the text, adopt the ‘truth sandwich’ approach: accurate-inaccurate-accurate. This will help readers remember the true rather than the false information. When linking out, send your readers to an archived version of the false information to avoid bringing traffic to disinformers.”

Try the copy-paste trick to check for allied partisan sites. This is one technique for spotting coordinated influence campaigns that takes just seconds to complete. Copy a chunk of text from the “About” or “Home” page of hyper-partisan websites, paste it into Google, and within moments see if it’s replicated on other sites. Also, look out for similar logos and layouts, as these could indicate the involvement of the same web designer. (If you get hits, you can then dig for the hidden owner of the original site by using a tool like Whoxy.com, or following the “UA/Pub” method described in this guide’s New Election Digging Tools chapter.)
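
If you want to script the first step of this check, the sketch below simply builds an exact-phrase Google search URL from a pasted snippet (the snippet is a hypothetical example). Open the resulting URL in a browser rather than scraping the results, which Google’s terms restrict.

```python
# Build an exact-phrase Google search URL from a chunk of "About" page text.
# The snippet is a hypothetical example of boilerplate partisan-site copy.
from urllib.parse import quote_plus

snippet = "We are an independent voice for real people in our community"
search_url = "https://www.google.com/search?q=" + quote_plus(f'"{snippet}"')
print(search_url)  # hits on other domains suggest a shared template or operator
```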

Find local researchers already studying political ads. “Are there academics in your country who study political advertising? There usually are such people in almost any country,” says ProPublica’s media manipulation expert Craig Silverman. “Definitely talk to them, and see what studies and data they’re collecting, which could be different from what the platforms are putting out.”

New Tools for Disinformation Digging

SimilarWeb for tracking traffic on platforms like WhatsApp. “Closed” platforms such as WhatsApp have become so important for political messaging in Southern Africa and Latin America that newsrooms might consider one of the paid-for, commercial tools that can accurately measure and track traffic among chat groups there. Some investigative journalists recommend SimilarWeb — a complex tool more commonly used for marketing intelligence — as a powerful way to see which disinformation websites get the most shares on private and political WhatsApp groups. A more user-friendly WhatsApp analysis tool is Palver, although this is limited to South America.

Junkipedia for tracking disinformation networks. As described in the New Election Digging Tools chapter of this guide, Junkipedia offers databases and remarkable insight into at least 12 social media platforms, and allows reporters to build account lists and even auto-transcribe and search fringe podcasts.

Content verification tools. While fact-checking organizations do an excellent job debunking election falsehoods, it’s still essential for investigative reporters to have a solid verification toolkit to find patterns behind problematic posts and content. Craig Silverman’s seminal Verification Handbook includes dozens of open source tools, and tailors many of these to specific kinds of content. Look for tools that specifically examine election claims directed at minority groups — such as the Factchequeado portal, which checks Spanish-language content aimed at Latino voters in North America. Meanwhile, the powerful WeVerify platform has been popular among election reporters for the past few years. “You can use it to reverse search images or videos, and compare images for manipulation,” Lytvynenko says.

A tools methodology for digging into Telegram. Jane Lytvynenko suggests this three-step strategy for mining election threads on Telegram — a hugely important messaging force in numerous election campaigns, and a focal point for many far-right groups.

  1. Use the following operator in Google — site:t.me (keywords) — replacing (keywords) with your search terms, to find the few Telegram channels that might be useful to you.

  2. Then open the tgstat tool, and plug in the interesting channels you find. “Tgstat is particularly useful because it gives you a view of the ecosystem, and then you follow those channels,” says Lytvynenko. Reporters can also use the Telegago tool for these searches.

  3. Download the Telegram desktop app directly from the site. Doing so, Lytvynenko notes, gives you the option to export the conversation history. “That helps with any kind of bulk analysis. The beauty of the Telegram desktop app is that once you’re subscribed to enough channels, you can just use it as a search engine.” She also suggests that journalists try metadata2go.com to directly dig into the metadata behind videos and images on Telegram. (See the minimal export-parsing sketch after this list.)
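
To give a flavor of that bulk analysis, here is a minimal sketch that parses a Telegram Desktop chat export. It assumes the JSON format produced by the app’s “Export chat history” option, which typically writes a result.json file; field names can vary between versions, so treat this as a starting point.

```python
# Parse a Telegram Desktop JSON export (result.json) for top posters and
# keyword mentions. The format is assumed from the desktop app's
# "Export chat history" option and may differ between app versions.
import json
from collections import Counter

with open("result.json", encoding="utf-8") as f:
    chat = json.load(f)

def flatten(text):
    # The "text" field may be a plain string or a list of strings/entity dicts.
    if isinstance(text, str):
        return text
    return "".join(p if isinstance(p, str) else p.get("text", "") for p in text)

messages = chat.get("messages", [])
top_posters = Counter(m.get("from", "unknown") for m in messages)
print("Top posters:", top_posters.most_common(10))

keyword = "ballot"  # hypothetical search term
hits = [m for m in messages if keyword in flatten(m.get("text", "")).lower()]
print(f"{len(hits)} messages mention '{keyword}'")
```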

In addition, a new, journalist-built tool called Telepathy has quickly become known as “the Swiss army knife of Telegram tools,” because it can not only show how channels are linked, but can also archive entire chats, identify top posters, and collect member lists. While disinformation researchers call it “user-friendly,” the tool does require basic command-line skills to install and run. It has both free and paid tiers.

A Snapchat Snap Map showing geolocated posts from Kabul, Afghanistan. Image: Screenshot

Snap Map for breaking news. Snapchat’s Snap Map feature shows heatmaps of messaging activity and allows you to zoom in to a spot on a map, and watch the video snaps being taken there in real time. “It’s a useful tool, and it allows you to collect more context in a breaking news situation,” says Craig Silverman.

Find hidden domain owners with Whoxy. For those without command-line WHOIS search skills, Whoxy.com allows reporters to search for the owners of problematic campaign-related sites or domains by email, person, or company name. Mexican disinformation reporter Emiliano Fernández also recommends the free DNSdumpster site and the paid-for Iris Investigate dashboard as useful tools for digging into the hidden world of domain data. See this comprehensive checklist of clues to follow when investigating hidden individuals behind campaign-related sites.
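
For repeated lookups, Whoxy also offers a simple JSON API (a paid key is required). The sketch below is based on the endpoint and response fields in Whoxy’s public documentation at the time of writing; verify against the current docs, and note that the domain here is a hypothetical example.

```python
# Query Whoxy's JSON API for WHOIS details on a suspect domain. The endpoint
# and response field names are assumptions from Whoxy's public docs; verify
# before relying on them.
import requests

API_KEY = "YOUR_WHOXY_API_KEY"        # requires a Whoxy account
domain = "example-campaign-site.com"  # hypothetical domain under investigation

resp = requests.get(
    "https://api.whoxy.com/",
    params={"key": API_KEY, "whois": domain},
    timeout=30,
)
data = resp.json()
registrant = data.get("registrant_contact", {})
print(registrant.get("full_name"), registrant.get("email_address"))
print("Registered:", data.get("create_date"))
```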

Auto-archive election investigations with Hunchly. Emiliano Fernández stresses that it is crucial to preserve a record of online campaign investigations, and says the desktop version of Hunchly not only captures and saves every web page you visit, but also automatically organizes these sites for separate investigative projects. “When the extension is activated, all visited sites are neatly organized within the designated folder,” he explains.

Followerwonk to compare partisan accounts. The Followerwonk tool allows reporters to track and compare followers of multiple Twitter/X accounts.

Mozilla’s analysis tool tracking internet shutdowns. Mozilla has opened access to a vast dataset on internet outages around the world, some of which can be shown to be election-related. Apply for free access to the dataset through this form.

Tools to Dig Into the Sources of Online Political Ads

Online political ad campaigns increasingly use micro-targeting tactics, in which different ads, or slightly tweaked versions of the same one, target specific demographic subgroups. For instance, an analysis of the 2019 UK general election by First Draft News found that a Conservative Party ad stating “Get Brexit Done!” exclusively reached male users under 34 years old. But a near-exact copy of that same advertisement, with just a new subtitle focused on health services and safety, was only seen by women. The report explains that social media algorithms amplify messages to groups that initially respond the most, eventually creating a reliable outreach strategy. And online ads can be a bargain for campaigns: First Draft News found one Snapchat ad that attracted more than half a million impressions but cost only $765 to run.

Search Facebook’s Meta Ad Library for political ads. Despite broad skepticism about data strategies at Meta/Facebook, several experts, including Julia Brothers, elections program manager at the US National Democratic Institute, say the Meta Ad Library has grown into an important global tool for digging into political ads and the groups behind them.

“It should be a standard practice in elections now for reporters to be scanning the library, to see what political ads are out there,” agrees ProPublica’s Silverman. “You can target specific pages; you can do keyword searches. I suspect Facebook puts less effort into smaller countries, but if people want to run political ads on Facebook, they are supposed to register ahead of time, and get approved by Facebook, and those ads should be archived for years.” Silverman says reporters can dig deeper by searching names of interest they find in the Ad Library in the OpenCorporates database.
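
Meta also exposes the archive programmatically through its Ad Library API, available to verified researchers and journalists. The sketch below is a minimal example; the Graph API version, parameter names, and fields reflect Meta’s documentation at the time of writing and change periodically, and the search term and country are hypothetical.

```python
# Minimal Meta Ad Library API query for political ads. Requires identity
# verification with Meta and an access token; the API version and field names
# follow Meta's docs at the time of writing, so check current documentation.
import requests

params = {
    "access_token": "YOUR_META_ACCESS_TOKEN",
    "search_terms": "election integrity",  # hypothetical keyword
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['GB']",
    "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,spend",
    "limit": 25,
}
resp = requests.get(
    "https://graph.facebook.com/v19.0/ads_archive", params=params, timeout=30
)
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"), ad.get("ad_delivery_start_time"))
```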

Check Google’s political ads transparency tool. Google’s Political Ads Transparency Center claims to offer a searchable, up-to-date database of election ads and their vetted sources. Reporters in the US can also explore the excellent NYU Ad Observatory — developed by New York University’s Cybersecurity for Democracy Unit — for deeper insights into the organizations behind Facebook advertising.

How to Track Actors Behind Election Disinformation

“Spreading false information, particularly on social media, is increasingly politically and financially profitable,” notes Lytvynenko. “One way to think about this as an investigative reporter is to ask: ‘Who benefits?’ If you’re worried about state-sponsored interference, ask whether the known disinformer states — Russia, China, Iran — benefit diplomatically from interfering in a local election narrative.” She adds: “Domestically, we see politicians using misinformation to bolster their own agendas; to make it seem that they have more support than they actually do; or to push through a particular policy.”

Given the shortened attention spans that social media helps generate, Lytvynenko says visual forms of disinformation are increasingly powerful — and recommends this comprehensive Washington Post guide for understanding the threat.

Disinformation experts recommend The Washington Post’s guide to checking for manipulated video. Image: Screenshot, The Washington Post

Who are the disinformers in elections? These can be political operatives behind coordinated domestic campaigns; hyper-partisan news media; foreign, state-backed troll farms; anti-democratic or political extremists; special interest groups; dedicated social media propaganda networks; and sometimes teenagers who have found a way to monetize political falsehoods, at a fraction-of-a-cent per site visit. They can also be unwitting participants. As First Draft News points out, a sincere assertion by a well-meaning citizen on a narrow issue can be repurposed by disinformation agents for use in election messaging. Their report offered this example: “A [person] misreporting the cause of an Australian bushfire season as an arson wave, which is then picked up by conspiracy theorists with a climate denial agenda.”

In 2016, an investigation by Silverman revealed that more than 100 pro-Trump disinformation websites were being run by young propagandists in a single town in Macedonia, some of whom earned up to $5,000 per month in traffic-based advertising revenue. Most didn’t care about the ideological differences between frontrunners Donald Trump and his Democratic challenger, Hillary Clinton. They simply found social media shares by Trump supporters to be more profitable — and yet their false and misleading posts are believed to have had a malign influence on the 2016 elections in the US. More recently, French reporter Alexandre Capron found that a damaging disinformation campaign in the Democratic Republic of Congo was prompted by neither money nor political influence, but simply social media bragging rights.

“The first stage of misinformation we usually see is an absolute barrage of content on social media — usually visual content; sometimes out of context, sometimes clipped in a misleading way,” notes Lytvynenko.

Use quick verification tools as new story leads. While fact-checking organizations typically assume the primary role in debunking, suspicious images and claims often crop up in larger investigations — which also require verification and can trigger valuable new leads. In GIJN’s popular image verification guide, journalism trainer Raymond Joseph meticulously describes how to use several user-friendly tools, including the free Photo Sherlock app and the Fake Image Detector app to check for manipulated photos on social media, as well as tips on how to spot clues in the images. Especially useful for time-pressed reporters on the campaign trail: He also explains how journalists can check photos in seconds on their mobile phones, by dropping photo URLs or web addresses into Google Images.
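
To speed up that last trick, the short sketch below builds reverse image search URLs from a photo’s web address. Both URL patterns are assumptions based on publicly observed Google endpoints, which change periodically; the image URL is a hypothetical example.

```python
# Build reverse image search URLs from a photo's address. Both endpoint
# patterns are assumptions based on publicly observed URLs and may change.
from urllib.parse import quote_plus

photo_url = "https://example.com/viral-campaign-photo.jpg"  # hypothetical image
print("https://lens.google.com/uploadbyurl?url=" + quote_plus(photo_url))
print("https://www.google.com/searchbyimage?image_url=" + quote_plus(photo_url))
```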

Educate audiences on the common types of election propaganda. Misinformation and disinformation are, unfortunately, now so widespread that it’s important to understand the different species of falsehoods — like agitprop, which is designed to provoke the audience into a specific action, or gaslighting, which spins false and deceptive narratives to attack established facts and undermine trust — and to contextualize their differences. For more on this, see this helpful explainer by the nonprofit group Data & Society, and this information disorder toolbox from First Draft News.

Track disinformation superspreaders. How can you distinguish between coordinated social media election disinformation and the innocent sharing of popular, but inaccurate, messages? CooRnet, a program developed at Italy’s University of Urbino, uses algorithms to identify suspicious sharing patterns. A tool within the R programming language, CooRnet packs even more power when combined with the Gephi open-source visualization platform.
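
CooRnet itself is an R package, but its core idea, flagging accounts that repeatedly share the same link within a very tight time window, can be sketched in a few lines of Python. The sample data and the 30-second window below are hypothetical illustrations, not CooRnet’s actual defaults.

```python
# Toy sketch of coordinated link-sharing detection (the idea behind CooRnet):
# flag account pairs that share the same URL within a tight time window.
# The sample data and 30-second threshold are hypothetical.
import pandas as pd

shares = pd.DataFrame({
    "account": ["a1", "a2", "a3", "a1", "a2", "a4"],
    "url": ["ex.com/1"] * 3 + ["ex.com/2"] * 3,
    "time": pd.to_datetime([
        "2024-03-01 10:00:05", "2024-03-01 10:00:09", "2024-03-01 14:30:00",
        "2024-03-02 09:15:00", "2024-03-02 09:15:20", "2024-03-02 09:15:31",
    ]),
})

WINDOW = pd.Timedelta(seconds=30)
suspicious = set()
for url, grp in shares.sort_values("time").groupby("url"):
    rows = grp.reset_index(drop=True)
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if (rows.time[j] - rows.time[i] <= WINDOW
                    and rows.account[i] != rows.account[j]):
                suspicious.add((rows.account[i], rows.account[j], url))

print(suspicious)  # account pairs co-sharing the same link near-simultaneously
```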

Identify automated social media accounts. In addition to other useful insights, the Account Analysis app uses various techniques to expose suspected ‘bot’ accounts. For instance, its “Daily Rhythm” feature flags accounts that post tweets between 1:00 a.m. and 5:00 a.m. local time, when humans are typically asleep. Meanwhile, the Botometer tool offers scores on the likelihood that an account you’re investigating, or its followers, are bots.
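
A check like the “Daily Rhythm” one is also easy to reproduce on data you have collected yourself. The sketch below flags accounts whose posts cluster between 1:00 a.m. and 5:00 a.m.; the sample timestamps and the 50% threshold are hypothetical.

```python
# Flag accounts whose posts cluster between 1:00 and 5:00 a.m. local time,
# echoing the "Daily Rhythm" heuristic. Sample data and threshold are hypothetical.
import pandas as pd

posts = pd.DataFrame({
    "account": ["a1"] * 4 + ["a2"] * 4,
    "time": pd.to_datetime([
        "2024-03-01 02:10", "2024-03-01 03:40", "2024-03-02 01:55", "2024-03-02 04:20",
        "2024-03-01 09:10", "2024-03-01 12:40", "2024-03-02 18:55", "2024-03-02 21:20",
    ]),  # assumed already converted to each account's local time
})

posts["night"] = posts.time.dt.hour.between(1, 4)  # hours 1-4 cover 1:00-4:59 a.m.
night_share = posts.groupby("account")["night"].mean()
print(night_share[night_share > 0.5])  # accounts posting mostly while humans sleep
```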

Look for attempts to exploit falsehoods. Why do some politicians bother to push falsehoods that have already been debunked, and don’t appear to help their campaigns? “Once you tie disinformation to a certain politician or activist group, it’s important to see if they attempt to push any policies that would go hand in hand,” Lytvynenko says. “In the US, we see a huge restriction on voting rights off the back of the false ‘Stop the Steal’ campaign [which pushed the lie that Donald Trump was cheated out of victory in 2020]. That step will help you understand the purpose of the misinformation.”

Investigating Dirty Tricks in Elections

Election dirty tricks are often pursued only by investigative journalists, for several reasons: law enforcement agencies rarely address unethical tactics; voting oversight groups tend to react too slowly; media audiences are typically the targets or victims; and the actors behind one bad-faith campaign can often lead you to broader election scandals.

These are distinct from legitimate political tricks — like drowning media watchdogs in deliberately large data dumps. One classic example: the release of hundreds of pages of a candidate’s medical records close to a campaign deadline.

Rather, we’re talking about tactics designed to misinform or cheat voters, and they also include duly enacted laws that journalists can expose as anti-democratic, unethical, or racist. For instance, in 2014, a new law in the US state of Alabama mandated a narrow list of photo ID documents as acceptable proof to vote, including drivers’ licenses. But just a year later, partisan officials systematically closed the government offices that issue those licenses in neighborhoods likely to support the opposition party. To refute the claim that budget cuts were behind the moves, the Brennan Center, a nonpartisan law and policy organization, published a useful map showing how the 31 closures in the state happened overwhelmingly in counties with a high proportion of citizens likely to vote for the opposition.

Use crowdsourcing to expose the truth. The origins of misleading election robocalls — automated calls that can deliver massive numbers of prerecorded messages — are notoriously difficult to track. But experts believe crowdsourcing is one of the best ways to detect campaign dirty tricks like these. In 2018, the Comprova Project successfully uncovered falsehoods undermining Brazil’s elections when a collaboration of 24 media organizations published the same WhatsApp number, and received a flood of tips from their combined audience.

“Crowdsourcing is hugely important, particularly when it comes to identifying early narratives for misinformation — and especially for WhatsApp,” says Lytvynenko. “Do a tipline, and collaborate.”

Consider an investigation into ‘dirty tricks’ when you see the following warning signs:

  • A political party that stops trying to win the support of new voters. Typically, the only road to victory for parties that give up on growing their numbers — beyond maximizing supporter turnout, or forging coalitions — is to shrink poll results for the opposition, using anti-democratic strategies like targeted voter suppression, election fraud, intimidation, or the abuse of election laws.

  • Mass voter eligibility challenges by a small number of citizens. “Voter fraud vigilantes” who claim to be personally concerned about the eligibility of thousands of fellow voters are often pawns funded and coordinated for intimidation campaigns run by ideological groups or partisan businesspeople.

  • Staged, targeted traffic jams on election days and voter registration days.

  • Push-polling, where public opinion surveys use dishonest framing or manipulated messaging to purportedly show declining support for opposition candidates or policies.

  • Disinformation robocalls to discourage voting, like spreading false information about the election calendar and ID requirements.

  • Intimidating new voters by claiming voter registration will attract greater scrutiny by tax or immigration authorities.

  • Anti-competitive practices, like starving an opponent’s campaign of skills or media opportunities.

  • Deliberately confusing voters on mail-in ballot procedures.

  • Using “dark PR” strategies to smear candidates with false or exaggerated links to unpopular individuals.

  • Illegal or unethical use of public resources for campaign activity.

  • Re-registering an opposition candidate’s home address in a new political district.

  • Soliciting foreign interference.

Anti-democratic legislative tactics could also include the following:

  • Changing election rules to disadvantage, dissuade, or hinder opposition voters. These include laws designed to prevent voter registration on days popular with certain partisan communities, and laws that require the election identification that opposition voters are least likely to possess.

  • Laws from the “autocrats’ election playbook.” See the list of legislative dirty tricks commonly used by authoritarians mentioned earlier in this guide.

  • Extreme gerrymandering. Abuse of the process of redrawing political district boundaries can lead to elections in which voters don’t choose their leaders, but leaders choose their voters. As a result, parties that lose the popular vote by huge margins can still win control of representative bodies, making a mockery of democratic principles. While less of a problem in nations with proportional representation, like Israel and the Netherlands, or countries that allow non-partisan organizations to draw their legislative boundaries, like Australia and Canada, gerrymandering tactics remain a threat to voters’ rights in places like Hungary, the US, Hong Kong, Sudan, and the Philippines.

  • Laws that use the pretext of voter fraud claims to make voting harder. Research indicates that in-person voter fraud is extraordinarily rare around the world and inconsequential to national results, yet many political parties grossly exaggerate this non-issue to introduce laws that intentionally make voting harder for certain groups that tend to vote against them. If your local data shows that in-person voter fraud is less common than, say, injuries from lightning strikes, or holes-in-one scored by golfers on their birthdays, then use visualization tools like Flourish to highlight these data anomalies. “Remember that most election fraud narratives start out locally,” Lytvynenko notes.

  • Partisan polling station closures. Pulitzer-winning reporter David Cay Johnston says voter suppression by targeted polling station closures is on the increase in democracies. Look for databases similar to that of the Center for Public Integrity in the US, which can show targeted polling place shutdowns in areas of opposition support.

“Newsrooms need to ask audiences to be their eyes and ears in elections,” says Silverman. “So tell your readers or viewers: ‘If you see or hear attempts to deceive, or interfere with the vote, here’s how to reach us.’ After all — it’s their democracy.”


Rowan Philp is a senior reporter for GIJN. He was formerly chief reporter for South Africa’s Sunday Times. As a foreign correspondent, he has reported on news, politics, corruption, and conflict from more than two dozen countries around the world.
