IJF24 Reframing Visual Journalism panel AI Deepfake.

Building Trust and Authenticity in Visual Journalism in the Age of Deepfakes

With generative AI becoming increasingly advanced, nurturing trust and authenticity in visual journalism is not only more valuable, but essential.

This was the main takeaway from a panel discussion titled “Reframing Visual Journalism in the Age of Synthetic Media” at the 2024 International Journalism Festival in Perugia, featuring various experts in the field and moderated by The VII Foundation’s education director, David Campbell.

The panelists discussed how and why visual journalism needs to be elevated in contemporary discussions; the danger synthetic (fake) media poses to the role of images in documenting events and issues; and why generative AI is likely to increase media organizations’ reliance on authenticated visual content.

A Visual as a Document of Reality

Generative AI poses a real threat to journalism and photojournalism, because photography is supposed to be a document, so there has to be “adherence to reality,” according to Alessia Glaviano, senior photo editor at Vogue Italia and head of Global PhotoVogue. “What does it mean when… you can create an image just by sitting on your couch in your living room, saying you are reporting from Gaza? This could be a real danger for media authenticity,” she warned.

To illustrate a point about people’s innate trust in the authenticity of images, Glaviano referenced a recent scandal in which the Princess of Wales, Kate Middleton, admitted to doctoring a photograph of herself with her children that had been posted on the British Royal Family’s official Twitter account. Glaviano said that some of the public’s shock that the photo was manipulated took her by surprise — in a good way. “I was positively shocked… that people still believe in photography as a document,” she said.

Citing a rare example of AI being used to illustrate reality, rather than fabricate it, Glaviano discussed Exhibit A-i: The Refugee Account, in which researchers, photojournalists, and AI technicians worked with refugees to create images depicting the real experiences of discrimination, abuse, and violence the refugees faced in offshore detention centers in Australia. In this exceptional case, AI was used to fill a void where visual evidence was lacking.

Filtering the Authentic from the Synthetic

RSF head of AI and global challenges desk, Arthur Grimonpont. Image: Courtesy of IJF24, Bartolomeo Rossi

Arthur Grimonpont, head of the AI and global challenges desk at Reporters Without Borders (RSF), set a cautious tone in his presentation with a famous quote by German-American historian and philosopher Hannah Arendt: “If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer.”

He noted how in just two years, generative AI has advanced from child-like drawings to HD videos — to a point where it’s “totally impossible to distinguish between authentic and synthetic content.” This creates three main challenges for visuals in journalism: to identify authentic content in a “growing sea” of synthetic content; to maintain public trust; and to catch people’s attention with “actual photographs.”

In addition, Grimonpont raised concerns that today's social media algorithms work against journalism, in that they are designed to maximize advertising revenue and engagement. "[Generative AI and deepfakes] are entering an information ecosystem that systematically rewards sensationalism at the expense of true quality reporting," he said, labeling it the "biggest challenge" for visual journalism.

To demonstrate the dangers of visual misinformation, Grimonpont pointed to a 2020 tweet from Rihanna, who shared a misleading artist rendering of the wildfires in Australia. Image: Screenshot, Twitter

He also recalled how, in 2020, pop artist Rihanna had shared a false image of Australia's wildfires on Twitter. Referred to as the Black Summer, the deadly fires caused the deaths of billions of animals across the continent, as well as serious ecological damage. The image, purportedly taken from space but "impossible" as an actual photograph (it was clearly fake or manipulated), was shared by Rihanna's millions of followers, then further amplified by Twitter's algorithms, and "got more visibility than scientific or journalistic content."

In response to a question from Campbell, of The VII Foundation, about the way forward for media outlets in light of these concerning challenges, RSF's Grimonpont highlighted the recent Paris Charter on AI and Journalism, published last November by RSF and 16 other organizations, including GIJN.

The Charter defines 10 key principles for safeguarding the integrity of information and preserving journalism’s social role. The principles include that ethics must govern technological choices within the media, that human agency must remain central in editorial decisions, that the media must help society distinguish between authentic and synthetic content with confidence, and that the media must participate in global AI governance and defend the viability of journalism when negotiating with tech companies, among other points.

Fact-checking Captions, Photo Sequences, Forensic Teams

Joumana El Zein Khoury, executive director of the World Press Photo Foundation, explained how her organization helps safeguard authenticity for visual journalism, emphasizing that “trust is really of the utmost importance.”

These safeguards include investing in forensic expertise to detect and remove manipulated images, asking photographers for raw files as well as the sequences of images, and having the foundation's team of researchers fact-check captions provided by photographers. El Zein Khoury also strongly suggested media outlets invest in forensic experts to examine the authenticity of the images they receive.

She added that media outlets need to highlight legitimate stories “produced by professionals who take risks to report the stories that matter… generative AI cannot be a witness — not yet anyhow.”

Watch the full video of the Reframing Visual Journalism in the Age of Synthetic Media panel on YouTube below.


Joanna Demarco is GIJN’s visuals and newsletter editor and has been working in journalism in Malta for the past decade, both as a local reporter and a freelance photojournalist. She has published her work in publications including Politico Europe, The Washington Post, National Geographic, and Der Spiegel.

Republish our articles for free, online or in print, under a Creative Commons license.

Material from GIJN’s website is generally available for republication under a Creative Commons Attribution-NonCommercial 4.0 International license. Images usually are published under a different license, so we advise you to use alternatives or contact us regarding permission. Here are our full terms for republication. You must credit the author, link to the original story, and name GIJN as the first publisher. For any queries or to send us a courtesy republication note, write to hello@gijn.org.
