In its December 2019 issue, The Economist published an interview with a piece of artificial intelligence (AI) called GPT-2. GPT-2 was trained to generate text, and in the interview it answered questions about the state of the world in 2020. On the question “What is the future of AI?” the bot replied: “It would be good if we used the technology more responsibly. In other words, we should treat it like a utility, like a tool. We should put as much effort into developing the technology as necessary, rather than worrying that it’s going to harm us and destroy our lives.”
GPT-2 does not really understand what it says, and although its answers in the interview are a bit vague and general, its performance is still pretty impressive. Earlier this year, OpenAI, the developer of GPT-2, introduced its successor, GPT-3, which is far more powerful. Automatic text generation is one of many examples of the potential power of AI in the media.
In June 2019 the European Science-Media Hub (ESMH) published an article on AI in journalism, following a three-day summer school on that topic for young journalists and students in the French city of Strasbourg. What have been the most important developments and discussions since then?
“The most important development in the last year has been the growing adoption and awareness of AI in newsrooms worldwide,” said Nicholas Diakopoulos, associate professor in communication studies and computer science at Northwestern University in the United States. “The type of jobs in journalism will evolve. Some jobs will look less like traditional reporting jobs and will involve more IT skills.”
Diakopoulos gave some examples of recently developed journalistic AI tools: The New York Times launched a research and development lab to experiment with AI in journalism. The Washington Post began experimenting with news discovery tools, partly in cooperation with Diakopoulos. On the academic side, conferences such as Machines + Media and the Computation + Journalism symposium have been established.
Before the coronavirus crisis broke out and the 2020 Olympic Games were postponed, The New York Times was experimenting with computer vision, also an AI technology, to construct 3D scenes of sports events. The idea is to use AI to augment the experience of TV viewers in real time, for example by providing insights into the live performance of athletes.
Together with a team at The Washington Post, Diakopoulos developed a news discovery tool for the 2020 US presidential elections. Their goal was to use a dataset of a few hundred million registered voters across the US to find locations where a political journalist might go to discover a demographic trend in the electorate, such as a spike in new registrations of Hispanic voters in Texas.
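The Post’s actual system is not public, but the core idea of flagging statistical outliers can be shown in a minimal sketch. The weekly counts, the county, and the threshold below are all invented for illustration:

```python
# Illustrative spike detection over weekly voter-registration counts.
# All data and the threshold are invented for this sketch.
from statistics import mean, stdev

# Hypothetical weekly counts of new Hispanic-voter registrations in one Texas county.
weekly_counts = [120, 115, 130, 125, 118, 122, 410]

def find_spikes(counts, z_threshold=3.0):
    """Return indices of weeks that exceed the historical baseline
    by more than z_threshold standard deviations."""
    baseline = counts[:-1]  # treat all but the latest week as history
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, c in enumerate(counts)
            if sigma and (c - mu) / sigma > z_threshold]

print(find_spikes(weekly_counts))  # → [6]: the final week stands out
```

In practice such a tool would run over many locations and demographic slices, and would rank or filter the flagged spikes before surfacing any of them to a reporter.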
Diakopoulos and his research lab are also developing a tool that is meant to help investigative journalists dig into the use of algorithms by the US government. The tool can be set up to automatically give an alert every week. For example: “Last week I found eight new algorithms that I think matches [sic] your interest.” At the moment, nine journalists are testing the tool.
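The article does not describe the tool’s internals, but a weekly alert of this kind boils down to matching newly discovered items against a journalist’s stated interests. A minimal sketch, with invented algorithm descriptions and keywords:

```python
# Illustrative weekly alert: match newly found items against a journalist's
# interest keywords. All descriptions and keywords are invented.
new_algorithms = [
    "Risk scoring model used in immigration case triage",
    "Road maintenance scheduling optimizer",
    "Facial recognition pilot at a federal agency",
]
interests = {"risk", "facial", "recognition", "immigration"}

def weekly_alert(items, interests):
    """Return the items that share at least one keyword with the interests."""
    return [item for item in items
            if interests & {w.strip(",.").lower() for w in item.split()}]

matches = weekly_alert(new_algorithms, interests)
print(f"Last week I found {len(matches)} new algorithms matching your interest")
```

A real system would of course use more robust text matching than keyword overlap, but the weekly rhythm and the interest profile are the essential ingredients.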
Impact on Society
In 2019, the London School of Economics published a global survey of journalism and AI. The report was based on interviews with 71 newsrooms spread over 32 countries. One of its conclusions was that AI will reshape journalism in an incremental way but with structural effects in the long term. The report also noted that the use of journalistic AI tools raises questions about the implications for society. What are the effects on democracy, on the diversity of journalistic reporting, and on public values in the reporting?
Professor Natali Helberger of the University of Amsterdam studies exactly these questions. She stresses the fact that, historically speaking, journalism and technology have always had a close relationship: photography, telephone, radio, TV, computer, internet, smartphone — they all drove changes in journalism. “The introduction of each technology came along first with a major hype, then with a wave of concern and even dystopia, and finally with a constructive phase in which the technology was used for the benefit of journalism,” she said in an interview.
According to Helberger it is wrong to denigrate a particular technology per se. Instead, she says, we should investigate how we can use the technology to support a democratic society. “AI does have transformative power for journalism. It provides new ways of engaging with the audience and creates possibilities for people to find information more efficiently and get better informed,” she said. “But with that power also come responsibilities, for example to protect fundamental rights and freedom.”
Because of these responsibilities, Helberger is worried about the lack of structural, independent funding for R&D in journalism. “A lot of tech innovation in journalism is funded by the Google News Initiative. It’s really cool that they do it. But Google is a company, right? Media play a key role in our democracies and they should always be independent,” she said.
Her own research focuses on automatic news recommendations and the consequences they have for the diversity of reporting. “Diversity in ideas, opinions, cultures, ethnicities, and religions is important in a democracy because it teaches us tolerance. Especially in our present polarized time, the media have to make sure that they are inclusive and can serve everybody, not just a particular group,” she said.
Together with the German public broadcaster ZDF, she and her university colleagues are now testing a diversity toolkit that they developed in cooperation with data scientists from the commercial broadcaster RTL in the Netherlands. This generic, stand-alone toolkit helps media professionals understand and assess the diversity of their algorithmic recommendations. Depending on these insights, and on whether the news outlet wants, for example, more engaging content, more political content, or more minority voices, the dials of the recommendation tool can be adjusted.
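The toolkit’s internals are not public, but one common way to quantify the diversity of a recommendation slate is the normalized Shannon entropy of its topic labels: 0 when every recommended item covers the same topic, 1 when topics are evenly spread. A sketch with invented topic labels:

```python
# Illustrative diversity score for a slate of recommended articles.
# The topic labels are invented; a real toolkit would use richer metadata.
from collections import Counter
from math import log2

def topic_diversity(topics):
    """Normalized Shannon entropy of topic labels:
    0.0 = every item has the same topic, 1.0 = topics are evenly spread."""
    counts = Counter(topics)
    n = len(topics)
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    max_entropy = log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

slate = ["politics", "politics", "sports", "culture", "politics", "economy"]
print(round(topic_diversity(slate), 2))  # → 0.9
```

A newsroom could track such a score over time, or compare it across audience segments, to see whether its recommender is narrowing the range of topics readers are shown.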
“Every AI tool in the media has to be optimized for the particular news outlet, because with a generic tool you have very little to say about the values which you as a journalistic medium find important,” said Helberger. “A machine cannot decide about which values are important. That’s fundamentally a human decision.”
This article was originally published on the European Science-Media Hub website on September 9, 2020. We are republishing with permission.
Bennie Mols is a freelance science journalist, author, and speaker based in Amsterdam. He specializes in artificial intelligence, robots, and the human brain. You can visit his blog (in English and Dutch) here and his website (in Dutch) here. You can also view his TEDx presentation about human-machine interaction here.