How Innovative Newsrooms Are Using Artificial Intelligence
Many large newsrooms and news agencies have, for some time, relegated sports, weather, stock exchange movements and corporate performance stories to computers. Surprisingly, machines can be more rigorous and comprehensive than some reporters. Unlike many journalists who often single-source stories, software can import data from various sources, recognize trends and patterns and, using Natural Language Processing, put those trends into context, constructing sophisticated sentences with adjectives, metaphors and similes. Robots can now convincingly report on crowd emotions in a tight soccer match.
These developments are why many in the journalistic profession fear Artificial Intelligence will leave them without a job. But if journalists embrace AI instead of fearing it, it could become the savior of the trade, making it possible for them to better cover the increasingly complex, globalized and information-rich world we live in.
Intelligent machines can turbo-power journalists’ reporting, creativity and ability to engage audiences. Following predictable data patterns and programmed to “learn” variations in these patterns over time, an algorithm can help reporters arrange, sort and produce content at a speed never thought possible. It can systematize data to find a missing link in an investigative story. It can identify trends and spot the outlier among millions of data points that could be the beginnings of a great scoop. For example, a media outlet can now continuously feed public procurement data into an algorithm that cross-references this data against companies sharing the same address. Perfecting this system could give reporters many clues as to where corruption may be happening in a given country.
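To make that example concrete, here is a minimal sketch in Python of how such a cross-reference might work. The file names and column names are hypothetical, and a real pipeline would also have to normalize addresses and company identifiers before matching.

```python
# A minimal sketch of cross-referencing procurement winners against companies
# that share a registered address. File and column names are hypothetical.
import pandas as pd

# Hypothetical inputs: one row per contract award, one row per registered company.
awards = pd.read_csv("procurement_awards.csv")    # columns: contract_id, supplier, amount
companies = pd.read_csv("company_registry.csv")   # columns: name, registered_address

# Attach each winning supplier's registered address.
merged = awards.merge(companies, left_on="supplier", right_on="name", how="left")

# Group winners that share the same registered address.
shared = (
    merged.groupby("registered_address")
    .agg(distinct_suppliers=("supplier", "nunique"), total_awarded=("amount", "sum"))
    .query("distinct_suppliers > 1")
    .sort_values("total_awarded", ascending=False)
)

# Addresses where several "competing" bidders are registered are leads, not proof:
# a reporter still has to verify each case.
print(shared.head(20))
```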
Not only can intelligent computers analyze huge amounts of data to aid timely investigations, they can also help source and fact-check stories from the crowd to see if contributions are reliable. According to a 2017 Tow Center report, several media outlets in the United States are already using AI to fact-check. Reuters, for example, is using News Tracer to track breaking news on social media and verify the integrity of tweets. Serenata de Amor, a group of technology enthusiasts and journalists from Brazil, uses a robot named Rosie to track every reimbursement claimed by the country’s members of Congress and to highlight why some of those expenditures look suspicious.
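As a toy illustration of the kind of check such a bot performs, and not Serenata de Amor’s actual code, one could flag expense claims that sit far outside the norm for their category. The column names below are hypothetical.

```python
# A simple statistical flag on reimbursement claims; a toy example only.
import pandas as pd

claims = pd.read_csv("reimbursement_claims.csv")  # hypothetical columns: legislator, category, amount

# Compare each claim with the typical amount for its expense category.
stats = claims.groupby("category")["amount"].agg(["mean", "std"])
stats.columns = ["cat_mean", "cat_std"]
claims = claims.join(stats, on="category")

# Flag claims far above the norm for their category (an arbitrary 3-sigma rule).
claims["suspicious"] = claims["amount"] > claims["cat_mean"] + 3 * claims["cat_std"]

# A flag is only a lead; a human reporter still has to check each case.
print(claims.loc[claims["suspicious"], ["legislator", "category", "amount"]])
```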
There are many other ways in which algorithms are helping journalists, from making rough cuts of videos, to recognizing voice patterns and identifying faces in a crowd. They can be programmed to chat with readers (chatbots) and answer queries. The tricky part is that none of this happens without a human journalist who, with a goal in mind, asks relevant questions about the data. Reporters and editors need to learn fast how these systems operate and how they can use them to enhance their journalism.
Most journalists in the world do not have access to a team of programmers or data scientists to help design and build their projects. Collaboration is the answer. Small newsrooms and freelancers can make up for the lack of resources by teaming up with software developers and building longer-term partnerships. They can also learn to spot the many open-source search and analytics tools already available.
Communication between journalists and techies is not a given. It takes a lot of learning on both sides, and some trial and error. With ongoing technological development, journalists now have an ever-expanding toolkit with which to hold power to account. Given this increased capacity to listen to their communities and identify their needs, it would be a tremendous waste not to try.
Ethical Challenges
The readers’ editor of The Guardian, Paul Chadwick, writing about the relationship between journalism and Artificial Intelligence, proposes a new clause for the newspaper’s code of ethics.
“Software that ‘thinks’ is increasingly useful, but it does not necessarily gather or process information ethically,” he warns. “When using Artificial Intelligence to augment your journalism, consider its compatibility with the values of this code.”
Journalists have to be aware that algorithms may lie or mislead. They are programmed by humans, who have biases, and seemingly logical patterns can lead to the wrong conclusions. This means journalists will always need to check results with their century-old verification techniques: cross-checking sources, comparing documents, doubting their findings.
Transparency is another must for journalism in this new era of machine intelligence.
“The biggest stumbling block for the entrance of AI into newsrooms is transparency. Transparency, a basic journalistic value, is often at odds with Artificial Intelligence, which usually works behind the scenes,” says Nausicaa Renner, digital editor of the Columbia Journalism Review.
Media outlets should let their audiences know what personal data they are collecting if they want to remain credible. And even with powerful new tools that let them cater precisely to their audiences’ tastes, editors should still strive to inform users about what they don’t want to know. The public interest is still the media’s business and the key to its survival.
By the same token, investigative reporters should do their best to explain how they are using algorithms to find patterns or process evidence for a story, if they want to be different from the manipulators and demagogues who secretly collect data for use as a commercial or political weapon. Moreover, healthy journalism should continue to bring to life those silenced voices and intractable issues around which no one has systematically collected information or built data sets.
In the end, while it is true that AI enables journalism as never before, it is also true that it brings new challenges for learning and accountability. Without journalistic clarity, all this technology will not lead to a well-informed society. Without ethics, intelligent technology could herald journalism’s demise. Without clear purposes, transparent processes and the public interest as a compass, journalism will lose the public’s trust, no matter how many charts, bots and whistles adorn it.
How AI is Used in Journalism
Automated journalism: producing stories from data. Initially it was used in reporting on sports and financial news. It can free journalists from routine tasks, improving efficiency and cutting costs. AP uses Wordsmith software to turn financial data into stories. The Washington Post uses its in-house technology, Heliograf, to report on sports events and electoral races.
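A deliberately simple sketch of the underlying idea, mapping structured data onto sentences, might look like the following in Python. Commercial systems such as Wordsmith or Heliograf are of course far more sophisticated, and the earnings figures here are invented.

```python
# Template-driven story generation from structured data (invented figures).
earnings = {"company": "Acme Corp", "quarter": "Q3",
            "revenue_m": 412.5, "prev_revenue_m": 388.1}

change = (earnings["revenue_m"] - earnings["prev_revenue_m"]) / earnings["prev_revenue_m"] * 100
direction = "rose" if change >= 0 else "fell"

story = (
    f"{earnings['company']} reported {earnings['quarter']} revenue of "
    f"${earnings['revenue_m']:.1f} million, which {direction} "
    f"{abs(change):.1f}% from the previous quarter."
)
print(story)
```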
Organizing workflow: tracking breaking news, aggregating and organizing news using tags and links, moderating comments and using automated voice transcription. The New York Times uses the Perspective API tool developed by Jigsaw (Alphabet) to moderate readers’ comments. The Reuters Connect platform for journalists displays all Reuters content, including the archive, plus content from media partners around the world in real time.
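A minimal sketch of scoring a single comment with the Perspective API might look like this. It assumes you have requested an API key from Google; the endpoint and request shape reflect the public documentation and may change.

```python
# Score one reader comment for toxicity with the Perspective API (sketch).
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

payload = {
    "comment": {"text": "You are an idiot and your article is garbage."},
    "requestedAttributes": {"TOXICITY": {}},
}
resp = requests.post(URL, params={"key": API_KEY}, json=payload, timeout=10)
resp.raise_for_status()

# The summary score runs from 0 to 1; higher means more likely to be toxic.
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")
```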
Tracking news on social media: analyzing real-time and historical data, identifying influencers and engaging with audiences. AP uses NewsWhip to monitor social media trends and increase engagement.
Engaging with audiences: Quartz Bot Studio’s chatbot app allows users to text questions about news events, people or places, and the app replies with content it believes is relevant to them. Others include bots for Facebook Messenger, such as The Guardian’s. The BBC used bots to help cover the EU referendum. The AfriBOT project, one of the Innovate Africa grant winners, run by the European Journalism Centre and The Source (Namibia and Zimbabwe), is developing an open-source newsbot “to help African news organizations deliver personalized news and engage more effectively with audiences via messaging platforms.”
Automated fact-checking: gives journalists a speedy way to check public statements and claims. Chequeabot is used by Chequeado in Argentina; Full Fact in the UK and its partners are developing an automated fact-checking engine that “will spot claims that have already been fact-checked in new places; and it will automatically detect and check new claims using Natural Language Processing and structured data.” The Duke Reporters’ Lab in the US developed the tool ClaimBuster to deliver politically meaningful claims to media and, in 2017, launched a hub for automated fact-checking projects. Factmata in the UK is also developing an automated fact-checking tool. Read more about automated fact-checking.
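As a rough sketch of the “spot claims that have already been fact-checked” idea, one could compare new statements against a database of checked claims using simple text similarity. Production systems rely on far richer NLP; the claims below are invented examples.

```python
# Match a new statement against previously fact-checked claims via TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_checked = [
    "Unemployment has fallen to its lowest level in 40 years.",
    "The country spends 350 million a week on membership fees.",
]
new_statement = "We send 350 million pounds every week in membership fees."

vectorizer = TfidfVectorizer().fit(fact_checked + [new_statement])
checked_vecs = vectorizer.transform(fact_checked)
new_vec = vectorizer.transform([new_statement])

# Compare the new statement with every claim already in the fact-check database.
scores = cosine_similarity(new_vec, checked_vecs)[0]
best = scores.argmax()
if scores[best] > 0.3:  # arbitrary threshold for illustration
    print(f"Possible match with a checked claim: {fact_checked[best]!r} "
          f"(similarity {scores[best]:.2f})")
```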
Analyzing large databases: software crunches data and looks for patterns, changes or anything unusual. Reuters’ Lynx Insight goes through massive data sets and provides journalists with results and background information. OCCRP’s Crime Pattern Recognition uses technology that analyzes large databases of documents for similar corruption-related crimes and links between the parties involved.
Image recognition: technology that recognizes objects, places, human faces and even sentiment in images. The New York Times uses Amazon’s Rekognition API to identify members of Congress in photos. Any user can test Google’s Vision API image recognition technology for free.
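A minimal sketch of calling an image-recognition API from Python, here Google’s Vision API via the google-cloud-vision client library, might look like this. It assumes Google Cloud credentials are configured, and “photo.jpg” is a placeholder file.

```python
# Ask the Vision API what it recognizes in a single image (sketch).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```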
Video production: automatically creates scripts from news articles and produces narrated rough cuts of short video pieces from video footage. Wibbitz software is used by USA Today, Bloomberg and NBC. Researchers at Stanford University are developing an automated video editing tool.
This piece was originally published on the Medium page of the Open Society Foundations’ Program on Independent Journalism, and is reprinted with permission. Note: OSF is a funder of GIJN.
María Teresa Ronderos recently finished serving as director of the OSF Program on Independent Journalism, which oversees efforts to promote viable, high-quality media, particularly in countries transitioning to democracy. Ronderos came to OSF from Semana, Colombia’s leading news magazine.