
Why Web Scraping Is Vital to Democracy


Image: Pixabay

Editor’s Note: The Markup, a New York-based investigative newsroom that covers the tech industry, recently argued for web scraping in an amicus curiae (literally, “friend of the court”) brief for a US Supreme Court case that threatens to make scraping illegal. Here’s why they did it.

The fruits of web scraping — using code to harvest data and information from websites — are all around us.

People build scrapers that can find every Applebee’s on the planet or collect congressional legislation and votes or track fancy watches for sale on fan websites. Businesses use scrapers to manage their online retail inventory and monitor competitors’ prices. Lots of well-known sites use scrapers to do things like track airline ticket prices and job listings. Google is essentially a giant, crawling web scraper.
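To make the idea concrete, here is a minimal sketch of what “using code to harvest data” can look like in practice. It assumes Python 3 with the third-party requests and BeautifulSoup libraries; the URL and the CSS class in the example are hypothetical placeholders, not a real site.

```python
# Minimal web-scraping sketch: fetch a public page and pull out a list of
# item titles. The URL and the "listing-title" class are hypothetical
# placeholders used only for illustration.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/public-listings"  # hypothetical public page

response = requests.get(URL, timeout=30)
response.raise_for_status()  # fail loudly if the page did not load

soup = BeautifulSoup(response.text, "html.parser")

# Collect the visible text of every element with the assumed class name.
titles = [tag.get_text(strip=True) for tag in soup.select(".listing-title")]

for title in titles:
    print(title)
```

The same pattern (request a page, parse the HTML, extract the fields you care about, repeat across many pages) scales up to the watchdog projects described below; responsible scrapers also throttle their requests and pay attention to a site’s terms of service, which is precisely where the legal question in this case arises.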

Scrapers are also the tools of watchdogs and journalists, which is why The Markup filed an amicus brief in a case before the United States Supreme Court that threatens to make scraping illegal.

The case itself — Van Buren v. United States — is not about scraping but rather a legal question regarding the prosecution of a Georgia police officer, Nathan Van Buren, who was bribed to look up confidential information in a law enforcement database. Van Buren was prosecuted under the Computer Fraud and Abuse Act (CFAA), which prohibits unauthorized access to a computer network, such as computer hacking, where someone breaks into a system to steal information (or, as dramatized in the 1980s classic movie “WarGames,” potentially start World War III).

In Van Buren’s case, since he was allowed to access the database for work, the question is whether the court will broadly define his troubling activities as “exceeding authorized access” to extract data, which is what would make it a crime under the CFAA. And it’s that definition that could affect journalists.

Or, as Justice Neil Gorsuch put it during Monday’s oral arguments, lead in the direction of “perhaps making a federal criminal of us all.”

Investigative journalists and other watchdogs often use scrapers to illuminate issues big and small, from tracking the influence of lobbyists in Peru by harvesting the digital visitor logs for government buildings to monitoring and collecting political ads on Facebook. In both of those instances, the pages and data scraped are publicly available on the internet — no hacking necessary — but the sites involved could easily change the fine print on their terms of service to label the aggregation of that information “unauthorized.” And the Supreme Court, depending on how it rules, could decide that violating those terms of service is a crime under the CFAA.

“A statute that allows powerful forces like the government or wealthy corporate actors to unilaterally criminalize newsgathering activities by blocking these efforts through the terms of service for their websites would violate the First Amendment,” The Markup wrote in the brief.

What sort of work is at risk? Here’s a roundup of some recent journalism made possible by web scraping:

  • The COVID Tracking Project, from The Atlantic, collects and aggregates data from around the country on a daily basis, serving as a means of monitoring where testing is happening, where the pandemic is growing, and the racial disparities in who’s contracting and dying from the virus.
  • This project, from Reveal, scraped extremist Facebook groups and compared their membership rolls to those of law enforcement groups on Facebook — and found a lot of overlap.
  • The Markup’s recent investigation into Google’s search results found that it consistently favors its own products, leaving some websites from which the web giant itself scrapes information struggling for visitors and, therefore, ad revenue. The United States Department of Justice cited the issue in an antitrust lawsuit against the company.
  • In Copy, Paste, Legislate, USA Today found a pattern of cookie-cutter laws, pushed by special interest groups, circulating in legislatures around the country.

This article was originally published on The Markup and is republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

Additional Reading

Document of the Day: In Defense of Data Scraping

Web Scraping: A Journalist’s Guide

On the Ethics of Web Scraping and Data Journalism



The Markup is a nonprofit newsroom that investigates how powerful institutions use technology to change society. It is staffed with “quantitative journalists who pursue meaningful, data-driven investigations.”

