Top Ten #ddj: This Week’s Top Data Journalism

What’s the global #ddj community tweeting about? Our NodeXL mapping from June 5 to 11 includes #VisualTrumpery from @mcrosasb, analysis of Theresa May’s election disaster by @GuardianVisuals, dataviz structuring strategies from @eagereyes, and school enrollment woes in Delhi from @htTweets.

Top Ten #ddj: The Week’s Most Popular Data Journalism Links

Here are the top data journalism tweets for March 13-19, per our NodeXL mapping: #NICAR17 resources (@MacDiva); Breitbart network analysis (@CJR); climate change calculator (@FT); Trump’s Budget (@washingtonpost); Berlin pickpockets (@berlinerzeitung); Caracas jobless (@CamburYMedio); & more.

Top Ten #ddj: The Week’s Most Popular Data Journalism Links

What’s the data-driven journalism crowd tweeting? Here are top links for Oct 24-30: Clinton/Trump facial analysis (@benheubl); 10K edits to Clinton/Trump Wikipedia pages (@chrisalcantara); Amnesty 36-year dataviz (@jwyg); Latinos in office (@UnivisionData); open source tools (@M_Mandalka); & more.

Investigating Uber Surge Pricing: A Data Journalism Case Study

The story published in the Washington Post’s Wonkblog ended up being about race, but it didn’t start out that way. Nick Diakopoulos, who leads the lab, wrote a story for Wonkblog last year on how surge pricing motivates Uber drivers to move into surging areas but does not increase the number of drivers on the road, as Uber claims it does.
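An analysis like this typically rests on sampling prices repeatedly, over time and across neighborhoods, and then comparing the logged multipliers. The sketch below illustrates only that general data-collection pattern; it is not the Post’s or the lab’s actual code, and get_surge_multiplier(), the sample points, and the output file are hypothetical placeholders for whatever data source a reporter actually has access to.

```python
# Sketch of the sampling pattern behind a surge-pricing analysis:
# poll a fixed set of points on a schedule and log the multiplier seen at each.
# get_surge_multiplier() is a hypothetical stand-in for a real data source.
import csv
import random
import time
from datetime import datetime, timezone

# Hypothetical sample points (name, latitude, longitude) spread across a city.
SAMPLE_POINTS = [
    ("downtown", 38.907, -77.037),
    ("east_side", 38.899, -76.980),
    ("suburb", 38.984, -77.094),
]


def get_surge_multiplier(lat: float, lon: float) -> float:
    """Placeholder lookup; returns fake multipliers for illustration only."""
    return random.choice([1.0, 1.0, 1.3, 1.8, 2.1])


def collect(passes: int = 3, interval_seconds: int = 60) -> None:
    """Append one row per point per pass to a CSV for later analysis."""
    with open("surge_log.csv", "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for _ in range(passes):
            now = datetime.now(timezone.utc).isoformat()
            for name, lat, lon in SAMPLE_POINTS:
                writer.writerow([now, name, lat, lon, get_surge_multiplier(lat, lon)])
            f.flush()
            time.sleep(interval_seconds)


if __name__ == "__main__":
    collect()
```

Run over days rather than minutes, a log like this lets a reporter compare how often and how strongly different neighborhoods surge, which is the kind of comparison that eventually turned the story toward race.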

Web Scraping: A Journalist’s Guide

$8 billion in just a few hours earlier this year? It was because of a web scraper, a tool used by companies and by many data reporters alike. A web scraper is simply a computer program that reads the HTML code of webpages and analyzes it. With such a program, or “bot,” it’s possible to extract data and information from websites.
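To make that concrete, here is a minimal sketch of such a bot in Python, using the widely used requests and BeautifulSoup libraries. The URL, the table structure the code expects, and the output file name are all hypothetical.

```python
# A minimal web scraper: fetch a page's HTML, parse it, and extract a table.
# The URL and the page structure it assumes are hypothetical.
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/city-budget"  # hypothetical page containing an HTML table

response = requests.get(URL, timeout=30)
response.raise_for_status()  # stop if the page didn't load

# Parse the raw HTML into a navigable tree.
soup = BeautifulSoup(response.text, "html.parser")

rows = []
table = soup.find("table")  # assumes the data lives in the first <table> on the page
if table is not None:
    for tr in table.find_all("tr"):
        cells = [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
        if cells:
            rows.append(cells)

# Save what the bot extracted so it can be analyzed in a spreadsheet.
with open("scraped_table.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)

print(f"Extracted {len(rows)} rows from {URL}")
```

In practice a reporter would swap in the real URL, check the site’s terms of service and robots.txt, and adapt the selectors to the page’s actual structure, but the fetch-parse-extract-save pattern stays the same.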