(By Dan Meredith)
While I am by no means a seasoned investigative journalist, I have the good fortune to work with some. Ten years ago, I couldn't have imagined a media organization considering geek qualifications a core part of an investigative team. In 2011, turning a geek into an investigative journalist is a no-brainer.
The information landscape a journalist lives in today is very different from that of ten years ago. People share more information about themselves on the Internet than ever before. Journalists have access to large quantities of free information stored in social networks and government databases, or obtained through Freedom of Information requests. In response, the traditional journalist is evolving quickly. Today's journalist is not only sitting in the courtroom or town hall meeting with pen and paper, but with a laptop, sifting through relevant online information, filing FOIA requests, and chatting with their editors. Within journalism, the market for tools and methods to collect, analyze, and present this information is growing fast.
The days of Excel spreadsheets and HTML tables are gone. Whether we're watching on TV, reading online, or reading a newspaper, we expect beautiful and easy-to-understand representations of important information, no matter how large the underlying data is. DocumentCloud, Information is Beautiful, Piwik, Mining of Massive Datasets, PACER, Google Refine, Google Fusion Tables, Google Public Data Explorer, IBM's Many Eyes, and ScraperWiki are just some of the data-driven journalism tools widely used by mainstream media today.
There already exists a wealth of excellent write-ups documenting methods and tools for journalists creating data-driven stories. Rather than add to it, my focus is on another important and evolving component of investigative journalism: sources, communication, and the protection of privacy.