New Tools Open Up Virtual Reality to Journalists


(Topher McCulloch, licensed under CC)

When Gustavo Cerati, a legendary Argentinian musician and songwriter, was asked to share his best advice for new musicians, he refused, saying instead that “experiences are not transferable.” You may or may not agree with his statement, but if you’ve ever worn an Oculus Rift or a similar virtual reality (VR) headset, you’ll know we are getting closer and closer to transferable experiences.

Journalism seems to be a good use case for VR technology. The New York Times has delivered a wide range of stories through their nytvr app, available on Android and iOS; The Washington Post took us to Mars; and The Guardian showed us what it’s like to survive solitary confinement.

Meanwhile, companies like Google, Facebook, and Microsoft are pushing the limits of the technology in partnership with game makers, artists, and developers. We are still experimenting and trying to understand what virtual reality will look like in the near future.

With the rise of VR, Mozilla, Google, and other companies expressed interest in developing open standards to make the web the home of the VR ecosystem. That effort led to WebVR, a JavaScript API that provides access to virtual reality devices. The specification draft is available, and we can even use what is called a polyfill to try WebVR today, before the API is built into your mobile browser.

With WebVR, a web developer can access a wide range of device-specific information that’s needed to create VR apps: position, orientation, velocity, acceleration, field of view, and eye distance.
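To give a sense of what that access looks like in code, here is a minimal sketch using the WebVR 1.1 draft API (navigator.getVRDisplays, getPose, and getEyeParameters); in browsers without native support, the WebVR polyfill exposes the same calls:

```javascript
// Minimal sketch: reading device data through the WebVR 1.1 draft API.
// Requires a browser with WebVR support or the webvr-polyfill.
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then(function (displays) {
    if (!displays.length) {
      return; // no headset connected
    }
    var display = displays[0];

    // Head pose: position, orientation, velocity, and acceleration.
    var pose = display.getPose();
    console.log('position:', pose.position);
    console.log('orientation:', pose.orientation);
    console.log('linear velocity:', pose.linearVelocity);
    console.log('linear acceleration:', pose.linearAcceleration);

    // Per-eye parameters: field of view and eye offset (eye distance).
    var leftEye = display.getEyeParameters('left');
    console.log('field of view:', leftEye.fieldOfView);
    console.log('eye offset:', leftEye.offset);
  });
}
```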

I got really interested in the development of WebVR technology and the impact it could have on journalism. This led me to explore a variety of tools that make VR easier to develop, both for journalists and for developers without graphics programming experience.

Here are three tools for making VR on the open web, in descending order of required technical skill. Even though the first ones are harder for non-technical people to use, I’ll bet that if you can write just a couple of lines of JavaScript, you’ll be able to create amazing VR scenes.

A Note on Asset Generation and Usage

The first question I get in every workshop is, “How can I take my own pictures and videos?” And it makes a lot of sense. If we want to create our own experiences, we need to be able to capture our own world.

Taking 360° Panoramas

The preferred format for these 360° panoramas is called “equirectangular,” and the good news is that you can take them with your phone.

The Android camera app has a mode called Photo Sphere and an active community around the feature. The Cardboard Camera app is also a good option, letting you record audio along with your panoramas.

On iOS, you can install the Google Street View app to take this kind of picture. There are other apps that can help, but this one works really well.

If you don’t want to take your own pictures, you can always download one from the Flickr Equirectangular group, where you can search among more than 15,000 equirectangular pictures (remember to credit the photographers and be careful with the licenses).

Taking 360° Videos

This gets tricky because you will usually need a special camera. Consumer cameras are not perfect yet, but here are a couple that do the job:

  • The Ricoh Theta S and the Samsung Gear 360 are tiny 360° cameras that can make your life easier, and both come with their own apps and software for previewing, editing, and uploading videos. Battery life and heat are real issues, but you can get pretty decent videos.
  • For higher quality, VR makers sometimes use a rig of GoPros. There are a lot of different options, and more cameras and techniques are coming out.

Recording Audio for VR

Recording audio is easier, and you can use your regular audio recorder to do this. If you want to record in 360°, there are special mics. You can take a look at different options here.

Uploading Your Assets

Equirectangular pictures and videos are regular media files. Your panorama file can be opened and edited with your favorite photography app, meaning you can upload your assets to the same services you use to upload your images and videos.

The only catch is that if you host your pictures on a domain different from your VR app (your website), the hosting service needs to support CORS (cross-origin resource sharing). If you don’t know what CORS is, don’t worry: you can upload your images to a service like Imgur and your videos to a public Dropbox folder, and it will work. For uploading to AWS S3, make sure you have a CORS policy set up.
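As a rough guide, a permissive CORS configuration for an S3 bucket looks something like this (shown in S3’s XML form; the wildcard origin is a placeholder you would normally narrow to your site’s domain):

```xml
<CORSConfiguration>
  <CORSRule>
    <!-- Allow any site to fetch these assets; restrict the origin in production. -->
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```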

WebVR Starter Kit

I was happy to discover that, thanks to the work of amazing developers, you can actually create simple VR scenes that work on the Oculus or Google Cardboard with just one or two lines of JavaScript or HTML.

The first project I want to highlight is the WebVR Starter Kit by Brian Chirls. This library allows developers (or people with very little JavaScript experience) to create VR scenes in seconds. The examples really show the power of the library.

The main idea is that by adding this library to your website (as a .js script), the site is automagically converted to a 3D VR scene. The only thing you need to do is add objects (boxes, spheres, audio, panoramas, videos) with attributes like position, color, and material. You can even add simple animations.

You can, for example, play God and indicate that you want a wooden floor and a sky with two lines:

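Assuming the global VR object and the floor(), sky(), and setMaterial() helpers shown in the WebVR Starter Kit examples, those two lines look roughly like this:

```javascript
// A wooden floor and a sky in two lines (helper names assumed from the
// WebVR Starter Kit examples; check the library's docs for exact options).
VR.floor().setMaterial('wood');
VR.sky();
```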

Then you can add a green cylinder, position it, and tell everybody it’s yours:

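A sketch of that step might look like the following; the cylinder(), setMaterial(), moveTo(), and text() helpers are assumed from the library’s object API, so treat the exact names and coordinates as illustrative:

```javascript
// A green cylinder placed in front of the viewer, plus a floating label.
// Helper names and coordinates are illustrative, not exact.
VR.cylinder().setMaterial('green').moveTo(0, 1, -3);
VR.text('This cylinder is mine').moveTo(0, 2.5, -3);
```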

You can even create a nice atmosphere for this little bunny video in just three lines of code.
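A sketch of such a three-line scene, assuming a VR.video() helper like the one in the library’s examples and using a placeholder file name:

```javascript
// Three lines: a floor, a sky, and the video itself.
// VR.video() is assumed from the library's examples; 'bunny.mp4' is a placeholder path.
VR.floor();
VR.sky();
VR.video('bunny.mp4');
```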

A-Frame with HTML

After some time playing around with the amazing WebVR Starter Kit, I found out that a Mozilla team working on WebVR experiments, known as MozVR, was creating a framework that can be used by people who can’t write a single line of JavaScript.

It’s called A-Frame. The idea is to create VR scenes using HTML tags and properties, making the VR development process pretty much like building a website. The A-Frame project website is full of examples, the documentation is really good, and the community is active and growing. Every example on the site links to its source code.

This time, instead of using JavaScript to create our objects, we use HTML-like tags. Most of the WebVR Starter Kit objects are available, along with handy extras like 3D model loaders and arrow controls for moving around the scene.

Let’s say we want to get a videosphere surrounding the scene and a box inside. We would express it via the a-box and a-videosphere tags:

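A minimal version of that markup might look like the following; the a-scene, a-assets, a-videosphere, and a-box elements come from A-Frame itself, while the video file name is a placeholder:

```html
<!-- The A-Frame library script must be loaded on the page first. -->
<a-scene>
  <a-assets>
    <!-- Placeholder path; replace with your own equirectangular video. -->
    <video id="pano" src="my-360-video.mp4" autoplay loop></video>
  </a-assets>

  <!-- The 360° video surrounds the scene... -->
  <a-videosphere src="#pano"></a-videosphere>

  <!-- ...and a box floats inside it. -->
  <a-box color="#4CC3D9" position="0 1 -3" rotation="0 45 0"></a-box>
</a-scene>
```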

As you can see, it’s really easy to create, for example, a spherical video with just a few lines of HTML.

Also, if you’re still not sure about the power of this tool, check out Mars: an interactive journey—a Washington Post VR experience made with A-Frame.

Guri VR, for Journalists

After playing around with all these amazing tools available to create VR experiences, two questions popped into my mind:

  • Is there a way to add context to a story told with VR?
  • Is the A-Frame learning curve manageable for storytellers and journalists, or should I create something simpler for them?

Based on those questions, and after some experimentation, I created Guri VR. Guri is a set of tools for creating VR experiences from intuitive descriptions, targeted at journalists and other non-developers. The main tool is the Guri editor, an online tool that allows users to describe in plain English what they want to experience and generates a shareable, embeddable link to the VR scene.

The output is an HTML file built from autogenerated A-Frame markup. This is helpful if you want to create a prototype and then hand the code to a developer to modify.

You can play around with the editor at GuriVR.com, but let’s see how you would describe a basic scene. For example, I can write this into the editor:

My first scene lasts 5 seconds and has a skyblue background and text saying “This is my first scene”.

The second is 30 seconds and has just a panorama located at https://ucarecdn.com/8e6da182-c794-4692-861d-d43da2fd5507/ along with the audio https://ucarecdn.com/49f6a82b-30fc-4ab9-80b5-85f286d67830/

And this is the result.

Since one of the goals is to remove friction between the user and the VR output, the editor includes a file uploader. To upload a file, you just drag it onto the editor, and it is placed according to the cursor position.


Since my goal was to present a friendly interface for VR-scene generation, I also wanted to interact with existing tools. For example, I started working on an A-Frame Chartbuilder component: you can feed it the JSON output from Chartbuilder, and it will draw a 3D representation of the chart you just made. This works in any A-Frame scene, but I also added it to the Guri editor. There is also a guide to help you get started with the tool.


VR Tweetbot

Under the hood, Guri is an API that accepts a JSON file describing what we want and translates it into A-Frame, so the result can easily be modified afterward.

Thinking about even easier ways to create a VR scene, I developed a proof of concept using Twitter. With Guri VR and the Twitter API, @guri_vr works as a VR bot: tweet an equirectangular picture and mention @guri_vr, and it will tweet back a link to the VR scene embedded in a Twitter Card, so you can watch the experience without leaving Twitter.

For now, it only works with a single picture, but it can be easily modified to allow multiple panoramas as scenes or intertitles.

Transferable Experiences, Open Web

Guri VR is open source, and it’s under heavy development. You can fork it and help make it great. I’m also looking for feedback from storytellers and other people interested in creating VR without coding skills.

VR opens up powerful new ways of telling meaningful stories, and it’s important to be able to prototype these stories easily. As with any new technology, there is a potentially steep learning curve and a lot of hype around VR. But I think that the right uses can be very beneficial for newsrooms and their readers, and by using open web standards, we are as close as possible to the public. I encourage you to try these tools—and see for yourself.


This story originally appeared on the website of Source and is reprinted with permission.

Dan Zajdband is an Argentinian software developer currently working as a Knight-Mozilla fellow based at The Coral Project in New York City. He has contributed to many open source projects, including Hacks/Hackers Buenos Aires Media Party, HackDash, BAFrontend, and JSConf Argentina. He tweets at @impronunciable.

