A guidebook to two tools and methods for capturing and telling stories in 3D

➡️ - Portable, Do-It-Yourself Volumetric Camera and Computer in a Box

➡️ - A Pipeline for Combining Volumetric Video and Photogrammetry into a 3D Scene

🚦How Does It Work?

1️⃣- 📹Capture a 3D scene
2️⃣- 🖥Post-production and Editing
3️⃣- 📲🌐😎Publish to Augmented Reality, Virtual Reality and more!

📺 What Does It Look Like?

A volumetric capture made with the Volum Box played back in 2D

📋A Quick Tour of this Guide

This document contains examples, walkthroughs and links to further resources.
- See how the Volum Box and Volum Workflow were used for a test 3D capture with magician Mark Mitton
- What goes into the Volum Box camera
- The steps in the Volum Workflow pipeline
- External links to examples of volumetric storytelling and other how-tos and tutorials
- Key terms

🤔 What is Volumetric Video?

Volumetric video captures time and space in 3D. The result is 3D video that records both the depth and the position of everything in frame, so it can be viewed from different angles on playback.
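To make "viewed from different angles" concrete, here is a minimal sketch of one common representation of a volumetric frame: a point cloud, where each 3D point carries an RGB color. Rotating the points is equivalent to orbiting the viewer around the subject. This is an illustrative model only (the point values are made up), not the actual Volum data format; NumPy is assumed.

```python
import numpy as np

# A single volumetric frame, sketched as a point cloud:
# N points in 3D space, each paired with an RGB color.
points = np.array([
    [0.0, 0.0, 1.0],   # a point 1 m in front of the camera
    [0.1, 0.0, 1.0],
    [0.0, 0.2, 1.2],
])
colors = np.array([
    [255, 0, 0],       # colors travel with their points,
    [0, 255, 0],       # so every viewpoint stays textured
    [0, 0, 255],
], dtype=np.uint8)

def rotate_y(pts, degrees):
    """Rotate the point cloud around the vertical (Y) axis,
    i.e. orbit the viewpoint around the subject."""
    t = np.radians(degrees)
    R = np.array([
        [ np.cos(t), 0.0, np.sin(t)],
        [ 0.0,       1.0, 0.0      ],
        [-np.sin(t), 0.0, np.cos(t)],
    ])
    return pts @ R.T

# The same captured frame, seen from 90 degrees to the side:
side_view = rotate_y(points, 90)
```

A flat 2D video cannot do this: once the pixels are rendered, the viewpoint is fixed. Because a volumetric frame keeps the geometry, any playback angle can be computed after the fact.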

🎺 Project Credits

2019: Volum was started by Trevor Snapp, Ben Kreimer, Sam Wolson, and Ben Sax after receiving initial support from Journalism 360.
Spring 2020: An exploration by , with support from and the as part of the .
[Image: Volum Boxes]

⚒ Volum Box Background

Volum was started by Trevor Snapp, Ben Kreimer, Sam Wolson, and Ben Sax after receiving initial support from Journalism 360.
The team set out to make multi-camera volumetric technology, both hardware and software, accessible and field-friendly for journalists, storytellers, and creatives around the world. Given the current state of volumetric camera technology, multi-camera volumetric video capture systems are confined to studios and other controlled indoor environments, and typically require hiring a volumetric video production company with proprietary software and workflows. This limits who has access to multi-camera volumetric video technology, and how and where it can be used.
As a way of capturing the world for virtual reality and augmented reality, volumetric video shot in artificial, controlled studio environments removes people and other subjects from their personal, comfortable, and natural spaces. That goes against the spirit and fundamental values of journalism, and it limits the opportunities for many creatives. In short, capture studios aim to maximize visual quality at all costs, including directing subjects to stand in certain locations on the capture stage, and doing so severely limits the creative possibilities of volumetric video capture. We set out to challenge that approach to multi-camera volumetric capture by working toward four intertwined goals:
1. Make the capture technology field-friendly, and as cheap as possible
2. Find the most accessible way to use multiple cameras for complete subject coverage (as opposed to a single-camera, single-sided volumetric capture)
3. Find a post-production pipeline that supports low-cost multi-camera capture
4. Identify storytelling applications for field-friendly multi-camera volumetric video capture

Learning from the Past

Film cameras of the late 1800s were large and unwieldy, limiting their use to studios. In 1900
the original Kodak Brownie was released, changing photography forever by making the
technology more accessible than ever before. The Brownie was inexpensive and portable,
making photography available to the masses.
Multi-camera volumetric video capture systems are in a position analogous to photography of the late 1800s. Volumetric capture studios, such as those owned and operated by Intel and Microsoft, charge hundreds to thousands of dollars per minute for the use of multi-camera volumetric systems with green screens. Other companies, like Jaunt (recently acquired by Verizon) and Evercoast, have portable capture stages, but these are still designed to function in highly controlled indoor environments and involve a desktop computer with complex towers of cameras. For all of these companies and studios, the capture and processing of the raw volumetric data is handled with proprietary software and workflows.
Single camera volumetric video capture is possible and accessible to creatives using a computer and Depthkit or Brekel software. Both programs allow volumetric media makers to
connect a single depth camera to a laptop for capturing volumetric video. Depth cameras used for Depthkit and Brekel, such as the various versions of the Microsoft Kinect or Intel RealSense models, combine a color camera with a depth sensor designed to measure the depth of surfaces in a scene. When processed, the depth sensor provides a three-dimensional hologram-like volumetric capture with a color overlay provided by the color camera.
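The "processing" step described above rests on one piece of geometry: each depth pixel, combined with the camera's intrinsic calibration, can be lifted into a 3D point, which the color camera then textures. Below is a hedged sketch of that deprojection using the standard pinhole model. The intrinsic values (`fx`, `fy`, `cx`, `cy`) are hypothetical placeholders, not the D415's actual calibration, and the `deproject` helper is illustrative, not a Depthkit or Brekel API.

```python
import numpy as np

# Hypothetical pinhole intrinsics: fx, fy are focal lengths in pixels,
# cx, cy the principal point. Real values come from camera calibration.
fx, fy = 600.0, 600.0
cx, cy = 320.0, 240.0

def deproject(u, v, depth_m):
    """Lift a depth pixel (u, v) with a depth reading in meters
    into a 3D point in the camera's coordinate frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# A pixel at the principal point, 2 m away, lies on the optical axis:
p = deproject(320, 240, 2.0)  # -> [0.0, 0.0, 2.0]
```

Running this over every pixel of a depth frame produces the hologram-like point cloud; sampling the color image at the same (aligned) pixel coordinates supplies the color overlay.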
On September 12, 2019, the latest beta version of Brekel PointCloud was released, featuring
multi-camera recording capabilities. Scatter, the company behind Depthkit, is still in the
pre-alpha development stage of their multiple camera feature set.
Like Microsoft and Intel, Scatter has not made its multi-camera process publicly available, requiring its involvement in production and post-production and limiting the number of people who can access multi-camera volumetric video capture technology. We had an in-person meeting at Scatter's Brooklyn office in July 2019, and after sharing our Volum mission and progress, we asked for access to and information about their multi-camera hardware and software setup. The Scatter team politely declined to reveal the workflow, encouraging us to use only a single-camera system. We continued developing our own multi-camera solution, built around our Volum Modules and designed for portability and durability. We would have used Brekel for the capture process, but the software overwhelms the small single-board computers in our Volum Modules.

Building the Volum Box

Inspired by the durability of GoPros, and by how their size makes it possible to place cameras in the middle of the action, we recognized the form-factor problem with laptops, and the opportunities that would come from solving it. Laptops are fragile, can't get wet, and are cumbersome to carry with the screen unfolded, and the RealSense camera has to be mounted elsewhere for shooting. This creates an unwieldy setup, especially when deploying multiple RealSense cameras. With these factors in mind, we designed and built a set of four Volum Modules, each containing one Intel RealSense D415 depth camera ($149), a quad-core Pentium-equipped Intel Up Squared single-board computer ($299), a 5" 800x480 HDMI touchscreen display ($75), a 500 GB solid-state drive ($100), and a DJI Phantom 3 drone battery ($66), which gives the system about two hours of recording time and battery life. This hardware, along with the electrical components and cables needed to connect and power everything, fits inside a small, lunchbox-sized Seahorse SE120 hard plastic case ($18). There is no limit to the number of Volum Modules that can be used during a shoot.
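The per-module bill of materials above can be totaled directly. The prices are the ones quoted in the text; the total excludes the miscellaneous electrical components and cables, which the text does not price.

```python
# Per-module bill of materials, using the prices quoted above.
bom = {
    "Intel RealSense D415 depth camera": 149,
    "Intel Up Squared single-board computer": 299,
    '5" 800x480 HDMI touchscreen display': 75,
    "500 GB solid-state drive": 100,
    "DJI Phantom 3 drone battery": 66,
    "Seahorse SE120 hard plastic case": 18,
}
total = sum(bom.values())
print(f"Per-module hardware cost: ${total}")  # prints $707
```

At roughly $707 per module before wiring, a four-module rig comes in far below the per-minute rates charged by the capture studios described earlier.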

