Case Study: Volum + Newmark Collaboration


An exploration by Volum and the Newmark J-School, with support from the New York Times and the New York City Media Lab.

Objective

Test, refine, and document Volum's volumetric video capture process for use in the field, and make short volumetric videos that readers can interact with in AR.

Toolkit

The Volum Box depth camera, built around the Intel RealSense D415 depth + RGB camera, provides the volumetric video capture. Photogrammetry is used to produce a static 3D reconstruction of the location. Using the Volum workflow developed by Ben Kreimer, the volumetric capture and the photogrammetry scene scan are then merged in Unity, recreating the captured scene for viewing in augmented reality.
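
The capture software running on the Volum Box isn't detailed here, but as a rough illustration of the first step in this pipeline, the sketch below shows how synchronized depth and color frames could be pulled from a RealSense D415 with Intel's pyrealsense2 SDK. The resolutions, frame rate, and capture length are assumptions for the example, not Volum's actual settings.

```python
# Minimal sketch: streaming aligned depth + color frames from a RealSense D415
# using Intel's pyrealsense2 SDK. Stream settings here are illustrative
# assumptions, not the Volum Box's actual configuration.
import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)

pipeline.start(config)
align = rs.align(rs.stream.color)  # map depth pixels onto the color image

try:
    for _ in range(300):  # ~10 seconds of capture at 30 fps
        frames = pipeline.wait_for_frames()
        aligned = align.process(frames)
        depth = np.asanyarray(aligned.get_depth_frame().get_data())  # uint16, mm
        color = np.asanyarray(aligned.get_color_frame().get_data())  # BGR image
        # Each depth/color pair can be saved per frame and later turned into a
        # textured point cloud for the volumetric video.
finally:
    pipeline.stop()
```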

The Challenges

There is no field-ready hardware available.
There is no established post-production workflow for field capture.

The Context

For 12 weeks in the spring of 2020 (a month of which fell during the COVID-19 pandemic), we tested and refined the existing Volum workflow.

Team


Matt MacVey
Keishel Williams


What we did

A look behind the scenes at our first live capture efforts

Watch a screen capture of our first prototype


What we learned


Next Steps

Finish the Azure Kinect version of the Volum Box
Refine the workflow for capturing with multiple 3D cameras
Optimize photogrammetry and volumetric video content for better performance on mobile and non-mobile platforms

Final Presentation
