An exploration by Volum and the Newmark J-School, with support from the New York Times and the New York City Media Lab.
Test, refine, and document Volum's volumetric video capture process for use in the field, and make short volumetric videos that readers can interact with in AR.
The Volum Box, built around the Intel RealSense D415 depth + RGB camera, provides the volumetric video capture. Photogrammetry is used to produce a static 3D reconstruction of the location. Using the Volum workflow developed by Ben Kreimer, the volumetric capture and the photogrammetry scene scan are then merged in Unity, recreating the captured scene for viewing in augmented reality.
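The core operation behind depth-camera volumetric capture is deprojecting each depth pixel into a 3D point using the camera's intrinsics. The sketch below illustrates that idea only; the intrinsic values (`fx`, `fy`, `cx`, `cy`) are illustrative placeholders, not the D415's actual calibration, and this is not the Volum workflow itself.

```python
# Minimal sketch: deprojecting a depth pixel to a camera-space 3D point,
# the basic operation behind volumetric capture from a depth + RGB camera.
# Intrinsics below are illustrative, not real D415 calibration values.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert pixel (u, v) with depth in meters to camera-space (x, y, z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the principal point maps straight down the optical axis.
point = deproject(640, 360, 1.5, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
print(point)  # (0.0, 0.0, 1.5)
```

Applying this to every pixel of a depth frame yields the per-frame point cloud that is later aligned with the photogrammetry scan.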
There is no field-ready hardware available.
There is no established post-production workflow for field capture.
For 12 weeks in the spring of 2020 (including a month of the COVID-19 pandemic), we tested and refined the existing Volum workflow.
What we did
A look behind the scenes at our first live capture efforts
Watch a screen capture of our first prototype
What we learned
Finish Azure Kinect version of the Volum Box
Refine the multi-camera 3D capture workflow
Optimize photogrammetry and volumetric video content for better performance on mobile and non-mobile platforms