
Week 9: Creating 3D Models in Pix4D

Introduction

In this lab, I used Pix4D Mapper to process UAS imagery collected during earlier 3D modeling and mapping missions. The goal was to turn raw aerial photographs into spatial data products that can be used in GIS and remote sensing software. I chose Pix4D because it is widely used for Structure-from-Motion (SfM) and Multi-View Stereo (MVS) processing, which generates point clouds and 3D models from overlapping images. Although the workflow may look simple, accurate results depend on correct project setup and careful review of the input data.

Overview

This document summarizes my end-to-end Pix4D workflow, including logging into the software, creating and setting up a new project, and using the Help Manual to better understand key tools and outputs. I verified image metadata, selected appropriate processing templates, and adjusted camera-related settings when needed to improve model accuracy. The imagery came from two prior Skydio missions: an “accident/structure” scene and a vertical “light pole” scan. Before processing, I checked that the datasets were suitable for SfM by reviewing organization, overlap assumptions, per-image properties (e.g., exposure time, ISO, f-stop), and whether geographic tags were included. Finally, I processed the imagery to generate a densified point cloud, 3D mesh outputs, and basic quality assessments while documenting best practices to avoid common processing errors.
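The per-image metadata review described above can be sketched as a small script. This is a minimal, hypothetical helper, not part of Pix4D: the dict keys (iso, exposure_time, gps) and the thresholds are illustrative assumptions about what "suitable for SfM" might mean, not values required by the software.

```python
# Minimal sketch of the metadata review step: given per-image EXIF-style
# properties, flag images that could weaken an SfM reconstruction.
# Keys and thresholds are illustrative assumptions, not Pix4D requirements.

def check_image_suitability(meta, max_iso=1600, max_exposure_s=1 / 250):
    """Return a list of warnings for one image's metadata dict."""
    warnings = []
    if meta.get("gps") is None:
        warnings.append("no geotag: image must be georeferenced manually")
    if meta.get("iso", 0) > max_iso:
        warnings.append("high ISO: sensor noise may reduce tie-point quality")
    if meta.get("exposure_time", 0) > max_exposure_s:
        warnings.append("slow shutter: motion blur likely on a moving UAS")
    return warnings

# Example: one well-exposed geotagged frame and one problem frame.
good = {"iso": 100, "exposure_time": 1 / 500, "gps": (35.1, -85.3)}
bad = {"iso": 3200, "exposure_time": 1 / 60, "gps": None}

print(check_image_suitability(good))       # []
print(len(check_image_suitability(bad)))   # 3
```

Running a check like this before processing catches the problems (missing geotags, motion blur, noisy exposures) that otherwise only show up as alignment failures hours into a run.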

Processing

I initially tried to process my dataset in Pix4D, but the application produced frequent errors, especially when loading the project and applying settings, which made the workflow unstable. Because of this, I switched to a different photogrammetry application, Agisoft Metashape Professional.
All of my images were stored on an SD card, so I imported the full photo set into Metashape and ran camera alignment first to confirm that the camera positions were solving correctly and that tie points were being generated as expected. As shown in the screenshots, the KHS House dataset was organized as a single chunk of 424 images; after alignment it produced 290,909 tie points, which clearly outlined the structure of the house and the surrounding area as a sparse point cloud.
For this project, I did not choose a rapid/low-resolution option. I processed the imagery at high quality, which significantly increased the computational load and pushed the total processing time past 15 hours. To make this feasible, I started processing early in the morning at NISW, when I could leave the computer running for an extended period without interruption. During processing, I periodically checked project quality (tie-point distribution, gaps, and noise) to make sure the workflow was progressing correctly. As the point cloud began to form, initially looking like scattered dots coming together, I used the mesh and visualization options to make the reconstruction look clearer and more complete.
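A back-of-the-envelope calculation shows why the high-quality run took so long. The sketch below assumes dense-reconstruction cost scales roughly with the number of pixels processed, and that each step down in quality halves both image dimensions (4x fewer pixels); the throughput figure is a made-up placeholder, not a benchmark of Metashape.

```python
# Rough cost model: processing time ~ total megapixels actually processed.
# Each quality step down is assumed to downscale both image dimensions by 2,
# i.e. 4x fewer pixels per step. The throughput value is a placeholder.

def relative_pixels(quality_steps_down):
    """Fraction of full-resolution pixels processed after N downscale steps."""
    return 1 / (4 ** quality_steps_down)

def estimated_hours(n_images, megapixels, mp_per_hour, quality_steps_down=0):
    """Rough wall-clock estimate under the linear-cost assumption."""
    total_mp = n_images * megapixels * relative_pixels(quality_steps_down)
    return total_mp / mp_per_hour

# 424 images at a nominal 20 MP, with an assumed throughput of 550 MP/hour:
high = estimated_hours(424, 20, 550, quality_steps_down=0)
medium = estimated_hours(424, 20, 550, quality_steps_down=1)
print(round(high, 1), round(medium, 1))  # high quality is 4x slower than medium
```

Under these assumptions a full-resolution run lands in the 15-hour range, while one quality step down would cut the time by roughly a factor of four, which matches the common advice to do a low-quality test run before committing to a long high-quality job.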
The second screenshot shows that different datasets can produce very different point densities (for example, 345 images yielded about 52,787 tie points), which reminded me that results depend heavily on image overlap, viewing angles, and the quality settings used.
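The density difference between the two datasets can be made concrete by normalizing tie points per aligned image. The counts below are the ones reported above; the per-image metric itself is just a coarse comparison that ignores overlap geometry.

```python
# Tie points per aligned image, using the counts reported in the screenshots.
# A coarse density metric only; it ignores overlap geometry and view angles.

def tie_points_per_image(tie_points, n_images):
    return tie_points / n_images

khs_house = tie_points_per_image(290_909, 424)  # KHS House dataset
second = tie_points_per_image(52_787, 345)      # second dataset

print(f"KHS House: {khs_house:.0f} tie points/image")        # ~686
print(f"Second dataset: {second:.0f} tie points/image")      # ~153
```

Per image, the KHS House set is roughly 4.5x denser, which is consistent with the point that overlap and quality settings, not just image count, drive reconstruction density.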
[Screenshot 1: KHS House chunk, 424 aligned images forming the tie-point cloud]
[Screenshot 2: second dataset, 345 images with about 52,787 tie points]