1. Introduction
This lab introduced end‑to‑end processing of UAS imagery in Pix4D Mapper, transforming raw photographs gathered during 3D object and mapping missions into analysis‑ready spatial products. Pix4D automates a Structure‑from‑Motion (SfM) and Multi‑View‑Stereo (MVS) workflow, which can make processing feel deceptively simple; in practice, correct setup is critical for accuracy. I focused on logging in with the shared license, validating image metadata, selecting appropriate templates, and configuring the camera model to account for the Skydio 2’s rolling shutter. The goal was to produce a densified point cloud, a triangle mesh, and a basic quality assessment for two missions while documenting best practices to avoid common errors.
2. Study Area and Source Data
The imagery originated from two prior Skydio missions: an “accident/structure” scene and a vertical “light pole” scan. Both datasets were organized in a standardized folder structure to support reproducibility. The study areas were small outdoor environments typical of training scenarios, with open lines‑of‑sight and limited obstructions. Environmental conditions at collection time were typical daylight with stable weather; these assumptions informed exposure settings and overlap adequacy in downstream processing. I verified that the images were suitable for SfM by inspecting per‑image properties (filename, exposure time, ISO, f‑stop) and by checking whether geographic tags were included when available.
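The per‑image suitability check described above can be sketched as a small validation routine. This is an illustrative sketch only: it assumes the EXIF fields have already been extracted into dictionaries (for example with a tool such as exiftool or the Pillow library), and the field names and sample values are hypothetical, not Pix4D outputs.

```python
# Minimal sketch of a per-image metadata check for SfM suitability.
# Field names ("exposure_time", "iso", "f_stop", "gps") are assumptions
# standing in for the EXIF tags inspected in the lab.

def validate_image_record(record):
    """Return a list of problems found for one image's metadata."""
    problems = []
    required = ("filename", "exposure_time", "iso", "f_stop")
    for field in required:
        if record.get(field) is None:
            problems.append(f"missing {field}")
    # Geotags are optional for SfM but worth flagging when absent.
    if record.get("gps") is None:
        problems.append("no geographic tag (geolocation will be arbitrary)")
    return problems

# Illustrative records, not actual mission imagery:
records = [
    {"filename": "IMG_0001.JPG", "exposure_time": "1/500", "iso": 100,
     "f_stop": 2.8, "gps": (40.4237, -86.9212)},
    {"filename": "IMG_0002.JPG", "exposure_time": "1/500", "iso": 100,
     "f_stop": None, "gps": None},
]

for rec in records:
    issues = validate_image_record(rec)
    print(rec["filename"], "->", "OK" if not issues else "; ".join(issues))
```

A batch check like this catches missing or inconsistent exposure metadata before images ever reach the Pix4D project, rather than after a failed calibration.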
Figure 1. Overview map of study areas and camera locations.
3. Data Management and Project Setup
I created a working directory at C:\temp\username_UASmapdata, then added a dated subfolder named with platform and mission details. Inside, I replicated the provided template: 1_Collection for raw imagery, 2_Processing for Pix4D projects, and 3_Analysis for downstream GIS work. Within the 2_Processing directory, I prepared two subfolders—Mission1images and Mission2images—and imported the corresponding photos into separate Pix4D projects named [username]_[MMDDYY]_PWAplot#.
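The folder template above lends itself to scripting, which helps keep the schema consistent across missions. The sketch below mirrors the 1_Collection / 2_Processing / 3_Analysis layout from the lab handout; the example root and mission‑folder names are illustrative placeholders, not the actual paths used.

```python
# Sketch of the standardized project tree described above.
# The dated mission-folder name below is a made-up example.
from pathlib import Path
import tempfile

def make_project_tree(root, mission_folder):
    """Create the Collection / Processing / Analysis template."""
    base = Path(root) / mission_folder
    subdirs = [
        base / "1_Collection",
        base / "2_Processing" / "Mission1images",
        base / "2_Processing" / "Mission2images",
        base / "3_Analysis",
    ]
    for d in subdirs:
        # parents=True builds intermediate folders; exist_ok makes re-runs safe.
        d.mkdir(parents=True, exist_ok=True)
    return subdirs

# Stand-in for C:\temp\username_UASmapdata, kept portable for this sketch:
root = tempfile.mkdtemp()
tree = make_project_tree(root, "091525_Skydio2_mission")
```

Because `mkdir(parents=True, exist_ok=True)` is idempotent, the same script can be re‑run for each new mission without disturbing existing data.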
During project creation, I confirmed that the Skydio 2 camera uses an electronic rolling shutter. To mitigate rolling‑shutter artifacts in reconstruction, I edited the camera model and set it to Linear Rolling Shutter. I retained the default image coordinate system and deferred changes to output coordinates to future labs. I selected the 3D Model processing template for both missions since images were not strictly nadir and the objective was 3D reconstruction rather than a map‑grade orthomosaic.
4. Processing Workflow in Pix4D
I followed Pix4D’s canonical three‑phase pipeline. Initial Processing computed image keypoints, internal calibration, and image alignment, producing a Quality Report that I reviewed before continuing. In the Point Cloud and Mesh phase, I generated a densified point cloud and a triangle mesh, inspecting point density around edges and vertical features. I left the third stage, DSM/Orthomosaic/Index, disabled for this lab, since the deliverables emphasized 3D model outputs and fly‑through visualization.
Quality control hinged on the report generated after Initial Processing. I confirmed that most images calibrated in a single block, that optimization deltas for internal camera parameters were within acceptable thresholds, and that no major alignment gaps or residuals suggested poor coverage. For the light‑pole dataset, I paid particular attention to vertical structure fidelity and the presence of any shearing that might indicate insufficient oblique coverage or residual rolling‑shutter effects.
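The report review above amounts to a simple pass/review gate, which can be sketched programmatically. The thresholds below (95% of images calibrated, 5% change in internal camera parameters) are assumptions chosen for illustration, not Pix4D defaults; the 104‑image count comes from Mission 1.

```python
# Illustrative QC gate mirroring the Quality Report checks described
# above. Thresholds are assumptions for this sketch, not Pix4D values.

def qc_summary(n_images, n_calibrated, n_blocks, param_delta_pct):
    """Return (passed, notes) for a simple initial-processing QC gate."""
    notes = []
    calibrated_pct = 100.0 * n_calibrated / n_images
    if calibrated_pct < 95.0:
        notes.append(f"only {calibrated_pct:.1f}% of images calibrated")
    if n_blocks > 1:
        notes.append(f"{n_blocks} blocks found: alignment gaps likely")
    if param_delta_pct > 5.0:
        notes.append(f"camera parameters changed {param_delta_pct:.1f}% "
                     "during optimization")
    return (not notes, notes)

# Example using the Mission 1 image count; the other values are
# hypothetical placeholders for figures read off the Quality Report.
ok, notes = qc_summary(n_images=104, n_calibrated=104,
                       n_blocks=1, param_delta_pct=1.2)
print("PASS" if ok else "REVIEW", notes)
```

Encoding the gate this way makes the "review before continuing" habit explicit: later processing phases run only when the summary passes.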
5. Results and Deliverables
Both missions produced coherent reconstructions. The “accident/structure” scene yielded a well‑aligned single block with adequate point density across planar surfaces and edges. The light‑pole mission reconstructed the vertical form cleanly, demonstrating how oblique coverage supports cylindrical and tall features. I created brief animation trajectories in the rayCloud to visualize completeness and surface continuity; each clip runs under twenty‑five seconds and orbits the subject for a 360‑degree perspective.
The expected deliverables from this lab are the densified point clouds, triangle meshes, quality reports, and short fly‑through videos for each mission. File sizes and image counts vary with overlap and scene complexity; I documented timing and storage metrics below to support later comparisons and to inform planning for larger projects.
Table 1. Mission Metrics and File Characteristics
Metric                               Value
Time to drive and set up             20 min
Initial Processing time              ~2 hours
Images processed, Mission 1          104
Images processed, Mission 2          363
6. Discussion and Lessons Learned
This exercise reinforced the importance of correct camera modeling and overlap strategy. Setting the Skydio 2 to a Linear Rolling Shutter model measurably improved vertical fidelity and minimized shear around edges. Reviewing the Quality Report before running later phases saved time and prevented pushing poor inputs downstream. For vertical targets like the light pole, sufficient oblique coverage matters more than raw image count, and consistent exposure reduces noisy textures and “sparkle” in the point cloud.
Just as important as processing is disciplined data management. A clear folder schema, consistent project naming, and separating Collection, Processing, and Analysis kept work traceable and simplified re‑runs. These habits become essential when integrating ground control or merging projects later, where knowing exactly which imagery and settings produced a given model is critical for QA/QC.
7. Hardware Notes and Software References
Pix4D’s documentation emphasizes adequate CPU, GPU, RAM, and storage for SfM workloads. The workstation used for this lab reported 64 GB of RAM, a GPU with roughly 18 GB of memory, a multi‑core CPU running near 2.10 GHz, and about 932 GB of free storage, which proved sufficient for these small datasets. I also reviewed the Video Academy and support pages for topics including creating new camera models, generating contours, editing point clouds, and creating video animations. These resources informed my configuration choices and will guide future labs involving GCPs, DSM/orthomosaic outputs, and merged projects.
8. Conclusion
Processing two Skydio 2 datasets in Pix4D produced reliable 3D reconstructions and provided a practical foundation for SfM pipelines. Key takeaways include modeling the rolling shutter correctly, validating alignment via the Quality Report, and adopting a disciplined folder structure. The resulting point clouds, meshes, and fly‑throughs establish a baseline for upcoming labs where I will incorporate ground control, refine coordinate systems, and generate map‑grade raster products.