3D Scanning: Key Notes & Considerations

Lighting in a 3D environment

How does lighting work in a 3D environment?

In Blender, lighting behaves much like it does in the real world. It influences how objects appear, how shadows are cast, and how materials respond to light. To achieve the desired look, the lighting setup must consider not just the light sources themselves, but also how objects are shaded and how they interact with the light.
Key factors that affect this interaction include:
Surface materials: Properties like roughness, metallic, and specular determine how light reflects or scatters across a surface.
Object normals and geometry: The shape and direction of surfaces affect how light hits and bounces off them.
Render engine (Eevee vs. Cycles): Cycles offers more physically accurate lighting, while Eevee is faster but may require more tweaking for realism.
Light sources and reflections: see the table below

Lighting Options

| Light Type | Usage | Common Info |
|---|---|---|
| Point Light | Emits light equally in all directions from a single point (like a bulb) | Use with falloff settings (Inverse Square is realistic) |
| Sun Light | Simulates distant sunlight; consistent direction and intensity | Rotate to set direction; size affects shadow softness |
| Spot Light | Emits a cone of light, like a flashlight or stage spotlight; useful for dramatic lighting | Control angle, blend, and falloff to shape the beam; custom gobo light textures are possible |
| Area Light | Emulates light from a flat surface, like a softbox; great for indoor or studio lighting, but more expensive to render | Size and shape affect softness and spread |
| Ambient Light | Not a separate light; controlled via the World settings | Use HDRI environments for realism; adjust strength in the World tab |
| Mesh Light | Emissive materials on mesh objects | Set an Emission shader in the Material; enable "Multiple Importance Sampling" |
| HDRI (Environment) | Image-based lighting from spherical images; highly realistic reflections and lighting, very easy global illumination | Load in the World shader using an Environment Texture node |
| Volume Light | Light interacting with particles or fog; great for god rays, but requires long render times | Use with a World Volume or a cube with a Principled Volume shader |
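The "Inverse Square is realistic" note for point lights comes straight from physics: a point source spreads its power over a sphere, so the light received falls off with the square of the distance. A minimal sketch of that relationship, independent of Blender:

```python
import math

def point_light_irradiance(power_watts: float, distance_m: float) -> float:
    """Irradiance (W/m^2) from an ideal point light, via the inverse-square law.

    A point light radiates its power over a sphere of area 4*pi*d^2,
    so doubling the distance quarters the received light.
    """
    return power_watts / (4 * math.pi * distance_m ** 2)

# Doubling the distance quarters the irradiance:
near = point_light_irradiance(100.0, 1.0)
far = point_light_irradiance(100.0, 2.0)  # near / far == 4
```

This is why a point light close to a subject produces a dramatic hot spot while the same light moved back reads as flat: the falloff is quadratic, not linear.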

During the scanning process, how is lighting handled?

Lighting requirements vary depending on the scanning technique:

Photogrammetry Scans:

Lighting: Use soft, diffuse lighting to minimize harsh shadows and highlights. Shadows can confuse the photogrammetry software and lead to errors in the mesh reconstruction. Avoid direct light sources or strong directional lighting.
Surface Texture: Surfaces with high texture and contrast (e.g., rough, detailed, patterned) work much better for photogrammetry.
Smooth or low-texture surfaces are challenging to capture accurately because they lack visual features for the software to track between images.

Gaussian Splatting (GS) to Mesh

Compared to photogrammetry, Gaussian Splatting is a much newer technique that offers several advantages. It provides better handling of lighting interactions, enables highly detailed captures, and can even perform well on low-texture surfaces, which are typically difficult for traditional photogrammetry.
Lighting setup requirements are relatively simple, and light-object interactions can be captured effectively, even on transparent or reflective surfaces.
However, while Gaussian Splatting preserves full spherical-harmonics lighting information in its point-cloud data, converting GS data to a mesh may compress or discard that information, causing a loss of quality.
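As a concrete illustration of what that spherical-harmonics data encodes: in typical GS .ply files, each point's view-independent base color is stored as degree-0 (DC) SH coefficients, while the higher-order bands carry the view-dependent lighting that compression and mesh conversion tend to drop. A minimal decoding sketch, assuming the common `f_dc` layout (property names vary by exporter):

```python
import math

# Degree-0 spherical-harmonic basis constant: Y_0^0 = 1 / (2 * sqrt(pi))
SH_C0 = 1.0 / (2.0 * math.sqrt(math.pi))

def sh_dc_to_rgb(f_dc):
    """Recover a point's view-independent base color from its DC SH coefficients.

    GS .ply files commonly store color as SH coefficients (f_dc_0..2 for the
    constant band); the flat color is 0.5 + C0 * f_dc per channel. The
    higher-order bands (f_rest_*) add view-dependent lighting and are the
    data that lossy or compressed exports tend to discard.
    """
    return tuple(0.5 + SH_C0 * c for c in f_dc)

# A point with all-zero SH coefficients decodes to mid grey:
assert sh_dc_to_rgb((0.0, 0.0, 0.0)) == (0.5, 0.5, 0.5)
```

Dropping the higher bands does not change this flat color, which is why compressed splats still look correct from one angle but lose their reactive, angle-dependent shading.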
More Info on the Gaussian Splatting Technique
Overview of Gaussian Splatting (GS)

| Category | Details |
|---|---|
| File Format | .ply (point-cloud format) |
| Stored Data | XYZ coordinates; RGB colors; spherical harmonics (for lighting) |
| Lighting Interaction | Excellent with a full (uncompressed) GS PLY including spherical harmonics; reduced with a compressed GS PLY (flat, less reactive) |
| File Size | Large when uncompressed (rich per-point data); smaller when compressed (but loses lighting/shading detail) |
| Surface Compatibility | Works well on low-texture and high-detail surfaces (better than photogrammetry) |
| Tools & Compatibility | Spline (auto-compresses GS PLY, may reduce quality); Polycam (displays GS PLY in an optimized view); Postshot (GS capture/view); KIRI Engine |
| Quality Output | High fidelity for static objects/environments; ideal for photorealism and light-sensitive use cases |
| Drawbacks | Not mesh-based (hard to edit/animate); large file sizes; fewer tools available (new tech) |
| Best Use Cases | VR/AR visualization; scans of detailed or low-texture objects; static environments; high-fidelity previews |
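The "large when uncompressed" file size can be made concrete with back-of-envelope math. The sketch below assumes the common per-point layout of 62 float32 values (position, normals, 48 SH color coefficients, opacity, scale, and a rotation quaternion); exact property counts vary by exporter, so treat the default as an assumption rather than a spec:

```python
def gs_ply_size_mb(num_points: int, floats_per_point: int = 62) -> float:
    """Rough uncompressed size of a Gaussian Splatting .ply, in MiB.

    Assumed per-point layout (common, but exporter-dependent):
    xyz (3) + normals (3) + SH color coefficients (48) + opacity (1)
    + scale (3) + rotation quaternion (4) = 62 float32 values = 248 bytes.
    """
    bytes_total = num_points * floats_per_point * 4  # float32 = 4 bytes
    return bytes_total / (1024 ** 2)

# A 1-million-point capture is already ~237 MiB before any compression:
size = gs_ply_size_mb(1_000_000)
```

This is why captures of whole environments quickly reach gigabytes, and why compressed variants that drop the higher SH bands trade lighting fidelity for a much smaller per-point payload.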
Are there any special precautions?

Gaussian Splatting (GS) generates high-quality visuals but often results in large file sizes and heavy scene data. This can slow down workflows, especially for animation-ready characters, due to increased processing demands. Since GS isn't mesh-based, it's not directly compatible with rigging or deformation workflows, and conversion may be required. Thorough testing is essential before integrating GS into animation pipelines.
Photogrammetry, meanwhile, may not always deliver the most realistic results, and achieving the desired visual quality can be time- and labor-intensive.
Photogrammetry vs. GS to Mesh

| Method | Strengths | Limitations |
|---|---|---|
| 1. Photogrammetry | Well-established, with standardized workflows; high-quality results when done professionally; AI-assisted post-processing can enhance detail | Requires controlled lighting and camera setup; needs retopology for animation; mesh optimization may reduce quality |
| 2. Gaussian Splatting → Mesh | Newer technique with superior visual detail; handles low-texture surfaces better; captures more accurate lighting info | Mesh conversion may reduce detail; pipeline is still evolving; fewer editing tools and less animation readiness |

Physical Build vs. 3D Creation

Which types of sets or elements are more relevant to build physically? Which ones would benefit more from being created directly in 3D?
In general, large-scale environments and effects benefit the most from being built in 3D. However, objects that require physical interaction can be more challenging and time-consuming to create and manage in 3D.

Physical vs. 3D Sets Comparison

| Aspect | Physical Sets | 3D Sets (Digital) |
|---|---|---|
| Realism | Very high (natural lighting, real materials, tactile) | Can be highly realistic with time, skill, and render power |
| Actor Interaction | Direct and seamless | Requires green screen or tracking |
| Setup & Build Time | Time-consuming; depends on materials and construction | Faster for concept work; longer if high detail is needed |
| Cost | Expensive for large or complex builds | Expensive for high-end renders or simulation-heavy scenes |
| Flexibility & Revisions | Limited once built | High: elements can be tweaked, duplicated, re-lit easily |
| Lighting Control | Limited to physical lights and setups | Fully controllable and adjustable |
| Environment Scale | Limited by space and budget | Unlimited; ideal for vast or imaginary worlds |
| Camera Movement Freedom | Physical constraints apply | Full freedom (fly-throughs, impossible angles) |
| Post-Production Integration | Natural, especially with actors | Requires compositing, matchmoving, and cleanup |
| Best For | Close-ups, actor interaction, tangible realism | Backgrounds, large-scale environments, VFX-heavy scenes |
3D Sets Difficulties

| Category | Issue | Notes |
|---|---|---|
| Interaction | Clipping Artifacts | When a character holds or touches an object, parts may intersect or pass through each other if not carefully animated, like fingers clipping through a cup. This demands detailed adjustments to hand and object positioning. |
| Interaction | Loss of Real-World Detail | Soft or elastic objects (like pillows or fabric) are harder to replicate accurately. Subtle deformations from touch or weight are often lost, making the interaction feel less convincing compared to real-world physics. |

Freedom of manipulation

- How much flexibility does 3D give us in terms of modifying objects or environments?
- Can we easily change the color of an object, a wall or a set element?
- Are there technical limitations we should be aware of?

Modifications to objects or environments should remain as flexible as possible throughout the project. In general, changes are much easier to implement when planned early in the workflow. As the project progresses, technical and creative constraints tend to build up, making later adjustments more complex and time-consuming.

Changes to elements like colors or design features are typically manageable, but more specific information is needed to provide clearer guidance and direction.
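One recurring practical detail when recoloring objects, walls, or set elements: Blender's shader inputs (such as a Principled BSDF's Base Color) expect linear color values, while design specs usually arrive as sRGB hex codes. A small conversion sketch in pure Python; the Blender assignment is shown only as a comment, and the node name there is the default, which may differ per material:

```python
def srgb_to_linear(c: float) -> float:
    """Convert one sRGB channel (0..1) to linear, per the standard transfer curve."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def hex_to_linear_rgba(hex_color: str, alpha: float = 1.0):
    """Turn a designer-supplied hex color (e.g. '#CC5500') into a linear RGBA tuple
    suitable for a shader's color input."""
    h = hex_color.lstrip('#')
    srgb = [int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4)]
    return (*(srgb_to_linear(c) for c in srgb), alpha)

# In Blender, the resulting tuple would be assigned to, e.g.:
# mat.node_tree.nodes["Principled BSDF"].inputs["Base Color"].default_value
rgba = hex_to_linear_rgba("#CC5500")
```

Skipping this conversion is a common reason a "matched" color looks washed out or too dark in the render compared to the reference swatch.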

Scanning constraints

1. Is there a minimum or maximum size for an object to be scanned effectively?
2. Are there any limits regarding the types of surfaces, shapes or patterns that can be scanned? For example: transparent, reflective, very dark surfaces, complex geometries or repetitive patterns.
3. In general, what are the main technical limitations we should keep in mind when preparing the set and props?


1. Static Objects / Environments: There are fewer constraints on maximum size for static objects or environments. Larger scenes can be scanned or captured using a 360-degree camera or a wide-angle lens, which should provide sufficient data for generating the scene within a 3D environment.

2. Surface Types & Technique Considerations: The effectiveness of scanning techniques depends on the surface types and the project requirements. A thorough round of testing is recommended to tailor the workflow for optimal results.

3. General Notes: Transparent and reflective surfaces should be avoided when using photogrammetry, as well as very dark or low-texture materials. Complex geometries are typically manageable, but tests are necessary to help identify and plan around any limitations.

4. Technical Limitations: The primary technical bottleneck is converting scans into animation-ready, rig-ready meshes, which is often the most time-consuming part of the process. Previsualization and testing are essential to streamline the workflow and avoid delays.

5. Additional Note: Using a motion capture suit for animation should not present any issues, provided that the character is successfully rigged and skinned.
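The mesh-optimization bottleneck in point 4 can at least be budgeted up front: the ratio handed to a decimation step (for example, Blender's Decimate modifier) is simply the animation-ready target triangle count divided by the raw scan's. A trivial sketch; the function name and the budget figures in the example are illustrative, not pipeline standards:

```python
def decimate_ratio(scan_tris: int, target_tris: int) -> float:
    """Ratio to feed a decimation step so a raw scan meets an animation budget.

    Raw photogrammetry or GS-converted meshes often arrive with millions of
    triangles, while rigged characters typically need far fewer. A result of
    1.0 means the scan is already within budget.
    """
    if scan_tris <= 0:
        raise ValueError("scan must contain triangles")
    return min(1.0, target_tris / scan_tris)

# A 4M-triangle scan reduced to a 50k-triangle character budget:
ratio = decimate_ratio(4_000_000, 50_000)  # 0.0125
```

Ratios this aggressive are why decimation alone rarely suffices and proper retopology (plus baked normal maps to recover surface detail) is usually planned into the schedule.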
