Lighting in a 3D environment
How does lighting work in a 3D environment?
In Blender, lighting behaves much like it does in the real world. It influences how objects appear, how shadows are cast, and how materials respond to light. To achieve the desired look, the lighting setup must consider not just the light sources themselves, but also how objects are shaded and how they interact with the light.
Key factors that affect this interaction include:
- Surface materials: Properties like roughness, metallic, and specular determine how light reflects or scatters across a surface.
- Object normals and geometry: The shape and direction of surfaces affect how light hits and bounces off them.
- Render engine (Eevee vs. Cycles): Cycles offers more physically accurate lighting, while Eevee is faster but may require more tweaking for realism.
- Light sources and reflections: see the table below
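How these factors combine can be illustrated with a minimal shading model. The sketch below is plain Python, not Blender's actual shading code; the function and parameter names are illustrative. It combines a Lambertian diffuse term with a Blinn-Phong specular term, showing how the surface normal, the light direction, and a roughness-like exponent change the response:

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, light_dir, view_dir, shininess=32.0):
    """Very simplified diffuse + specular response at one surface point.

    normal, light_dir, view_dir: 3D vectors (light_dir points toward the light).
    shininess: higher values behave like a smoother, more mirror-like surface.
    Returns (diffuse, specular) intensities in [0, 1].
    """
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    # Lambert's cosine law: surfaces facing the light appear brighter.
    diffuse = max(dot(n, l), 0.0)
    # Blinn-Phong: specular peak around the half-vector between light and view.
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    specular = max(dot(n, h), 0.0) ** shininess
    return diffuse, specular

# A surface facing straight up, lit and viewed from directly above:
d, s = shade((0, 0, 1), (0, 0, 1), (0, 0, 1))
# Tilting the light away from the normal lowers the diffuse term.
d2, _ = shade((0, 0, 1), (1, 0, 1), (0, 0, 1))
```

In production, Cycles and Eevee use physically based BSDFs rather than this toy model, but the same inputs (normals, light direction, roughness) drive the result.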
During the scanning process, how is lighting handled?
Lighting requirements vary depending on the scanning technique:
Photogrammetry Scans:
Lighting:
Use soft, diffuse lighting to minimize harsh shadows and highlights. Shadows can confuse the photogrammetry software and lead to errors in the mesh reconstruction. Avoid direct light sources or strong directional lighting.
Surface Texture:
Surfaces with high texture and contrast (e.g., rough, detailed, patterned) work much better for photogrammetry. Smooth or low-texture surfaces are challenging to capture accurately because they lack visual features for the software to track between images.
Gaussian Splatting (GS) to Mesh
Compared to photogrammetry, Gaussian Splatting is a much newer technique that offers several advantages. It handles lighting interactions better, enables highly detailed captures, and can even perform well on low-texture surfaces, which are typically difficult for traditional photogrammetry. Lighting setup requirements are relatively simple, and light-object interactions can be captured effectively, even on transparent or reflective surfaces. Note, however, that while Gaussian Splatting preserves full spherical harmonics lighting information in the point cloud data, converting GS data to a mesh may compress that information, causing quality loss.
More info on the Gaussian Splatting technique: Overview of Gaussian Splatting (GS)
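The spherical-harmonics point can be made concrete. Each Gaussian stores SH coefficients so its color varies with viewing direction; truncating everything above degree 0 (which is effectively what baking to a static mesh texture does for view dependence) leaves only a single constant color. A minimal sketch in plain Python, with made-up coefficient values for illustration:

```python
import math

# Real spherical-harmonics basis constants, degrees 0 and 1.
Y0 = 0.5 * math.sqrt(1.0 / math.pi)   # constant term
Y1 = 0.5 * math.sqrt(3.0 / math.pi)   # linear (direction-dependent) terms

def sh_color(coeffs, direction):
    """Evaluate one color channel from degree-0/1 SH coefficients.

    coeffs: (c0, c1, c2, c3) weighting the (1, y, z, x) basis functions.
    direction: unit viewing direction (x, y, z).
    """
    x, y, z = direction
    c0, c1, c2, c3 = coeffs
    return c0 * Y0 + Y1 * (c1 * y + c2 * z + c3 * x)

# Made-up coefficients for one Gaussian's red channel: slightly brighter
# when viewed along +x (e.g., a glossy highlight facing that way).
coeffs = (1.0, 0.0, 0.0, 0.5)

front = sh_color(coeffs, (1.0, 0.0, 0.0))    # viewed from +x
back = sh_color(coeffs, (-1.0, 0.0, 0.0))    # viewed from -x

# Truncating to degree 0 discards the directional variation entirely:
baked = sh_color((coeffs[0], 0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

The degree-0 result is the average of the front and back views here, which is exactly the view-dependent detail that can be lost in GS-to-mesh conversion.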
Are there any special precautions?
Gaussian Splatting (GS) generates high-quality visuals but often results in large file sizes and heavy scene data. This can slow down workflows—especially for animation-ready characters—due to increased processing demands. Since GS isn't mesh-based, it’s not directly compatible with rigging or deformation workflows, and conversion may be required. Thorough testing is essential before integrating GS into animation pipelines.
Photogrammetry may not always deliver the most realistic results, and achieving the desired visual quality can be time- and labor-intensive.
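A rough pre-check for the low-texture problem mentioned above is to measure local contrast in a sample photo before committing to a full capture: if most image patches have near-zero variance, the photogrammetry software will struggle to find features to track. A hedged sketch on a synthetic grayscale image (pure Python; the patch size and thresholds are arbitrary illustrative values):

```python
def patch_variances(image, patch=4):
    """Split a grayscale image (list of rows of 0-255 ints) into patch x patch
    tiles and return the pixel variance of each tile."""
    h, w = len(image), len(image[0])
    variances = []
    for ty in range(0, h - patch + 1, patch):
        for tx in range(0, w - patch + 1, patch):
            pixels = [image[y][x]
                      for y in range(ty, ty + patch)
                      for x in range(tx, tx + patch)]
            mean = sum(pixels) / len(pixels)
            variances.append(sum((p - mean) ** 2 for p in pixels) / len(pixels))
    return variances

def looks_trackable(image, min_variance=25.0):
    """Heuristic: most patches should show some contrast for feature tracking."""
    vs = patch_variances(image)
    textured = sum(1 for v in vs if v >= min_variance)
    return textured / len(vs) > 0.5

# A flat gray wall vs. a checkerboard-like textured surface (8x8 pixels):
flat = [[128] * 8 for _ in range(8)]
textured = [[0 if (x + y) % 2 == 0 else 255 for x in range(8)] for y in range(8)]
```

This is only a sanity check; real photogrammetry tools use proper feature detectors, but the intuition (no local contrast, no trackable features) is the same.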
Photogrammetry vs. GS to Mesh
Physical build vs. 3D creation
Which types of sets or elements are more relevant to build physically? Which ones would benefit more from being created directly in 3D?
In general, large-scale environments and effects benefit the most from being built in 3D. However, objects that require physical interaction can be more challenging and time-consuming to create and manage in 3D.
Physical vs 3D Sets Comparison
Freedom of manipulation
- How much flexibility does 3D give us in terms of modifying objects or environments?
- Can we easily change the color of an object, a wall or a set element?
- Are there technical limitations we should be aware of?
Modifications to objects or environments should remain as flexible as possible throughout the project. In general, changes are much easier to implement when planned early in the workflow. As the project progresses, technical and creative constraints tend to build up, making later adjustments more complex and time-consuming.
Changes to elements like colors or design features are typically manageable, but more specific information is needed to provide clearer guidance and direction.
Scanning constraints
1. Is there a minimum or maximum size for an object to be scanned effectively?
2. Are there any limits regarding the types of surfaces, shapes or patterns that can be scanned? For example: transparent, reflective, very dark surfaces, complex geometries or repetitive patterns.
3. In general, what are the main technical limitations we should keep in mind when preparing the set and props?
1. Static Objects / Environments:
There are fewer constraints on maximum size for static objects or environments. Larger scenes can be scanned or captured using a 360-degree camera or a wide-angle lens, which should provide sufficient data for generating the scene within a 3D environment.
2. Surface Types & Technique Considerations:
The effectiveness of scanning techniques depends on the surface types and the project requirements. A thorough round of testing is recommended to tailor the workflow for optimal results.
3. General Notes:
Transparent and reflective surfaces should be avoided when using photogrammetry, as well as very dark or low-texture materials. Complex geometries are typically manageable, but tests are necessary to help identify and plan around any limitations.
4. Technical Limitations:
The primary technical bottleneck is converting scans into animation-ready, rig-ready meshes, which is often the most time-consuming part of the process. Previsualization and testing are essential to streamline the workflow and avoid delays.
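That conversion step usually involves retopology and polygon reduction before a scan can be rigged. As a toy illustration of the simplest reduction idea, vertex clustering (snap vertices to a coarse grid and merge whatever lands in the same cell), here is a plain-Python sketch; production work would use Blender's Decimate modifier or dedicated retopology tools instead:

```python
def decimate_by_clustering(vertices, faces, cell=1.0):
    """Merge vertices that fall into the same grid cell, then drop faces that
    collapse to fewer than three distinct vertices.

    vertices: list of (x, y, z) tuples; faces: list of vertex-index triples.
    Returns (new_vertices, new_faces).
    """
    cluster_of = {}      # grid cell -> new vertex index
    remap = []           # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_faces = []
    for a, b, c in faces:
        face = (remap[a], remap[b], remap[c])
        if len(set(face)) == 3:          # skip degenerate (collapsed) triangles
            new_faces.append(face)
    return new_vertices, new_faces

# Toy example: two nearby vertices merge, so one of the two triangles collapses.
verts = [(0, 0, 0), (0.1, 0, 0), (2, 0, 0), (0, 2, 0)]
tris = [(0, 1, 2), (1, 2, 3)]
new_verts, new_tris = decimate_by_clustering(verts, tris)
```

Clustering is fast but destroys surface detail and UVs, which is why scan cleanup for animation-ready characters still requires careful manual retopology and testing.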
5. Additional Note:
Using a motion capture suit for animation should not present any issues, provided that the character is successfully rigged and skinned.