Week 02 - Part 2: Set Dressing & Render Pipeline

Summary

When our lighting and composition are in place, we can start dressing and filling up our scene.

Placing assets

When placing assets, keep these guidelines in mind:
What story do you want to tell?
Think of the natural order of things: how did a hole in a roof/wall influence the rest of the scene?
Think of the influence of time: How long has that river been there? How long has this vehicle been abandoned?
Work from big → medium → small → details
Avoid straight lines; they will break the realism

Lighting pass 2.0: Update lighting

When you start placing your assets, you will notice that your lighting starts to behave differently → objects appear brighter/darker or more saturated, the sky becomes too bright/dark, etc. Work with the lights you already have: rotate, translate, play with the intensity, etc.
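
Adjusting lights is normally something you do by hand in the editor, but as a sketch of the relevant calls, this is roughly how you could nudge a key light from C++ (assuming the level contains at least one Directional Light; the values are placeholders):

// Sketch: programmatically re-aim and re-tune the key light after set dressing.
// Normally you would just do this in the Details panel; this only shows the API calls.
#include "Engine/DirectionalLight.h"
#include "Components/LightComponent.h"
#include "Kismet/GameplayStatics.h"

void TweakKeyLight(UWorld* World)
{
    TArray<AActor*> Lights;
    UGameplayStatics::GetAllActorsOfClass(World, ADirectionalLight::StaticClass(), Lights);
    if (Lights.Num() == 0) return;

    ADirectionalLight* Sun = Cast<ADirectionalLight>(Lights[0]);
    Sun->SetActorRotation(FRotator(-35.f, 120.f, 0.f));                       // re-aim the sun
    Sun->GetLightComponent()->SetIntensity(8.f);                              // directional lights use lux
    Sun->GetLightComponent()->SetLightColor(FLinearColor(1.f, 0.95f, 0.85f)); // warm it up slightly
}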

Lighting pass 2.1: Make the lighting chef’s kiss

Once we have updated our lights, we can add more lights to pop out shapes and the focal point and make them readable. Here we can play with and stretch realism, but don’t BREAK realism (don’t have two suns, for example).

Post processing

Once your assets and lighting are in place, we can add the final cherry on top: Post Processing.

THINGS TO KEEP IN MIND:

Don’t light your scene while you are color grading → you will end up with weird values.
Don’t go overboard with vignetting, grain and chromatic aberration.
When you add Curves in Photoshop → set the blending mode to Luminosity so it doesn’t affect your colors.
Add a bit more contrast and saturation to your focal point (subtle).
You can also color grade your scene to sell more of the story and mood. → Don’t tint your assets themselves more green/blue/orange, etc. → you do this in post.
Post processing is subtly adding layers of grain, color grading, vignette, contrast, etc.
You can work the values in the Post Process Volume in Unreal Engine, but this is tedious work.
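
If you do want to set those Post Process Volume values from code instead of the Details panel, here is a minimal sketch, assuming the level contains a single Post Process Volume (double-check the FPostProcessSettings property names against your engine version):

// Sketch: set a few post-process values on the first PostProcessVolume in the level.
// The bOverride_* flags must be enabled, otherwise the values are ignored.
#include "Engine/PostProcessVolume.h"
#include "Kismet/GameplayStatics.h"

void ApplyBasicGrade(UWorld* World)
{
    TArray<AActor*> Volumes;
    UGameplayStatics::GetAllActorsOfClass(World, APostProcessVolume::StaticClass(), Volumes);
    if (Volumes.Num() == 0) return;

    FPostProcessSettings& S = Cast<APostProcessVolume>(Volumes[0])->Settings;

    S.bOverride_VignetteIntensity = true;
    S.VignetteIntensity = 0.3f;                                // keep vignetting subtle

    S.bOverride_BloomIntensity = true;
    S.BloomIntensity = 0.5f;

    S.bOverride_ColorSaturation = true;
    S.ColorSaturation = FVector4(1.05f, 1.05f, 1.05f, 1.0f);   // a touch of extra saturation
}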

How to render

Overview

The Movie Render Queue gives you more control over your renders. Screenshots are good for WIPs, but for the final render you want that extra control.

HANDY CONSOLE COMMANDS
r.DepthOfFieldQuality: 4
r.BloomQuality: 5
r.Tonemapper.Quality: 5
r.RayTracing.GlobalIllumination: 1
r.RayTracing.GlobalIllumination.MaxBounces: 2
r.RayTracing.Reflections.MaxRoughness: 1
r.RayTracing.Reflections.MaxBounces: 2
r.RayTracing.Reflections.Shadows: 2
r.TemporalAA.Upsampling: 3
r.ScreenPercentage: 150 or 200
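
You can type these into the console one at a time, but you can also push them from code (or add them to the Movie Render Queue’s Console Variables setting). A minimal sketch using the values above:

// Sketch: apply the render-quality console variables from code instead of typing
// them by hand. Each Exec call has the same effect as entering the command in the console.
#include "Engine/Engine.h"

void ApplyRenderCVars(UWorld* World)
{
    const TCHAR* Commands[] = {
        TEXT("r.DepthOfFieldQuality 4"),
        TEXT("r.BloomQuality 5"),
        TEXT("r.Tonemapper.Quality 5"),
        TEXT("r.RayTracing.GlobalIllumination 1"),
        TEXT("r.RayTracing.GlobalIllumination.MaxBounces 2"),
        TEXT("r.RayTracing.Reflections.MaxRoughness 1"),
        TEXT("r.RayTracing.Reflections.MaxBounces 2"),
        TEXT("r.RayTracing.Reflections.Shadows 2"),
        TEXT("r.TemporalAA.Upsampling 3"),
        TEXT("r.ScreenPercentage 150"),    // or 200 for the final render
    };
    for (const TCHAR* Cmd : Commands)
    {
        GEngine->Exec(World, Cmd);
    }
}
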
Do this for your Anti-Aliasing: in the Movie Render Queue’s Anti-Aliasing settings, up the sample count to 64 or higher. This also gets rid of the noise in your render.

Terminology

Deferred shading

Lighting is applied deferred. This means that instead of computing the final shaded result while each object is drawn, materials write out their attributes into GBuffers. Lighting passes then read the per-pixel material properties and perform lighting with them.
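
As a mental model only (plain C++, not engine code), the lighting pass of a deferred renderer looks roughly like this, assuming the geometry pass has already filled the G-buffer with per-pixel attributes:

// Conceptual sketch of the deferred lighting pass. The cost scales with
// pixel count x light count, independent of how many meshes wrote into the G-buffer.
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct GBufferPixel { Vec3 albedo; Vec3 normal; };   // per-pixel material attributes
struct Light { Vec3 direction; Vec3 color; };        // a simple directional light

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

std::vector<Vec3> LightingPass(const std::vector<GBufferPixel>& gbuffer,
                               const std::vector<Light>& lights)
{
    std::vector<Vec3> result(gbuffer.size(), Vec3{0.f, 0.f, 0.f});
    for (size_t i = 0; i < gbuffer.size(); ++i)
    {
        for (const Light& light : lights)
        {
            // Simple Lambert term: read the stored attributes, accumulate light.
            const float ndotl = std::max(0.0f, Dot(gbuffer[i].normal, light.direction));
            result[i].x += gbuffer[i].albedo.x * light.color.x * ndotl;
            result[i].y += gbuffer[i].albedo.y * light.color.y * ndotl;
            result[i].z += gbuffer[i].albedo.z * light.color.z * ndotl;
        }
    }
    return result;
}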

Lighting paths

There are 3 different paths: fully dynamic, partially static and fully static. This is the difference between Movable, Stationary and Static lights.

Lit translucency

Translucency is lit and shaded using a single forward pass to guarantee correct blending with other translucency. This has a performance impact though.

Sub surface shading

Materials can use a subsurface shading model made for materials like wax or jade. This is lower quality but cheaper than skin rendering.

Ambient occlusion

The AO is based only on the depth buffer. This means smoothing groups and normal maps do not affect the result.

Bloom

Used to simulate the effect of bright lights in an LDR image.

More info

Pipeline

The way Unreal Engine draws its frames is based on the concept of a retained mode: scene draws are prepared in advance instead of being built from scratch every frame. The engine also does a lot of caching and draw call merging to exploit the fact that Static Meshes change infrequently, so their data can be reused across frames. The engine interfaces with the GPU through API functions. An important one to remember is the draw call: a request to draw meshes on the screen using shaders. The GPU has to wait and can only start drawing after the engine’s rendering code has finished on the CPU, the resulting draw calls have been translated into GPU code by the driver (also on the CPU), and the necessary data has been pushed from RAM to VRAM. Only then can the GPU do its job.
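
Instancing is one concrete way to lean into that reuse: a single instanced component can draw many copies of a Static Mesh with far fewer draw calls than placing separate actors. A minimal sketch (the mesh and owning actor are assumed to exist already):

// Sketch: scatter many copies of one Static Mesh through a single instanced component,
// so the renderer can batch them instead of paying a draw call per copy.
#include "Components/InstancedStaticMeshComponent.h"
#include "GameFramework/Actor.h"

void ScatterRocks(AActor* Owner, UStaticMesh* RockMesh)
{
    // Owner must already be placed in a level and have a valid root component.
    UInstancedStaticMeshComponent* Instances = NewObject<UInstancedStaticMeshComponent>(Owner);
    Instances->SetStaticMesh(RockMesh);
    Instances->RegisterComponent();
    Instances->AttachToComponent(Owner->GetRootComponent(),
                                 FAttachmentTransformRules::KeepRelativeTransform);

    for (int32 i = 0; i < 200; ++i)
    {
        const FTransform T(FRotator(0.f, FMath::FRandRange(0.f, 360.f), 0.f),
                           FVector(FMath::FRandRange(-2000.f, 2000.f),
                                   FMath::FRandRange(-2000.f, 2000.f), 0.f));
        Instances->AddInstance(T);   // one more copy, still a single component
    }
}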

Draw calls

Understanding draw calls is a crucial part of understanding performance bottlenecks. A draw call is an instruction the CPU gives to the GPU to draw something. Usually drawing a mesh needs 2-3 draw calls (geometry, shader, textures).
You can check the number of draw calls in your scene by using the “stat scenerendering” command.
A healthy number of draw calls is around 700. 1000 is ok. 1300 should be your upper limit.
No matter how strong your GPU is, if your CPU can’t process the draw calls fast enough, your framerate will drop. More on how we can optimize our scenes later.
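
If you prefer to toggle that overlay from code rather than the console, the same command can be sent through GEngine:

// Sketch: toggle the "stat scenerendering" overlay (including mesh draw call counts)
// from code. Running it a second time hides the overlay again.
#include "Engine/Engine.h"

void ToggleDrawCallStats(UWorld* World)
{
    GEngine->Exec(World, TEXT("stat scenerendering"));
}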

Virtual shadow maps

Virtual shadow maps are the next step in shadow rendering. They offer a performant, non-RTX-based system that can produce soft, high-resolution shadows. Together with Lumen’s real-time GI, they are one of the core rendering technologies of Unreal Engine 5.
