Compositing Terragen Render Elements
Revision as of 02:41, 10 March 2020
Rebuilding the Image from Lighting Elements
Compositing with Terragen's render elements allows you to recreate and fine-tune the final rendered image or "beauty pass". The compositing project should be set up in linear colour space, and you will use an "additive" workflow, which means setting each render element's merge or blending mode to the equivalent of "additive" in your 2d software package. In Nuke, this means setting the Merge node's operation to "Plus", while in Fusion it means setting the Merge node's "Apply Mode" to "Normal", "Operator" to "Over", and "Alpha Gain" to "0.0".
The simplest example of combining render elements to match the Terragen beauty pass, which is sufficient for many projects, is this:
tgRgb = tgSurfRgb + tgAtmoRgb + tgCloudRgb
Note that each of these render elements is blended or merged together in an "additive" way, and that the end result matches the Terragen beauty pass.
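Outside a node-based package, the same additive merge can be expressed directly on linear pixel data. Below is a minimal sketch using NumPy, assuming each element has already been loaded as a linear float RGB array (the EXR loading itself, and the constant pixel values used here, are stand-ins, not part of the article):

```python
import numpy as np

# Stand-ins for render elements loaded as linear float RGB arrays.
# In practice these would come from the element files Terragen writes.
h, w = 4, 4
tg_surf_rgb = np.full((h, w, 3), 0.2, dtype=np.float32)
tg_atmo_rgb = np.full((h, w, 3), 0.1, dtype=np.float32)
tg_cloud_rgb = np.full((h, w, 3), 0.05, dtype=np.float32)

# Additive merge: tgRgb = tgSurfRgb + tgAtmoRgb + tgCloudRgb
tg_rgb = tg_surf_rgb + tg_atmo_rgb + tg_cloud_rgb
```

Because the merge is a plain per-pixel sum in linear space, the order of the additions does not matter.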
Sometimes you'll want to adjust or fine-tune your image, so each of the three "Rgb" render elements above can be recreated by combining other render elements. The newly combined elements then replace the "Rgb" render element in the comp.
There are two methods for recreating the "tgSurfRgb" element. The first gives you control over direct and indirect lighting. The second gives you control over the diffuse and specular as well.
For control over direct and indirect lighting on a surface, use:
tgSurfRgb = tgSurfDirect + tgSurfIndirect
For control over the diffuse and specular lighting on a surface, use:
tgSurfRgb = tgSurfDirectDiff + tgSurfDirectSpec + tgSurfIndirectDiff + tgSurfIndirectSpec
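The finer-grained recombination lends itself to per-component "dial-in" controls. A hypothetical sketch of this idea (the gain parameters and pixel values are illustrative assumptions, not from the article):

```python
import numpy as np

# Stand-ins for the four surface lighting elements, as linear float RGB.
h, w = 2, 2
direct_diff = np.full((h, w, 3), 0.10, dtype=np.float32)
direct_spec = np.full((h, w, 3), 0.02, dtype=np.float32)
indirect_diff = np.full((h, w, 3), 0.05, dtype=np.float32)
indirect_spec = np.full((h, w, 3), 0.01, dtype=np.float32)

def recombine_surface(dd, ds, idf, ids, diff_gain=1.0, spec_gain=1.0):
    """Additively rebuild tgSurfRgb, with gains to dial diffuse and
    specular contributions up or down independently."""
    return diff_gain * (dd + idf) + spec_gain * (ds + ids)

# With unit gains this matches the plain sum of the four elements,
# and therefore matches tgSurfRgb.
surf_rgb = recombine_surface(direct_diff, direct_spec,
                             indirect_diff, indirect_spec)
```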
As you can see, using either of these methods results in matching the Terragen beauty pass, and adds the ability to "dial in" the amount of surface lighting you want.
Likewise the atmosphere "tgAtmoRgb" render element can also be recreated by combining the tgAtmoDirect and tgAtmoIndirect render elements, which will give you control over the atmosphere's direct and indirect lighting.
tgAtmoRgb = tgAtmoDirect + tgAtmoIndirect
And finally, the clouds "tgCloudRgb" render element can be recreated in a similar fashion, by combining the "tgCloudDirect" and "tgCloudIndirect" render elements.
tgCloudRgb = tgCloudDirect + tgCloudIndirect
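Putting the three expansions together, the full beauty pass can be rebuilt from the direct and indirect elements alone. A sketch with NumPy, assuming all six elements are linear float arrays of the same size (the constant values are placeholders):

```python
import numpy as np

# Placeholder pixel values for the six direct/indirect elements.
h, w = 2, 2
elements = {name: np.full((h, w, 3), v, dtype=np.float32) for name, v in [
    ("tgSurfDirect", 0.12), ("tgSurfIndirect", 0.08),
    ("tgAtmoDirect", 0.06), ("tgAtmoIndirect", 0.04),
    ("tgCloudDirect", 0.03), ("tgCloudIndirect", 0.02),
]}

# tgRgb = (tgSurfDirect + tgSurfIndirect)
#       + (tgAtmoDirect + tgAtmoIndirect)
#       + (tgCloudDirect + tgCloudIndirect)
tg_rgb = sum(elements.values())
```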
Results of additive workflow
In the image below we can see the compositing node layout and the results of the “additive” workflow.
The “tgAlpha” channel can also be reconstructed from its component render elements: tgAlpha = tgSurfAlpha + tgAtmoAlpha + tgCloudAlpha
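The alpha reconstruction is the same additive sum, just on single-channel data. A sketch, assuming each alpha element is a linear single-channel float array (the coverage values here are placeholders):

```python
import numpy as np

# Placeholder single-channel alpha elements.
h, w = 2, 2
surf_alpha = np.full((h, w), 0.6, dtype=np.float32)
atmo_alpha = np.full((h, w), 0.3, dtype=np.float32)
cloud_alpha = np.full((h, w), 0.1, dtype=np.float32)

# tgAlpha = tgSurfAlpha + tgAtmoAlpha + tgCloudAlpha
tg_alpha = surf_alpha + atmo_alpha + cloud_alpha
```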
Terragen can also save render elements for data that is often used by compositors in order to create masks within the compositing software. Please be aware that each compositing package will have its own way of extracting the data.
There are two depth render layers available, one for "Surface Depth" and the other for "Cloud Depth". This type of data is often referred to as a depth map or Z-depth. The data records the distance between the camera and an object or terrain, or a cloud.
In this example, I've added a layer of smog over the Terragen beauty pass render element by passing the data stored in the tgSurfDepth render element to the compositing software's fog filter.
In this next example, the tgSurfDepth render element and the tgCloudDepth render element have been combined and passed along to the compositing software's fog filter to create a layer of smog in the foreground.
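As a rough illustration of what a depth-driven fog filter does, a smog colour can be mixed over the beauty pass by an amount that increases with distance. This is a hypothetical sketch, not the filter used in the examples above; the exponential falloff and the density constant are assumptions:

```python
import numpy as np

# Stand-ins: a beauty pass and a depth element in metres.
h, w = 2, 2
beauty = np.full((h, w, 3), 0.3, dtype=np.float32)    # tgRgb stand-in
depth = np.full((h, w), 100.0, dtype=np.float32)      # tgSurfDepth stand-in
smog_colour = np.array([0.7, 0.65, 0.6], dtype=np.float32)

# Exponential fog: points near the camera keep more of the beauty pass,
# distant points are pushed toward the smog colour.
density = 0.005  # arbitrary per-metre density, an assumption
fog_amount = 1.0 - np.exp(-density * depth)           # 0 at camera, -> 1 far away
fogged = (beauty * (1.0 - fog_amount[..., None])
          + smog_colour * fog_amount[..., None])
```

Combining tgSurfDepth and tgCloudDepth first (for example, taking the per-pixel minimum) would fog clouds and terrain together, as in the second example above.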
There are two motion vector render elements available, one for "Surface Motion" and the other for "Cloud Motion".
The "tgSurf2dMotion" render element contains data that describes the motion of objects within the image or frame. The X vector and Y vector values are stored in the red and green channels of the image. In CGI it's typically faster to render without realistic 3d motion blur, but by saving the 2d vector data an approximation of the motion blur can be made within the compositing software.
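A 2d vector blur is normally applied by the compositing package, but the underlying idea can be sketched: read each pixel's X/Y velocity from the red and green channels and average samples along that vector. This is a crude, unoptimised illustration; it assumes the channels store raw pixel offsets, whereas real packages often expect a scale and bias:

```python
import numpy as np

def vector_blur(image, motion_rg, samples=8):
    """Crude 2d motion blur: average `samples` taps along each pixel's
    (x, y) motion vector, read from the red/green channels of motion_rg."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            vx, vy = motion_rg[y, x, 0], motion_rg[y, x, 1]
            acc = np.zeros(image.shape[2], dtype=image.dtype)
            for i in range(samples):
                t = i / max(samples - 1, 1)
                sx = int(round(x + vx * t)) % w   # wrap at edges for simplicity
                sy = int(round(y + vy * t)) % h
                acc += image[sy, sx]
            out[y, x] = acc / samples
    return out

# Demo: a uniform image with every pixel moving 2 px in x.
img = np.full((4, 4, 3), 0.5, dtype=np.float32)
mv = np.zeros((4, 4, 2), dtype=np.float32)
mv[..., 0] = 2.0
blurred = vector_blur(img, mv)
```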
The "tgCloud2dMotion" render element contains data that describes the motion of the cloud layers within the image or frame. In the example below note how by using this render element we can isolate the cloud layers and apply 2d vector blur to only the cloud layers.
The “tgSurfPos” and “tgCloudPos” render elements contain data that describes the position of a point on a surface or cloud layer respectively, in world space. World position is measured in meters and in Terragen's coordinate space. Sometimes it may be necessary to reverse the Z channel and/or swap the Y and Z channels if you need to combine the render element with position passes from other software.
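Converting a world-position pass to another application's convention usually comes down to swapping or negating channels. A small sketch, assuming the pass is a float array with X, Y, Z stored in R, G, B; which flags you actually need depends on the target package:

```python
import numpy as np

def convert_position(pos, swap_yz=True, flip_z=False):
    """Remap a world-position pass (X, Y, Z in R, G, B) toward another
    package's convention. The flags are illustrative; check the target
    application's documented coordinate system."""
    out = pos.astype(np.float32, copy=True)
    if flip_z:
        out[..., 2] = -out[..., 2]
    if swap_yz:
        out[..., [1, 2]] = out[..., [2, 1]]
    return out

# A single "pixel" whose surface point sits at x=1 m, y=2 m, z=3 m.
p = np.array([[[1.0, 2.0, 3.0]]], dtype=np.float32)
q = convert_position(p)  # Y and Z swapped
```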