Rendering Methods
(Removed "Back to the Guide" as I'm moving things around)
|Line 50:||Line 50:|
Latest revision as of 07:28, 10 August 2019
Terragen 4 uses two different rendering methods. One method is micropolygon rendering and the other is raytracing. It's most accurate to say that TG is a hybrid renderer, because the different rendering techniques are normally used together to render a scene. This page gives background information on the two methods and how they're used in TG.
Micropolygon rendering is a technique that takes a surface and breaks it up into little polygons, or micropolygons, that are smaller than the pixels in the rendered image. These micropolygons are then displaced and shaded. You can read more about displacement here. Shading is basically the process of calculating the colour for a micropolygon, taking factors like surface colour and lighting into account.
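As a rough illustration of the idea (this is not Terragen's actual code; the function names, grid scheme and toy shading are all hypothetical), dicing a surface patch into micropolygons and shading them might be sketched like this:

```python
import math

def dice_patch(patch_size_pixels, max_micropoly_size=1.0):
    """Choose a grid resolution so each micropolygon covers less than
    max_micropoly_size pixels on screen, then return the (u, v)
    parameter-space corners of every micropolygon in the grid."""
    n = math.ceil(patch_size_pixels / max_micropoly_size)
    return [(i / n, j / n) for i in range(n) for j in range(n)]

def shade(u, v, base_colour=(0.4, 0.6, 0.3), light=0.8):
    """Toy shading: scale the surface colour by a lighting factor."""
    return tuple(c * light for c in base_colour)

# A patch that spans 8 pixels on screen is diced into an 8x8 grid,
# so every micropolygon is no larger than one pixel:
micropolys = dice_patch(patch_size_pixels=8)
colours = [shade(u, v) for (u, v) in micropolys]
```

A real renderer would dice adaptively and shade with full lighting, but the shape of the process is the same: dice first, then shade each micropolygon.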
Micropolygon rendering is particularly suited to rendering procedural data. Procedural data is something which is calculated on an as-needed basis using a mathematical formula. The prime example of something procedural is a fractal. If you've ever used a fractal viewing program you'll know you can zoom in on the fractal almost infinitely. This is because the fractal is generated mathematically and each time you zoom a new view of the fractal is calculated. The amount of detail is more or less unlimited. You can find out more about procedural data here.
Rendering procedural data is at the core of what TG does. Procedural data allows TG to render complex surfaces with very high levels of detail without having to store masses of data. Micropolygon rendering is ideal for rendering procedural data because it helps to limit how much needs to be rendered. If you have procedural data with effectively infinite levels of detail you need to strike a balance between rendering too much detail, which would be slow, and not enough detail, which would look bad. The ideal level of detail is a polygon somewhat smaller than a pixel in the finished image. This also means that an appropriate level of detail is used depending on the distance from the camera. Areas further from the camera can be rendered with less detail. A pixel for an area close to the camera might cover a few millimetres in world space whereas a pixel for an area far from the camera might cover tens, hundreds or even thousands of metres. Micropolygon rendering is an effective technique for breaking surfaces up into polygons that give an appropriate level of detail.
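The distance-dependent part of this can be sketched with some simple camera geometry (hypothetical names and numbers, not Terragen's implementation): estimate how much world space one pixel covers at a given distance, then target a micropolygon size just under that footprint.

```python
import math

def pixel_footprint(distance_m, fov_deg=60.0, image_width_px=1920):
    """World-space width covered by one pixel at the given distance,
    for a pinhole camera with the given horizontal field of view."""
    view_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return view_width / image_width_px

def micropoly_size(distance_m, detail=0.8):
    """Target micropolygon edge length: a fraction (detail) of one
    pixel's world-space footprint, i.e. somewhat smaller than a pixel."""
    return detail * pixel_footprint(distance_m)

# Ground near the camera is diced finely, distant ground coarsely:
near = micropoly_size(10.0)       # a few millimetres per micropolygon
far = micropoly_size(10_000.0)    # metres per micropolygon
```

Because the footprint grows linearly with distance, terrain ten kilometres away can be diced a thousand times more coarsely than terrain ten metres away while looking equally detailed on screen.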
Another big advantage of a micropolygon renderer is that it's able to work with displacement efficiently. Massive amounts of displacement are one of TG's strengths. When a surface has been broken up into micropolygons, each of those micropolygons can be moved in 3D space in any direction. Moving the micropolygons like this is called displacement. You can read more about displacement here.
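In the simplest case this just means adding a height to each vertex along its normal. A minimal sketch, with a sine-based stand-in for real procedural terrain data (all names here are illustrative):

```python
import math

def displace(vertex, normal, height):
    """Move a vertex along its normal by the given height."""
    return tuple(v + n * height for v, n in zip(vertex, normal))

def height_fn(x, z):
    """Stand-in for procedural terrain data."""
    return 2.0 * math.sin(x * 0.5) * math.cos(z * 0.5)

# Displace a flat 4x4 grid of micropolygon vertices upwards:
up = (0.0, 1.0, 0.0)
flat = [(float(x), 0.0, float(z)) for x in range(4) for z in range(4)]
terrain = [displace(v, up, height_fn(v[0], v[2])) for v in flat]
```

Because the displacement direction is an arbitrary 3D vector rather than just "up", this same operation supports overhangs and sideways displacement as well.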
Micropolygon rendering is used, by default, to render the terrain and bodies of water.
Raytracing is the other rendering technique used by TG. You may already be familiar with what raytracing does as it's a common technique used in other renderers. It works by projecting lines, or rays, into the scene. When one of these rays hits an element in the scene the renderer calculates the shading of the scene element. Rays might also collect shading information on their way through the scene, when passing through clouds for example.
There are two main types of rays. The first is primary rays. Primary rays start from the camera and are projected out into the scene through the pixels of the image. Imagine you were holding a section of screen door mesh in front of your face. A primary ray would start from your eye and then travel out into the world through one of the holes in the mesh (the equivalent of a pixel in the rendered image). The ray would finish where it hit some object out in front of you. Higher quality results are given by using multiple rays for each pixel in the rendered image.
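Generating primary rays can be sketched in a few lines (a generic pinhole-camera construction, not Terragen's API): each pixel centre is mapped onto an image plane one unit in front of the camera, and the normalised direction from the camera through that point is the primary ray.

```python
import math

def primary_ray(px, py, width, height, fov_deg=90.0):
    """Direction of the primary ray through pixel (px, py), for a
    camera at the origin looking down -z."""
    aspect = width / height
    half = math.tan(math.radians(fov_deg) / 2.0)
    # Map the pixel centre to [-1, 1] image-plane coordinates
    x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect
    y = (1.0 - 2.0 * (py + 0.5) / height) * half
    # Normalise so the direction has unit length
    length = math.sqrt(x * x + y * y + 1.0)
    return (x / length, y / length, -1.0 / length)

# One ray per pixel of a tiny 4x4 image:
rays = [primary_ray(px, py, 4, 4) for py in range(4) for px in range(4)]
```

Firing several jittered rays per pixel instead of one through the centre is what gives the higher-quality antialiased result mentioned above.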
The other type of ray is a secondary ray. Secondary rays begin where a primary ray hits an element in the scene. A good example of a secondary ray is a reflection ray. Let's say a primary ray hits a reflective object in the scene. We need to find what that point on the reflective surface is actually reflecting, and therefore what colour it should be. This is found by sending a secondary ray out into the scene to see what it hits. Secondary rays are also used for things like calculating lighting and shadows.
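The direction of a reflection ray follows from the standard mirror-reflection formula: for an incoming direction d and a unit surface normal n, the reflected direction is d - 2(d·n)n. A small sketch (illustrative only):

```python
def reflect(d, n):
    """Reflect direction d about unit normal n: d - 2*(d.n)*n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

# A primary ray heading down at 45 degrees hits flat ground
# (normal pointing straight up) and bounces back up at 45 degrees:
incoming = (0.7071, -0.7071, 0.0)
bounced = reflect(incoming, (0.0, 1.0, 0.0))
```

The secondary ray then starts at the hit point, travels along `bounced`, and whatever it hits determines what the reflective surface shows at that point.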
You might come across node parameters with names like "Enable secondary" or "Visible to other rays". These parameters typically affect how a node interacts with secondary rays. Let's say you had an object node and you unchecked the Visible to other rays checkbox. This would mean that even though the object was still visible to the camera it would not be hit by secondary rays. One consequence of this is that the object wouldn't show up in reflections.
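Conceptually, this kind of flag just filters which objects a ray is allowed to hit, depending on whether the ray is primary or secondary. A hypothetical sketch in that spirit (the class and field names are made up for illustration, not Terragen's internals):

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    visible_to_camera: bool = True
    visible_to_other_rays: bool = True

def hit_candidates(objects, is_secondary):
    """Objects this ray is allowed to intersect: secondary rays skip
    anything with the visible-to-other-rays flag unchecked."""
    if is_secondary:
        return [o for o in objects if o.visible_to_other_rays]
    return [o for o in objects if o.visible_to_camera]

scene = [SceneObject("rock"),
         SceneObject("proxy", visible_to_other_rays=False)]
# "proxy" is seen by the camera but never shows up in reflections
# or shadows, because secondary rays ignore it.
```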
Using raytracing by itself to render a scene in TG is not really practical. You can do it by turning on the Ray trace everything (not recommended) parameter in the Extra tab of the Render node. However, as the parameter name suggests, this is not recommended. A big reason for this is that the raytracer doesn't support displacement as efficiently as the micropolygon renderer. If you try it you will find that terrain renders blockily, even at high levels of detail. There are other settings you can tweak to improve the results, such as the Ray detail multiplier in the Render Subdiv Settings node, but they can greatly increase render time.
By default Terragen renders in a hybrid fashion, using both micropolygon rendering and raytracing. Micropolygon rendering is used to render the terrain, water and the "background" object. For these kinds of surfaces, raytracing is mainly used for secondary rays, for reflection, lighting and shadows.
An important point is that raytracing is also used as the default method for rendering objects such as imported models. This is controlled by the Ray trace objects parameter in the Render node. The reason that raytracing is used for objects is that they're effectively static data and can be rendered efficiently using the raytracer. The raytracer can give higher visual quality for a given render quality setting than can be achieved with the micropolygon renderer. To sum up, raytracing makes objects look better and render more quickly.
The one disadvantage to using raytracing for objects is that the raytracer doesn't support displacement. It is able to convert displacement data into bump mapping data, but bump mapping doesn't give the same visual quality as displacement. Think about a bark texture on a tree. Using displacement the tree bark can have a real 3D shape and the silhouette of the trunk would show lumps and bumps. With bump mapping the shape of the bark is faked using lighting effects. The underlying surface remains smooth but gives an impression of being a real 3D shape. However if you looked at the silhouette of the trunk you would see the flatter shape of the underlying geometry.
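The difference can be made concrete with a toy 2D profile (an illustrative sketch, not how TG implements either technique): both approaches read the same height data, but displacement moves the vertex itself, while bump mapping leaves the vertex flat and only tilts the normal used for lighting.

```python
import math

def height(x):
    """Stand-in for bark-like height detail."""
    return 0.1 * math.sin(10.0 * x)

def displaced_vertex(x):
    """Displacement: the geometry really moves, so the silhouette
    shows the lumps and bumps."""
    return (x, height(x))

def bumped_vertex(x):
    """Bump mapping: the geometry stays flat..."""
    return (x, 0.0)

def bumped_normal(x, eps=1e-4):
    """...but the shading normal is tilted by the height slope, faking
    the bumps with lighting only."""
    slope = (height(x + eps) - height(x - eps)) / (2.0 * eps)
    nx, ny = -slope, 1.0
    length = math.hypot(nx, ny)
    return (nx / length, ny / length)
```

Under lighting the two can look similar face-on, but only `displaced_vertex` changes the silhouette, which is exactly the trunk-outline difference described above.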
If your scene has a model that requires displacement to look good you will want to change the object's Render method to Force displacement. If your version of Terragen doesn't have this option you can turn off Ray trace objects on the render node, but this affects all objects in the scene. Either of these changes will make objects render using the micropolygon renderer. You might also want to increase the "micropoly detail" setting for the render node.
When using raytracing for objects and/or "Defer atmo/cloud" the quality and speed is highly dependent on antialiasing and sampling settings, much more so than with micropolygon rendering. You can find more specific information about that here.
You can find some more advanced information on raytracing in Terragen here:
There is a lot of information available on both micropolygon and raytrace rendering, both in books and on the internet. If you want to find out more a good place to start is Wikipedia. Here's some information on micropolygons and the Reyes rendering algorithm:
Reyes rendering is a well known micropolygon rendering architecture made famous by Pixar's PRMan (aka RenderMan) renderer. TG doesn't use the Reyes architecture specifically, but many of the concepts are the same.
Here's some information about raytracing:
Displace: Literally, to change the position of something. In graphics terminology to displace a surface is to modify its geometric (3D) structure using reference data of some kind. For example, a grayscale image might be taken as input, with black areas indicating no displacement of the surface, and white indicating maximum displacement. In Terragen 2 displacement is used to create all terrain by taking heightfield or procedural data as input and using it to displace the normally flat sphere of the planet.
Pixel: A single element of an image which describes values for colour and/or intensity, depending on the colour system which the image uses. Groups of ordered pixels together form a raster image.
Parameter: An individual setting in a node parameter view which controls some aspect of the node.
Node: A single object or device in the node network which generates or modifies data. It may accept input data, create output data, or both, depending on its function. Nodes usually have their own settings which control the data they create or how they modify data passing through them. Nodes are connected together in a network to perform work in a network-based user interface. In Terragen 2 nodes are connected together to describe a scene.