Rendering Methods

Introduction

Terragen 2 uses two different rendering methods. One method is micropolygon rendering and the other is raytracing. It's most accurate to say that TG is a hybrid renderer, because the different rendering techniques are normally used together to render a scene. This page gives background information on the two methods and how they're used in TG.

Micropolygon Rendering

Micropolygon rendering is a technique that takes a surface and breaks it up into little polygons, or micropolygons, that are smaller than the pixels in the rendered image. These micropolygons are then displaced and shaded. You can read more about displacement here. Shading is basically the process of calculating the colour for a micropolygon, taking factors like surface colour and lighting into account.
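To make the idea concrete, here is a minimal Python sketch of the dice-and-shade loop. This is an illustration only, not TG's actual implementation; the function names and the simple Lambertian shading are invented for the example.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def normalize(v):
    l = math.sqrt(dot(v, v)) or 1.0
    return tuple(x / l for x in v)

def dice_and_shade(surface, dice_rate, light_dir):
    """Break a parametric surface into a grid of micropolygons and
    shade each one. 'surface' maps (u, v) in [0,1]^2 to a 3D point;
    'dice_rate' is the number of grid cells per side."""
    shaded = []
    for i in range(dice_rate):
        for j in range(dice_rate):
            u0, v0 = i / dice_rate, j / dice_rate
            u1, v1 = (i + 1) / dice_rate, (j + 1) / dice_rate
            p00 = surface(u0, v0)
            p10 = surface(u1, v0)
            p01 = surface(u0, v1)
            # Geometric normal from the micropolygon's edge vectors
            # (order chosen so it points up for the patch below).
            n = normalize(cross(sub(p01, p00), sub(p10, p00)))
            # Very simple Lambertian shading: brightness follows the
            # cosine between the normal and the light direction.
            intensity = max(0.0, dot(n, light_dir))
            shaded.append((p00, intensity))
    return shaded

# A gently curved patch lit from directly above.
patch = lambda u, v: (u, 0.2 * math.sin(math.pi*u) * math.sin(math.pi*v), v)
grid = dice_and_shade(patch, dice_rate=8, light_dir=(0.0, 1.0, 0.0))
print(len(grid), grid[0])
```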

Micropolygon rendering is particularly suited to rendering procedural data. Procedural data is something which is calculated on an as-needed basis using a mathematical formula. The prime example of something procedural is a fractal. If you've ever used a fractal viewing program you'll know you can zoom in on the fractal almost infinitely. This is because the fractal is generated mathematically and each time you zoom a new view of the fractal is calculated. The amount of detail is more or less unlimited. You can find out more about procedural data here.
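Here is a small Python illustration of the idea: a fractal "terrain" height that exists only as a function and is evaluated wherever, and however finely, you ask. The hash and noise functions are invented for the example and are not TG's actual noise.

```python
import math

def hash01(ix, iy):
    # Deterministic pseudo-random value in [0,1) for an integer grid point.
    n = ix * 374761393 + iy * 668265263
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 2**32

def value_noise(x, y):
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # Smoothstep interpolation between the four surrounding lattice values.
    sx, sy = fx*fx*(3 - 2*fx), fy*fy*(3 - 2*fy)
    a = hash01(ix, iy)
    b = hash01(ix + 1, iy)
    c = hash01(ix, iy + 1)
    d = hash01(ix + 1, iy + 1)
    return (a*(1 - sx) + b*sx)*(1 - sy) + (c*(1 - sx) + d*sx)*sy

def fbm(x, y, octaves=6):
    """Fractal (fractional Brownian motion) height: noise summed at
    doubling frequencies and halving amplitudes. More octaves give
    more fine detail -- and it's only computed when asked for."""
    total, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp *= 0.5
        freq *= 2.0
    return total

# The 'terrain' exists only as this function; zooming in just means
# evaluating it at more closely spaced points.
print(fbm(12.3, 45.6), fbm(12.30001, 45.6))
```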

Rendering procedural data is at the core of what TG does. Procedural data allows TG to render complex surfaces with very high levels of detail without having to store masses of data. Micropolygon rendering is ideal for rendering procedural data because it helps to limit how much needs to be rendered. If you have procedural data with effectively infinite levels of detail you need to strike a balance between rendering too much detail, which would be slow, and not enough detail, which would look bad. The ideal level of detail is a polygon somewhat smaller than a pixel in the finished image. This also means that an appropriate level of detail is used depending on the distance from the camera. Areas further from the camera can be rendered with less detail. A pixel for an area close to the camera might cover a few millimetres in world space whereas a pixel for an area far from the camera might cover tens, hundreds or even thousands of metres. Micropolygon rendering is an effective technique for breaking surfaces up into polygons that give an appropriate level of detail.
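The balance described above can be made concrete with a little arithmetic. This Python sketch estimates how much world space one pixel covers at a given distance, assuming a simple pinhole camera, and derives a target micropolygon size from it. The 'detail' parameter here is a stand-in for a renderer detail setting, not an actual TG parameter.

```python
import math

def pixel_footprint(distance, fov_deg=60.0, image_width=1920):
    """Approximate width in metres that one pixel covers at a given
    distance from the camera (pinhole model, horizontal FOV)."""
    view_width = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return view_width / image_width

def target_polygon_size(distance, detail=1.0):
    # Aim for micropolygons somewhat smaller than one pixel;
    # higher 'detail' means smaller polygons.
    return pixel_footprint(distance) / max(detail, 1e-6)

for d in (1.0, 100.0, 10_000.0):
    print(f"{d:>8} m away: ~{target_polygon_size(d):.4g} m per micropolygon")
```

Running this shows why distant areas can be rendered with far less geometric detail: the acceptable polygon size grows linearly with distance from the camera.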

Another big advantage of a micropolygon renderer is that it's able to work with displacement efficiently. Handling massive amounts of displacement is one of TG's strengths. When a surface has been broken up into micropolygons, each of those micropolygons can be moved in 3D space in any direction. Moving the micropolygons like this is called displacement. You can read more about displacement here.
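As a rough illustration, here is what displacing micropolygon vertices looks like in Python. The data layout and the height function are invented for the example.

```python
import math

def displace(vertices, normals, height_fn, amplitude=1.0):
    """Move each micropolygon vertex along its normal by an amount
    sampled from a height function (e.g. a fractal)."""
    out = []
    for (x, y, z), (nx, ny, nz) in zip(vertices, normals):
        h = amplitude * height_fn(x, z)  # displacement amount at this point
        out.append((x + nx * h, y + ny * h, z + nz * h))
    return out

# A flat patch displaced upward by a simple ripple pattern.
verts = [(x * 0.1, 0.0, z * 0.1) for x in range(4) for z in range(4)]
norms = [(0.0, 1.0, 0.0)] * len(verts)
bumpy = displace(verts, norms,
                 lambda x, z: math.sin(10 * x) * math.cos(10 * z), 0.05)
print(bumpy[0], bumpy[5])
```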

Micropolygon rendering is used, by default, to render the terrain, water and sky.

Raytracing

Raytracing is the other rendering technique used by TG. You may already be familiar with what raytracing does as it's a common technique used in other renderers. It works by projecting lines, or rays, into the scene. When one of these rays hits an element in the scene the renderer calculates the shading of the scene element. Rays might also collect shading information on their way through the scene, when passing through clouds for example.

There are two main types of rays. The first is primary rays. Primary rays start from the camera and are projected out into the scene through the pixels of the image. Imagine you were holding a section of screen door mesh in front of your face. A primary ray would start from your eye and then travel out into the world through one of the holes in the mesh (the equivalent of a pixel in the rendered image). The ray would finish where it hit some object out in front of you. Higher quality results are given by using multiple rays for each pixel in the rendered image.
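Here is a Python sketch of generating primary rays for one pixel, with several jittered rays per pixel for antialiasing. The pinhole camera model and the parameter names are assumptions for the example, not TG's internals.

```python
import math, random

def primary_rays(px, py, image_w, image_h, fov_deg=60.0, samples=4):
    """Generate camera rays through pixel (px, py). Several jittered
    rays per pixel give smoother (antialiased) results."""
    aspect = image_w / image_h
    scale = math.tan(math.radians(fov_deg) / 2.0)
    rays = []
    for _ in range(samples):
        # Jittered sample position inside the pixel.
        sx = (px + random.random()) / image_w
        sy = (py + random.random()) / image_h
        # Map to camera space: x right, y up, camera looking down -z,
        # with pixel row 0 at the top of the image.
        x = (2.0 * sx - 1.0) * aspect * scale
        y = (1.0 - 2.0 * sy) * scale
        d = (x, y, -1.0)
        l = math.sqrt(sum(c * c for c in d))
        rays.append(((0.0, 0.0, 0.0), tuple(c / l for c in d)))
    return rays

print(primary_rays(960, 540, 1920, 1080)[0])
```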

The other type of ray is a secondary ray. Secondary rays begin where a primary ray hits an element in the scene. A good example of a secondary ray is a reflection ray. Let's say a primary ray hits a reflective object in the scene. We need to find both what that point on the reflective surface is actually reflecting and what colour it should be reflecting. This is found by sending a secondary ray out into the scene to see what it hits. Secondary rays are also used for things like calculating lighting and shadows.
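A tiny self-contained Python example of a secondary ray: a primary ray hits a mirror sphere, and a reflection ray is fired from the hit point to find out what the surface is reflecting. The one-sphere "scene" and its colours are obviously invented for the example.

```python
import math

def reflect(d, n):
    # Mirror direction d about unit normal n: r = d - 2(d.n)n
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return tuple(a - k * b for a, b in zip(d, n))

def hit_sphere(o, d, centre, radius):
    # Ray-sphere intersection; returns distance along the ray or None.
    oc = tuple(a - b for a, b in zip(o, centre))
    b = sum(a * c for a, c in zip(oc, d))
    c = sum(a * a for a in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def shade(o, d, depth=0):
    """Trace a ray; on hitting the mirror sphere, fire a secondary
    (reflection) ray from the hit point. Recursion is capped so
    mirrors facing mirrors can't loop forever."""
    t = hit_sphere(o, d, (0, 0, -3), 1.0)
    if t is None or depth >= 3:
        # 'Sky': colour varies with ray height so reflections differ.
        return (0.5, 0.7, 1.0) if d[1] > 0 else (0.2, 0.3, 0.2)
    p = tuple(a + t * b for a, b in zip(o, d))
    n = tuple(a - c for a, c in zip(p, (0, 0, -3)))
    nl = math.sqrt(sum(a * a for a in n))
    n = tuple(a / nl for a in n)
    return shade(p, reflect(d, n), depth + 1)  # the secondary ray

print(shade((0, 0, 0), (0, 0, -1)))
```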

You might come across node parameters with names like "Enable secondary" or "Visible to other rays". These parameters typically affect how a node interacts with secondary rays. Let's say you had an object node and you unchecked the Visible to other rays checkbox. This would mean that even though the object was still visible to the camera, it would not be hit by secondary rays. One consequence of this is that the object wouldn't show up in reflections.
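A hedged sketch of how such a flag might work conceptually: when the renderer gathers candidate objects for a ray, secondary rays skip anything with the flag turned off. The class and field names here are invented for illustration; TG's internals are not public.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    visible_to_camera: bool = True
    visible_to_other_rays: bool = True   # plays the role of the checkbox

def candidates(objects, is_primary_ray):
    """Objects a ray is allowed to hit. Primary rays come from the
    camera; secondary rays (reflections, lighting, shadows) respect
    the 'visible to other rays' flag instead."""
    if is_primary_ray:
        return [o for o in objects if o.visible_to_camera]
    return [o for o in objects if o.visible_to_other_rays]

scene = [SceneObject("terrain"),
         SceneObject("hidden_prop", visible_to_other_rays=False)]
print([o.name for o in candidates(scene, is_primary_ray=False)])
# -> ['terrain']: the prop is still seen by the camera but won't
#    appear in reflections.
```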

It's not really practical to use raytracing by itself to render a scene in TG. You can do so by turning on the Ray trace everything (not recommended) parameter in the Extra tab of the Render node. However, as the parameter name suggests, this is not recommended. A big reason for this is that the raytracer doesn't yet support displacement, or at least not efficiently. If you try it you will find that terrain renders blockily, even at high levels of detail. There are other settings you can tweak to improve the results, such as the Ray detail multiplier in the Render Subdiv Settings node, but they can greatly increase render time.

Rendering

By default TG renders in a hybrid fashion, using both micropolygon rendering and raytracing. Micropolygon rendering is used to render the terrain, water and sky. Raytracing is used mainly for secondary rays, handling reflections, lighting and shadows.

An important point is that raytracing is also used as the default method for rendering objects such as imported models. This is controlled by the Ray trace objects parameter in the Render node. The reason that raytracing is used for objects is that they're effectively static data and can be rendered efficiently using the raytracer. The raytracer can give higher visual quality for a given render quality setting than can be achieved with the micropolygon renderer. To sum up, raytracing makes objects look better and render more quickly.
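One plausible way to picture how the two methods share a frame (this is an illustration only, not necessarily how TG composites internally) is that each pass produces a colour and a depth for a pixel, and the surface nearer the camera wins:

```python
def micropolygon_pass(px, py):
    # Stand-in for the micropolygon renderer: returns (depth, colour)
    # for the terrain/water/sky under this pixel.
    return (1000.0, "sky-or-terrain colour")

def raytrace_objects(px, py):
    # Stand-in for the object raytracer; None if no object under pixel.
    return (12.0, "object colour") if (px, py) == (5, 5) else None

def render_pixel(px, py, ray_trace_objects=True):
    """Merge the two passes by depth: whichever surface is nearer
    the camera determines the pixel's colour."""
    depth, colour = micropolygon_pass(px, py)
    if ray_trace_objects:
        hit = raytrace_objects(px, py)
        if hit is not None and hit[0] < depth:
            depth, colour = hit
    return colour

print(render_pixel(5, 5), render_pixel(0, 0))
```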

The one disadvantage of using raytracing for objects is that the raytracer doesn't support displacement. It is able to convert displacement data into bump mapping data, but bump mapping doesn't give the same visual quality as displacement. Think about a bark texture on a tree. Using displacement, the tree bark can have a real 3D shape and the silhouette of the trunk would show lumps and bumps. With bump mapping, the shape of the bark is faked using lighting effects. The underlying surface remains smooth but gives an impression of being a real 3D shape. However, if you looked at the silhouette of the trunk you would see the flatter shape of the underlying geometry.
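The difference can be sketched in a 2D cross-section: displacement actually moves the surface points (so the silhouette changes), while bump mapping leaves the points flat and only tilts the shading normal using the slope of the same height function. The height function here is an invented bark-like ripple.

```python
import math

def height(x):  # the same bark-like detail pattern for both methods
    return 0.05 * math.sin(40.0 * x)

def displaced_point(x):
    """Displacement: the geometry itself moves, so the silhouette
    of the surface changes too."""
    return (x, height(x))

def bump_normal(x, eps=1e-4):
    """Bump mapping: the geometry stays flat at y=0; only the normal
    is tilted by the height function's slope, faking the lighting."""
    slope = (height(x + eps) - height(x - eps)) / (2 * eps)
    l = math.sqrt(1.0 + slope * slope)
    return (-slope / l, 1.0 / l)   # perturbed unit normal

x = 0.1
print("displaced surface point:", displaced_point(x))
print("bump-mapped surface point: (%.1f, 0.0), normal %s"
      % (x, str(bump_normal(x))))
```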

If your scene requires displacement on models to look good you will want to turn off Ray trace objects. This will make objects render using the micropolygon renderer. You might also want to increase the detail and antialiasing settings for the render node.

You can also turn on raytracing for atmosphere rendering. This is done with the Ray trace atmosph. parameter in the Render node. This is off by default. Raytracing the atmosphere can give better quality results at lower detail settings than the micropolygon renderer, but it's not such a clear benefit as it is for rendering objects. It may take some experimentation to get the best results.

When using raytracing for objects and/or atmosphere the quality and speed is highly dependent on antialiasing and sampling settings, much more so than with micropolygon rendering. You can find more specific information about that here.

More Information

You can find some more advanced information on raytracing in TG here:

http://www.planetside.co.uk/forums/index.php?topic=8300.0

There is a lot of information available on both micropolygon and raytrace rendering, both in books and on the internet. If you want to find out more a good place to start is Wikipedia. Here's some information on micropolygons and the Reyes rendering algorithm:

http://en.wikipedia.org/wiki/Micropolygon
http://en.wikipedia.org/wiki/Reyes_rendering

Reyes rendering is a well known micropolygon rendering architecture made famous by Pixar's PRMan (aka RenderMan) renderer. TG doesn't use the Reyes architecture specifically, but many of the concepts are the same.

Here's some information about raytracing:

http://en.wikipedia.org/wiki/Ray_tracing_(graphics)


Glossary

Displacement: Literally, to change the position of something. In graphics terminology, to displace a surface is to modify its geometric (3D) structure using reference data of some kind. For example, a grayscale image might be taken as input, with black areas indicating no displacement of the surface and white indicating maximum displacement. In Terragen 2, displacement is used to create all terrain by taking heightfield or procedural data as input and using it to displace the normally flat sphere of the planet.

Pixel: A single element of an image which describes values for colour and/or intensity, depending on the colour system the image uses. Groups of ordered pixels together form a raster image.

Parameter: An individual setting in a node parameter view which controls some aspect of the node.

Node: A single object or device in the node network which generates or modifies data, and may accept input data, create output data, or both, depending on its function. Nodes usually have their own settings which control the data they create or how they modify data passing through them. Nodes are connected together in a network to perform work in a network-based user interface. In Terragen 2, nodes are connected together to describe a scene.