The LINUX.COM Article Archive
Originally Published: Friday, 17 November 2000 | Author: Alex Young
Published to: enhance_articles_multimedia/Video Articles
3D Rendering Software
With the hype around Linux and Lord of the Rings, I expect a lot of people are wondering how Linux can be used to generate the sort of advanced 3D images seen in films today. Linux has a couple of rendering engines available; there are also a number of important concepts in this field that users should know.
First, the RenderMan interface. In an excellent introduction to RenderMan, Joshua Kolden raises the point that RenderMan is not actually a man at all! Joshua goes on to highlight the fact that the RenderMan interface provides a way for advanced three-dimensional modellers to speak with three-dimensional renderers. RenderMan, therefore, is an interface through which information about three-dimensional geometry and image attributes can be passed. The interface is designed for both batch and real-time use, so we can view scenes quickly for an overview, then set up many machines on a network to render them in detail over a longer period of time.
When looking at rendering software, you may stumble across the term "shader." A shader controls the appearance of an object in your scene. The scene is simply the collection of objects, light sources and the viewpoint (a camera). Therefore, if you wish an object to appear as if it were made from plastic, you could use a plastic shader to change the surface of the object. The RenderMan interface defines five types of shaders: surface, displacement, light, volume and image. A freely available shader editor is TKMatMan. You can use this to generate shaders for use with BMRT, for example.
As you may expect, surface shaders define how the surface of an object appears and how light reacts to it. Displacement shaders change the way surfaces pinch or bump; for example, the standard displacement shader is called "bumpy." Light shaders describe the directions, amounts and colours of the light created by a light source. Volume shaders can be used to create effects such as fog, and image shaders describe final transformations made to pixel values. Image shaders may sound relatively trivial, but they can offer valuable control over the way images appear when rendered. The artists who created The Iron Giant, for example, used custom shaders to give the giant a more traditional two-dimensional quality, while retaining the feeling of size and robustness that three-dimensional rendering gave.
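Shaders of all five types are written in the RenderMan Shading Language. To give a flavour of it, here is roughly how the classic "plastic" surface shader appears in the RenderMan specification (parameter defaults may vary slightly between implementations):

```
surface plastic(float Ka = 1, Kd = 0.5, Ks = 0.5, roughness = 0.1;
                color specularcolor = 1)
{
    /* flip the normal so it faces the viewer, then sum an ambient,
       a diffuse and a specular contribution */
    normal Nf = faceforward(normalize(N), I);
    vector V  = -normalize(I);

    Oi = Os;   /* pass the surface opacity through unchanged */
    Ci = Os * (Cs * (Ka * ambient() + Kd * diffuse(Nf))
               + specularcolor * Ks * specular(Nf, V, roughness));
}
```

The renderer compiles such a shader and runs it at every shaded point on the surface, which is what makes the five shader types so flexible.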
When using RenderMan-compliant software, you may come across a RIB file. RIB stands for RenderMan Interface Bytestream, and a RIB file contains a description of the objects, light sources, the viewpoint and other commands that are used to render a scene. This page gives an example of a RIB file that will produce a scene featuring quadric primitives.
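As a taste of the format, a minimal RIB file describing a single quadric (a sphere) might look like the following; the file name, colours and positions here are purely illustrative:

```
##RenderMan RIB
Display "sphere.tiff" "file" "rgb"
Projection "perspective" "fov" 30
Translate 0 0 5
WorldBegin
  LightSource "distantlight" 1 "intensity" 1.0
  Color 1 0 0
  Surface "plastic"
  Sphere 1 -1 1 360    # radius, zmin, zmax, sweep angle
WorldEnd
```

Feed a file like this to any RenderMan-compliant renderer and it should produce a red plastic sphere.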
It is important to remember that "RenderMan" does not mean a rendering program. A rendering program that is RenderMan-compliant simply satisfies the specification of the RenderMan Interface. It is a kind of middleware that is commonly used in the computer graphics industry and plays a major role in creating photo-realistic images and animation. When creating something like Lord of the Rings, for example, artists may actually use Maya running on Windows NT, and then have the images generated by RenderMan-compliant renderers running on a cluster of high-end Linux machines. And I suppose running Linux for time-consuming operations is a good idea, since running around rebooting machines that show nothing but a blue screen would end up becoming rather painful.
In addition to RenderMan-related jargon, there is a whole world of terminology used in rendering, including the actual methods used to render images. These include raytracing, radiosity and scanline rendering. When you have set up your scene and previewed it with fast, simplistic rendering to get a feel for how the scene will appear in the final render, you will undoubtedly want to leave your machine to render photo-realistic scenes. This may take anything from a few minutes to the end of your natural life (consider rendering a heavily texture-mapped scene full of complex objects with reflective surfaces on a Palm Pilot using radiosity). Hence, the method of rendering and the permissible operations and detail affect the quality and rendering time of your images.
Scanline rendering works by drawing an imaginary line from the viewer's eye, through a pixel, and into the 3D scene. Depending on the attributes of the object this line hits, a different colour will be produced. The rendering engine will then go along to the next pixel and repeat the process. The problem with scanline rendering, however, is determining what can and can't be seen. Consider determining the depth of objects in the scene to assess whether they should be drawn. This additional process was ignored by many early scanline renderers, which simply rendered from the "back" of the scene, overwriting each pixel that had already been drawn whenever the new position was closer to the viewer. The purpose, therefore, of a Z-buffer is to check the Z values of each position in the scene, so that the renderer may use the lowest Z values for rendering.
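The Z-buffer idea can be sketched in a few lines of Python. This is an illustrative toy, not a real renderer: `fragments` is a hypothetical pre-computed list of per-pixel surface samples, and for each pixel we simply keep the colour of the sample with the smallest Z value, i.e. the surface closest to the viewer.

```python
def zbuffer_render(width, height, fragments, far=float("inf")):
    """fragments: iterable of (x, y, z, colour) tuples."""
    depth = [[far] * width for _ in range(height)]   # Z-buffer, starts "infinitely far"
    image = [[None] * width for _ in range(height)]
    for x, y, z, colour in fragments:
        if z < depth[y][x]:        # closer than what is already stored?
            depth[y][x] = z
            image[y][x] = colour   # overwrite with the nearer surface
    return image

# Two overlapping fragments at the same pixel: the nearer one (z = 1.0) wins,
# regardless of the order in which they arrive.
img = zbuffer_render(2, 1, [(0, 0, 5.0, "blue"), (0, 0, 1.0, "red")])
```

Notice that, unlike the naive back-to-front approach, the result no longer depends on the order in which surfaces are drawn.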
Common shading techniques are Gouraud and Phong shading. If you want to learn more about these rendering techniques, this site features good descriptions complete with diagrams. Very basically, Gouraud shading creates a smooth gradation of colours across the polygons that make up an object by interpolating the colours at the vertices of each polygon. Each pixel of the polygon is coloured individually rather than the whole polygon being filled with a single colour, making the end result far smoother. Phong shading extends the model and gives good results with a lower polygon count, but requires more computation and is therefore far slower. Hence, with current software and hardware, you may commonly use Gouraud-shaded previews of your scenes and view them in real time.
With a raytrace renderer, however, light is actually traced through the scene, and its behaviour is determined by each polygon's reflectivity and transparency. When a ray hits a polygon, it may of course refract, or bounce off towards other polygons. Rays will continue to bounce and split from polygons until they meet a light source, or a recursion limit is reached. Shadows and reflections are more naturally modelled with a raytracing algorithm than with scanline rendering, but the process is slower due to the complexity and sheer number of calculations required.
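The bounce-until-light-or-limit recursion can be shown with a deliberately tiny toy model (not a real ray tracer): here a ray hits a fixed sequence of perfectly specular surfaces, each bounce attenuating it by that surface's reflectivity, until it either reaches the light or the recursion limit cuts it off. Real ray tracers recurse in exactly this shape over reflected and refracted rays.

```python
def trace(surfaces, light_colour, depth=0, max_depth=4):
    """surfaces: reflectivities of the surfaces the ray hits in turn."""
    if depth >= max_depth or not surfaces:
        # Either the ray escaped to the light, or we gave up recursing.
        return light_colour if not surfaces else 0.0
    reflectivity = surfaces[0]
    # Each bounce scales the light eventually seen by the reflectivity.
    return reflectivity * trace(surfaces[1:], light_colour,
                                depth + 1, max_depth)

# Three mirrors at 80% reflectivity before the light: 0.8^3 of it survives.
seen = trace([0.8, 0.8, 0.8], light_colour=1.0)
```

The `max_depth` cut-off is the "recursion level" mentioned above; without it, rays bouncing between two mirrors would never terminate.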
Radiosity is one of the most accurate rendering techniques available. With raytracing, the characteristics of light are broken down into abstractions of specular, ambient and diffuse light. Radiosity goes further and breaks the scene down into patches. The scene is then rendered according to how the light from each of these surfaces interacts with every other surface.
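The patch-to-patch exchange can be sketched as a simple gathering iteration (the numbers below are illustrative): each patch's radiosity B is its own emission E plus the light it gathers from every other patch, weighted by the form factors F and its reflectivity rho; repeating the gather converges to the equilibrium distribution.

```python
def solve_radiosity(E, rho, F, iterations=50):
    """Iteratively solve B[i] = E[i] + rho[i] * sum_j F[i][j] * B[j]."""
    n = len(E)
    B = list(E)  # start from pure emission
    for _ in range(iterations):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two facing patches: patch 0 is a light source, patch 1 only reflects.
E   = [1.0, 0.0]       # emission
rho = [0.0, 0.5]       # reflectivity
F   = [[0.0, 1.0],     # form factors: each patch sees only the other
       [1.0, 0.0]]
B = solve_radiosity(E, rho, F)
```

Real radiosity solvers spend most of their effort computing the form factors between thousands of patches, which is why the technique is so expensive.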
With all this jargon aside, you'll probably be wanting to get your hands dirty and play with some software. You can find BMRT here. Of course, BMRT can handle RIB files, so you can use it with many free and commercial tools that support RIB. BMRT is capable of stunning results: check out the images in the gallery on its home page. To create images of that quality will clearly take time and patience, but playing with the tools is great fun anyway. Many people do not like BMRT because it is closed source; however, it has been used in commercial animations, and I believe that it is worth learning to use due to its RenderMan compliance and quality of rendering.
You can find POV Ray here. POV Ray is widely used, free, and the source code is available; hence POV Ray has been ported to many platforms. There also exist extensions for distributed rendering, which I have found very useful (with my small and modest cluster of Debian machines). There is a PVM version, and also an MPI version, although I have only tested the PVM one. There are many front-ends to POV Ray; this site features many useful tools such as a texture editor and a front-end.
POV Ray processes text files to render images. I recommend following a few tutorials on how to write these files by hand; then, when you get an idea of how POV Ray works, you can move on to using front-ends and modellers. Something I especially like about POV Ray is the vibrant community. There are many good tutorials and utilities out there, complete with avid enthusiasts and their galleries.
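To give a flavour of what those hand-written text files look like, here is a small POV Ray scene (the positions, colours and finish values are just illustrative):

```
// A red sphere on a chequered plane, one light, one camera.
camera {
  location <0, 2, -5>
  look_at  <0, 1, 0>
}
light_source { <10, 10, -10> color rgb <1, 1, 1> }
sphere {
  <0, 1, 0>, 1                       // centre, radius
  pigment { color rgb <1, 0, 0> }
  finish  { phong 0.6 }              // specular highlight
}
plane {
  y, 0                               // the plane y = 0
  pigment { checker color rgb <1, 1, 1> color rgb <0.2, 0.2, 0.2> }
}
```

Even a short file like this exercises most of the concepts above: a viewpoint, a light source, objects, and shading attributes attached to each surface.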
For a relatively concise reference of commonly used terminology in rendering, have a look at this link. There's also the comp.graphics.rendering.raytracing FAQ. I implore you to put your Linux machine into overdrive and venture into the world of 3D rendering. Of course, you don't have to render in three dimensions . . .