## Summary

We are not going to repeat what we already explained in the previous chapters. Here is a list of the terms and concepts you should remember from this lesson:

- Computers deal with discrete structures, which is an issue, as the shapes we want to represent in images are continuous.
- The triangle is a good choice of rendering primitive regardless of the method you use to solve the visibility problem (ray tracing or rasterisation).
- Rasterisation is faster than ray tracing at solving the visibility problem (and is the method used by GPUs), but it is easier to simulate global illumination effects with ray tracing. Plus, ray tracing can be used to solve both the visibility problem and shading. If you use rasterisation, you need another algorithm or method to compute global illumination (which is not impossible, just extra work).
- Ray tracing has its own issues and challenges, though. The ray-geometry intersection test is expensive, and the render time increases linearly with the amount of geometry in the scene. Acceleration structures can be used to cut the render time down, but a good acceleration structure (one that works well for all possible scene configurations) is hard to design. Ray tracing also introduces noise in the image, a visual artefact that is hard to get rid of, etc.
- If you decide to use ray tracing to compute shading and simulate global illumination effects, then you will need to simulate the different paths light rays take to get from light sources to the eye. Each path depends on the type of surface the ray interacts with on its way to the eye: is the surface diffuse, specular, transparent, etc.? There are different ways you can simulate these light paths. Simulating them accurately is important, as they make it possible to reproduce lighting effects such as diffuse and specular inter-reflections, caustics, soft shadows, translucency, etc. A good light transport algorithm is one that simulates all possible light paths efficiently.
- While it's possible to simulate the transport of light rays from surface to surface, it's impossible to simulate the interaction of light with matter at the micro- and atomic scale. However, the result of these interactions is predictable and consistent, so we can attempt to approximate them with mathematical functions. A shader implements some mathematical model to approximate the way a given surface reflects light. The way a surface reflects light is really the visual signature of that object; this is how and why we are capable of visually identifying what an object is made of: skin, wood, metal, fabric, plastic, etc. Being able to simulate the appearance of any given material is therefore of critical importance in the process of generating photo-realistic computer-generated images. Again, this is the job of shaders.
- There is a fine line between shaders and light transport algorithms. The way in which secondary rays are spawned from a surface to compute indirect lighting effects (such as indirect specular and diffuse reflections) depends on the object's material type: is the object diffuse, specular, etc.? We will learn in the section on light transport how shaders are used to generate these secondary rays.
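
To make the connection between a surface's material type and the secondary rays it spawns more concrete, here is a minimal Python sketch. It is our own illustration, not code from this series: the function names and the fixed "diffuse" sample directions are invented for the example.

```python
def reflect(d, n):
    """Mirror-reflect direction d about the unit normal n (both 3-tuples)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def secondary_rays(material, incoming, normal):
    """Toy dispatch: the material type decides which secondary rays to spawn."""
    if material == "specular":
        # a mirror-like surface spawns a single reflection ray
        return [reflect(incoming, normal)]
    if material == "diffuse":
        # a diffuse surface would spawn rays spread over the hemisphere;
        # we use two fixed directions here just to keep the sketch short
        return [(0.0, 1.0, 0.0), (0.5, 0.7071, 0.5)]
    # e.g. a purely emissive surface spawns no secondary rays
    return []
```

A real renderer would sample many random directions over the hemisphere for the diffuse case and weight each ray's contribution by the material's reflectance model; this is exactly where shaders and light transport meet.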

We use the term **real-time rendering** when a scene can be rendered at 24 to 120 frames per second (24 to 30 fps is the minimum required to give the illusion of movement; a video game typically runs at around 60 fps). Anything below 24 fps but above 1 frame per second falls into the category of **interactive rendering**. When a frame takes from a few seconds to a few minutes or hours to render, we are in the realm of **off-line rendering**. Note that it is quite possible to achieve interactive or even real-time frame rates on the CPU; how much time it takes to render a frame depends essentially on the complexity of the scene anyway, and a very complex scene can take more than a few seconds to render even on the GPU. Our point here is that you should not associate the GPU with real-time rendering and the CPU with off-line rendering: these are different things. In the lessons of this section, we will learn how to use OpenGL to render images on the GPU, and we will implement the rasterisation and ray tracing algorithms on the CPU. We will also write a lesson dedicated to the pros and cons of rendering on the GPU versus the CPU.
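
These thresholds are easy to express in code. The following Python function is a toy classifier based on the numbers above (the function name is our own):

```python
def rendering_category(fps):
    """Classify a frame rate using the thresholds discussed above.

    24 fps and above counts as real-time, anything between 1 and
    24 fps as interactive, and 1 fps or less as off-line.
    """
    if fps >= 24:
        return "real-time"
    if fps > 1:
        return "interactive"
    return "off-line"
```

For example, a game running at 60 fps is real-time, while a frame that takes a minute to render (1/60 fps) is off-line.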

We also briefly mentioned how rendering and **signal processing** relate to each other. This is a very important aspect of rendering; however, to really understand this relationship you need solid foundations in signal processing, which potentially also requires an understanding of Fourier analysis. We are planning to write a series of lessons on these topics once the basic section is complete. We think it is better to leave this aspect of rendering aside until you have a good understanding of the theory behind it, rather than present it without really being able to explain why and how it works.

Now that we have reviewed these concepts, you know what to expect from the different sections devoted to rendering, especially the sections on light transport, ray tracing and shading. In the section on light transport, we will of course speak about the different ways global illumination effects can be simulated. In the section devoted to ray tracing techniques, we will study techniques specific to ray tracing, such as acceleration structures and ray differentials (don't worry if you don't know what these are for now). In the section on shading, we will learn what shaders are and study the most popular mathematical models developed to simulate the appearance of various materials.

We will also talk about purely engineering topics such as multi-threading, multi-processing, or simply the different ways the hardware can be used to accelerate rendering.

Finally, and most importantly, if you are new to rendering, we recommend that you read the next lessons in this section before you start reading any lessons from these advanced sections. You will learn about the most basic and important techniques used in rendering:

- How do the perspective and orthographic projections work? We will learn how to project points onto the surface of a "virtual canvas" using the perspective projection matrix in order to create images of 3D objects.
- How does ray tracing work? How do we generate rays from the camera to generate an image?
- How do we compute the intersection of a ray with a triangle?
- How do we render shapes more complex than a simple triangle?
- How do we render other basic shapes, such as spheres, disks and planes?
- How do we simulate things such as the motion blur of objects, or optical effects such as depth of field?
- We will also learn more about the rasterisation algorithm and how to implement the famous REYES algorithm.
- We will also learn about shaders and Monte Carlo ray tracing, and finally about texturing. Texturing is a technique used to add surface detail to an object. A texture can be an image, but it can also be generated procedurally.
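
As a small preview of the first of these lessons, the perspective divide at the heart of the projection can be sketched in a few lines of Python. This is our own illustration (the function name and the `canvas_distance` parameter are invented), assuming the common convention of a camera looking down the negative z-axis:

```python
def project(point, canvas_distance=1.0):
    """Project a camera-space 3D point onto the canvas plane.

    Assumes the camera looks down the negative z-axis, so visible
    points have z < 0; the projected coordinates are obtained by
    dividing x and y by the point's distance along the view axis.
    """
    x, y, z = point
    return (canvas_distance * x / -z, canvas_distance * y / -z)
```

A point twice as far from the camera projects to coordinates half as large, which is what produces perspective foreshortening.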

Ready to move on to the next lesson?