Building a More Advanced Rendering Framework

This series of lessons is currently being developed/written (Q3/Q4 2022). We are not yet sure exactly where we are going with this new series, so expect things to change quite radically from time to time until it eventually settles down.

In this lesson, we will look at the different parts that make up a rendering system, what they do, and where they fit in the big picture. In this particular chapter, we will provide a global overview of the rendering pipeline. In chapter 4, we will look more specifically at the data pipeline. In the last chapter, we will wrap everything we have learned in this lesson into a functional program. This program won't produce any image yet, but it will already do a lot of the work for us, so that when we get to the next lesson (where we look at the ray-triangle intersection test again), we will have to write relatively little code to produce our very first image.

Preamble

As mentioned in this lesson's introduction (and we will surely repeat this), we chose to base this series of lessons on accelerating ray-tracing on the source code of an open-source project called Embree. Embree is a project developed by Intel. Its main function is to support the research of Intel's team, notably with respect to acceleration structures. Why did we choose this project? Because it's open-source, well organized, well written, and well structured. Also because Embree is designed for studying acceleration structures and how ray-geometry intersection tests can be accelerated through the use of multi-threading and vectorization; the very topics this section is devoted to. Embree's ray-geometry library is used by quite a few commercial applications, as it is considered one of the most robust, advanced, and efficient solutions for ray queries of different types inside 3D scenes.

PS: we are not affiliated with the Embree project or Intel in any way.

We also mentioned in the previous chapter that Embree, which was initially released in the early 2010s, has evolved significantly over the years. The latest version is significantly more complex than the first one. We will therefore base this lesson on Embree's original version. As we progress through the series, we will study the techniques that were developed in later versions. At this point in our learning process, starting right from the latest version would make things too complicated given the level we are currently at. While this is an advanced section, we still aim for a gradual progression.

Note that variable or class/struct names may change as we write the lessons. As a rule of thumb, we will try to reuse the names used in the Embree project; however, when we think a name can be more descriptive than the one they chose for a particular function, structure, class, or variable, we will use our own terminology. In addition, names also evolve from version to version within the Embree project itself. We will adjust the names as we step through the different versions of the library so that we stay as close as possible to the original project's nomenclature. Please remember that this is not documentation on how to use Embree. We intend to write our own code, but we will generally follow the same structure and use the same techniques, stripped of the nonessential parts (easier to compile, easier to look at and study, not so overwhelming). This lesson will not teach you how to use Embree's code, but since we use the same structure and follow the same principles, if you decide to look at the project's code at some point in the future, you should feel somewhat at home if you have read this series of lessons before.

With this preamble out of the way, let's now look into this very first version of the framework.

A first version of a possible rendering framework

The figure below shows what the bare skeleton of a rendering application looks like. Please take a moment to go through it. We will first give a general explanation before detailing each section one by one.

This image provides an overview of one possible rendering framework. When we say "one possible design," it is to acknowledge that there are many ways a system that produces images of 3D scenes can be structured. Consider the buildings in a city such as New York. They generally all perform the same function: they are all tall towers with elevators, office spaces, windows, stairs, air-conditioning systems, and so on. And yet no two buildings are the same. Rendering systems are similar to buildings. They all render 3D images, load geometry, convert the geometry to triangles or quads, use acceleration structures of some sort, and have a scene description containing a list of lights, cameras, objects, materials, etc., and yet each program is different. The principles are the same. The form changes.

At a glance, the first observation we can make when looking at this figure is that it doesn't look too complicated. Maybe that's because we simplified the process for you, but still, the process looks rather simple (it's when you get into the details that it gets quite complicated). This is good, because before diving into any of these core components, having a clear picture of the overall system is essential.

STEP 1 & 2: in the first two steps, you create some global objects that are going to be used across the application. We will come back to what these objects are later. This is also more or less where you define the settings for your render process, such as the number of samples per pixel, the size of the image, and the number of threads you will render the image with (multi-threading will be studied in one of the next lessons), etc.
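To make this a bit more concrete, here is a minimal sketch (in C++) of what these global objects and render settings could look like. The structure and member names (RenderSettings, samplesPerPixel, numThreads, etc.) are our own illustrative choices for this lesson, not names taken from Embree.

```cpp
// A minimal sketch of the kind of global objects created in steps 1 and 2.
// All names here are our own illustrative choices, not Embree's.
#include <cstdint>
#include <string>
#include <thread>

struct RenderSettings
{
    uint32_t width = 640;            // image width in pixels
    uint32_t height = 480;           // image height in pixels
    uint32_t samplesPerPixel = 32;   // number of camera rays per pixel
    // number of threads used for rendering (hardware_concurrency may return 0)
    uint32_t numThreads = std::thread::hardware_concurrency();
    std::string outputFilename = "output.ppm";
};

int main()
{
    RenderSettings settings;       // steps 1 & 2: create the global objects
    settings.samplesPerPixel = 64; // override the defaults as needed
    // ... scene creation (step 3) and rendering (step 4) would follow
    return 0;
}
```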

Then the next two steps are about:

- defining the content of the scene we wish to render,
- the render process itself.

STEP 3: for the scene content generation part, the main focus is on adding 3D geometry to the scene. We can either load the content of a file from disk or generate some geometry procedurally. More on that topic later. Note that this is also where we create materials and bind them to the geometry. Finally, once we are done describing the scene content (which includes objects, lights, and cameras as well as materials, as just mentioned), we are ready to render the frame. We call this part the data pipeline (a term that is entirely our own) because, as you will see in chapter 4, which will be entirely devoted to this part, this is where we manipulate data extensively (allocate memory, move memory around, do a lot of bookkeeping, process data, etc.). The process by which objects (geometry, which we will call shapes for now) are created can be quite convoluted. However, it will make more use of your knowledge of C++ programming than of mathematics.
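Below is a small, hypothetical sketch of what step 3 could look like in code: a shape (a triangle mesh) is created, a material is bound to it, and the shape is added to the scene. The types (Scene, TriangleMesh, Material) are placeholders of our own design; a real system would load the geometry from a file or generate it procedurally rather than hard-coding a single triangle as we do here.

```cpp
// A sketch of step 3: building the scene content (our own placeholder types).
#include <cstdint>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };

// A deliberately simplified material: just a constant color.
struct Material { Vec3 albedo; };

// A "shape" made of triangles, with a material bound to it.
struct TriangleMesh
{
    std::vector<Vec3> vertices;
    std::vector<uint32_t> indices;       // 3 indices per triangle
    std::shared_ptr<Material> material;  // the material bound to this shape
};

// The scene is, for now, little more than a list of shapes.
struct Scene
{
    std::vector<std::shared_ptr<TriangleMesh>> shapes;
};

int main()
{
    Scene scene;

    // In a real application the geometry would be loaded from disk or
    // generated procedurally; here we hard-code a single triangle.
    auto mesh = std::make_shared<TriangleMesh>();
    mesh->vertices = { {0.f, 0.f, 0.f}, {1.f, 0.f, 0.f}, {0.f, 1.f, 0.f} };
    mesh->indices  = { 0, 1, 2 };

    // Create a material and bind it to the shape.
    mesh->material = std::make_shared<Material>(Material{ {0.18f, 0.18f, 0.18f} });

    // Add the shape to the scene: it is now part of the data pipeline.
    scene.shapes.push_back(mesh);
    return 0;
}
```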

STEP 4: this step can be broadly divided into two sub-steps. First, we need to prepare the scene for rendering, and the most important process in this stage is building the acceleration structure. Acceleration structures are at the heart of ray-tracing (without them, ray-tracing is unbearably slow). The application's performance depends a great deal on the acceleration structure. Building one is time-consuming (for a nontrivial scene); this is one of the reasons why acceleration structures are challenging (besides the fact that designing a good one is extremely difficult in itself, as we will learn in the next lessons). We also need to create the camera through which the scene will be rendered.
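Here is a rough sketch of the kind of objects involved in this preparation sub-step: an acceleration structure built over the whole scene, and a camera. Again, the names (Bvh, Camera, build) are placeholders of our own design, not Embree's; how an acceleration structure is actually built is the subject of the next lessons.

```cpp
// A sketch of the "prepare the scene for rendering" sub-step (step 4a).
struct Vec3 { float x, y, z; };
struct Scene;  // the scene type sketched in the previous step

// The camera through which the scene will be rendered.
struct Camera
{
    Vec3 position{0.f, 0.f, 5.f};
    Vec3 lookAt{0.f, 0.f, 0.f};
    float fieldOfViewDeg = 45.f;
};

// The acceleration structure, built once over the whole scene before any
// ray is cast. For a nontrivial scene this is one of the most expensive
// steps of the pipeline.
struct Bvh
{
    void build(const Scene& scene)
    {
        // partition the primitives, create the nodes, compute their bounds, ...
    }
};
```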

Finally, we are ready to render the frame. This is where light transport algorithms come into play. Light transport algorithms define how we simulate light-matter interactions (and which of these interactions we simulate). While it's possible to use multi-threading and vectorization in the early stages of the rendering pipeline (notably when building the acceleration structure), rendering the frame is where parallelism and vectorization have the most impact. Finally, when the frame is completed, we store it in a file (or display it on the screen).
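To close the loop, here is a minimal, self-contained sketch of this last sub-step: several threads render the image (each thread shades a subset of the rows, with the per-pixel work reduced to writing a constant color where a real renderer would trace rays), and the finished frame is then stored in a PPM file. This is only meant to show where parallelism and image output fit in the pipeline, not how our actual rendering code will be written.

```cpp
// A sketch of step 4b: rendering the frame in parallel and writing it to disk.
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <thread>
#include <vector>

int main()
{
    const uint32_t width = 640, height = 480;
    std::vector<uint8_t> framebuffer(width * height * 3, 0);

    // Split the image into interleaved rows, one set per thread (a very
    // simple parallelization scheme; better strategies are studied later).
    const uint32_t numThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> threads;
    for (uint32_t t = 0; t < numThreads; ++t) {
        threads.emplace_back([=, &framebuffer]() {
            for (uint32_t y = t; y < height; y += numThreads) {
                for (uint32_t x = 0; x < width; ++x) {
                    // This is where rays would be generated and traced for
                    // pixel (x, y); we simply write a constant gray instead.
                    uint8_t* pixel = &framebuffer[(y * width + x) * 3];
                    pixel[0] = pixel[1] = pixel[2] = 46;
                }
            }
        });
    }
    for (auto& thread : threads) thread.join();

    // Store the finished frame in a binary PPM file.
    std::ofstream ofs("output.ppm", std::ios::binary);
    ofs << "P6\n" << width << " " << height << "\n255\n";
    ofs.write(reinterpret_cast<const char*>(framebuffer.data()),
              static_cast<std::streamsize>(framebuffer.size()));
    return 0;
}
```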

Functional programming

xx WIP xx