This series of lessons is currently being developed/written (Q3/Q4 2022). We are not yet sure where we are going with this new series, so expect things to change quite radically from time to time until it eventually settles down.
What is this new series of lessons about?
Why did we write this series of lessons? The basic section only introduces the very foundations of rendering, and while every modern rendering solution out there is built upon these foundations, they are also significantly more complex. More complex in terms of code, but also in terms of features and performance.
Our objective with this series of lessons is to understand the techniques that are used by modern rendering solutions. Our goals are the following:
- Get the big picture: build a more complete system to understand how the different parts that typically make up a 3D rendering solution fit and interact with each other.
- Get some insights into the techniques used by GPUs for ray-tracing: this is an extremely important topic. As hardware-accelerated ray-tracing becomes ever more common, it's important to understand that GPU manufacturers such as Intel, AMD, or Nvidia spent a significant (in bold here, really significant) amount of time and engineering resources designing the most efficient system on the CPU prior to building ray-tracing-enabled GPUs. Why? Because before you set things in stone, you had better be sure that the system you will encode on the chip is robust and bug-free, but also the most efficient system you can design (tweaks in the way an acceleration structure is built can impact performance significantly). And to do so, simulating things on the CPU to test various methods and configurations remains the best approach. Why is this important? Because the methods we are about to learn are the very foundations upon which these hardware-accelerated ray-tracing solutions are built. Therefore, understanding them will give you an insight into what the silicon on your GPU does when you use it to render 3D scenes with ray-tracing.
To clear any possible confusion: this series of lessons is not about implementing ray-tracing on the GPU or using a 3D API such as DirectX or Vulkan that supports ray-tracing. It is about learning the kind of code that was used to design ray-tracing-enabled GPUs, i.e., what essentially lies under the hood of a 3D graphics API. Sadly, because such APIs hide a lot of the complexity that goes into rendering and ray-tracing, it will become harder for the next generation of computer graphics students to gain access to the low-level magic that's embedded in the GPU's hardware and drivers to get ray-tracing working. One of our goals is to make sure that there are places where this work remains documented before it becomes difficult to put the pieces of the puzzle together.
Our goal is not to study and write code that competes with industry-grade solutions. Our goal is to leave you with a general understanding of how they work. Furthermore, some of these solutions use code that is open source. Surely you can study this code yourself, but it's often rather overwhelming to look at. These lessons will break it down and present it to you in a more digestible form.
Organization of the lessons: walking in the engineers' footsteps.
The lessons are organized so that the whole series can be read in chronological order if desired. The series is currently divided into two broad sections:
- The first section, called "Accelerating Ray-Tracing", is mostly devoted to studying the general architecture of a rendering system as well as the methods that speed up ray-tracing. This essentially includes acceleration structures and taking advantage of the hardware's parallelism/vectorization capabilities. Eventually, our goal is to cover topics such as motion blur, rendering subdivision surfaces, and quad and hair geometry. This will essentially depend on the time we can dedicate to this project.
- The second section is devoted to light transport algorithms, mainly path-tracing and volume rendering using stochastic sampling methods to start with.
To get to what the current polished/final industry-grade solutions look like today, people working on these projects went through many iterations. From a pedagogical standpoint, we think it makes sense to walk in their footsteps rather than study the most recent techniques right from the get-go. This is why we will start with studying and developing rendering frameworks that are inspired by what professional rendering solutions looked like originally (about 10 years ago). As we progress through the series, we will update these frameworks with the more recent designs.
As an example of this, in this lesson we will first take inspiration from the design of the first version of an open-source project called Embree (an Intel-supported project). But as we progress through the series, we will see how this framework evolved as new versions were released. As the current framework is much more complicated than the original one, starting from the original version is a much better place to begin our journey. Walking in the engineers' footsteps will make the journey longer, but it will also make it (relatively) easier. In other words, we start simple and add layers of complexity as we progress through the lessons. In one lesson, we will start with Embree's first version; in two additional lessons, we will present the changes introduced in Embree versions 2 and 3 respectively. Each version will have its own lesson. As a result, the series will be more granular than the beginner's series.
Sometimes techniques used in early versions of a given solution are different from the techniques used in the most recent version. Going through these evolutions is also a chance to be exposed to different programming methods and approaches; hopefully, we become better programmers in the process (at least more knowledgeable). This is another reason why learning by following the engineers' footsteps is a good approach.
Mind the changes
As we are still progressing through the series and don't have a clear idea of where we will finish it yet, be aware that the series' structure and content are likely to evolve considerably over the next 12 months (starting in September 2022). As an example, we plan to explain how motion blur is supported in an acceleration structure, but we are not quite sure when we will be able to get to that lesson, nor where it will fit in the series yet. The same applies to other "advanced" techniques such as rendering subdivision surfaces, quads, or hair geometry.
How will we reach our goals?
As already mentioned, the code of industry-grade solutions contains hundreds of files and sometimes hundreds of thousands of lines (complex 3D software can exceed a few million lines). Finding the breadcrumb trail to understand what's going on in there, and how it works, can be hard (not to mention that some projects are better written than others). We've done the work for you by stripping down the code of these solutions to the bare minimum and packaging it in a single file that can easily be compiled from the command line (and, of course, studying the techniques implemented in these files along the way). This is not to say the project will not contain a few files in the end, but each core component will first be studied at least once in isolation (which is what sets Scratchapixel apart from other educational material or open-source projects).
As we progress towards the finish line (if it exists), we shall learn about some C++ programming techniques, as well as the (advanced) techniques used to render 3D scenes with ray-tracing, of course.
Who is it for?
If you are new to either C++ programming or computer graphics, start with the beginner's section.
This series of lessons is designed for people who are relatively comfortable with programming in C++ (as it will use more complex coding techniques) and who already have a good understanding of how rendering, and ray-tracing in particular, works.
What will we learn in this lesson?
At the end of this lesson we will have an overview of the different parts that a rendering system is made of (and where they fit in the rendering pipeline). We will study one possible implementation of such a pipeline. Our code won't produce any images of 3D objects yet. We will produce our first image in the next lesson where we will look again at the ray-triangle intersection method.