Building a More Advanced Rendering Framework
Keywords: acceleration structure, ray-tracing, modern 3D rendering frameworks, rendering pipeline, hardware-accelerated ray-tracing, motion blur, quad-ray intersection test, triangle-ray intersection test, subdivision surfaces, hair geometry, Embree rendering framework.


This series of lessons is currently being developed/written (Q3/Q4 2022). We are not yet sure where we are going with this new series, so expect things to change quite radically from time to time until it eventually settles down.

What is this new series of lessons about?

Why did we write this series of lessons? The basic section only introduces the very foundations of rendering, and while every modern rendering solution out there is built upon these foundations, they are also significantly more complex: more complex in terms of code, but also in terms of features and performance.

Our objective with this series of lessons is to understand the techniques that are used by modern rendering solutions. Our goals are the following:

Note that when we speak about hardware-accelerated ray-tracing, not all the calculations are done on the silicon. Only some operations are hardwired, such as the ray intersection test (which is tightly linked to the acceleration structure, as we will discover in this series). A lot of the work (particularly data management) is done by the GPU's driver, which is pure code. A bug in one of the original Intel Arc GPU drivers caused ray-tracing to be 100x slower than it should have been (the scene data was stored in the computer's main memory rather than the GPU's local memory). Fortunately, this was a driver issue, not a problem with the hardware itself, and could thus easily be fixed.
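To give a flavor of what such a hardwired intersection test can look like, here is a minimal C++ sketch (not actual Embree or GPU code; all names are illustrative) of the classic ray/box "slab" test that gets evaluated at every node while traversing an acceleration structure such as a BVH:

```cpp
#include <algorithm>

// Illustrative sketch only -- not Embree or GPU driver code.
struct Vec3 {
    float x, y, z;
    float operator[](int i) const { return i == 0 ? x : (i == 1 ? y : z); }
};

struct Ray {
    Vec3 orig;    // ray origin
    Vec3 invDir;  // 1 / direction, precomputed once per ray
    float tMin, tMax;
};

struct AABB { Vec3 lo, hi; };  // bounding box of an acceleration-structure node

// Classic "slab" test: clip the ray against the three pairs of axis-aligned
// planes and check that the resulting parametric intervals still overlap.
bool intersectAABB(const Ray& ray, const AABB& box)
{
    float t0 = ray.tMin, t1 = ray.tMax;
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (box.lo[axis] - ray.orig[axis]) * ray.invDir[axis];
        float tFar  = (box.hi[axis] - ray.orig[axis]) * ray.invDir[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;  // the slabs no longer overlap: miss
    }
    return true;  // the ray passes through the box somewhere in [t0, t1]
}
```

A traversal loop runs a test like this enormous numbers of times per frame, which is precisely why it is one of the operations GPU vendors chose to cast into silicon.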

To clear up any possible confusion: this series of lessons is not about implementing ray-tracing on the GPU or using a 3D API such as DirectX or Vulkan that supports ray-tracing. It is about learning the kind of code that was used to design ray-tracing-enabled GPUs, i.e. what essentially lies under the hood of a 3D graphics API. Sadly, because such APIs hide a lot of the complexity that goes into rendering and ray-tracing, it will become harder for the next generation of computer graphics students to gain access to the low-level magic embedded in GPU hardware and drivers to get ray-tracing working. One of our goals is to make sure that there are places where this work remains documented before it becomes difficult to put the pieces of the puzzle together.

Until recently, GPUs did not do any ray-tracing and rendered 3D graphics solely through a raster pipeline. A lot of the processes that go into rasterization were baked into the GPU's hardware, and it became incredibly hard to find any educational resource describing how rasterization could be implemented on the CPU. That's actually one of the reasons why we wrote a lesson on this topic.
Companies such as Nvidia started hiring some of the most talented graphics research engineers (starting around the late 2000s), many of whom specialized in ray-tracing acceleration structures (we will mention some of them during these lessons), which eventually led to the first commercially built hardware-accelerated ray-tracing GPUs, released around 2017-2018. Intel followed a similar course but first acquired and developed the Embree code base, which the company used as a testbed for designing its own hardware-accelerated ray-tracing GPUs (the Arc series, announced in 2021).

We plan to write an article on this topic, so if you have been part of (or know someone who was/is part of) a team that worked on the development of hardware-accelerated ray-tracing solutions at either of these companies, we would love to hear from you or your friend.

Our goal is not to study and write code that competes with industry-grade solutions. Our goal is to leave you with a general understanding of how they work. Furthermore, some of these solutions use code that is open source. Surely you can study this code yourself, but it's often rather overwhelming to look at. These lessons will break it down and present it to you in a more digestible form.

Organization of the lessons: walking in the engineers' footsteps.

The lessons are organized so that the whole series can be read in chronological order if desired. The series is currently divided into two broad sections:

To get to what polished industry-grade solutions look like today, the people working on these projects went through many iterations. From a pedagogical standpoint, we think it makes sense to walk in their footsteps rather than study the most recent techniques right from the get-go. This is why we will start by studying and developing rendering frameworks inspired by what professional rendering solutions looked like originally (about 10 years ago). As we progress through the series, we will update these frameworks with more recent designs.

As an example, in this lesson we will first take inspiration from the design of the first version of Embree, an Intel-supported open-source project. As we progress through the series, we will see how this framework evolved as new versions were released. Because the current framework is much more complicated than the original, the original version is a much better place to begin our journey. Walking in the engineers' footsteps will make the journey longer, but it will also make it (relatively) easier. In other words, we start simple and add layers of complexity as we progress through the lessons. In one lesson we will start with Embree's first version; in two additional lessons, we will present the changes introduced in Embree's versions 2 and 3 respectively. Each version will have its own lesson. As a result, this series will be more granular than the beginner's series.

Sometimes techniques used in early versions of a given solution are different from the techniques used in the most recent version. Going through these evolutions is also a chance to be exposed to different programming methods and approaches; hopefully, we become better programmers in the process (at least more knowledgeable). This is another reason why learning by following the engineers' footsteps is a good approach.

Mind the changes

As we progress through the series and don't yet have a clear idea of where we will finish it, be aware that the series' structure and content are likely to evolve considerably over the next 12 months (starting in September 2022). As an example, we plan to explain how motion blur is supported in an acceleration structure, but we are not quite sure when we will be able to get to that lesson, nor where it will fit in the series yet. The same applies to other "advanced" techniques such as rendering subdivision surfaces, rendering quads, or hair geometry.

How will we reach our goals?

As already mentioned, the code of industry-grade solutions contains hundreds of files and sometimes hundreds of thousands of lines (complex 3D software can exceed a few million lines). Finding the breadcrumb trail to understand what's going on in there and how it works can be hard (not to mention that some projects are better written than others). We've done the work for you by stripping down the code of these solutions to the bare minimum and packaging it into a single file that can easily be compiled from the command line (studying the techniques implemented in these files along the way, of course). This is not to say the final project will not contain a few files, but each core component will first be studied at least once in isolation (which is what sets Scratchapixel apart from other educational material and open-source projects).

As we progress towards the finish line (if it exists), we shall learn about some C++ programming techniques as well as, of course, the (advanced) techniques used to render 3D scenes with ray-tracing.

Who is it for?

If you are new to either C++ programming or computer graphics, start with the beginner's section.

This series of lessons is designed for people who are relatively comfortable with programming in C++ (as it will use more complex coding techniques) and who already have a good understanding of how rendering, and ray-tracing in particular, works.

What will we learn in this lesson?

At the end of this lesson, we will have an overview of the different parts a rendering system is made of (and where they fit in the rendering pipeline). We will study one possible implementation of such a pipeline. Our code won't produce any images of 3D objects yet; we will produce our first image in the next lesson, where we will look again at the ray-triangle intersection method.
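To give a rough idea of the parts in question, here is a hypothetical C++ skeleton of such a pipeline (every name below is a placeholder of our own, not code from the lesson): geometry is declared, an acceleration structure is built ("committed"), and rays are then traced against it for each pixel:

```cpp
#include <vector>

// Hypothetical skeleton -- placeholder names, not the lesson's actual code.
struct Triangle { /* vertex positions would live here */ };

struct Scene {
    std::vector<Triangle> triangles;  // geometry fed to the framework
    bool committed = false;           // acceleration structure built?
};

// Stage 1: build the acceleration structure over the scene geometry.
void commit(Scene& scene)
{
    // A real framework would build a BVH (or similar) over scene.triangles.
    scene.committed = true;
}

// Stage 2: traverse the acceleration structure and intersect the ray.
bool intersect(const Scene& scene, int /*pixelX*/, int /*pixelY*/)
{
    (void)scene;
    return false;  // no ray-triangle test yet; every ray misses for now
}

// Stage 3: loop over the image, tracing one primary ray per pixel.
int render(Scene& scene, int width, int height)
{
    commit(scene);
    int hits = 0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (intersect(scene, x, y)) ++hits;
    return hits;  // stays at 0 until the intersection test is implemented
}
```

The commit/intersect split mirrors the overall shape of frameworks like Embree, where the acceleration structure must be built before any rays can be traced against the scene.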