
**News (August 31)**: We are working on Scratchapixel 3.0 at the moment (the current version is 2). The idea is to make the project open source by storing the content of the website on GitHub as Markdown files. In practice, that means you and the rest of the community will be able to edit the content of the pages if you want to contribute (fixing typos and bugs, rewording sentences). You will also be able to contribute by translating pages into different languages if you wish. Then, when we publish the site, we will convert the Markdown files to HTML. That also means a new design.

That's what we are busy with right now and why there won't be a lot of updates in the weeks to come. More news about SaP 3.0 soon.

We are looking for native Engxish (yes, we know there's a typo here) speakers willing to proofread a few lessons. If you are interested, please get in touch on Discord, in the #scratchapixel3-0 channel. We are also looking for at least one experienced full-stack developer willing to give us a hand with the next design.

Feel free to send us your requests, suggestions, etc. (on Discord) to help us improve the website.

And you can also donate. Donations go directly back into the development of the project. The more donations we get, the more content you will get, and the quicker we will be able to deliver it to you.

Let's see how we can now reproduce the image we rendered in the lesson on rasterization. We know how to load a geometry file from disk and how to render this object using ray-tracing. Let's put these two techniques together in our program. In theory, if all goes well (mathematics never lies), we should get two perfectly identical images. And if what we learned and said about ray-tracing is true, ray-tracing should take more time than rasterization. Let's validate these assumptions.

First, we will integrate the function that reads the geometry file, described in the lesson on polygon meshes, into our program's code. All the data read from the file (the number of faces, the face and vertex index arrays, and the point, normal and st coordinate arrays) is passed to the TriangleMesh constructor (line 5).

The function readGeometryFile() returns a pointer to a newly allocated instance of the TriangleMesh class. This instance holds the geometry data of the object we just read (at this point, the geometry has been triangulated if it contained faces with more than 3 vertices). The instance is added to the object list, which is then passed on to the render() function (line 3).

The rest of the program is pretty conventional for a ray-tracer. We loop over each pixel in the image, generate a primary ray (whose origin and direction are transformed by the camera-to-world 4x4 transformation matrix), and test whether this primary ray intersects any of the objects in the scene. In castRay(), we call the trace() function, which is where we loop over all the objects contained in the object list.

The intersect() method of the triangle mesh is finally called. To test if the ray intersects the mesh, we call the ray-triangle intersection test function for each triangle in the mesh. As explained in the previous chapter, as we do so, we also keep track of the closest intersection distance, in case the ray intersects more than one triangle (we also need to keep track of the intersected triangle's index as well as the hit point's barycentric coordinates; this information will be required for shading).

Finally, if the ray intersects the mesh, we perform shading in the castRay() function (lines 47-56). We first call the getGeometryData() method of the intersected mesh to compute the normal and texture coordinates at the intersection point. We then use this information to create the checkerboard pattern and compute the facing ratio, which, mixed together, form the final pixel color (line 56).

A small note about shading. Don't worry too much for now if you don't understand the code that relates to shading; it will be explained in the introduction to shading. Note, though, that the code is quite similar to the one we used in the lesson on rasterization. First, we use the barycentric coordinates of the hit point to interpolate the object's st (texture) coordinates. The interpolated texture coordinates of the hit point are then used to compute a checkerboard pattern. As for normals, we have two options. The face normal, which we already talked about in the lesson on rasterization, can be computed by taking the cross product of two of the triangle's edges. Alternatively, we can use the hit point's barycentric coordinates to interpolate the vertex normals (read from the geometry file), the same way we interpolated texture coordinates. As mentioned, this code will be explained soon (check the lesson on the introduction to shading).

## Result

First, let's look at the visual result. As you can see, the image produced by the ray-tracer is perfectly identical to the image produced by the rasterizer (besides the background color, of course). As mentioned at the beginning of this lesson, mathematics never lies. If you do the right thing, the two rendering techniques should produce exactly the same result. We have at least proven that this is the case, which is also a way of validating that our programs do the right thing. Hooray!

Now let's compare rendering times. Ray-tracing: 15.22 seconds. Rasterization: 0.00628 seconds (we ran the two programs on the same machine). As expected, with a naive implementation, ray-tracing is about 2400 times slower. This is a very large difference. Fortunately, as mentioned in the previous chapter, this can be improved to some extent using acceleration structures. We won't study acceleration structures in this section, but you can find lessons on this topic in the Advanced Ray-Tracing section. Despite it being slow, we will keep using ray-tracing in the next lessons to learn about shading. As explained before, ray-tracing is a much better technique when it comes to shading. Shading has a lot to do with computing the visibility between points in space, and ray-tracing is well suited for that: it offers a unified framework for solving both the visibility problem and shading.

## Conclusion

Congratulations! At this point of your learning journey, you know the difference between ray-tracing and rasterization, you know how to implement both algorithms, and you can produce an image of a 3D model viewed from a given viewpoint that is consistent regardless of the technique used to render it. You have also been able to judge for yourself the difference in performance between the two techniques. In the next lesson, we will learn about transforming objects. We will then be ready to start our first lesson on shading.

## Exercises

Compute the bounding box of the object and implement the ray-box intersection test to accelerate the render.