
## A Simple Perspective Matrix

**A word of warning again**. The matrix we will present in this chapter is different from the projection matrix used in APIs such as OpenGL or Direct3D, though it technically produces the same results. In the lesson 3D Viewing: the Pinhole Camera Model, we learned how to compute the screen coordinates (left, right, top, and bottom) from the camera near clipping plane and angle-of-view (in fact, we learned how to compute these coordinates from the parameters of a physically based camera model). We then used these coordinates to decide whether projected points were visible in the image (they are only visible if their coordinates are contained within the screen coordinates). In the lesson Rasterization: a Practical Implementation, we learned how to remap the projected point coordinates to NDC coordinates (coordinates in the range [-1,1]) using the screen coordinates. In other words, to avoid having to compare the projected point coordinates against the screen coordinates, we first remapped the point coordinates to the range [-1,1] using the screen coordinates. Deciding whether a point is visible is then just a matter of testing whether any of its coordinates is lower than -1 or greater than 1.

In this chapter, we will use a slightly different approach. We will assume from the start that the screen coordinates are -1 and 1 for the left and right coordinates, and -1 and 1 for the bottom and top coordinates (assuming a square screen), since this is the range we want to test the coordinates against. We will then account for the camera field-of-view by scaling the projected point coordinates directly (rather than using the screen coordinates, scaled by the angle-of-view, to remap the point coordinates to NDC space). Both methods have the same effect.

Recall from the lesson on Geometry that the multiplication of a point by a matrix is as follows:

$$ \begin{equation} \begin{bmatrix} x & y & z & w \end{bmatrix} * \begin{bmatrix} m_{00} & m_{01} & m_{02} & m_{03}\\ m_{10} & m_{11} & m_{12} & m_{13}\\ m_{20} & m_{21} & m_{22} & m_{23}\\ m_{30} & m_{31} & m_{32} & m_{33} \end{bmatrix} \end{equation} $$ $$ \begin{array}{l} x' = x * m_{00} + y * m_{10} + z * m_{20} + w * m_{30}\\ y' = x * m_{01} + y * m_{11} + z * m_{21} + w * m_{31}\\ z' = x * m_{02} + y * m_{12} + z * m_{22} + w * m_{32}\\ w' = x * m_{03} + y * m_{13} + z * m_{23} + w * m_{33} \end{array} $$Also, remember from the previous chapter that point P', i.e. the projection of P onto the image plane, can be computed by dividing the x- and y-coordinates of P by the point's z-coordinate negated (that is, by -z):

$$ \begin{array}{l} x' = \dfrac {x}{-z},\\ y' = \dfrac {y}{-z}.\\ \end{array} $$

How do we compute P' using a point-matrix multiplication?

First, x', y', and z' (the coordinates of P') in the equation above need to be set to x, y, and -z respectively (where x, y, and z are the coordinates of the point P we want to project). Why do we want to set z' to -z instead of just z? Remember that when we transform points from world space to camera space, all points defined in the camera coordinate system and located in front of the camera have a negative z-value. This is because, by default, cameras always point down the negative z-axis (figure 1). Negating the sign of z when assigning it to z' therefore makes z' positive:

$$ \begin{array}{l} x' = x,\\ y' = y\\ z' = -z \:\:\: z' > 0\\ \end{array} $$If somehow within the point-matrix multiplication process, we could manage to divide x', y' and z' by -z, then we would actually end up with:

$$ \begin{array}{l} x' = \dfrac {x}{-z},\\ y' = \dfrac {y}{-z}\\ z' = \dfrac {-z}{-z} = 1\\ \end{array} $$Which, as we know, are the equations to compute the projected point P' coordinates (don't worry too much about z' for now). Thus, again, the question is: is it possible to get the same result with a point-matrix multiplication? If so, what would that matrix look like? Let's consider the problem step by step. First, we said we needed to set the coordinates x', y', and z' to the coordinates x, y, and -z respectively. This is simple. In fact, an identity matrix (with a slight modification) will do the trick:

$$ \begin{equation} \begin{bmatrix} x & y & z & (w=1) \end{bmatrix} * \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 1\\ \end{bmatrix} \end{equation} $$ $$ \begin{array}{l} x' = x * 1 + y * 0 + z * 0 + w * 0 &=&x\\ y' = x * 0 + y * 1 + z * 0 + w * 0 &=&y\\ z' = x * 0 + y * 0 + z * -1 + w * 0 &=&-z\\ w' = x * 0 + y * 0 + z * 0 + (w=1) * 1 &=&1\\ \end{array} $$Note that the point we multiply the matrix by has homogeneous coordinates, or at least is implicitly assumed to be a point with homogeneous coordinates whose fourth coordinate, w, is set to 1. The second step requires dividing x' and y' by -z. Now, recall what we said in the previous chapter about points with homogeneous coordinates. Point P is a point with homogeneous coordinates, and its fourth coordinate, w, is equal to 1. This is what makes it possible to multiply 3D points, which originally are points with Cartesian coordinates, by 4x4 matrices. This doesn't mean, though, that the point-matrix multiplication can't set the value of w' (the fourth coordinate of the transformed point P') to something other than 1 (we know w' is always equal to 1 when affine transformation matrices are used, but this doesn't have to be the case with other types of matrices, such as ... projection matrices, of course). To convert the point with homogeneous coordinates back to a point with Cartesian coordinates, we need to divide x', y', and z' by w', as explained in the previous chapter.

This operation requires dividing x', y', and z' by w', and if somehow w' were equal to -z, we would get exactly what we are looking for: x', y', and z' divided by -z.

**The trick is to use the conversion from homogeneous to Cartesian coordinates in the point-matrix multiplication process to perform the perspective divide (dividing x and y by z to compute the projected point coordinates x' and y'). This requires assigning -z to w'.**

The question now is: can we change our perspective projection matrix (which is just a slightly modified version of the identity matrix at this stage) so that the result of the point-matrix multiplication sets w' to -z? To answer this question, let's look again at the point-matrix multiplication but let's focus for now on the w' coordinate only:

$$ \begin{array}{l} w' = x * m_{03} + y * m_{13} + z * m_{23} + w * m_{33} \end{array} $$We know that point P's w-coordinate is equal to 1. Thus the above equation becomes:

$$ \begin{array}{l} w' = x * m_{03} + y * m_{13} + \color{red}{z * m_{23}} + 1 * m_{33} \end{array} $$The value of this last term is actually not important. What's important is that z is multiplied by the matrix coefficient \(m_{23}\) (in red), and -z is exactly what we want w' to be set to. It is easy to see that if the coefficient \(\color{red}{m_{23}}\) is set to -1, and all the other coefficients involved in computing w' (\(m_{03}\), \(m_{13}\), and \(m_{33}\)) are set to 0, then we get:

$$w' = x * 0 + y * 0 + \color{red}{z * -1} + 1 * 0 = -z.$$This is exactly the result we are looking for. In conclusion, to set w' to -z, the coefficients \(m_{03}\), \(m_{13}\), \(\color{red}{m_{23}}\), and \(m_{33}\) of the perspective projection matrix need to be set to 0, 0, -1, and 0 respectively. If we make these changes to our previous matrix, here is what the perspective projection matrix now looks like:

$$ \left[ \begin{array}{rrrr}x & y & z & 1\end{array} \right] * \left[ \begin{array}{rrrr} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & -1 & \color{red}{-1}\\ 0 & 0 & 0 & 0 \end{array} \right] $$When this matrix is used in a point-matrix multiplication, we get:

$$ \begin{array}{ll} x' = x * 1 + y * 0 + z * 0 + 1 * 0 & = & x\\ y' = x * 0 + y * 1 + z * 0 + 1 * 0 & = & y\\ z' = x * 0 + y * 0 + z * -1 + 1 * 0 & = & -z\\ w' = x * 0 + y * 0 + z * -1 + 1 * 0 & = & -z \end{array} $$We then divide all coordinates by w' to convert the point's homogeneous coordinates back to Cartesian coordinates:

$$ \begin{array}{ll} x' = \dfrac{x'=x}{w'=-z},\\ y' = \dfrac{y'=y}{w'=-z},\\ z' = \dfrac{z'=-z}{w'=-z} = 1. \end{array} $$This is exactly the result we were aiming for. At this point in the chapter, we have a simple perspective projection matrix that can be used to compute P'. However, we still need to account for two things. First, we need to remap z' to the range [0,1]. To do so, we will use the camera near and far clipping planes. Second, we need to take into account the camera angle-of-view. This parameter controls how much of the scene we see (remember that we aim to simulate a pinhole camera model, which is defined by near and far clipping planes as well as a field-of-view).

## Remapping the Z-Coordinate

Another goal of the perspective projection matrix is to normalize the z-coordinate of P, that is, to scale its value to the range [0,1]. To do so, we will use the near and far clipping planes of the camera (you can find more information on clipping planes in the lesson 3D Viewing: the Pinhole Camera Model). To achieve this goal, we will set to specific values the coefficients of the matrix that are involved in computing z': $$z' = x * m_{02} + y * m_{12} + z * \color{green}{m_{22}} + 1 * \color{red}{m_{32}}$$

We will change the third (in green) and fourth (in red) coefficients of the third column to fulfil two conditions: when P lies on the near clipping plane, z' is equal to 0 after the z-divide, and when P lies on the far clipping plane, z' is equal to 1 after the z-divide. This remapping is obtained by setting these coefficients to:

$$-\dfrac{f}{(f-n)},$$and

$$-\dfrac{f*n}{(f-n)}$$respectively, where \(n\) stands for the near clipping plane and \(f\) for the far clipping plane (you can find a derivation of these equations in the next chapter). To convince you that this works, let's look at the result of z' when P lies on the near and far clipping planes (\(m_{02}\) and \(m_{12}\) are equal to 0):

When P lies on the near clipping plane (z = -n, and thus w' = -z = n):

$$ \dfrac{\dfrac{-z*f - f*n}{(f-n)}}{-z} = \dfrac{\dfrac{n*f - f*n}{(f-n)}}{n} = 0 $$

When P lies on the far clipping plane (z = -f, and thus w' = -z = f):

$$ \dfrac{\dfrac{-z*f - f*n}{(f-n)}}{-z} = \dfrac{\dfrac{f*f - f*n}{(f-n)}}{f} = \dfrac{\dfrac{f*(f-n)}{(f-n)}}{f} = \dfrac{f}{f} = 1 $$

When z equals \(-n\) (the near clipping plane), you can see in the first equation that the numerator is equal to 0, therefore the result is 0. In the second equation, we have replaced z with \(-f\) (the far clipping plane). By rearranging the terms, we can see that the \((f-n)\) terms cancel out, and we are left with f divided by itself, which equals 1.

Our modified perspective projection matrix that projects P to P' and remaps the z'-coordinate of P' from 0 to 1 now looks like this:

$$ \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -\dfrac{f}{(f-n)} & -1\\ 0 & 0 & -\dfrac{f*n}{(f-n)}& 0\\ \end{array}\right] $$The choice of the near and far clipping planes matters: the remapping of z to the range [0,1] is non-linear, and the precision of the remapped depth values degrades as the interval between the two planes increases. When precision becomes too low, an artifact known as **z-fighting** can arise in depth-based hidden surface renderers. It is therefore important to make this interval as small as possible in order to minimise the depth buffer precision problem.

## Taking the Field-of-View into Account

All we need to do to get a basic perspective projection matrix working is to account for the **angle of view** or field-of-view (**FOV**) of the camera. We know that by changing the focal length of a zoom lens on a real camera, we can change how much of a scene we see (the extent of the scene). We want our CG camera to work in the same way.

The size of the projection window is [-1:1] in each dimension. In other words, a projected point is visible, if its x- and y-coordinates are within the range [-1:1]. Points whose projected coordinates are not contained in this range are invisible and are not drawn.

Note that in our system, the screen window maximum and minimum values do not change. They are always in the range [-1:1], regardless of the value used for the FOV (we assume that the screen is square). When a point's coordinates are contained within the range [-1,1], we say that they are defined in NDC space.

Remember from chapter 1 that the goal of the perspective projection matrix is to project points onto the screen and remap their coordinates to the range [-1,1] (that is, to NDC space).

The distance to the screen window from the eye position does not change either (it is equal to 1). When the FOV changes, however, we have just shown that the screen window should accordingly become larger or smaller (see figures 2 and 5). How do we reconcile this contradiction? Since we want the screen window to be fixed, what we will change instead are the projected coordinates. We will scale them up or down and test them against the fixed borders of the screen window. Let's work through a few examples.

Imagine a point whose projected x- and y-coordinates are (1.2, 1.3). These coordinates are outside the range [-1:1], and the point is therefore not visible. If we scale them down by multiplying them by 0.7, the new, scaled coordinates of the point become (0.84, 0.91). This point is now visible, since both coordinates are in the range [-1:1]. This corresponds to the physical action of zooming out: zooming out means decreasing the focal length on a zoom lens, or increasing the FOV. For the opposite effect, multiply by a value greater than 1. For example, imagine a point whose projected coordinates are (-0.5, 0.3). If you multiply these numbers by 2.1, the new, scaled coordinates are (-1.05, 0.63). The y-coordinate is still contained within the range [-1:1], but the x-coordinate is now lower than -1 and thus too far to the left. The point, which was originally visible, becomes invisible after scaling. What happened? You zoomed in.

To scale the projected coordinates up or down, we will use the field-of-view of the camera. The field-of-view (or angle-of-view) intuitively controls how much of the scene is visible to the camera. See the lesson 3D Viewing: the Pinhole Camera Model for more information.

Note that the FOV can be defined as either the vertical or the horizontal angle of view. In this lesson, we will define the FOV as the **horizontal angle** (which is also the case in Maya).

The value of the FOV, however, is not used directly; the tangent of the angle is used instead. In the CG literature, the FOV can be defined as either the angle or half of the angle subtended by the viewing cone. We believe it is more intuitive to see the FOV as the angular extent of the visible scene rather than as half of this angle (as represented in figures 3 and 5). To find a value that can be used to scale the projected coordinates, however, we need to divide the FOV angle by two. This explains why the FOV is sometimes expressed as the half-angle. Why do we divide the angle in half? What is of interest to us is the right triangle inscribed in the cone. The angle between the hypotenuse and the adjacent side of the triangle (the FOV half-angle) controls the length of the triangle's opposite side: by increasing or decreasing this angle, we can scale the border of the image window up or down. Since we need a value that is centered around 1, we will take the tangent of this angle to scale our projected coordinates. Note that when the FOV half-angle is 45 degrees (the FOV is then 90 degrees), the tangent of this angle is equal to 1. Therefore, when we multiply the projected coordinates by 1, the coordinates do not change. For FOV values less than 90 degrees, the tangent of the half-angle gives values smaller than 1, and for values greater than 90 degrees, it gives values greater than 1. But the opposite effect is needed: recall that zooming in corresponds to a decrease in FOV, so we need to multiply the projected point coordinates by a value greater than 1, while zooming out corresponds to an increase in FOV, so we need to multiply these coordinates by a value smaller than 1. Thus, we will use the reciprocal of the tangent, in other words, one over the tangent of the FOV half-angle.

Here is the final equation to compute the value used to scale the coordinates of the projected point:

$$S = \dfrac{1}{\tan(\dfrac{fov}{2}*\dfrac{\pi}{180})}$$And thus we have the final version of our basic perspective projection matrix:

$$ \left[\begin{array}{cccc} S & 0 & 0 & 0 \\ 0 & S & 0 & 0 \\ 0 & 0 & -\dfrac{f}{(f-n)} & -1\\ 0 & 0 & -\dfrac{f*n}{(f-n)}& 0\\ \end{array}\right] $$

## Are There Different Ways of Building this Matrix?

Yes and no. Some renderers may implement the perspective projection matrix differently. This is the case with OpenGL, which used a function called glFrustum to create perspective projection matrices. This call takes as arguments the left, right, bottom, and top screen coordinates, in addition to the near and far clipping planes. Unlike our system, OpenGL assumes that the points in the scene are projected onto the near clipping plane, rather than onto a plane that lies one unit away from the camera position. The matrix itself might also look slightly different. Be careful about the convention used for vectors and matrices: the projected point can be represented as either a row or a column vector. Check also whether the renderer uses a left- or right-handed coordinate system, as that can change the sign of the matrix coefficients. Despite these differences, the underlying principle of the perspective projection matrix is the same for all renderers: they always divide the x- and y-coordinates of the point by its z-coordinate. In the end, all matrices should project the same points to the same pixel coordinates, regardless of the conventions or the matrix being used. We will study the construction of the OpenGL matrix in the next chapter.

## Test Program

To test our basic perspective projection matrix, we wrote a small program to project the vertices of a polygonal object (Newell's teapot) onto the image plane using the projection matrix we developed in this chapter. The program itself is simple in its implementation. A function is used to build the perspective projection matrix. Its arguments are the camera's near and far clipping planes, as well as the camera field-of-view defined in degrees. The vertices of the teapot are stored in an array (line 5). Each point is then projected onto the image plane using a simple point-matrix multiplication (line 51). Note that we first transform the points from world or object space to camera space. The function multPointMatrix computes the product of a point with a matrix. Note how we create the fourth component, w (line 25), and divide the new point's coordinates by w only if w is different from 1 (line 28). **This is where and when the z or perspective divide occurs**. A point is only visible if its projected x- and y-coordinates are contained within the interval [-1:1] (regardless of the image aspect ratio); otherwise, the point lies outside the camera's screen boundaries. If the point is contained within this interval, we need to remap its coordinates to raster space, i.e. pixel coordinates. This operation is simple: we remap the coordinates from [-1:1] to [0:1], multiply by the image size, and round the resulting floating-point value to the nearest integer, as pixel coordinates must be integers.

To test our program, we have rendered an image of the teapot in a commercial renderer using the same camera settings and combined it with the image produced by our code. They match, as expected (the teapot geometry and the files of this program can be found in the Source Code chapter at the end of this lesson).

## What's Next?

In the next chapter, we will learn how to construct the perspective projection matrix used in OpenGL. The principles are the same, but instead of mapping the points to an image plane one unit from the camera position, it projects the point onto the near clipping plane and it remaps the projected point coordinates to NDC space using the screen coordinates which are themselves computed from the camera near clipping plane and angle-of-view. This results in a different matrix. We will then learn about the **orthographic projection** matrix.