Perlin Noise: Part 2


In this chapter, we will learn about a fun technique that consists of using 2D Perlin noise to displace the vertices of a mesh to create a terrain. As we mentioned in the first lesson on noise, the noise function is a very useful "procedural texture" primitive from which more complex procedural textures can be created, such as fractal or turbulence patterns. We will start with plain Perlin noise, and then show an example of a terrain generated using a fractal pattern instead.

This technique will also be useful to better understand the importance of computing the derivatives of the Perlin noise function, which is the topic of the next chapter.

After reading this chapter, you will be able to reproduce the image below.

The idea behind this technique is very simple and similar to what we call displacement mapping. If you look at the grid from the top, you can more easily see that if you overlay the noise image onto the grid, you get a perfect match: to each vertex of the grid corresponds a pixel in the noise image. As you know, we can define pixel coordinates in normalized device coordinates (the pixel coordinates are then in the range [0,1]). The same can be done with the grid vertices: these coordinates are technically called texture coordinates. Let's look at the function we will use to create the grid:

```cpp
PolyMesh* createPolyMeshPlane(
    uint32_t width = 1, uint32_t height = 1,
    uint32_t subdivisionWidth = 40, uint32_t subdivisionHeight = 40)
{
    PolyMesh *poly = new PolyMesh;
    poly->numVertices = (subdivisionWidth + 1) * (subdivisionHeight + 1);
    poly->vertices = new Vec3f[poly->numVertices];
    poly->st = new Vec2f[poly->numVertices];
    float invSubdivisionWidth = 1.f / subdivisionWidth;
    float invSubdivisionHeight = 1.f / subdivisionHeight;
    for (uint32_t j = 0; j <= subdivisionHeight; ++j) {
        for (uint32_t i = 0; i <= subdivisionWidth; ++i) {
            poly->vertices[j * (subdivisionWidth + 1) + i] =
                Vec3f(width * (i * invSubdivisionWidth - 0.5), 0,
                      height * (j * invSubdivisionHeight - 0.5));
            poly->st[j * (subdivisionWidth + 1) + i] =
                Vec2f(i * invSubdivisionWidth, j * invSubdivisionHeight);
        }
    }
    poly->numFaces = subdivisionWidth * subdivisionHeight;
    poly->faceArray = new uint32_t[poly->numFaces];
    for (uint32_t i = 0; i < poly->numFaces; ++i)
        poly->faceArray[i] = 4;
    poly->verticesArray = new uint32_t[4 * poly->numFaces];
    for (uint32_t j = 0, k = 0; j < subdivisionHeight; ++j) {
        for (uint32_t i = 0; i < subdivisionWidth; ++i) {
            poly->verticesArray[k]     = j * (subdivisionWidth + 1) + i;
            poly->verticesArray[k + 1] = j * (subdivisionWidth + 1) + i + 1;
            poly->verticesArray[k + 2] = (j + 1) * (subdivisionWidth + 1) + i + 1;
            poly->verticesArray[k + 3] = (j + 1) * (subdivisionWidth + 1) + i;
            k += 4;
        }
    }
    return poly;
}
```

The texture coordinates of each vertex are computed in the line that sets `poly->st[...]`. Texture space is also a space in which coordinates are in the range [0,1]. The vertex in the upper-left corner of the grid has the texture coordinates [0,0], while the vertex in the lower-right corner has the texture coordinates [1,1]. It thus becomes easy to use the vertex texture coordinates to do a lookup in the noise image.

Note that in this example, we use the noise image to read the values from the 2D noise function, but we could evaluate the 2D noise function directly if we wanted to. We just decided to re-use the array we created in the first chapter to output the result of the 2D noise function to an image file. When an image is used to displace the vertices of an object, we say that this image is a height map.

In the case of a height map, we generally use the brightness (the luminance, for example) of the pixels' color to control the amplitude of the displacement. Generally, the brighter the pixel, the greater the displacement, though of course you can map pixel values to displacement in a completely different way if you wish; it all depends on the effect you intend to create. All you need to remember is that you use an image to somehow control the amount by which the vertices of the object are displaced or moved along, for example, their normals.

```cpp
// Perlin noise is in the range [-1,1]; remap it to [0,1] for storage
for (unsigned j = 0; j < imageHeight; ++j) {
    for (unsigned i = 0; i < imageWidth; ++i) {
        float perlinNoise = PerlinNoise::evalAtPoint(Vec3f(i, j, 0) * (1 / 128.f));
        noiseMap[j * imageWidth + i] = (perlinNoise + 1) * 0.5;
    }
}

// displace: remap the stored values back to [-1,1]
for (uint32_t i = 0; i < poly->numVertices; ++i) {
    Vec2f st = poly->st[i];
    uint32_t x = std::min(static_cast<uint32_t>(st.x * imageWidth), imageWidth - 1);
    uint32_t y = std::min(static_cast<uint32_t>(st.y * imageHeight), imageHeight - 1);
    poly->vertices[i].y = 2 * noiseMap[y * imageWidth + x] - 1;
}
```

Keep in mind that Perlin noise values are in the range [-1,1], but we remapped them to [0,1] when we stored them in the image buffer. They are mapped back to [-1,1] when we displace the vertices, so that the mesh stays centered around the origin along the y-axis: vertices are pushed upward where the value is greater than 0, downward where it is lower than 0, and left in place where it is exactly 0.

If you render this mesh with the noise image applied on top as a texture map, you should get something similar to the first image of this chapter. Note how the white/bright areas of the noise image correspond to bumps in the mesh, while dark areas correspond to dents or valleys (and note how the amount of displacement is proportional to the pixel values).

As mentioned in the introduction of this chapter, you can use more interesting procedural patterns to displace the mesh vertices, such as a fractal pattern, which can be constructed as a weighted sum of noise layers. Check the previous lesson on noise to learn how to generate a fractal pattern using the noise function. Here is the code to generate the fractal image that was used to displace the mesh:

```cpp
uint32_t numLayers = 5;
float maxVal = 0;
for (uint32_t j = 0; j < imageHeight; ++j) {
    for (uint32_t i = 0; i < imageWidth; ++i) {
        float fractal = 0;
        float amplitude = 1;
        Vec3f pt = Vec3f(i, j, 0) * (1 / 128.f);
        for (uint32_t k = 0; k < numLayers; ++k) {
            fractal += (1 + PerlinNoise::evalAtPoint(pt)) * 0.5 * amplitude;
            pt *= 2;
            amplitude *= 0.5;
        }
        if (fractal > maxVal) maxVal = fractal;
        noiseMap[j * imageWidth + i] = fractal;
    }
}
for (uint32_t i = 0; i < imageWidth * imageHeight; ++i)
    noiseMap[i] /= maxVal;
```

A fractal image generally contains higher-frequency details than a single layer of noise. Thus, to see these details in the displacement, you will likely have to increase the density of the mesh itself. Here is a render of the mesh displaced with a fractal image.

As suggested in the previous lesson, this technique can be used to generate realistic terrains (hopefully the image above is convincing enough). We can also use the noise function to create and animate water surfaces. In this example, the procedural pattern we used (a fractal) is pretty simple. You can play with the parameters a bit to modify the look of the terrain, but we will also learn in another lesson how to add effects such as erosion to increase the realism of the terrain.
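As a sketch of the water idea, one way to animate a height field is to feed time into one dimension of a 3D noise lookup. The `noise3D` stand-in below is a placeholder (a simple sine, not real Perlin noise, but like it returning values in [-1,1]); in practice you would call the lesson's `PerlinNoise::evalAtPoint` instead, and the `frequency` parameter is an assumption of this sketch:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Stand-in for a 3D noise lookup returning values in [-1,1]; a real
// implementation would use the lesson's Perlin noise function instead.
float noise3D(float x, float y, float z)
{
    return std::sin(x * 1.7f + y * 0.9f + z * 2.3f);
}

// Animate a height field by sliding the noise lookup along the second
// dimension with time: each frame yields a smoothly evolving surface.
std::vector<float> waterHeights(uint32_t gridSize, float time, float frequency = 0.1f)
{
    std::vector<float> heights(gridSize * gridSize);
    for (uint32_t j = 0; j < gridSize; ++j)
        for (uint32_t i = 0; i < gridSize; ++i)
            heights[j * gridSize + i] = noise3D(i * frequency, time, j * frequency);
    return heights;
}
```

Evaluating `waterHeights` with an increasing `time` each frame and copying the result into the vertices' y-coordinates gives a simple animated surface.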

As a final note, we haven't recomputed the normals of the mesh after displacement. How do we do that? This is the topic of the next chapter, in which we will learn how to use the noise function's derivatives to compute a "true" normal at each vertex position after displacement.
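Until then, if you need normals right away, they can be estimated from the height map itself with central differences. This is a stopgap sketch, not the analytic method of the next chapter; `estimateNormal`, its parameters, and the `Vec3f` struct are names introduced just for this illustration:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3f { float x, y, z; };

Vec3f normalize(const Vec3f& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Estimate the normal at grid cell (i, j) of a height map using central
// differences; cellSize is the spacing between neighboring grid vertices.
Vec3f estimateNormal(const std::vector<float>& heights, int w, int h,
                     int i, int j, float cellSize)
{
    auto at = [&](int x, int y) {
        x = std::clamp(x, 0, w - 1);  // clamp lookups at the grid borders
        y = std::clamp(y, 0, h - 1);
        return heights[y * w + x];
    };
    float dhdx = (at(i + 1, j) - at(i - 1, j)) / (2 * cellSize);
    float dhdz = (at(i, j + 1) - at(i, j - 1)) / (2 * cellSize);
    // For a surface y = h(x, z), an (unnormalized) normal is (-dh/dx, 1, -dh/dz).
    return normalize({ -dhdx, 1.f, -dhdz });
}
```

On a flat height map this yields the straight-up normal (0, 1, 0); the finer the grid, the closer the estimate gets to the true normal of the underlying noise function.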