Showing content from https://jacco.ompf2.com/2024/05/08/ray-tracing-with-voxels-in-c-series-part-3/ below:

Ray Tracing with Voxels in C++ Series – Part 3 – Jacco’s Blog

In this series we build a physically based renderer for a dynamic voxel world. From ‘Whitted-style’ ray tracing with shadows, glass and mirrors, we go all the way to ‘unbiased path tracing’ – and beyond, exploring advanced topics such as reprojection, denoising and blue noise. The accompanying source code is available on GitHub and is written in easy-to-read ‘sane C++’.

This series consists of nine articles. Contents: (tentative until finalized)

  1. Starting point: Voxel ray tracing template code. Lights and shadows.
  2. Whitted-style: Reflections, recursion, glass.
  3. Stochastic techniques: Anti-aliasing and soft shadows (this article).
  4. Noise reduction: Stratification, blue noise.
  5. Converging under movement: Reprojection.
  6. Path tracing: Basics first; then: Importance Sampling.
  7. Denoising fundamentals.
  8. Acceleration structures: Extending the grid to a multi-level grid.
  9. Acceleration structures: Adding a TLAS.
In This Article…

In this article we move beyond Whitted-style ray tracing with some stochastic rendering techniques. This will unlock new and interesting features, such as soft shadows from area lights, motion blur and depth of field. It all starts with anti-aliasing, as this simple technique already uses the ingredients we will need for the more impactful techniques. And finally, we will see how stochastic rendering solves the main problem of Whitted-style ray tracing: Performance.

Starting Point

As with the previous articles, we start with the standard template for this series, which you can find in its repository on GitHub. Let’s change Renderer::Trace to a smaller version of the code used in article 2:

float3 Renderer::Trace( Ray& ray, int depth, int, int )

{

    scene.FindNearest( ray );

    if (ray.voxel == 0) return float3( 0.5f, 0.6f, 1.0f );

    static const float3 L = normalize( float3( 3, 2, 1 ) );

    return max( 0.3f, dot( ray.GetNormal(), L ) );

}

When you zoom in on an edge, maybe after taking a screenshot, an issue emerges:

Figure 1: ‘Jaggies’: rasterization / point sampling artifacts.

The ‘jaggies’ on the edges are aliasing artifacts. They result from displaying an image on a computer screen, which is a raster of more or less square pixels. If we zoom in on a small set of pixels, the root cause of the problem emerges.

Figure 2: Point samples detect geometry only at a single point on the area of a pixel.

The pixel colors are supposed to be a representation of the geometry that we can see through a small rectangle. But, the ray tracer samples the geometry using a single ray, here depicted by a yellow dot. So, when a pixel should display a blend of the sky color and a shaded voxel, it actually displays one or the other.

Take the bottom right pixel. Based on the sample, that pixel is blue. A better color would be ~90% blue and ~10% grey: this ratio depends on the areas of the blue and the grey patches. In this particular case, the ratio could theoretically be calculated, but this becomes hard when the scene is more complex. We can however estimate it, by using multiple rays per pixel.

Figure 3: Multi-sampling gives us a coarse estimate of the true color of a pixel.

This time, the bottom right corner more or less accurately estimates the color to be half blue, half grey, since two rays hit the sky, and two rays hit the geometry. The top-right pixel however is still just blue, despite the extra work we spent on it. We can fix this by sending more rays, and as the number of rays approaches infinity, the answer will become correct, no matter how complex the things we ‘see’ through the pixel. This technique is called numerical integration. We estimate the energy arriving through a pixel by taking a number of point samples. More samples improve the estimate.

– o –

Code

Let’s implement anti-aliasing with multiple rays per pixel. On line 37 of Renderer::Tick we currently create a single primary ray for each pixel:

Ray r = camera.GetPrimaryRay( (float)x, (float)y );

Let’s make that four rays.

Ray r1 = camera.GetPrimaryRay( (float)x, (float)y );

Ray r2 = camera.GetPrimaryRay( x + 0.5f, (float)y );

Ray r3 = camera.GetPrimaryRay( (float)x, y + 0.5f );

Ray r4 = camera.GetPrimaryRay( x + 0.5f, y + 0.5f );

float3 sample1 = Trace( r1 );

float3 sample2 = Trace( r2 );

float3 sample3 = Trace( r3 );

float3 sample4 = Trace( r4 );

float3 pixel = 0.25f * (sample1 + sample2 + sample3 + sample4);

First observation after running this program: this works. The jaggies have been substantially reduced. Second observation: it now runs four times slower. Yeah, ray tracers tend to do that. Luckily, the template lets you remedy that relatively easily. Open up camera.h, and you’ll see the screen resolution used by the template:

// default screen resolution

#define SCRWIDTH 1280

#define SCRHEIGHT 800

// #define FULLSCREEN

// #define DOUBLESIZE

Change the resolution to 640 by 400, and uncomment DOUBLESIZE. This will render every pixel as a 2×2 block, effectively cutting the work the program has to do to 25%.

Figure 4: 2×2 samples per pixel greatly reduce the jaggies.

There is however a different solution to the performance problem, and that solution is a bit more fundamental.

– o –

One Sample

Instead of sending 4, 9, 16, 25 or even infinite rays through each pixel, we can do something clever: We send one ray through every pixel, but, instead of aiming it at a fixed position on the pixel, we aim at a random position. This has two consequences:

  1. We immediately get our original performance back, because each pixel is calculated using one ray;
  2. Geometry edges are now noisy.

The code we need looks like this:

Ray r = camera.GetPrimaryRay( x + RandomFloat(), y + RandomFloat() );

float3 pixel = Trace( r );

And the output is noisy, as promised:

Figure 5: Random point samples on pixels yield a noisy result.

It turns out that things improve when we send multiple random rays through each pixel. Try the following snippet, starting at line 32:

// trace a primary ray for each pixel on the line

for (int x = 0; x < SCRWIDTH; x++)

{

    float3 pixel( 0 );

    for ( int sample = 0; sample < 8; sample++ )

    {

        Ray r = camera.GetPrimaryRay( x + RandomFloat(), y + RandomFloat() );

        pixel += Trace( r );

    }

    screen->pixels[x + y * SCRWIDTH] = RGBF32_to_RGB8( pixel * 0.125f );

}

This produces the following image:

Figure 6: Multiple random samples show a result that is still noisy, but closer to the correct solution.

It’s still noisy, but it is approaching the result that we are looking for, similar to the 2×2 raster of samples that we used earlier.

This experiment reveals that there is something special about the noisy pixels. More of them get us closer to the correct result, because the probability that a random sample finds a particular color through a pixel is proportional to the area of that particular color. Let that sink in, because it is important. We just replaced the calculation of the area of a sub-pixel feature by the chance that we hit it with a random ray. Or, in math terms, we replaced an integral by the expected value of a random process.

– o –

Converge

Now that we know that more random samples improve the quality of the anti-aliasing, it’s time to add something cool to the ray tracer. We start with a modification to Renderer::Init in renderer.cpp. This function is called once before all frames are drawn and is useful to do things we need only once. Modify it as follows:

void Renderer::Init()

{

    accumulator = new float3[SCRWIDTH * SCRHEIGHT];

    memset( accumulator, 0, SCRWIDTH * SCRHEIGHT * sizeof( float3 ) );

}

The pixel plotting loop is restored to the single random sample variant:

// trace a primary ray for each pixel on the line

for (int x = 0; x < SCRWIDTH; x++)

{

    Ray r = camera.GetPrimaryRay( x + RandomFloat(), y + RandomFloat() );

    float3 pixel = Trace( r );

    screen->pixels[x + y * SCRWIDTH] = RGBF32_to_RGB8( pixel );

}

The accumulator array is pure magic. We use it to calculate multiple samples per pixel over time. It starts empty, with every pixel set to 0. Observe the low-level magic used here: The 32-bit integer number 0, when interpreted as 32-bit floating point data, is also 0.0f, which allows us to quickly zero the large accumulator array. We then add a value for each pixel to it:

for (int x = 0; x < SCRWIDTH; x++)

{

    Ray r = camera.GetPrimaryRay( x + RandomFloat(), y + RandomFloat() );

    accumulator[x + y * SCRWIDTH] += Trace( r );

}

The value in the accumulator will thus get brighter with each frame. But, if we divide the accumulated values by the number of frames, we will get the correct average. So, just above function Renderer::Tick, we add a line:

static int spp = 1;

Then, we add a line just before the #pragma omp parallel line, so that the value is available to all rendering threads:

const float scale = 1.0f / spp++;

Almost there. We now modify the pixel loop so that the average of each accumulated pixel is plotted to the screen:

// trace a primary ray for each pixel on the line

for (int x = 0; x < SCRWIDTH; x++)

{

    Ray r = camera.GetPrimaryRay( x + RandomFloat(), y + RandomFloat() );

    accumulator[x + y * SCRWIDTH] += Trace( r );

    float3 average = accumulator[x + y * SCRWIDTH] * scale;

    screen->pixels[x + y * SCRWIDTH] = RGBF32_to_RGB8( average );

}

Now run this. The result is absolutely perfect anti-aliasing, thanks to some basic probability theory.

Figure 7: Perfectly converged anti-aliasing for a stationary camera, accumulated over time.

One problem remains: When you move the camera, nothing happens, or at best a massive smear is the result. There’s an easy fix for that: Whenever the camera moves, we should clear the accumulator, and restart accumulation. Replace the last line of Renderer::Tick with:

if (camera.HandleInput( deltaTime ))

{

    spp = 1;

    memset( accumulator, 0, SCRWIDTH * SCRHEIGHT * sizeof( float3 ) );

}

That’s all! Function Camera::HandleInput will let you know when user input changed the view; when this happens, we reset the static spp (‘samples per pixel’) variable to 1, and clear the values in the accumulator. This time the image is noisy only when the camera moves, and quickly converges when it becomes stationary.

– o –

Soft Shadows

With perfect anti-aliasing in place, it turns out to be pretty easy to add soft shadows to the renderer.

Soft shadows represent a pretty high end rendering feature. They occur naturally when light is cast by an area light, i.e. any light that is not a point.

Figure 8: Soft-shadows in a photograph.

Even the sun is an area light. In fact, point lights were pretty rare until we got LEDs, which sometimes come close to being true point lights.

Some terminology: the soft edge of a soft shadow is called the penumbra; this is the area from which the area light source is partially visible. The part of the shadow for which the light is fully blocked is called the umbra. To calculate a soft shadow in a ray tracer, we are thus interested in the visibility of a light source. For a point light, this is all or nothing: a shadow ray either gets there, or it is blocked. For an area light, we may be able to reach some part of the light, or all of it, or some percentage. You can probably see where this is going: if we fire a shadow ray to a random point on the light source and determine the probability that this ray arrives unobstructed, then this probability can perfectly replace visibility.

Let’s set up a basic area light to play with. The simplest shape is a horizontal rectangle. A random point on such a rectangle is easily obtained:

float3 RandomPointOnLight()

{

    return float3( RandomFloat() - 1, 3, RandomFloat() - 1 );

}

Recall that the world is 1x1x1, located at the origin, so this would be a large plane floating some distance diagonally above it. We use the new RandomPointOnLight function in a modified version of the Renderer::Trace() code from the first article:


float3 Renderer::Trace( Ray& ray, int depth, int, int )

{

    scene.FindNearest( ray );

    if (ray.voxel == 0) return float3( 0.4f, 0.5f, 1.0f );

    float3 I = ray.IntersectionPoint();

    float3 L = RandomPointOnLight() - I;

    float distance = length( L );

    L = normalize( L );

    float cosa = max( 0.0f, dot( ray.GetNormal(), L ) );

    Ray shadowRay( I, L, distance );

    if (scene.IsOccluded( shadowRay )) return 0;

    return 20 * ray.GetAlbedo() * cosa / pow2f( distance );

}

Using the RandomPointOnLight function we now apply the same magic that allowed us to do anti-aliasing with just one ray. Without the accumulator, things are interesting already:

Figure 9: Taking random samples on an area light source yields soft shadows, where the density of the black spots is proportional to the visibility of the light source.

Even with one sample per pixel, the soft shadow already emerges. Every individual sample is still ‘hit’ or ‘miss’, but on average, these samples capture the smoothly varying visibility of the area light. That becomes even clearer when we use the accumulator:

Figure 10: Combining the accumulator with point samples on the pixel and the light yields a perfect result.

That looks just gorgeous. And here comes the kicker: One sample through every pixel not only gets us this beautiful soft shadow; we still have the anti-aliasing as well! And all of that with an absolutely tiny piece of code.

– o –

And Finally

There’s one last thing to introduce here. So far we have been working with rather boring test data. To improve on that situation I have converted a 3D mesh by author PabloDebusschere on Sketchfab to voxels and compressed it using zlib. You can find the resulting file, viking.bin, in the assets folder. To load it, replace the Scene constructor with the following code:

Scene::Scene()

{

    // allocate room for the world

    grid = (uint*)MALLOC64( GRIDSIZE3 * sizeof( uint ) );

    gzFile f = gzopen( "assets/viking.bin", "rb" );

    int3 size;

    gzread( f, &size, sizeof( int3 ) );

    gzread( f, grid, size.x * size.y * size.z * 4 );

    gzclose( f );

}

After carefully picking a good looking camera view, the result is as shown below.

And with that we end this week’s episode. In the next episode we will look at methods to reduce the noise that is clearly visible when the camera is moving. See you then!

– o –

Challenges

Challenge 1. Combine the concepts of this article with those described in articles 1 and 2. Concretely: add support for multiple lights, as well as reflective and refractive materials, to produce high quality images. Make sure to share some online. And if you made a game at the end of article 1, by now it should look extremely nice!

Challenge 2. Now that everything is random, we can solve the ‘splitting ray’ issue of Whitted-style ray tracing. For your glass material, choose randomly between the reflected and refracted directions. The probability of each should of course be determined by Fresnel.

Challenge 3. You can find several .bin files in the assets folder. Create some code that adds them to a world, even when their size is not exactly 128x128x128 voxels, as with the Viking scene. You can even modify Scene::Scene to load several .bin files, perhaps at a custom location in the world.

Challenge 4. In another series of articles on this blog you can find instructions for adding a skydome to your scene. Follow the instructions. 🙂

Etc.

If you want to share some of your work, consider posting on X about it. You can also follow me there (@j_bikker), or contact me at bikker.j@gmail.com.

