I’ve had an idea floating around in my head for several months now, but evening classes, a hard drive failure, then a GPU failure prevented me from doing much about it until this weekend. First, a couple of screenshots of what I’ll be talking about.

Cornell box, I choose you!

The above screenshots are of real-time single-bounce GI in a static scene with fully dynamic lighting. There are 7182 patches, and the lighting calculations take 36ms per frame on one thread of an Intel Core 2 @2.4 GHz. The code is not optimized.

The basic idea is simple and is split into two phases.

A one-time scene compilation phase:

  • Split all surfaces in the scene into roughly equal sized patches.
  • For each patch i, build a list of all other patches j visible from i, along with the form factor between patches i and j (a minimal sketch of this computation follows the list):
    F_{ij} = \frac{\cos \phi_i \, \cos \phi_j}{\pi \, |r|^2} \, A_j
    where:
    Fij is the form factor between patches i and j
    Aj is the area of patch j
    r is the vector from i to j
    φi is the angle between r and the normal of patch i
    φj is the angle between r and the normal of patch j
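
To make the form factor concrete, here’s a minimal sketch of that computation in C++. The Patch struct and function name are mine rather than anything from the actual source, and visibility between the two patches is assumed to have been tested separately:

```cpp
// Minimal sketch of the per-patch-pair form factor; names are illustrative only.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Patch {
    Vec3  centre;
    Vec3  normal;   // unit length
    float area;
};

// F_ij = cos(phi_i) * cos(phi_j) * A_j / (pi * |r|^2), clamped to front-facing pairs.
float formFactor(const Patch& i, const Patch& j)
{
    Vec3  r      = sub(j.centre, i.centre);   // vector from i to j
    float distSq = dot(r, r);
    if (distSq < 1e-6f) return 0.0f;

    float invDist = 1.0f / std::sqrt(distSq);
    float cosI    = std::max(0.0f,  dot(i.normal, r) * invDist);
    float cosJ    = std::max(0.0f, -dot(j.normal, r) * invDist);

    return cosI * cosJ * j.area / (3.14159265f * distSq);
}
```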

And a per-frame lighting phase:

  • For each patch i, calculate the direct illumination.
  • For each patch i, calculate single-bounce indirect illumination from all visible patches (a short sketch of this gather also follows the list):
    I_i = \sum_j D_j \, F_{ij}
    where:
    Ii is the single-bounce indirect illumination for patch i
    Dj is the direct illumination for patch j
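
And a sketch of the per-frame gather, assuming each patch keeps the (visible patch index, form factor) pairs built during scene compilation; the type and function names are again illustrative, not from the actual source:

```cpp
// Hypothetical per-frame bounce gather. Each patch keeps the (patch index, form
// factor) pairs found at scene-compilation time, so the per-frame cost is just a
// weighted sum over that list.
#include <vector>

struct VisibleEntry { int patch; float formFactor; };

struct PatchLighting {
    std::vector<VisibleEntry> visible; // built once during scene compilation
    float direct   = 0.0f;             // D_i, recomputed every frame
    float indirect = 0.0f;             // I_i, single bounce
};

void gatherIndirect(std::vector<PatchLighting>& patches)
{
    for (PatchLighting& p : patches) {
        float sum = 0.0f;
        for (const VisibleEntry& e : p.visible)
            sum += patches[e.patch].direct * e.formFactor; // I_i = sum_j D_j * F_ij
        p.indirect = sum;
    }
}
```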

So far, so radiosity. If I understand Michal Iwanicki’s GDC presentation correctly, this is similar to the lighting tech on Milo and Kate, only they additionally project the bounce lighting into SH.

The problem with this approach is that the running time is O(N²) in the number of patches. We could work around this by making the patches quite large, running on the GPU, or both. Alternatively, we can bring the running time down to O(N log N) by borrowing from Michael Bunnell’s work on dynamic AO and clustering patches into a hierarchy. I chose to perform bottom-up patch clustering similarly to the method Miles Macklin describes in his (Almost) realtime GI blog post.

Scene compilation is now:

  • Split all surfaces in the scene into roughly equal sized patches.
  • Build a hierarchy of patches using k-means clustering (one level of this clustering is sketched after the list).
  • For each patch i, build a list of all other patches visible from i along with the form factor between patches i and j. If a visible patch j is too far from patch i, link to a cluster further up the hierarchy instead.
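
Below is a rough, simplified take on one level of that bottom-up clustering: plain k-means on patch centres, with each resulting cluster becoming a parent patch whose area is the sum of its members. Running it repeatedly on its own output gives the hierarchy. The real code would also want to account for normals and pick a sensible k per level; all names here are hypothetical:

```cpp
// One level of bottom-up clustering via k-means on patch centres. Each cluster
// becomes a parent patch; run repeatedly on the output to build higher levels.
#include <cstddef>
#include <vector>

struct ClusterPatch {
    float cx, cy, cz;          // centre
    float area;
    std::vector<int> children; // indices into the level below (empty for leaves)
};

std::vector<ClusterPatch> clusterLevel(const std::vector<ClusterPatch>& in, int k, int iterations)
{
    // Seed cluster centres from evenly spaced input patches.
    std::vector<ClusterPatch> out(k);
    for (int c = 0; c < k; ++c) {
        const ClusterPatch& seed = in[(c * in.size()) / k];
        out[c] = { seed.cx, seed.cy, seed.cz, 0.0f, {} };
    }

    std::vector<int> assignment(in.size(), 0);
    for (int it = 0; it < iterations; ++it) {
        // Assign each patch to its nearest cluster centre.
        for (std::size_t i = 0; i < in.size(); ++i) {
            float best = 1e30f;
            for (int c = 0; c < k; ++c) {
                float dx = in[i].cx - out[c].cx, dy = in[i].cy - out[c].cy, dz = in[i].cz - out[c].cz;
                float d = dx * dx + dy * dy + dz * dz;
                if (d < best) { best = d; assignment[i] = c; }
            }
        }
        // Move each centre to the mean of its members.
        std::vector<float> sx(k, 0.0f), sy(k, 0.0f), sz(k, 0.0f);
        std::vector<int> count(k, 0);
        for (std::size_t i = 0; i < in.size(); ++i) {
            int c = assignment[i];
            sx[c] += in[i].cx; sy[c] += in[i].cy; sz[c] += in[i].cz; ++count[c];
        }
        for (int c = 0; c < k; ++c)
            if (count[c] > 0) { out[c].cx = sx[c] / count[c]; out[c].cy = sy[c] / count[c]; out[c].cz = sz[c] / count[c]; }
    }

    // Record membership and accumulate areas into the parent patches.
    for (std::size_t i = 0; i < in.size(); ++i) {
        out[assignment[i]].children.push_back(static_cast<int>(i));
        out[assignment[i]].area += in[i].area;
    }
    return out;
}
```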

And the lighting phase:

  • For each leaf patch i in the hierarchy, calculate the direct illumination.
  • Propagate the direct lighting up the hierarchy (a rough sketch of this propagation follows the list).
  • For each patch i, calculate single-bounce indirect illumination from all visible patches and clusters.
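
Here’s a rough sketch of the propagation step. It assumes nodes are stored so that children appear before their parents, and uses an area-weighted average of child direct lighting, which is one reasonable choice given that a cluster’s form factor already carries its total area; the node layout and names are hypothetical:

```cpp
// Area-weighted propagation of direct lighting from leaf patches up to clusters.
// Assumes children are stored at lower indices than their parents, so a single
// forward pass over the node array is enough.
#include <vector>

struct Node {
    float area   = 0.0f;
    float direct = 0.0f;       // leaves: computed per frame; clusters: filled in here
    std::vector<int> children; // empty for leaf patches
};

void propagateDirect(std::vector<Node>& nodes)
{
    for (Node& n : nodes) {
        if (n.children.empty()) continue;  // leaves already hold their direct term
        float sum = 0.0f, areaSum = 0.0f;
        for (int c : n.children) {
            sum     += nodes[c].direct * nodes[c].area;
            areaSum += nodes[c].area;
        }
        n.direct = areaSum > 0.0f ? sum / areaSum : 0.0f;
    }
}
```

The per-patch gather is then the same sum as before, except that some of the precomputed links now point at clusters rather than individual patches.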

Although this technique is really simple, it supports a feature set similar to that of Enlighten:

  • Global illumination reacts to changes in surface albedo.
  • Static area light sources that cast soft shadows.

That’s about it. There are a few other areas I’m tempted to look into once I’ve cleaned the code up a bit:

  • Calculate direct and indirect illumination at different frequencies. This would allow scaling to much larger scenes.
  • Perform the last two lighting steps multiple times to approximate more light bounces.
  • Project the indirect illumination into SH or the Half-Life 2 basis.
  • Light probes for dynamic objects.

You can grab the source code from here. Expect a mess, since it’s a C++ port of a C# proof of concept with liberal use of vector and hash_map. Scene construction is particularly bad and may take a minute to run. You can fly around using WASD and left-click dragging with the mouse.

Fighting fireflies

December 23, 2010

I’ve been playing around with path tracing on and off for longer than I care to admit. Although my dalliances never produced anything earth-shattering (and certainly nothing I’d be willing to post about on ompf), I’ve found it to be an endlessly fascinating subject. No matter how much I read about it, I only ever seem to be scratching the surface.

One of the biggest headaches I encountered was caused by “fireflies”: those bright pixels that can occur when sampling a strong response combined with a small PDF somewhere along the path. For a long time, I was “fixing” these by hand-painting over the offending pixels and pretending nothing ever happened. Eventually, though, the guilt gnawed away at me long enough to motivate finding a better solution.

My first thought was to write a filter that estimated variance in an image and replaced any “bad” pixels it found with a weighted average of their neighbours. Luckily, my second thought was of variance shadow maps, the only other context in which I’d read about variance before. Based on the ideas in that paper, I accumulate two separate per-pixel buffers: one storing the running sum of the samples and the other storing the sum of their squares. Having these two buffers makes it trivial to compute the sample variance of each pixel in the image.
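
For illustration, here’s a minimal sketch of those two buffers, assuming a single scalar (e.g. luminance) per sample rather than full RGB; the struct and its names are mine, not the renderer’s:

```cpp
// Per-pixel accumulation of sample sums and sums of squares, from which the
// sample variance falls out directly.
#include <algorithm>
#include <vector>

struct VarianceBuffer {
    std::vector<float> sum;   // running sum of samples
    std::vector<float> sumSq; // running sum of squared samples
    std::vector<int>   count; // samples accumulated per pixel

    explicit VarianceBuffer(int pixels) : sum(pixels, 0.0f), sumSq(pixels, 0.0f), count(pixels, 0) {}

    void addSample(int pixel, float value)
    {
        sum[pixel]   += value;
        sumSq[pixel] += value * value;
        count[pixel] += 1;
    }

    // Sample variance via E[x^2] - E[x]^2; good enough for ranking pixels.
    float variance(int pixel) const
    {
        int n = count[pixel];
        if (n < 2) return 0.0f;
        float mean = sum[pixel] / n;
        return std::max(0.0f, sumSq[pixel] / n - mean * mean);
    }
};
```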

My path tracer already had support for progressive refinement, so it was straightforward to add a separate “variance reduction” pass that would run at the touch of a button. During this pass, the N pixels with the highest variance are identified and oversampled a few hundred times, which hopefully reduces their variance sufficiently. If not, I just run the pass again.
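
Here’s a sketch of how the pixel selection for that pass might look, assuming the per-pixel variance has already been computed from the buffers above; the oversample callback is a stand-in for whatever the tracer’s actual resampling entry point is:

```cpp
// Hypothetical selection step for the variance reduction pass: rank pixels by
// their variance estimate, take the worst N, and fire extra samples at only those.
#include <algorithm>
#include <functional>
#include <numeric>
#include <vector>

void varianceReductionPass(const std::vector<float>& variancePerPixel,
                           int numWorst, int extraSamples,
                           const std::function<void(int pixel, int samples)>& oversample)
{
    const int pixelCount = static_cast<int>(variancePerPixel.size());
    numWorst = std::min(numWorst, pixelCount);

    std::vector<int> order(pixelCount);
    std::iota(order.begin(), order.end(), 0);

    // Partial sort so the numWorst highest-variance pixel indices come first.
    std::partial_sort(order.begin(), order.begin() + numWorst, order.end(),
                      [&](int a, int b) { return variancePerPixel[a] > variancePerPixel[b]; });

    for (int i = 0; i < numWorst; ++i)
        oversample(order[i], extraSamples); // e.g. a few hundred extra paths per pixel
}
```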

As an example, here’s a render of the Manifold mesh from Torolf Sauermann’s awesome model repository, stopped after only a few paths have been traced per pixel:

I’ve highlighted a few areas that contain fireflies and below is a comparison of those areas before and after running the variance reduction pass:

And here’s the complete result of running the pass; it’s still noisy, but the pixels with particularly high variance have been cleaned up reasonably well:

I’m not too hot at statistics, but I would guess that this adds bias to the final result, which is frowned upon in some circles (but not others). Admittedly, a better solution would be to simply not generate so much variance in the first place, but this will do as a kludge until then. At least it’s better than painting pixels by hand!

Ok, since this post was mostly just an excuse to dump some results from my path tracer, here they are.

The XYZ RGB Dragon from Stanford’s 3D Scanning Repository:

A heat map of the BIH built for the same model.

The other Dragon from Stanford:

Manifold again, from jotero.com.

Some Stanford Bunnies:

Crytek’s updated version of Marko Dabrovic’s Sponza model:

And Stanford’s Lucy, just to prove that I can:
I’ve not been able to find any close-up renders of Lucy to compare with, but I believe that the “pimples” on the model are noise in the original dataset.