
Upscaling half resolution screen space effects

When working with diffuse lighting and ambient occlusion in screen space it is often very tempting to do computations in lower resolution. Most of it is blurry anyway, and for any kind of GI/path tracing, diffuse lighting is undoubtedly the bottleneck. Here is a test scene with all colours set to white and no textures.



With only diffuse lighting enabled, the image looks strangely familiar.



You quickly realise that diffuse lighting is the lion's share of the entire image. Since everything is the same colour, two overlapping objects can be told apart only because they differ in diffuse lighting. Therefore, lowering the resolution of the diffuse lighting also means that a lot of edges will be rendered at half resolution, and the same diffuse lighting suddenly looks like this:



Not acceptable, but note that the image looks perfectly fine over larger areas where there are no edges, and also along the contours towards the skybox. I've come to think of two solutions to this problem:

1) Render at half resolution. Detect edges and re-render pixels near edges during upsampling. This would probably work very well, but I haven't tried it yet.

2) A cheaper solution would be to cover up faulty pixels on the edges using neighbouring pixels from the same surface (it's all blurry, remember?), practically retouching the edges much the same way you retouch images in Photoshop.

I decided to try the latter and got some interesting results. First I create a 2D "retouching" vector field. It is basically just a distance offset, telling each pixel where to fetch its samples. In the middle of a surface this will be (0,0), and near an edge it will point away from the edge. If you have any way of classifying surfaces in a shader, this is actually really cheap to do. I just use a unique number for each smoothing group to identify smooth surfaces, and for each pixel I check the eight neighbouring pixels and average the offsets of the ones that are in the same smoothing group. Ta-da, the average offset will now point in a direction away from each edge, and the retouch vector field looks something like this (here visualized upscaled and with absolute values):



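Roughly, the construction boils down to something like this C-style sketch (the buffer layout, names and CPU loop are just for illustration; the real thing runs as a screen space shader pass):

/* Sketch: build a half-resolution retouch vector field from per-pixel
   smoothing-group ids. Layout and names are illustrative. */

typedef struct { float x, y; } Vec2;

/* group[y * w + x] : smoothing-group id of each half-res pixel
   field[y * w + x] : output retouch offset, in half-res pixels */
void build_retouch_field(const int *group, Vec2 *field, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int self = group[y * w + x];
            Vec2 sum = { 0.0f, 0.0f };
            int count = 0;

            /* visit the eight neighbouring pixels */
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;

                    /* only neighbours on the same surface contribute */
                    if (group[ny * w + nx] != self) continue;
                    sum.x += (float)dx;
                    sum.y += (float)dy;
                    ++count;
                }
            }

            /* in the interior the offsets cancel out to (0,0); near an
               edge the average points into the surface, away from the edge */
            if (count > 0) { sum.x /= count; sum.y /= count; }
            field[y * w + x] = sum;
        }
    }
}
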
Now, if you process the downscaled, half-resolution diffuse lighting through this retouch field during upscaling, the resulting image will magically look like this:



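The upscaling pass itself is equally simple. Something along these lines, again just an illustrative C-style sketch with a nearest-neighbour fetch rather than actual shader code:

#include <math.h>   /* lroundf */

typedef struct { float x, y; } Vec2;      /* same as in the sketch above */
typedef struct { float r, g, b; } Color;

/* Sketch: upscale half-res diffuse lighting to full resolution, offsetting
   every fetch by the retouch field so edge pixels borrow lighting from the
   interior of their own surface. Names are illustrative. */
void upscale_with_retouch(const Color *diffuse_half, const Vec2 *field,
                          Color *diffuse_full,
                          int half_w, int half_h, int full_w, int full_h)
{
    for (int y = 0; y < full_h; ++y) {
        for (int x = 0; x < full_w; ++x) {
            /* map the full-res pixel to half-res coordinates */
            int hx = x / 2;
            int hy = y / 2;

            /* push the fetch position away from the edge */
            Vec2 off = field[hy * half_w + hx];
            int sx = hx + (int)lroundf(off.x);
            int sy = hy + (int)lroundf(off.y);

            /* clamp to the half-res buffer */
            if (sx < 0) sx = 0; else if (sx >= half_w) sx = half_w - 1;
            if (sy < 0) sy = 0; else if (sy >= half_h) sy = half_h - 1;

            diffuse_full[y * full_w + x] = diffuse_half[sy * half_w + sx];
        }
    }
}
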
Congratulations, you just saved ~75% of the processing time for your diffuse lighting. There are artifacts, as always, but I found the results acceptable in most situations. Computing diffuse lighting at half resolution (a quarter of the pixel count) allowed me to do eight samples per pixel instead of two, resulting in more accurate lighting and less noise.

Another really nice property of the retouch vector field is that once you've created it, you can reuse it for any screen space upscaling you might do. I, for instance, reuse the same field when upscaling screen space reflections, and I'm hoping to use it for smoke particles as well once I get there.
