
Adventures in Screen Space

Eight years ago, just when I first started writing this blog, my second post was about screen space ambient occlusion. I used that renderer for all my physics experiments, leading up to the fluid simulation that became Sprinkle. At that point I left desktop computing in favor of mobile devices. Ten games later I'm now back on desktop machines, and I'm completely blown away by all the computing power.

For the first Sprinkle game I had to make dedicated geometry with holes in it when drawing large alpha-blended overlays because the fill rate was so terrible. Now I'm running hundreds of lines of code doing really complex computations per pixel. Sorry, you have probably already adjusted, but this will take me a while.

So what would be more fitting than to freshen up that old physics renderer (well, more like starting from scratch, but still)? I have been wanting to experiment with physics in VR for a while, and now is the time. For this I need a renderer that can handle a truly dynamic world with no precomputed lighting.


I have implemented screen space ambient occlusion with temporal reprojection filtering, which takes a lot of the noise away without smearing out the result.

I've always hated shadow maps. They are hard to implement and the result is usually disappointing, so for this renderer I tried doing shadows entirely in screen space, ray marching towards the light source. It's a bit of an experiment, but I find the results really interesting. The characteristics are very different from regular shadow maps – instead of precise but jagged shadows, this method gives imprecise but smooth, blurry shadows. I can't really decide if I like it or not. For a sunny outdoor setting regular shadow maps are probably better, but for more diffuse, indoor lighting this is quite promising.
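For illustration, here is a minimal sketch of that kind of screen space shadow ray march – not the actual code from this renderer. It assumes a linear depth buffer and a light direction already projected into screen space, and all names (`DepthBuffer`, `numSteps`, `thickness`) are made up for the example:

```cpp
#include <algorithm>
#include <vector>

// Illustrative linear depth buffer (view-space depth per pixel).
struct DepthBuffer {
    int width, height;
    std::vector<float> depth;

    float sample(float u, float v) const {
        int x = std::clamp(int(u * width), 0, width - 1);
        int y = std::clamp(int(v * height), 0, height - 1);
        return depth[y * width + x];
    }
};

// Returns an occlusion factor: 0 = fully shadowed, 1 = fully lit.
// (u, v) is the pixel in [0,1] screen space, pixelDepth its linear depth,
// (lightDirU, lightDirV, lightDirZ) the direction towards the light,
// pre-projected into screen space (an assumption of this sketch).
float screenSpaceShadow(const DepthBuffer& db,
                        float u, float v, float pixelDepth,
                        float lightDirU, float lightDirV, float lightDirZ,
                        int numSteps = 16, float thickness = 0.05f)
{
    for (int i = 1; i <= numSteps; ++i) {
        float t = float(i) / float(numSteps);
        float su = u + lightDirU * t;          // step towards the light
        float sv = v + lightDirV * t;
        float sz = pixelDepth + lightDirZ * t; // expected depth along the ray
        if (su < 0 || su > 1 || sv < 0 || sv > 1) break;
        float sceneDepth = db.sample(su, sv);
        // If the scene surface is in front of the ray point (within a
        // thickness margin), something blocks the light.
        if (sceneDepth < sz - 1e-4f && sz - sceneDepth < thickness)
            return 0.0f;
    }
    return 1.0f;
}
```

The raw result per pixel is binary and therefore noisy; it's the temporal reprojection filter on top that turns it into the smooth, blurry look described above.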
There is also depth of field close to the camera, done in four passes at half resolution, and motion blur on everything. I'm going for an old, analogue look in the final result, so any imperfection that can tone down the artificial computer graphics feel is a good thing. The ambient occlusion and screen space shadows do add a little bit of noise, but there is one cheap and paradoxically effective way of hiding unwanted noise: add more noise. So at the final stage of the pipeline I add 5-7% of greyscale noise, which hides some of the noise in occluded areas and adds to the analogue look.
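As a sketch of what such a grain pass can look like (my assumptions, not the exact code): hash the pixel coordinate and frame number into a pseudo-random value and mix a few percent of it equally into all three channels. The hash below is a known integer hash (lowbias32); the post doesn't say which one is actually used.

```cpp
#include <cstdint>

// Integer hash (lowbias32 by Chris Wellons) – one common choice.
static uint32_t hash(uint32_t x) {
    x ^= x >> 16; x *= 0x7feb352dU;
    x ^= x >> 15; x *= 0x846ca68bU;
    x ^= x >> 16;
    return x;
}

// Mix ~5-7% greyscale noise into a color at the end of the pipeline.
// 'frame' varies the pattern every frame so the grain animates like film.
void addGrain(float rgb[3], int x, int y, int frame, float amount = 0.06f) {
    uint32_t h = hash(uint32_t(x) * 1973u ^ uint32_t(y) * 9277u
                      ^ uint32_t(frame) * 26699u);
    float n = (h / float(0xffffffffu)) - 0.5f; // centered in [-0.5, 0.5]
    for (int i = 0; i < 3; ++i)
        rgb[i] += n * amount; // same value per channel = greyscale grain
}
```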


I have a bloom pass as well, and I just started playing with tone mapping. I'm not sure I'm really getting it, but I'll keep experimenting. For anti-aliasing, my friend Ludde Andersson over at Scaupa pointed me to a temporal reprojection method that I found very interesting. Since I'm already doing temporal reprojection for the occlusion and shadows, it was quite easy to do the same for anti-aliasing. The idea is to move the viewport at sub-pixel resolution every frame and smooth out the result with an accumulation buffer. It also turned out that one of my absolute favourite games, Inside, has a great presentation on the topic from last year's GDC. The results are absolutely stunning. I'm not sure I have ever come across a new rendering technique that is so clever and simple yet produces such fantastic results with almost no computational overhead. Am I missing something, or why isn't everybody using this?
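Here is a minimal sketch of how I understand the core of the technique; the specific jitter sequence (Halton) and blend factor are my assumptions, not details from the post:

```cpp
// Halton sequence: a standard low-discrepancy sequence, commonly used for
// the per-frame sub-pixel jitter.
float halton(int index, int base) {
    float result = 0.0f, f = 1.0f;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}

// Sub-pixel jitter for frame n, in pixels, centered around zero.
void taaJitter(int frame, float& dx, float& dy) {
    int i = (frame % 8) + 1;        // cycle through 8 sample positions
    dx = halton(i, 2) - 0.5f;       // in [-0.5, 0.5] pixels
    dy = halton(i, 3) - 0.5f;
    // Typically applied by offsetting the projection matrix, e.g. for a
    // column-major OpenGL-style matrix (again, an assumption):
    // proj[2][0] += 2.0f * dx / width;
    // proj[2][1] += 2.0f * dy / height;
}

// Exponential accumulation: blend the jittered new frame into the history
// buffer. alpha around 0.1 is a typical choice.
float accumulate(float history, float current, float alpha = 0.1f) {
    return history + alpha * (current - history);
}
```

With a still camera this converges to a supersampled image; under motion, the history is reprojected with the previous frame's matrices before blending, just like for the occlusion and shadows.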

