
Putting on the shades

For rendering, I wanted to go one step (but not more than one step) beyond the flat-shaded polygons that are typical for physics demos. I wanted shadows, but I have never been a big fan of the sharp edges you get from shadow volumes or even shadow maps. Soft shadows are really what enable a whole new level of realism.

I worked on an ambient occlusion project for Ageia a couple of years ago, just before they were acquired by Nvidia, and I think it's a really good alternative to real global illumination. My approach at the time was to compute ambient occlusion in 3D on the second-generation PhysX hardware (which was never released) and dynamically update low-res light maps for the parts of the scene that changed. It had good potential, but shortly after that Crysis was released and the screen-space methods (SSAO) started taking off. I've been curious about SSAO ever since but never got around to implementing one, so I thought this was a good opportunity.

I found this article, which is a really good introduction. I didn't end up using exactly that method, but it's quite similar. Vertex positions and normals are rendered to an FBO, then the occlusion is computed into another FBO, and the final image is rendered to the framebuffer using deferred lighting. During the final pass the occlusion values are blurred, using only samples that lie in the same plane. Ideally one would do this with a full Gaussian kernel, but I only blur horizontally and vertically to save some shader cycles. I'm still on my three-year-old MacBook Pro with an ATI X1600... I'm also blurring the normals a tiny bit during the final pass to soften the edges.
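To make that a bit more concrete, here is a small CPU-side C++ sketch of the two ideas in the paragraph above: an occlusion estimate built from nearby G-buffer samples, and a one-dimensional blur that only averages occlusion values lying in roughly the same plane as the center pixel. This is not the actual shader code; the buffer layout, sample counts, falloff and the plane threshold are assumptions made purely for illustration.

```cpp
// Sketch of the occlusion pass and the plane-aware blur described above.
// All constants and the buffer layout are assumed, not taken from the real shaders.
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 a)      { return std::sqrt(dot(a, a)); }

struct GBuffer {                    // what the first pass writes to the FBO
    int w, h;
    std::vector<Vec3> position;     // view-space position per pixel
    std::vector<Vec3> normal;       // view-space normal per pixel
};

// Occlusion for one pixel: sample a handful of nearby pixels and accumulate
// how much they sit in front of the center pixel's tangent plane.
float occlusion(const GBuffer& g, int x, int y, int radius = 8, int samples = 16)
{
    Vec3 p = g.position[y * g.w + x];
    Vec3 n = g.normal[y * g.w + x];
    float occ = 0.0f;
    for (int i = 0; i < samples; ++i) {
        int sx = x + (std::rand() % (2 * radius + 1)) - radius;   // random offset in a
        int sy = y + (std::rand() % (2 * radius + 1)) - radius;   // screen-space window
        sx = std::clamp(sx, 0, g.w - 1);
        sy = std::clamp(sy, 0, g.h - 1);
        Vec3 d = sub(g.position[sy * g.w + sx], p);
        float dist = length(d);
        if (dist < 1e-4f) continue;
        // Occluders above the tangent plane contribute, attenuated by distance.
        float facing = std::max(0.0f, dot(n, {d.x / dist, d.y / dist, d.z / dist}));
        occ += facing / (1.0f + dist);
    }
    return 1.0f - occ / samples;    // 1 = fully open, 0 = fully occluded
}

// One direction of the final-pass blur: average occlusion along a horizontal
// (dx=1,dy=0) or vertical (dx=0,dy=1) line, keeping only samples that lie
// roughly in the same plane as the center pixel so shadows don't bleed across edges.
float blurOcclusion1D(const GBuffer& g, const std::vector<float>& occ,
                      int x, int y, int dx, int dy, int taps = 4)
{
    Vec3 p = g.position[y * g.w + x];
    Vec3 n = g.normal[y * g.w + x];
    float sum = 0.0f, weight = 0.0f;
    for (int i = -taps; i <= taps; ++i) {
        int sx = std::clamp(x + i * dx, 0, g.w - 1);
        int sy = std::clamp(y + i * dy, 0, g.h - 1);
        // Distance from the sample to the center pixel's tangent plane.
        float planeDist = std::fabs(dot(n, sub(g.position[sy * g.w + sx], p)));
        if (planeDist < 0.05f) {    // "same plane" threshold (assumed value)
            sum += occ[sy * g.w + sx];
            weight += 1.0f;
        }
    }
    return weight > 0.0f ? sum / weight : occ[y * g.w + x];
}
```

Running the 1D blur once horizontally and once vertically gives the cross-shaped filter mentioned above, which is cheaper than a full 2D Gaussian kernel.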

I haven't spent an awful lot of time on the rendering part, but I'm quite happy with it for the time being. Here's a video demonstrating the scene with and without ambient occlusion.





