
Visualizing complex algorithms

I have had this idea for a long time about some tool to follow the steps of a complicated algorithm in some way other than stepping through it in the debugger, trying to mentally visualize what's going on and frenetically scribbling unrecognizable geometrical figures on envelopes and old receipts.

A simple visualization sandbox is useful for inspecting the end result, but it's really hard to visualize the intermediate steps. GJK (the Gilbert-Johnson-Keerthi distance algorithm) is a good example of a complex, iterative algorithm that has a visual representation at every step. A typical demo application would visualize the resulting closest points, normal, etc, but not the internal steps of the algorithm itself, so if something goes wrong, you're screwed. The typical way to deal with this is to write parts of the algorithm in an isolated visualization environment and then lift them over to the real implementation once they are "tested". However, for visualization code to be useful, it needs to stay in the actual implementation, so that when real-world problems start coming in, they can be tracked down quickly.

To do this properly, you would need to step the algorithm from the outside - that is, add an extra API to control it and poll the relevant information at every step. Such an API would likely be substantially larger than the original one, and it's definitely not something you would want to ship.

Another approach would be to add visualization code to the algorithm implementation itself, but this does not tie in very nicely with modern rendering engines. Say you want to use OpenGL for rendering. It would be convenient to draw lines, points, text, etc and then wait for some input event before going any further. However, the way OpenGL (or D3D) works, you fill a back buffer and won't see anything until the entire frame is drawn. There might be a way to get around this, but it would be hard to integrate into any other application. I was for a moment considering running the rendering loop in a separate thread and feeding the visualization thread with commands from the algorithm thread, including a pause command that would block until the user wants to step further.

The solution I finally settled on was an event stream that the algorithm implementation fills in through macros, plus a visualization framework that renders the stream. Hence, the algorithm runs all the way from beginning to end, recording visualization events along the way. There is a pause event which indicates that the visualization framework should wait for user input before continuing playback.
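
To make this concrete, here is a minimal sketch of what the recording side could look like. The macro names match the snippet further down, but the Event struct, the global stream and everything else here are my own assumptions about how it might be implemented, not the actual code (the push and pop events are explained below):

#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 u, Vec3 v) { return {u.x + v.x, u.y + v.y, u.z + v.z}; }

struct Event
{
    enum Type { Print, Point, Line, Pause, Push, Pop } type;
    std::string text;  // used by Print
    Vec3 a, b;         // used by Point (a) and Line (a and b)
};

static std::vector<Event> g_events;  // the recorded stream

#define DEBUG_PRINT(s)   g_events.push_back({Event::Print, (s)})
#define DEBUG_POINT(p)   g_events.push_back({Event::Point, "", (p)})
#define DEBUG_LINE(p, q) g_events.push_back({Event::Line, "", (p), (q)})
#define DEBUG_PAUSE()    g_events.push_back({Event::Pause})
#define DEBUG_PUSH()     g_events.push_back({Event::Push})
#define DEBUG_POP()      g_events.push_back({Event::Pop})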

I also needed something for removing graphics that were previously added. Say you have the normal of a plane, and the plane changes every iteration. One way to handle that would be to keep references to all the events and add explicit remove events, but that would quickly get quite messy. Instead, I added push and pop commands, so a subset of the graphics can be visualized and then removed before going any further. My visualization app can walk both forwards and backwards in the recorded event stream, which makes it easy to follow what's going on. It turned out to be quite useful and it doesn't bloat the code too much, typically something like this:

DEBUG_PRINT("Computing stuff");
for(int i=0; i<10; i++)
{
...
computePoint(p);
computeNormal(n);
DEBUG_PUSH();
DEBUG_PRINT("Point and normal at iteration " + i);
DEBUG_POINT(p);
DEBUG_LINE(p, p+n);
DEBUG_PAUSE();
DEBUG_POP();
}
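
For the playback side, one simple way for a viewer to support both forward and backward stepping is to replay the stream from the beginning up to the Nth pause event, honoring push/pop scopes along the way; stepping backward is then just replaying up to pause N-1. A sketch of that idea, reusing the hypothetical Event struct from above (renderEvent stands in for the actual drawing code):

void renderEvent(const Event& e);  // stand-in for the code that draws points, lines and text

void drawUpToPause(const std::vector<Event>& events, int pauseIndex)
{
    std::vector<size_t> scopeStack;  // size of 'visible' at each open push
    std::vector<Event> visible;      // what should currently be on screen
    int pausesSeen = 0;

    for (const Event& e : events)
    {
        if (e.type == Event::Push)
        {
            scopeStack.push_back(visible.size());
        }
        else if (e.type == Event::Pop)
        {
            visible.resize(scopeStack.back());  // discard everything since the push
            scopeStack.pop_back();
        }
        else if (e.type == Event::Pause)
        {
            if (pausesSeen++ == pauseIndex)
                break;  // this is the frame we want to show
        }
        else
        {
            visible.push_back(e);  // Print, Point and Line just accumulate
        }
    }

    for (const Event& e : visible)
        renderEvent(e);
}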

This has proven to be really useful for debugging GJK, EPA (expanding polytope algorithm), MPR (Minkowski portal refinement), etc. I run automatic unit tests on distance queries, save the state for failing configurations and load them up in the debugger to manually inspect what went wrong. It gives a pretty clear view of various precision problems, termination criteria, etc that would be very hard to get an intuitive understanding of otherwise.
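
As a rough illustration of that workflow (the post doesn't show the actual harness, so every function and type name here is a hypothetical placeholder):

#include <string>

struct Shape { /* convex hull data would go here */ };

// All hypothetical placeholders, not a real API:
Shape randomConvexShape(unsigned seed);
bool gjkDistance(const Shape& a, const Shape& b, float* distOut);  // false if the query failed
void saveConfiguration(const std::string& path, const Shape& a, const Shape& b);

void fuzzDistanceQueries(int count)
{
    for (int i = 0; i < count; i++)
    {
        Shape a = randomConvexShape(i);
        Shape b = randomConvexShape(i + count);

        // "Failed" could mean non-convergence, or a mismatch against a
        // brute-force reference distance within some tolerance.
        float dist;
        if (!gjkDistance(a, b, &dist))
        {
            // Save the exact failing input so it can be loaded later and
            // stepped through, pause by pause, in the visualizer.
            saveConfiguration("fail_" + std::to_string(i) + ".bin", a, b);
        }
    }
}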


(Image: My visualization app in the middle of a penetration depth calculation.)

The downside of my approach is that you can't combine it with stepping through the code in a regular debugger. It would be quite nifty to see the graphical representation update while stepping through the code. I guess I could pass the event stream over a network socket to a different process, but that sounds just a tad too ambitious for my needs at the moment :)
