### The life of an impulse

For a long time I've wondered how impulses develop inside a rigid body solver, mainly because I'd like to predict the final magnitude as early as possible. I dumped the data for all impulses to a CSV file and graphed it for a couple of simple test scenarios. Let's start with a small test case: a box pyramid of two layers (three boxes). All dirty tricks are switched off, so this is a plain vanilla solver. The Y axis is the magnitude of the normal impulses and the X axis is iterations.
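To make the setup concrete, here is a minimal sketch of the kind of iteration that produces these curves: projected Gauss-Seidel on a small LCP, logging the accumulated impulse magnitudes every iteration (the data you would dump to the CSV). The matrix and right-hand side are illustrative toy values for two coupled contacts, not taken from the actual solver.

```python
# Projected Gauss-Seidel (PGS) on a small LCP:
#   find lambda >= 0 such that A*lambda + b >= 0 (complementary),
# logging the accumulated impulses each iteration.
# A and b are illustrative values, not from the actual solver.

def pgs(A, b, iterations):
    n = len(b)
    lam = [0.0] * n          # accumulated normal impulses
    history = []             # one row of impulse magnitudes per iteration
    for _ in range(iterations):
        for i in range(n):
            # residual: relative velocity along contact i
            r = b[i] + sum(A[i][j] * lam[j] for j in range(n))
            # clamp the ACCUMULATED impulse, not the delta
            lam[i] = max(0.0, lam[i] - r / A[i][i])
        history.append(list(lam))
    return lam, history

# Two coupled contacts (think: a two-layer box stack):
A = [[2.0, 1.0],
     [1.0, 2.0]]
b = [-1.0, -1.0]
lam, history = pgs(A, b, 32)
```

Plotting each column of `history` against the iteration index gives curves of the same flavor as the graphs below.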

At first glance there seem to be no surprises here. The impulses form asymptotic curves towards their solution, and there is not a lot of action after about twelve iterations. But look again, especially at the bright yellow curve, which seems to steadily decrease and form a non-horizontal asymptote. And what about the deviation around iterations eight and nine? Let's try a more complex example: a box pyramid of five rows (15 boxes). There are around 90 contacts, and it would be too messy to show them all in the same graph, so I chose ten of them from the middle of the stack:

This is quite interesting: most of the curves are asymptotic to a non-horizontal line. Even at 64 iterations they have not found a stable configuration. Maybe most interesting is contact 45, which first goes up and then steadily decreases, while most other contacts increase. Somehow the solver is shifting the weight around. Maybe just because it can. Another graph we could study is the relative velocity along the normal direction (v from my previous post) and how it changes while iterating:

This graph is from the same scenario, but not from the same run, so contact numbering might not match exactly. What's interesting here is that all curves steadily decrease after just a few iterations and seem asymptotic to a horizontal line, so our odd fellow above doesn't mean that the velocity is shifting in the same way. Most likely it is an effect of an overdetermined system (with four co-planar contact points we can shift the impulses around a bit and still get the same result; not that the solver really has a good reason to do this, but it seems to do it anyway).

I also tried a variant of the solver that doesn't clamp the accumulated normal impulse. Instead, only the delta impulse is clamped to be positive at each iteration. This prevents the accumulated impulse from ever having a negative inclination:
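As a sketch of what that variant looks like next to standard PGS: the only change is that the per-iteration delta is clamped to be non-negative, so the accumulated impulse can only grow. The matrix and values are again illustrative, not the author's solver.

```python
# Variant: clamp the DELTA impulse to be non-negative instead of
# clamping the accumulated impulse. The accumulated impulse is then
# monotonically non-decreasing. A and b are illustrative values.

def pgs_monotone(A, b, iterations):
    n = len(b)
    lam = [0.0] * n
    history = []
    for _ in range(iterations):
        for i in range(n):
            r = b[i] + sum(A[i][j] * lam[j] for j in range(n))
            delta = max(0.0, -r / A[i][i])  # clamp the delta, not the total
            lam[i] += delta
        history.append(list(lam))
    return lam, history

A = [[2.0, 1.0],
     [1.0, 2.0]]
b = [-1.0, -1.0]
lam, history = pgs_monotone(A, b, 16)
# every impulse curve in history is now non-decreasing
```

Note that because an impulse can never back off, this variant can settle on a different (larger) impulse distribution than the clamped-accumulation solver, which is consistent with the horizontal asymptotes in the graph.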

This is more like what I expected. Each impulse is asymptotic to a horizontal line. Now if only there were a way to predict that asymptote and use it as an initial guess, we would end up with a really stiff solver without having to keep track of contact points from the previous frame. I'm personally quite sceptical that it can be done just by looking at the curves, but it might be worth a shot. What I find most interesting is how good the visual results are when you simply terminate the whole process after three or four iterations, considering how far off the curves still are.

### Comments

1. Interesting reading!

2. I've played around with this idea a bit. I pretty much abandoned the idea because you are likely to overshoot and introduce energy. The solver also becomes less adaptive to changing contact configurations because it is aiming for a steady state that may not exist in the current configuration.

That said, I think it is still an interesting idea and worth pursuing at a research level.

3. My idea was to do a couple of iterations, make an estimate and use that as an input to PGS, so it would still have a chance to compensate for the overshoot. But I agree it's quite unlikely this would actually work...
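(One classical way to make that kind of estimate from a few iterates is Aitken's delta-squared extrapolation, which assumes roughly geometric convergence. Just a sketch of the idea, with a made-up helper; nothing here suggests it actually behaves well inside a projected solver.)

```python
# Aitken delta-squared extrapolation: estimate the limit of a sequence
# from three consecutive iterates, assuming roughly geometric
# convergence. Hypothetical helper, not from the post.

def aitken(x0, x1, x2):
    denom = (x2 - x1) - (x1 - x0)
    if abs(denom) < 1e-12:
        return x2  # already converged, or not geometric
    return x2 - (x2 - x1) ** 2 / denom

# An impulse-like sequence approaching 1.0 geometrically:
seq = [1.0 - 0.5 ** n for n in range(1, 4)]  # 0.5, 0.75, 0.875
estimate = aitken(*seq)
```

For an exactly geometric sequence this recovers the limit from just three samples; for real impulse curves the overshoot the previous comment warns about is exactly the risk.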

4. Wow, good to see you back in physics Dennis!
You just have to give me a call and an update about life and everything!

I think the asymptotic shortcut you are looking for is Chebyshev acceleration. However, it doesn't work overly well for projections.

/Kenneth

### Bokeh depth of field in a single pass

When I implemented bokeh depth of field I stumbled upon a neat blending trick almost by accident. In my opinion, the quality of depth of field has more to do with how objects at different depths blend together than with the blur itself. Sure, bokeh is nicer than gaussian, but if the blending is off the whole thing falls flat. There seem to be many different approaches out there, most of them requiring multiple passes and sometimes a separation of what's behind and in front of the focal plane. I'm not going to get into technical details about lenses, circle of confusion, etc. That has been described very well many times before, so I'm just going to assume you know the basics. I can try to summarize what we want to do in one sentence: render each pixel as a disc whose radius is determined by how out of focus it is, while also taking depth into consideration "somehow".
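A common way to get that per-pixel disc radius is a simple depth-proportional circle of confusion, zero at the focal plane and clamped to some maximum. This is only an illustrative formula with made-up parameter names, not necessarily what the post uses.

```python
# Simple circle-of-confusion radius from depth: zero at the focal
# plane, growing with relative distance from it, clamped to a maximum.
# Illustrative formula and parameters, not from the post.

def coc_radius(depth, focus_dist, aperture, max_radius):
    coc = aperture * abs(depth - focus_dist) / max(depth, 1e-6)
    return min(coc, max_radius)

r_focus = coc_radius(10.0, 10.0, 8.0, 16.0)  # on the focal plane
r_near = coc_radius(5.0, 10.0, 8.0, 16.0)    # closer than focus
```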

### Screen Space Path Tracing – Diffuse

The last few posts have been about my new screen space renderer. Apart from a few details I haven't really described how it works, so here we go. I split the entire pipeline into diffuse and specular light. This post will focus on the diffuse light, which is the hard part. My method is very similar to SSAO, but instead of taking a number of samples on the hemisphere at a fixed distance, I raymarch every sample against the depth buffer. Note that the depth buffer is not a regular, single-value depth buffer; each pixel contains front and back face depth for the first and second layer of geometry, as described in this post. The increment for each step is not view dependent, but fixed in world space, otherwise shadows would move with the camera. I start with a small step and then increase the step exponentially until I reach a maximum distance, at which point the ray is considered a miss. Needless to say, raymarching multiple samples for every pixel is very costly, and this is with
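The exponential stepping described above can be sketched like this: a small initial step multiplied by a growth factor each iteration, stopping once the next step would pass the maximum distance (a miss). The actual step size, growth factor, and cutoff are unknown; these values are placeholders.

```python
# Exponentially growing raymarch step distances, fixed in world space:
# start small and multiply the step each iteration until the ray would
# exceed the maximum distance (counted as a miss).
# start_step, growth, and max_dist are illustrative placeholders.

def raymarch_steps(start_step, growth, max_dist):
    steps = []
    t = 0.0
    step = start_step
    while t + step <= max_dist:
        t += step
        steps.append(t)  # sample distance along the ray
        step *= growth
    return steps

dists = raymarch_steps(0.05, 1.5, 4.0)
```

Each sample distance would then be tested against the two-layer depth buffer to decide whether the ray is occluded at that point.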

### Stratified sampling

After finishing my framework overhaul I'm now back on hybrid rendering and screen space raytracing. My first plan was to just port the old renderer to the new framework, but I ended up rewriting all of it instead, finally trying out a few things that have been on my mind for a while. I've been wanting to try stratified sampling for a long time as a way to reduce noise in the diffuse light. The idea is to sample the hemisphere within a fixed set of strata instead of completely at random, giving a more uniform distribution. The direction within each stratum is still random, so it still covers the whole hemisphere and converges to the same result, just in a slightly more predictable way. I won't go into more detail; full explanations are all over the Internet, for instance here. Let's look at the difference between stratified and uniform sampling. To make a fair comparison there is no lighting in these images, just ambient occlusion and an emissive object.
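The stratification idea can be sketched as follows: split the unit square into an N-by-N grid, take one jittered sample inside each cell, and map the pair onto the hemisphere. The cosine-weighted mapping and the stratum count here are illustrative choices, not necessarily what the renderer uses.

```python
import math
import random

# Stratified cosine-weighted hemisphere sampling: one jittered sample
# per cell of an n x n grid over the unit square, mapped onto the
# hemisphere around +Z. Sketch only; mapping and stratum count are
# illustrative choices.

def stratified_hemisphere(n, rng=random.random):
    dirs = []
    for i in range(n):
        for j in range(n):
            u = (i + rng()) / n  # jittered within stratum (i, j)
            v = (j + rng()) / n
            phi = 2.0 * math.pi * u
            r = math.sqrt(v)
            x = r * math.cos(phi)
            y = r * math.sin(phi)
            z = math.sqrt(max(0.0, 1.0 - v))  # cosine-weighted lobe
            dirs.append((x, y, z))
    return dirs

samples = stratified_hemisphere(4)  # 16 directions on the upper hemisphere
```

Each stratum still contains a random direction, so the estimator converges to the same result as uniform sampling; the jittered grid just prevents samples from clumping.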