
Explaining the rigid body solver

Following my last post about scientific papers not being written for engineers, I will attempt to explain a rigid body solver without equations:

Even though a rigid body scene may consist of hundreds of objects and thousands of contact points, a popular way to solve the problem is to solve each contact point in sequential order, one at a time. It sounds kind of lame, and compared to other methods it is, but iterated a couple of times it gives really good results, and this is what most games actually use, so let's focus on solving one contact without friction first:
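To make that structure concrete, here is a rough sketch of the outer loop in Python. The names (contacts, solve_contact) are hypothetical placeholders; the per-contact solve itself is sketched further down.

```python
# Hypothetical sketch of the sequential solver structure: visit every contact
# point in order, and repeat the whole sweep a few times so the results settle.
def solve(contacts, iterations=8):
    for _ in range(iterations):
        for contact in contacts:
            solve_contact(contact)   # one contact at a time, sketched below
```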

So you have two objects and one contact point with a contact normal. Start by computing the velocity at the contact point for both objects, then compute the difference between those two vectors. Project that difference onto the contact normal (dot product). This is the contact's relative velocity along the contact normal, and it indicates how much the objects are moving towards or away from each other at the contact point. Let's call this velocity v. If v is positive, the objects are moving away from each other and we're done. If v is negative we need to move on to computing and applying an impulse.
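As a sketch of that computation, assuming each body exposes its center of mass position, linear velocity and angular velocity as numpy arrays (the attribute names are mine, not from the post), and that the normal points from body_a towards body_b so a negative result means the bodies are approaching:

```python
import numpy as np

def relative_normal_velocity(body_a, body_b, point, normal):
    # Offsets from each body's center of mass to the contact point.
    ra = point - body_a.position
    rb = point - body_b.position
    # Velocity of the contact point on each body: linear + angular x offset.
    va = body_a.linear_velocity + np.cross(body_a.angular_velocity, ra)
    vb = body_b.linear_velocity + np.cross(body_b.angular_velocity, rb)
    # Project the velocity difference onto the contact normal.
    return np.dot(vb - va, normal)
```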

This is the key computation in the solver. The ultimate question we want to answer is - how big of an impulse do we need to apply to make the objects stop moving towards each other? The direction of the impulse is going to be the contact normal (since there is no friction yet), so we're only looking for the magnitude - a scalar quantity. The velocity v that we just computed is also a scalar quantity. The impulse magnitude will be proportional to v - the more the objects are approaching each other, the bigger the impulse we need to apply. Easy! So what we're really looking for is the proportionality factor between those two (another scalar quantity).

Let's assume we're applying an impulse of magnitude 1.0 (unit impulse) and see how big of a velocity change that would cause. The beauty of linear systems is that everything scales.. well.. linearly, so if we know how much of a velocity change a unit impulse causes we can just compare it to the desired velocity change, v, and adjust it accordingly. Say a unit impulse would cause a velocity change of 0.25 and v is -0.5, then we just apply two unit impulses (magnitude 2.0) and we'll reach our target relative normal velocity (zero). So make a copy of the linear and angular velocities for the two objects, apply an equal but opposite unit impulse and measure the relative contact velocity again following the exact same procedure as described above. Now that you know how much velocity change a unit impulse causes, the final computation boils down to a simple division. That's it folks. Apply the impulse you just computed on both objects in opposite directions and they will not move towards each other any more at the contact point.
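Here is a sketch of that procedure, building on relative_normal_velocity above. It assumes each body also carries an inverse mass (inv_mass) and an inverse inertia tensor (inv_inertia, a 3x3 numpy array); those names are mine. A real engine would compute the velocity change per unit impulse analytically instead of copying the bodies, but the copy-and-measure version follows the text directly.

```python
import copy

def apply_impulse(body, impulse, r):
    # Linear velocity changes by impulse / mass,
    # angular velocity by inverse inertia times (r x impulse).
    body.linear_velocity += impulse * body.inv_mass
    body.angular_velocity += body.inv_inertia @ np.cross(r, impulse)

def solve_contact_normal(body_a, body_b, point, normal):
    v = relative_normal_velocity(body_a, body_b, point, normal)
    if v > 0.0:
        return 0.0                      # already separating, nothing to do

    # Measure how much one unit of impulse changes the relative normal
    # velocity by applying it to throwaway copies of both bodies.
    a, b = copy.deepcopy(body_a), copy.deepcopy(body_b)
    ra, rb = point - a.position, point - b.position
    apply_impulse(a, -normal, ra)       # equal but opposite unit impulses
    apply_impulse(b, +normal, rb)
    dv_per_unit = relative_normal_velocity(a, b, point, normal) - v

    # The simple division: how many unit impulses to bring v up to zero?
    magnitude = -v / dv_per_unit

    # Apply the real impulse to the real bodies, in opposite directions.
    apply_impulse(body_a, -magnitude * normal, point - body_a.position)
    apply_impulse(body_b, +magnitude * normal, point - body_b.position)
    return magnitude
```

With the example numbers from the text, dv_per_unit would come out as 0.25 and v as -0.5, so magnitude ends up at 2.0.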

Friction can be handled in exactly the same way as described above, but in the direction of the tangential relative velocity at the contact point. Friction impulses need to be capped to the magnitude of the normal impulse scaled by the friction coefficient, so if the normal impulse is 2.0 and the friction coefficient is 0.9, your maximum friction impulse is 1.8. After you compute the friction impulse magnitude using the method above, if it comes out larger than 1.8, just apply 1.8.
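A sketch of the friction step under the same assumptions, reusing the helpers above; the min() at the end is the cap described in the text.

```python
def solve_contact_friction(body_a, body_b, point, normal,
                           normal_impulse, friction_coeff):
    ra = point - body_a.position
    rb = point - body_b.position
    va = body_a.linear_velocity + np.cross(body_a.angular_velocity, ra)
    vb = body_b.linear_velocity + np.cross(body_b.angular_velocity, rb)
    rel = vb - va
    # Tangential part of the relative velocity: remove the normal component.
    tangential = rel - np.dot(rel, normal) * normal
    speed = np.linalg.norm(tangential)
    if speed < 1e-9:
        return                          # not sliding, nothing to do
    tangent = tangential / speed

    # Same unit-impulse trick as for the normal, but along the tangent.
    a, b = copy.deepcopy(body_a), copy.deepcopy(body_b)
    apply_impulse(a, +tangent, ra)
    apply_impulse(b, -tangent, rb)
    dv_per_unit = relative_normal_velocity(a, b, point, tangent) - speed

    # Impulse that would stop the sliding entirely...
    magnitude = -speed / dv_per_unit
    # ...capped by the friction coefficient times the normal impulse.
    magnitude = min(magnitude, friction_coeff * normal_impulse)

    apply_impulse(body_a, +magnitude * tangent, ra)
    apply_impulse(body_b, -magnitude * tangent, rb)
```

With a normal impulse of 2.0 and a friction coefficient of 0.9, the cap limits the friction impulse to 1.8, as in the example above.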

Now to make the solver stable you need to go over all contacts several times, and there is also a method called "accumulated impulses" that improves accuracy a lot, which is not covered here. But most importantly, you need to compensate for penetration. This is usually done in a very pragmatic way - if the objects are penetrating, adjust the target relative contact velocity so that it is not zero, but slightly positive (the more penetration, the more positive). This means that after we're done solving, the objects will move slightly away from each other along the contact normal instead of not moving at all.
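One common way to implement that adjustment is a Baumgarte-style bias; the constants below and the extra penetration and dt inputs are placeholder assumptions, not from the post. It amounts to replacing the `magnitude = -v / dv_per_unit` line in the normal sketch with something like:

```python
# Drive the relative normal velocity to a small positive target instead of
# zero when the contact is penetrating, so the bodies separate slightly.
beta, slop = 0.2, 0.005                          # tuning constants (assumed)
target = beta * max(penetration - slop, 0.0) / dt
magnitude = (target - v) / dv_per_unit
```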

Comments

  1. Great, although the paragraph about calculating the final impulse isn't described completely clearly, but I guess that's because you didn't want to use formulas and units here.
    Good job :)

  2. Thanks Dennis. This was concise and extremely useful.

