Tuesday, March 16, 2010

Trapped in configuration space

I've spent the last couple of weeks tinkering with GJK and ended up rewriting it from scratch. Ever since I saw Casey's video about SAT-GJK a couple of years ago I've wanted to try it, so I finally did. Now, Casey's video only deals with binary overlap status, not closest points, but it was relatively easy to extend his approach and even do a linear sweep operation. I especially like how his method does not include any divisions or square roots, which makes it very robust. I must add, though, that it's not straightforward to get a robust implementation that works on both polyhedra and quadric objects. The termination criteria in particular needed a lot of attention.
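To illustrate the basic building block, here is a rough sketch (with made-up names, not the actual code from my implementation) of the support mapping of the configuration space object, i.e. the Minkowski difference A - B, assembled from the two shapes' own support functions. Note that no divisions or square roots show up:

// Illustrative sketch only. The configuration space object is the Minkowski
// difference A - B; its support point in a direction is built from the two
// shapes' individual support functions.
struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 neg(Vec3 a)         { return { -a.x, -a.y, -a.z }; }

struct ConvexShape
{
    // Returns the point of the shape farthest in direction d (world space).
    virtual Vec3 support(Vec3 d) const = 0;
};

// Support point of the configuration space object A - B in direction d.
Vec3 csoSupport(const ConvexShape& a, const ConvexShape& b, Vec3 d)
{
    return sub(a.support(d), b.support(neg(d)));
}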

I knew from the beginning that decent visualization tools would be required to pull off a new GJK implementation, and it's certainly not trivial to visualize all the information needed. There are the two objects, the configuration space object, the simplex (which comes in four flavors), normals, closest points, and so on. Here's a screenshot from my test bed in the middle of a GJK distance calculation. The algorithm can be stepped and visualized at both the add and the reduce step of each iteration:


On the left is the configuration space object; the red triangle is the current simplex, with its closest point and normal. The corresponding object space closest points are also visualized on the two objects. I could have spent even more time on visualization, and I might just do that, because next I want to try out a couple of ideas for penetration depth computation.

The new implementation is both more robust and faster than the old one from Meqon, which uses Johnson's distance algorithm from the original paper. I'm not sure how it compares to the Bullet implementation with its geometric distance algorithm. Part of the performance increase is due to a more efficient caching scheme. The old version only supported an initial 1-simplex, the previous frame's closest points, while the new one warm-starts from the whole previous simplex. That is a bit more complicated, since the cached simplex can deform badly due to motion, but it really pays off, and most of the time the algorithm terminates immediately after the first iteration.
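Roughly, the warm start amounts to something like the following sketch (reusing the made-up names from the snippet above; the real code also has to guard against a cached simplex that has degenerated due to motion):

// Sketch of warm-starting GJK from the previous frame's full simplex.
struct Simplex
{
    Vec3 points[4];   // simplex vertices in configuration space
    int  count;       // 1..4
};

struct GjkCache
{
    Simplex simplex;  // full simplex from the previous frame
    bool    valid;
};

float gjkDistance(const ConvexShape& a, const ConvexShape& b, GjkCache& cache)
{
    Simplex s;
    if (cache.valid)
    {
        s = cache.simplex;               // warm start: reuse the whole previous simplex
    }
    else
    {
        s.count = 1;                     // cold start from a single support point
        Vec3 d = { 1.0f, 0.0f, 0.0f };
        s.points[0] = csoSupport(a, b, d);
    }

    // ... the usual add/reduce iterations run on s here ...
    // With little relative motion the cached simplex already contains the
    // closest feature, and the loop terminates after the first iteration.

    cache.simplex = s;                   // store the full simplex for the next frame
    cache.valid   = true;

    return 0.0f;                         // placeholder; the real loop returns the distance
}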

Here are some performance comparisons:

Scene with 16-vert polyhedra
Old: 22.5 ms
New: 17.1 ms
New+caching: 9.3 ms

Scene with cylinders
Old: 2.6 ms
New: 2.5 ms
New+caching: 1.7 ms

Scene with boxes
Old: 10.7 ms
New: 8.8 ms
New+caching: 6.3 ms

Scene with mixed objects
Old: 19.3 ms
New: 13.9 ms
New+caching: 6.9 ms

So, depending on the scenario and how much motion there is, the new version ranges from slightly faster to almost three times as fast. More important, though, is that the new version is more numerically robust.

I find it quite fascinating how Casey's approach with caching is very similar to the old Lin-Canny closest-feature algorithm. Both methods are based on Voronoi regions, but Casey's operates in configuration space while Lin-Canny works in object space. Casey's approach uses implicit Voronoi regions determined by support functions, whereas Lin-Canny uses explicit, pre-computed Voronoi regions, but the underlying principle is the same: the closest distance is always found within the Voronoi regions of a particular feature pair.
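As a concrete example of such an implicit test, here is a sketch (again with the made-up names from the snippets above) of the Voronoi region lookup for a 1-simplex in configuration space, done purely with sign tests:

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

enum SegmentRegion { REGION_VERTEX_A, REGION_VERTEX_B, REGION_EDGE };

// Which Voronoi region of the 1-simplex AB contains the origin?
// Only the signs of two dot products are needed; no divisions, no square roots.
SegmentRegion segmentVoronoiRegion(Vec3 a, Vec3 b)
{
    Vec3 ab = sub(b, a);
    if (dot(ab, neg(a)) <= 0.0f)        // origin lies behind A along AB
        return REGION_VERTEX_A;
    if (dot(neg(ab), neg(b)) <= 0.0f)   // origin lies beyond B along AB
        return REGION_VERTEX_B;
    return REGION_EDGE;                 // origin projects onto the interior of AB
}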

What currently keeps me awake at night is whether there might be an implicit configuration space counterpart to V-Clip, which would then work for penetrating objects as well. The only implicit penetration depth method I'm aware of that has been proven to work is EPA, but it's so awkward that I'm not even going to try implementing it. Intuitively, I feel there must be a simpler way to compute exact penetration depth that doesn't involve dynamic data structures.

4 comments:

  1. Why not use SAT as a penetration depth algorithm? It is simple and stable.

  2. This comment has been removed by the author.

  3. It's not a bad idea, but it's kind of nice to support all convex shapes with the same algorithm. Small penetrations I handle by just reusing the last valid contact normal, so a deep-penetration solver wouldn't be used very often.

  4. Caching of simplices was already part of Stephen Cameron's GJK implementation. Things can get nasty when the cached simplex has rotated in the new frame so that the newly added support point creates a simplex that is almost flat (affinely dependent). For resting contacts this does not happen, so you will see performance gains there without the numerical issues.

    If you take a good stare at Johnson's formula you'll see that it is also testing for containment of the origin in the Voronoi regions of the simplex. The sign tests of the determinants are actually the plane tests of the Voronoi regions. So the only thing that Casey is doing differently is that he does not rely on computing the closest point (and normal to the sub-simplex) using the barycentric coordinates found by Johnson's algorithm. Taking an explicit normal from a sub-simplex is cheaper and less noisy, but requires you to maintain proper handedness of the simplex or to add a case distinction based on which side of the plane the origin is located. If you do need a closest point, you may find that the projection of the origin onto the sub-simplex does not lie on the sub-simplex at all due to rounding noise in the computed normal. With Johnson's algorithm both the computation of the normal and the closest point rely on the same set of barycentric coordinates, so the closest point is guaranteed to be an internal point of the last found simplex. Then again, going through barycentric coordinates adds more noise to the final result, so you will have more trouble terminating. Either way, both approaches require additional hacks for termination in all cases.

    In my case, one of these hacks involves checking for actual progress. If the distance does not drop significantly each iteration, then it is safe to assume that we have run into numerical issues. Comparing distances needs a division. (You can use rational numbers, but this becomes messy pretty quickly.) On the bright side, a single GJK iteration (including support point computation) takes thousands of cycles. A division takes roughly 50, and we need only one division per iteration, so the added division is not going to hit us hard. Furthermore, divisions hardly have an influence on the robustness. It's the additions and subtractions that make our lives miserable.

    BTW, square roots are never needed in GJK, that is, if your objects are (dilated) polytopes (polyhedra, triangles, spheres, capsules).

    Gino
