
Thoughts on shape casting

I implemented a different type of shape casting yesterday. The most common way, to my knowledge at least, is to do it with conservative advancement:

1) Run the GJK algorithm to find the closest point to the origin in configuration space.
2) Use the distance and the angle between the cast direction and the direction to the closest point to compute a safe distance to advance (cast) the configuration space object towards the origin.
3) Repeat until the distance is below a certain threshold, or until the direction to the origin points away from the cast direction (in which case it is a miss).

Even though this is an iterative method relying on GJK, which is itself iterative, it is actually pretty fast, because it converges quickly, typically in two or three iterations, and you can warmstart GJK with the simplex from the previous iteration.
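The loop above can be sketched as follows. This is a minimal stand-in, not the real thing: the two shapes are spheres, so the GJK closest-point query is replaced by an analytic distance, and all names are my own:

```python
import math

def conservative_advancement(ca, ra, cb, rb, direction, max_dist,
                             tol=1e-4, max_iter=32):
    # Cast sphere A (center ca, radius ra) along a unit-length `direction`
    # against a static sphere B (center cb, radius rb). Spheres stand in
    # for GJK here: their closest distance and normal are analytic.
    t = 0.0
    for _ in range(max_iter):
        pa = [ca[i] + direction[i] * t for i in range(3)]
        delta = [cb[i] - pa[i] for i in range(3)]
        center_dist = math.sqrt(sum(x * x for x in delta))
        dist = center_dist - ra - rb        # separation between the surfaces
        if dist < tol:
            return t                        # close enough: report hit time
        n = [x / center_dist for x in delta]
        approach = sum(direction[i] * n[i] for i in range(3))
        if approach <= 0.0:
            return None                     # moving away from B: miss
        t += dist / approach                # safe, conservative advance
        if t > max_dist:
            return None                     # ran past the cast length: miss
    return t
```

With A at the origin and B at (5, 0, 0), both with radius 1, a cast along +x converges to t = 3 in two iterations, while a cast along +y is rejected immediately because the approach rate is zero.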

However, there is a completely different method, which I thought I came up with, but after googling it for a while it seems Jay Stelly has mentioned similar methods before across the Bullet and MollyRocket forums. The method uses MPR on a swept CSO. The idea is to sweep the CSO in the direction of the cast far enough to enclose the origin, and then raycast from the origin to find the distance to the surface.

1) Construct the swept CSO. This may sound complicated, but since support functions are additive it's as easy as adding a vector to the existing support function. The resulting shape will be a swept version of the original CSO. The direction of the vector should be the sweep direction, and the length should be "long enough" to pass the origin. As shown later on, the distance here doesn't really matter much, so just keep it reasonable to avoid numerical issues. I use the distance from the geometric center of the CSO to the origin.
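For illustration, this is what the additive trick could look like for an origin-centered box support function (the names are mine, and a real CSO support would of course combine two shapes as supportA(d) - supportB(-d)):

```python
def support_box(half_extents, d):
    # Support point of an origin-centered axis-aligned box:
    # pick the corner furthest along direction d.
    return [h if dc >= 0.0 else -h for h, dc in zip(half_extents, d)]

def swept_support(support, sweep, d):
    # Support of the swept shape, i.e. the Minkowski sum with the
    # segment [0, sweep]: add the sweep vector only when it pushes
    # the support point further along d.
    p = support(d)
    if sum(s * dc for s, dc in zip(sweep, d)) > 0.0:
        p = [pc + sc for pc, sc in zip(p, sweep)]
    return p
```

For a unit box swept by (3, 0, 0), querying along +x gives the far corner (4, 1, 1), while querying along -x still gives the original corner (-1, 1, 1), exactly as if the box had been dragged three units along x.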

2) Use MPR, or XenoCollide, to find the distance from the origin to the surface of the swept CSO, in the sweep direction. If the origin is not inside the swept CSO, we know there is a miss.

3) Subtract the distance found in step 2) from the distance computed in step 1).

4) That's it! The difference in step 3 is the final sweep distance. Imagine a line going from the original CSO through the origin to the point computed using MPR in step 2). The line will have the same direction as the sweep, and the origin will divide it into two parts: the first part, from the original CSO to the origin, and the second part, from the origin to the surface of the swept CSO. You know the total length of the line (determined in step 1) and you know the length of the second part (computed in step 2), so the first part must be the difference. I know, I know, a picture would be ideal, but I'm lazy. Bug me about it and I might post one :)
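A 1D toy version makes the bookkeeping in steps 1-4 concrete. Here interval arithmetic stands in for both the CSO and the MPR raycast; the setup is entirely my own:

```python
def sweep_interval_to_origin(lo, hi, sweep_len):
    # The "CSO" is the interval [lo, hi] on the +x side of the origin,
    # swept towards the origin (the -x direction) by sweep_len.
    assert 0.0 < lo <= hi
    s_lo, s_hi = lo - sweep_len, hi     # step 1: the swept CSO
    if not (s_lo <= 0.0 <= s_hi):
        return None                     # origin not enclosed: miss
    ray_dist = -s_lo                    # step 2: origin to surface, sweep direction
    return sweep_len - ray_dist         # steps 3/4: the actual sweep distance
```

sweep_interval_to_origin(2.0, 3.0, 5.0) returns 2.0: the interval [2, 3] has to travel exactly 2 units towards the origin before its near face touches it. Using an over-long sweep of 7.0 instead of 5.0 gives the same answer, which is why the sweep length in step 1 only has to be "long enough".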

Although I haven't done any side-by-side comparisons yet, this second approach seems much more attractive to me: it is simpler, less prone to numerical precision issues and (yet to be proven) faster. So if this is a known method, how come everybody is still talking about conservative advancement?


I quickly profiled the new method against the old one (conservative advancement). Each session is 10 000 sweep queries with different angles, computing closest points, sweep distance and contact normal.

Test case                                  Min     Avg     Max
Box - Box (old)                            10 ms   16 ms   30 ms
Box - Box (new)                            11 ms   14 ms   16 ms
Cylinder - Cylinder (old)                  14 ms   40 ms   99 ms
Cylinder - Cylinder (new)                  20 ms   35 ms   53 ms
Polyhedron - Polyhedron, 128 verts (old)   99 ms   130 ms  252 ms
Polyhedron - Polyhedron, 128 verts (new)   98 ms   130 ms  185 ms

It turns out that conservative advancement is slightly faster in the best case, slightly slower on average, but much slower in the worst case, which makes sense since it is doubly iterative. I have not noticed any differences in robustness so far, but I haven't stressed it either.


  1. You can merge the two loops (conservative advancement and GJK) into a single one. Did you try that? One of the Bullet casters implements this.

  2. I didn't try it, but I was under the impression that you couldn't get closest-point information with that method, since the CSO is deforming and the simplex may no longer represent points on the surface. I may very well have misunderstood the method, though. Thanks for the tip anyway; I might have a look at the Bullet implementation!

