
Thoughts on shape casting

I implemented a different type of shape casting yesterday. The most common way, to my knowledge at least, is to do it with conservative advancement:

1) Run the GJK algorithm to find the closest point to the origin in configuration space.
2) Use the distance and angle to compute a safe distance to move (cast) the configuration space object towards the origin, in the cast direction.
3) Repeat until the distance is below a certain threshold, or until the direction to the origin points away from the cast direction (in which case it is a miss).

Even though this is an iterative method relying on GJK, which is itself iterative, it is actually pretty fast, because it converges quickly (typically two or three iterations) and you can warmstart GJK with the simplex from the previous iteration.
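To make the loop concrete, here is a minimal sketch in C++ of how such a conservative advancement cast might look, working directly on the CSO's support function. The Vec3 type, the SupportFn alias and gjkClosestPoint() are assumptions made for the sake of the example, not the actual implementation:

```cpp
// Sketch of conservative advancement in CSO (configuration) space.
// Assumptions: a tiny Vec3 type, a SupportFn alias for the CSO's support
// mapping, and an external gjkClosestPoint() that returns the point on the
// CSO (translated by 'offset') closest to the origin, warmstarted with the
// simplex from the previous call. castDir is assumed to be unit length.
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };
static Vec3  operator*(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static float dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 v)             { return std::sqrt(dot(v, v)); }

using SupportFn = std::function<Vec3(Vec3)>;

// Assumed external GJK query (not shown here).
Vec3 gjkClosestPoint(const SupportFn& cso, Vec3 offset, void* simplexCache);

// Returns true on hit and writes the cast distance to tOut.
bool castConservativeAdvancement(const SupportFn& cso, Vec3 castDir,
                                 float maxDist, float& tOut)
{
    const float kTolerance = 1e-4f;
    float t = 0.0f;
    void* simplexCache = nullptr;   // simplex reused between iterations (warmstart), implementation defined

    for (int iter = 0; iter < 32; ++iter)
    {
        // 1) Closest point to the origin on the CSO, advanced by t along the cast.
        Vec3 p = gjkClosestPoint(cso, castDir * t, simplexCache);
        float dist = length(p);
        if (dist < kTolerance) { tOut = t; return true; }   // touching: hit

        // Direction from the closest point towards the origin.
        Vec3 n = p * (-1.0f / dist);

        // 2) Rate at which the cast closes the remaining gap.
        float approach = dot(castDir, n);
        if (approach <= 0.0f) return false;   // 3) moving away from the origin: miss

        // Largest step that provably cannot skip past the surface.
        t += dist / approach;
        if (t > maxDist) return false;
    }
    return false;   // did not converge within the iteration cap
}
```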

However, there is a completely different method, which I thought I had come up with, but after some googling it seems Jay Stelly has mentioned similar methods before on the Bullet and MollyRocket forums. The method uses MPR on a swept CSO. The idea is to sweep the CSO in the direction of the cast far enough to enclose the origin, and then raycast from the origin to find the distance to the surface.

1) Construct the swept CSO. This may sound complicated, but since support functions are additive under Minkowski sums it's as easy as adding the sweep vector to the existing support function (see the sketch after this list). The resulting shape will be a swept version of the original CSO. The direction of the vector should be the sweep direction, and the length should be "long enough" to pass the origin. As shown later on, the exact distance doesn't really matter much, so just keep it reasonable to avoid numerical issues. I use the distance from the geometric center of the CSO to the origin.

2) Use MPR, or XenoCollide, to find the distance from the origin to the surface of the swept CSO, in the sweep direction. If the origin is not inside the swept CSO, we know it is a miss.

3) Subtract the distance found in step 2) from the distance computed in step 1).

4) That's it! The difference from step 3 is the final sweep distance. Imagine a line going from the original CSO through the origin to the point on the swept CSO found with MPR in step 2. The line has the same direction as the sweep, and the origin divides it into two parts: the first part, from the original CSO to the origin, and the second part, from the origin to the surface of the swept CSO. You know the total length of the line (the sweep length from step 1) and you know the length of the second part (the MPR distance from step 2), so the first part must be the difference. I know, I know, a picture would be ideal, but I'm lazy. Bug me about it and I might post one :)
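To make step 1 concrete: sweeping a convex shape along a segment is a Minkowski sum, and support functions add under Minkowski sums, so the swept CSO's support mapping is simply the original support plus the support of the segment [0, sweep]. A small sketch, reusing the Vec3 and SupportFn helpers from the conservative advancement sketch above:

```cpp
// Support mapping of the CSO swept along the segment [0, sweep].
// The segment's support point is whichever of its endpoints lies furthest
// along the query direction, so the sweep vector is added only when the
// direction points along the sweep.
static Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

Vec3 sweptCsoSupport(const SupportFn& csoSupport, Vec3 sweep, Vec3 dir)
{
    Vec3 s = csoSupport(dir);
    if (dot(dir, sweep) > 0.0f)
        s = s + sweep;
    return s;
}
```

With that, steps 2 to 4 might be wired up roughly like this. mprRaycast() is an assumed helper (the MPR/XenoCollide query): given a support mapping whose shape contains the origin, it returns the distance from the origin to the surface along a given direction, and reports a miss if the origin is outside. csoCenter is the geometric center of the CSO mentioned in step 1.

```cpp
// Assumed external MPR query: distance from the origin to the surface of the
// shape described by 'support', along 'dir'. Returns false if the origin is
// not inside the shape.
bool mprRaycast(const SupportFn& support, Vec3 dir, float& distOut);

// Shape cast via the swept CSO. castDir is assumed to be unit length; on a
// hit the cast distance is written to castDistOut.
bool castSweptMpr(const SupportFn& csoSupport, Vec3 csoCenter, Vec3 castDir,
                  float& castDistOut)
{
    // 1) Sweep far enough to carry the CSO past the origin. The exact length
    //    doesn't matter much; the distance from the CSO's geometric center to
    //    the origin is used here, as in the post.
    float sweepLen = length(csoCenter);
    Vec3  sweep    = castDir * sweepLen;
    SupportFn swept = [&](Vec3 dir) { return sweptCsoSupport(csoSupport, sweep, dir); };

    // 2) Distance from the origin to the surface of the swept CSO along the
    //    sweep direction. Origin outside the swept CSO means a miss.
    float backDist;
    if (!mprRaycast(swept, castDir, backDist))
        return false;

    // 3) + 4) The sweep length is split by the origin: the part behind the
    //    origin is what MPR just measured, so the part in front (original CSO
    //    to origin) is the remainder. That is the cast distance.
    castDistOut = sweepLen - backDist;
    return true;
}
```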

Although I haven't done any side-by-side comparisons yet, this second approach seems much more attractive to me, since it is simpler, less prone to numerical precision issues and (yet to be proven) faster. So if this is a known method, how come everybody is still talking about conservative advancement?

[EDIT]

I quickly profiled the new method against the old one (conservative advancement). Each session is 10 000 sweep queries with different angles, computing closest points, sweep distance and contact normal.


Box - Box (old): min 10 ms, avg 16 ms, max 30 ms
Box - Box (new): min 11 ms, avg 14 ms, max 16 ms

Cylinder - Cylinder (old): min 14 ms, avg 40 ms, max 99 ms
Cylinder - Cylinder (new): min 20 ms, avg 35 ms, max 53 ms

Polyhedron - Polyhedron, 128 verts (old): min 99 ms, avg 130 ms, max 252 ms
Polyhedron - Polyhedron, 128 verts (new): min 98 ms, avg 130 ms, max 185 ms

It turns out that conservative advancement is slightly faster in the best case, slightly slower on average, but much slower in the worst case, which makes sense, since it is doubly iterative. I have not noticed any differences in robustness so far, but I haven't stressed it either.


Comments

  1. You can merge the two loops (conservative advancement and GJK) into a single one. Did you try that? One of the Bullet casters implements this.

  2. I didn't try it, but I was under the impression that you couldn't get closest point information with that method, since the CSO is deforming and the simplex may no longer represent points on the surface. I may very well have misunderstood the method, though. Thanks for the tip anyway, I might have a look at the Bullet implementation!

