Mesh collisions

This is usually where physics gets messy. It's all fun and games until someone suggests that maybe all objects are not perfectly convex, such as... the game level? There is convex decomposition, yes, but I'm not totally convinced that's a silver bullet. Convex decomposition is awfully complicated, and requires heavy preprocessing. It's definitely a good idea in many cases, but there will always be raw triangles.

Convex decomposition or not, you need some sort of mid-phase to find which triangles/primitives a specific object could collide with. A lot of work has been put into this, and I think most people today agree that a quantized, binary AABB tree is the ideal solution.
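
Just to give an idea of what "quantized" means in practice, here is a rough sketch of a node in such a tree. The exact layout and leaf encoding are assumptions for illustration, loosely inspired by Bullet's btQuantizedBvh: the bounds are stored as 16-bit integers relative to the whole mesh's AABB, so a node fits in 16 bytes.

```cpp
#include <cstdint>

// One node of a quantized binary AABB tree. Bounds are stored as 16-bit
// integers relative to the whole mesh's bounding box, so a node is 16 bytes.
struct QuantizedNode
{
    uint16_t minQ[3];   // quantized lower bound
    uint16_t maxQ[3];   // quantized upper bound
    int32_t  payload;   // >= 0: triangle index (leaf), < 0: subtree skip for internal nodes

    bool isLeaf() const { return payload >= 0; }
};

// Map a world-space point into the 16-bit grid spanned by the mesh AABB.
// invExtent is 1 / (meshMax - meshMin) per axis, precomputed once per mesh.
// (In a real tree, lower bounds should round down and upper bounds up to stay conservative.)
inline void quantize(const float p[3], const float meshMin[3],
                     const float invExtent[3], uint16_t out[3])
{
    for (int i = 0; i < 3; ++i)
    {
        float t = (p[i] - meshMin[i]) * invExtent[i]; // 0..1 inside the mesh AABB
        t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
        out[i] = (uint16_t)(t * 65535.0f);
    }
}
```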

What's interesting here is how people usually query these AABB trees. The output from the broad phase is a list of object pairs with overlapping bounding volumes. These pairs are then processed one by one, and when one of the objects is a triangle mesh, the mid-phase is engaged to find the relevant triangles/primitives. After that, the near-phase finds the actual contacts.
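
In code, the conventional setup boils down to a single-object query run once per broad-phase pair. Here is a minimal sketch; the names (Aabb, AabbTreeNode, queryTree) and the flat-array layout are made up for illustration, not any particular engine's API.

```cpp
#include <vector>

struct Aabb { float min[3]; float max[3]; };

inline bool overlaps(const Aabb& a, const Aabb& b)
{
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i])
            return false;
    return true;
}

// A binary AABB tree stored as a flat array of nodes.
struct AabbTreeNode
{
    Aabb bounds;
    int  left    = -1;   // child indices, -1 for a leaf
    int  right   = -1;
    int  payload = -1;   // triangle index (mesh tree) or object id (dynamic tree)
};

// Collect the payloads of every leaf whose bounds overlap a single query AABB.
// In the conventional setup this runs once for each broad-phase pair that
// involves the mesh.
void queryTree(const std::vector<AabbTreeNode>& nodes, int nodeIndex,
               const Aabb& query, std::vector<int>& out)
{
    const AabbTreeNode& node = nodes[nodeIndex];
    if (!overlaps(node.bounds, query))
        return;
    if (node.left < 0) // leaf
    {
        out.push_back(node.payload);
        return;
    }
    queryTree(nodes, node.left, query, out);
    queryTree(nodes, node.right, query, out);
}
```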

What I would like to suggest is to query the whole dynamic scene against the AABB tree all at once. That is, instead of colliding the AABB tree with a single object (single AABB) multiple times, you collide it with another AABB tree, representing all moving objects. This is especially relevant if combined with a Dynamic Bounding Volume Tree broad phase, as suggested by Erwin Coumans. In this case, all moving objects are already in a dynamic AABB tree of their own. Objects tend to appear in clusters, so objects close together also tend to collide with the same triangles. Doing the mid-phase this way saves you from drilling down the compressed AABB tree multiple times, which gives an instant performance gain. The tree/tree traversal is a bit of a mind job at first, but the Bullet DBVT implementation is a really good reference.
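
Here is a minimal sketch of what that descent could look like, reusing the toy Aabb/AabbTreeNode types from the snippet above: one tree holds the mesh triangles, the other holds all moving objects. It's only in the spirit of btDbvt's collideTT; the real thing is iterative and uses a smarter rule for which side to descend into.

```cpp
#include <utility>
#include <vector>

// Dual traversal: every overlapping leaf/leaf combination becomes an
// (object id, triangle index) pair ready for the near-phase.
void collideTrees(const std::vector<AabbTreeNode>& meshNodes, int meshIndex,
                  const std::vector<AabbTreeNode>& dynNodes, int dynIndex,
                  std::vector<std::pair<int, int>>& outPairs)
{
    const AabbTreeNode& m = meshNodes[meshIndex];
    const AabbTreeNode& d = dynNodes[dynIndex];
    if (!overlaps(m.bounds, d.bounds))
        return;

    const bool meshLeaf = m.left < 0;
    const bool dynLeaf  = d.left < 0;

    if (meshLeaf && dynLeaf)
    {
        outPairs.push_back({ d.payload, m.payload });
    }
    else if (dynLeaf || !meshLeaf)
    {
        // Split the mesh node (also the branch taken when both are internal;
        // a real implementation would typically split the larger node).
        collideTrees(meshNodes, m.left,  dynNodes, dynIndex, outPairs);
        collideTrees(meshNodes, m.right, dynNodes, dynIndex, outPairs);
    }
    else
    {
        // The mesh node is a leaf, so split the dynamic node.
        collideTrees(meshNodes, meshIndex, dynNodes, d.left,  outPairs);
        collideTrees(meshNodes, meshIndex, dynNodes, d.right, outPairs);
    }
}
```

Objects sitting close together end up sharing the upper levels of the descent, which is where the saving over repeated single-AABB queries comes from.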

As usual, I'm really lazy and haven't done any side-by-side comparisons, but they might show up later.

Comments

  1. Or you could just cache the location in the midphase tree for dynamic objects, probably avoiding traversing the tree altogether in many cases.

  2. That's a very good point. I kind of like that the tree approach is stateless, though.

