
Showing posts from 2010

Schedulicious

I've run into a pretty interesting/annoying problem while working on parallelization of the physics pipeline.
I started off with a pretty standard thread pool, but proper physics requires quite a bit of synchronization. Especially in the solver, you need several full barriers to coordinate the threads. This is not so bad when you run thousands of objects, because the overhead of the synchronization is dwarfed by the massive simulation, but what would be more impressive is to go from say 3 ms per frame down to 1 ms.
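Just to illustrate the kind of synchronization overhead I mean, here is a minimal sketch of a barrier-synchronized solver loop, using C++20's std::barrier and made-up thread and iteration counts. This is not the actual engine code, just the shape of the problem:

#include <barrier>
#include <thread>
#include <vector>

static const int kThreads = 4;       // made-up worker count
static const int kIterations = 10;   // made-up solver iteration count

std::barrier<> sync_point(kThreads);

void solverWorker(int threadIndex)
{
    (void)threadIndex; // would select this thread's slice of contacts
    for (int iter = 0; iter < kIterations; ++iter)
    {
        // solveContactSlice(threadIndex); // solve this thread's share of contacts

        // Full barrier: nobody starts iteration N+1 until everyone has finished N.
        // With thousands of objects the slice dominates; with a small scene the
        // barrier itself starts to dominate.
        sync_point.arrive_and_wait();
    }
}

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back(solverWorker, i);
    for (std::thread& t : workers)
        t.join();
    return 0;
}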
Ideally you want your physics engine, rendering pipeline, AI, etc., all data parallel, and run them sequentially every frame. The more common scenario of running physics on one thread and rendering on another introduces two extra frames of lag, which combined with double or triple buffering on the graphics card, plus another frame delay in the display (at least some modern TV sets do this), gives you very sluggish response. A secondary benefit from data-paralle…

Milkmans mistake

After two posts full of boring text and graphs I feel like I have to give away a crowd pleaser. Again, not a realtime simulation. About 120,000 particles, running at almost one FPS.

The life of an impulse

For a long time I've wondered how impulses develop inside a rigid body solver, mainly in the hope of predicting, as early as possible, where the final magnitude will end up. I dumped the data for all impulses out to a CSV file and graphed it for a couple of simple test scenarios. Let's start with a small test case, a box pyramid of two layers (three boxes). All dirty tricks are switched off, so this is a plain vanilla solver. The Y axis is impulse magnitude for the normal impulses and the X axis is iterations.
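The dump itself is nothing fancy; roughly something like this (hypothetical names), with one row per iteration and one column per contact, so that each contact becomes one curve in the graph:

#include <cstdio>
#include <vector>

// impulsePerIteration[iter][contact] = normal impulse magnitude at that iteration
void dumpImpulses(const std::vector<std::vector<float> >& impulsePerIteration,
                  const char* filename)
{
    FILE* f = std::fopen(filename, "w");
    if (!f)
        return;
    for (size_t iter = 0; iter < impulsePerIteration.size(); ++iter)
    {
        const std::vector<float>& row = impulsePerIteration[iter];
        for (size_t c = 0; c < row.size(); ++c)
            std::fprintf(f, c == 0 ? "%f" : ",%f", row[c]);
        std::fprintf(f, "\n");
    }
    std::fclose(f);
}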


At a first glance, there seem to be no surprises here. Impulses form asymptotic curves towards their solution and there is not a lot of action after about twelve iterations. But look again, especially at the bright yellow curve, which seems to steadily decrease and form a non-horizontal asymptote, and what about the deviation around iterations eight and nine? Let's try a more complex example, with a box pyramid of five rows (15 boxes). There are around 90 cont…

Explaining the rigid body solver

Following my last post about scientific papers not being written for engineers, I will attempt to explain a rigid body solver without equations:
Even though a rigid body scene may consist of hundreds of objects and thousands of contact points, a popular way to solve the problem is to solve each contact point in sequential order, one at a time. It sounds kind of lame, and compared to other methods it is, but if iterated a couple of times it gives really good results, and this is what most games are actually using, so let's focus on solving one contact without friction first:
So you have two objects and one contact point with a contact normal. Start by computing the velocity at the contact point for both objects and compute the difference between those vectors. Project that difference onto the contact normal (dot product). This is the contact's relative velocity along the contact normal and it indicates how much the objects are moving towards each other or away from each …
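In code, that first step looks roughly like this. The types and names are made up just to show the computation; the real thing would of course live in the solver's own math library:

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 cross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Body { Vec3 pos, linVel, angVel; };

// Velocity of a body at a world space point = linear velocity + angular velocity x r
static Vec3 velocityAt(const Body& b, Vec3 point)
{
    Vec3 r = sub(point, b.pos);
    Vec3 w = cross(b.angVel, r);
    Vec3 v = { b.linVel.x + w.x, b.linVel.y + w.y, b.linVel.z + w.z };
    return v;
}

// Relative velocity at the contact point, projected onto the contact normal.
// The sign tells you whether the objects approach or separate along the normal.
static float relativeNormalVelocity(const Body& a, const Body& b, Vec3 point, Vec3 normal)
{
    Vec3 relVel = sub(velocityAt(a, point), velocityAt(b, point));
    return dot(relVel, normal);
}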

Academic frustration and running liquids

I experimented with fluids about five years ago, at Meqon. Back then I didn't really have a clue about fluid dynamics. I guess it's fair to say I still don't have a clue about fluid dynamics, but I did read a few papers lately on SPH.
It's kind of remarkable, really, how complicated all those papers are, just to land at something that is quite simple. Sometimes I get the feeling that the research community intentionally makes scientific papers look more complicated to boost egos. I wish there were research papers written for engineers, not other researchers, where you could read stuff like "And when this little fellow gets squished, it kind of pushes outwards on all the neighboring particles, proportional to their respective distance and density."
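In code, that one sentence is most of the pressure step. Here is a rough SPH-flavored sketch with invented names and constants, a very simplified kernel, and the density accumulation pass left out; not a complete or tuned fluid solver:

#include <cmath>
#include <vector>

struct Particle
{
    float x, y, z;      // position
    float density;      // assumed already accumulated from neighbors
    float pressure;     // derived from density
    float fx, fy, fz;   // accumulated force
};

const float kSmoothingRadius = 0.1f;
const float kRestDensity     = 1000.0f;
const float kStiffness       = 200.0f;

void addPressureForces(std::vector<Particle>& particles)
{
    // Compression above rest density turns into pressure.
    for (size_t i = 0; i < particles.size(); ++i)
        particles[i].pressure = kStiffness * (particles[i].density - kRestDensity);

    // Every squished particle pushes outwards on its neighbors.
    for (size_t i = 0; i < particles.size(); ++i)
    for (size_t j = i + 1; j < particles.size(); ++j)
    {
        float dx = particles[i].x - particles[j].x;
        float dy = particles[i].y - particles[j].y;
        float dz = particles[i].z - particles[j].z;
        float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (dist >= kSmoothingRadius || dist < 1e-6f)
            continue;

        // Closer neighbors get pushed harder; the more compressed the pair is,
        // the harder the push.
        float strength = (particles[i].pressure + particles[j].pressure) * 0.5f
                       * (kSmoothingRadius - dist)
                       / (particles[j].density * dist);

        particles[i].fx += dx * strength;
        particles[i].fy += dy * strength;
        particles[i].fz += dz * strength;
        particles[j].fx -= dx * strength;
        particles[j].fy -= dy * strength;
        particles[j].fz -= dz * strength;
    }
}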
I don't like formulas. I can read them, but it's very unintuitive for me, and probably many other programmers. I tend to believe that 3D computer graphics programmers are especially prone to …

Practicing quadruple windsor

Starting to get soft body stacking and interaction in place. Especially the friction between two soft bodies is a little tricky. This is not a fake rigid body interpolation, but a true particle representation, which can compress to a single point and then return to its original shape. It's quite expensive to be honest, since it takes more than just the regular verlet loop to get this kind of stability, but the demo runs at 60 FPS.
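For reference, the "regular verlet loop" is nothing more exotic than this bare-bones sketch (invented minimal structure); it's everything on top of it that costs:

struct VerletParticle
{
    float x, y, z;       // current position
    float px, py, pz;    // previous position
    float ax, ay, az;    // accumulated acceleration (e.g. gravity)
};

void verletStep(VerletParticle* particles, int count, float dt)
{
    for (int i = 0; i < count; ++i)
    {
        VerletParticle& p = particles[i];

        // Velocity is implicit in the difference between current and previous position.
        float nx = 2.0f * p.x - p.px + p.ax * dt * dt;
        float ny = 2.0f * p.y - p.py + p.ay * dt * dt;
        float nz = 2.0f * p.z - p.pz + p.az * dt * dt;

        p.px = p.x; p.py = p.y; p.pz = p.z;
        p.x = nx;   p.y = ny;   p.z = nz;
    }
}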

Mesh collisions

This is usually where physics gets messy. It's all fun and games until someone suggests that maybe all objects are not perfectly convex, such as... the game level? There is convex decomposition, yes, but I'm not totally convinced that's a silver bullet. Convex decomposition is awfully complicated, and requires heavy preprocessing. It's definitely a good idea in many cases, but there will always be raw triangles.
Convex decomposition or not, you need some sort of mid-phase, finding which triangles/primitives collide with a specific object. A lot of work has been put into this, and I think most people today agree that a quantized, binary AABB tree is the ideal solution.
What's interesting here is how people usually query these AABB trees. The output from the broad phase is a list of pairs with overlapping bounding volumes. These pairs are then processed one by one, and in the case of a triangle mesh, the mid-phase is engaged to find the relevant triangles/primitives. …
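As a rough illustration, descending such a tree to collect the triangles overlapping an object's bounds looks something like this. The structures and names are made up, and a production tree would be quantized and flattened into a cache-friendly array rather than traversed recursively:

#include <vector>

struct AABB
{
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

inline bool overlaps(const AABB& a, const AABB& b)
{
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

struct AABBNode
{
    AABB bounds;
    int left;        // child index, -1 if none
    int right;       // child index, -1 if none
    int triangle;    // triangle index, valid only for leaves
};

// Collect all triangles whose node bounds overlap the query box.
void queryTree(const std::vector<AABBNode>& nodes, int nodeIndex,
               const AABB& query, std::vector<int>& outTriangles)
{
    const AABBNode& node = nodes[nodeIndex];
    if (!overlaps(node.bounds, query))
        return;

    if (node.left < 0 && node.right < 0)
    {
        outTriangles.push_back(node.triangle);
        return;
    }
    if (node.left >= 0)  queryTree(nodes, node.left, query, outTriangles);
    if (node.right >= 0) queryTree(nodes, node.right, query, outTriangles);
}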

Movie time

It's been a long time since I posted a movie, so I thought I'd record a little snippet of what I'm working on at the moment - mesh collisions. Without the fancy SSAO renderer it runs at 60 FPS. Stay tuned for details!

STL

I really like the idea of using standardized versions of simple data structures. Small snippets of code rewritten over and over again by thousands of people. I bet there are a handful of programmers world-wide at this moment implementing a dynamic array or a hash table. As much as I like the idea, I hate the only standard out there - STL.
There are several problems:
1) The interface is horribly over-engineered and just plain sucks. Even the simplest of simple operations turns out as three or four lines of code. Really long ones. Say I want to remove an element from a vector. Instead of just offering a dead simple myVector.erase(element) I have to bother with iterators and find functions (of course defined in another include file), ending up with this:
std::vector<Object>::iterator it = std::find(myVector.begin(), myVector.end(), element);
if (it != myVector.end()) myVector.erase(it);
This only removes the first occurrence of "element", which is understandable. If you want to remove all of them you ne…
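For removing all occurrences, the usual answer is the erase/remove idiom, which is hardly more readable; a small sketch:

#include <algorithm>
#include <vector>

// Erase/remove idiom: std::remove shuffles the kept elements to the front and
// returns the new logical end; erase then chops off the leftovers.
template <class T>
void removeAll(std::vector<T>& myVector, const T& element)
{
    myVector.erase(std::remove(myVector.begin(), myVector.end(), element),
                   myVector.end());
}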

Visualizing complex algorithms

I have had this idea for a long time about some tool to follow the steps of a complicated algorithm in some way other than stepping through it in the debugger, trying to mentally visualize what's going on and frenetically scribbling unrecognizable geometrical figures on envelopes and old receipts.
A simple visualization sandbox is useful for visualizing the result, but it's really hard to visualize intermediate steps. GJK is a good example of a complex, iterative algorithm that has a visual representation at every step. A typical demo application would visualize the resulting closest points, normal, etc, but not the internal steps of the algorithm itself, so if something goes wrong, you're screwed. The typical way to deal with it is to write parts of the algorithm in an isolated visualization environment and then lift them over to the real implementation when they are "tested". However, for visualization code to be useful, it needs to stay in the actual implementatio…

Thoughts on shape casting

I implemented a different type of shape casting yesterday. The most common way, to my knowledge at least, is to do it with conservative advancement:
1) Run the GJK algorithm to find the closest point to the origin in configuration space.
2) Use the distance and angle to compute a safe estimate of how far to move (cast) the configuration space object towards the origin, in the cast direction.
3) Repeat until the distance is below a certain threshold, or the direction to the origin points away from the cast direction (in which case it is a miss).
Even though this is an iterative method relying on GJK, which is itself iterative, it is actually pretty fast, because it converges quickly, typically in two or three iterations, and you can benefit from warm-starting GJK with the simplex from the previous iteration.
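As a sketch, the conservative advancement loop described above looks roughly like this. The interface is made up: the GJK query is passed in as a callback, and the cast direction is assumed to be unit length:

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float length(Vec3 v)      { return std::sqrt(dot(v, v)); }

// closestToOrigin(offset) is assumed to return the closest point (as a vector
// from the origin) on the configuration space object after it has been
// translated 'offset' units along castDir.
bool shapeCast(Vec3 (*closestToOrigin)(float offset), Vec3 castDir,
               float maxDistance, float& outDistance)
{
    const float kTolerance = 1e-4f;
    float travelled = 0.0f;

    for (int iter = 0; iter < 32; ++iter)
    {
        Vec3 closest = closestToOrigin(travelled);
        float dist = length(closest);
        if (dist < kTolerance)
        {
            outDistance = travelled;   // close enough to the origin: hit
            return true;
        }

        // Direction from the CSO's closest point towards the origin, projected
        // onto the cast direction. If it points away, we can never hit: miss.
        float approach = -dot(closest, castDir) / dist;
        if (approach <= 0.0f)
            return false;

        // Safe step: the distance can shrink at most 'approach' per unit moved.
        travelled += dist / approach;
        if (travelled > maxDistance)
            return false;
    }
    return false;   // did not converge within the iteration budget
}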
However, there is a completely different method, which I thought I came up with, but after googling it for a while it seems Jay Stelly has mentioned similar methods before across the Bullet and MollyRocket forums. Th…

Greetings from the inside!

So after spending a few weeks outside the CSO, I got around to stepping inside and experimenting with a number of different penetration distance algorithms this week. However, I failed to come up with an alternative to EPA that can actually guarantee a global minimum separation. These are the ones I tried:
1) Use MPR to raycast the inside of the CSO in a direction estimated from the geometric center of the CSO, use the resulting normal and iterate until convergence. This works in 95% of all cases and is really fast. The downside is that there is no way to guarantee or measure whether this really is the global minimum.
2) Same method as in 1) but use multiple search directions. I was for a while under the impression that two local minima had to be at least 90 degrees apart, so casting six rays should be safe (of which three can be optimized away). This works in 99% of all cases, but there is still that annoying one percent where it finds a local minimum.
3) Same method as in 1) but trace the edges of th…

Trapped in configuration space

I've spent the last couple of weeks tinkering with GJK and ended up rewriting it from scratch. Ever since I saw Casey's video about SAT-GJK a couple of years ago I've wanted to try it, so I finally did. Now Casey's video only deals with binary overlap status, not closest points, but it was relatively easy to extend his approach and even do a linear sweep operation. I especially like how his method does not include any divisions or square roots, which makes it very robust. I must add though that it's not straightforward to get a robust implementation that works on both polyhedra and quadric objects. Especially the termination criteria needed a lot of attention.
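Part of what makes the configuration space formulation so convenient is that the support mapping of the CSO falls straight out of the two objects' own support mappings. A minimal sketch, with an invented Shape interface:

struct Vec3 { float x, y, z; };

struct Shape
{
    virtual Vec3 support(const Vec3& dir) const = 0;   // farthest point along dir
    virtual ~Shape() {}
};

// Support point of the Minkowski difference A - B: the farthest point of A in
// a direction minus the farthest point of B in the opposite direction.
Vec3 csoSupport(const Shape& a, const Shape& b, const Vec3& dir)
{
    Vec3 negDir = { -dir.x, -dir.y, -dir.z };
    Vec3 pa = a.support(dir);
    Vec3 pb = b.support(negDir);
    Vec3 r = { pa.x - pb.x, pa.y - pb.y, pa.z - pb.z };
    return r;
}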

I knew from the beginning that decent visualization tools would be required to pull off a new GJK implementation, and it's certainly not trivial to visualize all the information needed. There are the two objects, the configuration space object, the simplex which comes in four flavors, normals, closest po…

Oh how hard can it be

Funny, I'm falling into my own how-hard-can-it-be convex hull trap that I described just three posts ago! It turns out my idea for a support mapping-based convex hull generator isn't that great after all. It works perfectly in 2D, but in 3D it turns out the surface can get concave during the process, which of course will mess up the whole thing, so just forget everything I ever said about convex hulls now, would you (except perhaps the part about it always being harder than you think)? Thanks. Back to the drawing board. There must be some way to compute convex hulls using only the support direction...