
Greetings from the inside!

So after spending a few weeks outside the CSO, I got around to stepping inside and experimenting with a number of different penetration distance algorithms this week. However, I failed to come up with an alternative to EPA that can actually guarantee a global minimum separation. These are the ones I tried:

1) Use MPR to raycast the inside of the CSO in a direction estimated by the geometric center of the CSO, use the resulting normal as the new direction and iterate until convergence. This works in 95% of all cases and is really fast. The downside is that there is no way to guarantee, or even measure, whether this really is the global minimum.
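As a minimal sketch of this iteration (the face-plane representation, helper names and the iteration cap are my own illustration, not the post's actual code): cast a ray from the interior of a convex CSO, adopt the normal of the face plane it hits, and repeat until the direction stops changing. The stretched box below also demonstrates the local-minimum failure case.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def raycast_face(faces, d):
    """First face plane hit by the ray origin + t*d, t > 0.
    Each face is (outward_unit_normal, offset); the origin is assumed inside."""
    best, best_t = None, math.inf
    for n, o in faces:
        nd = dot(n, d)
        if nd > 1e-9:              # plane faces along the ray direction
            t = o / nd
            if t < best_t:
                best, best_t = (n, o), t
    return best

def local_search_depth(faces, d):
    """Iterate: raycast, adopt the hit face's normal as the new direction.
    Converges to a *local* minimum of the face-plane distance."""
    for _ in range(32):            # cap iterations to be safe
        n, o = raycast_face(faces, d)
        if n == d:                 # direction stable: converged
            return o
        d = n
    return o

# A box stretched along +x, off-center around the origin: planes x = 3,
# x = -1, y = +/-1, z = +/-1, with the origin inside.
box = [((1, 0, 0), 3.0), ((-1, 0, 0), 1.0),
       ((0, 1, 0), 1.0), ((0, -1, 0), 1.0),
       ((0, 0, 1), 1.0), ((0, 0, -1), 1.0)]

# Starting toward +x gets trapped at the far face (depth 3);
# starting toward +y finds the true minimum separation (depth 1).
trapped = local_search_depth(box, (1, 0, 0))   # 3.0, a local minimum
actual = local_search_depth(box, (0, 1, 0))    # 1.0, the global minimum
```

The choice of starting direction is everything here, which is what the variants below try to address.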

2) Same method as in 1) but use multiple search directions. For a while I was under the impression that two local minima had to be at least 90 degrees apart, so casting six rays should be safe (of which three can be optimized away). This works in 99% of all cases, but there is still that annoying one percent where it finds a local minimum.

3) Same method as in 1) but trace the edges of the last portal. After the initial surface point has been found, tilt the search direction in the direction of each edge, so that it falls outside the last portal, choose the best direction and iterate until convergence. This way we can sample the neighboring triangles and search those directions. Again, there is a 99% success rate. Unfortunately, in some cases all neighboring triangles lean towards the initial triangle, so we just end up at the same local minimum again.

4) Broken EPA. I tried using EPA without tracking connectivity and without keeping the polytope convex. Simply a triangle soup, where the closest triangle is split and valid triangles (the ones that are portals) are kept. It doesn't work.

5) Full EPA. I used the one found in Bullet as inspiration and it works really well in all cases. I'm really impressed by the Bullet implementation by Nathanael Presson. EPA is an awfully complicated algorithm to implement, but it seems very stable and quite fast.

So all this fuss just to arrive at the conclusion that EPA is the way to go? I'm still contemplating whether there is a way to combine MPR and EPA to make an early out and just find the surface using MPR. It might not be possible.

I'm still not using EPA for all penetrations. In most cases, using MPR with the last known separation axis works excellently; even just sampling and projecting support mappings onto the last axis works really well. Physics is surprisingly forgiving about something, for once.
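A quick sketch of what projecting support mappings onto a cached axis can look like (the function names and vertex-cloud representation are illustrative, not the post's actual code): take the support point of the CSO A - B along the last known axis and project it back onto that axis to get a depth estimate.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def support(verts, d):
    """Support mapping of a convex vertex cloud: farthest vertex along d."""
    return max(verts, key=lambda v: dot(v, d))

def depth_along_axis(verts_a, verts_b, axis):
    """Penetration depth estimate along a cached (unit-length) separation axis.
    Only exact when the axis matches the true minimum translation direction."""
    pa = support(verts_a, axis)                       # deepest point of A along axis
    pb = support(verts_b, tuple(-c for c in axis))    # deepest point of B against it
    cso_point = tuple(a - b for a, b in zip(pa, pb))  # support point of A - B
    return dot(cso_point, axis)

# Two overlapping cubes: A spans [0,2]^3, B spans [1,3]^3.
A = [(x, y, z) for x in (0, 2) for y in (0, 2) for z in (0, 2)]
B = [(x, y, z) for x in (1, 3) for y in (1, 3) for z in (1, 3)]
overlap = depth_along_axis(A, B, (1, 0, 0))  # 1: they overlap one unit along x
```

Since the axis rarely changes much between frames in a physics simulation, this cheap estimate is often good enough, which is presumably why the solver is so forgiving about it.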

Comments

  1. Aren't you supposed to be busy hugging exotic animals, surfing and generally staying away from computers?!

  2. Hugging a configuration space obstacle is almost as fun, trust me :)

  3. There is an alternative to EPA if you restrict yourself to polytopes only. In that case the CSO is a polytope itself that can be computed explicitly. The solution is rather brute force. Simply check each face of the CSO and find the one that is closest to the origin. A CSO of polytopes has a complexity of O(m n) for two polytopes having m and n vertices each, so you are looking at a quadratic solution. However, due to the sheer simplicity of the approach, building the Minkowski sum may outperform EPA for lower complexity polytopes.

    Check out Efi Fogel's PhD thesis on Minkowski sums of polytopes.

    http://www.cs.tau.ac.il/~efif/publications/thesis/thesis.pdf

    Gino

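The brute-force approach Gino describes can be sketched in 2D (my own illustrative code, not taken from his work): build the O(m n) vertex differences, take their convex hull as the explicit CSO, then find the boundary feature closest to the origin. The shapes are assumed to actually overlap, so the origin lies inside the hull; in 3D the same idea needs hull faces instead of edges, which is where the quadratic construction cost starts to matter.

```python
import math

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def minkowski_difference(A, B):
    """All pairwise vertex differences a - b: the O(m n) CSO candidates."""
    return [(ax - bx, ay - by) for ax, ay in A for bx, by in B]

def convex_hull(pts):
    """Andrew's monotone chain, counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    return half(pts)[:-1] + half(pts[::-1])[:-1]

def origin_to_segment(p, q):
    """Distance from the origin to the segment pq."""
    vx, vy = q[0] - p[0], q[1] - p[1]
    t = max(0.0, min(1.0, -(p[0] * vx + p[1] * vy) / (vx * vx + vy * vy)))
    return math.hypot(p[0] + t * vx, p[1] + t * vy)

def brute_force_depth(A, B):
    """Closest CSO boundary feature to the origin = penetration depth."""
    hull = convex_hull(minkowski_difference(A, B))
    return min(origin_to_segment(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))

# Two overlapping squares: A spans [0,2]^2, B spans [1,3]^2 -> depth 1.
A = [(0, 0), (2, 0), (2, 2), (0, 2)]
B = [(1, 1), (3, 1), (3, 3), (1, 3)]
depth = brute_force_depth(A, B)  # 1.0
```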
