Sunday, March 24, 2013

Convex Hulls Revisited

I have written about 3D convex hull generation here before. I find it a very appealing problem because it is so well defined and to my knowledge there is no de facto standard algorithm or implementation. I come back to this topic every now and then since I need a good implementation myself.

Quickhull is probably the most popular algorithm, but it is hard to implement in a robust way. The qhull implementation has a somewhat questionable license and, more significantly, it is a really complex piece of software that contains a bunch of other features. I'm on a quest to create a fast, robust convex hull generator that is free to use and self-contained in a single cpp file.

I'm currently experimenting with an algorithm based on the support mapping, often used in physics and collision detection. The support mapping of a point cloud, for a given direction, is the point that lies farthest in that direction, which simply means finding the point with maximum dot(dir, point). The supporting point for any direction is guaranteed to be on the convex hull of the point cloud. Very convenient.
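
In code this is just a linear scan for the largest dot product. A minimal sketch, with an illustrative Vec3 type and function name rather than my actual code:

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Returns the index of the point farthest in direction dir (assumes a
    // non-empty cloud). The returned point is guaranteed to lie on the convex hull.
    std::size_t supportPoint(const std::vector<Vec3>& points, const Vec3& dir)
    {
        std::size_t best = 0;
        float bestDot = dot(dir, points[0]);
        for (std::size_t i = 1; i < points.size(); ++i)
        {
            float d = dot(dir, points[i]);
            if (d > bestDot) { bestDot = d; best = i; }
        }
        return best;
    }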

The algorithm starts with the smallest possible closed mesh - three vertices connected by two triangles facing in opposite directions. Any three points will do, as long as they are on the convex hull (supporting vertices). Each triangle is then expanded by the supporting vertex in its normal direction. This is done by splitting the triangle into three triangles, all connected to the new vertex. This expansion step may cause the mesh to become concave, so it needs to be followed by an unfolding step, where concave edges are "flipped" to make the mesh convex again.
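
As a sketch, the expansion step only touches the triangle being split. The index-based Triangle struct below is purely illustrative, not my actual data layout:

    #include <cstddef>
    #include <vector>

    struct Triangle { std::size_t a, b, c; };

    // Split triangles[t] against the new supporting vertex v: (a, b, c) becomes
    // (a, b, v), (b, c, v) and (c, a, v). The mesh stays closed and all three
    // new triangles are connected to v.
    void expandTriangle(std::vector<Triangle>& triangles, std::size_t t, std::size_t v)
    {
        Triangle old = triangles[t];
        triangles[t] = { old.a, old.b, v };
        triangles.push_back({ old.b, old.c, v });
        triangles.push_back({ old.c, old.a, v });
    }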

Flipping one edge may cause nearby edges to become concave, so this step needs to be repeated until all edges are convex, effectively "untangling" any wrinkles introduced by the expansion. Below is a short clip visualising the construction of a hull through a series of expansion and unfolding steps. For clarity, there are only 12 points and they are all on the convex hull.



The interesting thing about this method is that it is based primarily on topology. Both the expansion and the unfolding step guarantee that the mesh stays well-defined and closed, so there are no degenerate cases. The only critical computation is how to determine whether an edge is convex or not. I'm still investigating the most robust alternative here. My current one does not deal with all degenerate cases, but I'm pretty sure this can be done.
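
A rough sketch of both pieces, again with illustrative types and names rather than my actual code: the edge shared by two triangles is treated as concave if the opposite vertex of one triangle lies in front of the other triangle's plane, and the flip simply rebuilds the two triangles across the other diagonal. A real implementation would need a tolerance for near-coplanar triangles.

    #include <cstddef>

    struct Vec3 { float x, y, z; };
    struct Triangle { std::size_t a, b, c; };

    static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 cross(const Vec3& a, const Vec3& b)
    {
        return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    }

    // With triangles (a, b, c) and (b, a, d) sharing the edge a-b, the edge is
    // concave if d lies in front of the plane of (a, b, c), i.e. on the side of
    // its outward normal. Tolerance omitted for brevity.
    bool edgeIsConcave(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d)
    {
        Vec3 n = cross(sub(b, a), sub(c, a)); // outward normal of (a, b, c)
        return dot(n, sub(d, a)) > 0.0f;
    }

    // Flip the shared edge: (a, b, c) and (b, a, d) become (a, d, c) and
    // (d, b, c), which share the edge c-d. The mesh stays closed and
    // consistently oriented; only the diagonal changes.
    void flipEdge(Triangle& t1, Triangle& t2)
    {
        std::size_t a = t1.a, b = t1.b, c = t1.c, d = t2.c;
        t1 = { a, d, c };
        t2 = { d, b, c };
    }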

The algorithm has a number of desirable properties:
  • Can be used with a tolerance in the expansion step for automatic simplification.
  • Relatively easy to implement (mine is about 400 lines of code).
  • Handles co-planar and degenerate input data.
  • The output mesh is built entirely from the input vertices.
  • Can be modified to use any support mapping function, not just point clouds.
I'm not entirely sure this is a novel idea. I'd actually be surprised if it is, given its simplicity, but I haven't seen any references to it before. Please write a comment if you know. I'll get back with more details and a performance comparison later.

21 comments:

  1. Bullet physics has a robust and fast convex hull implementation, based on Preparata and Hong. You can check https://code.google.com/p/bullet/source/browse/trunk/src/LinearMath/btConvexHullComputer.cpp. It will be trivial to make it a single-file solution, if that's what you want.

  2. Hi Erwin, yes I have tried that implementation, but I didn't find the performance satisfactory. Especially for small hulls it is really slow. My initial tests suggest a 10x speedup with my method for 10,000 points and about a 30x speedup with 100 points. I need this for a realtime fracture application, so good performance on small point clouds is essential.

  3. I like your convex hull idea, it basically describes EPA, right? Are there any drawbacks or robustness issues?

    The Bullet btConvexHullComputer uses fixed point integer math, basically slow software float emulation, for robustness. Using actual floats or doubles should make it much faster, at the cost of some robustness.

    What termination conditions do you use? Is it an unmodified support mapping?

    Replies
    1. I was wondering about the termination criterion too. Maybe it's better to maintain a list of not-yet-included points and terminate when it is empty? In every expansion step you can test containment only in the added volume, and in every edge-flip step test containment in the volume bounded by the two old triangles and the two new ones. This might be faster than recomputing the support mapping, but maybe there are efficient ways to update a support mapping rather than recalculating it entirely. Dunno.

  4. I take back my last comment: if I'm guessing correctly you drop all non-support vertices to start with, so *all* of the remaining vertices are on the hull and the step count is known in advance. In particular, the support mapping is fixed throughout the process.
    Can you please elaborate on the representation you use for the support mapping and its initial computation complexity? If this complexity is greater than n log(n), then such algorithms (that assume a pre-calculated support mapping) would be less valuable academically, but they might still be valuable in practice (if you do indeed have a precomputed support mapping at your disposal).

    I know of a similar published counterpart by Clarkson & Shor, here's a brief online survey: http://www.eecs.tufts.edu/~mhorn01/comp163/algorithm.html . They do not limit the search to support vertices, but they might offer several improvements in the expansion+edge-flip stage (note their 'visibility pyramid' and 'conflict graph').

    And again, if you use a fast support mapping generation scheme (which I'm not aware of) maybe this is indeed an academic novelty!

  5. Erwin: I haven't thought about it that way, but you are right. It is quite similar to how EPA works. Though I think both traditional EPA and the Clarkson & Shor method use an n-gon during the expansion step, which is more complex and sensitive to numerical precision. I always split the triangle into three new ones, so the connectivity is predetermined, which keeps the topology of the mesh unconditionally well-behaved regardless of numerical precision. Even degenerate cases give meshes with proper connectivity (but some triangles have no area, of course).

    Ofek: It is a very good idea to prune points incrementally during the expansion and unfolding steps. I have wanted to implement that but haven't gotten around to it yet. I currently prune vertices after the unfolding step, but only if the associated expansion step made a "big enough" expansion (currently >20% of the point cloud size).

    I do not "drop" all non-supporting points at the start. Not sure how to do that efficiently, but the support mapping dot(n, p) will never choose a non-supporting point, and as described above I occasionally prune the points against the current hull to make the search faster. It's also worth noticing that I remove supporting points from the cloud immediately after they have been used during the expansion step. This prevents the same point to be used for multiple expansions (otherwise they can cause invalid topology). The algorithm is terminated when no face can be further expanded.

    I need to run further tests on this thing, but it looks very promising so far.

  6. Just to clarify, I should mention that I don't do any initial precomputation of the points. All remaining points are scanned linearly for each expansion step (though the number of remaining points decreases rapidly as pruning proceeds).

  7. Hi Dennis,

    I recall that the "traditional" EPA by Gino van den Bergen starts with a tetrahedron (rather than a triangle) and either splits the triangle by adding a center vertex (as you do) or splits its 3 edges in the middle (this creates better quality triangles).

    Either way, you keep a triangulated convex hull. You only have to remove a termination/optimization step that EPA makes: stopping the expansion when the distance to the hull exceeds the current minimum distance.

    I'll implement this convex hull idea on the GPU, for the Bullet 3.x GPU rigid body pipeline for some nice fracture demos.

  8. In EPA, Gino used a silhouette to make the concave polytope convex again. Your idea of swapping edges is interesting, but it seems hard to prove that it will terminate (and not cycle forever). I suppose you are happy if the solution just works fine in all of your tests :-) ?

  9. Yes, exactly. That's what I'm worried about myself. Determining whether an edge is concave is the only critical computation, and it *should* be possible to do this in a numerically robust way by comparing the angles both before and after the flip (in case the angle changes near 180 degrees). Cycling forever should not be possible, since points are removed at the expansion step, so you would eventually run out of points. I guess the unfolding step could be a candidate for endless cycling, but I don't really see how, and I'm not sure how to prove it... (it has never happened so far)

    Replies
    1. IIUC, in every expansion and flip the overall volume must increase, so a cycle is not possible.

    2. Do you mean an explicit volume computation after each expansion and flip? When the triangles get very flat (edges flip near 180 degrees), the volume computation can get sensitive to numerical precision.

    3. No - I meant only to answer the claim that 'it seems hard to *prove* that it will terminate (and not cycle forever)'. The mathematical proof is easy. The implementation, however...

      If you're willing to sacrifice some simplicity for sake of robustness, perhaps you can accumulate almost-co-planar points into non-triangular facets temporarily, and defer the triangulation of these to a single final stage?

  10. QuickHull produces good hulls for physics because it merges faces. This is important if you want a full manifold via clipping. The qhull code is not easy to digest. Instead I recommend looking at this Java implementation: http://www.cs.ubc.ca/~lloyd/java/quickhull3d.html. Both Dirk and I are using convex hull code inspired by this. Speed comes from good memory management (pooling, etc).

  11. I agree that the implementation of a convex hull algorithm should be simple. My Quickhull code is exactly what you describe. One .cpp file with a couple of hundred lines of code. I like convex hulls as well and am currently thinking of giving a presentation on this topic at the physics tutorial next year.

    I don't see how your algorithm has an advantage over Quickhull though. The Quickhull library is indeed a terrible beast, but the algorithm itself is pretty good and easy to understand. The Java code Erin posted above is a great start.

    If you choose an iterative algorithm where you add one point after another, you have to deal with numerical issues. What makes the Quickhull implementation so complex is that it allows you to build convex hulls in higher dimensions, but you don't need this. On the other hand there is a lot of experience in this code. So why not use this and build something on top of it? I like the face merging approach since you need it for physics anyway - coplanar faces would add severe instabilities. Post-merging has its own difficulties, so I think Quickhull is the way to go.

  12. This algorithm is really interesting. I think it is more elegant than quickhull and I like how it keeps the topology intact no matter what.

    Actually, the algorithm in 3d is very similar to incremental flipping for solving delaunay triangulations in 2d.

    The delaunay algorithm starts by inserting a point in a triangle, creating 3 new triangles. The edges opposite the new point are put on a stack. The stack holds edges that might be non-delaunay. The top of the stack is popped until each non-delaunay edge has been flipped to its delaunay counterpart. (As with your algorithm, new edges will need to be added to the stack in the process)

    In the delaunay algorithm, the nondelaunay test can be expressed as the determinant of a 3x3 matrix. In your algorithm, the concavity test can be expressed as the determinant of a 3x3 matrix.

    As you probably know, it is possible to calculate the 2d delaunay triangulation by lifting the points into 3d, calculating the convex hull and then projecting the downward facing triangles back.

    So, my theory is that, if we want to solve the 2d delaunay triangulation of a set of points in 2d, and decide to do so using both the 2d incremental flip-based method and your algorithm (on a lifted set of points), we would end up with almost the same code.

    When the 2d points are lifted to 3d, the third coordinate becomes x^2+y^2. The concavity test in your algorithm then becomes exactly the same as the delaunay test in the incremental flipping algorithm! (See the determinant here for how the test is done in delaunay algorithms: http://en.wikipedia.org/wiki/Delaunay_triangulation#Algorithms)
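
    Just to make this concrete, here is a rough sketch (types and names purely illustrative): lift the 2d points onto the paraboloid z = x^2 + y^2 and compute the same kind of signed-volume determinant the hull algorithm uses for its concavity test.

        struct Vec2 { double x, y; };
        struct Vec3 { double x, y, z; };

        // Lift a 2d point onto the paraboloid z = x^2 + y^2.
        static Vec3 lift(const Vec2& p) { return { p.x, p.y, p.x * p.x + p.y * p.y }; }

        // Signed-volume style determinant of the lifted points, the same kind of
        // test the 3d hull uses for edge concavity. For a counterclockwise
        // triangle (a, b, c), the result is positive exactly when d lies inside
        // its circumcircle.
        double inCircle(const Vec2& a, const Vec2& b, const Vec2& c, const Vec2& d)
        {
            Vec3 A = lift(a), B = lift(b), C = lift(c), D = lift(d);
            double ax = A.x - D.x, ay = A.y - D.y, az = A.z - D.z;
            double bx = B.x - D.x, by = B.y - D.y, bz = B.z - D.z;
            double cx = C.x - D.x, cy = C.y - D.y, cz = C.z - D.z;
            return ax * (by * cz - bz * cy)
                 - ay * (bx * cz - bz * cx)
                 + az * (bx * cy - by * cx);
        }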

    The difference between the two algorithms is that, in 3d, it is always possible to flip the edge shared by the triangles if the concavity test fails, whereas in 2d, this is not the case when the delaunay test fails. Likewise, in 3d, you always insert the new point in the triangle no matter where its projection is located, whereas in 2d you only insert it if it is inside the triangle. In both these cases, creating new triangles in 2d would cause intersections, but this is not the case in 3d.

    The reason your algorithm works (in a delaunay context) is probably because only the down-facing triangles are kept when projecting the points back to 2d, and this removes the intersections.

    Not having to worry about intersecting simplices is a great property, and if it extends to higher dimensions, it might be a lot easier to create the delaunay triangulation in 3d by calculating the convex hull in 4d using your algorithm, than using the 3d incremental flip-method that I currently use.

    I would be really grateful if someone could take a look at this and see if there is any truth to it or if there is something I have missed :)

    Basically, I'm suggesting that this algorithm is a more general case of incremental flipping.

    I just stumbled upon your blog and I think it's very interesting so far, great work! (I started following you on twitter too :p)

  13. Gustav: Thanks for your input. I haven't implemented delaunay triangulation myself, so I may not be aware of the pitfalls, but it does sound very weird to solve a 2D problem in 3D. Maybe I am missing something? I'm planning to publish a first version of the convex hull code soon, so you're welcome to try it out!

    Replies
    1. Thanks for your reply!

      Here is a good visualisation of the process (2d->3d->2d):
      http://www.cis.upenn.edu/~jean/gbooks/new281-282.pdf

      To see the relationship between the two algorithms it's of course easier if you know how incremental flipping works. See part 5.1 of this paper if you're interested: http://www.cse.iitk.ac.in/users/vision/dipakmj/papers/04276112.pdf

      What's interesting is that if we use your algorithm to solve the delaunay triangulation by lifting the points to 3d, we almost end up with incremental flipping in 2d.

      The structure of the algorithm would be the same. The determinant (delaunay/concavity) test would be the same.

      The difference would be: when inserting a new point in a triangle, your algorithm uses a third coordinate and a plane test, whereas incremental flipping finds the triangle that contains the new point in 2d. Also, when done, we would need to only keep the downward facing triangles, which again uses the third coordinate.

      So the way I see it, your algorithm is a more general version of incremental flipping, or simply the convex hull equivalent. Incremental flipping seems to me the special case of your algorithm, where the third coordinate is x^2+y^2 for all points and where more constraints have been introduced.

      I'm not sure if this is correct, and I'm ready to be proven wrong, but after working with the incremental flipping algorithm for a while and now your algorithm, it seems logical to me.

      I'm planning to implement your algorithm in 3d and hopefully 4d soon, so I might prove myself wrong, we'll see :)

    2. A correction:
      ... your algorithm uses a third coordinate and _several_ plane tests ...

      Incremental flipping takes just any point and finds the triangle. Your algorithm takes any triangle and finds the point (furthest away). Slightly different, but similar :)

  14. I implemented this (or something very similar) about two years ago in Codemasters' EGO Physics. It was very useful, and quick for updating the bounding volumes of deforming mesh segments.

    Needed some tweaking and careful consideration for co-planar point handling (which algorithm doesn't ;)

    As with others, roughly a 400-line monolithic c++ function :)

    Replies
    1. This page is a really good start http://www.cs.huji.ac.il/course/2004/compgeom/slides/CG-lecture01-CH-6spp.pdf.
      This method is described as Incremental 3D.
