
Academic frustration and running liquids

I experimented with fluids about five years ago, at Meqon. Back then I didn't really have a clue about fluid dynamics. I guess it's fair to say I still don't, but I have lately read a few papers on SPH (smoothed-particle hydrodynamics).

It's kind of remarkable, really, how complicated the writing in those papers is, given that they all land at something quite simple. Sometimes I get the feeling that the research community is intentionally making scientific papers look more complicated than necessary to boost their egos. I wish there were research papers written for engineers, not for other researchers, where you could read stuff like "And when this little fellow gets squished, it kind of pushes outwards on all the neighboring particles, proportional to their respective distance and density."

I don't like formulas. I can read them, but they are very unintuitive to me, and probably to many other programmers. I tend to believe that 3D computer graphics programmers are especially prone to finding formulas unintuitive, since we are trained to visualize complex things in our minds. A description in plain English is easier to visualize than a mathematical formula. For me at least. Maybe I just suck at math :)

A good engineering resource for physics is Danny Chapman's rowlhouse site, which has the source code for his physics project, jiglib, including an SPH simulation, along with his comments and ideas.

Back in 2005 I couldn't simulate running liquids, only goo and clay-like fluids. I realize now that this was related to my urge to express everything through pairwise interactions. One would think that a particle-based fluid is just a bunch of point masses interacting, but in SPH a particle represents a sample of the implicit field, not just a point mass.
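To make that concrete, here is a minimal sketch of SPH density estimation (the poly6 kernel from Müller et al. 2003 is the usual choice; all the names here are my own, not from any particular implementation): every particle samples the mass field by summing kernel-weighted contributions from nearby particles, including itself.

```cpp
#include <cmath>
#include <vector>

struct Particle {
    float x, y, z;   // position
    float density;   // sampled field density, not an intrinsic property
};

// Poly6 smoothing kernel, a standard choice for SPH density estimation.
// h is the smoothing radius, r2 the squared distance between two particles.
float poly6(float r2, float h) {
    float h2 = h * h;
    if (r2 >= h2) return 0.0f;
    float d = h2 - r2;
    return 315.0f / (64.0f * 3.14159265f * powf(h, 9.0f)) * d * d * d;
}

// The density at a particle is a kernel-weighted sum over its neighbors,
// i.e. a sample of the implicit field rather than a point-mass property.
void computeDensity(std::vector<Particle>& ps, float mass, float h) {
    for (Particle& pi : ps) {
        pi.density = 0.0f;
        for (const Particle& pj : ps) { // real code would use a neighbor grid
            float dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
            pi.density += mass * poly6(dx * dx + dy * dy + dz * dz, h);
        }
    }
}
```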

Anyway, I implemented fluids in my solver, first as plain SPH, but I found it difficult to tweak and quite unstable, and it didn't interact well with rigid bodies, so I went searching for a semi-implicit formulation instead. It was actually quite tricky to come up with the right formulation, but eventually I think I nailed it. It looks good, it's unconditionally stable and it interacts naturally with rigid bodies without any hacks. Since it's semi-implicit it does get squishy at low iteration counts, but hey - fluids are no more incompressible than rigid bodies, right?
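For illustration only - this is a generic sketch of the idea, not the formulation developed here - a semi-implicit density solver can treat the density error of each particle as a constraint and relax it over several iterations with pairwise corrections, much like sequential impulses for rigid body contacts. That is also why low iteration counts leave the fluid squishy. This reuses Particle and computeDensity from the sketch above; restDensity and stiffness are made-up tuning parameters.

```cpp
#include <cmath>
#include <vector>

// Iteratively relax density errors by pushing over-compressed particle
// pairs apart along their axis. Fewer iterations -> squishier fluid.
void relaxDensity(std::vector<Particle>& ps, float mass, float h,
                  float restDensity, float stiffness, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        computeDensity(ps, mass, h);
        for (size_t i = 0; i < ps.size(); ++i) {
            for (size_t j = i + 1; j < ps.size(); ++j) {
                float dx = ps[j].x - ps[i].x;
                float dy = ps[j].y - ps[i].y;
                float dz = ps[j].z - ps[i].z;
                float r = std::sqrt(dx * dx + dy * dy + dz * dz);
                if (r >= h || r < 1e-6f) continue;
                // Average density error of the pair; only push apart when
                // compressed, never pull together.
                float err = 0.5f * (ps[i].density + ps[j].density) - restDensity;
                if (err <= 0.0f) continue;
                // Scale the correction by the kernel falloff so the closest
                // pairs get the strongest push.
                float w = 1.0f - r / h;
                float d = stiffness * (err / restDensity) * w * 0.5f;
                float nx = dx / r, ny = dy / r, nz = dz / r;
                ps[i].x -= nx * d; ps[i].y -= ny * d; ps[i].z -= nz * d;
                ps[j].x += nx * d; ps[j].y += ny * d; ps[j].z += nz * d;
            }
        }
    }
}
```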

[Two simulation videos embedded here.]
Oh, and a note on performance - no, they are NOT real-time simulations. Quite far from it, actually. The second one runs at maybe one frame per second. Surface generation takes a good amount of time, but the number one bottleneck is actually neighbor finding. I haven't looked into it that much, since it's madness doing this on a CPU nowadays anyway, but it's kind of frustrating that something that sounds so simple is the most time-consuming part. Finding the neighboring particles is usually three to four times as expensive as running the solver. What's even more frustrating is that it's probably mostly due to cache misses, so rolling out the whole arsenal of SIMD tools is not going to help even the tiniest bit... The whole pipeline should scale well with cores though, so could someone please get that Larrabee going again?
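For reference (the post doesn't say which method is used, and a comment below asks the same question), the textbook approach to neighbor finding is a uniform grid or spatial hash with cell size equal to the smoothing radius, so each query only touches the 27 surrounding cells. A sketch, reusing the Particle struct from above:

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hash each particle into a uniform grid with cell size equal to the
// smoothing radius h, then gather candidates from the 27 cells around it.
struct SpatialHash {
    float cellSize;
    std::unordered_map<uint64_t, std::vector<int>> cells;

    uint64_t key(int ix, int iy, int iz) const {
        // Pack three 21-bit cell coordinates into one 64-bit key.
        return (uint64_t(uint32_t(ix) & 0x1FFFFF) << 42) |
               (uint64_t(uint32_t(iy) & 0x1FFFFF) << 21) |
                uint64_t(uint32_t(iz) & 0x1FFFFF);
    }

    void build(const std::vector<Particle>& ps) {
        cells.clear();
        for (int i = 0; i < (int)ps.size(); ++i) {
            int ix = (int)std::floor(ps[i].x / cellSize);
            int iy = (int)std::floor(ps[i].y / cellSize);
            int iz = (int)std::floor(ps[i].z / cellSize);
            cells[key(ix, iy, iz)].push_back(i);
        }
    }

    // Gather indices of all particles in the 27 cells around position p.
    void query(const Particle& p, std::vector<int>& out) const {
        out.clear();
        int ix = (int)std::floor(p.x / cellSize);
        int iy = (int)std::floor(p.y / cellSize);
        int iz = (int)std::floor(p.z / cellSize);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dz = -1; dz <= 1; ++dz) {
                    auto it = cells.find(key(ix + dx, iy + dy, iz + dz));
                    if (it == cells.end()) continue;
                    out.insert(out.end(), it->second.begin(), it->second.end());
                }
    }
};
```

The gather in query is exactly the scattered, cache-unfriendly access pattern described above, which is why sorting particles by cell index each frame tends to help locality more than SIMD does.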


Comments

  1. I totally agree about the academic part... I have struggled with many papers that describe simple things in complicated ways.

    Martin Magnusson wrote about this in a very funny article:

    http://www.martinmagnusson.com/publications/magnusson-2006-scientific-writing.pdf

  2. What algorithm are you using to find neighbors?

