Uniformly distributed points on sphere

Sooner or later everybody will need uniformly distributed points on a sphere. There doesn't seem to be a standard method for doing this, so I wrote a very simple iterative algorithm that pushes verts away from each other while continuously normalizing the point data. This will eventually reach a stable state where the distance between any two neighboring points is very similar. Performance is terrible but it gets the job done, so only use this for offline stuff. There is also an option for distributing points on a hemisphere (y > 0). Set the number of iterations to at least the number of input points for a good distribution.

Source code: uniformpoints.cpp
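The linked file is the actual implementation; below is just a minimal sketch of the idea in the paragraph above, where every vertex is pushed away from all the others and then projected back onto the sphere each iteration. The vector type, the inverse-square repulsion weighting, the step size, and the hemisphere handling (mirroring into y > 0) are all my own assumptions, not necessarily what uniformpoints.cpp does.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Distribute 'count' points roughly uniformly on the unit sphere by
// repeatedly pushing all point pairs apart and re-normalizing.
std::vector<Vec3> uniformPoints(int count, int iterations, bool hemisphere)
{
    // Start from random non-degenerate directions on the sphere
    std::vector<Vec3> points(count);
    for (Vec3& p : points)
    {
        do
        {
            p = { rand() / (float)RAND_MAX * 2 - 1,
                  rand() / (float)RAND_MAX * 2 - 1,
                  rand() / (float)RAND_MAX * 2 - 1 };
        } while (p.x * p.x + p.y * p.y + p.z * p.z < 1e-4f);
        p = normalize(p);
    }

    const float step = 0.01f; // repulsion strength per iteration (assumed)
    for (int it = 0; it < iterations; it++)
    {
        for (int i = 0; i < count; i++)
        {
            // Sum up a push away from every other point, weighted by the
            // inverse square of the distance so close pairs repel hardest
            Vec3 push = { 0, 0, 0 };
            for (int j = 0; j < count; j++)
            {
                if (j == i) continue;
                Vec3 d = { points[i].x - points[j].x,
                           points[i].y - points[j].y,
                           points[i].z - points[j].z };
                float d2 = d.x * d.x + d.y * d.y + d.z * d.z + 1e-6f;
                push.x += d.x / d2;
                push.y += d.y / d2;
                push.z += d.z / d2;
            }
            points[i].x += push.x * step;
            points[i].y += push.y * step;
            points[i].z += push.z * step;

            // Hemisphere option: mirror strays back into y > 0 (assumed handling)
            if (hemisphere && points[i].y < 0)
                points[i].y = -points[i].y;

            // The "continuous normalization": project back onto the sphere
            points[i] = normalize(points[i]);
        }
    }
    return points;
}
```

Following the advice above, a call like `uniformPoints(64, 64, false)` runs at least as many iterations as there are input points.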



Comments

  1. Or, in other words, a simple (projected) Gauss-Seidel position solver.

  2. Yes, exactly. This must by definition be applied physics :)



Popular posts from this blog

Cracking destruction

Smash Hit is a game built entirely around destruction. We knew from the beginning that destruction had to be fully procedural and also 100% reliable. Prefabricated pieces that break the same way every time simply wouldn't be enough for the type of game we wanted to make. The clean art style and consistent materials made breakage easier, since everything that breaks is made out of the same, solid glass material.

Procedural breakage

Physically based breakage is hard. Really hard. I remember NovodeX had some kind of FEM approximation for breakage in the early days of PhysX, but it never worked well and didn't look very convincing. I think for most game scenarios it is also complete overkill, especially for brittle fracture like glass. I designed the breakage in Smash Hit around one rough simplification – objects always break where they get hit. This is not true in the real world, where tension builds up in the material and objects tend to break at their weakest spot, but hey, we…

Bokeh depth of field in a single pass

When I implemented bokeh depth of field I stumbled upon a neat blending trick almost by accident. In my opinion, the quality of depth of field is more related to how objects at different depths blend together than to the blur itself. Sure, bokeh is nicer than gaussian, but if the blending is off the whole thing falls flat. There seem to be many different approaches to this out there, most of them requiring multiple passes and sometimes a separation of what's behind and in front of the focal plane.

I'm not going to get into the technical details of lenses, circle of confusion, etc. It has been described very well many times before, so I'm just going to assume you know the basics. I can try to summarize what we want to do in one sentence – render each pixel as a disc whose radius is determined by how out of focus it is, also taking depth into consideration "somehow".
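To make that one sentence concrete, here is what the per-pixel disc radius might look like under the standard thin-lens model. This is a generic textbook formula with my own parameter names, not necessarily the exact math the post uses.

```cpp
#include <cmath>

// Disc (circle of confusion) radius for a pixel at 'depth', thin-lens model.
// All distances share one unit; the result is the radius on the sensor plane.
float cocRadius(float depth,       // scene depth of the pixel
                float focalDepth,  // distance that is perfectly in focus
                float focalLength, // lens focal length
                float aperture)    // aperture diameter
{
    float coc = aperture * focalLength * (depth - focalDepth)
              / (depth * (focalDepth - focalLength));
    return std::fabs(coc); // the sign only says in front of or behind focus
}
```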

Taking depth into…

Screen Space Path Tracing – Diffuse

The last few posts have been about my new screen space renderer. Apart from a few details I haven't really described how it works, so here we go. I split the entire pipeline up into diffuse and specular light. This post will focus on diffuse light, which is the hard part.

My method is very similar to SSAO, but instead of taking a number of samples on the hemisphere at a fixed distance, I raymarch every sample against the depth buffer. Note that the depth buffer is not a regular, single-value depth buffer; each pixel contains front and back face depth for the first and second layers of geometry, as described in this post.

The increment for each step is not view dependent, but fixed in world space, otherwise shadows would move with the camera. I start with a small step and then increase the step exponentially until I reach a maximum distance, at which point the ray is considered a miss. Needless to say, raymarching multiple samples for every pixel is very costly, and this is without …
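As a rough sketch of the stepping scheme just described: the first increment is small and fixed in world space, each subsequent step grows exponentially, and anything past a maximum distance counts as a miss. The constants and the `hitsDepthBuffer` helper are placeholders of mine; the real test against the two-layer front/back depth buffer is in the linked post.

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical helper: true if 'p', projected into the depth buffer, falls
// inside either geometry layer (the front/back depth pairs described above).
bool hitsDepthBuffer(Vec3 p);

// March one sample ray with exponentially growing world-space steps.
bool raymarch(Vec3 origin, Vec3 dir, Vec3* hit)
{
    const float startStep = 0.02f; // world-space start step (assumed value)
    const float growth    = 1.5f;  // per-step growth factor (assumed value)
    const float maxDist   = 4.0f;  // miss beyond this distance (assumed value)

    float t = 0.0f;
    for (float step = startStep; t < maxDist; step *= growth)
    {
        t += step; // advance by the current step, then grow it
        Vec3 p = { origin.x + dir.x * t,
                   origin.y + dir.y * t,
                   origin.z + dir.z * t };
        if (hitsDepthBuffer(p))
        {
            *hit = p;
            return true;
        }
    }
    return false; // past maxDist the ray is considered a miss
}
```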