Sunday, October 15, 2017

Depth of field in VR

I have always been very fascinated by depth of field in computer graphics. For me, it often defines photo realism, mimicking the shortcomings of a real camera. Naturally it is a poor match for interactive applications because the computer doesn't know what the user is looking at, but I've tried to squeeze depth of field into as many of the Mediocre games as I could get away with. In Smash Hit I wanted to use it for everything, but Henrik thought it looked too weird and made it harder to aim (he was probably right), so we ended up enabling it only in the near field. In Does not Commute and PinOut, which both have fixed camera angles, I'm doing a wonderful trick, enabling a slight depth of field by seamlessly blurring the upper and lower parts of the screen. This is super cheap and takes away depth artefacts completely.

Anyway, since I'm so obsessed with depth of field I've started experimenting with it in VR. This has opened up a whole can of new problems and frustrations and I'd like to share some of my findings so far.

When I first tried it out with a fixed focal length it just looked weird, and I couldn't really put my finger on why it was so different from a flat screen. Being able to focus on the blur itself gives an extremely artificial look. Some people say your eyes can't focus on different things in VR because the screen is always at the same distance from your eyes. This is only partly correct. You can still turn your eyes independently in VR. A lot of what we perceive as focus in real life is not related to the lens in the eye, but to the angle of your eyes (vergence). This is what causes double vision behind or in front of what you're focusing on, and it is probably a far more important depth cue than the blur itself. This is already in VR "for free". It wasn't until I tried depth of field in VR that I understood why all the VR experiences I've tried have been a bit "messy", like a three-dimensional clutter of too much information. When the depth of field is there and coincides with the double vision that is already there, it gives a certain calmness to the image that is very pleasant.



Using depth of field as a tool for directing the viewer's attention towards a specific area is probably never going to work in VR. I'm still not sure why it works so extremely well in 2D but fails so miserably in VR, but it's probably because the viewer in VR can in fact "focus" on the out-of-focus areas by adjusting the angle of the eyes. However, depth of field could still add a lot to VR by removing the visual clutter, not really adding anything new but making the experience less painful.

In order to make an adaptive depth of field, one that adjusts the focal length dynamically, I've implemented a system that is pretty similar to what has been used in (D)SLR cameras for decades – a set of focus points that are all weighted together, with the central points being more dominant. For each focus point I cast a small sphere from the camera and record the hit distance. The reason I'm using sphere casting instead of raycasting is that I want focus points to ignore minor gaps between geometry. It also gives smoother focus transitions when moving the camera.
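For reference, the weighting boils down to something like this. This is just a sketch of the idea, not the actual implementation; the focus point layout, the sphereCast helper, the weights and the math types are all placeholders:

// Sketch: weigh together the hit distances of a set of focus points,
// giving the central points higher weight. sphereCast() is assumed to
// cast a small sphere from the camera and return the hit distance.
float computeFocalDistance(const Camera& camera)
{
    float weightedSum = 0.0f;
    float totalWeight = 0.0f;
    for (int i = 0; i < FOCUS_POINT_COUNT; i++)
    {
        float dist;
        if (sphereCast(camera.pos, camera.focusPointDir(i), FOCUS_SPHERE_RADIUS, dist))
        {
            float w = focusPointWeight(i);   // larger for points near the center
            weightedSum += dist * w;
            totalWeight += w;
        }
    }
    if (totalWeight == 0.0f)
        return DEFAULT_FOCAL_DISTANCE;       // nothing hit, keep a fallback focus
    return weightedSum / totalWeight;
}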



The focus point system works relatively well, but it tends to ignore small objects in the foreground, not because the sphere casts miss them (they do not), but because only a few focus points hit them, making a minor contribution to the final focal length. To overcome this I introduced a minimum and maximum forced focus range, in which everything is in focus. We are now leaving physical territory, because with a real lens there is only a single depth where objects are in focus. However, a forced focus range is definitely not any less accurate than having focus everywhere, so who cares. The forced focus range starts out at the focal length computed from the focus points. For any focus point closer than that, I simply adjust the range to include that distance. This modification turned out pretty well. It tends to keep most objects in the center area of the screen in focus all the time, so to some extent it cancels out the whole point of adding depth of field, but having out-of-focus objects in the peripheral vision and behind the main objects in the center gives a much more pleasant looking image.
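In code the range adjustment is just a clamp. Another sketch with made-up names:

// Sketch: start the in-focus range at the weighted focal distance and widen
// it to include any focus point that hit something closer than that.
float focusNear = focalDistance;
float focusFar  = focalDistance;
for (int i = 0; i < FOCUS_POINT_COUNT; i++)
{
    float dist;
    if (focusPointHit(i, dist) && dist < focusNear)
        focusNear = dist;    // pull the near edge in to cover this object
}
// Fragments between focusNear and focusFar are rendered sharp; blur is
// computed from the distance outside that range.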

It does have some drawbacks of course. You can still focus on the blur, but it really only looks weird when it happens in the near field, which with the focus range modification can only happen if you look away from the center (and honestly, the lenses in today's HMDs are so bad that everything is blurry there anyway).

So is it worth the hassle? I'm not sure, but I think so. When it works it looks truly awesome, and when it fails it's definitely annoying. I'll keep experimenting with this one.

Finally, here is how I do the actual depth of field. Probably nothing new in there, but everyone does it a little differently. Here is my version:

1) Fill up the DOF (depth of field) buffer using the final composited image before bloom and tone mapping. Store it in an RGBA texture, where the alpha channel represents the amount of blur. I use a half-size texture for this. To compute the blur amount, I'm using the formula k*(1/focalPoint-1/distance) (see the sketch after this list).

2) Blur the alpha channel of the DOF buffer horizontally and vertically. The size of the blur is based on the alpha value. The blur must be depth aware, so that: A) Fragments further away than the current fragment do not contribute to the blur. This prevents objects behind something from blurring what is in front. B) Fragments that are closer do contribute, but only scaled by their alpha value. This will make blurry objects in front of something sharp bleed out beyond their physical extent.

3) Blur the RGB values of the DOF buffer horizontally and vertically based on the blurred alpha. This pass must also be depth aware in exactly the same way as B) above.

4) Mix the RGB values of the DOF buffer into the final image using the blurred alpha channel.
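To make steps 1 and 2 a bit more concrete, here is roughly what the per-fragment math could look like, written as plain C++ rather than shader code. The constant k and the depth bias are placeholders, and this is one possible reading of rules A and B, not the exact implementation:

// Step 1: blur amount from the distance to the plane of focus. The sign tells
// near field from far field; how the value is clamped and packed into the
// alpha channel is up to the implementation.
float blurAmount(float k, float focalPoint, float distance)
{
    return k * (1.0f / focalPoint - 1.0f / distance);
}

// Step 2: depth-aware weight of a neighbour sample when blurring the
// current fragment.
float sampleWeight(float centerDepth, float sampleDepth, float sampleBlur)
{
    if (sampleDepth > centerDepth + DEPTH_BIAS)
        return 0.0f;        // A) behind the current fragment: no contribution
    return sampleBlur;      // B) in front (or same surface): scaled by its blur
}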


Wednesday, September 13, 2017

Adventures in Screen Space

Eight years ago, just when I had started writing this blog, my second post was about screen space ambient occlusion. I used that renderer for all my physics experiments, leading up to the fluid simulation that became Sprinkle. At that point I left desktop computing in favor of mobile devices. Ten games later I'm now back to desktop machines, and I'm completely blown away by all the computing power.

For the first Sprinkle game I had to make dedicated geometry with holes in it when drawing large alpha blended overlays because the fill rate was so terrible. Now I'm running hundreds of lines of code doing really complex computations per pixel. Sorry, you have probably already adjusted, but this will take me a while.

So what would be more fitting than to freshen up that old physics renderer (well, more like starting from scratch, but still)? I have been wanting to experiment with physics in VR for a while and now is the time. For this I need a renderer that can handle a truly dynamic world with no precomputed lighting.


I have implemented screen space ambient occlusion with temporal reprojection filtering that takes a lot of the noise away without smearing out the result. I've always hated shadow maps. They are hard to implement and the result is usually disappointing, so for this renderer I tried doing shadows entirely in screen space by ray marching towards the light source. It's a bit of an experiment, but I find the results really interesting. The characteristics are very different from regular shadow maps – instead of getting precise but jagged shadows, this one gives imprecise but smooth, blurry shadows. I can't really decide if I like it or not. For a sunny outdoor setting, regular shadow maps are probably better, but for more diffuse, indoor lighting this is quite promising.
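The core of the pass is a simple march towards the light. This is a bare-bones sketch with made-up helpers (projectToScreen, sampleDepth) and placeholder constants; the soft look comes from jittering the steps and from the temporal reprojection filter, which is left out here:

// Sketch: march from the fragment's position towards the light in a few
// steps and compare against the depth buffer; if any step ends up behind
// the stored depth, something occludes the path to the light.
float screenSpaceShadow(Vec3 worldPos, Vec3 dirToLight)
{
    const int   STEPS    = 16;     // placeholder values
    const float MAX_DIST = 1.0f;
    const float BIAS     = 0.01f;
    for (int i = 1; i <= STEPS; i++)
    {
        Vec3 p = worldPos + dirToLight * (MAX_DIST * i / STEPS);
        Vec2 uv;
        float depthAlongRay;
        if (!projectToScreen(p, uv, depthAlongRay))
            break;                            // marched off screen, give up
        if (sampleDepth(uv) < depthAlongRay - BIAS)
            return 0.0f;                      // occluded, in shadow
    }
    return 1.0f;                              // no occluder found, fully lit
}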



There is also depth of field close to the camera, done in four passes at half resolution, and motion blur on everything. I'm going for an old, analogue look on the final result, so any imperfection that can tone down the artificial computer graphics characteristics is a good thing. The ambient occlusion and screen space shadows do add a little bit of noise, but there is one cheap and paradoxically effective way of hiding unwanted noise: add more noise. So at the final stages of the pipeline I add 5-7% of greyscale noise, which hides some of the noise in occluded areas and adds to the analogue look.


I have a bloom pass as well and I just started playing with tone mapping. I'm not sure I'm really getting it, but I'll keep experimenting. For anti-aliasing, my friend Ludde Andersson over at Scaupa pointed me to a temporal reprojection method that I found very interesting. Since I'm already doing temporal reprojection for the occlusion and shadows, it was quite easy to do the same for anti-aliasing. The idea is to move the viewport at sub-pixel resolution every frame and smooth out the result with an accumulation buffer. It also turned out that one of my absolute favourite games, Inside, has a great presentation on the topic from last year's GDC. The results are absolutely stunning. I'm not sure I have ever come across a new rendering technique that is so clever and simple, yet produces such fantastic results with almost no computational overhead. Am I missing something, or why isn't everybody using this?
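The jitter part is almost embarrassingly small. A sketch, assuming a jitter sequence helper and a translation matrix; the exact offsets are not necessarily the ones I use:

// Sketch: offset the projection matrix by a sub-pixel amount every frame;
// the accumulation buffer then averages the slightly different results.
Vec2 offset = subPixelJitter(frameIndex % 8);   // e.g. a Halton sequence in [0,1)
Mat4 jitteredProj = translation((offset.x - 0.5f) / (0.5f * screenWidth),
                                (offset.y - 0.5f) / (0.5f * screenHeight),
                                0.0f) * projection;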


Thursday, February 25, 2016

Pinball physics

This might be the first time I reveal something about an upcoming title on this blog, but here it is – we are currently working on a pinball game. Not a classical pinball game, it has a pretty cool twist, but the basic mechanic is still very similar. As a physics programmer this sounded easy at first. Hey, it's just a sphere. What could possibly be easier to simulate?

I wouldn't call myself a pinball player, but I've always enjoyed it and spent a lot of time with the old Amiga games Pinball Dreams/Fantasies and even more with Slam Tilt. It's an interesting challenge to create a pinball simulation because it is quite the opposite of what physics programmers are usually facing – forget stacks of boxes, thousands of objects, streaming geometry and convex decomposition. With pinball it's all about detail and accuracy. We are not aiming for maximum realism in this game, but I think realism is a good place to start and then tweak the parameters to fit the gameplay.

Let's go over the pinball components. There is the ball, not much to say. It's a perfect, solid sphere. The table is a glossy, very slippery, polished surface, similar to the floor in a bowling alley. We also have a lot of curved geometry, ramps and various obstacles. The last components are the flippers. This might be the hardest part to get right. They are built with a double-coiled solenoid actuating a hinged, plastic flipper covered with high-friction rubber. The main coil causes the initial strong pull, and at the end of the stroke it switches to just using the outer layer, holding the flipper up without overheating.


This particular setup seems to be important to give pinball its signature characteristics and enables a lot of advanced playing techniques that simply wouldn't work with a different setup, for example the live catch, drop catch and tip pass.
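One crude way to approximate the two-stage coil in a simulation would be a simple torque schedule on the flipper joint. This is speculation on how it could be modelled, with made-up numbers, just to illustrate the idea:

// Sketch: strong torque during the stroke, switching to a weak holding
// torque near the end-of-stroke angle, mimicking the two coil windings.
float flipperTorque(float flipperAngle, bool buttonHeld)
{
    const float END_OF_STROKE = 0.8f;   // radians, placeholder
    const float POWER_TORQUE  = 60.0f;  // strong initial pull, placeholder
    const float HOLD_TORQUE   = 8.0f;   // just enough to hold the flipper up
    if (!buttonHeld)
        return 0.0f;                    // return spring handled elsewhere
    return flipperAngle < END_OF_STROKE ? POWER_TORQUE : HOLD_TORQUE;
}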

Since the ball is solid steel it has a fairly high moment of inertia. The glossy table causes the ball to mostly slide across the surface, but when touching the high-friction rubber on a flipper, it has to start rotating. In physical terms this translates linear momentum into angular momentum, causing the ball to lose a lot of speed. This gives the player control, being able to capture the ball using the flippers. The reverse is of course also true: if the ball is rotating heavily and touches a flipper, it will translate angular momentum into linear momentum.

Curved geometry is important in order to alter the direction of motion of the ball without losing too much momentum. This is a bit of a headache in physics, since we're used to modelling most things using convex polyhedra, and now suddenly everything is concave! On the up-side, collision detection will not be a performance bottleneck with a single sphere, so we can easily use detailed triangle meshes for everything, but on the down-side, no matter how many triangles we use, they still aren't curved – they're flat!

Another challenge in pinball is the speed. The ball can easily travel at 6 m/s, which translates to more than three times its diameter per frame at 60 FPS. Fortunately, this can easily be solved using substepping, as performance is not an issue.
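In practice that just means running several smaller steps per frame. A sketch with placeholder function names and a fixed substep count (it could also be derived from the ball's speed):

// Sketch: substep the simulation so the ball moves only a fraction of its
// diameter per step, keeping collision detection reliable at high speeds.
void stepTable(float frameDt)
{
    const int SUBSTEPS = 8;             // placeholder
    float dt = frameDt / SUBSTEPS;
    for (int i = 0; i < SUBSTEPS; i++)
    {
        collide(dt);
        solve(dt);
        integrate(dt);
    }
}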

I'm only starting to explore this world and I will post more details as I dive deeper into each specific part.

Tuesday, February 24, 2015

Stack allocated containers

No matter what fancy allocator you come up with, memory allocation will always be expensive. In order to reduce memory allocations I have been using stack-allocated containers for the past four or five years, and I think it has worked really well, so I thought I'd write a post about them here. These methods have been used in all our games (Sprinkle, Granny Smith, Smash Hit, as well as our new project).

Using containers is really convenient and often necessary. Take the array for example:

MyArray<Object> array;

Everybody's got one. The problem with these is that you need to allocate memory when putting stuff in them. In many cases you know beforehand approximately how many objects you want to put in there, reducing it to a single allocation:

MyArray<Object> array;
array.reserve(50);

But if this code is in a function that is called frequently it would be much nicer if those first fifty objects were reserved on the stack instead of the heap, like this:

MyArray<Object, 50> array;

Now, the array object would have built-in storage for fifty objects and still be able to grow beyond that using regular heap allocations. Great! Or is it? What if you want to pass such an array by reference to a method? Say, a string splitting method:

void splitString(const String& delimiter, MyArray<String, 50>& result);

The obvious problem is that the output array now has to be stack allocated for exactly fifty objects. If we try to pass another array it won't compile, because C++ requires the template arguments to match:

MyArray<String, 8> result;
String str = "this is a test";
str.splitString(" ", result);

Compiler error!

This effectively kills the entire charm of using template arguments for the array size. So how can we design a container class that can be passed around by reference while still offering stack allocation with a template argument?

What I've done is to create a base class, without stack allocation, that can be used as a regular array:

template<class T>
class MyArray
{
    ...
    void* mData;
};

And then a subclass for the stack-allocated version:

template<class T, int size>
class MyArrayStack : public MyArray<T>
{
    MyArrayStack()
    {
        mData = mStackStorage;
    }
    ...
    char mStackStorage[size*sizeof(T)];
};

The problem now becomes resizing the array at the base class, because the base class won't know whether the data storage is heap or stack allocated, and we don't want virtual methods and a vtable for such a lightweight class. Therefore, the base class needs to be aware of the subclass and avoid freeing memory when it is stack-allocated.

Fortunately this is not very complicated, since the stack allocation will always be placed immediately after the array object itself in memory, so we can just look at the pointer when resizing. If the storage pointer is right after the object itself in memory we have a stack allocation:

void resize(int newSize)
{
    // ... allocate new heap storage and copy over the contents of mData ...
    if (mData != ((char*)this)+sizeof(MyArray))
        free(mData);
    // ... point mData at the new storage ...
}

Now we can modify the string splitting method to accept the base class:

void splitString(const String& delimiter, MyArray<String>& result);

This will enable us to pass a stack-allocated array of any size we want:

MyArrayStack<String, 8> result;
str.splitString(" ", result);

or

MyArrayStack<String, 128> result;
str.splitString(" ", result);

Compiler happy!

So the next big question would be – is this safe? There is still a small (well, microscopic) chance that a non-stack allocated array object lines up right at the end of the last stack frame, and that at the next memory address lies our heap-allocated storage data for it. In practice though this won't happen, because in standard memory allocators there is a header before the allocated data. Even if you use your own allocator without a header, this won't happen, because the memory area it is using would still need to be allocated by the OS, which will put a header in front of it.

I'm currently using this for arrays, sets, hash tables as well as memory streams, memory buffers and a few other things. It has been incredibly useful and probably one of the most important optimizations I have ever added to my framework. This effectively avoids memory allocations altogether without the hassle of using the typical "maxSize" for method arguments. I'm currently not using it for string objects (which all have a fixed stack allocated storage that depends on the project), but I'm kind of tempted to refactor that.

Friday, February 20, 2015

Physics tutorial at GDC 2015

I will give a talk about the fracture algorithm in Smash Hit at GDC 2015. Come by room 304 on Tuesday at 1:45 PM. The session will cover the actual fracture algorithm as well as the physics engine to support it and how fracture affects other subsystems in the game. There will be cool, shiny videos of stuff breaking :-P



Wednesday, May 28, 2014

Hail to the hall - Environmental Acoustics

One of our early goals with Smash Hit was to combine audiovisual realism with highly abstract landscapes and environments. A lot of effort was put into making realistic shadows and visuals, and our sound designer spent long hours finding the perfect glass breaking sound. However, without proper acoustics to back up the different environments, the sense of presence simply wouldn't be there.

To achieve full control over the audio processing and add environmental effects I needed to do all the mixing myself. Platform dependent solutions like OpenAL and OpenSL cannot be trusted here, because support for environmental effects is device/firmware specific and missing in most mobile implementations. Even if it was available, it would be virtually impossible to reliably map parameters between OpenSL and OpenAL. As in most cases with multi-platform game development, DIY is the way to go.

Showcasing a few different acoustic environments

Software mixer

Writing a software mixer is quite rewarding – a small, well defined task with a handful of operations performed on a large chunk of data, thus very suitable for SIMD optimization. A software mixer is also one of those few subsystems that has, or can have, a real-world analogue - the physical audio mixer. I chose a conventional, physical abstraction, so my interface classes are named Mixer, Channel, Effect, etc, but there might be better ways to structure it.

The biggest hurdle when writing a software mixer turned out to be the actual mixing. Two samples playing at the same time are added together, but what happens if they both play at maximum volume? The intuitive implementation, and what also happens in the real world, is clipping. This is what most real audio programs do. Clipping is a form of distortion, where minimum and maximum audio levels are simply clamped above or below the physical threshold, effectively destroying or reshaping the waveform. In audio software, you would typically adjust the levels manually in order to avoid clipping, but in games, where audio is interactive, this can be really tricky. Say for instance you have a click sound for buttons. If there are no other sounds playing you want the click played back at maximum volume, but if there is music in the background the volume level needs to be lowered. If there is an explosion nearby it needs to be adjusted even further to avoid clipping.

One way to reduce clipping is to transform the output signal in a non-linear fashion, so that it never really reaches the maximum level. This still has the problem that it will affect the result when there is only one sample playing. Hence, when the click is played back in isolation it won't be at maximum volume.

Some people suggest that the output should be averaged in the case of multiple channels. So if there are three sounds A, B and C playing at once, you mix them as (A+B+C)/3. This is not a good way to do it, because the formula doesn't know anything about the content of each channel (B and C can for instance be silent, still resulting in A played back at a third of the volume).

What we need is some form of audio compression - an algorithm that compresses audio dynamically, based on the current levels. Real audio compressors are pretty advanced, with a sliding window to analyze the current audio content and adjust the levels accordingly. Fortunately there is a "magic formula" that sounds good enough in most cases. I found this solution by Viktor T. Toth: mix=A+B-A*B, but when adapting it to floating point math I realized that a slight modification into mix=A+B-abs(A)*B is more suitable to deal with negative numbers. Each channel is added to the mix separately, one at a time:

float mix = 0.0f;
for (int i = 0; i < channelCount; i++)
{
    float c = channelSample[i];       // this channel's sample, in [-1, 1]
    mix = mix + c - fabsf(mix) * c;   // compress the sum towards [-1, 1]
}

This means that if there is only one channel playing, it will pass unmodified through the mixer. The same applies if there are two channels but one is completely silent. If both channels have the maximum value (1.0), the result will be 1.0, and anything in between will be compressed dynamically. It is definitely not the best or most accurate way to do it, but considering how cheap it is, it sounds amazingly good. I use this for all mixing in Smash Hit, and there are typically 10-20 channels playing simultaneously, so it does handle complex scenarios quite well.


I use three separate mixers in Smash Hit – the HUD mixer, which is used for all button clicks and menu sounds, the gameplay mixer, which represents all 3D sounds, and the music mixer, which is used for streaming music. The gameplay mixer has a series of audio effects attached to it to emulate the acoustics of different room types.

Reverb

Given how useful a reverb effect is in game development, it's quite surprising to me how difficult it was to find any implementations or even an explanation online. At first glance, the reverb effect seems much like a long series of small echoes, but when trying it, the result sounds exactly like that – a long series of small echoes, not the warm, rich acoustics of a big church. If one tries to make the echoes shorter, it turns more and more metallic, like being inside a sewage pipe.

There is a great series of blog posts about digital reverberation by Christian Floisand that contains a lot of the theory and also a practical implementation: Digital reverberation and Algorithmic Reverbs: The Moorer Design.

It uses a set of parallel comb filters passed through all-pass filters in series. A comb filter is basically a short delay line with feedback, representing reflected sounds, while the all-pass filters are used to thicken and diffuse the reflected sound by altering the phase. I don't know enough signal theory to fully understand the all-pass filter, but it works great and the implementation is fairly easy.

In addition to the comb filters and all-pass filters I also added a couple of tap-delays (delay lines without feedback), representing early reflections on hard surfaces, as well as low-pass filters in each comb filter, which is a great way to control the room characteristics. Christian's article suggests the use of six comb filters, but for performance reasons I cut it down to four. I'm using four tap-delays and two all-pass filters, plus a pre-delay on the entire late reflection network.
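For anyone who wants to try this, a single comb filter with a low-pass in the feedback path is a very small amount of code. A bare-bones sketch, not the actual implementation:

#include <vector>

// Sketch: a comb filter is a delay line with feedback; the one-pole low-pass
// in the feedback path damps high frequencies, which is what controls much of
// the room character.
struct CombFilter
{
    std::vector<float> delayLine;   // length sets the loop time
    int   index = 0;
    float feedback = 0.8f;          // how much of the reflection is fed back
    float damping  = 0.3f;          // low-pass amount in the feedback path
    float lowpassState = 0.0f;

    float process(float input)
    {
        float out = delayLine[index];
        lowpassState = out * (1.0f - damping) + lowpassState * damping;
        delayLine[index] = input + lowpassState * feedback;
        index = (index + 1) % (int)delayLine.size();
        return out;
    }
};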


All audio is processed in stereo in Smash Hit, so the reverb needs to be processed separately on the left and right channels. I randomize the loop times in the comb filters slightly differently for the left and right channels, which gives the final mix a very nice stereo spread and a much better sense of presence.

In addition to reverb I also implemented a regular echo as well as a low-pass filter. The parameters of these three filters are used to give each room its unique acoustics.

Tuesday, May 13, 2014

Cracking destruction


Smash Hit is a game built entirely around destruction. We knew from the beginning that destruction had to be fully procedural and also 100% reliable. Prefabricated pieces broken the same way every time simply wouldn't be enough for the type of game we wanted to make. The clean art style and consistent materials made breakage easier, since everything that breaks is made out of the same, solid glass material.

Procedural breakage

Physically based breakage is hard. Really hard. I remember NovodeX had some kind of FEM approximation for breakage in the early days of PhysX, but it never worked well and didn't look very convincing. I think for most game scenarios it is also completely overkill, especially for brittle fracture, like glass. I designed the breakage in Smash Hit around one rough simplification – objects always break where they get hit. This is not true in the real world, where tension builds up in the material, and objects tend to break at their weakest spot, but hey, we're not really after accurate breakage here. What we want is breakage that feels right, and looks cool.

In Smash Hit, when a breakable object gets hit, and the exerted impulse is above a certain threshold, I carve out a small volume around the point of impact and break that up into several smaller pieces that become new dynamic objects.


It sounds simple, but looks quite convincing. However, there are a few obstacles to overcome in the implementation:


  • Carving out a piece from a generic object in a robust way
  • Slicing that volume
  • Checking for topology changes in the original object. Carving out a piece may cause it to fall apart.

Plane splitting 

Carving out a volume from a generic mesh in a robust way is a really complex geometric problem. Furthermore, since every object in Smash Hit is physically simulated we want the resulting objects to be well-behaved in a physics-friendly format. From a physics point of view, I represent all objects as compounds of convex shapes, so the resulting broken object must also be a collection of convex shapes. Luckily there is one safe way to split up a convex object into two new objects that are also guaranteed to be convex – slicing it with a plane.

So to carve out a piece from the original object, I slice all the convex shapes with five bounding planes, slightly randomized around the point of impact. It will increase the total number of convex shapes used in the original object, but that is inevitable due to the fact that we are making the object more concave. It takes a bit of bookkeeping to get the splitting code right, but as long as the plane splitting is robust, the overall breakage algorithm will be too.
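The plane set itself can be very simple. A sketch of what generating it could look like; the base directions, random helpers, sizes and sign convention are placeholders and depend on how the splitting code is written:

// Sketch: five slightly randomized planes bounding a small volume around the
// impact point. Each convex shape in the object is then sliced by every plane.
void makeCarvePlanes(const Vec3& impactPoint, float radius, Plane outPlanes[5])
{
    for (int i = 0; i < 5; i++)
    {
        // Start from a fixed, roughly even set of directions and jitter it
        Vec3 normal = normalize(baseCarveDirection(i) + 0.2f * randomVector());
        float offset = radius * (0.8f + 0.4f * randomFloat());
        // Plane facing away from the impact point, roughly "offset" away
        outPlanes[i] = Plane(normal, dot(normal, impactPoint) + offset);
    }
}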

The carved out piece can be split up into smaller pieces using the same plane-splitting, so all we need is a really fast, reliable plane-splitting method for convex objects.


Robust splitting

The classic problem with plane-splitting is the handling of degenerate cases. Say for instance one of the faces coincides with the splitting plane, so that only one edge crosses the plane due to floating point precision. That can easily result in degenerate geometry, non-closed polyhedra or even broken data structures and hard crashes. To avoid that, I start by determining a side for each vertex (above or below the plane) and then base all edge-splitting on that information, hence only splitting an edge if its two vertices are on opposite sides of the plane. The split point for each edge is capped to be a certain distance away from both vertices, so that the resulting two shapes are always closed and well-defined, and all edges are above a certain length.
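The key pieces are tiny. A sketch of the vertex classification and the capped split point, with placeholder names and a parametric cap instead of an absolute distance:

// Sketch: every vertex is forced to one side of the plane (no "on the plane"
// case), and an edge is only split if its two vertices disagree.
bool isAbove(const Vec3& v, const Plane& plane)
{
    return plane.signedDistance(v) >= 0.0f;
}

// Parametric split point along the edge, kept away from both end points so
// the result never contains degenerate, near zero-length edges.
float splitParameter(float distA, float distB)
{
    const float MIN_T = 0.01f;                 // placeholder cap
    float t = distA / (distA - distB);         // distA and distB have opposite signs
    if (t < MIN_T) t = MIN_T;
    if (t > 1.0f - MIN_T) t = 1.0f - MIN_T;
    return t;
}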

I tried quite a few data structures for doing the plane-splitting before finding one that works well in practice. The biggest hurdle was that I needed a way to track vertices through multiple splits in order to keep vertex normals consistent. It might not be visible at a first glance, but vertex normals are used extensively to make nice gradients in the glass rendering and create a certain softness to the material. Recomputing those normals after splitting an object would create a deviation in the material that is very visible to the player. In the end, the half-edge data structure turned out perfect for the job, offering both high performance and very small memory footprint. I even use 16 bit integers instead of pointers to keep the size down.

Tracking topology

After carving out a hole in the object, it needs to be checked for topology changes. Depending on the shape, it might have been split into two or more separate objects. Finding connected components in a set of inter-connected nodes is classic graph theory. What constitutes a connection in this case is whether two shapes are touching face to face. The most elegant way to do this would be to track split faces between shapes and remember the connections, but that takes a lot of bookkeeping. I'm using a more pragmatic approach, where the distance between shapes is measured geometrically. I have earlier on this blog written about the GJK algorithm and how immensely useful it is for all kinds of geometric operations. In this case, we only want to know if two objects are within a certain distance of each other. This can be done as a simple overlap test in configuration space, which is extremely fast.

The algorithm for finding connected components goes like this:
// Assign each shape a unique id, then merge the ids of touching shapes
for (int i = 0; i < count; i++)
    id[i] = i;
// touching() is the GJK-based proximity test described above
for (int a = 0; a < count; a++)
    for (int b = 0; b < count; b++)
        if (id[a] != id[b] && touching(a, b))
        {
            int keep = id[a] < id[b] ? id[a] : id[b];
            int drop = id[a] < id[b] ? id[b] : id[a];
            for (int i = 0; i < count; i++)
                if (id[i] == drop)
                    id[i] = keep;
        }

After we're done, all shapes with the same id represent the same rigid body. This is obviously not a very fast general algorithm due to its cubic complexity, but since the number of shapes in a rigid body is usually within a few dozen, it works well in practice. The nice thing about it is the complete lack of state, which simplifies further breakage.


Simulation pipeline

A final word on breakage is the importance of getting the simulation pipeline right. With an off-the-shelf physics engine, contact impulses are typically analyzed after the simulation step to determine if something breaks. This gives the impression of unbreakable objects colliding and then being artificially split up afterwards. The key to natural looking breakage is to limit the contact forces of a breakable object during the simulation and break it up while preserving some of the motion. There are tricks to mix in some of the pre-break velocity on broken pieces, but I went all in and actually do multiple solves during breakage, so a simulation step typically involves running the following steps (see the sketch after the list):
  1. General collision detection
  2. Run solver with capped impulses on breakable objects
  3. Check if some contacts reached the cap and in that case break objects
  4. Collision detection for new objects
  5. Run the solver again on affected contacts with capped impulses
  6. Repeat from 3
  7. Integrate
This means that objects are created and broken several times during a simulation step before proceeding to integration. I have capped the number of solves per simulation step to three in order to limit performance impact.
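Structurally, a step ends up looking something like this. The function names are placeholders, not the actual engine API:

// Sketch of the simulation step outlined above: solve with capped impulses,
// break whatever hit the cap, collide the new pieces and solve again,
// capped to three passes before integrating.
void simulationStep(float dt)
{
    collisionDetection();                               // 1
    for (int pass = 0; pass < 3; pass++)
    {
        solveContacts(/*capBreakableImpulses=*/true);   // 2 and 5
        if (!breakObjectsAtCappedContacts())            // 3
            break;                                      // nothing broke
        collisionDetectionForNewObjects();              // 4
    }
    integrate(dt);                                      // 7
}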

This is the first game where I'm giving my low level physics engine a proper work-out. It has been in development on and off for almost four years so it's great to finally see it in action. Good destruction ties very deeply into the simulation pipeline, so it's not ideal to plaster it on top of an existing physics engine.