
How Granny Got the Look


In my last post I mentioned briefly how all graphical objects in Granny Smith are made out of 2D polygons which are transformed into 3D at load time. The method was never designed for the sometimes complex environments in the game, but we decided to stick with it instead of bringing in separate 3D modelling software. At times real 3D objects could have come in handy, but overall the current workflow is preferable since it's much more efficient. There is no need to track separate files or assets - every level is totally self-contained. Because the 2D data is so small, we don't even use instancing, so there is no risk of trashing another level when altering objects.

This is how a factory level looks in the editor.


The most fundamental transform is a simple extrude, but we can also apply a chamfer or fillet in the process. This is used extensively, especially for round hills and other natural shapes in the game. This beveling is done by gradually shrinking the polygon while extruding it, in one or more iterations.
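
As a rough sketch of that idea (my illustration, not the game's actual code - Polygon2D and the shrink() helper are assumptions, with shrink() shown against Clipper further down), the bevel loop might look like this:

    #include <vector>

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };
    typedef std::vector<Vec2> Polygon2D;
    typedef std::vector<Vec3> Ring;

    // Offset a closed polygon inward; see the Clipper snippet further down.
    Polygon2D shrink(const Polygon2D &poly, float amount);

    // Build one vertex ring per bevel iteration: each step moves forward in
    // depth while the outline shrinks. Consecutive rings are then stitched
    // together with triangles.
    std::vector<Ring> beveledExtrude(const Polygon2D &poly,
                                     float bevel, int steps)
    {
        std::vector<Ring> rings;
        Polygon2D current = poly;
        for (int i = 0; i <= steps; i++) {
            float z = bevel * i / steps;
            Ring ring;
            for (const Vec2 &v : current)
                ring.push_back({ v.x, v.y, z });
            rings.push_back(ring);
            if (i < steps)
                current = shrink(current, bevel / steps);
        }
        return rings;
    }

A constant shrink and depth step per iteration gives a chamfer profile; distributing the steps along a quarter circle instead would give a fillet.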

Shrinking or expanding a polygon is not as easy as it sounds. For "well-behaved" polygons there is no problem, but sometimes vertices disappear in tight corners, or new ones must be added. I tried several different ways of implementing this myself, but it's really hard to find a robust method that never fails, or even one that fails gracefully. After a few failed attempts I found the awesome Clipper library, which can do all kinds of polygon operations, including expanding and shrinking. It's reasonably fast, very robust and super-easy to use - I highly recommend it.

Even with Clipper, getting the beveling to work correctly was not trivial. Clipper does not track or correlate vertices, so after a shrink operation you have no idea how the new vertices relate to the old ones, which makes it really hard to stitch the two polygons together with triangles. It took me a few tries to implement a robust stitching algorithm, but I finally came up with one that handles all well-behaved polygons, and most (but not all) tricky corner cases.
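
For reference, here is roughly what a shrink operation looks like with Clipper. This sketch uses the Clipper 6.x C++ API (ClipperOffset); the post predates that version, so the exact calls the game used may differ, and the coordinate scale factor is my own choice (Clipper works on integer coordinates). Vec2 is the small struct from the sketch above.

    #include "clipper.hpp"   // Angus Johnson's Clipper library
    #include <vector>

    static const double SCALE = 1000.0;  // Clipper uses integer coordinates

    // Shrink a closed polygon by 'amount' units. Note that the result may
    // contain zero, one or several polygons, and its vertices have no
    // correlation to the input - which is exactly the stitching problem
    // described above.
    ClipperLib::Paths shrinkWithClipper(const std::vector<Vec2> &poly,
                                        float amount)
    {
        using namespace ClipperLib;

        Path p;
        for (const Vec2 &v : poly)
            p.push_back(IntPoint((cInt)(v.x * SCALE), (cInt)(v.y * SCALE)));

        ClipperOffset co;
        co.AddPath(p, jtRound, etClosedPolygon);

        Paths solution;
        co.Execute(solution, -amount * SCALE);  // negative delta shrinks
        return solution;
    }

For the stitching itself, one common approach (not necessarily the one the game uses) is to walk the old and new loops simultaneously, always advancing on whichever loop produces the shorter bridging edge; that naturally handles loops with different vertex counts.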




Shadows also deserve a mention. I started experimenting with different methods for smooth soft shadows very early on. The traditional way of using precomputed low resolution shadow maps didn't quite fit our needs, because all geometry is generated on the fly and levels can be quite large. Since most geometry in Granny Smith consists of front-facing, extruded, flat 2D polygons, I came up with a scheme where the shadows are semi-transparent triangles created by projecting the 2D polygons onto each other. Thanks to Clipper, I already had a great toolbox for this. Each polygon is expanded and clipped against overlapping polygons in the background. The resulting "shadow" polygon then uses vertex coloring to smooth out the penumbra and a special shadow shader to achieve quadratic fall-off. All these shadow triangles are put in separate vertex buffers and rendered simultaneously.

The engine supports dynamic soft shadows as well, but they are rather expensive, since all the geometry clipping needs to happen every frame, so they are only used in a few places in the game. Because shadows are computed from the 2D polygons, they only appear in the XY plane. To somewhat overcome this I also added self-shadowing along the extruded surfaces using the same vertex coloring scheme, so creases get a darker tint.
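
My reading of the shadow construction, as a sketch (again using the Clipper 6.x API; the penumbra parameter and the per-vertex alpha comment are my interpretation of the description above):

    // Build a soft-shadow polygon: expand the casting polygon, then keep
    // only the part overlapping the receiving polygon behind it.
    ClipperLib::Paths shadowPolygon(const ClipperLib::Path &caster,
                                    const ClipperLib::Path &receiver,
                                    double penumbra)
    {
        using namespace ClipperLib;

        // Expand the caster so the shadow extends beyond the silhouette.
        ClipperOffset co;
        co.AddPath(caster, jtRound, etClosedPolygon);
        Paths expanded;
        co.Execute(expanded, penumbra);

        // Clip against the receiver in the background.
        Clipper c;
        c.AddPaths(expanded, ptSubject, true);
        c.AddPath(receiver, ptClip, true);
        Paths shadow;
        c.Execute(ctIntersection, shadow, pftNonZero, pftNonZero);
        return shadow;  // triangulated with per-vertex alpha for the penumbra
    }

The quadratic fall-off could then be as simple as squaring the interpolated vertex alpha in the fragment shader.
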
The anatomy of Granny Smith

The characters are basically composites of 2D sprites, slightly rotated to compensate for the camera angle, so they are somewhere in between oriented sprites and billboards. There was never a discussion about using "real" 3D characters for this game. My personal opinion is that 2D characters are highly underrated for this type of game. They obviously yield better performance, but I'd also argue they look better in many cases, especially on low resolution devices. Polygon aliasing disappears completely, because all edges are drawn into the alpha channel of the texture and rendered with mipmapping, so the characters always look crisp and sharp.
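
As a sketch of the "in between an oriented sprite and a billboard" idea - this is my interpretation with assumed names, not the actual character code (Vec3 as in the earlier sketch):

    #include <cmath>

    // Rotate a sprite-local vertex around the vertical axis by a fraction
    // of the camera yaw. blend = 0 keeps a plain oriented sprite in the XY
    // plane, blend = 1 makes it fully face the camera like a billboard.
    Vec3 orientSpriteVertex(const Vec3 &local, float cameraYaw, float blend)
    {
        float a = cameraYaw * blend;
        float c = std::cos(a), s = std::sin(a);
        return { c * local.x + s * local.z,
                 local.y,
                 -s * local.x + c * local.z };
    }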


Characters are not the only area where we combine 2D graphics with 3D objects - grass and foliage are two more examples. The grass is added procedurally, while the decals for trees and bushes are placed manually along the rim of the object to hide sharp polygon edges and add extra detail. The grass is rendered with a special vertex shader to make it sway in the wind.
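
In the game the sway runs in a vertex shader; here is a sketch of the classic technique (the constants, names and wave shapes are my own, not taken from the game):

    #include <cmath>

    // Horizontal wind displacement for one grass vertex. 'weight' is 0 at
    // the root of the blade and 1 at the tip, so only the tips move.
    float grassSway(float worldX, float time, float weight)
    {
        // Two sine waves at different frequencies read as gusty wind
        // rather than a metronome.
        float wave = std::sin(time * 1.3f + worldX * 0.5f)
                   + 0.5f * std::sin(time * 2.7f + worldX * 1.1f);
        return 0.05f * wave * weight;
    }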


After completing a level, you get a replay with vignette, sepia color and occasional blurring, plus added dust and scratches to give the impression of an old movie. The vintage effect is just a shader in a post pass, but the replay itself features alternative camera angles and slow-motion effects which took some time to get right. The camera angles are determined by what's going on in the game - for example, it often switches to a slow-motion close-up right before an object fractures.

The replay data is basically just recorded input for the player, plus position correction data in case of divergence, similar to a networked multiplayer game, but I also record "special events" that are used to trigger camera angles. The replay data is analyzed and all the camera angles are decided before the replay starts - choosing camera angles on the fly wouldn't work, since you want to switch camera before the action happens. Getting everything in the game to support slow-motion playback without diverging too much was a real challenge, and you can still see artefacts here and there. The replay system also drives the apple thief playback (in very subtle slow-motion) during regular gameplay.
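
A sketch of what the recorded data and the camera pre-pass could look like - the structures, event names and the 30-frame lead time are pure guesses from the description above, not the actual format:

    #include <vector>

    struct ReplayFrame
    {
        unsigned input;     // recorded player input for this frame
        Vec3 correction;    // position correction in case of divergence
    };

    enum EventType { EVENT_FRACTURE, EVENT_JUMP, EVENT_CRASH };

    struct ReplayEvent
    {
        int frame;          // when the event happens during playback
        EventType type;
    };

    struct CameraCut
    {
        int frame;          // when to cut
        int angle;          // which camera to cut to
        float timeScale;    // 1 = normal speed, < 1 = slow motion
    };

    // Decide all camera angles before playback starts, cutting a little
    // ahead of each interesting event so the camera is already in place
    // when the action happens.
    std::vector<CameraCut> planCameras(const std::vector<ReplayEvent> &events)
    {
        std::vector<CameraCut> cuts;
        for (const ReplayEvent &e : events)
            if (e.type == EVENT_FRACTURE)
                cuts.push_back({ e.frame - 30, /*close-up*/ 1, 0.25f });
        return cuts;
    }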


Comments

  1. Hi, congrats on creating a great game!

    I think it's great that you as a technology-inclined (physics, graphics) programmer are able to find pleasure and fulfillment (at least I hope you are) while making small, independent games. I was always convinced that the only reasonable way to have an interesting job in graphics/animation/physics programming in games would be to become part of a huge team that actually has positions for people doing exclusively this. You have shown that it's not necessarily the case.
