Development environment


A lot of people have asked about our development environment, so I thought I'd write a post about it here. Both Sprinkle and Granny Smith, as well as our two titles in development, follow roughly the same pattern. I do all day-to-day development and testing on Mac and PC, compiling to native Win32/MacOS applications. Hence the code is cross-platform, and mouse input is used to emulate touch input. My preference for actual coding is Visual Studio due to its superior debugging and IntelliSense. On the Mac I use TextMate, shell scripts and makefiles. I only use Xcode for iOS testing and deployment. I have used the Xcode debugger on a few occasions, but it very rarely manages to do any meaningful on-target debugging. This might be due to my broken project files, though. I run Visual Studio through VMware on the Mac as well, so it's actually only one physical machine. I used a separate development PC laptop before, but the VMware solution is far superior in almost every aspect and removes a lot of clutter.

I'm not using any off-the-shelf game engine, but rather a collection of homebrew classes that work well together, so you could say the "engine" is developed specifically for each title. Examples of such classes are strings, streams, input handling, data containers, vector math, scripting, sound, etc. It does not contain any code for actual game objects, a rendering pipeline, an update loop, etc., so it's rather low-level. I personally think this is better than using a game engine in most cases, since you can squeeze out more performance, and you never hit the artificial boundaries of a third-party solution.

I do use certain libraries for specific tasks, such as zlib, TinyXML (just switched to RapidXML), Lua, Box2D, Clipper, etc., but these are very isolated pieces of software that each do one thing well. If one of them didn't work out for a project it could easily be replaced individually. All libraries are included in the project as source code, no static or dynamic libraries.

The build system is a Python script that scans the source tree and outputs a makefile, a Visual Studio project or Android Ant files. I have not yet tried Xcode project file generation because Xcode project files (directories, actually) are totally horrible. To be fair, Visual Studio project files are horrible too. Actually, I don't think I have ever seen an IDE with a sane project file format. Is it really that hard? I should probably try CMake, but it takes time to learn, and it doesn't necessarily keep up when you upgrade your IDE. I generally think that about a lot of things - learning a tool or middleware is often more of an investment than just doing it yourself. Besides, if you do it yourself you gain 100% insight into the inner workings and can fix problems immediately when they show up, instead of communicating with support and/or waiting for a fix. Anyway.
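For illustration, here's a minimal sketch of what a scan-and-generate script like that can look like. Everything in it (the directory name, compiler flags, target name) is a made-up assumption for the example; the real script of course handles more configurations and emits the other project formats as well:

```python
#!/usr/bin/env python
# Minimal sketch of a scan-and-generate build script. All names here
# (src/, "game", the flags) are hypothetical placeholders.
import os

SRC_DIR = "src"
CXX = "g++"
CXXFLAGS = "-O2 -Wall"
TARGET = "game"

# Scan the source tree for translation units.
sources = []
for root, dirs, files in os.walk(SRC_DIR):
    sources += [os.path.join(root, f) for f in files if f.endswith(".cpp")]

objects = [s.replace(".cpp", ".o") for s in sources]

with open("Makefile", "w") as mk:
    # Link rule first, so it becomes the default goal.
    mk.write("%s: %s\n" % (TARGET, " ".join(objects)))
    mk.write("\t%s -o %s %s\n\n" % (CXX, TARGET, " ".join(objects)))
    # One compile rule per file; make handles up-to-date checking
    # from modification dates on its own.
    for src, obj in zip(sources, objects):
        mk.write("%s: %s\n" % (obj, src))
        mk.write("\t%s %s -c %s -o %s\n\n" % (CXX, CXXFLAGS, src, obj))
```

The nice part is that the generator itself stays trivial; once the makefile exists, make does all the dependency tracking.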

The desktop binaries read raw assets (XML, JPEG, PNG, etc.) directly from the data folder for convenience, while both iOS and Android require asset conversion. The asset conversion script is quite similar to the build system - it scans a directory tree and outputs a makefile. Make will then automatically keep track of which assets need conversion using file modification dates, and it can branch out on multiple threads (using the -j switch) for increased performance. Make is really just as awesome for converting assets as it is for compiling source code. On iOS some images are compressed using texturetool and some use our custom format called MTX, which is essentially just a way to compress PNG images into JPEG with a separate, compressed alpha channel to save space. The Android version also uses MTX compression, as well as general LZ compression on most data files. The APK itself is not compressed, so generic asset compression is more important there.
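The actual MTX layout isn't public, but the idea described above - JPEG for the color channels plus a separately compressed alpha channel - is straightforward to sketch. Here's a hypothetical Python encoder along those lines; the header fields and the function name are made up for illustration, and it requires Pillow:

```python
# Hypothetical sketch of an MTX-like encoder: color channels as JPEG,
# alpha channel zlib-compressed on the side. The real MTX header and
# layout are unknown; this one is invented for illustration.
import io
import struct
import zlib
from PIL import Image

def encode_mtx_like(png_path, out_path, jpeg_quality=90):
    img = Image.open(png_path).convert("RGBA")
    rgb = img.convert("RGB")          # color channels only
    alpha = img.split()[3]            # 8-bit alpha band

    jpeg_buf = io.BytesIO()
    rgb.save(jpeg_buf, format="JPEG", quality=jpeg_quality)
    jpeg_data = jpeg_buf.getvalue()
    alpha_data = zlib.compress(alpha.tobytes(), 9)

    with open(out_path, "wb") as f:
        # Made-up header: magic, dimensions, then the two payload sizes.
        f.write(struct.pack("<4sIIII", b"MTX0", img.width, img.height,
                            len(jpeg_data), len(alpha_data)))
        f.write(jpeg_data)
        f.write(alpha_data)
```

The win comes from JPEG handling photographic color data far better than PNG's lossless compression, while the alpha channel, which JPEG can't store at all, usually deflates well on its own since it tends to be smooth or mostly opaque.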

The Android version uses the NativeActivity system available from Android 2.3, so basically the whole game, including the setup code, is C++. We do have a very thin layer of Java to handle in-app billing, query device capabilities, etc. It's quite remarkable how much of the code is identical between the iOS and Android versions. It's really just the setup code, touch input and audio back-end that are completely different.

Sprinkle and Granny Smith both have built-in level editors that are accessible in the Mac and PC versions. All level design and graphic assets are done within that editor (except for textures, which are done in Photoshop and Illustrator). Scripting is done in Lua using a regular text editor. That's about it.

Comments

  1. I was looking at your Granny game. Is that developed using OpenGL?
    It looks to be in 3D when Granny crashes into walls, etc. So are you using some other 3D physics engine, then?

    Thanks

  2. Hi Brody, yes, OpenGL ES is more or less the only option on mobile. You are correct about the crashes - they do not use the Box2D engine, but my own custom 3D physics engine. If you look at my posts from 2010 you can read more about the development of that engine.

  3. Hello! Very interesting/instructive reads here. I was just wondering how you handle OpenGL rendering between all the platforms? Since OpenGL ES is only on mobile devices and OpenGL 2/3/4 is available on the desktop, do you have several renderer implementations, each targeting a different OpenGL version (ES 1/2 on mobile, 2/3/4 on the desktop)? Or do you simply make sure to only use the OpenGL ES subset, while running against OpenGL 2/3/4 on the desktop?

  4. Hi Morgan, sorry for the late reply. So far I haven't had a lot of problems, really. I'm just using the OpenGL ES 2 subset of features in OpenGL. We don't ship anything on desktop anyway (apart from Sprinkle on the Mac App Store, but that was an exception), so I haven't focused on high-end rendering on Mac/Win32.

  5. Congrats on the games, they are really impressive, and seeing that you don't use a game engine makes them even more impressive (or should I say "awesome enough").
    Do you intend to make the level editors publicly available?
    PS: my kid (and I) love your games!
