
General wisdom

I'm following quite a few game programming blogs, and whenever there is a post about a lifehack or general wisdom that can help me simplify my work, I'm all ears. So, I thought I'd share some of my own experiences:

Automate everything that can be automated. Especially project file generation. Editing project files is a real energy drainer, and even though IDEs try to make the process smooth, it never is. This first becomes a big problem when multiple platforms come into the picture. Personally, I have a Python script that takes a few input parameters, scans the source tree and outputs a nice project file for Visual Studio, or a makefile. You have to bite the sour apple every time Visual Studio changes its project file format, but it's so worth it. I also have similar scripts for documentation, distribution and in some cases code generation. Writing the scripts takes a while, but they can be reused, you get better at writing them every time you do it, and it's more fun than doing dull monkey work over and over again.

Minimize external library dependencies. People are way too eager to include external libraries and middleware in their projects. I think it is very common that using libraries and middleware ends up costing way more than it would have to just write the code yourself. Only include an external library in your project if it: 1) Solves one specific task extremely well. 2) Can be trusted to do that. 3) Puts you in control of all memory and IO operations. 4) Can easily be included as source code.

Keep everything in the same project. This ties into the last criterion for using external libraries above. I want all third-party libraries to be part of the source tree. Not a dynamic library, not a static library, not even a separate project in Visual Studio. Just plain source code in a separate folder. This is important, because it simplifies cross-platform development, especially when automatically generating project files. It also completely removes the problems with conflicting runtimes for static libraries, mismatched PDBs, etc. It's all going into the same binary anyway, so just put your files in the same project and be done with it.

Refactor code by first adding the new and then removing the old. I used to do it the other way around for a long time, ripping out what's ugly and leaving the code base in a broken state until the replacement code is in place. Yes, it sounds kind of obvious in retrospect, but it took me a long time to actually adopt this behavior in practice. The only practical problem I've experienced is naming clashes. I usually add a suffix to the replacement code while developing and then remove it once the original code is gone. As an example, if you want to replace your Vector3, create a new class called Vector3New, gradually move your code base over to using Vector3New while continuously testing, and when you're done, remove the original Vector3 and do a search/replace from Vector3New to Vector3.

Don't over-structure your code. This one is really hard. People often talk about code bases lacking structure, but I think it's a much worse and more common problem that a code base has inappropriate structure, or just too much of it. Consider this: given two implementations of some algorithm, where one is a couple of large, messy functions in a single file and the other is fifteen files with a ton of inherited classes, abstract interfaces, visitors and decorators. Given that neither suits your current needs, which one would you rather refactor? My point is that you shouldn't try to structure something until you know all the requirements. Not to save time when first building it, but because it's a pain in the ass to restructure something that already has structure. You can compare it to building a house. Would you rather start with a pile of building material or first disassemble an existing building? To me that's a no-brainer, even if the pile happens to be quite messy. Hence, never define an abstract interface with only one implementation, never write a manager that manages a single object, etc. Just start out writing your desired functionality in the simplest possible way, then structure it if and when there is a need for it.

Stay away from modern C++ features and standard libraries. I've tried introducing bits and pieces from the STL, Boost, exceptions and RTTI throughout the years, but every time I do, something comes out and bites me right in the butt. Buggy implementations, compiler bugs, missing features, restrictions on memory alignment, etc. This is depressing and discouraging, but it's the sad truth we have to deal with. If you want your code to be truly portable without the hassle (not just in theory, but in practice) you'll have to stick to a very small subset of the C++ standard. In my experience it's better to just accept this and design for it rather than putting up a fight.

Use naming prefixes rather than namespaces. I advocated namespaces for a long time, but now I've switched sides completely and use prefixes for everything. I kind of agree that prefixes are ugly, but they have two obvious benefits that just make it worth it. A) You can search your code base for all instances of a particular class or function, and B) they make forward declarations as easy as they should be. With namespaces, especially nested ones, forward declarations are just painful, to the point where you tend not to use them at all, leaving you with ridiculous build times. I usually don't even forward declare classes at the top any more, but rather inline the declarations where needed, like: "class PfxSomeClass* myFunction(class PfxSomeOtherClass& param)".


  1. I agree with all of this and I hope it shows in Box2D. Although I haven't seen the single project idea.

    The last few years I have been refactoring just as you suggest and it works quite well. Nothing beats being able to flip one #define to get back to a functioning version. This also lets you compare performance, quality, etc.

    I often get requests to add some C++ feature, such as namespaces, to Box2D. Now I will just point those requests to this blog. Thanks!

    1. This comment has been removed by the author.

    2. I even use a
      'static volatile bool useNewVersion = true;'

      rather than a define, to be able to switch from one code path to another at runtime.

      In some cases I even have additional booleans that run both versions of the code at the same time, in the same test session, to compare results and/or performance.

  2. Thanks Erin! Hey, I just saw you on TV :) I've actually included Box2D as source a couple of times and it's really smooth. It's quite painful with libraries that are split up into multiple projects, enforce a certain directory structure, require preprocessor definitions, etc.

  3. I did a little research on the single project idea. Apparently this is not so great for Visual Studio, which can only do parallel builds on projects, not source files. At work, we have dozens of projects in our solutions.

  4. There is the /MP switch that lets you compile a single project on multiple cores. If you haven't enabled it already, I highly recommend it. It should help out quite a bit even if you have multiple projects, since you normally just work in one of them.
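For reference, a hedged sketch of where this setting lives: in newer MSBuild-based .vcxproj files, the /MP compiler switch corresponds to the MultiProcessorCompilation property (older project formats take it as an extra command-line option instead).

```xml
<!-- .vcxproj fragment: the equivalent of passing /MP to cl.exe -->
<ClCompile>
  <MultiProcessorCompilation>true</MultiProcessorCompilation>
</ClCompile>
```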

  5. Ah, but it doesn't work with the pre-compiled headers we use at work. I wonder how the compile and link time would compare: many projects with pre-compiled headers versus a single project with no pre-compiled headers.

  6. I played with the /MP switch today and it works great. Do you use it also for debug builds where it would conflict with incremental rebuilds? Or in other words: Do you turn incremental rebuilds off for debug builds?

    Do you pass the number of processors or do you allow VS to decide for itself?


  7. Agreed. Pretty much describes the Bullet physics SDK as well.

    For build systems I'm now trying out premake4, which is Lua-based. Quite smooth.

  8. Nice post, but I don't fully agree on some points, though:

    Keep everything in the same project:
    I found that on big projects (AAA console games), splitting some parts into libraries with version numbers actually helps to keep everything together.
    With a rigorous build/release system (and you need one), you should not have any issues with mismatched PDBs.

    As for using prefixes instead of namespaces, this is an interesting point (on my current projects we are using namespaces).
    I don't really follow you on the 'forward declaration is awful' point: it's definitely more verbose, but still doable and readable (and nested namespaces are limited to two levels, and not encouraged).
    But the search argument is still a very good one!

    Last point, the "stay away from modern C++": it's a bit extremist, but in practice we always remove RTTI and exceptions from our builds (although we may use exceptions in some cases in the editor PC version of the game).
    We are not using the STL for the reasons you describe: it is not really cross-platform (or at least not enough: the implementation may differ from one platform to another).
    But we have our own STL-like container/utility library that is close to the STL!
    One more thing: some new features in C++ are really handy and used, like variadic macros or the restrict keyword... But I imagine that was not exactly your point!

  9. I do agree that on very large code bases, multiple projects can be handy. Mostly because one can use different compiler and preprocessor options for different projects, plus you get rid of the problem of identical names on multiple source files.

    Yes, forward declarations are still doable, but very clunky, and you cannot "inline" forward declarations, like this one: class ReturnClass* myFunc(class InputClass* arg)

