Jump start jumps

I once heard that elegant code is in some respects at odds with useful software, because it's often the sum of all the exceptions that makes a program useful, rather than strict logic. Do you remember the frustration back when the JavaScript of a slow-loading web page was allowed to switch input focus to a login box while you were typing a URL? The software was just following its strict logic, switching focus when it was told to, while the better way of handling it would be a not-so-elegant check for whether the user has manually focused something else between the start and the end of loading, or a timer checking whether the focus change is still relevant.

I wouldn't say that this contradiction applies to all aspects of software development, but it certainly applies to most parts that interface with people. In a game, the most common piece of code that interfaces with people, apart from the GUI, is the character controller.

One would think that modeling a character with physics would automatically yield natural and intuitive behavior. In reality it's almost the complete opposite: the more realistic the physics you use to model your character, the harder it is to control. The most common way to write a character controller is probably to not use physics at all, but just one or several raycasts or shapecasts. But what if you want the character to interact physically with the environment?

I recently spent some time writing and tweaking the character controller of our next game. Without revealing too much, I can say it's not a sequel to Sprinkle and that it includes a character which can interact with the environment. I decided early on that a pure raycast/shapecast based method wouldn't suffice, because then the environment wouldn't respond. One can cheat to some extent by applying downward forces etc, but there will always be situations where it doesn't fully work.

Instead of representing the character with detailed geometry, I use two circles stacked on top of each other, slightly overlapping. It's nice to have a round shape for the character, since you want it to slide easily over obstacles. These circles only represent the torso, so in addition I also do a couple of downward raycasts for the legs. The raycasts control a special joint that keeps the character floating above the ground. You don't want to rely on regular collision detection and friction for character motion, mostly because you need more control than that, so the circles are really only for contacts on the sides of or above the character.
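The shape described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual implementation: all radii, offsets and ray lengths are made-up numbers, and `raycast_scene` is a stand-in for whatever query the physics engine provides (returning a hit distance or `None`).

```python
# Hypothetical character shape: two overlapping torso circles plus
# downward leg raycasts that measure the distance to the ground.

CIRCLE_RADIUS = 0.25
CIRCLE_OFFSETS = [(0.0, 0.2), (0.0, -0.2)]   # stacked, slightly overlapping
LEG_RAY_OFFSETS = [-0.15, 0.15]              # two rays, roughly at the feet
LEG_RAY_LENGTH = 0.6                         # how far below we probe

def ground_distance(pos, raycast_scene):
    """Cast the leg rays straight down and return the shortest hit
    distance, or None if neither ray hits anything within range."""
    hits = []
    for dx in LEG_RAY_OFFSETS:
        origin = (pos[0] + dx, pos[1])
        hit = raycast_scene(origin, (0.0, -1.0), LEG_RAY_LENGTH)
        if hit is not None:
            hits.append(hit)
    return min(hits) if hits else None
```

The shortest hit distance is what the floating joint compares against the rest distance to decide whether to push the character up or let it settle down.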

Since the character can stand on any object, the special joint is reinserted every frame, attaching to whatever the character happens to be standing on. The joint itself constrains the motion of the character relative to the other object using the regular constraint solver. It has one target relative velocity and one maximum force to reach that target for each degree of freedom (x, y and rotation in 2D). To me, this is a very intuitive interface to a character controller, compared to using forces. For example, if we want the character to walk left we set the X target relative velocity to -2 m/s and the maximum force to however strong we want the character to be. If we were using explicit forces we would need to adapt the force to the resistance (which of course we don't know at the time). The Y component gets a relative velocity based on the current distance to the ground: if we're below our rest distance it should be slightly positive, otherwise slightly negative. The maximum force is however strong the legs should be.
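The per-axis constraint described above can be sketched as a single clamped impulse per degree of freedom. This is a simplified, hypothetical sketch (the function name and the single-axis scalar form are mine, and a real solver would iterate over all constraints and accumulate impulses), but it shows why the target-velocity-plus-max-force interface is so convenient: the clamp is what turns "reach this velocity" into "reach this velocity, but only push this hard".

```python
# One degree of freedom of the character joint: drive the relative
# velocity toward a target, limited by a maximum force.

def solve_velocity_constraint(rel_vel, target_vel, effective_mass,
                              max_force, dt):
    """Return the impulse to apply along one axis.

    rel_vel        -- current relative velocity along this axis
    target_vel     -- desired relative velocity (e.g. -2.0 m/s to walk left)
    effective_mass -- combined mass seen by the constraint on this axis
    max_force      -- how strong the character is on this axis
    dt             -- timestep in seconds
    """
    # Impulse that would reach the target velocity in one step...
    impulse = effective_mass * (target_vel - rel_vel)
    # ...clamped by the maximum force integrated over the timestep.
    max_impulse = max_force * dt
    return max(-max_impulse, min(max_impulse, impulse))
```

With this interface, walking is just `target_vel = -2.0` on X, and the floating behavior is a small positive or negative Y target depending on the measured ground distance; the solver figures out the actual forces against whatever the character is standing on.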

These very useful, one-dimensional velocity constraints are something I have always missed from off-the-shelf physics engines. All physics engines use them internally, but for some reason they are never exposed through the public API (well, Meqon was the exception here of course :)

For jumping, I also use the character joint, but with a higher positive Y relative velocity. If the character is standing on a dynamic object it will automatically get a downward push through the constraint solver. While play testing I noticed that some players consistently press the jump button a few frames too late, falling off a cliff instead of jumping to the next one. I'm not sure if this is due to input lag or something else, but I added a few frames of grace period, allowing mid-air jumping if ground contact was recently lost. I'm also recording the ground normal continuously in a small ring buffer, so when a jump occurs I can go back and look for the best (most upward) jump direction. I also implemented an early abort mode, so that if the player releases the jump button quickly the jump terminates, and the character returns to the ground more quickly. All these special cases are supposed to feel so intuitive that they are not even recognized as features. They just make the game easier to play, but have no logical explanation and certainly do not yield more elegant code.
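The grace period and the ring buffer of ground normals might look something like this sketch. Everything here is an assumption for illustration: the class name, the frame counts, and the choice of "largest Y component" as the measure of the most upward normal are mine, not taken from the actual game.

```python
from collections import deque

GRACE_FRAMES = 4     # hypothetical: how many frames after leaving the
                     # ground a jump still counts
NORMAL_HISTORY = 8   # hypothetical ring buffer length for ground normals

class JumpHelper:
    def __init__(self):
        self.frames_since_ground = 10**6
        self.normals = deque(maxlen=NORMAL_HISTORY)  # acts as a ring buffer

    def update(self, on_ground, ground_normal=None):
        """Call once per frame with the current ground contact state."""
        if on_ground:
            self.frames_since_ground = 0
            self.normals.append(ground_normal)
        else:
            self.frames_since_ground += 1

    def try_jump(self):
        """Return the jump direction, or None if the jump is not allowed.

        Allows a jump for a few frames after ground contact is lost, and
        picks the most upward normal recorded recently."""
        if self.frames_since_ground > GRACE_FRAMES or not self.normals:
            return None
        return max(self.normals, key=lambda n: n[1])
```

The early-abort behavior would sit elsewhere, in whatever integrates the jump: on button release, cut the upward velocity (or raise the downward target of the Y constraint) so the character comes back down sooner.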


  1. Hey Dennis, I implemented a 3D motor for you:

    I'm not sure that will help for your characters. Feel free to file an issue for a joint request. :)

  2. Nice, it sounds exactly like what I'm using. Thanks!

  3. Interesting post, I was just wondering about using something like this for my next game - oh and by the way thank you Erin for providing the 3D motor! I hope libgdx catches up with this soon

  4. Thanks for the post! I think it's logical. Rigid circles or capsules aren't good approximations of our bodies.

    E.g. when you jump up and down, you shorten and lengthen yourself. This keeps your eyes level and your feet accelerating much faster than at free fall acceleration. Hence we all use higher-than-9.81m/s^2 acceleration for the character controllers. Maybe I'm wrong. Certainly there are other effects. But I think the idea is compelling - we are not rigid circles.

    Or the lag issue. When we walk, we have a soft- and hardware pipeline from central cortex through motor cortex, cerebellum, basal ganglia, and then through the spinal cord... These parts have loops in them and between each other, mind you. The speed of signal is 1 to 100 m/s, switching speed of neurons is 1-2kHz. Your head is at least 2m and many lines of code.. um, neurons away from your feet. It's amazing we don't perceive the lag. When the player wants to jump, she probably forms the command to press the button at the exact time when she wants the character to jump. Now it needs to go through all that laggy pipeline with software layers and abstractions (you think of jumping, but you're moving a finger - look me in the eye and tell me it's not a virtual method override!). Some people are good about compensating that (years of training in front of TV!) and some people aren't (years of ... having a life?)

    When you walk, you keep your eyes on objects far away, so the horizon doesn't bob up and down even though your head might be. But sometimes you look closer, and the horizon bobs a lot. Pretty hard to do that on a flat TV... But that's an issue for FPS games :)

  5. The problem with jumping ledges can be a matter of design, and not a matter of mechanics.

    I know this is a completely different kind of game, but on a 2D platform game I am making, the same happened.

    My friends would either jump too soon, or too late.

    I eventually found that the problem was that the collision box (loving 2D, a BOX) did not coincide with the perception the player had of his avatar (character).

    However, I admit, and am proud to say, that I am very much in agreement with how you are programming your game. I think that building a simple base engine, tailored directly to the gameplay you intend, produces much more satisfying results than having a very complex physics (or whatever) engine behind a simple API layer.

    Having the "functions" of your game evolve slowly at the pace of your code is far better than trying to dominate a mad giant API to try "reproducing" the same effect.



