Impressions of the green robot

I've been working on a mobile, physics-based game over the last five months (I'll post stuff about the project very soon) and today I started toying around with Android and porting the game. I'm honestly not really sure what to think yet. Some things are better than iOS development and other things are quite annoying. I really appreciate having command line tools for everything, and the ability to log in to the device and do maintenance. Especially the ability to log in and run a shell command on the device via a command line tool on the host. That way you can run scripts on your development machine that coordinate things on both the device and the host at the same time. Awesome!
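To give an idea of the kind of round trip this enables, here is roughly what it can look like from the host (the package name, paths and log tag are made up):

    adb install -r bin/MyGame-debug.apk                       # push a fresh build
    adb shell am start -n com.example.mygame/android.app.NativeActivity
    adb shell ls /sdcard/Android/data/com.example.mygame/files
    adb pull /sdcard/Android/data/com.example.mygame/files/profile.txt .
    adb logcat -s MyGame                                      # only show our log tag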

When it comes to development tools I think command line tools are far superior to graphical user interfaces in most cases (except for debuggers). I'm pretty happy with Visual Studio, but that's probably because I've been more or less forced to use it every day for the last ten years. Nothing beats having good command line tools and the ability to script everything the way you want it.

Being dependent on Java definitely sucks. They have really tried to work around it in the latest releases of the NDK, but it's still there, and you really do notice it. A lot. For a game programmer who keeps all his stuff in C++ this is no worse than Apple's stupid fascination with Objective-C though. A couple of revisions from now, the Android NDK will probably be pretty complete, while iOS will always be knee-deep in Objective-C, forcing us to write horrible wrappers.
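The workaround I'm referring to is the NativeActivity route: together with the native_app_glue helper you get an android_main entry point and don't have to write any Java yourself. Stripped of all EGL/GLES setup and event handling, the skeleton looks something like this (a sketch, not my actual loop):

    #include <android_native_app_glue.h>

    // Entry point when using NativeActivity + native_app_glue. No Java on our
    // side; the manifest just points android.app.NativeActivity at our .so.
    void android_main(struct android_app* app)
    {
        app_dummy(); // keep the glue code from being stripped by the linker

        for (;;)
        {
            int events;
            struct android_poll_source* source;

            // Poll without blocking so we can keep rendering continuously
            while (ALooper_pollAll(0, 0, &events, (void**)&source) >= 0)
            {
                if (source)
                    source->process(app, source);
                if (app->destroyRequested)
                    return;
            }
            // update and render one frame here
        }
    }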

Android documentation is bad at best and non-existent everywhere else, and the whole development procedure is very far from streamlined, with a gazillion tools and configuration files to tie everything together. Note though that I'm talking about writing native C++ apps using OpenGL ES 2 here, not the Java crap you see in all the tutorials. (By the way, the NDK compiler did not support C++ exceptions until very recently. I talked about exactly this in my previous blog post.)
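To give one example of those configuration files: with ndk-build, most of the knobs for a native GLES2 app end up in Application.mk. Something along these lines (the values are just examples) is what it takes to get exceptions and RTTI going:

    # Application.mk
    APP_PLATFORM := android-9        # needed for NativeActivity
    APP_ABI      := armeabi-v7a
    APP_STL      := gnustl_static    # an STL with exception and RTTI support
    APP_CPPFLAGS += -fexceptions -frtti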

Asset management is the part I like the least about Android so far. You throw your files in the asset folder and they automatically get compressed into a zipped bundle. You can then stream resources from this bundle using the NDK, but not quite the way you'd expect. On iOS this works beautifully: the path just translates to a location on the device and then you can use fopen, fseek or whatever you like. On Android the tools automatically compress stuff into the bundle based on the file suffix (oh please..), and there doesn't seem to be any way of accessing compressed data from the NDK unless you write your own virtual file system. Solution? Add a .mp3 suffix to all the files! Seriously...
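To make the distinction concrete: the NDK asset API (android-9 and up) lets you stream any asset, but the plain file descriptor route, the one you want for fseek/mmap style access, only works for files that were stored uncompressed, which is exactly what the suffix trick exploits. A rough sketch, assuming the AAssetManager comes from the NativeActivity:

    #include <android/asset_manager.h>
    #include <unistd.h>
    #include <vector>

    // 'mgr' comes from ANativeActivity::assetManager when using NativeActivity.
    static std::vector<unsigned char> readAsset(AAssetManager* mgr, const char* path)
    {
        std::vector<unsigned char> data;
        AAsset* asset = AAssetManager_open(mgr, path, AASSET_MODE_STREAMING);
        if (!asset)
            return data;

        // Streaming works even if the asset was compressed into the bundle...
        off_t length = AAsset_getLength(asset);
        data.resize(length);
        if (length > 0)
            AAsset_read(asset, &data[0], length);

        // ...but getting a plain file descriptor only succeeds if the file was
        // stored uncompressed, e.g. thanks to a suffix the packager leaves
        // alone, like .mp3.
        off_t start, size;
        int fd = AAsset_openFileDescriptor(asset, &start, &size);
        if (fd >= 0)
        {
            // fd refers to the whole .apk; the asset lives at [start, start+size)
            close(fd);
        }

        AAsset_close(asset);
        return data;
    }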

Comments

  1. Hi Dennis, I am a regular reader and have an unrelated question; I wasn't sure how else to ask you.

    I'm really impressed with your videos showing rigid bodies, fluids, cloth, etc. interacting together.

    I also found an old paper of yours where you briefly describe a framework to enable these interactions:
    http://www.ep.liu.se/ecp/010/010/ecp01010.pdf

    I am familiar with rigid body and cloth simulations; however, I'm interested in how separate simulations can be made to interact realistically, as in your videos. Would you care to describe your methods in more detail?

    Thank you for the fantastic blog, it's great to see that you are still active.

  2. Hi Graham, thanks for the kind words. My current work does not use separate simulations; everything goes into one unified solver. It would indeed be very difficult to get that level of stability using different solvers for each type. The tricky part is to find a way of expressing all the constraint types in the same way. I might write up how this is done some time in the future, but I first need to figure out what to do with the new engine. Cheers!

  3. Is the unified solver position based, velocity based or both?

  4. Hi Dennis, having played Sprinkle I am a total fan of your work, very well done indeed. You have inspired me, and quite honestly I have not played games for years; your game gave me the old "Lemmings Feel" that I had growing up on such games.

    My background is in algorithms for the CAD industry; I am behind some of the algorithms you will find behind AutoCAD and similar products. Being a game fan I have always wanted to make a little game of my own, but having read your blog I am not sure how you approached game programming for the mobile industry. How did you apply the Nvidia engine on iOS? Did you program in Visual Studio on a PC, use the Nvidia libs and then translate them to iOS, or did Nvidia supply the code? What environment would you recommend for programming for both iOS and Android? Are you using Lua?

  5. Hi Barry, Sprinkle is programmed in C++, mostly in Visual Studio, and then "ported" to iOS and Android. For Android I'm using MacOS, the command line tools and the new NativeActivity approach, where you don't need Java. On iOS I use Xcode for the actual deployment, but I don't do any actual coding in that environment. We're using Lua for level scripting: simple things like "if this object is close to that object, open door B". Otherwise it's mostly C++. For the next title I will move more stuff (such as menus and GUI) over to Lua for the quicker turnaround times.

  6. Yeah, reading assets with the NDK is a challenge. Personally, I went with compiling raw assets into .so files that I dlopen() in my C code (sketched below).

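For reference, a minimal sketch of the .so trick described in the last comment. The library name and symbol convention are made up, and getting the raw data into the shared library in the first place (an .incbin assembly stub, for instance) is a separate build step:

    #include <dlfcn.h>
    #include <cstddef>
    #include <string>

    // Look up an asset that was linked into libassets.so as a pair of symbols:
    // <name> for the bytes and <name>_size for the length (naming is hypothetical).
    const unsigned char* loadEmbeddedAsset(const char* name, size_t* sizeOut)
    {
        // The .so ships under libs/<abi>/ and is installed with the app,
        // so dlopen() finds it by name.
        void* handle = dlopen("libassets.so", RTLD_NOW | RTLD_LOCAL);
        if (!handle)
            return 0;

        const unsigned char* data =
            static_cast<const unsigned char*>(dlsym(handle, name));
        const size_t* size = static_cast<const size_t*>(
            dlsym(handle, (std::string(name) + "_size").c_str()));

        if (sizeOut)
            *sizeOut = size ? *size : 0;

        // Keep 'handle' open for as long as the data is in use.
        return data;
    }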
