
GDC Rant

It's almost a tradition for me to get grumpy on the last day of GDC, and even though I had a great week this year there are some things that I would like to shine some light on.

A lot of people seem to think of GDC as this cuddly, educational industry event, by game developers for game developers. It might have been in the beginning, but nowadays it is not. GDC is run by UBM Tech, a global, non-transparent corporation that organizes dozens of different conferences for profit. They don't care about the games industry; they care about making money. Every year the passes get more expensive, and every year something is excluded. Since last year you don't even get free coffee unless you buy one of the more expensive passes (and as a side note, they probably don't even pay for the coffee – look for that "sponsored by" tag).

As a speaker you get a free pass, a shiny tag on your badge and a couple of lunch boxes. That's it. They don't pay for travel or accommodation, which is standard at many other conferences. On top of that, they lock recordings of your presentation in the vault, to which they offer access for an extra $550 unless you already purchased the most expensive pass. Preparing a high-quality session takes a lot of time. I know I spent at least a full week on each of my presentations. You do this for free because you want to share something, and then UBM Tech sells it for hard cash.

More and more sessions now come with a "presented by" tag, meaning someone actually paid UBM Tech to give the talk. And you can't even see those talks unless you buy an "Expo Plus" pass or better.

That said, I still love going to GDC, and I'll probably come back next year, but I really wish there were an industry initiative to organize this in a better way.


Comments

  1. Would love to hear your thoughts on SIGGRAPH

    Replies
    1. I've never been to SIGGRAPH, but since it is run by a non-profit organization I'd believe they don't have these problems.

    2. No free pass for presenters (~25% discount), presentation content also locked and sold.



Popular posts from this blog

Bokeh depth of field in a single pass

When I implemented bokeh depth of field I stumbled upon a neat blending trick almost by accident. In my opinion, the quality of depth of field depends more on how objects of different depths blend together than on the blur itself. Sure, bokeh is nicer than Gaussian, but if the blending is off the whole thing falls flat. There seem to be many different approaches to this out there, most of them requiring multiple passes and sometimes a separation of what's behind and in front of the focal plane. I experimented a bit and stumbled upon a nice trick, almost by accident.

I'm not going to get into technical details about lenses, circle of confusion, etc. That has been described very well many times before, so I'm just going to assume you know the basics. I can try to summarize what we want to do in one sentence – render each pixel as a disc whose radius is determined by how out of focus it is, also taking depth into consideration "somehow".

Taking depth into…
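
To make that one-sentence summary a bit more concrete, here is a minimal sketch of how a per-pixel disc radius could be derived from depth with a standard thin-lens circle-of-confusion model. The model and the parameter names (focusDistance, focalLength, aperture) are assumptions for illustration, not something taken from the post.

```cpp
#include <cmath>

// Thin-lens circle of confusion: the further a pixel's depth is from the focus
// distance, the larger the disc it gets rendered as. All parameters are illustrative.
float discRadius(float depth, float focusDistance, float focalLength, float aperture)
{
    // CoC diameter = A * f * |d - S| / (d * (S - f)),
    // with A = aperture, f = focal length, S = focus distance, d = pixel depth.
    float coc = aperture * focalLength * std::fabs(depth - focusDistance)
              / (depth * (focusDistance - focalLength));
    return 0.5f * coc; // radius of the disc to render for this pixel
}
```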

Screen Space Path Tracing – Diffuse

The last few posts have been about my new screen space renderer. Apart from a few details I haven't really described how it works, so here we go. I split the entire pipeline into diffuse and specular light. This post will focus on diffuse light, which is the hard part.

My method is very similar to SSAO, but instead of taking a number of samples on the hemisphere at a fixed distance, I raymarch every sample against the depth buffer. Note that the depth buffer is not a regular, single-value depth buffer; each pixel contains front and back face depth for the first and second layer of geometry, as described in this post.

The increment for each step is not view dependent, but fixed in world space, otherwise shadows would move with the camera. I start with a small step and then increase the step exponentially until I reach a maximum distance, at which point the ray is considered a miss. Needless to say, raymarching multiple samples for every pixel is very costly, and this is without …
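
As a rough, self-contained sketch of the loop described above (not the actual renderer code), one hemisphere sample could be marched like this. The depth lookups are placeholders and all constants are guesses:

```cpp
struct Vec3 { float x, y, z; };

// Placeholder depth-buffer lookup: project the world-space point and return the stored
// depth at that screen location. A real implementation samples the depth texture, and
// the renderer described here actually stores two layers with front/back depth.
static float storedDepthAt(const Vec3& /*worldPos*/) { return 1e9f; }

// Placeholder: view-space depth of the world-space point itself.
static float viewDepthOf(const Vec3& worldPos) { return worldPos.z; }

// March one hemisphere sample against the depth buffer. The step is fixed in world
// space (not view dependent) and grows exponentially; reaching maxDistance is a miss.
bool sampleOccluded(const Vec3& origin, const Vec3& dir,
                    float startStep = 0.02f, float growth = 1.5f, float maxDistance = 2.0f)
{
    float t = 0.0f;
    float step = startStep;
    while (t < maxDistance)
    {
        t += step;
        Vec3 p { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };

        // Simplest possible occlusion test: the ray point is behind the stored depth.
        // (A thickness-aware test against the two-layer buffer is more accurate.)
        if (viewDepthOf(p) > storedDepthAt(p))
            return true;   // hit
        step *= growth;    // exponential step increase
    }
    return false;          // miss
}
```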

A better depth buffer for raymarching

When doing any type of raymarching over a depth buffer, it is very easy to determine if there is no occluder – the depth in the buffer is farther away than the current point on the ray. However, when the depth in the buffer is closer, you might be occluded or you might not, depending on a) the thickness of the occluder and b) whether there are any other occluders behind the first one, and their thickness. It seems most people assume a) is either infinite or a constant value, and ignore b) altogether.
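
For reference, a minimal sketch of the common test described above, using a single depth value and an assumed constant occluder thickness (the thickness is a hand-tuned constant, not something from the post; making it very large approximates the "infinite depth" assumption):

```cpp
// Single-layer occlusion test with an assumed constant occluder thickness.
bool occludedConstantThickness(float bufferDepth, float rayDepth, float thickness)
{
    // Occluded only if the ray point lies inside the assumed occluder interval.
    return rayDepth >= bufferDepth && rayDepth <= bufferDepth + thickness;
}
```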

Since my new renderer is entirely based around screen space raymarching I wanted to improve on this to make it more accurate. This has been done before, but mostly in the context of order independent transparency (I think).

Let's look at a scene where the occluders are assumed to have infinite depth (I have tweaked the lighting for more distinct shadows to get a better look at raymarching artefacts, so the lighting does not exactly match the environment in these screenshots).


At a first …
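
Building on the two-layer depth buffer mentioned earlier (front and back face depth for the first and second layer of geometry), an occlusion test that respects real occluder thickness could look roughly like this; the struct layout and names are illustrative assumptions, not the renderer's actual format:

```cpp
// Two depth intervals per pixel: first and second layer of geometry, each with a
// front-face and a back-face depth. Layout and names are illustrative assumptions.
struct TwoLayerDepth
{
    float front0, back0;  // first layer
    float front1, back1;  // second layer
};

// A point on the ray counts as occluded only if it lies inside one of the two
// intervals, so thin occluders no longer behave as if they were infinitely deep.
bool occluded(const TwoLayerDepth& d, float rayDepth)
{
    return (rayDepth >= d.front0 && rayDepth <= d.back0) ||
           (rayDepth >= d.front1 && rayDepth <= d.back1);
}
```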