Friday 9 October 2009

The evolution of parallax

Once upon a time, videogames were two-dimensional, and it was fine.
One day, in order to give some depth to the scene, programmers introduced a technique called parallax scrolling: by moving the elements of the background at different speeds, a sense of space and immersion emerged.

The sensation was outstanding for the age, but the drawback was a very high computational cost, which kept the strategy confined to 16-bit machines. With the arrival of 3D graphics, parallax scrolling was abandoned.

In 2001, a Japanese researcher named Tomomichi Kaneko introduced a brilliant modification of texture mapping able to provide "the capability to represent the motion parallax effect to the textures placed on 3D objects". His algorithm builds on normal mapping, which stores a 3D normal in a texture's RGB channels and overrides the normals on the surface; Kaneko's suggestion was to enhance the feeling of movement by also displacing the texture coordinates as a function of the view angle in tangent space.

The tangent space itself is the trick behind it all, and it lets us compute things in a comfortable space: instead of doing tons of computations in world space, we "look at things" from the top of the texture.
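A minimal fragment-shader sketch of the displacement step, under my own assumptions: the uniform and varying names (u_height, u_scale, v_viewTS, and so on) are invented for illustration, and the vertex shader is assumed to have already rotated the view direction into tangent space.

```glsl
// Hypothetical parallax-mapping fragment shader (GLSL 1.x style).
uniform sampler2D u_diffuse;
uniform sampler2D u_height;   // height field stored in one channel
uniform float u_scale;        // e.g. 0.04
uniform float u_bias;         // e.g. -0.02

varying vec2 v_uv;
varying vec3 v_viewTS;        // view direction, already in tangent space

void main() {
    vec3 view = normalize(v_viewTS);
    // Sample the height at the original coordinates.
    float h = texture2D(u_height, v_uv).r * u_scale + u_bias;
    // Slide the texture coordinates along the view direction
    // projected onto the surface: the parallax offset.
    vec2 uv = v_uv + view.xy * h;
    gl_FragColor = texture2D(u_diffuse, uv);
}
```

Because the view vector lives in tangent space, its xy components are exactly the direction "across the texture", which is what makes the offset a one-liner.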

The generation of the special coordinate frame (normal, tangent and binormal) is done "client side" while the geometry is loaded, and the result is sent to the server as vertex attributes.
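The per-triangle math is simple enough; here is a sketch in GLSL-style syntax for readability, although in practice this runs on the CPU during mesh loading (the function name and layout are mine, not from any particular engine):

```glsl
// Hypothetical helper: derive the tangent and binormal of one triangle
// from its positions (p0..p2) and texture coordinates (uv0..uv2).
void computeTangentBasis(vec3 p0, vec3 p1, vec3 p2,
                         vec2 uv0, vec2 uv1, vec2 uv2,
                         out vec3 tangent, out vec3 binormal) {
    // Edges of the triangle, in position and in UV space.
    vec3 dp1 = p1 - p0;
    vec3 dp2 = p2 - p0;
    vec2 duv1 = uv1 - uv0;
    vec2 duv2 = uv2 - uv0;
    // Solve the 2x2 system that maps UV deltas to position deltas.
    float r = 1.0 / (duv1.x * duv2.y - duv1.y * duv2.x);
    tangent  = normalize((dp1 * duv2.y - dp2 * duv1.y) * r);
    binormal = normalize((dp2 * duv1.x - dp1 * duv2.x) * r);
}
```

Per-vertex tangents are then usually obtained by averaging the bases of the triangles sharing each vertex, exactly as one does for smooth normals.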

Normal mapping itself requires at least a couple of textures to be sent and, guess what, I again faced the limits of DirectX9-compliant boards. I wanted to keep at least 4 lights on the scene and use three textures (diffuse, normal map, and gloss map). I quickly overflowed the capabilities of my video card and learnt an interesting thing.

GLSL is paranoid

I'm not sure it's a GLSL thing, but that's it: it's paranoid. It expects the worst scenario to happen, and therefore does not optimize a thing. Let me explain.

Suppose you have a code similar to this:

void DoSomething(int number_of_light) {
  ...
  Use( gl_LightSource[number_of_light] );
  ...
}

void main() {
  ...
  for (int i = 0; i < TOTAL; i++) DoSomething(i);
  ...
}

What do you expect? You might think that your GLSL compiler will smartly recognize the boundaries of your request and fetch from memory just the information about the TOTAL lights you're going to use. Well, wrong: it will allocate the information for all gl_MaxLights entries as uniforms. A mess. By keeping your code this general you instantly reach the limits of the card and the program no longer links. I didn't know about this (never read about it), and there's no way to make the compiler optimize it for you. You must do separate fetches of each specific light source you want to use on the scene, and that way you keep the uniform count down. Weird.
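In other words, the workaround is to index gl_LightSource only with compile-time constants, so the compiler can see exactly which lights are touched. A sketch of the unrolled version (the trivial diffuse sum is just a placeholder for real shading):

```glsl
void main() {
    vec4 color = vec4(0.0);
    // Constant indices: the compiler now allocates uniforms for
    // lights 0..3 only, instead of all gl_MaxLights entries.
    color += gl_LightSource[0].diffuse;
    color += gl_LightSource[1].diffuse;
    color += gl_LightSource[2].diffuse;
    color += gl_LightSource[3].diffuse;
    gl_FragColor = color;
}
```

Ugly compared to the loop, but it keeps the uniform budget proportional to the lights you actually use.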
