Wednesday 29 October 2008

The worst part

Gelmini is taking us back twenty years, to school smocks straight out of the Fascist era and to the nonsense about the single schoolteacher. But Tremonti and company have even less shame, and today they are imposing cuts of billions of euros on Italy's universities.

The public university will die.

Of course, universities will be able to seek funding from companies by turning themselves into foundations. But given that companies historically couldn't care less about research, and that nobody does anything for free, it's clear that in the best case our research will be reduced to a flicker and channeled into whatever is immediately profitable.

Buried, in short.

The American model? Completely misunderstood, given that, to take the usual example, MIT gets 2% of its funding from private sources and the rest from the state. Of course, in America research is actually held in esteem.

So what's the goal, then? It's obvious that if universities are private only, only a narrow circle of people will be able to attend them. The much-vaunted "social mobility" will go straight down the toilet and, in the best tradition of the "Silvio-style right", they take from those who already have little to fatten those who have already cashed in.

The demolition of public education is the demolition of our society - already limping on its own for other reasons. It's the demolition of thought, of culture, of freedom in general: they will become the privilege of those who can (and already could) afford them.

It's a sad day for Italy. But that's not the point.


Widespread indifference

The point is that nobody says anything. Numbed by reassurances from all sides, by the bombardment of anthology-grade bullshit and by the absence of any opposition (does one exist?), Italians watch all this in a stupor without uttering a word. It's as if it didn't touch them, while our future generations will pay all the consequences.

I realize I'm directly affected, being both a student and an employee of the university, but I find it astonishing that such a devastating blow to public education finds no resonance at all in the people around me.

I can live with the €600 a year less I'll find in my paycheck (sure, I'm already on the poverty line as it is, when it rains it pours..).

I can live with not being able to get sick, because they're taking money away for that too (from me! who, when I broke a finger, took a whole TWO days of sick leave!).

But public education ending up in the toilet cannot and must not stand. Italy, wake up, for fuck's sake. Turn off that damn TV and realize the guano we're sinking into.

Tuesday 28 October 2008

Unnatural convolution

By using a simple convolution filter, with a small 5x5 kernel running in a fragment shader, one can obtain nicer, softer shadows.

This trick was invented by Anirudh Shastry; it's not physically correct and it's prone to artifacts (that's why I'm gonna use stencils), but it looks good and it's pretty fast. Moreover, I didn't have to move the whole rendering pipeline into a single shader, which is good.

Don't you know, talking 'bout the convolution it sounds like a whisper

Hard shadows destroy realism. As a matter of fact, they look too unnatural, especially when the rest of the scene appears softly shaded and we overlap a simulated cone of light.

Plenty of solutions have been proposed. Nvidia cards, for instance, are smart enough to automatically apply percentage closer filtering to shadow maps. ATI cards don't, but ATI proposed a nice way to implement it (using their proprietary multiple-fetch primitives).

I don't want to rely on proprietary features, which could lead to unexpected situations, so I decided to try something new: using stencil buffers, I'm gonna separate the "potentially shadowed" texels and blur them with some kind of convolution filter:
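As a sanity check of the idea, here's a plain-Python toy version (my own illustration, not the engine's shader) of what the fragment-shader pass does: a 5x5 box blur over the "potentially shadowed" mask, sampling with clamp-to-edge behaviour.

```python
def blur5x5(mask, w, h):
    """Apply a 5x5 box filter to a w*h grayscale image (row-major list)."""
    out = [0.0] * (w * h)
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-2, 3):
                for dx in range(-2, 3):
                    # clamp to edge, like GL_CLAMP_TO_EDGE texture sampling
                    sx = min(max(x + dx, 0), w - 1)
                    sy = min(max(y + dy, 0), h - 1)
                    acc += mask[sy * w + sx]
            out[y * w + x] = acc / 25.0
    return out

# A hard 8x1 shadow edge: fully lit -> fully shadowed
hard = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
soft = blur5x5(hard, 8, 1)
print([round(v, 2) for v in soft])  # [1.0, 1.0, 0.8, 0.6, 0.4, 0.2, 0.0, 0.0]
```

The hard 1/0 boundary becomes a gradual ramp, which is exactly the softening the shader produces in image space.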


Pros: it's REALLY fast (the dirty work is done in a fragment shader), it's lightweight (just one additional RGB texture is required), and it's image-space based, so I don't need to program lighting equations for each texture unit in the shader!

Cons: additional rendering steps are needed, one for each light; self-shadowing artifacts are likely to emerge; and there's no difference between shadows near to and far from the light (which is the best aspect of smarter algorithms like PCSS or VSM).

We'll see. In the meanwhile, I express my disappointment with ATI's engineers, who never really implemented glConvolution*D() in their drivers :| .

Thursday 23 October 2008

Please, someone help us

L'Espresso, 23 October 2008

«Maroni dovrebbe fare quel che feci io quand’ero ministro dell’Interni (…). Gli universitari? Lasciarli fare. Ritirare le forze di polizia dalle strade e dalle università, infiltrare il movimento con agenti provocatori pronti a tutto, e lasciare che per una decina di giorni i manifestanti devastino i negozi, diano fuoco alle macchine e mettano a ferro e fuoco le città. Dopo di che, forti del consenso popolare, il suono delle sirene delle ambulanze dovrà sovrastare quello delle auto di polizia e carabinieri. Le forze dell’ordine dovrebbero massacrare i manifestanti senza pietà e mandarli tutti in ospedale. Non arrestarli, che tanto poi i magistrati li rimetterebbero subito in libertà, ma picchiarli a sangue e picchiare a sangue anche quei docenti che li fomentano. Non quelli anziani, certo, ma le maestre ragazzine sì».

For the sake of foreign readers' comprehension, I'll (try to) translate the above:


«Maroni should do what I did when I was Minister of the Interior (…). The university students? Let them be. Withdraw the police from the streets and the universities, infiltrate the movement with agents provocateurs ready for anything, and let the protesters devastate shops, set cars on fire and put the cities to fire and sword for ten days or so.

After that, backed by popular consensus, the sound of ambulance sirens will have to drown out that of the police and carabinieri cars. The forces of order should massacre the protesters without mercy and send them all to the hospital. Don't arrest them - the magistrates would just set them free again right away - but beat them bloody, and beat bloody those teachers who stir them up as well. Not the elderly ones, of course, but the young schoolmistresses, yes».


These are not the sentences of some simple drunken ignorant idiot: Francesco Cossiga is a former president of ours. And this is the incredible outburst of a sitting senator.

When such people feel free to spew out phrases like these, it means the country is definitively drowning in shit.

There's no future for us this way; maybe it's really time for a civil war. It's really time for sirens.

Help us before it's too late.

Saturday 18 October 2008

Causticity

A preliminary shot of my "translucency shadow mapping" engine, showing just the "filtering light" layer. The light illuminates the glass, which projects its texture onto the transparent radiation box, which in turn is supposed to project onto the wall as well.

It seems to work, but actually it doesn't.

The reason is subtle: the more translucent a surface is, the less light it should block; this means that the projected shadow should appear brighter and more colorful, but it doesn't.

This is because I create a sort of translucency map by rendering the translucent objects from the light's point of view, and the resulting image gets darker as the alpha decreases! Some kind of "alpha inversion" is needed, maybe pre-calculated. But that means much more memory, linearly dependent on the number of lights and the resolution of the viewport!! Not suitable.
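To make the problem concrete, here's a toy per-channel model (my own illustration, with a made-up tinted-transmission formula, not the engine's code) contrasting what the naive light-POV render produces with the inverted behaviour I'm after:

```python
def naive(light, surface, alpha):
    # What rendering translucent objects over black from the light's POV
    # gives: standard blending, so the result gets DARKER as alpha drops.
    return tuple(l * s * alpha for l, s in zip(light, surface))

def inverted(light, surface, alpha):
    # The "alpha inversion" idea: low alpha lets light through almost
    # untouched, high alpha tints/blocks it (per-channel transmittance
    # 1 - alpha * (1 - surface), a common tinted-filter approximation).
    return tuple(l * (1.0 - alpha * (1.0 - s)) for l, s in zip(light, surface))

white = (1.0, 1.0, 1.0)
red_glass = (1.0, 0.2, 0.2)
print(naive(white, red_glass, 0.1))     # nearly black: wrong
print(inverted(white, red_glass, 0.1))  # nearly full light, slightly red: right
```

With alpha 0.1 the naive map is almost black even though the glass is almost fully transparent, which is exactly the bug described above.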

A better solution would be caustic mapping, which is definitely one of the coolest algorithms I've ever seen. The results are impressive, but there are (as usual) some drawbacks: additional geometry (for the refractive vertex grid and the projected points), no way to sum up multiple translucent contributions, and so on. It would be interesting to implement it some day, but there's no time to play with it right now.

Thursday 16 October 2008

Covert alpha operations

The screenshot on the left shows my shadow mapping engine rendering two translucent textured surfaces. The first one occludes the spotlight and casts a shadow on the second one which, in turn, casts one onto the floor and the wall.

Through the eye of the needle

The thumbnail looks fine, but a closer look (click it!) reveals many subtle aliasing problems. The fact is that "alpha to coverage" is just a workaround to achieve a nice effect (transparency) without adding too much complexity (sorting/splicing/etc). Its trick consists in converting the alpha information into coverage micropatterns. When the surface is sampled poorly - e.g. shadow map generation from a great distance - "non-covered" subfragments reveal themselves. This issue can be partially solved by linearly filtering the shadow map, but that can also create an awful moiré effect.


Images from the other side

Finally, there remains the open issue of correctly projecting colors when light passes through translucent surfaces. The solution I'm trying to develop is the following:
  • render the scene without translucent objects
  • take the depth map of this partial scene (this will block projection beyond opaque surfaces)
  • render everything that remains (using the depth map for the depth comparison)
  • take the color map of this scene and blend it over the shadow map
Theoretically it's a working algorithm, but there are surely plenty of complex cases that I'm ignoring.
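The steps above can be sketched per texel like this (a toy model under my own simplifying assumptions, not the engine's actual code): each texel, seen from the light, has the depth of the nearest opaque surface plus a list of translucent fragments, and only the fragments that pass the depth comparison get blended over the light.

```python
def light_through(light, opaque_depth, translucents):
    """Filter the light through every translucent fragment lying between
    the light and the opaque surface; the depth map from the opaque-only
    pass rejects anything behind that surface."""
    r, g, b = light
    for depth, (cr, cg, cb), alpha in translucents:
        if depth < opaque_depth:  # the depth comparison from step 2/3
            # step 4: blend the fragment's colour over the light/shadow map
            r = cr * alpha + r * (1 - alpha)
            g = cg * alpha + g * (1 - alpha)
            b = cb * alpha + b * (1 - alpha)
    return (r, g, b)

# A green pane in front of the wall tints the light; one behind it doesn't.
print(light_through((1.0, 1.0, 1.0), 0.8, [(0.3, (0.0, 1.0, 0.0), 0.5)]))
print(light_through((1.0, 1.0, 1.0), 0.8, [(0.9, (0.0, 1.0, 0.0), 0.5)]))
```

The second call leaves the light untouched because the pane sits beyond the opaque depth, which is precisely what the partial-scene depth map is for.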

Wednesday 15 October 2008

Order independence

Translucency and transparency are not trivial in realtime CG.

Peaceful ray tracers take their time throwing lines inside and outside objects, while rasterizers have no peace, quickly collecting incoming fragments. To alpha blend them, the fragments must arrive in a precise order: the farthest first.

This limitation has a deep impact when rendering a scene: you have to sort your objects to obtain correct blending. Moreover, there are pathological cases which cannot be solved with standard techniques (such as object1 covered by object2, covered by object3, which is covered by object1 again).
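The sorted-blending requirement can be sketched in plain Python (the GPU performs the same "over" blend per fragment with GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA; here depth is a plain number and larger means farther):

```python
def blend_sorted(background, fragments):
    """fragments: list of (depth, colour, alpha); blend back to front."""
    colour = background
    for _, frag, alpha in sorted(fragments, key=lambda f: -f[0]):
        # standard "over" operator: src * alpha + dst * (1 - alpha)
        colour = tuple(f * alpha + c * (1 - alpha) for f, c in zip(frag, colour))
    return colour

frags = [(0.2, (1.0, 0.0, 0.0), 0.5),   # near red pane
         (0.8, (0.0, 0.0, 1.0), 0.5)]   # far blue pane
print(blend_sorted((0.0, 0.0, 0.0), frags))  # (0.5, 0.0, 0.25)
```

Feed the same fragments in the wrong order without sorting and you get a different (wrong) colour, which is the whole problem with unsorted translucent geometry.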

I googled a bit and found out that several different solutions have been proposed to obtain "order independent transparency", but just two are really cool and don't require extra shaders: depth peeling and multisampled alpha coverage. The first one is brilliant: subdivide the scene by peeling away the closest depths, then blend everything at the end. Naturally, many render passes are required and performance drops.

The second one is less sophisticated, but interesting. It takes the additional samples produced by multisampling (for anti-aliasing and the like) and uses the alpha information to build a coverage mask over those samples.
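A naive sketch of that conversion (my own simplification: real hardware dithers the mask spatially across pixels, while this version just fills samples in order):

```python
def alpha_to_coverage(alpha, samples=4):
    """Turn an alpha value into a per-sample boolean coverage mask whose
    density approximates the alpha: with 4x MSAA, alpha 0.5 covers 2 of
    the 4 subsamples of the pixel."""
    covered = round(alpha * samples)
    return [i < covered for i in range(samples)]

print(alpha_to_coverage(0.5))   # [True, True, False, False]
print(alpha_to_coverage(0.25))  # [True, False, False, False]
```

This also shows why the technique degrades when sampled badly: the transparency only exists as a handful of discrete covered/uncovered subsamples per pixel.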


The final composition is stunning, but a correct irradiance on the shadowed object is still missing.

Tuesday 14 October 2008

Projective light cone

Due to its intrinsic nature as a texture, the shadow map looks square.

A nice way to improve its appearance is to use the very same coordinates that project the shadow to map an additional bitmap, one that simulates the base of the light cone produced by the "spotlight".
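The coordinate reuse boils down to this (a CPU sketch with a hypothetical row-major matrix layout, not the engine's code): transform the fragment's world position by the light's view-projection matrix, do the perspective divide, and remap from [-1, 1] to [0, 1] to get the (s, t) used to sample the cone bitmap.

```python
def project(light_vp, world_pos):
    """light_vp: 4x4 row-major view-projection matrix of the light.
    Returns the (s, t) texture coordinates for projective texturing."""
    x, y, z = world_pos
    # multiply (x, y, z, 1) by the matrix to reach the light's clip space
    clip = [sum(row[i] * v for i, v in enumerate((x, y, z, 1.0)))
            for row in light_vp]
    w = clip[3]
    # perspective divide, then bias/scale [-1, 1] -> [0, 1]
    return (clip[0] / w * 0.5 + 0.5, clip[1] / w * 0.5 + 0.5)

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(project(identity, (0.5, -0.5, 0.0)))  # (0.75, 0.25)
```

In the fixed-function pipeline the same effect is obtained by loading this matrix (plus the bias) into the texture matrix of the unit holding the cone bitmap.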

Now it would be cool to simulate the whole light cone with some kind of volumetric trick.

Tuesday 7 October 2008

One query, many records, one result


Sometimes you need to serialize the results of a simple db query. Instead of fetching every single row and appending it to a string, I thought it would be nicer (and faster) to delegate the string creation to the database server.

I found out there's no explicit way to do it in MySQL, but I came up with a nice workaround using the GROUP_CONCAT function:

SELECT GROUP_CONCAT(field)
FROM table
WHERE index IN (...)
GROUP BY NULL



Usually, GROUP_CONCAT is used to concatenate the results within each group. Here, instead, I want to concatenate all the results: grouping by NULL achieves exactly that. Everything collapses into a single record, and we can fetch a single comma-separated string with a single query.
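The same trick can be reproduced with Python's built-in sqlite3 module (SQLite also implements group_concat and accepts GROUP BY NULL, so it stands in for MySQL here; table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, field TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "alpha"), (2, "beta"), (3, "gamma")])

# one query, one row, one comma-separated string
row = conn.execute(
    "SELECT GROUP_CONCAT(field) FROM t WHERE id IN (1, 2, 3) GROUP BY NULL"
).fetchone()
print(row[0])
```

A single fetchone() returns the whole serialized list, instead of a fetch-and-append loop on the client side.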

Keep in mind possible issues with the size of the result: MySQL silently truncates GROUP_CONCAT output at group_concat_max_len (1024 bytes by default).

Wednesday 1 October 2008

Aliasing galore and FBOs

When you're going to write a 3D shadow engine, your choice is basically restricted to two options: shadow volumes or shadow mapping in one of its flavours.

There's no reason to choose shadow mapping other than performance: shadow volumes are more accurate, because a shadow map is just a texture spread over the scene and used for depth comparisons. Obviously the texture's level of detail affects rendering realism, and all sorts of aliasing will show up.

The quickest way to improve an image-space algorithm is to improve the image... space! You should render the scene at a higher resolution. But you can't: even if you raise the viewport size, anything larger than the actual container window will be dropped.

There are also restrictions on texture shape and dimensions, but you can get around those with some of the latest extensions (ARB_texture_rectangle, ARB_texture_non_power_of_two)... Still, the problem remains: we need detail. There are sophisticated geometric solutions, such as trapezoidal/perspective shadow maps and cascaded shadow maps, but we could simply... render elsewhere!

Here come framebuffer objects. They are basically "containers" you can use as targets for your renders. You can also use them to render directly to a texture, and that's exactly our case: we render our huge shadow map there.



The difference is clear and the performance hit is not dramatic. The coolest part is that using FBOs requires no modifications to the previous routines (just bind the directly rendered texture as usual!), and it can be switched on at any moment. Approved!