Friday, 11 February 2011

A softer way of life

Soft computing course: completed! A sparkling 30/30 cum laude closes an astonishing and inspiring experience.

I chose this course primarily because of simulated annealing and neural networks, since I had studied these techniques in a very formal way (cybernetics) and from a philosophical point of view (epistemology). I was interested in a more concrete approach, and I found it here. But there was much more!

I discovered a wide range of special neural networks, Hopfield's and Kohonen's in particular, which show several interesting capabilities that I'd like to experiment with soon. I finally studied the infamous back-propagation algorithm, which can learn non-linearly separable tasks, and that was good. But despite my initial expectations, these weren't the most interesting parts.
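Just to make the back-propagation part concrete, here's a tiny sketch I put together (my own toy example, not course material): a one-hidden-layer sigmoid network learning XOR, the textbook non-linearly separable task.

```python
import numpy as np

# Minimal back-propagation sketch: a sigmoid network with one hidden
# layer learning XOR, which a single perceptron cannot learn.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units (plus biases), one sigmoid output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error signal flows back through each sigmoid
    delta_out = (out - y) * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # Gradient descent step
    W2 -= lr * h.T @ delta_out;  b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid;  b1 -= lr * delta_hid.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```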

Genetic algorithms, for instance. It's funny to think of data structures "sharing genetic material", maybe even discriminating partners according to their own sexual tastes (Booker's model), with their children being subject to random mutations. Well, believe it or not, that funny stuff works! My experimental data clearly show that GAs are a strong and robust way to tackle NP-hard problems. The drawback is the difficulty of modelling the solution search space with simple encoded strings. Moreover, genetic algorithms can find a global optimum for a specific problem instance, but not for whole classes of problems.
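Here's a minimal sketch of the basic loop, under my own assumptions (OneMax as a stand-in fitness, plain tournament selection rather than Booker's mating preferences):

```python
import random

# Minimal GA sketch: bitstring individuals, tournament selection,
# one-point crossover and random bit-flip mutation.
random.seed(42)

GENES, POP, GENERATIONS = 32, 50, 100
MUT_RATE = 1.0 / GENES

def fitness(ind):
    return sum(ind)  # OneMax: count the 1 bits

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(ind):
    return [g ^ 1 if random.random() < MUT_RATE else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

print(max(fitness(ind) for ind in pop))  # should reach (or approach) 32
```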

So, here comes Genetic Programming. Yep, double upper case here. Whereas GAs were funny because of data sharing genetic information, what we have here are copulating programs... so nice! Massive derivation trees create brand new offspring generation after generation, letting new abilities emerge and spontaneously dropping the useless ones (a process called automatic feature selection). The way GP works immediately calls for a LISP implementation, and that's what I tried to do... even though I'm still stuck on a purely functional, recursive crossover function.
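For the record, here's a sketch of what I mean, in Python rather than the LISP version I'm actually after, and with a nested-tuple tree encoding that is just my own assumption: a purely functional, recursive subtree crossover that grafts a random subtree of one parent onto a random crossover point of the other.

```python
import random

# GP subtree crossover sketch: derivation trees as nested tuples
# (op, child, child); leaves are variables or constants.
random.seed(1)

def size(tree):
    """Number of nodes, counting the root."""
    if not isinstance(tree, tuple):
        return 1
    return 1 + sum(size(child) for child in tree[1:])

def get(tree, idx):
    """Return the subtree rooted at pre-order position `idx`."""
    if idx == 0:
        return tree
    idx -= 1
    for child in tree[1:]:
        if idx < size(child):
            return get(child, idx)
        idx -= size(child)

def put(tree, idx, donor):
    """Return a new tree with the subtree at position `idx` replaced."""
    if idx == 0:
        return donor
    idx -= 1
    op, new_children = tree[0], []
    for child in tree[1:]:
        if 0 <= idx < size(child):
            new_children.append(put(child, idx, donor))
        else:
            new_children.append(child)
        idx -= size(child)
    return (op, *new_children)

def crossover(parent_a, parent_b):
    point = random.randrange(size(parent_a))                  # where to cut A
    donor = get(parent_b, random.randrange(size(parent_b)))   # what to take from B
    return put(parent_a, point, donor)

mom = ('+', ('*', 'x', 'x'), ('-', 'x', 1))
dad = ('*', ('+', 'x', 2), 'x')
print(crossover(mom, dad))
```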

Finally, my personal favourite model, the one I dedicated this blog to: Particle Swarm Optimization. Yeah, triple upper case. As I wrote in the past, I love the spectrum of emergent behaviours this bunch of simple coordinates shows. Just like Conway's Game of Life, there seem to be unpredictable lifeforms living inside the model, and that is cool. Moreover, it can be quickly extended, as in the example described below.
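First, though, here's a minimal sketch of the vanilla update rule, just to fix ideas; the coefficients and the sphere fitness are my own arbitrary choices:

```python
import random

# Vanilla PSO sketch: each particle tracks its position, velocity and
# personal best; everyone is also pulled towards the global best.
random.seed(0)

DIM, N, STEPS = 2, 20, 200
W, C1, C2 = 0.7, 1.5, 1.5           # inertia, cognitive and social weights

def f(x):                            # fitness to minimize (sphere function)
    return sum(xi * xi for xi in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
best = [p[:] for p in pos]           # personal bests
gbest = min(best, key=f)[:]          # global best

for _ in range(STEPS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (best[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(best[i]):
            best[i] = pos[i][:]
            if f(best[i]) < f(gbest):
                gbest = best[i][:]

print(f(gbest))  # should be close to 0
```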


The extension is an interesting kind of PSO that prof. Vanneschi proposed, named Multiswarm Repulsive PSO (MRPSO). It features more than one swarm, each living on its own "island", all looking for the same global optimum. Every few generations, the best individuals are cloned and migrate to the next island, replacing its worst ones. Moreover, on some islands the particles run away from the best individual instead of following its "pheromone trails". This approach addresses two possible drawbacks of PSO: the lack of diversity and the particles' tendency to blindly follow the leader, thus staying away from interesting unknown zones of the search space. The performance boost is stunning, and there's still a lot of room for inventing new ways to improve it!
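And here is my own rough sketch of that multiswarm flavour (to be clear: my interpretation of the idea, not prof. Vanneschi's exact MRPSO formulation): several islands, periodic migration of cloned best particles onto the next island in place of its worst, and one island where the social pull is flipped into repulsion.

```python
import random

# Multiswarm sketch with one repulsive island (my own interpretation):
# several swarms, periodic migration of the best particles, and one
# island whose particles flee their best individual instead of chasing it.
random.seed(0)

DIM, N, ISLANDS, STEPS, MIGRATE_EVERY = 2, 15, 4, 300, 25
W, C1, C2, VMAX = 0.7, 1.5, 1.5, 2.0

def f(x):
    return sum(xi * xi for xi in x)            # sphere function, to minimize

def new_swarm():
    pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
    return {"pos": pos, "vel": [[0.0] * DIM for _ in range(N)],
            "best": [p[:] for p in pos]}

def step(s, repulsive):
    gbest = min(s["best"], key=f)
    sign = -1.0 if repulsive else 1.0          # flee or follow the leader
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            v = (W * s["vel"][i][d]
                 + C1 * r1 * (s["best"][i][d] - s["pos"][i][d])
                 + sign * C2 * r2 * (gbest[d] - s["pos"][i][d]))
            s["vel"][i][d] = max(-VMAX, min(VMAX, v))   # velocity clamping
            s["pos"][i][d] += s["vel"][i][d]
        if f(s["pos"][i]) < f(s["best"][i]):
            s["best"][i] = s["pos"][i][:]

swarms = [new_swarm() for _ in range(ISLANDS)]
for t in range(STEPS):
    for k, s in enumerate(swarms):
        step(s, repulsive=(k == ISLANDS - 1))  # last island is the repulsive one
    if t % MIGRATE_EVERY == 0:
        for k, s in enumerate(swarms):
            target = swarms[(k + 1) % ISLANDS]
            migrant = min(s["best"], key=f)[:]                        # clone the best
            worst = max(range(N), key=lambda i: f(target["best"][i]))
            target["pos"][worst], target["best"][worst] = migrant[:], migrant[:]

print(min(f(b) for s in swarms for b in s["best"]))   # should be close to 0
```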

4 comments:

Ale said...

In short, unmissable :)
Taking machine learning I've somewhat rediscovered my interest in all these techniques... Among other things, I found out that back-propagation also works for recurrent networks (I didn't know that!)

I can't wait to take the second-year exams! :D

Ale said...

Ah, and congratulations on the great mark, of course ;)

Unknown said...

thanks!

Yes, Mauri's course is kind of the opening act for Soft Computing :) Did you also cover Boltzmann machines this year?

Anyway, this course seems tailor-made for you. Knowing you, you'll have a blast with genetic programming, which Vanneschi teaches with an almost loving touch. Definitely a must-have in your study plan.

Anonymous said...

Thank you for this wonderful post. I admire the time and effort you put into your blog and the detailed information you offer.