Machine Learning Workshop – Idiap EPFL 2012

Yesterday I attended this workshop at EPFL:

http://www.idiap.ch/workshop/mlws/

It was a good opportunity to see old friends and colleagues, and to hear about their latest research. In general, the quality of the talks was quite good, ranging from very theoretical machine learning (sparse coding, optimization, etc.) to commercial applications of computer vision (www.faceshift.com).
Somewhere in the middle of that spectrum, I also quite liked the talk about learning image local descriptors (BRIEF and LBGM) as compact and efficient alternatives to SIFT or SURF, which are hand-designed, slower to compute and match, and use more bits. There were also applications to speech, face analysis and even remote sensing.
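As a rough idea of why binary descriptors are so cheap, here is a minimal sketch (file names and parameter values are placeholders) of extracting and matching BRIEF descriptors with OpenCV’s features2d module, available in OpenCV 2.3 and later:

    // Extract and match BRIEF descriptors: short binary strings compared
    // with the Hamming distance (XOR + popcount), instead of the Euclidean
    // distance needed by the floating-point vectors of SIFT/SURF.
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <vector>

    int main()
    {
        cv::Mat img1 = cv::imread("frame1.png", 0);  // placeholder files,
        cv::Mat img2 = cv::imread("frame2.png", 0);  // loaded as grayscale

        // Detect keypoints with FAST, describe them as 32-byte BRIEF strings.
        cv::FastFeatureDetector detector(30);
        cv::BriefDescriptorExtractor extractor(32);

        std::vector<cv::KeyPoint> kp1, kp2;
        detector.detect(img1, kp1);
        detector.detect(img2, kp2);

        cv::Mat desc1, desc2;
        extractor.compute(img1, kp1, desc1);
        extractor.compute(img2, kp2, desc2);

        // Brute-force matching on the Hamming distance.
        cv::BFMatcher matcher(cv::NORM_HAMMING);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);
        return 0;
    }

Each descriptor here takes only 32 bytes, versus a 128-dimensional floating-point vector for SIFT, which is where the speed and memory savings come from.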

Have a look at the program and keep an eye on it in the coming days, as the slides will probably become available. You will find several other interesting talks:

http://www.idiap.ch/workshop/mlws/programme-2012

Active Appearance Models

Lately, I have been working with Deformable Models and I am surprised by how well they can work.
In the video above I am using an Inverse Compositional Active Appearance Model, which was trained with images of myself. It’s specifically tuned to my face, but I still find it quite impressive how well it can track it in real time!
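For the curious: what makes the fitting fast enough for real time is the inverse compositional formulation (following Baker and Matthews’ “Lucas-Kanade 20 Years On”), in which the error is expressed over the template rather than the input image, so the Jacobian and Hessian can be precomputed once. Roughly, each iteration solves

    \min_{\Delta p} \sum_x \left[ A_0(W(x; \Delta p)) - I(W(x; p)) \right]^2

where A_0 is the mean appearance and W(x; p) the shape warp, and then updates the warp by composing it with the inverted increment, W(x; p) ← W(x; p) ∘ W(x; Δp)^{-1}.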
On the other hand, this model is quite sensitive to lighting conditions and partial occlusions. Training it is also something of an art because, unlike with discriminative models, increasing the amount of training data might actually decrease performance. This happens because we use PCA to learn the linear models of shape and texture, and these models degrade if the data has too much variation or noise.
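As a minimal sketch of what the shape model looks like (illustrative numbers, not my actual training code), this is how one can learn it with OpenCV’s cv::PCA:

    // Learn a linear shape model with PCA, as in an AAM. Each training
    // shape is a row vector of concatenated (x, y) landmark coordinates.
    #include <opencv2/core/core.hpp>

    int main()
    {
        const int numShapes = 50;     // e.g. 50 annotated images
        const int numLandmarks = 68;  // illustrative landmark count

        // One row per shape: (x1, y1, ..., x68, y68), assumed already
        // aligned (e.g. with Procrustes analysis) before running PCA.
        // Zeros are a placeholder for the real annotated data.
        cv::Mat shapes = cv::Mat::zeros(numShapes, 2 * numLandmarks, CV_32F);

        // Keep only the first few modes of variation; noise and excessive
        // variation in the data leak into these modes, which is why more
        // training data can actually hurt.
        cv::PCA pca(shapes, cv::Mat(), CV_PCA_DATA_AS_ROW, 10);

        // A shape is approximated as the mean plus a weighted sum of the
        // modes; the weights 'b' are what the AAM optimizes while fitting.
        cv::Mat b = pca.project(shapes.row(0));   // shape parameters
        cv::Mat approx = pca.backProject(b);      // back to landmark space
        return 0;
    }

The texture model is built the same way, except the rows hold shape-normalized pixel intensities instead of landmark coordinates.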
Still, it’s quite impressive what one can achieve by annotating just a few images (about 50, in this case). In addition, as one annotates images, one can start training models that help landmark the next ones (a process of “bootstrapping”, similar to the one used in compilers).

OpenCV 2.0 and Boost library in Snow Leopard

Since I installed Snow Leopard on my MacBook, I have been having compilation problems. The reason is that my code depends on a couple of external libraries, namely OpenCV and the Boost serialization library, and these were broken.

Today, I finally managed to solve the problems by following hints I found on different websites.
First, I installed OpenCV 2.0 (which should be quite cool because it has a cleaner and more compact notation for matrix computations, among other things). For this I basically followed the recommendations at:
http://giesler.biz/~bjoern/blog/?p=183#comments
For the small programs that only depended on OpenCV, things started to work again. But other pieces of code I am writing also use the Boost serialization library to save and load complex objects to disk.
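For context, this is roughly what that code looks like (the Model class here is a made-up example): Boost serialization needs only a single serialize() member to support both saving and loading.

    // Save and load an object with Boost serialization text archives.
    #include <fstream>
    #include <string>
    #include <vector>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/archive/text_iarchive.hpp>
    #include <boost/serialization/string.hpp>
    #include <boost/serialization/vector.hpp>

    struct Model {  // hypothetical class, for illustration only
        std::string name;
        std::vector<double> weights;

        // Boost calls this one function for both saving and loading.
        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/)
        {
            ar & name;
            ar & weights;
        }
    };

    int main()
    {
        Model m;
        m.name = "example";
        m.weights.push_back(0.5);

        {   // save to disk
            std::ofstream ofs("model.txt");
            boost::archive::text_oarchive oa(ofs);
            oa << m;
        }
        {   // load it back
            Model loaded;
            std::ifstream ifs("model.txt");
            boost::archive::text_iarchive ia(ifs);
            ia >> loaded;
        }
        return 0;
    }

Unlike many header-only Boost components, this one links against libboost_serialization, which is exactly where the architecture mismatch showed up.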
Compilation was working, but linking was failing. Basically, the problem was that the libraries had been compiled for different architectures: my OpenCV library was built for i686, whereas my Boost library was built for x86_64. I had installed Boost using MacPorts, which, on Mac OS X 10.6, builds for x86_64 if the CPU supports it, or i386 otherwise.
You can change this in the macports.conf file by uncommenting the line:
#build_arch i386
In my case, I actually changed it to i686, because I don’t care too much about compatibility with older platforms.
build_arch i686
After that, I followed this MacPorts ticket:
http://trac.macports.org/ticket/21408

Increasing the scope

In the past, I sometimes didn’t publish potentially interesting thoughts on this blog just because they didn’t exactly fit the “about intelligence” topic.
I’m fed up with this self-imposed censorship. From now on, the scope will be broader.

ACM Paris Kanellakis Theory and Practice Award 2008

The 2008 ACM Paris Kanellakis Theory and Practice Award went to Corinna Cortes and Vladimir Vapnik “for the development of Support Vector Machines, a highly effective algorithm for classification and related machine learning problems”.

It’s not the first time this award has gone to Machine Learning people. In 2004 it was awarded to Yoav Freund and Robert Schapire “for the development of the theory and practice of boosting and its applications to machine learning”.

I found it a bit weird that they left Bernhard Boser and Isabelle Guyon out of the prize, since they were Vapnik’s co-authors on the 1992 paper “A training algorithm for optimal margin classifiers”, which I guess is considered to be the first paper on Support Vector Machines…

Anyway, congratulations to the winners. These are indeed elegant algorithms with sound theoretical foundations and numerous successful applications to vision, speech, natural language and robotics, to name just a few.

—————————
Remarks:

Thanks to my cousin Rui for the link to this news.

—————————
Related post:

Vapnik’s picture explained.