Machine Learning Summer School

Held on the Île de Ré (France), 1-15 September, this school featured some famous names from the Machine Learning and Artificial Intelligence communities: Rich Sutton (co-author of the widely adopted book on Reinforcement Learning), Isabelle Guyon (co-author of the first paper on Support Vector Machines) and Yann LeCun (known for convolutional neural networks, energy-based models and the DjVu image compression technique).

You can check the (almost) complete list of lecturers here. I found the course given by Shai Ben-David on the “Theoretical Foundations of Clustering” quite interesting and intriguing. Clustering seems to be *really* lacking solid theoretical support, which is surprising given the importance of the problem. Some attempts are being made to axiomatize it, but there are a lot of open questions: what exactly is the class of clustering algorithms? How can you compare different clustering algorithms? Why is one partition better than another?
Hope to see more developments in this area in the coming years.

ICVSS 2008

Last week I attended the International Computer Vision Summer School in Sicily, Italy. The main topics were Reconstruction and Recognition. I think the quality of the lectures, organization and location were all quite good, therefore I would recommend it to other PhD students.

Here is a short summary of some of the things we heard about:

Andrew Zisserman (Oxford, UK) – gave an overview of object recognition and image classification, with focus on methods that use “bag of visual words” models. Quite nice for newcomers like me!
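To give a rough idea of what a “bag of visual words” model does (a toy sketch, assuming local descriptors have already been extracted from the images; the function names are my own, not from any library): cluster the descriptors of a training set into a vocabulary of k “visual words”, then represent each image by a histogram of how often its descriptors fall into each word.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Toy k-means: cluster local descriptors into k 'visual words'."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each descriptor to its nearest center ...
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # ... and move each center to the mean of its assigned descriptors.
        for c in range(k):
            pts = descriptors[labels == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return centers

def bag_of_words(image_descriptors, centers):
    """Represent one image as a normalized histogram of visual-word counts."""
    dists = np.linalg.norm(image_descriptors[:, None] - centers[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histograms can then be fed to any standard classifier, which is what makes the representation convenient for image classification.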

Silvio Savarese (UIUC, USA) – talked about 3D representations for object recognition. There is actually a Special Issue of “Computer Vision and Image Understanding” on the topic.

Luc Van Gool (ETH Zurich, Switzerland) – Lots of cool and fancy demos about 3D reconstruction. They are starting to use some recognition to help reconstruction (opposite direction of S. Savarese).

Stefano Soatto (UCLA, USA) – gave an “opinion talk” on the foundations of Computer Vision and how it can be distinguished from Machine Learning. I would have to read his papers to understand it better, but he seems to claim that the existence of non-invertible operations, such as occlusions, supports the need for image analysis instead of just “brute-force machine learning”.

We also had Bill Triggs (CNRS) talking about human detection, Jan Koenderink (Utrecht, Netherlands) on “shape from shading”, and a few tutorials covering topics as diverse as SIFT, object tracking, multi-view stereo, photometric methods for 3D reconstruction and randomized decision forests.

To summarize, I think the message was:

- Traditionally, recognition uses lots of Machine Learning, but the models keep little 3D information about objects;
- Traditionally, reconstruction uses ideas from geometry, optics and optimization but not learning;
- The future trend is to merge them: use 3D reconstruction to help in recognition tasks and use recognition to help in 3D reconstruction.

Science in Summer time

If everything goes as planned this year I am attending two Summer Schools.

The first one, the International Computer Vision Summer School 2008, will be hosted in Sicily, Italy, on 14-19 July. The program seems quite good and will cover topics like object detection, tracking and 3D reconstruction, among others. There’s also a reading group on “how to conduct a literature review and discover the context of an idea”. The challenge is to see how far back in the past one can trace the origins of a scientific idea. For example, AdaBoost is a well-known machine learning meta-algorithm in which a sequence of classifiers is progressively trained, each one focusing on the instances misclassified by the previous classifiers. The set of classifiers is then combined by a weighted average. It was introduced by Freund and Schapire in 1996. This is easy to track; the question, however, is: can you find the same or a similar core idea, or intuition, somewhere else further back in the past? Possibly from a different domain?
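That re-weighting loop can be sketched in a few lines (a toy implementation using decision stumps as weak learners and labels in {-1, +1}; the function names are my own, not from any library):

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """AdaBoost: repeatedly fit a weak learner on re-weighted data."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)       # instance weights, initially uniform
    ensemble = []                  # list of (alpha, feature, threshold, polarity)
    for _ in range(n_rounds):
        # Weak learner: exhaustively pick the stump with lowest weighted error.
        best, best_err = None, np.inf
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        best_err = max(best_err, 1e-10)              # avoid division by zero
        alpha = 0.5 * np.log((1 - best_err) / best_err)
        j, thr, pol = best
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        # Misclassified instances gain weight, correct ones lose it.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict_adaboost(ensemble, X):
    """Combine the stumps by their alpha-weighted vote."""
    score = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        score += alpha * pol * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(score)
```

The weight update is where the “focus on previously misclassified instances” happens: the next round’s stump is chosen to minimize error under the new weights.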
It’s gonna be fun!

The second one is the 10th Machine Learning Summer School, 1-15 September, Île de Ré, France. The program is also quite nice, but I still don’t have confirmation that I can attend.
I would be especially interested in Rich Sutton’s lecture on “Reinforcement Learning and Knowledge Representation”, although hearing about Active Learning, Bayesian Learning, Clustering, Kernel Methods, etc. also sounds quite appealing.

Looking forward to science in summer time!