On Intelligence by Jeff Hawkins

I would like to recommend this book by Jeff Hawkins, in which the author proposes a theory of how the neocortex works.

He claims that the neocortex is basically a hierarchical memory system able to detect temporal and spatial patterns. Jeff Hawkins and his company, Numenta, are now trying to move forward by implementing this “neocortical algorithm” as software running on a computer.

I enjoyed reading it a lot, and I am now trying to read the technical papers. So far it looks like a good model, especially for computer vision systems, but it is not yet clear to me how it would solve problems from other cognitive areas such as language processing or planning.
More posts on that for the coming weeks!
Update [2013]: 5 years later, nothing very impressive has come out of Numenta. Even though the ideas in this book are appealing, in practice all solid Machine Learning results require: 1) a very clear loss function, 2) an efficient optimization algorithm, and 3) preferably, lots of data.

Measuring Intelligence

In order to develop artificial intelligence further, it would be important to have a formal and quantitative way to measure the intelligence of an agent, be it a human or a machine.
The most famous test for artificial intelligence is the so-called Turing Test, in which “a human judge engages in a natural language conversation with one human and one machine, each of which try to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test”. There is even a competition, the Loebner Prize, which evaluates different chatbots and chooses the one that most resembles a human.

However, this test is nowadays considered to be anthropomorphically biased: an agent can be intelligent and still not respond exactly like a human.
Marcus Hutter has recently proposed a new way of measuring intelligence, based on the concepts of Kolmogorov Complexity and Minimum Description Length, in which compression = learning = intelligence. The Hutter Prize measures how much one can compress the first 100MB of Wikipedia. The idea is that intelligence is the ability to detect patterns and make predictions, which in turn allows one to compress data a lot.
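The link between pattern detection and compression can be sketched with a few lines of Python. This uses zlib, a fixed general-purpose compressor, purely as a crude stand-in; the Hutter Prize uses far more sophisticated (often learned) compressors, but the metric is the same idea: the more regularity a method finds, the smaller the output.

```python
import random
import zlib

def compression_ratio(data: bytes, level: int = 9) -> float:
    """Compressed size divided by original size; lower means more regularity was found."""
    return len(zlib.compress(data, level)) / len(data)

# Highly regular data: the compressor "predicts" the repetition and shrinks it drastically.
patterned = b"abc" * 10_000

# Random bytes contain no exploitable pattern, so the ratio stays close to 1.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(30_000))

print(f"patterned: {compression_ratio(patterned):.3f}")  # very small
print(f"noisy:     {compression_ratio(noisy):.3f}")      # near 1.0
```

In this framing, a better “intelligence” is simply a model whose predictions drive the ratio lower on data that does contain structure.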
In my opinion this is not yet a totally satisfactory way of measuring general intelligence, for at least two reasons:
- the fact that method A compressed the dataset more than method B does not necessarily mean that method A is more intelligent. It may simply mean that the developer of the method exploited some characteristic of the (previously known) data. Or it can mean that the method is good at finding regularities in that particular dataset while being unable to learn other structures in other environments.
- it cannot be applied to humans (or animals).
For these reasons, I guess measuring intelligence is still a fundamental open problem in AI.

Artificial General Intelligence

Back in 1956, the founders of the new AI research field (John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon) were deeply convinced that within one generation we would have human-level intelligent computers.
However, after more than 50 years, we are still unable to solve some tasks that humans perform without any apparent effort (such as distinguishing a dog from a cat or a horse in any kind of picture). Many frustrating results mark the history of AI: the low quality of (early) machine translation systems, the lack of robustness of speech recognition and computer vision systems, etc.
The so-called “AI winter” is generally perceived to be over by now, since many researchers have new hopes of building Artificial General Intelligence. Recent contributions from both neuroscience and theoretical computer science were decisive in creating this optimism.
Here is a book edited by Ben Goertzel and Cassio Pennachin that brings together several of these renewed ideas.

As I read it, I will post comments on individual chapters concerning different approaches to AGI.

Talking Robots

Talking Robots is a “podcast featuring interviews with high-profile professionals in Robotics and Artificial Intelligence for an inside view on the science, technology, and business of intelligent robotics”.

This podcast is produced at the Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland, and a new episode comes out every two weeks.

In future posts we will comment on some of the episodes. Stay tuned!