When it comes to artificial intelligence, or AI, nobody disputes what counts as the A. The I, on the other hand, is much more hotly contested.
In recent years, a particular subset of AI known as machine learning has replicated feats of human intelligence with astonishing success. It can wipe the floor with world champions in games like chess and Go, mimic the style of established authors, transcribe human speech and even drive a car in traffic. If this is evidence of intelligence at all, though, it is intelligence of a kind radically different from our own.
At its heart, machine learning is based on sophisticated pattern recognition. If an AI is trained on a dataset that shows the correct course of action in certain situations, it can build on that training to identify the correct course of action in new situations as well. This can be extremely useful, but it omits much that is crucial to human reasoning.
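The mechanism can be sketched with a toy example (every situation, feature and label below is invented for illustration, not drawn from any real system): a nearest-neighbour "driver" that simply copies the action paired with the most similar situation it saw in training. Nothing in it encodes a concept such as object permanence; it knows only the patterns in its examples.

```python
# Toy pattern recogniser: pick the action from the most similar
# training situation. All data here is invented for illustration.

def nearest_neighbour(training, situation):
    """Return the action paired with the closest known situation."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(training, key=lambda example: distance(example[0], situation))
    return best[1]

# Each example: ((distance_to_obstacle_m, obstacle_visible), action)
training = [
    ((2.0, 1), "brake"),    # obstacle close and visible -> brake
    ((50.0, 1), "cruise"),  # obstacle far and visible -> cruise
    ((50.0, 0), "cruise"),  # nothing visible -> cruise
]

print(nearest_neighbour(training, (3.0, 1)))   # close, visible obstacle -> brake
print(nearest_neighbour(training, (45.0, 1)))  # distant obstacle -> cruise
```

The sketch generalises smoothly between situations it has seen, which is the strength of the approach; what it cannot do is reach conclusions, such as "a hidden pedestrian still exists", that no pattern in its training data expresses.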
In this week’s edition we give a particularly striking example of the difference between the two. By the age of seven months, most children have acquired a sense of “object permanence”—the notion that objects and people continue to exist once hidden or out of sight. For technology powered by machine learning, this conclusion remains out of reach. Such a blind spot is of minimal importance in an AI that plays chess, but can be a matter of life and death in a self-driving car.
That is why some researchers are proposing a complement to machine learning known as symbolic reasoning. Unlike a machine learning algorithm, a reasoning engine is not only fed raw data but also taught core concepts—including object permanence—which allow it to interpret that data in ways helpful to the end user. It can also, at least in theory, tell you why it did what it did, another feature that machine learning has so far lacked.
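The contrast with the pattern-based approach can be made concrete with a minimal sketch (the rule, the scenario and all names are invented, and no real reasoning engine is this simple): a hand-written object-permanence rule keeps tracking an object after it is occluded, and the engine can report which rule produced each conclusion, giving the kind of explanation the article describes.

```python
# Toy symbolic reasoner: alongside raw observations it is given one
# taught concept, object permanence, and it records why it concluded
# what it did. Entirely invented for illustration.

def infer(observations):
    """Track objects over time; once seen, hidden objects persist."""
    present, explanations = set(), {}
    for time, obj, visible in observations:
        if visible:
            present.add(obj)
            explanations[obj] = f"seen at t={time}"
        elif obj in present:
            # Object-permanence rule: an occluded object that was
            # previously seen is still treated as present.
            explanations[obj] = (f"occluded at t={time}, "
                                 "kept by object-permanence rule")
    return present, explanations

obs = [(0, "pedestrian", True), (1, "pedestrian", False)]
present, why = infer(obs)
print(present)            # the pedestrian is still tracked
print(why["pedestrian"])  # ...and the engine can say why
```

A learned system would have to infer such behaviour from examples; here the concept is supplied directly, and the stored explanation is the "tell you why it did what it did" feature the paragraph above mentions.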
Preliminary tests with such engines have shown they can improve the performance of self-driving cars, but not by enough to be transformative on their own. Their real benefit may instead come from demonstrating the value of using multiple techniques to help an AI learn about the world.