“Perhaps expectations are too high, and… this will eventually result in disaster. Suppose that five years from now, funding collapses miserably as autonomous vehicles fail to roll. Every startup company fails. And there’s a big backlash so that you can’t get money for anything connected with AI. Everybody hurriedly changes the names of their research projects to something else. This condition is called the AI Winter,” said AI expert Drew McDermott in 1984.
In her latest paper, ‘Why AI is Harder Than We Think’, AI researcher Melanie Mitchell of the Santa Fe Institute explained how research in AI often follows a cyclic pattern: periods of rapid progress, successful commercialisation and heavy public and private investment, called an AI Spring, are often followed by an AI Winter, characterised by waning enthusiasm and a drying up of funding and jobs.
Mitchell argued that over-optimism among the public, the media and even experts arises from fallacies in our understanding of AI and our intuitions about the nature of intelligence. She outlined four major fallacies:
Narrow intelligence and general intelligence
One of the most common fallacies is the assumption that narrow intelligence lies on a continuum with general intelligence. Narrow intelligence refers to a machine’s ability to perform a single task extremely well, and advances in narrow AI are often described as the first step towards general AI.
For example, Deep Blue, IBM’s chess-playing computer, was popularly hailed as the first step in the AI revolution; IBM’s Watson system was described as the entry to a ‘new era of computing’; and, most recently, OpenAI’s GPT-3 was called a step towards general intelligence. This is commonly called the ‘first step fallacy’, a term coined by philosopher and mathematician Yehoshua Bar-Hillel. In philosopher Hubert Dreyfus’ words, it is the assumption that any improvement in our programs, no matter how trivial, counts as ‘progress’. Like Dreyfus, Mitchell believes the ‘unexpected obstacle’ on this assumed continuum of AI progress has been the problem of common sense.
Easy tasks and hard tasks
Moravec’s paradox, named after roboticist Hans Moravec, states that it is comparatively easy to make computers demonstrate adult-level performance on intelligence tests or at games like chess, but difficult or impossible to give them even the skills of a toddler when it comes to perception and mobility.
In other words, tasks that humans perform almost effortlessly, like making sense of what we see, conversing with another person, or simply walking without bumping into obstacles, can be among the hardest for machines. Conversely, solving puzzles, working through complex mathematical problems and translating text between languages are relatively easier for machines than for humans.
AI is full of ‘wishful mnemonics’, said Mitchell in her paper, referring to terms generally associated with human intelligence being used to describe and evaluate AI programs. For example, machine learning and deep learning methods are very different from learning in humans or even animals. Similarly, transfer learning, a subfield of machine learning, refers to a model transferring the knowledge it has gained on one task to new situations. While this capability comes naturally to humans, it is still an open problem for machines.
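To make the idea concrete, here is a minimal, purely synthetic sketch of what transfer learning means in machine-learning practice: a ‘pretrained’ feature extractor (simulated here with made-up weights, not learned from real data) is frozen, and only a small new output head is fitted on the target task.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretraining": suppose a model has already learned useful feature weights
# on a source task. We simulate that here: W_pre is the (unknown) true
# feature map plus a little noise, standing in for an imperfect pretrained model.
W_true = rng.normal(size=(5, 3))
W_pre = W_true + 0.01 * rng.normal(size=W_true.shape)

# Target task: labels happen to depend on the same underlying features.
X = rng.normal(size=(100, 5))
y = np.tanh(X @ W_true).sum(axis=1)

# Transfer: freeze the pretrained features, fit only a small new "head".
features = np.tanh(X @ W_pre)                      # frozen feature extractor
head, *_ = np.linalg.lstsq(features, y, rcond=None)
pred = features @ head

mse = np.mean((pred - y) ** 2)
print(f"error after transfer: {mse:.5f}")          # small: the features carried over
```

The point of the sketch is the asymmetry Mitchell highlights: the transfer works here only because the source and target tasks were constructed to share the same features, a convenient assumption that real-world situations rarely grant machines.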
Mitchell calls these anthropomorphic terms shorthands. “This has led to headlines such as “New AI model exceeds human performance at question answering”; “Computers are getting better than humans at reading”; and “Microsoft’s AI model has outperformed humans in natural-language understanding”. Given the names of these benchmark evaluations, it’s not surprising that people would draw such conclusions,” she stated.
Such benchmarks do not accurately measure a machine’s capability at tasks such as question-answering, reading, and natural language understanding. Many of them allow machines to exploit statistical correlations in the data to achieve high scores without actually learning the underlying skill. Machines are capable of performing such high-precision, narrow tasks, but they are still far from the general human abilities that the benchmarks’ names suggest.
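A purely synthetic illustration of this shortcut effect: on the made-up ‘sentiment benchmark’ below, a deliberately trivial model that only checks for the word ‘not’ scores far above chance, despite understanding nothing about any sentence.

```python
# Toy "benchmark": label 1 = positive review, 0 = negative. The data is
# invented so that the word "not" spuriously correlates with the label,
# the kind of artefact Mitchell describes in real benchmarks.
data = [
    ("great movie loved it", 1),
    ("a wonderful film", 1),
    ("truly enjoyable", 1),
    ("brilliant acting throughout", 1),
    ("not worth watching", 0),
    ("did not enjoy it", 0),
    ("not my kind of film", 0),
    ("the plot does not work", 0),
    ("not bad at all", 1),            # the shortcut fails only here
]

def shortcut_model(text):
    # No language understanding at all: keys on a single surface cue.
    return 0 if "not" in text.split() else 1

accuracy = sum(shortcut_model(t) == label for t, label in data) / len(data)
print(f"shortcut accuracy: {accuracy:.3f}")   # well above the 0.5 chance level
```

The ‘model’ gets 8 of 9 examples right, so a leaderboard would report near-human accuracy, yet the one sentence requiring actual comprehension (‘not bad at all’) is exactly the one it misclassifies.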
Intelligence is in the brain
It is a widely held belief that intelligence is a non-physical entity wholly encapsulated in the brain, a notion that suggests intelligence can be disembodied. The assumption is implicit in most work on AI throughout history: researchers believe that to achieve human-level intelligence, we simply need to scale machines up to match the brain’s ‘computing capacity’ and then develop the appropriate ‘software’ for this brain-matching ‘hardware’.
However, as many psychological and cognitive studies show, human intelligence is a strongly integrated system whose closely interconnected attributes, such as emotions, desires, autonomy and common sense, mostly cannot be separated from one another.
Despite mounting evidence, AI research has largely ignored these results. Only a small number of researchers explore these ideas in fields such as embodied AI and developmental robotics.
Read the full paper here.