“Without a commonsense understanding of the world, the AI systems, even the most advanced ones, will remain somewhat like idiot-savants.” – Hector Levesque
For this week’s AI practitioner’s series, Analytics India Magazine (AIM) got in touch with Hector Levesque, who has contributed immensely to the fields of knowledge representation and reasoning in artificial intelligence (AI) over the last four decades. Hector received his BSc, MSc and PhD from the University of Toronto in 1975, 1977, and 1981, respectively. After graduation, he accepted a position at the Fairchild Laboratory for Artificial Intelligence Research in Palo Alto, and later joined the faculty of the University of Toronto in 1984, where he remained until his retirement in 2014. Hector has published over 70 research papers and three books. Four of these papers have won best paper awards from the American Association of Artificial Intelligence (AAAI). He was also recently awarded the Allen Newell Award from the Association for Computing Machinery (ACM) and AAAI. In this interview, Hector gets candid about AI, its glory, its idiocy and its future.
AIM: How did it all begin?
Hector: I have always been fascinated by how things work, natural as well as artificial, and the workings of an autonomous intelligent being are perhaps the most mysterious of all. While I was an undergraduate in the early 70s, I came across Marvin Minsky’s Semantic Information Processing book. I had just learned to program (in FORTRAN), but I was quite taken with John McCarthy’s idea of programming a system that could take advice, that is, get better by being told in English more and more about its world, rather than having to be reprogrammed. This ability remains elusive sixty years later.
AIM: How close are we to Artificial General Intelligence? Is it possible to achieve AGI? What are the challenges?
Hector: I think we are still very far away. The current generation of AI systems based on deep learning is extremely impressive. These systems display expertise or skill at performing certain tasks (like playing Go, recommending movies, preventing money laundering). What is missing are AI systems that can deal with situations that are new, completely unanticipated by the engineers who built them. To do this, an AI system needs a deeper understanding of the world than merely being able to carry out some predefined task(s). It needs to know in some way that there are things in the world that have certain properties and that these properties can be affected by certain events, among which are those resulting from the actions at its disposal. Without this commonsense understanding of the world, the AI systems, even the most advanced ones, will remain somewhat like idiot-savants.
AIM: What is the role of language in the context of AGI?
Hector: Language will obviously be extremely important for systems interacting with people. You can only do so much by pointing, waving your hands, and making noises. It is tough to communicate about something far away in time or space without using a natural language. But even AI systems that are not intended to interact with people will need to understand their world in at least as much depth as we do, and much of our understanding is formulated in linguistic terms, from what we have read and been told.
AIM: In your book “Common Sense”, you talk about the Turing test. Could you elaborate?
Hector: The well-known Turing Test was proposed as a way to judge the success of an AI proposal in terms of whether the external behaviour of the AI system would be indistinguishable to an examiner from that of a human. I believe the focus on external behaviour is just right, but the test has misled researchers into thinking that it is sufficient to somehow deceive examiners, not to understand what underlies the behaviour in humans.
To me, reasoning is like arithmetic. It means bringing symbolic representations together and operating on them to produce new ones in a systematic, meaningful way. What makes this reasoning and not arithmetic is that the representations in question stand not for numbers, but for propositions: things that can be true or false.
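This view of reasoning as symbol manipulation can be sketched in a few lines of code. The sketch below is our illustration, not Hector’s own formulation: propositions are plain strings, rules pair a set of premises with a conclusion, and a simple forward-chaining loop applies modus ponens to derive new propositions from old ones. The particular facts and rules are invented examples.

```python
def forward_chain(facts, rules):
    """Derive every proposition reachable from `facts` via `rules`.

    `facts` is a set of propositions (strings); `rules` is a list of
    (premises, conclusion) pairs, where `premises` is a set of strings.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Modus ponens: if all premises are known, the conclusion
            # becomes a new known proposition.
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

facts = {"it is raining", "I am outside"}
rules = [
    ({"it is raining", "I am outside"}, "I will get wet"),
    ({"I will get wet"}, "I should find shelter"),
]

print(forward_chain(facts, rules))
```

Note that the loop never interprets what the strings mean; it derives “I should find shelter” purely by combining symbols according to the rules, which is exactly the sense in which reasoning resembles arithmetic on propositions rather than numbers.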
AIM: Which sub-domain of AI excites you the most?
Hector: Not too surprisingly, the sub-domain of AI that excites me the most is my area of knowledge representation and reasoning. This is the part of AI that tries to understand how ordinary knowledge about mundane things can be represented in a computer in such a way that it can be brought to bear as necessary in deciding how to behave.
The area of knowledge representation (KR) divides into a number of subareas: description logics, formal ontologies, commonsense theories, cognitive robotics, etc. I’ve worked a bit in all those subareas. The international KR conferences can be found at the kr.org website. There is also a graduate-level textbook from Elsevier that I wrote with Ron Brachman called “Knowledge Representation and Reasoning”; an introductory textbook aimed at first-year undergraduates is my “Thinking as Computation: A First Course” from MIT Press.
AIM: What are the popular misconceptions about AI?
Hector: I think one broad misconception about AI is that deep learning systems will just get better and better in an incremental way until they are as good as people at dealing with whatever comes up in real life. There is a tendency to underestimate the range of unusual, unexpected things that happen all the time, and to think that enough training on what has happened in the past will enable an AI system to deal with what will happen in the future.
AIM: Which domain of AI will come out on top in the next 10 years?
Hector: I don’t think any area of AI will come out on top. I believe that AI researchers will come to realise that different parts of the AI puzzle need to be solved by very different approaches and techniques. There is no silver bullet for AI. What works for walking over rough terrain should not be expected to work for writing an essay or flying a commercial aircraft.
AIM: What’s your advice to the next generation of AI researchers?
Hector: For the next generation of researchers, I recommend a university background in computer science, with courses in discrete mathematics, symbolic logic, and probability, and with special attention to AI (all branches!) starting in third or maybe fourth year. Follow your dream. Do not be discouraged by what the experts say. Work hard. Have fun!