Artificial intelligence is a buzzword on everyone’s lips. The downside is that people take the liberty to call almost anything AI. Going by their claims, every company in the world is AI-driven, regardless of whether it has a strong AI component. And this irks the researchers and purists of the field to a great extent.
One such researcher is Michael I. Jordan, who has not held back his displeasure at how casually the term AI is thrown around. “People are getting confused about the meaning of AI in discussions of technology trends,” he said. He believes that computers currently lack the capability to compete with humans, yet people talk as though they already do.
The problem with terming everything AI
Jordan is one of the leading researchers in artificial intelligence and machine learning. He is a professor in the department of electrical engineering and computer science and the department of statistics at the University of California, Berkeley. One of his most significant contributions is the transformation of unsupervised machine learning, which finds structure in data without pre-existing labels, from a collection of algorithms into an intellectually coherent field. Unsupervised learning is an important component of many scientific applications precisely because labelled training data is often unavailable; Jordan’s work has gone a long way toward meeting this challenge.
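To make the idea concrete, here is a minimal illustrative sketch, not drawn from Jordan’s own work, of an unsupervised method recovering group structure from data that carries no labels. It uses k-means clustering and assumes scikit-learn is available:

```python
# Illustrative sketch: unsupervised learning finds structure in data
# without pre-existing labels. K-means is a classic example; this is
# a toy demonstration, not Jordan's own method.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate 300 points that happen to form three groups. We discard
# the true labels to mimic a genuinely unsupervised setting.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# K-means recovers the grouping purely from the geometry of the data.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(kmeans.cluster_centers_)  # the three discovered cluster centres
print(kmeans.labels_[:10])      # cluster assignments for the first 10 points
```

No label was ever shown to the algorithm; the structure it reports comes entirely from patterns in the data itself.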
He is now actively working on helping scientists, engineers, and other stakeholders understand the full potential of machine learning. Jordan says that science-fiction discussions of AI and superintelligence are fun, but beyond that, they are mere distractions. He notes that there has not been enough focus on the real problem: building planetary-scale machine learning-based systems that actually work, deliver value to humans, and do not amplify inequities.
Jordan is very vocal about the perception of AI. In 2019, he wrote an article, ‘Artificial Intelligence—The Revolution Hasn’t Happened Yet’, arguing that the term AI is misunderstood by the public and technologists alike. He traces the term back to the 1950s, when it was first coined and people aspired to build computing machines that possessed human-level intelligence. The aspiration remains the same, but instead of machines that are intelligent per se, we have built systems whose capabilities augment human intelligence.
He argues that what is often referred to as AI technology does not necessarily involve high-level reasoning or thought, despite developments in the field. Moreover, these systems neither form semantic representations and inferences the way humans do nor formulate and pursue long-term goals. Jordan suggests that the most pressing problems can be solved by charting well-thought-out interactions between humans and machines. He asserts that the intelligent behaviour of large-scale systems ‘arises as much from the interactions among agents as from the intelligence of individual agents’.
He says that the developments in machine learning point to the emergence of a new field of engineering altogether, similar to the emergence of chemical engineering in the early twentieth century from foundations in chemistry and fluid mechanics. He terms machine learning the first human-centric engineering field, meaning that it is built at the intersection of people and technology.
He also advocates for a revitalised discussion of engineering, one framed in more society-building and intellectual terms.
Similar opinions
Jordan is not the only one to have expressed concern over AI’s path and its broader perception. Turing Award winner Yoshua Bengio, considered one of the founding fathers of the deep learning revolution, said that AI is not magic when asked about the biggest misconception around AI. He said that while there is amazing progress in AI, the community is still very far from human-level intelligence.
A recent paper by Melanie Mitchell, computer science professor at the Santa Fe Institute, spoke about fallacies in AI research. One of them is ‘wishful mnemonics’. Mitchell says that terms associated with human intelligence are often used to describe and evaluate AI programs. She refers to these anthropomorphic terms as shorthands, and notes that they have led to misleading news headlines like “New AI model exceeds human performance at question answering” and “Microsoft’s AI model has outperformed humans in natural-language understanding”. Unfortunately, such comparisons give false impressions about the potential and limitations of AI.