“From three to eight years we will have a machine with the general intelligence of an average human being,” said Marvin Minsky, the founder of MIT’s AI Lab, in 1970. Five decades later, artificial general intelligence (AGI) remains a distant dream. Hyper-optimism, misinformation, and exaggeration in popular media have bred several misconceptions about AI.
Some of the most common misconceptions include:
Artificial intelligence and machine learning are the same.
Artificial intelligence and machine learning have become buzzwords and are sometimes used interchangeably. Although the technologies are closely related, they are not the same. Vague definitions, tech companies playing up their capabilities with the help of overzealous PR firms, and reporters legitimising the interchangeable usage without fact-checking have all added to the confusion. AI refers to machines exhibiting human-like intelligence through different techniques; machine learning is one of those techniques. AI’s ultimate goal is to develop an intelligent system that simulates human thinking and intelligence. Machine learning, meanwhile, teaches machines to learn from given data in order to produce a desired output. AI aims to make machines more human-like; ML helps machines learn the way humans do.
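As a toy illustration of that last point (not from the article), the sketch below shows what “learning from data” means: the program is never told the rule, it estimates it from examples. The function name and data are invented for the example.

```python
# A minimal "machine learning" step: the program is not told the rule y = 2x;
# it estimates a slope w from example data (least squares through the origin).
def fit_slope(xs, ys):
    """Learn a slope w so that y ≈ w * x, purely from the data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]          # generated by the hidden rule y = 2x
w = fit_slope(xs, ys)      # the model "learns" w = 2.0 from the examples
print(w * 5)               # predicts 10.0 for an input it has never seen
```

The point of the sketch: nobody wrote `y = 2x` into the program; the relationship was recovered from data, which is what distinguishes ML from hand-coded rules.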
AI does not require human intervention.
A layperson may get the impression that machines are advanced enough to learn on their own. In reality, machines are not yet capable of making their own decisions. A specialist is still required to formulate the problem, prepare the models, assemble a training dataset, and identify and eliminate potential biases. AI models remain dependent on humans.
Sanjeev Azad, associate vice president (technology), GlobalLogic, gives two examples:
- AI-powered chatbots can improve customer interactions and help increase sales: Unless continuously trained on real customer-interaction datasets, standard FAQ-based chatbots may adversely affect customer interactions.
- AI-enabled technologies can automate threat detection and response without the need for human intervention: Hackers and cybercriminals are themselves harnessing AI algorithms to exploit digital systems. Human-led cyber-defence organisations must stay one step ahead to prevent evolving cyber-attacks.
AI will take away jobs.
People feared job losses during the industrial revolution too. As established in the previous point, the fear is unfounded: machines still need humans to operate them. Even if AI takes over some roles in the future, that shift would also generate new jobs.
“AI technology is at the helm of digitisation, with businesses relying heavily on it. There is a growing demand for AI-based jobs, which will offer tremendous scope for the students in the coming future. Many misconceptions like AI doesn’t need humans, or it will take away jobs, are mere speculations. The reality is, the sector has an abundance of jobs and not enough talent to fill the roles. Students who are adept at AI are in great demand and can revolutionise fields like Robotics, Computation, Agriculture, Healthcare and Data Science, among others,” said Prateek Agrawal, Associate Professor, Lovely Professional University.
The fear of job loss is also prevalent among many workers in the low-skill category. “Many researchers have been trying for years to design robots that can perform these kinds of simple tasks. While there is some success in very well-mapped domains, no existing robot can do this well in natural spaces. This phenomenon is known as Moravec’s paradox: it is much easier to build AI systems that perform at a human level on high-level cognitive tasks than it is to build AI systems that can learn rudimentary perceptual and motor skills that small children easily perform,” said Dr Debashis Guha, Director, Master of Artificial Intelligence in Business, SP Jain School of Global Management.
All AI systems are complex.
It is commonly believed that all AI systems are highly complicated and difficult to explain. However, like many human processes and traditional software systems, some AI systems are simple and easy to explain. AI explainability is emerging as a rich area of research that gives insight into why a particular system works the way it does and helps improve the transparency of decision-making. Even when an AI system cannot be fully explained, we may still understand how it makes decisions better than we understand human decision-making.
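To make the “some AI systems are easy to explain” claim concrete, here is a hypothetical sketch (the weights, features, and threshold are invented): a linear scoring rule where each feature’s contribution to the decision can be read off directly.

```python
# A fully explainable "AI" decision: a linear scoring rule. The explanation
# of any decision is simply each feature's weight times its value.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # illustrative

def explain(applicant):
    """Return each feature's contribution to the overall score."""
    return {f: weights[f] * applicant[f] for f in weights}

applicant = {"income": 6.0, "debt": 2.0, "years_employed": 4.0}
contributions = explain(applicant)        # per-feature explanation
total = sum(contributions.values())       # overall score
print(contributions)                      # e.g. debt lowers the score by 1.6
print(total)
```

Unlike a deep neural network, this model’s reasoning is the arithmetic itself, which is why simple models are often preferred where transparency matters.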
AI is always objective.
AI systems are believed to be highly objective. But in reality, they are only as good as the data they are trained on. Data scientists working on these systems may intentionally or unintentionally introduce biases based on their preferences. Often, these biases remain unexposed until the algorithms are deployed publicly.
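A hypothetical sketch of how bias can be inherited from data rather than written into code: a trivial model that learns the most common past decision per group will faithfully reproduce a historically skewed decision record. All names and numbers here are invented for illustration.

```python
# A model trained on skewed historical decisions reproduces the skew:
# it learns, for each group, the most common past decision.
from collections import defaultdict

def train(history):
    """Learn the majority past decision for each group label."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, decision in history:
        counts[group][decision] += 1
    return {g: max(d, key=d.get) for g, d in counts.items()}

# Hypothetical record in which group B was mostly rejected, regardless of merit.
history = [("A", "approve")] * 9 + [("A", "reject")] * 1 \
        + [("B", "approve")] * 2 + [("B", "reject")] * 8
model = train(history)
print(model)  # the learned rule simply encodes the historical bias
```

Nothing in the code mentions rejecting group B; the bias lives entirely in the training data, which is why such problems often surface only after deployment.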