
What Should Tech Media Know Before They Report On Emerging Technologies Like AI?

Growing interest, combined with a lack of understanding of how these systems actually work, is creating a perfect storm of interest and ignorance, fuelling a misinformation epidemic in the field.

We have all come across headlines like “AI is brewing your next whisky,” “AI Can Make You Jobless,” or “AI Can Predict Your Future.” Most people would agree that popular media coverage of the artificial intelligence phenomenon is fraught with misinformation and, often, plain ignorance. AI researchers are aware of it, as are some journalists, and so, most likely, is the regular media consumer. The content of such articles is unclear and obscure, and often misrepresents the underlying research or study in order to grab more eyeballs.

In an article titled ‘The AI Misinformation Epidemic,’ Zachary Lipton, then a PhD candidate and currently an Assistant Professor at Carnegie Mellon University, described how interest in machine learning has grown among the general public. However, this growing interest, combined with a lack of understanding of how the technology actually works, is creating a perfect storm of interest and ignorance, resulting in a misinformation epidemic in the field.

Let us now look at how the tech media can report better on emerging tech, keeping the points below in mind. Rather than being lazy or remaining ignorant, it is better to crack a book, read the research, and understand the discipline’s language.

No misleading or jargonised use of AI

Writing a piece on artificial intelligence requires a clear understanding of the subject. Too often, terms like AI, machine learning, deep learning, and reinforcement learning are used interchangeably, without getting into the finer details of the field; these terms are not only different but distinct in nature and application. It is a reminder that not everything can be termed AI.

Take these titles, for instance: “AI can now design cities, but should we let it?” or “Google AI creates its own ‘child’ AI that’s more advanced than systems built by humans.” Here, AI is characterised as a supernatural, independent agent with free will, capable of accomplishing impossible tasks beyond human capabilities. In truth, what today’s AI researchers and engineers create is nothing more than computer programmes that mimic some characteristics of human intelligence.

Correctly depict expectations from AI

It was John McCarthy who first coined the term “Artificial Intelligence” in 1956. Initially, its use was confined to research papers and the research community; however, sci-fi movies about AI created hype around the term, and it trickled down into general use. The way AI is presented in the media these days gives readers the impression that it can be a panacea for all their problems.

Expecting an AI to build a city for you, or to have exact brain-like capabilities, is foolish. A mere prototype is portrayed as a new invention, while industry leaders sometimes make exaggerated claims; Elon Musk, for example, said that by 2019 the world would have cars that drive themselves while the passenger sleeps. It is better to leave such judgements to researchers working in the domain. It is crucial to remember that the euphoria leads to unrealistic expectations and disillusionment, and may bring about yet another AI winter.


Ethical Concerns

As the fourth pillar of democracy, media houses have an even greater responsibility to report widely on the ethical concerns surrounding these emerging technologies. From privacy issues related to facial recognition to bias in the datasets used to train AI systems, all of these issues need to be brought into the public domain. This will help push the field towards fairness, transparency, explainability, privacy, security, and robustness. As AI tools become increasingly commoditised, this will become a bigger problem. Hence, recent risks, such as the use of deepfake videos and text to discredit a competitor, or the use of artificial intelligence to launch sophisticated cyberattacks, must be reported accurately.

Better training of journalists and greater integrity are the need of the hour. However, to conclude, pointing fingers solely at journalists will not solve the problem, as one of the causes of the hype around AI is the uneven distribution of resources. A closer interaction between researchers and journalists would be the right step forward.

PS: The story was written using a keyboard.
Kumar Gandharv

Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is setting out on a journey as a tech journalist at AIM. A keen observer of national and IR-related news.