What Should Tech Media Know Before They Report On Emerging Technologies Like AI?

Growing interest, combined with a lack of understanding of how these systems actually work, is creating a perfect storm of interest and ignorance, resulting in a misinformation epidemic in the field.


We have all come across headlines like “AI is brewing your next whisky,” “AI Can Make You Jobless,” or “AI Can Predict Your Future.” Most people would agree that popular media coverage of the artificial intelligence phenomenon is fraught with misinformation and, often, plain ignorance. AI researchers are aware of it, as are some journalists, and so, most likely, is the average media consumer. The content of such articles is unclear and obscure, and frequently misrepresents the underlying research or study to grab more eyeballs.

In an article titled ‘The AI Misinformation Epidemic,’ Zachary Lipton, then a PhD candidate and currently an Assistant Professor at Carnegie Mellon University, described how interest in machine learning has grown among the general public. This growing interest, combined with a lack of understanding of how the technology actually works, is creating a perfect storm of interest and ignorance, resulting in a misinformation epidemic in the field.


Let us now look at how the tech media can report better on emerging technology, keeping the points below in mind. Rather than being lazy or remaining ignorant, it is better to crack a book, read the research, and learn the discipline’s language.

No misleading and jargonised use of AI

Writing a piece on artificial intelligence requires a clear understanding of the subject. Too often, terms like AI, machine learning, deep learning, and reinforcement learning are used interchangeably, without attention to the finer details of the field; these concepts are not only distinct but unique in nature and application. It is a reminder that not everything can be termed AI.

Take headlines such as “AI can now design cities, but should we let it?” or “Google AI creates its own ‘child’ AI that’s more advanced than systems built by humans”: here, AI is characterised as a supernatural, independent agent with free will, capable of accomplishing impossible tasks beyond human capabilities. In truth, what today’s AI researchers and engineers create is nothing more than computer programmes that can mimic some characteristics of human intelligence.

Correctly depict expectations from AI

John McCarthy coined the term “Artificial Intelligence” in 1956. Initially, its use was confined to research papers and the research community; sci-fi films about AI, however, created hype around the idea, and the term trickled down into general use. The presentation of AI in the media these days gives readers the impression that AI can be a ‘panacea’ for all their problems.

Expecting an AI to build a city, or to work exactly like a human brain, is foolish. A mere prototype is portrayed as a new invention, while industry leaders sometimes make exaggerated claims; Elon Musk, for instance, said that by 2019 the world would have cars that drive themselves while the passenger sleeps. Such judgments are better left to researchers working in the domain. It is crucial to remember that this euphoria leads to unrealistic expectations and disillusionment, and may bring about yet another AI winter.

Image Credits: Towards Data Science

Ethical Concerns

As the fourth pillar of democracy, media houses have an even greater duty to report widely on the ethical concerns surrounding these emerging technologies. From privacy issues in facial recognition to bias in the datasets used to train AI systems, all of these issues need to be brought into the public domain. Doing so will push the field towards fairness, transparency, explainability, privacy, security, and robustness. As AI tools become increasingly commoditised, these problems will only grow. Hence, recent risks, such as the use of deepfake videos and text to discredit a competitor, or the use of artificial intelligence to launch sophisticated cyberattacks, must be reported accurately.

Better training of journalists and more integrity are the need of the hour. However, pointing fingers solely at journalists will not work, as one cause of the hype around AI is the uneven distribution of resources. A closer interaction between researchers and journalists would be the right step forward.


Kumar Gandharv
Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is setting out on a journey as a tech journalist at AIM. A keen observer of national and IR-related news.
