
Do AI Researchers Fully Understand How Algorithms Work?


Even though there has been a groundswell of research in AI and ML, a sizeable body of work is being presented without code or data. Reza Zadeh, AI researcher, adjunct professor at Stanford University and CEO of Matroid, tweeted that of 400 AI papers presented, only 6% included code and 30% included test data. Science journalist Matthew Hutson described this replication crisis in an article. Though the field of AI is booming, researchers are finding it difficult to benchmark their findings.

Sometimes, the people who develop algorithms do not have a strong grasp of why or how they work in a given context. The situation resembles a black box, where the internal workings of a device are hidden from its user. In this article, we discuss recent research arguing that newer results in AI and ML, instead of being better, have at times proved less useful than their predecessors.

Do Researchers Fully Understand The Way Algorithms Work?

Ali Rahimi, an AI researcher at Google, emphasises that ML researchers do not fully comprehend how their algorithms work; he goes so far as to call the practice a form of alchemy. By comparing ML algorithms to alchemy, he argues that researchers tend to accept the assumptions built into their work rather than understand why one algorithm performs better or worse than another. He also notes that ML research still leans heavily on trial and error.

In a research paper he co-authored with other AI researchers at Google, the authors point out that recent advances have outpaced the rigour of the methods used to develop them. They focus on recent case studies showing that newer findings are not necessarily improvements over older algorithms. In fact, the findings suggest that newer work in ML has overlooked checks as simple as hyperparameter tuning and ablation studies (a minimal sketch of both follows the list below). Furthermore, the paper identifies four factors behind the field's rapid growth, listed below.

  1. A high number of publicly available datasets
  2. Cheaper hardware resources for ML computation
  3. A growing number of researchers in AI and ML
  4. The availability of open-source ML platforms such as TensorFlow
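
To make the overlooked checks concrete, here is a minimal sketch (our illustration, not code from the paper) of what hyperparameter tuning and an ablation study look like in practice. The use of scikit-learn, the synthetic dataset and the specific search grid are all illustrative assumptions.

```python
# A minimal sketch of the two checks the paper says newer work often skips:
# tuning a baseline's hyperparameters, and ablating a component to see
# whether it actually contributes. All choices here are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hyperparameter tuning: give the baseline a fair search over its
# regularisation strength before declaring a new method "better".
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X, y)
print("tuned baseline accuracy:", search.best_score_)

# Ablation study: remove one component (here, feature scaling) and
# measure the difference, so its contribution is tested, not assumed.
with_scaling = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
without_scaling = LogisticRegression(max_iter=1000)
print("with scaling   :", cross_val_score(with_scaling, X, y, cv=5).mean())
print("without scaling:", cross_val_score(without_scaling, X, y, cv=5).mean())
```

The point of such checks is comparative honesty: if a strong, well-tuned baseline matches the new method, or an ablated component turns out not to matter, the claimed advance may be an artefact of the factors listed above rather than real progress.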

The authors argue that these factors, beneficial as they are, can overshadow real progress in ML. They therefore close by discussing changes to research structures and incentives that would restore rigour: setting standards for evaluating research, recording and sharing experimental notes, and improving review standards, among other academic and practical measures. Lastly, Rahimi and his co-authors emphasise improving understanding rather than churning out newer algorithms without examining their internal workings.


Benjamin Recht, Associate Professor at the University of California, Berkeley, and co-author of Rahimi’s talk at NIPS 2017, takes a similar stand: he argues that AI should borrow insights from physics, which offers a better framework for understanding phenomena. A few researchers have already incorporated this physics perspective into AI.

Differing Opinions

Yann LeCun, director of AI research at Facebook, takes a contrasting view of Rahimi’s “alchemy” reference. He strongly censures the labelling of ML as ‘alchemy’, and in response to Rahimi’s talk he writes that dismissing methods simply because theory has not yet caught up with practice is dangerous. In his words:

“Criticizing an entire community (and an incredibly successful one at that) for practicing “alchemy”, simply because our current theoretical tools haven’t caught up with our practice is dangerous. Why dangerous? It’s exactly this kind of attitude that lead the ML community to abandon neural nets for over 10 years, despite ample empirical evidence that they worked very well in many situations. Neural nets, with their non-convex loss functions, had no guarantees of convergence (though they did work in practice then, just as they do now). So people threw the baby with the bath water and focused on “provable” convex methods or glorified template matching methods (or even 1957-style random feature methods)”

As the quote above makes clear, he stresses that focusing on theoretical understanding at the expense of implementation would render ML less useful in the field. In addition, LeCun suggests working on the problem instead of blaming the ML community.
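
LeCun’s historical point about non-convex losses can be made concrete with a small sketch (our illustration, not LeCun’s code): a tiny neural network whose loss surface is non-convex, with no convergence guarantee, nonetheless fits XOR, a task no convex linear model can solve, using plain gradient descent. The architecture, seed and learning rate below are arbitrary assumptions.

```python
# A tiny neural net on XOR: the loss is non-convex and gradient descent
# carries no convergence guarantee, yet in practice it usually works --
# with an unlucky seed it can stall in a poor minimum, which is exactly
# the "no guarantees, works anyway" point LeCun makes.
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a convex linear model cannot fit it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units makes the loss surface non-convex.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = np.mean((p - y) ** 2)

    # Backward pass: manual gradients of the mean squared error.
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Plain full-batch gradient descent updates.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}, predictions: {p.ravel().round(2)}")
```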

Outlook

Following the sharp criticism from experts within the AI community, researchers are now digging deeper into what actually happens inside ML and AI systems, devoting more time to understanding how methods work rather than rushing out newer algorithms. Ultimately, balancing theory and practice is crucial to any science, whatever its stage of development.
