
Do AI Researchers Fully Understand How Algorithms Work?


Even though there has been a groundswell of research in AI and ML, a body of work is being presented with missing code and data. AI researcher Reza Zadeh, adjunct professor at Stanford University and CEO of Matroid, tweeted that of 400 AI papers presented, only 6% included code and 30% included test data. Science journalist Matthew Hutson described this replication crisis in an article. Though the field of AI is booming, researchers are finding it difficult to benchmark their findings.



Sometimes, the people who develop algorithms do not have a strong understanding of why or how these work in a given context. This is similar to a black box, where the internal workings of a device are hidden from its user. In this article, we discuss newer research results in AI and ML which, instead of being better, have actually proved less useful than their predecessors.

Do Researchers Fully Understand The Way Algorithms Work?

Ali Rahimi, AI researcher at Google, emphasises that ML researchers do not fully comprehend the way algorithms work. He even says it has become a form of alchemy. By comparing ML algorithms with alchemy, he emphasises that researchers tend to believe the assumptions involved in their work around algorithms rather than understand why one algorithm is better or worse than another. He also mentions that the approach followed in ML research still leans on trial and error.

In a research paper published by him along with other AI researchers at Google, the authors point out that the field's advancements are not strictly in line with the approach followed in developing them. They focus on recent case studies which show that newer findings need not necessarily be advantageous over older algorithms. In fact, the findings suggest that newer work in ML has overlooked mechanisms as simple as hyperparameter tuning and ablation studies. Furthermore, the paper presents four factors driving the rapid growth of ML, which are listed below.

  1. High number of publicly available datasets
  2. Cheaper hardware resources for computing ML
  3. Growing number of researchers in AI and ML
  4. Availability of open source ML platforms such as TensorFlow

They emphasise that these factors, in spite of being beneficial, can mask the real progress in ML. The paper therefore ends by discussing changes to research structures and incentives that call for greater rigour. The suggested changes include setting standards for research evaluation, recording and sharing experimental notes, and improving review standards, among other academic and practical aspects of research. Lastly, Rahimi and his team emphasise improving understanding instead of vigorously developing newer algorithms without considering their internal workings.
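The hyperparameter-tuning point can be made concrete with a toy sketch (our illustration, not an example from the paper): comparing methods with an arbitrary, untuned setting can make an algorithm look far worse than it is, while a simple grid search over the hyperparameter changes the picture. All names and values here are illustrative.

```python
# Toy illustration of why hyperparameter tuning matters: plain gradient
# descent on f(x) = x^2, where the final loss depends heavily on the
# learning rate chosen.

def final_loss(lr, steps=100):
    """Run gradient descent on f(x) = x^2 and return the final loss."""
    x = 10.0
    for _ in range(steps):
        x -= lr * 2.0 * x  # gradient of x^2 is 2x
    return x * x

# "Untuned" comparison: an arbitrary default learning rate.
untuned = final_loss(0.01)

# Hyperparameter tuning: a simple grid search over candidate learning rates.
tuned, best_lr = min((final_loss(lr), lr) for lr in [0.001, 0.01, 0.1, 0.5])

print(f"untuned loss={untuned:.4f}, tuned loss={tuned:.6f} (lr={best_lr})")
```

The same algorithm, judged only at the default setting, would be dismissed as slow to converge; after tuning, it performs far better. This is the kind of confound the paper argues newer work often overlooks when claiming improvements over baselines.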


Benjamin Recht, Associate Professor at University of California, Berkeley and co-author of Rahimi’s talk at NIPS 2017, holds a similar view. He goes on to say that AI needs to take insights from physics, since physics offers a better framework for understanding phenomena. A few researchers have already incorporated a physics perspective into AI.

Differing Opinions

Yann LeCun, director of AI research at Facebook, has a contrasting opinion to Rahimi’s “alchemy” characterisation of ML. He strongly objects to labelling ML as ‘alchemy’. In response to Rahimi’s talk, he argues that a lack of theoretical understanding should not discredit methods that demonstrably work in practice. In his words:


“Criticizing an entire community (and an incredibly successful one at that) for practicing “alchemy”, simply because our current theoretical tools haven’t caught up with our practice is dangerous. Why dangerous? It’s exactly this kind of attitude that lead the ML community to abandon neural nets for over 10 years, despite ample empirical evidence that they worked very well in many situations. Neural nets, with their non-convex loss functions, had no guarantees of convergence (though they did work in practice then, just as they do now). So people threw the baby with the bath water and focused on “provable” convex methods or glorified template matching methods (or even 1957-style random feature methods)”

As mentioned above, he stresses that focussing exclusively on theoretical understanding rather than implementation would render ML useless in the field. In addition, LeCun suggests working on the problem instead of blaming the ML community.

Outlook

Following this harsh criticism from experts within the AI community, researchers are now digging deeper into what happens inside ML and AI models. They are devoting more time to understanding how algorithms work rather than rigorously churning out newer ones. On an ending note, balancing theory and practice is crucial to any science, regardless of its stage of development.

