10 most impressive Research Papers around Artificial Intelligence

Amit Paul Chowdhury

Artificial Intelligence research advances are transforming technology as we know it. The AI research community is solving some of the most challenging technology problems related to software and hardware infrastructure, theory and algorithms. Interestingly, the field of AI research has drawn acolytes from non-tech fields as well. Case in point: prolific Hollywood actor Kristen Stewart’s highly publicized paper on Artificial Intelligence, originally published at Cornell University library’s open access site. Stewart co-authored the paper, titled “Bringing Impressionism to Life with Neural Style Transfer in Come Swim”, with the American poet and literary critic David Shapiro and Adobe Research engineer Bhautik Joshi.

Essentially, the AI-based paper talks about the style transfer techniques used in her short film Come Swim. However, Stewart’s detractors dismissed it as another “high-level case study.”

Meanwhile, the community is awash with ground-breaking research papers around AI. Analytics India Magazine lists the most cited scientific papers around AI, machine intelligence, and computer vision, which together give a perspective on the technology and its applications.

Most of these papers have been chosen on the basis of their citation counts. Some of them also take into account a Highly Influential Citation count (HIC) and Citation Velocity (CV). Citation Velocity is the weighted average number of citations per year over the last three years.

A Computational Approach to Edge Detection: Originally published in 1986 and authored by John Canny, this paper on the computational approach to edge detection has approximately 9,724 citations. The success of the approach rests on a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution.

The paper also presents a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. This helps establish that edge detector performance improves considerably as the operator point spread function is extended along the edge.
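The gradient-based core of a Canny-style detector can be sketched in a few lines of numpy. This is an illustrative simplification, not the paper's method in full: a complete implementation adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of the gradient stage shown here.

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude via Sobel filters -- the first stage of a
    Canny-style edge detector (smoothing, non-maximum suppression and
    hysteresis thresholding are omitted in this sketch)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge: the magnitude peaks along the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_gradient_magnitude(img)
edges = mag > mag.max() / 2
```

In practice the nested loops would be replaced by a vectorized or library convolution; they are written out here only to make the operator explicit.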

A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence: This research paper was co-written by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, and published in 1955. This summer research proposal defined the field, and has another first to its name: it is the first paper to use the term Artificial Intelligence. The proposal invited researchers to the Dartmouth conference, which is widely considered the “birth of AI”.

A Threshold Selection Method from Gray-Level Histograms: The paper was authored by Nobuyuki Otsu and published in 1979. It has received 7,849 citations so far. In the paper, Otsu discusses a nonparametric and unsupervised method of automatic threshold selection for picture segmentation.

The paper delves into how an optimal threshold is selected by the discriminant criterion to maximize the separability of the resultant classes in gray levels. The procedure utilizes only the zeroth- and first-order cumulative moments of the gray-level histogram. The method extends easily to multithreshold problems, and the paper validates it by presenting several experimental results.
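The criterion described above can be sketched compactly in numpy. This is an illustrative implementation, not Otsu's original code: for each candidate threshold it computes the between-class variance from exactly the two cumulative moments the paper relies on, and picks the maximizer.

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Return the threshold maximizing between-class variance,
    computed from cumulative moments of the gray-level histogram."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()                   # level probabilities
    omega = np.cumsum(p)                    # zeroth-order cumulative moment
    mu = np.cumsum(p * np.arange(levels))   # first-order cumulative moment
    mu_t = mu[-1]                           # total mean gray level
    # Between-class variance for every candidate threshold k.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Bimodal toy image: dark pixels near level 20, bright pixels near 200.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(20, 5, 500), rng.normal(200, 5, 500)])
img = np.clip(img, 0, 255).astype(int)
t = otsu_threshold(img)
```

On such a clearly bimodal histogram the selected threshold falls between the two modes, separating dark from bright pixels.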

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift: This 2015 article was co-written by Sergey Ioffe and Christian Szegedy. The paper has received 946 citations and has a HIC score of 56.

The paper talks about how training deep neural networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. The authors term this phenomenon internal covariate shift, and address it by normalizing layer inputs.
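The normalization step can be sketched in numpy. This is a minimal training-time sketch, not the paper's full method (which also tracks running statistics for inference): each feature is standardized over the mini-batch, then rescaled and shifted by learned parameters gamma and beta.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch per feature, then scale and shift.
    x: (batch, features); gamma, beta: learned per-feature parameters."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta

rng = np.random.default_rng(1)
x = rng.normal(5.0, 3.0, size=(64, 10))   # shifted, scaled layer inputs
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
```

With gamma = 1 and beta = 0 the output distribution is standardized regardless of how the previous layers shifted the inputs, which is the mechanism the paper credits for faster training.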

Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model’s accuracy by a significant margin.

Deep Residual Learning for Image Recognition: The 2016 paper was co-authored by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. The paper has been cited 1,436 times, with a HIC value of 137 and a CV of 582. The authors present a residual learning framework to ease the training of deep neural networks that are substantially deeper than those used previously.

The paper explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. Comprehensive empirical evidence shows that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

Distinctive Image Features from Scale-Invariant Keypoints: This article was authored by David G. Lowe in 2004. The paper has received 21,528 citations and explores a method for extracting distinctive invariant features from images. These can be utilized to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination.

The paper additionally delves into an approach that leverages these features for object recognition, identifying objects amid clutter and occlusion while achieving near real-time performance.
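The keypoint-detection stage of this method builds on difference-of-Gaussians (DoG) responses. The numpy sketch below shows only that first stage, with illustrative scales and threshold chosen here for the toy example; the full method adds scale-space extrema search across many scales, orientation assignment, and descriptor computation.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with same-size output (reflect padding)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def dog_keypoints(img, s1=1.0, s2=2.0, thresh=0.05):
    """Difference-of-Gaussians response; a strong |DoG| value marks a
    blob-like keypoint candidate (orientation/descriptors omitted)."""
    dog = gaussian_blur(img, s1) - gaussian_blur(img, s2)
    return np.argwhere(np.abs(dog) > thresh)

# A bright blob on a dark background fires near its center.
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
pts = dog_keypoints(img)
```

Because the DoG operator responds to structure at a characteristic scale rather than to absolute position or brightness, candidates found this way are the raw material for the scale-invariant keypoints the paper describes.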

Dropout: a simple way to prevent neural networks from overfitting: The 2014 paper was co-authored by Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. The paper has been cited around 2,084 times, with HIC and CV values of 142 and 536 respectively. Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks.

The central premise of the paper is to drop units (along with their connections) from the neural network during training, thus preventing units from co-adapting too much. This significantly reduces overfitting, while furnishing major improvements over other regularization methods.
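A minimal numpy sketch of the idea, in the "inverted dropout" form commonly used in practice (rescaling survivors at training time so no change is needed at test time; this convention is an assumption of the sketch, not a claim about the paper's exact formulation):

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: at train time, zero each unit with probability p
    and rescale the survivors by 1/(1-p) so the expected activation is
    unchanged; at test time the layer is the identity."""
    if not train:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(3)
x = np.ones((1000, 100))
y = dropout(x, p=0.5, rng=rng)   # roughly half the units zeroed
```

Because a different random subset of units is active on every training step, no unit can rely on the presence of any particular other unit, which is what "preventing co-adaptation" means operationally.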

Induction of decision trees: Authored by J. R. Quinlan, this scientific paper was originally published in 1986 and summarizes an approach to synthesizing decision trees that has been used in a variety of systems. The paper specifically describes one such system, ID3, in detail. Additionally, the paper discusses a reported shortcoming of the basic algorithm and compares two methods of overcoming it. The author concludes with illustrations of current research directions.
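ID3's core splitting criterion, information gain, is easy to sketch. The toy data below is invented purely for illustration: the attribute that perfectly predicts the label gets the highest gain and would be chosen for the root of the tree.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """ID3's splitting criterion: entropy reduction from splitting on attr.
    rows: list of dicts; labels: parallel list of class labels."""
    n = len(rows)
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [lab for r, lab in zip(rows, labels) if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Toy data: "outlook" perfectly predicts the label, "windy" does not.
rows = [{"outlook": "sunny", "windy": True},
        {"outlook": "sunny", "windy": False},
        {"outlook": "rain",  "windy": True},
        {"outlook": "rain",  "windy": False}]
labels = ["no", "no", "yes", "yes"]
gain_outlook = information_gain(rows, labels, "outlook")
gain_windy = information_gain(rows, labels, "windy")
```

ID3 applies this greedily: pick the highest-gain attribute, split the data, and recurse on each branch until the subsets are pure.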

Large-Scale Video Classification with Convolutional Neural Networks: This 2014 paper was co-written by six authors: Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. The paper has been cited over 865 times, with a HIC score of 24 and a CV of 239.

Convolutional Neural Networks (CNNs) have proven to be a powerful class of models for image recognition problems. These results encouraged the authors to provide an extensive empirical evaluation of CNNs on large-scale video classification, using a new dataset of 1 million YouTube videos belonging to 487 classes.

Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference: This work, published in 1988, was authored by Judea Pearl. It presents a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty.

Pearl provides a coherent explication of probability as a language for reasoning with partial belief, and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic.
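The kind of belief update that underlies this style of plausible reasoning can be illustrated with a single application of Bayes' rule. The numbers and hypothesis names below are an invented toy diagnosis example, not drawn from the book:

```python
def bayes_update(prior, likelihoods, evidence):
    """Update a belief over hypotheses given one piece of evidence.
    prior: {hypothesis: P(h)};
    likelihoods: {hypothesis: {evidence: P(e | h)}}."""
    unnorm = {h: prior[h] * likelihoods[h][evidence] for h in prior}
    z = sum(unnorm.values())                 # P(evidence)
    return {h: p / z for h, p in unnorm.items()}

# Toy diagnosis: a positive test raises belief in "disease",
# but a low prior keeps the posterior far from certainty.
prior = {"disease": 0.01, "healthy": 0.99}
likelihoods = {"disease": {"pos": 0.95, "neg": 0.05},
               "healthy": {"pos": 0.10, "neg": 0.90}}
posterior = bayes_update(prior, likelihoods, "pos")
```

Pearl's networks organize many such local updates over a graph of variables, so that evidence entered anywhere propagates coherently to every belief it bears on.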

Copyright Analytics India Magazine Pvt Ltd
