ACL 2020 Announces Its Best NLP Research Papers

The 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020) announced its best research papers in computational linguistics. Besides the best paper award, the conference also announced other awards, including the honourable mention papers, the best theme paper and the best demonstration paper.

Computational linguistics is the study of language from a computational perspective, in which researchers build knowledge-based or data-driven models of various kinds of linguistic phenomena. Computational linguists work on a range of natural language processing applications, such as speech recognition systems, text-to-speech synthesizers, automated voice response systems and text editors, among others.

Below are the award-winning papers on computational linguistics from the ACL 2020 conference:

Best Paper: 

Beyond Accuracy: Behavioral Testing of NLP Models with CheckList

About: A team of researchers from Microsoft Research, the University of Washington and the University of California, Irvine introduced CheckList, a model-agnostic and task-agnostic methodology for testing NLP models. It includes a matrix of general linguistic capabilities and test types that facilitates comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly.

CheckList tests individual capabilities of an NLP model using three test types: Minimum Functionality Tests (MFTs), Invariance tests and Directional Expectation tests. Further, CheckList revealed critical bugs in commercial systems developed by large software companies, indicating that it complements current practices well. The implementation of this methodology is also available on GitHub.
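As an illustration only (this sketch does not use the actual CheckList library API), the first two test types can be expressed in a few lines of plain Python against a toy keyword-based sentiment model:

```python
import re

def toy_sentiment(text: str) -> str:
    """A deliberately simple keyword model, used only to illustrate the tests."""
    negative = {"bad", "terrible", "awful"}
    tokens = re.findall(r"[a-z]+", text.lower())
    return "neg" if any(t in negative for t in tokens) else "pos"

# Minimum Functionality Test (MFT): simple, targeted cases the model must get right.
mft_cases = [("The food was terrible.", "neg"), ("Great service!", "pos")]
mft_failures = [text for text, gold in mft_cases if toy_sentiment(text) != gold]

# Invariance test (INV): a label-preserving perturbation (here, swapping a name)
# should leave the prediction unchanged.
original = "John said the movie was awful."
perturbed = "Mary said the movie was awful."
inv_holds = toy_sentiment(original) == toy_sentiment(perturbed)

print("MFT failures:", mft_failures)  # an empty list means every MFT case passed
print("INV holds:", inv_holds)
```

The third test type, the Directional Expectation test, checks that a perturbation shifts the prediction in a known direction, e.g. appending a clearly negative sentence should not make a sentiment score more positive.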

Read the paper here.

Honourable Mention Papers – Main Conference:

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks

About: In this research paper, the researchers investigated whether it is still helpful to tailor a pretrained model to the domain of a target task. They presented a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of in-domain pretraining (domain-adaptive pretraining) leads to performance gains under both high- and low-resource settings.

Read the paper here.

Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics

About: In this paper, the researchers add to the case for retiring BLEU as the de facto standard metric for evaluating MT systems. They recommend metrics such as chrF, YiSi-1 or ESIM instead, as these are more reliable for assessing empirical improvements.

Read the paper here.

Best Theme Paper:

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data

About: In this paper, the researchers argued that the language modelling task cannot, in principle, lead to the learning of meaning, because it uses only linguistic form as training data. They stated, “Our aim is to advocate for an alignment of claims and methodology: Human-analogous natural language understanding (NLU) is a grand challenge of artificial intelligence, which involves mastery of the structure and use of language and the ability to ground it in the world.”

Read the paper here.

Honourable Mention Paper – Theme:

How Can We Accelerate Progress Towards Human-like Linguistic Generalization?

About: This research paper examines the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become the dominant way of estimating progress in NLU, and asks whether it encourages human-like linguistic generalisation.

The paradigm consists of three stages: pretraining a word prediction model on a corpus of arbitrary size; fine-tuning (transfer learning) on a training set representing a classification task; and evaluation on a test set drawn from the same distribution as the training set.

Read the paper here.

Best Demonstration Paper:

GAIA: A Fine-grained Multimedia Knowledge Extraction System

About: In this paper, the researchers presented GAIA, which is claimed to be the first open-source multimedia knowledge extraction system. GAIA takes a large stream of unstructured, heterogeneous multimedia data from different sources and languages as input, and produces a coherent, structured knowledge base, indexing entities, relations and events.

Read the paper here.

Honourable Mention Papers – Demonstrations:

Torch-Struct: Deep Structured Prediction Library

About: In this paper, the researchers introduced Torch-Struct, a library for structured prediction designed to benefit from, and integrate with, vectorized, auto-differentiation based frameworks. Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model.

Read the paper here.

Prta: A System to Support the Analysis of Propaganda Techniques in the News

About: In this paper, the researchers presented Prta, the PRopaganda persuasion Techniques Analyzer. Prta makes online readers aware of propaganda by automatically detecting the text fragments in which propaganda techniques are being used as well as the type of propaganda technique in use. 

Read the paper here.


Copyright Analytics India Magazine Pvt Ltd
