8 Outstanding Papers Presented at ACL 2023

The 61st Annual Meeting of the Association for Computational Linguistics brought together researchers and practitioners from the field of computational linguistics

The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023) is under way in Toronto, Canada, bringing together researchers and practitioners from the field of computational linguistics. Out of the plethora of research papers showcased at the conference, we have picked out eight brilliant ones that caught our attention.

Backpack Language Models

Language models exhibit gender bias in their pronoun distributions, favouring one gendered pronoun over the other depending on context; swapping in a profession with the opposite stereotype can flip the prediction, but achieving consistent de-biasing across all contexts is difficult. Backpack language models address this by representing each word in the vocabulary with a set of non-contextual sense vectors that capture different aspects of its meaning, and by forming predictions as weighted sums of these vectors. Because the sense vectors do not depend on context, editing one applies the same intervention everywhere, giving fairer, more inclusive language models with improved interpretability and control.
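To make the idea concrete, here is a minimal numpy sketch of the mechanism (an illustration under our own assumptions, not the authors' implementation): every word carries a fixed bank of sense vectors, the output at each position is a context-weighted sum over the sense vectors of the words in the sequence, and editing a sense vector therefore changes behaviour in every context at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a tiny vocabulary with k non-contextual sense vectors per word.
# Sizes and values are illustrative stand-ins, not the trained model's.
k, d = 4, 8
vocab = ["the", "nurse", "said", "that"]
senses = {w: rng.normal(size=(k, d)) for w in vocab}

def backpack_output(tokens, alpha):
    """Output at each position: a weighted sum over (context word, sense) pairs.

    In a real Backpack LM the weights `alpha` come from a contextualisation
    network (a Transformer); here they are random placeholders.
    """
    n = len(tokens)
    out = np.zeros((n, d))
    for i in range(n):              # position whose representation we build
        for j in range(n):          # context word contributing its senses
            for l in range(k):      # sense index
                out[i] += alpha[i, j, l] * senses[tokens[j]][l]
    return out

tokens = vocab
alpha = rng.uniform(size=(len(tokens), len(tokens), k))
print(backpack_output(tokens, alpha).shape)    # (4, 8)

# Because sense vectors are non-contextual, editing one (say, damping a
# gender-associated sense of "nurse") intervenes identically in every context.
senses["nurse"][2] = 0.0
```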

Authors: John Hewitt, John Thickstun, Christopher D. Manning, and Percy Liang.

Do Androids Laugh at Electric Sheep? Humour “Understanding” Benchmarks from The New Yorker Caption Contest

Can AI models truly grasp humour? The researchers test them on tasks built around the New Yorker Cartoon Caption Contest: matching a joke to its cartoon, identifying a winning caption, and explaining why a caption is funny. They evaluate both multimodal models, which engage with the cartoon images directly, and language-only models, which are given rich textual descriptions of the cartoons.

Authors: Jack Hessel, Ana Marasovic, Jena D. Hwang, Lillian Lee, Jeff Da, Rowan Zellers, Robert Mankoff and Yejin Choi.

Don’t Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments

Current language models lack the ability to ground themselves in real-world environments. Existing approaches place the burden of generating executable plans on the language models themselves, which leads to challenges in maintaining grammaticality, faithfulness, and controllability. To address this, the researchers introduce Pangu, a framework that leverages the discriminative rather than generative capabilities of language models for grounded language understanding: a symbolic agent explores the environment to build valid candidate plans, and the language model only evaluates how plausible each candidate is.
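That framing suggests a simple interface: the environment-side agent proposes only executable plans, and the language model's job reduces to choosing among them. Below is a hedged, minimal sketch of that discriminative step; `ground_query`, `toy_lm_score` and the candidate plans are made-up stand-ins, not the authors' code or data.

```python
from typing import Callable, List

def ground_query(question: str,
                 candidate_plans: List[str],
                 lm_score: Callable[[str, str], float]) -> str:
    """Return the candidate plan the language model judges most plausible.

    The candidates are assumed to be enumerated by a symbolic agent against
    the environment (e.g. a knowledge graph), so each one is executable and
    well-formed by construction; the LM never has to generate a plan itself.
    """
    return max(candidate_plans, key=lambda plan: lm_score(question, plan))

# Crude stand-in for an LM plausibility score (a real system would use an
# actual language model, e.g. the log-likelihood of the plan given the query).
def toy_lm_score(question: str, plan: str) -> float:
    return sum(word in plan.lower() for word in question.lower().split())

question = "who is the director of the film Oppenheimer"
plans = [
    "SELECT director WHERE film = 'Oppenheimer'",
    "SELECT release_year WHERE film = 'Oppenheimer'",
]
print(ground_query(question, plans, toy_lm_score))
# -> SELECT director WHERE film = 'Oppenheimer'
```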

Authors: Yu Gu, Xiang Deng and Yu Su.

Minding Language Models’ (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker

Large-scale neural language models lack basic Theory of Mind (ToM) — the ability to reason about the mental states of other people. Researchers propose SymbolicToM, a plug-and-play approach that enables reasoning about belief states of multiple characters using explicit symbolic representation. It tracks each entity’s beliefs, estimations of others’ beliefs, and higher-order reasoning through graphical representations, enhancing precision and interpretability in reading comprehension tasks.
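As a rough illustration of what explicit multi-character belief tracking buys, here is a toy sketch (the helper names and story are ours, not the authors' code): each character, and each character's model of another character, keeps its own world state, and only the perspectives that witness an event get updated.

```python
from collections import defaultdict

beliefs = defaultdict(dict)   # beliefs[("Sally",)]: Sally's world state
                              # beliefs[("Anne", "Sally")]: Anne's model of Sally

def observe(event, witnesses):
    """Record an (object, location) event in every witnessing perspective."""
    obj, location = event
    for perspective in witnesses:
        beliefs[perspective][obj] = location

# Classic Sally-Anne story.
observe(("marble", "basket"), witnesses=[("Sally",), ("Anne",), ("Anne", "Sally")])
# Sally leaves; Anne moves the marble. Sally does not see it, and Anne
# knows that Sally did not see it.
observe(("marble", "box"), witnesses=[("Anne",)])

print(beliefs[("Sally",)]["marble"])          # basket  (Sally's belief)
print(beliefs[("Anne", "Sally")]["marble"])   # basket  (Anne's belief about Sally's belief)
print(beliefs[("Anne",)]["marble"])           # box
```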

Authors: Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi and Yulia Tsvetkov.

The Mechanical Bard: An Interpretable Machine Learning Approach to Shakespearean Sonnet Generation

Researchers explore automated generation of Shakespearean sonnets, using constrained decoding so that the generated poems respect meter, rhyme scheme, length, and poetic conventions. The approach produces sonnets resembling human-authored ones, with lyrical language, literary devices, and adherence to genre constraints, as confirmed by human evaluation.
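Constrained decoding of this kind can be pictured as filtering the candidate vocabulary at every step so that hard constraints can never be violated. The sketch below enforces only a toy ten-syllable line budget with a random stand-in for the language model's score; the word list, syllable counts, and scoring are all illustrative, not the authors' system.

```python
import random

# Toy constrained decoding: words that would break a hard constraint (here,
# a ten-syllable budget standing in for iambic pentameter) are filtered out
# before the "language model" scores the remaining candidates.
SYLLABLES = {"shall": 1, "i": 1, "compare": 2, "thee": 1, "to": 1,
             "a": 1, "summer's": 2, "day": 1, "wondrous": 2, "morning": 2}

def syllable_count(line):
    return sum(SYLLABLES[w] for w in line)

def decode_line(lm_score, budget=10):
    line = []
    while syllable_count(line) < budget:
        allowed = [w for w in SYLLABLES
                   if syllable_count(line) + SYLLABLES[w] <= budget]
        if not allowed:
            break
        line.append(max(allowed, key=lambda w: lm_score(line, w)))
    return " ".join(line)

# A random score stands in for the language model; a real system would rank
# the allowed candidates by the LM's next-token probabilities.
print(decode_line(lambda line, word: random.random()))
```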

Authors: Edwin Agnew, Michelle Qiu, Lily Zhu, Sam Wiseman and Cynthia Rudin.

World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models

Grounding language in the physical world is crucial for understanding word meanings. To bring this into language models, the researchers present Grounded Open Vocabulary Acquisition (GOVA), which examines grounding and bootstrapping in open-world language learning. Their initial attempt is object-oriented BERT (OctoBERT), a visually grounded language model pre-trained on image-text pairs with grounding as an explicit objective.
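Concretely, grounding here means linking words to the image regions they denote. A minimal, purely illustrative sketch of that word-region matching follows; the random vectors stand in for what a visually grounded encoder would produce, and none of the names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random vectors stand in for region and word embeddings from a visually
# grounded encoder; real embeddings would come from pre-training on
# image-text pairs with a grounding objective.
regions = {"region_0": rng.normal(size=16), "region_1": rng.normal(size=16)}
words = {"dog": rng.normal(size=16), "frisbee": rng.normal(size=16)}

def ground(word_vec, regions):
    """Link a word to the region whose embedding it matches best."""
    return max(regions, key=lambda name: float(word_vec @ regions[name]))

for word, vec in words.items():
    print(word, "->", ground(vec, regions))
```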

Authors: Ziqiao Ma, Jiayi Pan and Joyce Chai.

Forgotten Knowledge: Examining the Citational Amnesia in NLP

Have you ever wondered how old the papers you cite are? What do we miss out on when we fail to read older papers and build on their ideas? In this paper, the researchers explore questions like these for Natural Language Processing (NLP) papers, backing their analysis with data and graphs.
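The kind of measurement behind those graphs is simple to state: for each paper, how far back do its citations reach? A toy calculation with made-up numbers (not the paper's data):

```python
# Toy "citation age" calculation: how old are the papers a 2023 paper cites?
citing_year = 2023
cited_years = [2022, 2021, 2021, 2020, 2019, 2015]

ages = [citing_year - year for year in cited_years]
print(f"mean citation age: {sum(ages) / len(ages):.1f} years")
print(f"share from the last 5 years: {sum(a <= 5 for a in ages) / len(ages):.0%}")
```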

Authors: Janvijay Singh, Mukund Rungta, Diyi Yang and Saif Mohammad.

Causes and Cures for Interference in Multilingual Translation

This research paper from Meta explores the little-understood phenomenon of interference, broadly defined as a negative interaction between different translation directions in a multilingual machine translation model. “Interference trends can be tricky to measure,” lead author Uri Shaham acknowledged in a December 16, 2022 tweet, summing up the paper’s central questions — “What causes interference or synergy between language pairs in multilingual translation? Do we actually need specialised algorithms to alleviate interference?”

Authors: Uri Shaham, Maha Elbayad, Vedanuj Goswami, Omer Levy and Shruti Bhosale.

