Filling The Fluency Gap With Long-Form Question Answering Models

Most modern-day NLP applications are deployed as chatbots, a conversational task in which most questions call for little more than a one-word answer. This makes open-domain question answering (QA) a crucial benchmarking task in natural language understanding (NLU).

AI researchers and linguists have been collaborating to find a way to underpin the pursuit of general AI with a structure that is universal at its core and flexible in its deployment.

Most existing QA tasks are constrained — both to specific knowledge domains and to answers of a single word or phrase from the input passage. They require identifying a simple fact in a single web document, which is then presented as the answer, but existing QA systems can’t offer rich explanations the way people do.

To help advance question answering (QA) and create smarter assistants, Facebook AI has shared the first large-scale data set, code, and baseline models for long-form QA, which requires machines to provide long, complex answers.

Overview Of ELI5 Dataset

This new long-form QA dataset challenges existing algorithms because it requires processing many web documents comprising hundreds of thousands of words, identifying the relevant information in those documents, and writing a long-form response to an often open-ended question.

To build the data set, the researchers have leveraged a public subreddit titled “Explain Like I’m Five” (ELI5), in which an online community answers questions with responses that 5-year-olds can comprehend. The data set comprises 270K threads of diverse, open-ended questions that require multi-sentence answers. 
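
As a rough sketch of what one such thread boils down to for a QA model, each example pairs an open-ended question with a multi-sentence answer and the web passages gathered as support. The class and field names below are invented for illustration and are not the official release schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ELI5Example:
        """Hypothetical shape of one long-form QA example (field names are illustrative)."""
        question: str                  # open-ended question from a subreddit thread
        answer: str                    # multi-sentence, paragraph-length answer
        support_docs: List[str] = field(default_factory=list)  # web passages used as evidence

    example = ELI5Example(
        question="How do jellyfish function without a brain and a nervous system?",
        answer="Jellyfish don't have brains. They rely on a loose net of neurons ...",
        support_docs=["Jellyfish are not bilaterally symmetrical ..."],
    )
    print(example.question)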

QA models for ELI5 mimic what many people do when asked a question: just as a person would open a browser and read up on the topic, these models attempt the same strategy, searching web documents for relevant information before composing an answer.

ELI5 combines the challenges of synthesizing information from multiple sources, answering questions, and generating text into a real-world task, making it a more realistic and difficult task than prior QA data sets.
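
To make the retrieval step concrete, here is a minimal sketch of scoring candidate web passages against a question with TF-IDF similarity. The question, passages, and the choice of TF-IDF are assumptions for the sketch, not the support-document pipeline actually used for ELI5.

    # Rank candidate web passages by TF-IDF similarity to the question.
    # Illustrative stand-in for the "identify the relevant information" step;
    # the actual ELI5 pipeline may use a different retrieval method.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    question = "How do jellyfish function without a brain and a nervous system?"
    passages = [
        "Jellyfish are not bilaterally symmetrical and rely on a decentralised nerve net.",
        "The Atlantic Ocean covers about 20 percent of the Earth's surface.",
        "A nerve net lets jellyfish sense touch and light without a central brain.",
    ]

    vectorizer = TfidfVectorizer().fit([question] + passages)
    question_vec = vectorizer.transform([question])
    passage_vecs = vectorizer.transform(passages)

    scores = cosine_similarity(question_vec, passage_vecs)[0]
    ranked = sorted(zip(scores, passages), reverse=True)
    for score, passage in ranked[:2]:  # keep the two most relevant passages
        print(f"{score:.2f}  {passage}")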

Extractive And Abstractive Models

This open-source data set has also been used to introduce two kinds of models: 

  1. extractive models, which produce answers that are copied word for word from the supporting documents, and 
  2. abstractive models, which can rewrite the information in the supporting documents as needed.

For example, consider the question of how jellyfish function without a brain and a nervous system.

The abstractive model will give an answer like this:

Jellyfish don’t have brains. They have a bunch of neuron systems that act like a filter to get information back.

The extractive model, in contrast, gives an answer of the form:

They have an unusual nervous system, because jellyfish are not bilaterally symmetrical.
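
A toy contrast between the two behaviours, using the jellyfish question: the extractive stand-in copies a sentence verbatim from the supporting text, while the abstractive stand-in returns rewritten wording (stubbed here, since a trained generator is out of scope). All names and strings below are invented for illustration.

    support = (
        "They have an unusual nervous system, because jellyfish are not "
        "bilaterally symmetrical. Jellyfish don't have brains."
    )

    def extractive_answer(support_text: str) -> str:
        # Copy the first sentence word for word -- a stand-in for span selection.
        return support_text.split(". ")[0] + "."

    def abstractive_answer(support_text: str) -> str:
        # A real abstractive model would rewrite the evidence in its own words;
        # this placeholder only marks where a seq2seq generator would plug in.
        return "Jellyfish don't have brains; a loose net of neurons filters information instead."

    print("extractive :", extractive_answer(support))
    print("abstractive:", abstractive_answer(support))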

A sequence-to-sequence (seq2seq) approach was used for abstractive modeling, synthesizing information from various web sources to write a paragraph-length answer.

Standard seq2seq models receive a training signal only from predicting the answer, whereas a language model approach would be trained to predict the question, web source, and answer.
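
To make that difference in training signal concrete, here is a minimal sketch of how the supervision targets differ, using toy word-level tokens; the sequences and framing are assumptions for illustration, not the paper's exact setup.

    # Standard seq2seq: loss is computed on the answer tokens only.
    # Language-model style: loss covers question, web source, and answer tokens.

    question = "how do jellyfish function without a brain".split()
    source   = "jellyfish rely on a decentralised nerve net".split()
    answer   = "jellyfish do not have brains they use a nerve net".split()

    # Seq2seq: encoder reads question + source, decoder is trained to produce the answer.
    seq2seq_encoder_input  = question + source
    seq2seq_decoder_target = answer                    # only these tokens carry a loss

    # Language model: one long sequence, trained to predict every next token in it.
    lm_sequence = question + source + answer
    lm_targets  = lm_sequence[1:]                      # next-token targets over the whole sequence

    print(len(seq2seq_decoder_target), "tokens supervise the seq2seq model")
    print(len(lm_targets), "tokens supervise the language-model approach")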

Filling The Fluency Gap

"The black cat has crossed the road." The sentence might sound simple, but when one tries to translate it into one's native language, all the historical inferences and semantic subtleties come into play. A feline lover might be reminded of fluffy cat pictures, with the road-crossing barely registering; for some readers it is an ominous warning, and for others a sign of prosperity. A harmless statement like this can throw amateur linguists into disarray.

At a more general level, it gets tricky for machines to respond to queries that are interlaced with such references and trends.

The ELI5 data set and the accompanying baseline models help make progress toward this goal of attaining a human-like understanding of language.

The researchers firmly believe that these models can collectively affect the way people access information, particularly in the development of more intelligent assistants.

Read the full paper here.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
