What Is Text Modular Network?

Complex machine learning tasks such as question answering and numerical reasoning become easier to solve when decomposed into smaller functions that existing methods can handle. Based on this approach, a team of scientists from the Allen Institute for AI developed a general framework called Text Modular Networks for building interpretable systems.

TMN, explained

Text Modular Networks (TMNs) learn the textual input-output behaviour of existing models through their datasets. This differs from earlier task-decomposition approaches, which were designed explicitly for each task and produced decompositions independently of the existing sub-models.


For this study, the team chose the question answering task to show how a next-question generator can be trained to sequentially produce sub-questions targeting the appropriate sub-models. The next-question generator lies at the core of the TMN framework. Its output, a sequence of sub-questions and their answers, provides a human-interpretable description of the model's neuro-symbolic reasoning.
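
In code, this loop might look like the following minimal sketch, where next_question_generator and submodels are illustrative placeholders rather than the paper's actual API:

```python
def answer_complex_question(question, next_question_generator, submodels):
    """Iteratively decompose `question`, routing each sub-question to the
    sub-model the generator names, until the generator signals it is done."""
    state = question   # running context: the complex question plus QA pairs so far
    trace = []         # human-readable chain of (sub-question, answer) steps
    while True:
        # The generator conditions on the original question and all
        # sub-questions and answers produced so far.
        model_name, sub_question = next_question_generator(state)
        if sub_question is None:        # generator emits a stop signal
            break
        sub_answer = submodels[model_name](sub_question)
        trace.append((model_name, sub_question, sub_answer))
        state += f" {sub_question} {sub_answer}"
    final_answer = trace[-1][-1] if trace else None
    return final_answer, trace          # answer plus interpretable reasoning chain
```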

TMNs learn to produce these decompositions using only distant supervision; no explicit human annotation is needed. The team also observed that, given appropriate hints, the capabilities of existing sub-models can be captured by training a text-to-text system to generate the questions in each sub-model's training dataset.

To generate questions, the team trained a BART model, a denoising autoencoder for pretraining sequence-to-sequence models, and fed it preferred vocabulary as hints. The sub-task question models generated the sub-questions and identified the appropriate sub-models. Through this, the team was able to extract likely intermediate answers for each step of the complex question. The resulting sub-questions are in the language of the corresponding sub-models and can now be used to train the next-question generator to repeat the whole process.
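
Such hint-conditioned sub-question generation could be set up with Hugging Face's BART roughly as sketched below; the hint encoding (the QC:/A:/H: prefixes) is an illustrative guess rather than the paper's exact format, and the pretrained checkpoint would of course need fine-tuning on hint-to-sub-question pairs before its outputs are meaningful:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def generate_subquestion(complex_question, answer_hint, vocab_hints):
    # Pack the complex question and the hints (expected answer, preferred
    # vocabulary) into a single source sequence for the seq2seq model.
    source = (f"QC: {complex_question} A: {answer_hint} "
              f"H: {' '.join(vocab_hints)}")
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# e.g. generate_subquestion(
#     "How many years after X was Y built?", "1923", ["year", "X"])
```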

Using the TMN framework, the team built MODULARQA, a modular system that explains its reasoning in natural language by decomposing complex questions into ones answerable by two sub-models: a neural factoid single-span QA model and a symbolic calculator.
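
The two sub-models can be pictured with a minimal sketch like the one below; the extractive QA checkpoint and the calculator's operation names (diff, not, if_then) are illustrative stand-ins, not MODULARQA's published components:

```python
import re
from transformers import pipeline

# Neural factoid single-span QA sub-model; any extractive QA checkpoint
# can stand in here for illustration.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def neural_qa(question, context):
    """Answer a factoid sub-question with a single span from `context`."""
    return qa(question=question, context=context)["answer"]

def symbolic_calculator(question):
    """Answer calculator-style sub-questions such as "diff(1944, 1923)"."""
    m = re.match(r"(\w+)\(([^)]*)\)", question)
    if not m:
        return None
    op = m.group(1)
    args = [float(x) for x in m.group(2).split(",")]
    if op == "diff":          # difference reasoning, e.g. gap between two years
        return abs(args[0] - args[1])
    if op == "not":           # complementation, e.g. 100 minus a percentage
        return 100 - args[0]
    if op == "if_then":       # comparison: index (1 or 2) of the larger value
        return 1 if args[0] > args[1] else 2
    return None
```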

MODULARQA was evaluated on questions from two datasets, DROP and HotpotQA, making it the first cross-dataset decomposition-based interpretable QA system. It handles multi-hop questions that can be answered using five classes of reasoning found in existing QA datasets: composition, comparison, conjunction, difference, and complementation.
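
For instance (an invented example in the spirit of DROP's difference class, not taken from the paper): a question like "How many years after the treaty was signed did the war end?" would decompose into two span-extraction sub-questions ("When was the treaty signed?" → 1923; "When did the war end?" → 1944) followed by one calculator step, diff(1944, 1923) → 21, which becomes the final answer and, together with the sub-questions, the natural-language explanation.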

MODULARQA demonstrated cross-dataset versatility, robustness, sample efficiency, and the ability to explain its reasoning in natural language. It even outperformed black-box methods by 2 percent F1 in a limited-data setting.

Comparison with previous approaches

Earlier QA systems were generally designed as a combination of distinct modules, composing the outputs of lower-level language tasks to solve higher-level ones. While a sound approach, its application has been limited to pre-determined composition structures.

The question decomposition method has been pursued before as well. However, it has a few issues, such as:

  • A few methods focused directly on training a model to produce sub-questions using question spans. This technique proved unsuitable for datasets such as DROP.
  • Many techniques generate simpler questions without capturing the required reasoning.
  • An approach where the model collects full Question Decomposition Meaning Representation (QDMR) annotations is effective. However, it may still require human intervention and may not generalize well.

In contrast, TMNs start with pre-determined models and generate decompositions in their language.

There have been many multi-hop QA models designed for HotpotQA and DROP. However, these models are often complex and focus on only one of the two datasets. They could produce post-hoc explanations only on HotpotQA, where supporting sentences are annotated; even these explanations are not faithful and have been shown to be gameable. With TMNs, however, the scientists were able to produce explanations for multiple datasets without needing such annotations, making the approach more generalizable to future datasets.

TMNs are also similar to models based on neural module networks (NMNs), which compose task-specific simple neural modules. However, the two approaches differ mainly on two grounds: NMN formulations target only one dataset and do not reuse existing QA systems, and they provide attention-based explanations, the interpretability of which is unclear.

Read the full paper here.
