Google has developed ToTTo, an open-domain table-to-text generation dataset, to address the hallucination problem.
The ToTTo dataset consists of 121,000 training examples, with 7,500 examples each for development and test. The team at Google claims that ToTTo is a suitable benchmark for research in high-precision text generation.
The Hallucination Problem
Hallucination refers to generating text that is not ‘faithful’ to the source. Generating natural language text from source content lies at the core of many NLP applications; examples include summarisation, machine translation, and data-to-text generation. However, there have been countless instances where neural systems have generated text unfaithful to the source.
In most cases, hallucination occurs due to divergence between the source and the reference. That said, hallucination has also been observed with cleaned references, meaning the system picks up on spurious correlations between different parts of the training data.
With data and models getting bigger and more complicated, hallucination induced by spurious correlations can severely limit the usefulness of neural systems in many real-world situations – a pressing concern, especially when generating text in the medical, financial, or engineering domains. In such cases, it is completely unacceptable either to ‘hallucinate’ non-existent or incorrect content or to omit information.
Assessing the faithfulness of a generated text can be challenging. However, the task becomes more tractable when the source content is in a tabular or structured format. Data in tabular form can also efficiently test a model’s capability for reasoning and numerical inference.
However, existing large-scale structured datasets are often noisy: their reference sentences cannot be fully inferred from the tabular data alone, which makes them unreliable for measuring hallucination.
To overcome this limitation, the ToTTo dataset uses a novel sentence-revision-based annotation process along with a controlled generation task to assess hallucination. The annotations are highly accurate, rendering the dataset a suitable benchmark.
The task takes as source x a Wikipedia table together with a set of highlighted cells, and the goal is to produce a single-sentence description y of those highlighted cells, which may come from a much larger table.
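As a rough illustration of what the source x looks like, the sketch below builds a linearised input from a ToTTo-style record. The field names mirror the publicly released JSON-lines format (table, highlighted cells, page and section titles), but the record contents here are illustrative:

```python
# Illustrative ToTTo-style record; field names follow the released
# JSON-lines format, contents are illustrative.
record = {
    "table_page_title": "Gabriele Becker",
    "table_section_title": "International Competitions",
    "table": [
        [{"value": "Year"}, {"value": "Competition"}, {"value": "Position"}],
        [{"value": "1992"}, {"value": "World Junior Championships"}, {"value": "10th"}],
    ],
    "highlighted_cells": [[1, 0], [1, 1], [1, 2]],  # (row, column) indices
}

def linearize(rec):
    """Flatten the titles and highlighted cell values into one source string x."""
    cells = [rec["table"][r][c]["value"] for r, c in rec["highlighted_cells"]]
    return " | ".join([rec["table_page_title"], rec["table_section_title"], *cells])

x = linearize(record)
# e.g. "Gabriele Becker | International Competitions | 1992 | ..."
```

Because only the highlighted cells (plus metadata) form the source, the generation is “controlled”: the target sentence y should be fully supported by exactly this content.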
The process involved two steps:
- First, tables collected from Wikipedia are paired with a candidate summary sentence drawn from the supporting page context. This pairing is done according to heuristics such as word overlap and hyperlinks referencing the tabular data.
- After step one, there might be phrases in the sentence that are not supported by the table. Annotators delete such phrases and also decontextualise the sentence so that it stands alone.
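The word-overlap heuristic from step one can be sketched as follows. This is a minimal illustration, not the paper’s exact rule: each candidate sentence from the page is scored by the fraction of its tokens that also appear in the table’s cells, and the best-scoring candidate is paired with the table.

```python
# Minimal sketch of a word-overlap pairing heuristic; the tokenisation
# and scoring here are illustrative, not the paper's exact procedure.
def table_tokens(table):
    """Collect the lowercased tokens appearing in any cell of the table."""
    return {tok.strip(".,").lower() for row in table for cell in row for tok in cell.split()}

def overlap_score(sentence, table):
    """Fraction of sentence tokens that also occur in the table."""
    toks = [t.strip(".,").lower() for t in sentence.split()]
    vocab = table_tokens(table)
    return sum(t in vocab for t in toks) / max(len(toks), 1)

table = [["Year", "Competition"], ["1992", "World Junior Championships"]]
candidates = [
    "She finished 10th at the 1992 World Junior Championships.",
    "The stadium was renovated in 2004.",
]
best = max(candidates, key=lambda s: overlap_score(s, table))
```

The first candidate wins because most of its content words occur in the table, which is exactly the kind of sentence worth revising into a table-supported description.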
In one example from the paper, the table contents, metadata such as the page and section titles, and the highlighted cells were given as input to produce the final text.
The task faces a few challenges:
- The model sometimes outputs phrases that are not entirely faithful to the source, so hallucination can still creep in.
- Due to the open-domain nature of the task, the model may struggle with rare topics. This was also demonstrated in the paper’s example concerning the capacities of IBM’s microdrives.
- Though the model seems to perform well on widely used metrics such as BLEU (bilingual evaluation understudy), such scores cannot be interpreted as a definitive measure of faithfulness.
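To see why a good BLEU score does not guarantee faithfulness, the sketch below implements a from-scratch, sentence-level BLEU (modified n-gram precision with a brevity penalty; real evaluations should use a standard implementation such as sacreBLEU). A hallucinated sentence that shares most of its n-grams with the reference can still score highly:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty. For illustration only."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((c & r).values())           # clipped n-gram matches
        total = max(sum(c.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth to avoid log(0)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "she finished 10th at the 1992 world junior championships"
hallucinated = "she won gold at the 1992 world junior championships"
score = bleu(hallucinated, ref)  # high overlap despite being unfaithful
```

The hallucinated sentence gets a respectable BLEU score even though it asserts the opposite result, which is why ToTTo pairs automatic metrics with annotations designed to measure faithfulness directly.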
Read the paper here.