Research scientists on the Cloud AI team at Google Research published a blog post noting that recent sequence models, which directly model the relationships between all words in a piece of text, have demonstrated SOTA performance on natural language tasks.
A natural approach to form document understanding tasks is to first serialise the form documents and then apply a SOTA sequence model. However, form documents often have complex layouts containing structured objects such as tables, columns, and text blocks. This variety of layout patterns makes serialisation difficult and limits the performance of strict serialisation approaches, and these structural modelling challenges in form documents have been underexplored.
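To make the serialisation problem concrete, here is a toy sketch in Python. The layout, coordinates, and column boundary are invented for illustration and do not come from the paper; it simply shows how a naive left-to-right, top-to-bottom reading order can scramble a two-column form.

# A toy illustration (assumed layout, not from the paper) of why naive
# serialisation breaks on multi-column forms: reading tokens strictly
# left-to-right, top-to-bottom interleaves the two columns.
tokens = [
    # (text, x, y) — two side-by-side key/value columns
    ("Name:", 0, 0), ("Date:", 50, 0),
    ("Alice", 0, 10), ("2022-03-01", 50, 10),
]

# Naive raster-order serialisation mixes the columns together.
raster = [t for t, x, y in sorted(tokens, key=lambda tok: (tok[2], tok[1]))]
print(raster)  # ['Name:', 'Date:', 'Alice', '2022-03-01'] — 'Name:' is separated from 'Alice'

# A column-aware order (assuming we somehow know the column boundary at x=25)
# keeps each key next to its value — but real forms rarely make the boundary obvious.
by_column = [t for t, x, y in sorted(tokens, key=lambda tok: (tok[1] >= 25, tok[2]))]
print(by_column)  # ['Name:', 'Alice', 'Date:', '2022-03-01']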
An illustration of the form document information extraction task, using an example from the FUNSD dataset. (Image: Google blog)
In the paper "FormNet: Structural Encoding Beyond Sequential Modeling in Form Document Information Extraction", research scientists Chen-Yu Lee, Chun-Liang Li and co-authors proposed FormNet, a structure-aware sequence model designed to mitigate the sub-optimal serialisation of forms for document information extraction.
They explained the process as follows: first, they designed a Rich Attention (RichAtt) mechanism that leverages the 2D spatial relationships between word tokens when computing attention weights. Then, they constructed Super-Tokens for each word by embedding representations from its neighbouring tokens through a graph convolutional network (GCN). Finally, they demonstrated that FormNet outperforms existing methods while using less pre-training data, achieving SOTA performance on the CORD, FUNSD, and Payment benchmarks.
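As a rough intuition for those two ideas, the sketch below biases ordinary dot-product attention scores with a pairwise 2D distance penalty and builds super-tokens by averaging each token with its nearest spatial neighbours. It is a minimal toy, not the authors' implementation: the tensor shapes, the linear distance penalty, and the k-nearest-neighbour graph are all simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_tokens, d = 6, 8
embeddings = rng.normal(size=(n_tokens, d))          # token embeddings
positions = rng.uniform(0, 100, size=(n_tokens, 2))  # (x, y) centres of word boxes on the page

# (1) Spatially biased attention: standard scaled dot-product scores,
# penalised by pairwise 2D distance so far-apart tokens attend less.
q = embeddings @ rng.normal(size=(d, d))
k = embeddings @ rng.normal(size=(d, d))
scores = q @ k.T / np.sqrt(d)
dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
scores -= 0.05 * dist                                # hypothetical distance penalty
scores -= scores.max(axis=-1, keepdims=True)         # numerically stable softmax
attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# (2) Super-tokens: average each token with its k nearest spatial neighbours,
# a one-layer stand-in for the graph convolution described in the paper.
k_nn = 2
neighbours = np.argsort(dist, axis=-1)[:, 1:k_nn + 1]  # skip self at index 0
super_tokens = (embeddings + embeddings[neighbours].sum(axis=1)) / (k_nn + 1)

print(attn.shape, super_tokens.shape)  # (6, 6) (6, 8)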