How Deep Learning Changed The Game For Natural Language Processing




A lot has been written about how well deep learning fits natural language understanding. In this article, we will explore why deep learning is uniquely suited to NLP and how deep learning algorithms are delivering state-of-the-art results in a slew of tasks such as named entity recognition and sentiment analysis. As research scientist Sebastian Ruder points out, word embeddings are one of the most widely known best practices in NLP.



Deep NLP — A Huge Step Forward

Today, deep learning is delivering state-of-the-art results in practically every NLP-related task. Algorithms such as word2vec and GloVe were pioneers in the field. Although they cannot themselves be considered deep learning methods — the neural network in word2vec is shallow, and GloVe implements a count-based method — the embeddings they produce are used as input data for deep learning approaches to NLP. Word embeddings are now considered a best practice in the NLP field. For example, the NLP framework spaCy natively integrates word embeddings and deep learning models for tasks such as NER and dependency parsing, allowing users to update the bundled models or use their own. A lot of headway is being made in this area as well. For example, the Facebook AI Research (FAIR) lab released fastText, pre-trained word vectors for 294 languages, which are reportedly better than GloVe or even word2vec vectors.
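The core idea behind word embeddings is that each word becomes a dense vector, and semantically related words end up close together, so similarity can be measured with cosine distance. A minimal sketch of that mechanic, using tiny hand-made toy vectors rather than real word2vec or GloVe output (real embeddings are typically 100–300 dimensions and learned from large corpora):

```python
import numpy as np

# Toy 4-dimensional "embeddings" for illustration only; real word2vec/GloVe
# vectors are learned from large corpora, not hand-crafted like these.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "man":   np.array([0.7, 0.9, 0.0, 0.1]),
    "woman": np.array([0.7, 0.1, 0.9, 0.1]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: the standard way to compare embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(word):
    """Return the vocabulary word whose vector is closest to `word`."""
    others = [(w, cosine(embeddings[word], v))
              for w, v in embeddings.items() if w != word]
    return max(others, key=lambda p: p[1])[0]

print(most_similar("king"))  # a related word, not an unrelated one like "apple"
```

In practice you would load pre-trained vectors (e.g. fastText or GloVe files) instead of defining them by hand; the similarity query works the same way.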

Another area where deep learning models are being applied with a lot of success is sentiment analysis, where neural networks compute the degree to which a text belongs to each sentiment label. Sentiment analysis mines subjective text, opinions and sentiments to understand feelings; however, one of its big challenges is the lack of labelled data in the NLP field. This is where deep learning plays a pivotal role: deep neural networks, convolutional neural networks and other DL techniques are leveraged to solve a clutch of sentiment analysis tasks such as textual analysis, product review analysis and visual analysis. According to the research paper Sentiment Analysis Using Deep Learning, networks such as recursive neural networks, convolutional neural networks and deep belief networks are used for tasks including word representation estimation, sentence classification, sentence modelling, feature representation and text generation.
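To make the CNN-for-sentence-classification idea concrete, here is a minimal NumPy sketch of the forward pass only: filters slide over windows of word vectors, a max-over-time pooling step keeps the strongest response per filter, and a logistic unit outputs a sentiment probability. All weights are random and untrained, so the score is arbitrary; this shows the architecture, not a working classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: a sentence is a sequence of word vectors (random here).
embed_dim, n_filters, width = 8, 4, 3
sentence = rng.normal(size=(10, embed_dim))      # 10 words x 8-dim embeddings
filters = rng.normal(size=(n_filters, width, embed_dim))
w_out, b_out = rng.normal(size=n_filters), 0.0

def cnn_sentiment_score(words):
    """Forward pass of a minimal text CNN: convolve each filter over word
    windows, max-pool over time, then apply a logistic output unit."""
    n = len(words) - width + 1
    # Each filter responds to every window of `width` consecutive words.
    feature_maps = np.array([
        [np.sum(f * words[i:i + width]) for i in range(n)]
        for f in filters
    ])
    pooled = feature_maps.max(axis=1)            # max-over-time pooling
    logit = pooled @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-logit))          # probability of "positive"

score = cnn_sentiment_score(sentence)
print(round(score, 3))                           # untrained, so arbitrary
```

A real system would learn the filters and output weights from labelled reviews via backpropagation; the scarcity of such labelled data is exactly the challenge the paragraph above describes.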

Here Is A List Of Tasks That Deep Learning Has Revolutionised

  • Named entity recognition — recognises entities such as people and places
  • Translation — translates a sentence from one language into another
  • Abstractive summarisation — condenses a paragraph into a summary
  • Part-of-speech tagging — assigns a part of speech to each word
  • Parsing — analyses the grammatical structure of a sentence

Relationship Between LSTM And NLP

Long Short-Term Memory (LSTM) networks have been around for a long time now. They gave researchers a practical way to train RNNs on long sequences and showed remarkable progress in the field of translation. However, they come with their own set of shortcomings, such as slow training and the need for more data. Using LSTMs in production, Google Translate has achieved huge improvements in its machine translation, Google Brain's Lukasz Kaiser mentioned in a post.
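What makes LSTMs work for long sequences is their gating: input, forget and output gates decide what the cell state keeps, discards and exposes at each step, which is how context from early in a sentence can survive to influence a translation decision much later. A minimal NumPy sketch of one LSTM step with random (unlearned) weights, showing the standard formulation rather than any production system:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: input (i), forget (f) and output (o) gates
    control what the cell state c keeps, discards and exposes."""
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gate activations in (0, 1)
    g = np.tanh(g)                                # candidate update
    c = f * c_prev + i * g                        # new cell state
    h = o * np.tanh(c)                            # new hidden state
    return h, c

# Toy dimensions: 3-dim input, 5-dim hidden state. Weights are random here;
# in a translation system they are learned from parallel corpora.
n_in, n_hid = 3, 5
W = rng.normal(size=(4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(6, n_in)):              # run over a 6-step sequence
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (5,)
```

The sequential dependence visible in the loop — each step waits for the previous hidden state — is also the source of the speed limitation mentioned above.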


Today, some of NLP’s real-life applications include improving support work in call centres, integrating the latest technology into FAQs, providing multi-lingual support and more. Other interesting use cases include automating resume search in India and analysing stock market predictions with deep learning. For example, Microsoft shared a use case of developing a model to predict the stock market performance of companies invested in by a financial services partner. The Redmond giant trained a deep learning model on text in earnings releases and other sources to drum up valuable insights for investment decision makers. One of the key challenges faced by the Microsoft team was building a predictive model that could carry out a preliminary review of financial documents more thoroughly.

How India Is Using Machines To Summarise Data

India’s well-known Dr Pushpak Bhattacharyya, director of the Indian Institute of Technology, Patna, and professor of computer science and engineering at the Indian Institute of Technology, Bombay, has collaborated with Elsevier to set up a centre for NLP and machine learning, known as the Centre of Excellence, at IIT Patna. One of the key objectives of the CoE is developing an automated support system for article reviewing, especially for journals that face a huge number of submissions. Dr Bhattacharyya was quoted by Elsevier as saying that detecting whether an article is relevant or not is a machine learning problem. He further added that the system being developed compares documents for deep semantic similarity using paragraph vectors. “We created semantic representations of the texts to check how similar they were to existing documents,” he was quoted as saying by Elsevier.
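The mechanics of that comparison — represent each document as a vector, then measure cosine similarity between documents — can be sketched as follows. This is not the CoE’s system: real paragraph vectors (doc2vec) are learned jointly with word vectors, whereas this crude stand-in just averages random toy word vectors, so the resulting scores here are arbitrary and only the pipeline shape is meaningful.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy vocabulary of random word vectors; a real system would use
# learned paragraph vectors or pre-trained embeddings instead.
words = "deep learning parses text networks train on data cooking uses fresh herbs"
vocab = {w: rng.normal(size=16) for w in words.split()}

def doc_vector(doc):
    """Crude stand-in for a paragraph vector: average of the word vectors."""
    vecs = [vocab[w] for w in doc.split() if w in vocab]
    return np.mean(vecs, axis=0)

def similarity(doc_a, doc_b):
    """Cosine similarity between two document vectors, in [-1, 1]."""
    va, vb = doc_vector(doc_a), doc_vector(doc_b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("deep learning parses text", "networks train on data"))
```

Checking a new submission against existing documents then reduces to computing its vector once and ranking the archive by this similarity score.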


