Complete Guide on Language Modelling: Unigram Using Python

Language modelling is the task of determining the probability of a sequence of words. Language models are useful in many Natural Language Processing applications such as machine translation, speech recognition, and optical character recognition. Modern language models rely on neural networks that predict a word in a sentence from the surrounding words. In this article, however, we will discuss the most classic of language models: the n-gram model.

In natural language processing, an n-gram is a contiguous sequence of n words. For example, “Python” is a unigram (n = 1), “Data Science” is a bigram (n = 2), and “Natural language processing” is a trigram (n = 3). Here our focus will be on implementing the unigram (single-word) model in Python.
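To make the definition concrete, here is a minimal sketch (not part of the project code) that slides a window of size n over a tokenised sentence; the ngrams helper below is a hypothetical name introduced only for illustration.

def ngrams(tokens, n):
    # slide a window of size n over the token list
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "natural language processing".split()
print(ngrams(tokens, 1))  # unigrams: [('natural',), ('language',), ('processing',)]
print(ngrams(tokens, 2))  # bigrams: [('natural', 'language'), ('language', 'processing')]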


Assumptions For a Unigram Model

1.  The probability of a word depends only on how often it occurs among all the words in the dataset.

2.  The probability of a word is independent of all the words before its occurrence, as the sketch below illustrates.
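As a quick illustration of these two assumptions (a minimal sketch on a toy corpus, not part of the project code), a word's unigram probability is simply its relative frequency, regardless of context:

from collections import Counter

corpus = "the cat sat on the mat".split()
counts = Counter(corpus)                             # word frequencies
total = len(corpus)                                  # total number of words
probabilities = {w: c / total for w, c in counts.items()}
print(probabilities["the"])                          # 2/6 ≈ 0.33, whatever words surround it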

Code Implementation

Import all the libraries required for this project.

import nltk
nltk.download('reuters')           # Reuters-21578 corpus
from nltk.corpus import reuters
nltk.download('punkt')             # tokeniser models used by NLTK
import numpy as np                 # used later for sampling and rounding

The Reuters dataset consists of 10,788 documents from the Reuters financial newswire service.
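If you want to inspect the corpus before modelling, NLTK's corpus reader exposes the documents and their topic labels (the printed values below are indicative):

print(len(reuters.fileids()))    # 10788 documents
print(reuters.fileids()[:2])     # document ids, e.g. ['test/14826', 'test/14828']
print(reuters.categories()[:5])  # topic labels, e.g. ['acq', 'alum', 'barley', ...]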

Store all the words of the corpus in a list and check how many there are.

words = list(reuters.words())  # flat list of every token in the corpus
len(words)                     # total number of tokens

We will start by creating a class and defining every function in it. The idea is to generate words to follow a given sentence using an n-gram model. Predicting the next word with a bigram or trigram model leads to sparsity problems: most two- or three-word sequences never occur in the corpus, so their estimated probabilities are zero. To avoid this issue we use the unigram model, as it does not depend on the previous words.
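You can see the sparsity problem directly in the corpus: every word has a non-zero unigram count, but most word pairs never occur (the example pairs below are assumptions chosen purely for illustration):

from collections import Counter

bigram_counts = Counter(zip(words, words[1:]))  # counts of adjacent word pairs
print(bigram_counts[("stock", "market")])       # a plausible pair, likely non-zero
print(bigram_counts[("market", "banana")])      # an implausible pair, almost certainly 0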

Let’s calculate the unigram probability of a sentence using the Reuters corpus.

class NGrams:
    def __init__(self, words, sentence):
        self.words = words              # corpus word list
        self.sentence = sentence        # starting sentence
        self.tokens = sentence.split()  # tokens generated so far

    def get_tokens(self):
        return self.tokens

    def add_tokens(self, value):
        # append the chosen word to the generated sequence
        self.tokens.append(value)
        return self.tokens

    def unigram_model(self):
        # sample candidate next words from the corpus; frequent words appear
        # more often in the list, so this draws from the empirical unigram
        # distribution. A larger sample (here 100) makes the relative
        # frequencies computed below meaningful.
        self.next_words = np.random.choice(self.words, size=100)
        return self.next_words

Next, we calculate the probability of each distinct word appearing in the sample drawn by the unigram model above and select the top three words based on those probabilities.

    # (continuing the NGrams class from above)
    def get_top_3_next_words(self, next_words):
        # count how often each word occurs in the sample
        next_words_dict = dict()
        for word in next_words:
            if word not in next_words_dict:
                next_words_dict[word] = 1
            else:
                next_words_dict[word] += 1
        # convert counts into probabilities within the sample
        for i, j in next_words_dict.items():
            next_words_dict[i] = np.round(j / len(next_words), 2)
        # return the three most probable words
        return sorted(next_words_dict.items(), key=lambda k: (k[1], k[0]), reverse=True)[:3]

    def model_selection(self):
        top_words = self.get_top_3_next_words(self.unigram_model())
        print("unigram-model")
        return top_words

start_sent = "Today the stock market"  # example starting sentence; any seed text works
model = NGrams(words=words, sentence=start_sent)
for i in range(5):
    values = model.model_selection()
    print(values)
    value = input()        # type one of the suggested words
    model.add_tokens(value)

The model prints the top three words. We can select one of them to continue the starting sentence, and we repeat the process five times. The result is displayed below.

print(model.get_tokens())

The final step is to join the tokens produced by the unigram model into a sentence.

print(" ".join(model.get_tokens()))

Final Thoughts

In this article, we have discussed the concept of the unigram model in Natural Language Processing. As a next step, you can explore bigram and trigram models to generate words after a sentence, since they condition on the preceding words. I hope this article is useful to you.
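As a hypothetical starting point for that extension, a bigram model would suggest next words conditioned on the previous word rather than sampling unconditionally:

from collections import Counter, defaultdict

follow = defaultdict(Counter)
for w1, w2 in zip(words, words[1:]):
    follow[w1][w2] += 1                 # count which words follow w1

print(follow["stock"].most_common(3))   # top words observed after "stock"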
