How Google Might Help You Find The Next Billion Dollar Idea

Ram Sagar

“Over 20 million active patents and applications exist worldwide with each patent containing an average of ~10,000 words.”

What’s one thing common to many billion-dollar and even trillion-dollar companies like Google and Amazon? A unique, patented idea. Be it Google’s PageRank algorithm or Amazon’s 1-Click ordering, both were first presented as patent applications at the United States Patent and Trademark Office.

Anyone who has read a patent application knows how tedious it can be. Now imagine going through millions of such applications. The chances of missing a novel idea are high, and a thorough review is time-consuming: Google’s own PageRank patent was granted three years after the application was filed, and some patents take even longer. There is no certainty. So Google wants to assist patent offices with its BERT algorithm. BERT, or Bidirectional Encoder Representations from Transformers, is a landmark achievement in the history of natural language processing. For Google especially, the algorithm helped revolutionise its search by making it smarter and faster. Now the search giant wants to leverage the same technology, with some tweaking, to skim through millions of patent applications.

According to Google, patents represent an ideal domain for applying the BERT algorithm, from both a technical-fit and a business-value perspective. Patents form a large and unique text corpus: over 20 million active patents and applications exist worldwide, each containing an average of ~10,000 words, with distinctive word distributions and peculiar syntactic structures. Google has trained the BERT algorithm exclusively on patent text to generate contextual synonyms for patents and help identify the novelty of an idea.
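Under the hood, this kind of synonym generation leans on BERT’s masked-language-model objective: mask a term in a claim or abstract and let the model propose context-appropriate replacements. Here is a minimal sketch of the idea using the open-source Hugging Face transformers library; the general-purpose bert-base-uncased checkpoint and the sample claim are illustrative stand-ins, not Google’s patent-trained model.

    # A minimal sketch of contextual synonym generation via masked language
    # modelling. Assumptions: the Hugging Face `transformers` library and the
    # general-purpose `bert-base-uncased` checkpoint, not Google's patent model.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # Mask the term whose contextual synonyms we want; BERT proposes
    # replacements that fit the surrounding patent-style sentence.
    claim = "A fastening [MASK] configured to secure the housing to the frame."
    for prediction in fill_mask(claim, top_k=5):
        print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")

Each candidate comes with a probability score, which is what later makes it possible to aggregate predictions across many documents.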

How Does BERT Help

Since its release, the BERT algorithm has demonstrated outstanding performance across a number of domains, including search, chatbots, sentiment analysis, and autocomplete. Researchers have suggested that BERT is best suited to domains where large amounts of training text are available and the text is complex, with ambiguous usages that can be highly context-specific.



Patent transactions, including licensing, litigation, and acquisitions, total in the hundreds of billions of dollars per year, and patent offices around the world spend upwards of $10 billion per year on operational costs, so even small efficiency gains could have large monetary benefits.

Identifying the right terms is especially difficult in patent searching since a patent, by definition, must contain a novel idea, and novel ideas are often described in novel ways. This means a specific term may be used in a way it has never been used before.


“AI is becoming ingrained in the daily life of Americans, facilitated by its rapid integration into products such as voice recognition systems in mobile phones, robotic appliances, satellites, search engines, and so much more,” said Andrei Iancu of the USPTO.
[Figure: contextual predictions on sample abstracts. Source: Google]

In its white paper demonstrating the use of BERT for patent offices, Google illustrates how the algorithm understands context even when keywords are synonymous. As shown above, BERT weighs the same term differently in the sample abstracts: the algorithm realises that the traditional relationship between ‘eye’ and ‘needle’ does not hold given the broader context.
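To see the same effect in miniature, the snippet below masks a slot in two sentences; the model’s fillers shift with the surrounding context, much like the ‘eye’/‘needle’ example above. Both sentences are invented for illustration, and bert-base-uncased again stands in for the patent-trained model.

    # The same masked slot, two different contexts: predictions shift with the
    # surrounding text. Sentences are invented; `bert-base-uncased` is a stand-in.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    sewing = "The thread is passed through the [MASK] of the needle before stitching."
    medical = "The surgeon inserts the hollow [MASK] into the vein to draw blood."

    for text in (sewing, medical):
        top = fill_mask(text, top_k=3)
        print(text)
        print("  ->", ", ".join(p["token_str"].strip() for p in top))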

To train the model, Google used a BERT-Large implementation built on the core open-sourced Python libraries, trained on an 8×8 TPU slice on GCP. Of the over 2,000 terms that the USPTO provides as example synonyms, ~200 exist in multiple CPC codes. These cross-code synonyms provide a good mechanism to test how well the BERT algorithm is able to generate different synonyms for the same term in different contexts.
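For readers who want a feel for the training setup, the sketch below shows masked-LM pre-training on a patent corpus with the same open-source Python libraries. The corpus file, batch size, and step count are placeholder assumptions; Google’s actual run used BERT-Large on an 8×8 TPU slice, which this sketch does not reproduce.

    # A hedged sketch of masked-LM pre-training on patent text. The corpus file
    # `patent_corpus.txt` (one abstract or claim per line) and all hyperparameters
    # here are illustrative placeholders, not Google's published configuration.
    from datasets import load_dataset
    from transformers import (
        BertForMaskedLM,
        BertTokenizerFast,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    tokenizer = BertTokenizerFast.from_pretrained("bert-large-uncased")
    model = BertForMaskedLM.from_pretrained("bert-large-uncased")

    dataset = load_dataset("text", data_files={"train": "patent_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    # Standard BERT objective: randomly mask 15% of tokens and predict them.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="bert-for-patents",
                               per_device_train_batch_size=8,
                               max_steps=10_000),
        data_collator=collator,
        train_dataset=tokenized,
    )
    trainer.train()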

According to Google, this is how BERT approaches patent applications (a schematic sketch of the whole loop follows the list):

1. Select a CPC code.

2. Select a term.

3. Query N patent documents containing the term within the given CPC code.

4. Generate predictions for each term for each document.

5. Calculate aggregate metrics reflecting the highest predicted terms on average across all N documents.
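Put together, the five steps amount to a simple aggregation loop. The sketch below is schematic: the toy documents and the synonyms_for_term helper are hypothetical, and only the fill-mask call reflects a real library interface; a real system would query a patent index by CPC code.

    # A schematic sketch of the five-step procedure. The document list and the
    # helper function are hypothetical stand-ins for a patent search backend.
    from collections import Counter
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    def synonyms_for_term(term, docs, top_k=5):
        """Steps 3-5: mask `term` in each of N documents, collect BERT's
        predictions, and aggregate the scores across all documents."""
        votes = Counter()
        for doc in docs:                                  # step 3: N docs in one CPC code
            masked = doc.replace(term, "[MASK]", 1)       # mask one occurrence of the term
            if "[MASK]" not in masked:
                continue
            for pred in fill_mask(masked, top_k=top_k):   # step 4: per-document predictions
                votes[pred["token_str"].strip()] += pred["score"]
        return votes.most_common(top_k)                   # step 5: highest terms on average

    # Steps 1-2 (choosing a CPC code and a term) shown with toy inputs:
    docs = [
        "The thread passes through the eye of the needle during stitching.",
        "A guide aligns the eye of the needle with the bobbin thread.",
    ]
    print(synonyms_for_term("eye", docs))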

This is the first instance of Google extending the capability of BERT to a new domain: patent search. The algorithm is trained exclusively on patent text, focusing primarily on the use case of synonym generation while opening the door to further innovation in classification and autocomplete.

Know more here.
