How Google Might Help You Find The Next Billion Dollar Idea

“Over 20 million active patents and applications exist worldwide with each patent containing an average of ~10,000 words.”

What do many billion-dollar, even trillion-dollar, companies like Google and Amazon have in common? A unique, patented idea. Be it Google’s PageRank algorithm or Amazon’s 1-Click ordering, both were first filed as patents with the United States Patent and Trademark Office.

Anyone who has read a patent application knows how tedious it can be. Now imagine going through millions of such applications. The chances of missing a novel idea are high, and a thorough review can be painfully slow. Google’s PageRank patent, for instance, was granted three years after the application; other patents can take even longer, and there is no certainty. So Google wants to assist patent offices with its BERT algorithm. BERT, or Bidirectional Encoder Representations from Transformers, is a landmark achievement in the history of natural language processing. For Google itself, the algorithm helped revolutionise Search by making it smarter and faster. Now the search giant wants to leverage the same technology, with some tweaking, to skim through millions of patent applications.

According to Google, patents represent an ideal domain for applying the BERT algorithm, from both a technical-fit and a business-value perspective. Patents form a large and unique text corpus: over 20 million active patents and applications exist worldwide, with each patent containing an average of ~10,000 words, distinct word distributions, and peculiar syntactic structures. Google has trained a BERT algorithm exclusively on patent text to generate contextual synonyms for patents and help identify the novelty of an idea.

How Does BERT Help?

Since its release, the BERT algorithm has demonstrated outstanding performance across a number of domains, including search, chatbots, sentiment analysis, and autocomplete. Researchers have suggested that the BERT algorithm is best suited for domains where large amounts of training text are available, and the text is complex with ambiguous uses that can be highly context-specific.

Patent transactions, including licensing, litigation, and acquisitions, total in the hundreds of billions of dollars per year, and patent offices around the world spend upwards of $10 billion per year in operational costs, so even small efficiency gains could have large monetary benefits.

Identifying the right terms is especially difficult for patent searching since a patent, by definition, must contain a novel idea, and novel ideas are often described in novel ways. This means that a specific term may be used in a way that it has never been used before. 


“AI is becoming ingrained in the daily life of Americans, facilitated by its rapid integration into products such as voice recognition systems in mobile phones, robotic appliances, satellites, search engines, and so much more.”

– Andrei Iancu, USPTO

[Figure: BERT weighing the same term differently across sample patent abstracts. Source: Google]

In their white paper demonstrating the use of BERT for patent offices, Google illustrated how the algorithm can understand context despite the synonymous nature of keywords or tokenized words. As shown above, BERT weighs the same context term differently across the sample abstracts: the algorithm recognises that the traditional relationship between ‘eye’ and ‘needle’ does not hold given the broader context.
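To make this concrete, here is a minimal sketch of that kind of contextual prediction using the Hugging Face fill-mask pipeline. Note the assumptions: bert-base-uncased stands in for Google’s patent-trained checkpoint (which the article does not name as a public model), and the two sentences are illustrative, not the abstracts from the white paper.

```python
# Minimal sketch: the same masked slot receives different predictions
# depending on surrounding context. "bert-base-uncased" is a stand-in
# for Google's patent-trained BERT checkpoint (an assumption, not the
# model described in the white paper).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

contexts = [
    "The thread is passed through the [MASK] of the needle.",   # sewing sense
    "The lens focuses light onto the retina of the [MASK].",    # anatomical sense
]

for sentence in contexts:
    print(sentence)
    for pred in fill_mask(sentence, top_k=3):
        print(f"  {pred['token_str']}: {pred['score']:.3f}")
```

The same mask token draws on the surrounding words, so the top predictions differ between the two sentences, which is the behaviour the eye/needle example above relies on.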

To train the model, Google used a BERT-Large training implementation built on the core open-sourced Python libraries, trained on an 8×8 TPU slice on GCP. Of the over 2,000 terms that the USPTO provides as example synonyms, ~200 appear in multiple CPC (Cooperative Patent Classification) codes. Synonyms that span multiple CPC codes provide a good mechanism to test how well the BERT algorithm can generate different synonyms for the same term in different contexts.

According to Google, this is how BERT approaches patent applications:

1. Select a CPC code.

2. Select a term.

3. Query N patent documents containing the term within the given CPC code.

4. Generate predictions for each term for each document.

5. Calculate aggregate metrics reflecting the highest predicted terms on average across all N documents.
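The steps above can be sketched in code. In this hedged sketch of steps 3–5, bert-base-uncased again stands in for the patent-trained model, a hard-coded list mocks the CPC-filtered document query, and summing prediction scores across documents is one plausible reading of “aggregate metrics”, not Google’s published implementation:

```python
# Hedged sketch of steps 3-5: mask the target term in each of N documents,
# collect the model's predictions, and aggregate scores across documents.
# The document list mocks step 3's CPC-filtered query, and score-summing
# is an assumed aggregation, not Google's published metric.
from collections import Counter
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

term = "needle"
documents = [  # stand-in for N patent abstracts from one CPC code
    "The needle penetrates the fabric to form a lock stitch.",
    "A hollow needle delivers the drug beneath the skin.",
    "The needle is guided by the machine along the seam.",
]

scores = Counter()
for doc in documents:
    # Step 4: replace the term with the mask token and predict substitutes.
    masked = doc.replace(term, fill_mask.tokenizer.mask_token, 1)
    for pred in fill_mask(masked, top_k=10):
        scores[pred["token_str"]] += pred["score"]

# Step 5: the highest predicted terms on average across all N documents.
for token, total in scores.most_common(5):
    print(f"{token}\t{total / len(documents):.3f}")
```

Running this over documents from different CPC codes would surface different top terms for the same word, which is how the ~200 cross-CPC synonyms serve as a test set.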

This is the first instance of Google extending BERT to a new domain: patent search. The algorithm is trained exclusively on patent text, focusing primarily on the use case of synonym generation while also opening the door to further innovation in classification and autocomplete.

Read Google’s white paper to know more.

