The year 2018 has seen a meteoric rise in the number of papers released in the field of AI. There were also numerous tools and techniques open sourced by the giants to carry the baton of AI research forward. Google’s BERT, for instance, introduced new benchmarks for natural language understanding.
Google has pioneered algorithmic research over the past decade, and its discoveries and innovations are usually open sourced, making way for further innovation.
However, Google came under the scanner when it began building an extensive portfolio of deep learning patents, including patents on essential techniques such as DQN, dropout and now batch normalization.
These techniques are widely used across many popular machine learning models. And since these models are deployed for speech, vision and other applications, the machine learning community has started to feel the heat, surrounded by a certain sense of paranoia about the rise of a machine learning monopoly.
Late last year, the United States Patent and Trademark Office (USPTO) issued the first non-final rejection for the patent application with the application number 15-009647 (batch normalization layers) filed by Google Inc.
Batch Normalization Overview
In the original BatchNorm paper, the authors, Sergey Ioffe and Christian Szegedy of Google, introduced this method to address a phenomenon called internal covariate shift.
This occurs because the distribution of each layer’s inputs changes during training as the parameters of the previous layers change. It slows down training by requiring lower learning rates and careful parameter initialisation, making models harder to train.
The introduction of batch normalized networks helped achieve state-of-the-art accuracies with 14 times fewer training steps.
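To make the idea concrete, here is a minimal NumPy sketch of the training-time batch normalization forward pass described in the paper: each feature is normalized using the mini-batch mean and variance, then rescaled by learnable parameters gamma and beta (the variable names and the toy data below are illustrative, not from the patent or the paper).

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch per feature, then scale and shift."""
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # roughly zero mean, unit variance
    return gamma * x_hat + beta              # learnable scale and shift

# Toy example: a batch of 4 samples with 3 features, deliberately off-centre
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=10.0, size=(4, 3))
out = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
```

With gamma fixed at 1 and beta at 0, the output simply has (approximately) zero mean and unit variance per feature; during training these two parameters are learned so the network can undo the normalization where that helps.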
This reduction in training cost led to many improvements within the machine learning community.
For instance, Coconet is a fairly straightforward CNN with batch normalization. This gives the Collaborative Convolutional Network (CoCoNet) more power to encode the fine-grained nature of the data with limited samples in an end-to-end fashion. The applications of CoCoNet were demonstrated recently when it was used to churn out Bach-like melodies with a few clicks.
Benevolence Or A Subtle Attempt At Monopoly?
A patentable invention should meet five significant criteria:
- The invention must consist of patentable subject matter.
- The invention must be capable of industrial or other useful application.
- The invention must be novel.
- The invention must involve an inventive step, i.e. it must not be obvious.
- The patent paperwork must meet the requirements of the patent office.
After the issuance of the non-final rejection, Google made some amendments to its initial claims, and its patent for batch normalization is now being reconsidered. The European Patent Office, however, has already granted Google a patent for the same batch normalization layers application.
The USPTO’s first office action can be characterised as “the complete package” of rejection reasons related to software patents, citing 14 instances of prior art (4 patents and 3 technical papers were explicitly cited in the office action) as the basis for denying the registrability of Google’s patent.
Registering key patents related to deep learning, despite Google’s promotion of open source software, has become a topic of debate on many online forums. People started picking sides immediately: while one group reproved Google’s moves, others held that Google’s claim was for the greater good.
The way the claims were amended after the initial rejection seemed sly to many. And since batch normalization has gained popularity with the masses, some developers are left disconcerted, unsure whether to focus on advancement or to fear infringement.
There is no denying that Google has played a key role in the democratisation of machine learning. There is also a real risk in leaving findings out in the open only for some patent troll to claim them later, which can lead to exploitation. However, this also raises the question of whether we should rely on the benevolence of tech giants.
“Every evolving intelligence will eventually encounter certain very special ideas – e.g., about arithmetic, causal reasoning and economics–because these particular ideas are very much simpler than other ideas with similar uses,” said the AI maverick Marvin Minsky four decades ago.
Claims to the ownership of ideas have been common at least since the time of Newton and Leibniz. Innovations are usually the result of an accumulation of ingenious ideas, and two people can stumble upon the same idea at the same time, each oblivious of the other’s work.
When an invention is tossed between rightful ownership and democratisation, it delays the very progress the invention promised in the first place.