Batch Norm Patent Granted To Google: Is AI Ownership The Gold Rush Of 21st Century?



The machine learning community has witnessed a surge in releases of frameworks, libraries and software. Tech pioneers like Google, Amazon and Microsoft have been vocal about their good intentions behind open-sourcing their technology. However, there has been a growing trend of these tech giants claiming ownership of their innovations.

According to a National Bureau of Economic Research study, there were 145 US patent filings mentioning machine learning in 2010, compared with 594 in 2016. Google alone filed 99 patents related to machine learning and neural networks in 2016.




After initially rejecting the application, the US patent office recently granted Google ownership of batch normalisation, with the patent set to expire in 2038.

Here is a timeline of the journey of the BatchNorm patent application:

  • 2015-01-28 Priority to US201562108984P
  • 2016-01-28 Application filed by Google LLC
  • 2016-07-28 Publication of US20160217368A1
  • 2019-09-17 Publication of US10417562B2
  • 2019-09-17 Application granted
  • 2019-10-15 Application status is Active
  • 2038-01-01 Adjusted expiration

Google does patent many of its products, but there is a reason why the batch normalisation patent gets all the attention.

Why BatchNorm Touched A Nerve

In the original BatchNorm paper, the authors, Sergey Ioffe and Christian Szegedy of Google, introduced the method to address a phenomenon called internal covariate shift.

Internal covariate shift occurs because the distribution of each layer’s inputs changes during training as the parameters of the previous layers change. This slows training by requiring lower learning rates and careful parameter initialisation, making models harder to train.

The introduction of batch normalised networks helped achieve state-of-the-art accuracies with 14 times fewer training steps. 
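The transform itself is simple: normalise each feature over the mini-batch, then apply a learnable scale and shift. A minimal NumPy sketch of the training-time computation (the function name, shapes and toy data are illustrative, not taken from the paper or the patent):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalise each feature over the mini-batch dimension (axis 0),
    # then apply a learnable scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy mini-batch: 4 examples, 3 features, deliberately off-centre.
np.random.seed(0)
x = np.random.randn(4, 3) * 5 + 10
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
# Per-feature mean of y is ~0 and per-feature variance is ~1,
# regardless of the scale and offset of the raw inputs.
```

Because the statistics are computed over the batch axis, the output distribution of each layer stays stable as earlier layers change, which is what allows the higher learning rates reported in the paper.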

This reduction in training cost led to the emergence of many improvements within the machine learning community.

Despite advocating for the democratisation of technology, companies like Google are rushing to claim ownership of advanced approaches in domains like AI. However, before jumping on the bandwagon of critics, one should remember that machine learning techniques like batch normalisation are a product of Google.

The researchers working at Google used the resources available at Google to develop new frameworks to optimise machine learning applications. 

So, claiming ownership does not sound unreasonable, as it has been classified as defensive patenting. One argument made in Google’s favour is that it is better for the creators to hold the patent than for a patent troll to claim ownership and create hurdles through lawsuits.


However, a large part of the machine learning community remains sceptical of this consistent claim of ownership. Some members of popular machine learning forums, such as Reddit, likened the whole event to a loaded gun that could backfire.

Working Around This Trilemma

There is no denying that Google AI was a pioneer even before machine learning became a worldwide phenomenon. Its technology is being used to power lives and promote growth.

The use of BatchNorm has literally become the norm in many existing applications, which is also the reason for the rising scepticism among practitioners. For instance, if a computer vision startup has used the technique to train one of its neural networks, should it be checking its mailbox for a probable lawsuit forever?

That said, ever since the news broke that Google had patented the batch normalisation technique, alternative approaches have gained traction. Here are a few methods that look promising:

  • Fixup Initialisation: Fixed-update initialisation (Fixup) aims to solve the exploding and vanishing gradient problem at the beginning of training by properly rescaling a standard initialisation.
  • Weight Normalisation: Weight normalisation accelerates the convergence of stochastic gradient descent optimisation by re-parameterising the weight vectors in a neural network.
  • Generalised Hamming Network (GHN): Researchers at Nokia Technologies illustrated that the celebrated batch normalisation (BN) technique actually adapts the “normalised” bias so that it approximates the rightful bias induced by the generalised Hamming distance.
  • Group Normalisation (GN): GN divides the channels into groups and computes the mean and variance within each group for normalisation. Its computation is independent of batch size, and its accuracy is stable across a wide range of batch sizes.
  • Switchable Normalisation (SN): Switchable Normalisation (SN) learns to select different normalisers for different normalisation layers of a deep neural network.
  • Attentive Normalisation (AN): Attentive Normalisation (AN) is a novel and lightweight integration of feature normalisation and channel-wise feature attention.
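To make the contrast with BatchNorm concrete, here is a minimal NumPy sketch of the Group Normalisation idea described above: statistics are computed per sample over groups of channels, so the batch size never enters the computation. Names and shapes are illustrative, not from the GN paper.

```python
import numpy as np

def group_norm(x, num_groups, gamma, beta, eps=1e-5):
    # x has shape (N, C, H, W); C must be divisible by num_groups.
    # Mean and variance are computed per sample, per channel group,
    # so the result does not depend on the batch size N.
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g_hat = (g - mean) / np.sqrt(var + eps)
    x_hat = g_hat.reshape(n, c, h, w)
    # Per-channel learnable scale and shift, as in BatchNorm.
    return gamma.reshape(1, c, 1, 1) * x_hat + beta.reshape(1, c, 1, 1)

np.random.seed(0)
x = np.random.randn(2, 8, 4, 4)  # 2 samples, 8 channels, 4x4 maps
y = group_norm(x, num_groups=4, gamma=np.ones(8), beta=np.zeros(8))
# Each (sample, group) slice of y has ~zero mean and ~unit variance.
```

This batch-size independence is exactly what makes GN (and similar per-sample schemes) attractive to teams looking for a drop-in alternative to the patented technique.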

These are only a few of the many approaches that have surfaced in recent times, and we can safely assume that more will emerge from this space.

