Geoffrey Hinton: When genius runs in the family


Widely considered one of the godfathers of deep learning, Geoffrey Hinton was born in 1947 in Wimbledon, UK. Hinton's family has produced generations of overachieving scientists, much like Hinton himself. Hinton has recalled his mother telling him as a child, "Be an academic, or be a failure." Hinton's great-great-grandfather was George Boole, the founder of Boolean logic and algebra. Boolean logic would later become the mathematical foundation of modern computers.

George's wife, Mary Everest Boole, was, like him, a largely self-taught mathematician and a teacher of algebra and logic. After marrying George, Mary began contributing to and advising him on his work, which was unheard of for a woman in the mid-1800s. She even edited George's book, 'The Laws of Thought', which propounded his theory of Boolean logic. Mary's uncle, George Everest, was a geographer and Surveyor General of India, after whom Mount Everest is named.


Geoffrey's great-uncle Sebastian Hinton was the inventor of the jungle gym. Charles Howard Hinton, Boole's son-in-law and Geoffrey's great-grandfather, was a mathematician and fantasy writer who popularised the idea of the fourth dimension and coined the term 'tesseract', a notion that continues to pop up in comic books and Marvel films even now.

One of Geoffrey's cousins, Joan Hinton, was a nuclear physicist and one of the few women to work on the Manhattan Project, the US-led research and development effort during the Second World War that produced the first nuclear weapons. A great-aunt, Ethel Lilian Voynich, was an author and musician, best known for writing 'The Gadfly.' Geoffrey's father, Howard Hinton, was an entomologist who studied Mexican beetles and was elected a Fellow of the Royal Society. Geoffrey has said that the pressure he felt from his family eventually drove him to quit academia for a time. His father would often tell him, "Work really hard and maybe when you're twice as old as me, you'll be half as good."

Contributions to deep learning

In 2018, Geoffrey Hinton, along with Yoshua Bengio and Yann LeCun, won the Turing Award for their foundational contributions to deep learning. Hinton has been working on deep learning for a long time. In 2012, Hinton and some of his students published a seminal paper titled 'Deep Neural Networks for Acoustic Modeling in Speech Recognition', which showed that deep neural networks outperformed older models like Hidden Markov Models (HMMs) and Gaussian Mixture Models (GMMs) at identifying speech patterns. The paper brought together four research groups, from tech giants Microsoft, IBM and Google along with the University of Toronto, and was significant as one of the first demonstrations that neural networks could deliver state-of-the-art results. The year turned out to be a breakthrough for AI.

But Hinton's history with neural networks goes back much further. When Frank Rosenblatt proposed the perceptron, the world's first neural network machine, in 1958, it could solve only a limited class of functions. The perceptron created a divide between its supporters and proponents of the traditional symbolic approach championed by Marvin Minsky. While Rosenblatt was overly optimistic, the perceptron could learn only linearly separable functions and was unable to solve XOR or XNOR. In 1969, Minsky, together with Seymour Papert, published a book titled 'Perceptrons: An Introduction to Computational Geometry' pointing out the limitations of the perceptron. The book contributed substantially to what came to be known as the first 'AI Winter.'
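The limitation Minsky and Papert highlighted is easy to demonstrate. Below is a minimal sketch (NumPy only; the function name and hyperparameters are illustrative, not from any source above) of the classic perceptron learning rule: it converges on AND, which is linearly separable, but never stops making mistakes on XOR.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic perceptron learning rule: w += lr * (target - pred) * x."""
    w = np.zeros(X.shape[1] + 1)              # weights plus a bias term
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = int(np.dot(w[1:], xi) + w[0] > 0)
            update = lr * (target - pred)
            w[1:] += update * xi
            w[0] += update
            errors += int(update != 0)
        if errors == 0:                       # converged: data is linearly separable
            return w, True
    return w, False                           # never converged

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = np.array([0, 0, 0, 1])                # linearly separable
xor_y = np.array([0, 1, 1, 0])                # not linearly separable

_, and_ok = train_perceptron(X, and_y)
_, xor_ok = train_perceptron(X, xor_y)
print(and_ok, xor_ok)                         # AND converges, XOR does not
```

No setting of a single weight vector can draw a line separating XOR's positive points from its negative ones, so the update rule cycles forever, which is exactly the geometric argument of 'Perceptrons'.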

Undeterred, Hinton continued to work on neural nets and in 1986 published a paper titled 'Learning representations by back-propagating errors' with David Rumelhart and Ronald Williams. By then, Hinton had finished his PhD on neural networks. The paper showed that networks with hidden layers, trained by backpropagation, could learn useful internal representations and solve problems such as XOR that defeat the single-layer perceptron. (The universal approximation theorem, proved separately a few years later, established that such networks can approximate essentially any function.) The algorithm takes the error of the network's loss function and propagates it backwards, layer by layer, to update the parameters in the lower layers.
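The idea can be sketched in a few lines. The following is a hedged, minimal illustration of backpropagation (not the paper's code; layer sizes, the learning rate and the squared-error loss are arbitrary choices) training a one-hidden-layer network on XOR, the very task the perceptron cannot learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the task that defeats a single-layer perceptron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 sigmoid units (layer sizes are illustrative).
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
initial_loss = ((out - y) ** 2).mean()

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error from the loss through each layer.
    d_out = (out - y) * out * (1 - out)   # gradient at the output (squared loss)
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient pushed back to the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

final_loss = ((out - y) ** 2).mean()
print(initial_loss, final_loss)           # the loss falls as training proceeds
```

With enough iterations the thresholded outputs typically recover the XOR pattern [0, 1, 1, 0]; the key point is that the error signal flowing backwards lets the hidden layer learn a representation no single-layer machine could.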

In 1987, Hinton accepted an offer from the Canadian Institute for Advanced Research (CIFAR) and moved to the University of Toronto, where he would later lead CIFAR's Learning in Machines & Brains program. Gradually, Hinton began teaching and working with others who believed in deep learning, like Ilya Sutskever, who later co-founded OpenAI. By 2009, the focus on deep learning was starting to pay off. Finally, in 2012, Hinton and two of his students, Alex Krizhevsky and Ilya Sutskever, won the annual ImageNet competition with a deep learning based computer vision system that could classify images into 1,000 object categories. In 2013, Hinton's company DNNresearch Inc. was acquired by Google, after which he began working part-time at Google.
