Why did AI pioneer Marvin Minsky oppose neural networks?

Yes, I am the devil
Marvin Minsky

The Dartmouth Summer Research Project on Artificial Intelligence in 1956 is widely considered the founding moment of artificial intelligence as a field: John McCarthy, Marvin Minsky, Claude Shannon, Ray Solomonoff and others attended the eight-week workshop held in New Hampshire.

On the fiftieth anniversary of the conference, the founding fathers of AI returned to Dartmouth. When Minsky took the stage, Salk Institute professor Terry Sejnowski told him that some AI researchers viewed him as the devil for stalling the progress of neural networks. “Are you the devil?” Sejnowski asked. Minsky brushed him off and went on to explain the limitations of neural networks, pointing out that they had not yet delivered the goods. But Sejnowski was persistent and asked again: “Are you the devil?” A miffed Minsky retorted: “Yes, I am.”


Minsky: The AI pioneer

Turing Award winner Marvin Minsky made major contributions to cognitive psychology, symbolic mathematics, artificial intelligence, robot manipulation, and computer vision. As a graduate student in 1951, Minsky built SNARC, considered by many to be the first neural network learning machine, using over 3,000 vacuum tubes and a few components from a B-24 bomber.

Minsky’s work on artificial intelligence using symbol manipulation in the 1950s and 1960s was critical in advancing symbolic AI. His 1960 paper, “Steps Toward Artificial Intelligence,” put symbol manipulation at the centre of understanding intelligence.

Minsky and John McCarthy established the MIT Artificial Intelligence Laboratory in the early 1960s. The lab became famous for its scientific endeavours in modelling human perception and intelligence and its efforts to build practical robots. Minsky himself built mechanical hands with tactile sensors and an arm with fourteen degrees of freedom.

Neural network snaps

In July 1958, the US Office of Naval Research demoed the Perceptron: an IBM 704 was fed a series of punch cards, and after 50 trials, the five-ton computer learned to distinguish cards marked on the left from cards marked on the right.

“Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction. Yet we are about to witness the birth of such a machine – a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control,” said Frank Rosenblatt, the creator of Perceptron. 

The Perceptron had the potential to launch a thousand neural networks, but it ran into a stumbling block: Marvin Minsky.

Minsky cast aspersions on the utility of the Perceptron. He claimed neural networks could not handle anything beyond what Rosenblatt had demonstrated, and he lit into Rosenblatt every chance he got.

In 1966, a slew of researchers, including Marvin Minsky, assembled at the Hilton Hotel in San Juan to review advances in pattern recognition. John Munson, a scientist at SRI, the Northern California lab, had leveraged Rosenblatt’s ideas to build a neural network that could read handwritten characters, and he spoke about his research at the conference. In the open forum after the lecture, Minsky stood up and asked: “How can an intelligent young man like you waste your time with something like this?”

Minsky and Seymour Papert dismantled Rosenblatt’s ideas in their 1969 book, Perceptrons: An Introduction to Computational Geometry. The book presented mathematical proofs of the Perceptron’s limitations, including a single-layer perceptron’s inability to compute the ‘exclusive-or’ (XOR) function. It sounded the death knell for the Perceptron and put neural networks on the back burner.
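To see the limitation Minsky and Papert formalised, consider the exclusive-or function: no single linear threshold unit can separate its inputs correctly, while a network with one hidden layer represents it easily. The minimal Python sketch below is illustrative only, with hand-picked weights that are not from the book, and checks both claims.

# No single-layer perceptron computes XOR; one hidden layer is enough.
import itertools

def step(z):
    # Linear threshold unit: fire if the weighted sum is positive.
    return 1 if z > 0 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
XOR = [0, 1, 1, 0]

# Brute-force a grid of weights for a single threshold unit: none of them fit XOR,
# illustrating the proof that the required inequalities are contradictory.
grid = [i / 2 for i in range(-8, 9)]
single_layer_fits = any(
    all(step(w1 * x1 + w2 * x2 + b) == t for (x1, x2), t in zip(X, XOR))
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print("single-layer perceptron fits XOR:", single_layer_fits)   # False

# Two hidden units (OR and NAND) feeding an AND unit compute XOR exactly.
def two_layer(x1, x2):
    h_or = step(x1 + x2 - 0.5)
    h_nand = step(1.5 - x1 - x2)
    return step(h_or + h_nand - 1.5)

print("two-layer network fits XOR:", all(two_layer(*x) == t for x, t in zip(X, XOR)))  # True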

In the mid-1980s, Geoff Hinton, then a young professor at Carnegie Mellon University, and his collaborators built more complex networks of artificial neurons, addressing some of the concerns Minsky had raised: inserting a hidden layer of neurons allowed the networks to learn more complicated functions. The revival did not last, however, and neural networks fell out of favour by the late 1990s. In 2006, Hinton, building on years of work by researchers such as Yann LeCun, reintroduced the approach under a new name: deep learning.
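The key change in the 1980s was not just adding a hidden layer but having a procedure, backpropagation, to train it. The rough Python sketch below is not Hinton’s code; the network size, learning rate and seed are arbitrary choices, and it simply shows a tiny hidden-layer network learning XOR, the function a single-layer perceptron cannot represent.

# Train a 2-4-1 sigmoid network on XOR with plain online backpropagation.
import math
import random

random.seed(0)

X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
Y = [0.0, 1.0, 1.0, 0.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

HIDDEN = 4
lr = 0.5
# Hidden layer: HIDDEN units, each with two input weights and a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
# Output unit: one weight per hidden unit, plus a bias at the end.
w_o = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]

for _ in range(20000):
    for (x1, x2), t in zip(X, Y):
        # Forward pass.
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        y = sigmoid(sum(w_o[i] * h[i] for i in range(HIDDEN)) + w_o[-1])
        # Backward pass: gradient of squared error through the sigmoids.
        d_y = (y - t) * y * (1 - y)
        for i in range(HIDDEN):
            d_h = d_y * w_o[i] * h[i] * (1 - h[i])
            w_o[i] -= lr * d_y * h[i]
            w_h[i][0] -= lr * d_h * x1
            w_h[i][1] -= lr * d_h * x2
            w_h[i][2] -= lr * d_h
        w_o[-1] -= lr * d_y

# After training, the outputs typically land close to the targets.
for (x1, x2), t in zip(X, Y):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
    y = sigmoid(sum(w_o[i] * h[i] for i in range(HIDDEN)) + w_o[-1])
    print(f"XOR({int(x1)}, {int(x2)}) -> {y:.2f} (target {int(t)})")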

Years later, Andrew Ng pitched a project to Google founder Larry Page. He said deep learning would change the game in image recognition, natural language understanding and machine translation, and push machines towards true intelligence. Ironically, he called it Project Marvin.

Reference: Genius Makers by Cade Metz

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
