The number of AI experts grows by the day, and every one of them has a different view on the rapid development of AI and its consequences. But there is one expert who is undoubtedly the OG when it comes to teaching AI, and to calling out its problems: Andrew Ng.
The Google Brain co-founder and the brains behind machine learning education at Stanford University, Ng recently called out the big-tech narrative of an AI doomsday, arguing that it is really about controlling open source models, the biggest threat to those companies.
In a recent interview, Ng said, “The bad idea that AI could make us extinct” is being merged with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements. When you put those two bad ideas together, you get the massively, colossally dumb idea of policy proposals that try to require licensing of AI.”
This is also visible in Biden's executive order on AI safeguards, which imposes restrictions on AI companies that will mostly hit AI startups. The regulations also require special licensing and permissions for developers using models outside of the US. "It would crush innovation," Ng added.
Apart from teaching at Stanford, co-founding Google Brain, and being the chief scientist at Baidu AI group, Andrew Ng is also the co-founder of Coursera, along with Daphne Koller. In fact, his group was one of the first at Stanford to advocate for the use of GPUs in deep learning.
The AI guru
If you are following recent AI developments, you are sure to have stumbled upon Andrew Ng's generative AI courses. And not just one: the professor has been launching a new generative AI course almost every single week, helping people land the AI job they want. He also launched a $175 million AI Fund in 2018.
According to DeepLearning.AI, Andrew Ng has taught more students than anyone else on the planet, and that without needing a university to do it.
Ever since he started teaching these courses, people on X and Hacker News alike have been praising Andrew Ng as the only person to listen to when it comes to AI. "Based and Ng-pilled," says one post, while another reads, "He's a great professor. Hard to take questions elegantly when your class size is like 700 but he manages it."
Interestingly, Sam Altman, the CEO of OpenAI, was once one of Andrew Ng's interns at Stanford. But lately, the views of the disciple and the teacher have diverged when it comes to regulating AI. This is mostly because one benefits from teaching people about AI, while the other is trying to build his business in the field and stay on top of it.
“He interned with me,” said Ng in the interview, “I don’t want to talk about him specifically because I can’t read his mind, but… I feel like there are many large companies that would find it convenient to not have to compete with open-sourced LLMs.”
Thoughtful regulation is the answer
It is not that Andrew Ng opposes regulation altogether. Unlike Altman and other big-tech founders who signed the statement that "mitigating the risk of extinction from AI should be a global priority," Ng believes that AI should be thoughtfully regulated.
“I don’t think no regulation is the right answer, but with the direction regulation is headed in a lot of countries, I think we’d be better off with no regulation than what we’re getting,” Ng said. He agrees that AI has caused harm in one way or another, such as accidents involving self-driving cars and stock market crashes, but argues that regulation should be thoughtful.
Unsurprisingly, another open source champion, Yann LeCun, the head of Meta AI, agrees with Andrew Ng.
But even though LeCun, one of the Godfathers of AI, might be on the side of thoughtful regulation and open source AI, his counterparts Geoffrey Hinton and Yoshua Bengio have been on a spree of handing ammunition to big tech's lobbying for regulation. It is clear that not all AI experts think alike.
While the debate over the existential risks of AI goes on, one thing is for sure: we can learn from Andrew Ng's courses how to build AI. Then it is up to us to figure out how to build a responsible model, or what some ethicists call aligned AI. Thoughtful regulation is the answer.
Meanwhile, Andrew Ng has also said that AI has an Instagram problem: "I'm here to say: Judge your projects according to your standard, and don't let the shiny objects make you doubt the worth of your work!"