In a conversation with the media, Jerome Pesenti, Vice President of AI at Facebook, said that AI research is going to “hit a wall” when asked about the need for continuous advancement in high-performing computational processors. Pesenti further stated that in many cases we already have.
When one of the bigwigs of a social media behemoth makes such a statement, it makes everyone wonder. Are we really going to hit a wall? Is this all we can achieve in the AI landscape?
The answer is no. Technology keeps evolving, and when it hits a roadblock, researchers find ways to work around it. This trend will continue in the coming years.
Over the years, there have been many breakthroughs that people did not envision. Let us look at a few things that were once major pain points in data science and AI but seem possible today.
Since Unimate, a commercial robot developed in 1954, robots have become a part of our ecosystem, but for decades they were used only for automation without human intelligence: their behaviour was hardcoded to achieve the desired results. However, a recent advancement at OpenAI, which uses the automatic domain randomisation technique, enables robots to make their own decisions even in environments they were never trained in.
Until recently, researchers and developers could derive reliable outcomes from ML models only in the environments the models were trained in. OpenAI’s result was therefore seen as a step towards artificial general intelligence (AGI). Although the application is still limited to one use case, and most AI advancement remains pattern matching, the silver lining from OpenAI’s innovation cannot be ignored.
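To make the idea concrete, here is a minimal sketch of automatic domain randomisation as described above: the training loop samples randomised environment parameters each episode, and widens the sampling ranges whenever the agent performs well enough. The parameter names, ranges, and `train_episode` callback are illustrative assumptions, not OpenAI’s actual implementation.

```python
import random

def sample_environment(ranges):
    """Draw one randomised environment configuration from the current ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def expand(ranges, step=0.05):
    """Widen every parameter range symmetrically by `step` on each side."""
    return {name: (lo - step, hi + step) for name, (lo, hi) in ranges.items()}

def adr_loop(train_episode, episodes=100, success_threshold=0.8):
    # Start with deliberately narrow ranges (hypothetical parameters).
    ranges = {"friction": (0.9, 1.1), "object_mass": (0.45, 0.55)}
    for _ in range(episodes):
        env = sample_environment(ranges)
        score = train_episode(env)        # user-supplied training step
        if score >= success_threshold:    # agent copes, so make the task harder
            ranges = expand(ranges)
    return ranges
```

Because the ranges only grow when the policy already succeeds, the agent is continually exposed to slightly harder variations of the environment, which is what lets the trained policy transfer to conditions it never saw exactly during training.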
A lack of high computational power is the prime impediment in the AI landscape, and efficient processors are key to advancing towards AGI. However, as Moore’s Law starts to fail, people have become hypercritical about the prospects of true AI. Although the number of transistors on microchips is no longer doubling as before, the quantum computing breakthroughs by Google have brought new hope of processing colossal amounts of data within seconds.
Besides, various blue-chip companies such as Google, Amazon, Nvidia, and Intel are working towards superior AI processors that empower developers and researchers to build AI-based applications. Additionally, companies like Nvidia are offering computation over the cloud with the help of 5G connectivity. With numerous such initiatives, we can expect further innovation in the computational power needed for AGI.
One of the pressing issues that has slowed the landscape is the rising cost of AI research. Pesenti stressed that companies are spending seven figures on R&D in AI, and that this may climb to eight or nine figures in the future, which could become unaffordable for many.
Yes, the increasing cost will have a negative impact on the development of AI, but sectors like healthcare and space technology have already faced similar cost pressures, and each is finding its own ways to work around them.
In fact, for technology projects, companies can leverage the open-source community to innovate. Such initiatives have allowed companies to bring down their R&D costs while still making new advancements. Open-source participation, however, is still small, with only a few developers contributing to projects. Researchers and programmers need further encouragement to participate in open-source projects in order to decrease R&D costs.
Although common sense is still missing in AI, there is ample evidence that firms across the world have made breakthroughs in the AI landscape. The aforementioned developments are unprecedented and can further motivate researchers and developers to innovate. Clearly, numerous challenges such as bias and explainability still need to be mitigated to achieve true AI, but it may not be beyond our reach.