OpenAI recently released an AGI roadmap, revealing its short-term and long-term plans, along with steps to mitigate risks for the betterment of humanity.
OpenAI said that successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history. Further, it said that success is far from guaranteed, and the stakes will hopefully unite all of us.
OpenAI, in its blog post, said that AGI would come with a serious risk of misuse, drastic accidents, and societal disruption. Although the team is hopeful, given how great it believes the upside of AGI to be, it does not think society can stop its development forever; it thus becomes imperative to get it right.
The team said that AGI has the potential to give everyone incredible new capabilities: “We can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.”
OpenAI also hinted at the flaws of its existing methodologies in achieving AGI. The team said: “Of course, our current progress could hit a wall, but we can articulate the principles we care about most—i.e., maximising the good and minimising the bad; access to, and governance of AGI to be widely and fairly shared; and navigating massive risks.”
“We acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimise ‘one shot to get it right’ scenarios,” said OpenAI.
- OpenAI looks to become increasingly cautious with the deployment of its models in the real world, closely monitoring and restraining both users and use cases.
- OpenAI plans to work towards greater alignment and controllability in its models. Customisation of the models is likely to play a key role in future OpenAI products and services.
- The company looks to align its incentives with good outcomes through a nonprofit that governs it and a cap on the returns its shareholders can earn.
- OpenAI said that ‘the first AGI will be just a point along the continuum of intelligence’.
- It also said that AI that accelerates science is a special case OpenAI will focus on, because AGI may be able to speed up its own progress and thus expand its capabilities exponentially.
There have been numerous debates about a future where AI becomes smart beyond humans’ capacity to understand or control it. While most of these conversations are fuelled by fiction, the topic has gained significant momentum in the last few years after a number of science and industry notables voiced their opinions on the possibility.