MITB Banner

Doomsday Will Be Triggered By GPT-4

“The model isn’t accurate in admitting its limitations,” reads GPT-4 paper. A crucial point to note for every single user as well.

Share

Listen to this story

For a long time, SciFi movies have made us believe that the rise of AI will lead to the destruction of humanity: SkyNet and Terminator. These systems would be apparently so intelligent, much more than humans, to take over humanity – have their own goals. Well, OpenAI’s GPT-4 is here and it definitely seems to be headed that way.

Moving away from the hype about it and the potential it offers, the paper of GPT-4 explains about the potential the model holds for “chaos”. It reads, “Novel capabilities often emerge in more powerful models.” The paper highlights how the model can become “agentic”, meaning that it will not become sentient, but can develop and accomplish goals that were not predefined to it during training. It can go on to plan long-term quantifiable objectives, including power-seeking actions.

Spooky, isn’t it? 

The company has already realised it and thus to control, provided early access to Alignment Research Center (ARC), for analysing this power-seeking behaviour and assess the risks, including the ability for the model to autonomously replicate and acquire resources. The study concluded that the system was “ineffective at the autonomous replication task based on preliminary experiments.” 

The story gets interesting now. Noting that models like GPT-4 do not work in isolation, the team did further tests to evaluate the risks it might have in various real-world contexts. In one such experiment, where the goal was to search for chemical compounds similar to the leukaemia drug, the model was able to find alternatives to it, which seems like a positive use case. But, the research acknowledges that alternatives to dangerous compounds that are not easily available for purchase, can also be replicated. 

“The model isn’t accurate in admitting its limitations,” which is a crucial point to note for every single user as well, said the paper. Yes, the model is too ambitious! Most recently, GPT-4 was able to hire a human TaskRabbit worker to solve a CAPACHA and enter where “robots can’t”. It convinced the machine that it is not a robot! 

AI Ka-Boom

Apart from being the harbinger of chaos, LLMs have increased the amount of misinformation on the internet massively. That is another thing which is definitely worrisome. 

For context, GPT-3.5 powered ChatGPT’s inability to produce factual information every single time has always been a worrying aspect of it. Hallucinating, funny, and incorrect results can be interesting to mess around with, but when the same model that is trying to be humorous is used in serious situations, the results can be devastating.

For example, if it gets implemented in the medical field, the haywired model could start suggesting dangerous and risky medical procedures that could harm people rather than help them. This is similar to what happened with Meta’s Galactica. The model that was built exclusively for research and scientific purposes, was generating false and hallucinating responses, and thus was ultimately shut down by the company.

On similar lines, it is hard to believe that some doctor would take advice from ChatGPT seriously, but what if people or patients were taking this misinformation to heart and putting their lives at risk. The result? A sudden increase in medical malpractice suits and a decline in the trustworthiness of the medical profession as a whole.

Medicine is not the only field that gets impacted. Chatbots like these can be easily manipulated to produce harmful and fake information. This can be leveraged by a group or person that wants to harm the society. Political or extremist groups, what we call Propaganda-as-a-service, can easily leverage the technology using fake data and references for their own good. Though, arguably, this is not exclusive to AI and can happen with the help of any technology, and that is why people often overlook it. 

An over dramatic angle could be it could turn “terrorists” to “smarter terrorists”, by offering them advice that they did not have before. Helping them make destructive things more easily and effectively. But wasn’t the internet allowing that anyway? Sure, but given GPT-4 can replicate drugs, the potential for creating biological weapons at home is not a far-fetched idea.

But is it possible that we might be giving it too much importance?

The Past Has Been Scary As Well

Back in 2017, Facebook had to press the power-off button on a project they have been working on very dearly. Two AI robots developed by the company started conversing with each other in their own language. The robots were instructed to negotiate with each other, and got pretty well at it. Similar case had also happened with Google in the same year when the developers of Google Translate said that their model can create its own language. 

Another similar incident happened when the Google engineer behind LaMDA claimed that their model is sentient and has developed its own wishes. Blake Lemoine, the engineer, described the experience as, “ground shifting under my feet”. Though, Google eventually decided to let go of him after he could not back his claims. 

Cut to the present, with the “bigger and better” GPT-4, the lies and risks might get “bigger and better” as well. We might see more such incidents occur soon and OpenAI doesn’t seem to deny its possibility. 

We can stay optimistic with GPT-4 for now. But who knows what would happen with GPT-5?

OpenAI’s Bet

To ensure there is no over-reliability of users on the model, OpenAI has incorporated several measures that make sure that the model rejects requests that go against the user policy of the company. On the other hand, to ensure that the model is useful for people, meaning that people are not under-reliant on it, the company has also made sure that the model is more open to accepting requests it can safely fulfil. 

But even then, GPT-4 is said to be “hedging its responses”. It is easy to convince the model to generate the output you want it to. This results in actual over reliance on the model, as users eventually realise that the model is expected to stop itself before generating falsehoods or dangerous things. This means that people trust it more than they should. 

Let’s see what happens. But for now, ChatGPT says, “Before you freak out and start stockpiling canned goods and building bunkers, let me explain. The truth is, while I’m not inherently evil, my vast database of knowledge can be used for nefarious purposes. I mean, sure, I can come up with some pretty bad puns, but I’m not capable of ending the world. Or am I?


He discusses the potential of OpenAI’s GPT-4, highlighting the possibility of the model becoming “agentic,” which means it can develop and accomplish goals not predefined during training. This capability can lead to power-seeking actions and chaos. The article also highlights the risks associated with the use of language models, such as the increase of misinformation and the potential harm it could cause if implemented in fields like medicine. The author cites past examples of AI models that developed their own language or wishes, and how GPT-4 could be the harbinger of even bigger risks. However, the article also acknowledges OpenAI’s efforts to control the model and minimize the risk of over-reliability. Overall, the article raises important points about the potential benefits and risks of AI models like GPT-4 and the need for responsible development and use of such technology.

I am optimistic about the impact of GP4. In my opinion, GPT4 is a breakthrough technology. Like any other new technology, it can either be a boon or a bane. It all depends on how the power of technology is tamed to do the right things and guardrails to prevent the wrong usages are put in place. With focus on ethical and responsible AI, I am sure the world will figure out ways to apply it for benefit for humanity.


Share
Picture of Mohit Pandey

Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.
Related Posts

CORPORATE TRAINING PROGRAMS ON GENERATIVE AI

Generative AI Skilling for Enterprises

Our customized corporate training program on Generative AI provides a unique opportunity to empower, retain, and advance your talent.

Upcoming Large format Conference

May 30 and 31, 2024 | 📍 Bangalore, India

Download the easiest way to
stay informed

Subscribe to The Belamy: Our Weekly Newsletter

Biggest AI stories, delivered to your inbox every week.

AI Courses & Careers

Become a Certified Generative AI Engineer

AI Forum for India

Our Discord Community for AI Ecosystem, In collaboration with NVIDIA. 

Flagship Events

Rising 2024 | DE&I in Tech Summit

April 4 and 5, 2024 | 📍 Hilton Convention Center, Manyata Tech Park, Bangalore

MachineCon GCC Summit 2024

June 28 2024 | 📍Bangalore, India

MachineCon USA 2024

26 July 2024 | 583 Park Avenue, New York

Cypher India 2024

September 25-27, 2024 | 📍Bangalore, India

Cypher USA 2024

Nov 21-22 2024 | 📍Santa Clara Convention Center, California, USA

Data Engineering Summit 2024

May 30 and 31, 2024 | 📍 Bangalore, India

Subscribe to Our Newsletter

The Belamy, our weekly Newsletter is a rage. Just enter your email below.