OpenAI Publishes Yet Another Lame Paper

Published in collaboration with OpenResearch and the University of Pennsylvania, the paper tries to qualify GPT technology as a general-purpose technology
OpenAI has been garnering ‘praise’ over Twitter for the small mercy it showed the world by not publishing a 100-page paper for GPT-4. To compensate, the company has come up with yet another paper, justifying its intent in a little less than 35 pages. 

Sam Altman calls GPT the “greatest technology humanity has yet developed”, but at the same time acknowledges the risks it poses. We are already seeing the effects it has on various industries and now we need to look at how it “reshapes society”.

In recent weeks, OpenAI has been quite vocal about the consequences of its technology. Apart from the 98-page paper of mundane announcements about GPT-4, which researchers have been calling “useless” for further research, the “non-profit” research organisation released another paper — GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. Published in collaboration with OpenResearch and the University of Pennsylvania, the paper tries to qualify GPT technology as a general-purpose technology.


GPT Stands for General-Purpose Technology?

A general-purpose technology is any technology that fulfils three core criteria — improvement over time, pervasiveness throughout the economy, and the ability to spawn complementary innovations. There is no doubt the technology has been improving over time, so the first criterion is met. OpenAI argues in its paper that the other two criteria are also on the way. “There is early qualitative evidence that adoption and use of LLMs is becoming increasingly widespread,” reads the paper. 

Well, that holds true. There is no doubting the ChatGPT or GPT phenomenon. Ever since its release, competition between companies to replicate it, or build something better, has only intensified. Companies are hiring specifically for roles related to generative AI and LLMs. These jobs are being created by the emerging hype in the field. But what about the existing jobs that might be replaced by this technology? 

The verdict — “up to 49% of workers could have half or more of their tasks exposed to LLMs”. The professions most exposed to generative AI are blockchain engineers and designers, according to the paper. Exposure here means the degree to which GPT-powered systems could reduce the time humans need to complete a task. 

On the other hand, in a recent interview, Altman said that ChatGPT should be viewed like a tool, and not a replacement for any job. He explained that humanity has proven the point time and again by constantly adapting to different types of technology. “Human creativity is limitless, and we find new jobs. We find new things to do,” said Altman. 

The paper concludes that as this technology evolves, it could make a significant impact on the economy, which may, in turn, have societal and regulatory consequences. For now, though, it is evidently making human labour more efficient. 

Selling It Hard

The most fascinating thing is that GPT is now being sold as a general-purpose technology. It’s ironic for a company that calls the technology a “potential for chaos” to be selling it to everyone as a tool. Altman said OpenAI has put up guardrails on the technology, but others might not do the same. He added, “Society, I think, has very limited time to figure out how to react to that, and how to regulate and handle that.”

Nonetheless, OpenAI is not making it easier for others to do so either. It is creating a black box within a system. Even though the company acknowledged the lack of transparency in the GPT-4 paper, it is passing the buck to policymakers and stakeholders. Meanwhile, OpenAI will continue building on a technology it believes can be potentially harmful, while the responsibility to control it lies elsewhere. 

OpenAI believes that the initial phase of augmentation is still underway and will eventually lead to automation. For jobs, this means there could be some initial precariousness, followed eventually by full automation. 

When it comes to fixing the technology, Altman said in the same interview that OpenAI’s technology needs contributions from both regulators and society to deter the potential negative consequences it might have for humanity. 

Mohit Pandey
Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.
