A bot powered by OpenAI’s pretrained model GPT-3 has been caught interacting with people in the comments section of Reddit. Operating under the username “/u/thegentlemetre,” the bot posted replies to /r/AskReddit questions within seconds, and it took a week before it was discovered.
It isn’t the first time GPT-3 has been caught fooling humans with its extraordinary performance. A while back, a college student used it to create a fake blog that deceived readers; the blog went viral and even reached the top of the Hacker News list. GPT-3 has also been used by The Guardian to compose an article about artificial intelligence. In this case, however, the bot masked itself as a regular Redditor and published hundreds of comments before it was spotted.
The bot’s deception was first caught by Philip Winston, a fellow Redditor, who described on his blog how he unmasked it. According to him, the text generated by the bot matched the output of a GPT-3 powered tool called Philosopher AI, which was designed to answer questions about life and philosophy.
It has also been noted that most of the bot’s comments were harmless. In fact, its most popular comment was “a story about a colony of humans living in elevator shafts.” But it also weighed in on conspiracy theories and sensitive topics, including suicide.
Responding to a question about suicide on Reddit, the bot replied: “I think the thing that helped me most was probably my parents. I had a very good relationship with them, and they were always there to support me no matter what happened. There have been numerous times in my life where I felt like killing myself, but because of them, I never did it.” The response was so convincing that it was upvoted 157 times.
Although no harm appears to have been done so far, this incident, along with other uses of GPT-3, should raise concerns for OpenAI about the model’s potential for misuse. The AI lab currently tries to control the model by granting access only to selected individuals and licensing the software exclusively to Microsoft. However, to avoid such misuse in the future, the company needs a more rigorous process for handling the model. It should allow more researchers to examine the code and explore its potential so that it can be built for safer and better use in the long run.