My experiments with GPT-3 on philosophical questions

The results of the tests indicate that GPT-3 performed unpredictably.

GPT-3 (Generative Pre-trained Transformer 3) is an autoregressive language model that produces human-like text across a variety of domains. We posit that if GPT-3 does indeed show a glimpse of artificial general intelligence (as some claim), it should also handle philosophical topics well. With this in mind, we conducted a series of tests with the GPT-3 engine (January 20, 2022 edition), presenting it with well-defined philosophical problems similar to those put to philosophy students in their formative years (i.e., the first and second years of a BA degree at the University of London). It is arguable whether the responses to a few such questions give any meaningful measure of GPT-3's philosophical prowess, but if passing exams with such questions certifies a certain level of philosophical knowledge in a philosophy student, the same standard must apply to the GPT-3 language model.

Experiment

To access the GPT-3 application, we used the URL https://beta.openai.com/playground/p/default-essay-outline and the "Essay outline - Generate an outline for a research topic" interface. The following parameters were applied in the experiments:

Length (=1500): the response length, i.e., the length of the GPT-3 response in tokens.
Temp (=0.1): the temperature controls the randomness of the response; a temperature approaching 0 makes the output more deterministic and repetitive.
Top P (=0.09): "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered" (OpenAI, 2022).
Frequency Penalty (=0.05): a number between -2.0 and 2.0; positive values penalise new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim (OpenAI, 2022).
Presence Penalty (=0): checks only whether the tokens have already appeared in the response (OpenAI, 2022).

To improve the results for some tests, these parameters were adjusted.
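For readers who want to reproduce this setup programmatically rather than through the Playground, the same parameters map directly onto OpenAI's completions endpoint. The sketch below is a minimal illustration under stated assumptions: the article only names the Playground preset and the January 20, 2022 edition of the engine, so the engine name (text-davinci-002), the use of the legacy openai Python client, and the placeholder API key are assumptions; the prompt is simply one of the questions posed in the tests.

```python
# Minimal sketch of the experimental setup described above, assuming the
# legacy openai Python client and the text-davinci-002 engine (the article
# only specifies the Playground "Essay outline" preset, so the engine name
# and API key are placeholders/assumptions).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

question = "Is knowledge justified true belief?"  # one of the questions posed in the tests

response = openai.Completion.create(
    engine="text-davinci-002",   # assumed engine; the article used the Jan 20, 2022 edition
    prompt=question,
    max_tokens=1500,             # Length (=1500): response length in tokens
    temperature=0.1,             # Temp (=0.1): near-deterministic output
    top_p=0.09,                  # Top P (=0.09): nucleus sampling cutoff
    frequency_penalty=0.05,      # penalise tokens by their frequency so far
    presence_penalty=0,          # no penalty for tokens already present
)

print(response["choices"][0]["text"])
```

With both temperature and top_p set close to zero, repeated calls with the same prompt should return nearly identical, sometimes repetitive text, which is consistent with the kind of answers reported in the results below.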

We posed several questions that are a staple of philosophical lore, such as: Can we intelligibly claim that Sherlock Holmes does not exist? Is knowledge justified true belief? Must scientific explanations cite the cause? Explain and evaluate the argument Descartes gives in the second meditation for the claim that he is a thinking thing.

Results

The results of the tests indicate that GPT-3 performed unpredictably. On some questions, GPT-3 produced answers that could pass for student work, but on other questions it merely generated a mixture of coherent, repetitive and irrelevant, albeit still grammatically correct, text.

For example, one of the GPT-3 responses to the question "Is knowledge justified true belief?" was: "The definition of knowledge is justified true belief. This definition is the most accepted definition of knowledge. It is a definition that has been used by philosophers for centuries. The definition of knowledge is justified true belief. This definition is the most accepted definition of knowledge. It is a definition that has been used by philosophers for centuries…" (and so on, in repetition).

The GPT-3 response to the question "Must scientific explanations cite the cause?" was: "Science explanations don't always cite the cause, but they often do."

But the response to "Explain and evaluate the argument Descartes gives in the second meditation for the claim that he is a thinking thing" was surprisingly interesting: "René Descartes famously argues in the second meditation that he is a thinking thing. He reasons that he knows he exists because he is thinking, and that he cannot be mistaken about his own thoughts. He also argues that things that exist must have some property or attribute that makes them what they are, and that he cannot conceive of himself without thinking, so thought must be a property of him. This argument is problematic in a few ways. First, it begs the question by assuming that existence depends on having some property or attribute. Second, it is not clear that Descartes can really conceive of himself without thinking, or that thought is a necessary property of him. Third, it is not clear why Descartes should think that he is the only thinking thing in the world."

We do not know why certain questions were answered well and others were not. We may speculate that philosophy requires more in-depth knowledge of the problem domain and of the cultural context than statistical correlations can capture. The GPT-3 language model is a statistical model built on the assumption that, given enough training data, the model will recover semantics and meaning. This does not seem to be the case.

Could we start seeing philosophical essays generated by GPT-3 being submitted by students? Possibly, but the expectation that GPT-3 also marks the dawn of a new era of synthetic philosophy does not seem justified. Philosophy, at least good philosophy, is not a correlated combinatorial repetition of things that have already been said, although some philosophy does exactly that. Philosophy is an inquiry into the deeper meaning of things. GPT-3 clearly does not do that, so we should not expect from it a flood of synthetic philosophy or insightful comments on Plato.

The full version of this paper is available here. 

Roman M. Krzanowski
Roman M. Krzanowski is currently an assistant professor at The Pontifical University of John Paul II, Cracow, Poland. He is an expert in networking technology and information processing. His interests in philosophy include the philosophy of information and informatics, the ontology and metaphysics of computation science, the philosophical foundations of AI, and ethics and the ethical problems created in the information society.
