How language models perfected plagiarism to an art


Neural network language models (LMs) are capable of producing grammatical and coherent text. But the originality of the text such models churn out is suspect.

So, are these LMs simply “stochastic parrots” regurgitating text, or have they actually learned how to produce intricate structures that support sophisticated generalisation? 

Why is novelty important? 

The novelty of a generated text tells us how different it is from the training set. Studying the novelty of LMs is important for two main reasons. First, models are supposed to learn the training distribution, not just memorise the training set. Second, models that simply copy the training data are more likely to expose sensitive information or echo hate speech.


In a recent paper, researchers at Johns Hopkins University, New York University, Microsoft Research, and Facebook AI Research proposed a method to measure the novelty of the text generated by LMs. The study looked into how well LMs repurpose language in novel ways.

Are language models plagiarising training data? 

To evaluate the novelty of generated text, the researchers introduced a suite of analyses (called RAVEN) that covers both the sequential and syntactic structure of the text. They then applied these analyses to an LSTM, a Transformer, Transformer-XL, and all four sizes of GPT-2.


According to their findings, all of these models demonstrated novelty across every aspect of structure examined. They generated novel n-grams, morphological combinations, and syntactic structures: 74% of the sentences Transformer-XL generated had a syntactic structure not found in any training sentence, and GPT-2 coined original words (including novel inflections and derivations).

That said, for smaller n-grams, the models are still less novel than the baseline (the degree of duplication found in human-generated text). There is also occasional evidence of large-scale copying: GPT-2, for instance, sometimes duplicates training passages of more than 1,000 words.
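The core of this kind of n-gram novelty analysis can be sketched in a few lines. This is an illustrative toy version of the idea (counting generated n-grams that never occur in the training text), not the paper's actual RAVEN code:

```python
def ngram_novelty(train_tokens, gen_tokens, n):
    """Fraction of n-grams in the generated text that never occur in training."""
    train_set = {tuple(train_tokens[i:i + n])
                 for i in range(len(train_tokens) - n + 1)}
    gen_ngrams = [tuple(gen_tokens[i:i + n])
                  for i in range(len(gen_tokens) - n + 1)]
    if not gen_ngrams:
        return 0.0
    novel = sum(1 for g in gen_ngrams if g not in train_set)
    return novel / len(gen_ngrams)

train = "the cat sat on the mat".split()
generated = "the cat ran on the mat".split()
print(ngram_novelty(train, generated, 2))  # 2 of 5 bigrams are novel -> 0.4
```

Running the same function for increasing n shows the pattern the researchers report: short n-grams are mostly copied from training, while long n-grams are mostly novel.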

All things considered, it's safe to say neural language models do not simply plagiarise the training data; they also use constructive processes to combine familiar parts in novel ways.

Threat to academic integrity?

Neural language models are so good at generating novel text that it has become difficult for statistical and traditional ML solutions to detect machine-obfuscated plagiarism.

AI writing assistants like OpenAI's GPT-3 are alarmingly simple to use. Type in a headline and a few sentences on the topic, and GPT-3 will begin filling in the details. The model produces plausible content and endless output and, most importantly, lets you interact with the "robot writer" to correct errors.

This capability stems from the ever-increasing size of training data. For context, the entirety of Wikipedia (more than 6 million articles and 3.9 billion words) makes up only 0.6% of the input for GPT-3.
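A quick back-of-the-envelope check of the scale this implies, using only the 3.9-billion-word and 0.6% figures quoted above:

```python
wikipedia_words = 3.9e9   # ~3.9 billion words in English Wikipedia
wikipedia_share = 0.006   # Wikipedia is ~0.6% of GPT-3's input

# Implied total size of the training corpus, in words
implied_corpus_words = wikipedia_words / wikipedia_share
print(f"{implied_corpus_words:.2e}")  # 6.50e+11, i.e. hundreds of billions of words
```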

Studies show a shocking number of students use online paraphrasing tools such as SpinBot and SpinnerChief to disguise plagiarised text. Such tools use AI to alter text (such as by replacing words with their synonyms) to give the work a semblance of originality. 

The use of neural language models for paraphrasing is a recent trend, and so far there isn't enough accumulated data to train plagiarism detection systems (PDS) on. Today, most institutions employ text-matching software to counteract plagiarism. These tools are effective at identifying duplicated text but struggle to detect paraphrases, translations, and other artful forms of plagiarism.

Plagiarism detection systems

Plagiarism detection technology draws on lexical, syntactic, semantic, and cross-lingual text analysis. Some methods focus on non-textual features, such as academic citations, images, and mathematical content, to uncover plagiarism. Meanwhile, most research on detecting AI-aided paraphrasing concentrates on quantifying how similar two sentences are to each other.
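As a toy illustration of why surface-level matching struggles with paraphrases, here is a basic word-overlap (Jaccard) similarity score. This is a deliberately simple sketch of lexical matching, not the method of any specific detection system:

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-overlap similarity: |intersection| / |union| of the two word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "the cat sat on the mat"
verbatim = "the cat sat on the mat"
paraphrase = "the feline rested upon the rug"

print(jaccard_similarity(original, verbatim))    # 1.0: verbatim copy is caught
print(jaccard_similarity(original, paraphrase))  # ~0.11: same meaning, low score
```

A verbatim copy scores a perfect match, while a synonym-swapped paraphrase scores near zero despite preserving the meaning, which is exactly the gap that semantic and cross-lingual methods try to close.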

According to a paper published by the University of Wuppertal in 2021, obtaining additional training data is the best solution to improve detection of machine-paraphrased text. 


Srishti Mukherjee
Drowned in reading sci-fi, fantasy, and classics in equal measure; Srishti carries her bond with literature head-on into the world of science and tech, learning and writing about the fascinating possibilities in the fields of artificial intelligence and machine learning. Making hyperrealistic paintings of her dog Pickle and going through succession memes are her ideas of fun.
