Last week, the Russell Group of universities, which includes the likes of Oxford and Cambridge, updated its policy on the use of AI in education. Harvard, an Ivy League university, is going further and ‘hiring’ an AI chatbot similar to ChatGPT to teach its flagship programme, Computer Science 50 (CS50).
Bangalore University, RV University, and several other institutions in the city, on the other hand, have banned their students from using AI. Professors are left to formulate rules on how to use AI ethically, and right now there are no clear answers to that question.
Tools that check whether a given piece of content was generated by AI or is a student’s original work also hallucinate; there is no reliable way of differentiating between the two. Turnitin, an anti-plagiarism software that has updated its capabilities to detect AI-generated work, is being used extensively by professors.
Professors run the risk of accusing a student of using ChatGPT to write what is in fact original work, while also being unable to tell how much of an assignment is original or how the student used generative AI. Since detection is not always reliable, teachers are resorting to oral exams and MCQs, and schools and universities in Singapore and the UAE are even experimenting with automated marking systems. These alternatives, however, are time-consuming.
The alternative to incorporating AI in universities is banning it. But how does one police that? It is easy to trick Turnitin by editing an artificially generated piece. Sometimes the work produced by generative AI is better than a student’s own and, with a few checks on factual accuracy, can easily pass off as original.
The Ethical Approach
Dr. Tim Bradshaw, the chief executive of the Russell Group, said, “This is a rapidly developing field, and the risks and opportunities of these technologies are changing constantly. It’s in everyone’s interests that AI choices in education are taken on the basis of clearly understood values. The transformative opportunity provided by AI is huge and our universities are determined to grasp it.”
The group’s principles include increasing students’ AI literacy: teaching them the skills needed to use AI appropriately, covering concepts such as privacy, the biases of generative AI, its inaccuracies and misinterpretations, and plagiarism.
The statement added, “Appropriate adaptations to teaching and assessment methods will vary by university and discipline, and protecting this autonomy is vital.”
Multiple universities have begun offering students courses on the ethical use of AI. Jill Walker Rettberg, a professor of digital culture and co-director of the new research-excellence Center for Digital Narrative, said, “We recently established tiny courses, 2.5 European credits, which are designed so all students can take them. Some are online, some are only taught on campus. They are very small so that they are accessible for any student.”
The courses are reported to be ‘immensely popular’, with hundreds of students taking each one, and they are now available to staff and outsiders as well.
She added, “It is incredibly important that we, as a university, make sure our students and staff have the basic knowledge to be able to use AI in a responsible and constructive way.”
The Way Forward
Universities are scrambling not only to check who is using the technology to cheat, but also to understand how to use it to teach more effectively, changing curricula to cover the skills required in this new landscape of work. It is white-collar jobs in particular that are evolving to join hands with this technology, and it is important to equip the future workforce to use AI efficiently.
Some universities are taking a cautious approach and banning AI tools outright. Students are subjected to random checks and, if suspected of cheating, oral exams. Other universities and schools are encouraging students to use AI, but with certain guidelines.
Thomas Jørgensen, the director of policy coordination and foresight at the European University Association (EUA) said, “Rather than focus on ChatGPT, universities should look at how AI is more broadly used. Universities and academics are thinking hard about the consequences of using new technologies, and the importance of critical dialogue comes up again and again.”