Proctoring technologies tap AI to verify students through 1:1 identification. They use video and audio recordings of students taking online exams, monitoring body movements and noises to flag malpractice.
Delhi Technological University; University of Petroleum and Energy Studies, Dehradun; SRM University, Chennai; O.P. Jindal Global University, Sonipat; Mumbai University; and IIT-Bombay leverage AI for exam invigilation in various capacities.
What Could Go Wrong?
The current technology captures a student’s picture every 15 to 20 seconds, resulting in 180-240 photos of the student in an hour, while also collecting biometric data and personally identifiable information (PII) such as full name, date of birth, address, phone number, government IDs, etc.
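The capture rate above is easy to sanity-check. The short sketch below (a back-of-the-envelope calculation, not taken from any proctoring vendor) derives the hourly photo count from the cited 15-20 second interval:

```python
# Back-of-the-envelope check of the cited capture rate:
# one photo every 15-20 seconds over a one-hour exam.
SECONDS_PER_HOUR = 3600

def photos_per_hour(interval_seconds: int) -> int:
    """Number of snapshots captured in one hour at a fixed interval."""
    return SECONDS_PER_HOUR // interval_seconds

low = photos_per_hour(20)   # slowest cited interval
high = photos_per_hour(15)  # fastest cited interval
print(f"{low}-{high} photos per hour")  # prints "180-240 photos per hour"
```

Even at the slower end, that is three snapshots a minute of a student's face, on top of the PII collected at registration.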
On top of the exam pressure, being continuously exposed to the camera, with private data collected and monitored every second, can spike students’ anxiety levels. The surge in cybersecurity incidents and the lack of adequate policy and enforcement further compound the issue.
The collected data is prone to misuse. To make matters worse, India does not yet have a comprehensive data protection law. The Personal Data Protection Bill is still under review by a Joint Parliamentary Committee. Experts have criticised the draft bill for granting the government excessive power to use anonymised data. There is also palpable uncertainty over how the government will use the data gathered from public universities through proctoring technologies.
For starters, the technology used by universities is not foolproof. For instance, there were complaints about technical glitches in the CodeTantra app used by University of Petroleum and Energy Studies, Dehradun. Further, workarounds for circumventing proctoring technologies are readily available on the internet.
Universities should also account for external factors such as internet connectivity and potential disturbances; students can all too easily be flagged for situations beyond their control.
What Can Be Done About It?
The letter rightly points out that any justifiable intrusion by the State into people’s right to privacy, protected under Article 21 of the Constitution, must conform to certain thresholds: legality, necessity, proportionality and procedural safeguards. The universities have failed to uphold these standards.
For these reasons, universities should refrain from using AI until the privacy and technical requirements are met.
Universities should conform to a basic protocol while using AI for proctoring.
Firstly, the data should not be used for any purpose other than improving the proctoring application’s performance.
Secondly, universities should ensure the availability of a functional computer or laptop, high-speed internet connection, and other accessories, including a webcam and speakers for all students.
Thirdly, the technology should be tried and tested in all scenarios and have a high threshold for accuracy.
There is always a chance of technology going wrong. To address this, final decisions on malpractice should rest with human proctors, with the AI limited to raising flags for review.
Technology, especially automation through AI, has been a lifesaver amidst the pandemic and will continue to be so. However, it is essential to deploy technology responsibly while accounting for privacy and security issues.