Artificial Intelligence (AI) and its possible convergence with the judiciary is a complex topic because of the many considerations involved. Setting standards and determining how AI can actually contribute to justice administration in India is no less complex.
In India, where the judicial system is largely adopted rather than homegrown, it is important to examine the considerations that determine whether AI can become an asset to justice administration. In this article, the authors cover how AI can become a legal asset for justice administration in India.
How Do We Recognize AI as an Asset to the Judiciary?
We need to understand that using AI here, as a service or a product, should help ensure that cases are addressed and that robust ways of doing so are developed. Questions of fairness and accountability need to be asked, but what makes this possible is not merely how surveys respond or how the technology works, because justice administration is a human activity: the separation of powers in law does not sanction a technocratic mechanism that outsources the rule of law to third parties, since, constitutionally, the judiciary is part of society, and so is the state. In addition, the AI we need here must display the reasonable qualities required to uphold the rule of law.
Taking into account industrial developments in AI ethics, and confining the range of possibilities in the interest of simplification, the advancement of AI as a technology can be preliminarily gauged along the following lines:
- Ability to measure fairness
- AI Ethics as “soft law”
- AI & Judiciary as subject-matter curriculum
- Realistic scrutiny of possible automation in procedural justice
- AI as an assistive-only tool
The facets listed above are indicative of functions attributable not just to the formulation of judgments but to decision-making in general.
Ability to measure fairness
Ability to measure fairness concerns the competence of an AI system to evaluate facts and carry out a non-discriminatory decision-making process, independent of potential algorithmic biases. The emphasis here is on the potential of an AI system in a hypothetical situation where the system carries no trace of algorithmic bias, either in its construction or in the data it is fed. To seriously contemplate a foothold in judicial functions, the AI needs to be fair. Yet even if ideal AI ethics were achieved, fairness is not a straitjacket quality. The inability to measure fairness is the primary hurdle to AI's entry into judicial decision-making.
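To see why "measuring fairness" is harder than it sounds, consider what a fairness metric actually computes. The sketch below implements one common statistical definition, demographic parity, over hypothetical binary decisions (all names and data are illustrative, not drawn from any real court or dataset). It is one of several mutually incompatible definitions in the literature, which is precisely the problem the article identifies:

```python
# A minimal sketch of one fairness metric: demographic parity difference.
# Assumption: binary "favourable outcome" labels for two groups; real
# judicial data and group definitions would be far more complex.

def favourable_rate(outcomes):
    """Fraction of favourable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favourable-outcome rates between two groups.

    0.0 means this metric sees the decisions as 'fair'; a larger gap
    suggests disparity. Note what the number cannot capture: the facts
    of individual cases, or the context-dependent, Rawlsian sense of
    fairness discussed in this section.
    """
    return abs(favourable_rate(group_a) - favourable_rate(group_b))

# Hypothetical bail decisions (1 = granted, 0 = denied)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 granted
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 granted
print(demographic_parity_difference(group_a, group_b))  # 0.375
```

A single scalar like this can be gamed, and satisfying it can violate other definitions (equalized odds, calibration), which is why "the ability to measure fairness" remains an open problem rather than a solved engineering task.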
Although executing a task fairly is not itself an art, the conscientious activity leading to fairness has the tendency towards originality that art requires. Fairness, in its jurisprudential sense, lies in the naturalness of common sense applied to a set of common or uncommon facts. John Rawls, in his seminal work A Theory of Justice (1971), argued that the idea of fairness stems from our unbiased appraisal of well-established principles of justice.
For Rawls, the principles of justice are arrived at through a universal consensus of people in a natural state, behind the veil of ignorance. Through different learning methods, AI might be able to gauge attributes of fairness, but philosophically fairness is not a simple topic. Humans themselves have not arrived at a defined set of rules for fairness: it is a dynamic trait whose principles are constantly reinvented. Humans, too, err in being fair, often because of inherent biases, emotions, the factual backdrop and so on.
Judicial fairness may be somewhat more structured, but to reach a fair conclusion, expertise in determining the rights of the parties has to be developed in parallel. At a relatively small scale, countries like Estonia have attempted to deploy AI-like systems for adjudicating certain claims. While such attempts will keep taking place in many states, just as AI can never be an artist, its ability to be fair needs to be evaluated against AI's technical realities.
Moreover, a May 2020 article co-authored by Professor Sandra Wachter argues that fairness cannot be automated, even against the "gold standard" of the European Court of Justice. The idea of fairness does not rely solely on the formal strands and notions of natural justice, because experience, and how it is absorbed, also matters.
Automated fairness cannot be achieved because ML-based systems cannot explain or digest the information they learn, so a merely idealistic approach to estimation will not take the initiative further. This bears on the vision of AI ethics and the idea of fairness. Considering that polyvocality is a common trait of common law-based judicial systems such as India's, it is important to understand why polyvocality matters, and when it does not.
Additionally, algorithmic biases are not entirely without purpose: they can retain a residual strategic function, which makes a good case for AI ethicists to understand the judiciary's role in scrutinizing policy and law so that the ends of the rule of law are met. More anthropological perspectives also work in policy-making, especially in India's case.
AI Ethics as “soft law”
In a philosophical sense, AI ethics is largely about 'responsibility' and 'trustworthiness'. Any technology developed today, AI or otherwise, has to pass scrutiny of its genuineness with respect to its making, object, use, after-use and their consequences. Although all these components matter, it is the making of the technology that decides the level of trust among users.
Specifically for AI, considering it is an autonomous tool, trust depends on the developers who build the system. Although the initial making of the product is important, the data fed for training also need to be fair and ethical. Established ethical principles for developers are a core component in ensuring a harmless system. Realistically, however, merely presuming that the data will be perfect cannot ensure better results, because the role of AI ethics as soft law is not properly understood by most AI ethicists.
AI ethics should not merely be used as a marketing tool to attract trust. It should have an objective source, goal, functioning and environment. The terms 'ethics' and 'law' are not necessarily distinct: there are several convergences between the two, even as each holds its independent turf. The legal structure is, more often than not, based on well-established ethical principles.
For instance, environmental pollution is illegal, and the preventive guidelines in several environmental laws are based on practised ethical principles. Ethics itself can be considered "soft law": it carries the moving force of law, as in corporate policies or community guidelines. Human judges are subject to an extremely robust system of established and continuing scrutiny of their ethics; they are important public figures. The success of AI in the judiciary, or elsewhere, will depend on how well AI ethics is implemented by developers.
The greater the trust, the faster the acceptance. Even assuming accountability is assured, explainability is the key to lending AI ethics credibility as soft law. Another important parameter for determining how far AI ethics can function as soft law is its anthropomorphic basis. We must remember that the principles of AI ethics are based on the following:
- The principles of management ethics
- The cultural experience behind the principles of management ethics
- The fungible human adaptation, comprehension & understanding of the kind of AI/ML system or service being put into use
Although commonness in understanding jurisprudence matters, these three aspects that make AI ethics complete will always affect the procedural institution of law.
AI & Judiciary as subject matter curriculum
Constant study, R&D and exploration of AI in the judiciary under an experimental rubric is the sine qua non of general research into AI. Specialized expert committees, civilized discourse and information-sharing are some of the components that will help formulate trust, build a mechanism and fill the void in the judiciary. At the 19th biennial state conference of judicial officers, the then Chief Justice of India, Justice Sharad Bobde, noted: "We must employ every talent we have, every skill we possess to ensure that the justice is received within reasonable time. Delay in justice cannot be a reason for anybody to take law into their own hands. We have the possibility of developing Artificial Intelligence for the court system, only for the purpose of ensuring that undue delay is prevented in the delivery of justice."
In a notable study involving Stanford Law School, Duke Law and UCLA, AI performed legal tasks exceptionally well. It outperformed some seasoned lawyers in spotting issues in a sample Non-Disclosure Agreement: it took 26 seconds to identify the issues in the NDA with 95% accuracy, while lawyers with more than 26 years of experience took far longer and were only 85% accurate.
A judge's tasks, however, differ from many legal tasks. These advancements are worth noting, since they signify development in AI's cognitive ability, progress in Natural Language Processing and a base for strong AI ethics mechanisms. But even if the study shows accuracy, that does not mean the system is precise or capable enough: merely identifying issues limits the scope of activity for an "AI-based lawyer", if we may call it that, and gains in accuracy, even cognitive ones, cannot be strategically extended to tasks of a different degree unless they are explainable.
Yes, for some narrow tasks in corporate law, such systems are indeed time-saving, but merely relying on the results does not prove the design to be sound, nor does it justify drawing a probable trajectory from them.
It is therefore important to study AI and the judiciary as a subject-matter curriculum beyond the mere need for systemic mobilization, because such mobilization must itself be based on motives, and so the human factor must be protected no matter what.
Realistic scrutiny of possible automation in procedural justice
There is no doubt that AI, in its present form or in the near future, cannot wholly or even partly substitute judges. A valuable proposition therefore lies in the administrative tasks of judges, and in judicial tasks in an assistive capacity. At the 79th Foundation Day celebration of the Indian Income Tax Appellate Tribunal, even the CJI noted: "Though I must make one thing clear: Because we have been dealing with the introduction of artificial intelligence in courts, I am firmly of the view, based on the experience of systems that have used artificial intelligence, that it is only the repetitive area or decision making such as rates of taxation, etc., or something that is invariably the same or which is in a sense mechanical, and that must be covered by artificial intelligence."
Procedure of any kind tends to be similar and repetitive, and for such procedural aspects, high-level machine learning should suffice. The term "procedural justice", however, has a broader connotation: it is not only about repetitive procedures at the courts, but about following the set of steps necessary for apt consideration of a request before making a fair decision.
For instance, access to justice is a primal right of citizens, and the maintainability of any case or petition is a procedural stage. Given the daily influx of litigation, a judge is not in a position to apply a sound mind and full consideration to each and every case. Issues of maintainability turn on technical rules such as limitation, cause of action, accrual of rights and the factual situation. In purely contractual disputes, and in some other specific scenarios, this process could be fully automated. Even in corporate law, AI/ML services and systems help ensure more accurate work, and a degree of automation might seem reasonable.
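A maintainability screen of this kind is essentially rule application, which is why it lends itself to automation. The sketch below is purely illustrative: the three-year period and the dates are hypothetical examples, not a statement of Indian limitation law for any specific claim, and real rules (condonation of delay, exclusions, acknowledgement of debt) would still need human judgment:

```python
# Illustrative sketch of a rule-based maintainability check on limitation.
# Assumption: a flat, hypothetical limitation period; real statutes carve
# out many exceptions that this naive calendar arithmetic cannot capture.

from datetime import date

LIMITATION_YEARS = 3  # hypothetical period for an ordinary contractual claim

def within_limitation(cause_of_action: date, filing: date,
                      years: int = LIMITATION_YEARS) -> bool:
    """Return True if the suit is filed on or before the limitation deadline."""
    try:
        deadline = cause_of_action.replace(year=cause_of_action.year + years)
    except ValueError:
        # 29 February rolled forward into a non-leap year
        deadline = cause_of_action.replace(year=cause_of_action.year + years,
                                           day=28)
    return filing <= deadline

print(within_limitation(date(2019, 1, 15), date(2021, 6, 1)))  # True
print(within_limitation(date(2016, 1, 15), date(2021, 6, 1)))  # False
```

Even this toy version shows the divide the article draws: the mechanical threshold test can be automated, but whether delay should be condoned in a given case remains a question for the judge.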
However, observing how such a system or service conducts itself, and how humans respond once the required tasks are in place, makes clear that instrumentalism is necessary. Explainability thus again becomes an asset for policy determination, and a basis for compliance and liability upon the actors involved with the system or service.
AI as an assistive-only tool
There are two possible kinds of systems: one in which the human judge could be wholly replaced, and one in which automation would only assist the human judge. The mode in which an AI system is conceived purely as an assistive tool is known as a Judicial Decision Support System (JDSS). Although the second option seems less risky, many consider that even it poses significant challenges: semi-automated assistive tools trigger a certain psychological trust that can hamper natural judicial thinking. In one experiment analysing how lawyers respond to automated legal reasoning, the following conclusions were drawn. The participants:
- had difficulties assessing the accuracy of the automatically generated advice, as they focused on the argumentation presented by the system and ignored alternative solutions;
- placed too much trust in the system's work and, as a result, carelessly accepted the system's advice (including incorrect advice inserted into the experiment on purpose);
- when advised by both the system and a human, considered the system's advice "to be more objective and rational than the human advice" (even when the human's advice was identical to the system's).
Hence, automated and semi-automated legal reasoning tools carry reservations that need to be addressed. In 2017, McKinsey reported that about 23% of a lawyer's job could be entirely replaced by Artificial Intelligence. A sensible analysis, however, goes beyond headlines and reads between the lines: the underlying data does not distinguish between administrative tasks and the tasks that require a lawyer's legal acumen on a daily basis.
A UK-based AI chatbot lawyer helps people appeal against parking tickets by collecting data about the visibility and clarity of no-parking signs, and so on. Based on this information and other factual details, it helps determine the legal validity of a parking ticket. This tool, known as DoNotPay, has taken on 2,50,000 cases and succeeded in 1,60,000 of them, a success rate of 64%. Such assistive and semi-automated tools are a harbinger of hope that AI can bring significant changes to this sector.
Lastly, AI in the judiciary, or in general, does not need to be super-intelligent; it needs to be better than humans in explainability, strategy implementation and digesting what it consumes. Unfortunately, there are numerous examples of human bias in the judiciary. Even so, it is comparatively difficult to build an AI system that is actually fairer than many people in the judiciary.
A 2011 study of the Israeli parole board showed a strong trend of harsher decisions being delivered before lunch than after. Such human errors are common and are mostly a result of human emotions. There are so many daily contradictions in case law that people in the future may think the present court system was only a little better than flipping a coin. With adequate motivation and investment, an AI system could genuinely fill a void in resolving some of the judiciary's problems. Still, procedural regularity and the three aspects that make AI ethics complete should not be ignored, because, realistically, AI systems and services can currently be used only in corporate law or contractual work in any practical sense.
The human element of litigation and judging may be biased after all, but the idea of fairness remains incomplete unless the indispensable role and position of human lawyers and judges is instrumentally secured. In India's common law system, polyvocality should be acutely observed and scrutinized to ensure that no technological intervention becomes a systemic anomaly subverting the human promise that the rule of law and procedural justice intend to provide.
Abhivardhan is the Chairperson and Managing Trustee of the Indian Society of Artificial Intelligence & Law and the President of Global Law Assembly.