This Article is written by Honey Thakkar, Student of M.K.E.S. College of Law.
Abstract
This paper discusses the ramifications of Artificial Intelligence (AI) technology for the Indian judicial system, focusing mainly on constitutional order, algorithmic accountability, and fundamental rights. The application of AI has already begun through tools such as SUPACE; however, the absence of a legal framework remains a serious impediment, with privacy, equality, and due process standing to suffer most. Drawing on international approaches and policy papers, this article offers a scholarly evaluation and recommends strategies to limit the risks and harms associated with AI technologies. The study argues that while the prospects of AI technology are immense, its implementation must always correspond with India's constitutional aims, so that justice is not sacrificed for efficiency in the system.
Keywords
AI Technology, Judiciary of India, AI Ethics and Law, Constitutional Rights, Judicial Accountability, Algorithmic Discrimination, Data Privacy and Confidentiality, Government Policy on AI, Legal Modernization, Supreme Court of India, SUPACE, Witness Testimony.
Exploring the Use of AI Technology in the Legal Field
Artificial Intelligence (AI) is the branch of computer science dealing with the simulation of intelligent behavior in computers. In legal settings, AI can perform legal research, transcribe hearings verbatim, and assist with drafting judgments. Illustrative examples include the transcription programs implemented in the Delhi High Court and tools that predict case timelines in India. Another is the E-Courts Mission Mode Project, which sought to automate various court functions and prepare the judiciary for contemporary technology.
Implications From the Legal and Constitutional Perspective
AI involvement in judicial decision-making must observe constitutional requirements. Article 14 guarantees that all persons are equal before the law, so an AI system that applies discriminatory criteria would infringe this right. Article 21, as interpreted in Justice K.S. Puttaswamy v. Union of India, includes the right to privacy, which is compromised when a litigant's sensitive data is fed into an AI system without proper safeguards. Further, the right to a fair hearing (audi alteram partem) may be undermined by inexplicable AI algorithms, since such systems cannot supply justifiable reasoning for their outputs.
Judicial Accountability and Transparency
Transparent procedures are one of the standards that define justice. A human judge must deliver a reasoned judgment that can be appealed. AI systems, by contrast, are often regarded as "black boxes": parties cannot see how decisions are made, which closes these systems off from external evaluation. The absence of a reasoned judgment erodes the trust the public places in appellate review and in the system as a whole.
Bias, Discrimination, and Algorithmic Injustice
Modern machine learning techniques can make algorithms highly accurate, but an algorithm is only as good as its data. Where the input data embeds historical bias and injustice, the resulting decisions will inevitably be unequal. For example, AI tools trained on past bail decisions may systematically favor some social groups over others. Such outcomes would undermine the fair and just legal framework mandated by Article 14's guarantee of equality before the law.
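The mechanism can be illustrated with a deliberately simplified sketch (the data, groups, and "model" here are entirely hypothetical): an algorithm that merely learns historical grant rates will mechanically reproduce any group-level disparity present in its training data, granting different outcomes to otherwise identical applicants.

```python
# Toy illustration with hypothetical data: a "model" that only learns
# historical bail outcomes reproduces the bias embedded in those outcomes.
from collections import defaultdict

# Hypothetical historical records: (social_group, bail_granted)
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(records):
    granted = defaultdict(int)
    total = defaultdict(int)
    for group, outcome in records:
        total[group] += 1
        granted[group] += outcome
    # "Model": recommend bail if the historical grant rate for the group
    # exceeds 50% -- group membership alone decides the outcome.
    return {g: granted[g] / total[g] > 0.5 for g in total}

model = train(history)
print(model)  # group A is recommended bail, group B is not
```

On identical facts, the sketch recommends bail for one group and denies it to the other solely because of past disparities, which is precisely the Article 14 concern raised above.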
AI Legal Personality and Liability
The law ordinarily attaches liability to a human actor who acts with "intention" or "negligence." AI, however, lacks mens rea, the mental element at the core of legal culpability. The question of who is legally liable for an AI's error therefore remains very much alive. Responsibility for flawed decisions that harm people may fall on the developer, the deployer, or the adjudicatory body, depending on the intent behind and the manner of the AI's use. No enactment currently addresses these problems.
The Use of Evidence Created by AI
Sections 65A and 65B of the Indian Evidence Act, 1872 provide for electronic evidence. In Anvar P.V. v. P.K. Basheer, the Supreme Court emphasized the need for strict adherence to these provisions when admitting digital evidence. AI-generated documents and analytical outputs will not qualify for use in court unless they comply with these legal standards, making questions of authenticity and adequacy important at this juncture.
Over-reliance on Technology and Separation of Powers
Judicial practice should not integrate AI technologies beyond the role of an assistant. Overdependence on technology threatens the doctrine of separation of powers.
Decisions made without human intervention and largely determined by machines erode the independence of judicial judgment and deepen public distrust toward the system.
Absence of Regulatory Framework
NITI Aayog’s ‘National Strategy for AI’ and MeitY’s ‘Responsible AI’ guidelines are documents of value, but to date there is no binding law regulating the use of AI in sensitive areas such as the administration of justice in India. These documents offer guiding principles but lack legal force. In the absence of enforceable rules governing the deployment of AI, the judiciary is left exposed should abuse or misuse occur.
Comparative Legal Perspective
In the United States, AI systems such as COMPAS are employed in bail and sentencing decisions; they have been criticized for a lack of transparency and potential racial discrimination. The European Union’s proposed AI Act classifies judicial applications of AI as ‘high-risk,’ warranting stronger compliance oversight. China, by contrast, has incorporated AI into its smart courts, raising concerns about surveillance and state control over the judiciary.
Opportunities and Benefits of AI in the Indian Judiciary
Despite these hurdles, AI holds transformative possibilities. It can enhance access to the courts, lessen the burden on judges, facilitate case management, and supply legal research on demand. With appropriate governance, AI can democratize access to justice while improving judicial processes.
Recommendations
- Review AI policies for the courts and develop a strategy for regulations.
- Require auditability and explainable AI policies.
- Enforce proper governance of privacy and data.
- Train legal practitioners in AI ethics.
- Create an AI Judicial Review Board.
- Require external verifications for bias and discrimination.
Conclusion
AI is both beneficial and detrimental for any form of judicial reform; it increases the efficiency and accessibility of processes while simultaneously posing a threat to fundamental rights. For India, striking a balance between innovation and responsibility is essential. Creating a set of laws and ethics will enable us to utilize AI in a more controlled manner.
References
- Supreme Court of India, ‘SUPACE’ (2021) accessed 20 April 2025.
- Ministry of Law and Justice, ‘E-Courts Project Phase II’ (2023).
- Justice K.S. Puttaswamy (Retd.) v Union of India (2017) 10 SCC 1.
- Barocas S and Selbst A, ‘Big Data’s Disparate Impact’ (2016) 104 Calif. L. Rev. 671.
- Anvar P.V. v P.K. Basheer (2014) 10 SCC 473.
- NITI Aayog, ‘National Strategy for Artificial Intelligence’ (2018).
- Ministry of Electronics and IT, ‘Responsible AI for All’ (2021).
- European Commission, ‘Proposal for a Regulation on a European approach for Artificial Intelligence’ COM(2021) 206 final.