AI as an Assistant
Supreme Court Justice Vikram Nath recently articulated a crucial perspective on the integration of Artificial Intelligence within the judicial framework.
He emphasized that while AI offers significant potential to streamline and support court operations, its capabilities are fundamentally assistive and cannot extend to replacing the core function of judicial adjudication. Justice Nath pointed out that the complexities inherent in millions of legal cases, spanning diverse areas like matrimonial disputes and commercial settlements, defy categorization within fixed datasets. The subtle nuances, the balancing of equities, and the deep understanding of individual case facts, particularly in sensitive matters like family partition suits, are all areas where human discernment is paramount.

AI can undoubtedly collate data, assist with translations, and categorize cases, thereby boosting efficiency. However, the ultimate responsibility for understanding the intricate human elements and making informed decisions rests solely with judges, whose minds are trained to grapple with these very complexities.
Limits in Complex Cases
Further elaborating on AI's limitations, Justice Nath underscored its inadequacy in handling constitutional matters and serious criminal cases. The intricate interpretation of constitutional principles and the multifaceted considerations in criminal proceedings present significant challenges for AI. For instance, in criminal cases with multiple accused individuals named in the same First Information Report (FIR), a judge must exercise profound discretion in deciding bail applications, potentially granting bail to some accused while denying it to others. This requires an appreciation of evidence, an understanding of intent, and an assessment of individual circumstances that AI, in its current form, cannot replicate. While AI can be a valuable tool for administrative tasks and data processing, the capacity for nuanced reasoning, ethical judgment, and empathetic understanding of human situations remains exclusively within the domain of human judges. The final judgment, therefore, will always reside with human intellect.
Human Conscience and Oversight
Echoing similar concerns, Justice A.G. Masih highlighted that technology, including AI, is not intended to supplant lawyers and judges. He posited that data-driven intelligence, however advanced, cannot substitute for human conscience. The very foundation of the justice system rests on public trust, which is built upon the courts' careful balancing of rights and liabilities, coupled with a thorough assessment of factual circumstances guided by human empathy. Feelings and moral reasoning are inherently human attributes that AI cannot emulate. While technology can facilitate judicial activities, it cannot replace the act of delivering justice itself.

Justice Masih also raised the critical point that the growing integration of technology necessitates robust institutional oversight. This might involve developing formal guidelines for the use of court technology and potentially establishing a specialized judicial-tech oversight board to monitor AI tools, check for biases, and review any automated drafts to ensure fairness and accuracy.
Risks of Inaccuracy
Senior advocate Sajan Poovayya brought to light the inherent risks associated with AI-generated content, particularly the phenomenon of 'hallucination'. He explained that since AI is a creation of mankind, it inevitably carries the potential to generate fabricated or imaginary information. This characteristic makes AI particularly dangerous for the judiciary, as it could present non-existent case law or flawed logic as factual. Such inaccuracies could severely undermine the integrity of legal arguments and court proceedings.

The Chief Justice of the Delhi High Court, D.K. Upadhyay, further elaborated on the global trend of AI-assisted judgment drafting systems being adopted in various countries. He noted that while AI is increasingly integrated into judicial governance for administrative efficiency and substantive support, it simultaneously raises profound questions about accountability, fairness, and the ultimate limits of automation within the justice system. The implications for evidence assessment, especially with manipulated digital content and deepfakes, are also a significant concern, potentially requiring courts to re-evaluate traditional methods of verifying evidence.