New Delhi: The Supreme Court on Tuesday expressed deep concern over the growing trend of lawyers relying on Artificial Intelligence (AI) tools to draft petitions, after a Public Interest Litigation (PIL) cited a completely non-existent judgment titled ‘Mercy versus Mankind’. The court warned that such practices threaten the integrity of judicial proceedings and impose an avoidable burden on judges.
A Bench comprising Chief Justice of India Surya Kant, Justice BV Nagarathna and Justice Joymalya Bagchi made the observations while hearing a PIL filed by academician Roop Rekha Verma seeking guidelines on political speeches. During the hearing, the Bench was taken aback to learn that the petition relied on a judgment that did not appear in the official legal records.
Expressing serious displeasure, the CJI remarked, “We are alarmed to reflect that some lawyers have started using AI to draft petitions. It is absolutely uncalled for.” The Bench underscored that while technology may assist legal research, unthinkingly relying on AI-generated content without proper verification undermines the credibility of legal pleadings and the judicial process.
Justice Nagarathna pointed out that she had recently encountered a similar case in which a petition cited the non-existent judgment of ‘Mercy versus Mankind’. She noted that such fabricated references not only mislead the court but also create unnecessary complications in judicial proceedings.
Echoing the concern, CJI Surya Kant revealed that the problem was not limited to isolated cases. Referring to proceedings before Justice Dipankar Datta’s Bench, the CJI said that “not one but a series of such judgments were cited,” all of which were later found to be fabricated or unverifiable. This, he said, reflected a disturbing pattern that required urgent attention.
The Bench also noted a subtler but equally troubling trend—instances in which genuine judgments were cited, but incorrect or entirely fabricated quotations were attributed to them. Justice Nagarathna observed that such distortions make it extremely difficult for judges to verify the accuracy of arguments. “It creates an additional burden on judges,” she said, emphasising that courts must spend valuable time cross-checking references instead of focusing on substantive legal issues.
Justice Joymalya Bagchi, meanwhile, lamented what he described as a gradual decline in the quality of legal drafting. He observed that many Special Leave Petitions (SLPs) now consist of lengthy reproductions of past judgments with minimal original reasoning or articulation of legal grounds. This, he said, reflected a worrying erosion of independent legal analysis and professional diligence.
The court’s observations come at a time when AI tools such as ChatGPT and other generative platforms are increasingly being used across professions, including law. While these tools can assist with research, summarisation, and preliminary drafting, they can occasionally generate inaccurate or entirely fictional legal citations—a phenomenon commonly referred to as “AI hallucination.”
At the same time, the Supreme Court itself has been actively leveraging AI to improve judicial efficiency. As part of its efforts to enhance access to justice and reduce delays, the court has deployed AI-based tools to translate judgments into multiple Indian languages, making them accessible to a wider population, and has integrated AI and machine learning into its case management systems to streamline administrative processes.
However, the court drew a clear distinction between the responsible use of AI as a support tool and its uncritical use as a substitute for professional judgment. Judges have consistently emphasised that technology can assist but cannot replace the application of human intellect, legal reasoning and ethical responsibility.
Over the past few years, courts across the country have encountered several instances in which petitions and legal submissions cited judgments that could not be found in authorised law reports or official records. Such errors not only delay proceedings but also raise concerns about professional accountability and the reliability of legal submissions.
Reiterating the importance of professional responsibility, the Bench stressed that lawyers remain fully accountable for the accuracy of every citation, argument and reference included in their pleadings. It emphasised that all legal authorities must be carefully verified against authorised sources before being presented in court.
The Supreme Court’s strong remarks are likely to serve as a cautionary signal to the legal fraternity at a time when AI tools are rapidly transforming professional workflows. While acknowledging the potential benefits of technology, the court made clear that its misuse—particularly when it compromises accuracy and judicial integrity—cannot be tolerated.
The observations also highlight a broader challenge facing the legal profession in the digital age: balancing the efficiency gains offered by AI with the need to uphold rigorous standards of accuracy, accountability and independent legal reasoning.
As the judiciary increasingly embraces technology to improve access and efficiency, the court’s message was unequivocal: AI may assist, but it cannot replace the diligence, responsibility, and professional integrity that lie at the heart of the legal system.