REGULATING THE UNREGULATED: SUPREME COURT’S APPROACH TO AI AND DEEPFAKE TECHNOLOGIES IN INDIA

Author: Rasika Pitale

I. INTRODUCTION

Artificial Intelligence (AI), driven by advancements in computational power and sophisticated algorithms, has evolved from a theoretical concept to an integral component of the global digital infrastructure. This progress has birthed technologies like deep learning, a subset of AI, which enables the creation of “deepfakes”: digital forgeries that use Generative Adversarial Networks (GANs) to produce highly realistic synthetic audio, video, and imagery. Deepfakes can generate entirely new content or manipulate existing media to make it appear authentic.

While AI and deep learning offer benign uses in fields like research and entertainment, their weaponisation poses a profound and imminent threat to individual dignity, societal trust, and the democratic process. This threat is uniquely amplified in India, a nation characterised by a vast digital population and high susceptibility to misinformation.

The rise of deepfakes presents a critical challenge to the Indian legal system: how can an existing regulatory framework, designed for a pre-digital era, govern advanced synthetic media? This article critically examines the Indian scenario, analysing the existing legislative patchwork, the landmark interventions made by High Courts, and, most importantly, the nuanced, sometimes reticent, Supreme Court’s approach to regulating AI and deepfake technologies.

II. THE INDIAN DIGITAL CRUCIBLE

India’s unique digital landscape, marked by over 900 million internet users, a diverse socio-cultural fabric, and a high volume of digital consumption, makes it a fertile ground for the propagation of synthetic misinformation. Deepfakes pose multi-faceted threats:

  1. Democratic Integrity: Synthetic clips can rapidly skew public discourse, incite communal unrest, or influence voter attitudes during electoral cycles.
  2. Gendered Abuse and Defamation: A significant portion of deepfake content globally involves non-consensual sexually explicit material, leading to egregious violations of individual privacy and dignity, particularly against women.
  3. The “Liar’s Dividend”: The very existence of accessible deepfake technology allows bad actors to dismiss genuine, damaging content as “just a deepfake,” eroding trust in all forms of digital media and legitimate institutions.
  4. Cybercrime and Fraud: AI integration has escalated cybercrime, with scammers increasingly employing deepfake technology (GANs) to simulate judicial authorities or create convincing impersonations for ‘digital arrest’ scams, costing victims thousands of crores of rupees.

III. THE LEGISLATIVE PATCHWORK

In the absence of a comprehensive, dedicated AI and deepfake law, Indian authorities and courts have traditionally relied on existing statutory frameworks, which often prove inadequate to address the scale and speed of the digital threat:

  • The Information Technology Act, 2000 (IT Act): This Act contains core provisions utilised against deepfakes, primarily:

Section 66C (Identity Theft) and Section 66D (Cheating by Personation).

Section 66E (Violation of Privacy) and Section 67/67A (Obscenity/Sexually Explicit Material).

Section 79 (Intermediary Liability): This is the most crucial section, granting social media platforms a “safe harbour” immunity for third-party content, provided they exercise “due diligence”. The IT Rules, 2021 (as amended) mandate that platforms remove unlawful content, including deepfakes, within 36 hours of receiving a complaint and educate users about prohibited content. The continuation of the safe harbour is contingent upon compliance with these due diligence requirements.

  • Indian Penal Code (IPC) / Bharatiya Nyaya Sanhita (BNS), 2023:

The newly enacted BNS modernises offences. For instance, Section 353 (BNS) penalises the circulation of misinformation that threatens public order, allowing for imprisonment up to three years. The BNS also addresses cheating by personation and organised cybercrime.

While these laws offer a foundation, their application to sophisticated AI-generated media is strained. The fundamental problem lies in the fact that deepfakes challenge the very definition of evidence and truth, requiring a regulatory response that goes beyond post-facto penal action.

IV. THE SUPREME COURT’S APPROACH: NON-INTERVENTION AND INTERNAL CAUTION

The Supreme Court of India’s approach to the external regulation of AI and deepfakes can be characterised by measured non-intervention and a focus on internal judicial integrity regarding the technology.

1. Declining to Frame Guidelines

In a significant recent development, the Supreme Court declined to issue directions or frame a comprehensive regulatory framework for deepfakes and AI-generated media in Aarati Sah v. Union of India, a Public Interest Litigation (PIL) seeking such regulation.

The Bench disposed of the plea after the Union Government informed the Court that draft AI rules had already been formulated and released for public consultation. The Court reasoned that since the executive branch was already taking substantial steps to develop policy and regulatory guidelines, there was no necessity for the judiciary to intervene and issue preemptive directions. This stance reflects a judicial deference to the legislative and executive spheres in policy-making for a nascent and rapidly evolving technology, effectively choosing to allow the executive’s draft rules to proceed without judicial overlay.

2. Flagging Internal Risks and Ethical Use

While the Apex Court has been cautious about regulating deepfakes in the public domain, it has shown acute awareness and a proactive approach toward the use and dangers of AI within the judicial system itself.

  • Judicial Vigilance Against Manipulation: The Supreme Court has expressed alarm over the misuse of deepfakes, particularly after Chief Justice of India B.R. Gavai revealed that morphed images of him were circulating online.
  • The SC White Paper on AI: Through its Centre for Research and Planning (CRP), the Supreme Court released a White Paper on “Artificial Intelligence and the Judiciary”. This paper acknowledges the potential of AI tools like SUVAS (translation) and SUPACE (legal research) to improve efficiency, but simultaneously identifies key risks posed by generative AI:
    • Deepfake Evidence Manipulation: The possibility of sophisticated synthetic media distorting oral and documentary evidence.
    • Accuracy Risks and Hallucinations: The risk of AI systems producing non-existent case citations or fictitious legal interpretations.
    • Algorithmic Discrimination: The potential for biased datasets to perpetuate social inequalities in judicial outcomes.
  • The “Human in the Loop” Principle: The SC mandates a cautious approach, insisting that the judge must remain the final arbiter. AI is confined to administrative and processual functions, with mandatory human verification of all AI outputs to ensure judicial independence and prevent reliance on technology from overpowering human judgment.
  • Addressing Cybercrime: The Court has taken a significant step against AI-facilitated crimes, empowering the CBI with pan-India jurisdiction to investigate sophisticated ‘digital arrest’ scams that utilise deepfake technology for impersonation. This directive signals the Court’s recognition of deepfakes as a serious, technology-enabled crime requiring federal investigative coordination.

V. THE HIGH COURT-LED JUDICIAL SHIELD: ANCHORING DEEPFAKES IN ARTICLE 21

In the gap created by the absence of a comprehensive central law and the Supreme Court’s policy of deference, the various High Courts of India have nimbly stepped in to protect individuals by expanding the scope of Personality Rights and the fundamental right to dignity.

1. The Right to Privacy and Personality

Indian courts have anchored the right to control the commercial and personal use of one’s identity in the fundamental right to privacy under Article 21 of the Constitution. The Supreme Court’s own precedent in R. Rajagopal v. State of T.N. (1994) established the groundwork for personality rights, and this doctrine was further reinforced by the landmark K.S. Puttaswamy v. Union of India (2017) judgment, which affirmed privacy as a fundamental right.

2. Key Cases and Judicial Remedies

High Court rulings have been instrumental in addressing the immediate harm caused by deepfakes:

  • Protection of Public Figures: The Delhi and Bombay High Courts have issued extensive injunctions to protect celebrities from AI-driven exploitation and deepfakes.
    • In the case of actor Abhishek Bachchan, the Delhi High Court granted injunctions against fake AI videos.
    • Similarly, the Bombay High Court granted relief to actor Sunil Shetty, explicitly recognising his personality rights under Article 21 and ordering the removal of infringing AI-generated content.
    • The Bombay High Court also protected singer Asha Bhosle from the AI-driven replication of her distinctive voice.
  • Protecting Activists and Ordinary Citizens: In a landmark ruling concerning activist Ms Kamya Buch, the Delhi High Court issued a comprehensive and protective response against an online harassment campaign that included AI-generated visuals and deepfakes. The Court ordered:
    • An ad interim injunction against the dissemination of objectionable content.
    • Directives to platforms like X Corp and Meta to promptly remove specified URLs.
    • Orders to Google LLC to de-index the materials.
    • A directive for social media platforms to disclose the identifying information of the users responsible.

These judicial responses, often relying on “John Doe” orders against unknown defendants, demonstrate the Indian judiciary’s adaptability and willingness to use constitutional principles to create a formidable shield against emerging technological threats like deepfakes.

VI. THE PATH TO FORMAL REGULATION: NEW RULES AND TRACEABILITY

While the Supreme Court has chosen a consultative path, the Union Government is actively modernising the regulatory landscape to specifically target deepfakes and AI. This transition is marked by a focus on mandatory labelling and enhanced platform accountability:

  • The Digital Personal Data Protection Act (DPDP Act), 2023

 This legislation mandates user consent for the processing of personal data, including biometrics used to create deepfakes. Non-consensual deepfake creation can be penalised as a breach under the Act, with fines potentially reaching ₹250 crore.

  • Mandatory Labelling and Traceability:

 The Ministry of Electronics and Information Technology (MeitY) has proposed crucial amendments to the IT Rules, 2021, mandating the labelling of all AI-generated or “synthetically generated information”.

  • Proactive Disclosure: Users are required to self-declare whether their uploaded content is AI-generated.
  • Prominent Labelling: Visual content must carry a label covering at least 10% of the display area, and audio content must carry an audible label for at least 10% of its duration.
  • Metadata Embedding: A permanent, machine-readable metadata identifier or watermark must be embedded in the content so that its creator remains traceable.
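To make the proposed requirements concrete, the following is a minimal, purely illustrative Python sketch of what a machine-readable provenance tag and the 10% label-coverage rule might look like in practice. It is not an official MeitY specification: the field names, the hash-based content binding, and the full-width-strip label geometry are all assumptions made for illustration.

```python
# Illustrative sketch only (hypothetical field names, not an official
# MeitY schema) of self-declared provenance metadata and a minimum
# label size under the proposed 10% coverage requirement.
import hashlib
import json
from datetime import datetime, timezone


def make_provenance_tag(creator_id: str, tool_name: str, content_bytes: bytes) -> dict:
    """Build a machine-readable record identifying synthetic content and its origin."""
    return {
        "synthetic": True,                      # user's self-declaration flag
        "creator": creator_id,                  # traceable creator identifier
        "generator": tool_name,                 # AI tool used to produce the media
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content_bytes).hexdigest(),  # binds tag to content
    }


def label_dimensions(width: int, height: int, fraction: float = 0.10) -> tuple[int, int]:
    """Smallest full-width strip whose area covers `fraction` of the display."""
    return width, max(1, round(height * fraction))


if __name__ == "__main__":
    tag = make_provenance_tag("user-123", "example-gan-v1", b"synthetic video bytes")
    print(json.dumps(tag, indent=2))
    print(label_dimensions(1920, 1080))  # a 1080p frame needs a strip 108 px tall
```

In this sketch the SHA-256 digest ties the tag to one specific file, so stripping or re-attaching the metadata to altered content is detectable; a production watermark would instead be embedded in the media signal itself.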

This proposed framework signals a shift towards pre-emptive technical regulation, moving beyond merely removing content to ensuring its origin is always identifiable. However, this approach also raises concerns about its impact on the safe harbour provision under Section 79, as mandatory platform actions like verification and labelling could be interpreted as “modifying” the content, potentially inviting liability.

VII. CONCLUSION

The Supreme Court of India’s approach to AI and deepfake regulation is a complex tapestry woven with deference to the executive, internal caution regarding judicial integrity, and a reliance on constitutional precedents. By declining to frame comprehensive guidelines in a PIL, the Court has consciously prioritised allowing the government’s legislative process, culminating in the new IT Rules and the DPDP Act, to mature.

The next phase of Indian jurisprudence will focus on how the new statutory rules, particularly those related to mandatory labelling and traceability, are tested and interpreted in court. The challenge for the Supreme Court in the coming years will be to harmonise the executive’s push for accountability and tracing with the constitutional guarantee of free speech (Article 19(1)(a)), ensuring that regulatory overreach does not lead to a chilling effect on legitimate AI-enabled artistic or political expression. Ultimately, regulating the unregulated requires not just new laws, but a consistent, constitutionally sound judicial doctrine to ensure that while technology evolves, the fundamental rights of India’s vast digital citizenry remain sacrosanct.

References
  1. Shinu Vig, Regulating Deepfakes: An Indian Perspective, 17(3) Journal of Strategic Security 70–93 (2024).
  2. Bobby Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107(6) California Law Review 1753–1820 (2019).
  3. Paul Scharre, Michael C. Horowitz & Robert O. Work, Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New American Security (2018).
  4. Hannah Smith & Katherine Mansted, Weaponised Deep Fakes: National Security and Democracy, Australian Strategic Policy Institute (2020).
  5. R. Rajagopal v. State of Tamil Nadu, (1994) 6 SCC 632.
  6. K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
  7. Aarati Sah v. Union of India, 2023 SCC OnLine SC 543.
