Deepfake Videos and the Law in India

Author: Kaushiki Dubey

Introduction

The digital landscape of India is currently facing a formidable and evolving challenge: the proliferation of deepfake videos. These sophisticated, AI-generated fabrications have recently surged into the national consciousness, circulating widely on social media platforms. They depict prominent figures, from celebrities and politicians to private citizens, appearing to say or do things they demonstrably never did. The critical problem lies in their almost-perfect realism, which leads many viewers to mistake them for authentic content. By the time clarifications are issued, the damage, often catastrophic, has already been inflicted on the victims.

For individuals, the consequences are severe; reputations are systematically destroyed, mental health is affected, professional opportunities are lost, and personal relationships suffer. Crucially, deepfakes inflict a deeper societal danger: the pervasive erosion of confidence in all digital media. As the line between the real and the unreal blurs, foundational trust in video evidence, news clips, and even genuine testimonies vanishes.

The central and most pressing concern is that the law is lagging significantly behind technology. While artificial intelligence continues its rapid advance, India’s legal frameworks are struggling to evolve. There is currently no specific, dedicated law in India designed to directly address deepfake technology. Authorities are forced to rely on a patchwork of outdated legal instruments, a combination of general cyber laws, criminal law provisions, and constitutional principles, all of which were conceived before such a sophisticated level of digital impersonation was possible.

This analysis examines deepfakes from a foundational legal perspective, detailing why they are an immediate concern, exploring how existing statutes, including the newly introduced Bharatiya Nyaya Sanhita, 2023 (BNS), are being applied, highlighting the enormous practical challenges in enforcement, and arguing unequivocally for why India must urgently legislate a specific, comprehensive legal framework to regulate this dangerous technology.

Understanding the Mechanics and Misuse of Deepfakes

The term “deepfake” blends “deep learning” (a powerful subset of AI) and “fake”. Deep learning allows computer systems to absorb, process, and learn complex patterns from vast quantities of data. When deployed to manufacture synthetic yet startlingly realistic audio recordings or video footage, the result is a deepfake.

The creation process involves training AI software on extensive datasets of the target person’s genuine content, meticulously cataloging their unique facial expressions, eye and lip movements, voice cadence and body language. Once trained, the software can autonomously generate new content where the person appears to act within a completely artificial scenario.
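
In rough terms, this learn-then-generate process can be sketched as follows. The snippet below is a deliberately toy illustration: real deepfake systems rely on deep neural networks such as GANs or autoencoders, not the simple statistics used here, but the two-step structure the article describes is the same: first learn patterns from many genuine samples of a target, then synthesise new content that matches those patterns.

```python
import random
import statistics

def learn_patterns(samples):
    """Step 1: 'train' on many observations of the target person.
    Each sample is a list of numbers standing in for facial
    measurements; here we learn only their mean and spread."""
    n = len(samples[0])
    means = [statistics.mean(s[i] for s in samples) for i in range(n)]
    spreads = [statistics.pstdev([s[i] for s in samples]) for i in range(n)]
    return means, spreads

def generate(means, spreads):
    """Step 2: synthesise a new 'face' that matches the learned
    statistics -- content the person never actually produced."""
    return [random.gauss(m, sd) for m, sd in zip(means, spreads)]

if __name__ == "__main__":
    # Pretend these are facial-feature measurements taken from
    # genuine footage of the target.
    dataset = [[1.0, 2.0, 3.0], [1.2, 1.9, 3.1], [0.9, 2.1, 2.9]]
    means, spreads = learn_patterns(dataset)
    fake = generate(means, spreads)
    print(fake)  # a brand-new sample resembling the originals
```

The danger the article identifies follows directly from this structure: once the patterns are learned, the generator can produce an unlimited stream of novel, plausible content without any further involvement of the person depicted.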

What makes modern deepfakes uniquely dangerous is their unprecedented realism. Earlier manipulated videos were often easy to spot; today’s deepfakes can be so convincing that even experts struggle to identify them without sophisticated tools. Furthermore, the barrier to entry has collapsed: virtually anyone with a smartphone and internet access can generate highly manipulative content, leading to a dramatic surge in misuse.

Common Misuses of Deepfake Technology:

Harassment and Humiliation: Individuals, particularly women, are targeted through the creation and dissemination of non-consensual deepfake pornography, a severe act of gender-based cyberviolence and a violation of dignity.

Defamation: Deepfake videos are potent tools for character assassination, falsely depicting individuals making offensive or illegal statements, inflicting irreparable damage to reputations.

Political Manipulation: In a democracy like India, deepfakes threaten electoral integrity by spreading fake videos of political leaders to influence public opinion, sow division and undermine democratic institutions.

Financial Fraud: Sophisticated deepfake audio/video is used to impersonate company executives or family members to issue fraudulent instructions, leading to corporate losses or financial scams.

Blackmail and Extortion: The threat of releasing a damaging, fabricated deepfake is used as a powerful lever to coerce and extort money or favors from victims.

Why Deepfakes Are a Serious Problem in the Indian Context

India’s digital ecosystem makes it especially vulnerable to deepfake misuse due to several converging factors:

Hyper Connectivity and Rapid Virality: Affordable smartphones, cheap internet and widespread social media use mean information spreads across the country with extraordinary speed. This environment allows deepfakes to achieve viral status within minutes, often reaching millions before they can be fact-checked.

Cultural Trust in Video Evidence: There is a strong, often unquestioning belief in the authenticity of video evidence; the adage ‘seeing is believing’ holds sway. Deepfakes exploit this foundational trust, especially when circulated within closed social circles like family groups.

The Speed Differential: The pace of legal and platform intervention is fatally slow compared to the instantaneous speed of digital circulation. A malicious deepfake can be shared thousands of times while legal processes are still in their infancy, ensuring the content’s persistence.

Official Recognition: The Government of India has formally recognised the gravity of the threat by issuing strong advisories and directives to social media platforms, signaling that deepfakes are a significant national concern impacting public order and individual safety.

India’s Existing Legal Framework: A Patchwork Approach

A core difficulty lies in the fact that India must prosecute deepfake offenses using laws that were designed before AI existed.

  1. The Information Technology Act, 2000 (IT Act)

The IT Act serves as the primary recourse for many digital offenses:

Section 66D- Cheating By Personation: Applicable when deepfakes are used to impersonate someone for fraud (e.g., a deepfake video call used to trick someone into transferring money).

Section 66E- Violation of Privacy: Protects against the unauthorized sharing of private images, and can be stretched to cover the misuse and manipulation of a person’s identity data.

Sections 67 and 67A- Obscene/Sexually Explicit Content: Deepfake pornography is most often prosecuted under these sections, which penalize the online publication or transmission of sexually explicit material.

While these sections offer some leverage, they require fitting a 21st-century AI problem into a 20th-century legal structure. Enforcement is limited by their original intent, which was focused on traditional cybercrime, not AI-generated synthetic media.

  2. The Bharatiya Nyaya Sanhita, 2023 (BNS)

The replacement of the Indian Penal Code with the BNS offers modernized criminal provisions, several of which can be applied reactively to deepfake harms:

Section 356- Defamation: Applicable if a deepfake causes harm to a person’s reputation by spreading false information (e.g., fake videos portraying individuals in a negative light).

Section 75- Sexual Harassment: Deepfakes used to harass women, even without physical contact, fall under this provision due to the nature of the online conduct.

Section 77- Voyeurism: Addresses the creation or distribution of non-consensual sexual content, which applies directly to deepfake pornography.

Section 351- Criminal Intimidation: Used when a victim is threatened with the release of damaging deepfake content.

Though the BNS allows for the punishment of the consequences of deepfakes (defamation, harassment, or intimidation), it critically fails to regulate the creation or distribution of the underlying AI technology or to mandate transparency for synthetic media.

Deepfakes and Fundamental Constitutional Rights

Deepfakes violate core fundamental rights guaranteed by the Indian Constitution:

Right to Privacy (Article 21): Established in Justice K.S. Puttaswamy v. Union of India (2017), privacy includes the right to control one’s image, voice, and personal identity. Deepfakes violate this by misappropriating a person’s unique biometric data without consent for malicious purposes.

Right to Reputation (Article 21): As held in Subramanian Swamy v. Union of India (2016), reputation is a facet of the right to life. Deepfakes that spread false narratives directly violate this constitutional protection.

Freedom of Speech and Reasonable Restrictions (Article 19): Deepfakes engineered to deceive and cause demonstrable harm cannot claim the shield of legitimate freedom of expression under Article 19(1)(a). They fall within the ‘reasonable restrictions’ of Article 19(2) owing to their malicious intent and fraudulent nature.

Indian courts have shown sensitivity to online abuse, condemning the circulation of morphed images and recognizing the importance of digital dignity, providing a judicial basis for addressing deepfake harms.

Practical Challenges in Enforcement

The regulation of deepfakes is severely hampered by practical hurdles:

Identification of Creators: Perpetrators often hide behind fake accounts and foreign servers, making attribution a massive technological challenge.

Speed vs Law: The slow pace of the legal process is ineffective against content that goes viral in minutes.

Lack of Technical Expertise: Law enforcement agencies often lack the specialized training and cutting-edge forensic tools needed to accurately detect deepfakes and gather evidence.

Jurisdictional Issues: Content created abroad but viewed in India complicates prosecution, requiring difficult cross-border cooperation.

The Imperative for a Dedicated Deepfake Law

Relying on old laws is unsustainable. India urgently needs a specific, comprehensive legal framework to move beyond reactive punishment to proactive regulation. A dedicated law would:

  1. Define Deepfakes Clearly: Provide a precise, legally binding definition of deepfakes and generative AI content.
  2. Penalize Malicious Content: Specifically penalise the malicious creation and knowing sharing of deepfake content with intent to defraud, harass or defame.
  3. Mandate Transparency: Crucially, mandate the clear, indelible labelling (or watermarking) of all AI-generated content to ensure public awareness and prevent deception.
  4. Ensure Fast Takedown: Establish clear, time-bound legal mechanisms compelling social media platforms to remove illegal deepfake content (especially non-consensual sexual imagery) within a few hours.
  5. Prioritize Victim Dignity: Provide strong rights for victims to seek fast, effective relief and the removal of fabricated content.
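
At a technical level, the labelling mandate proposed above could work along the lines of the following hypothetical sketch: a machine-readable provenance record cryptographically bound to the file’s contents, so that the “AI-generated” declaration can be verified, and verification fails if the media is tampered with after labelling. The field names and scheme here are purely illustrative and are not drawn from any existing Indian statute or industry standard.

```python
import hashlib
import json

def label_synthetic_media(content: bytes, generator: str) -> str:
    """Return a JSON provenance label binding the file's hash to a
    declaration that it is AI-generated. (Illustrative fields only.)"""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,    # the mandated disclosure
        "generator": generator,  # which tool produced the content
    }
    return json.dumps(record, sort_keys=True)

def verify_label(content: bytes, label_json: str) -> bool:
    """Check that a label actually belongs to this file: if the content
    was altered after labelling, the hash no longer matches."""
    record = json.loads(label_json)
    return record["content_sha256"] == hashlib.sha256(content).hexdigest()

if __name__ == "__main__":
    video = b"...synthetic video bytes..."
    label = label_synthetic_media(video, "example-generator-v1")
    print(verify_label(video, label))         # True: label matches file
    print(verify_label(video + b"x", label))  # False: content tampered
```

A scheme of this kind would give platforms and courts an objective, automatable test for whether a given file carries the disclosure the law requires, rather than relying on human inspection alone.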

The Way Forward

To effectively combat this digital threat, India must implement a multi-pronged strategy:

  1. Legal Reform: Immediately initiate the drafting of a dedicated Deepfake Regulation Act focused on transparency, malicious creation and platform accountability.
  2. Technical Capacity: Invest heavily in training police and prosecutors in AI detection and digital forensics.
  3. Public Awareness: Launch national campaigns to educate citizens on how to recognize and report synthetic media, reducing cultural reliance on ‘seeing is believing’.
  4. Platform Accountability: Strengthen Intermediary Guidelines to impose severe penalties on social media platforms that fail to swiftly remove illegal deepfake content.
  5. Ethical AI Governance: Develop a national policy ensuring that the development and deployment of generative AI adhere to robust safety and anti-misuse standards.

Conclusion

Deepfakes represent one of the most serious challenges of the digital age, threatening personal dignity, corporate stability and democratic processes. While technology has advanced rapidly, the law in India is still catching up, offering only partial and often insufficient solutions.

To protect privacy, dignity and public trust, India must move towards a comprehensive, forward-looking legal framework. Regulating deepfakes is no longer optional; it is essential for preserving the integrity of the digital world.
