Author: Nidhyasri
Introduction
The rapid expansion of artificial intelligence across the commercial, governmental, and private sectors has presented India with unparalleled opportunities and grave risks. Though the country has introduced wide-ranging reforms, enacting the Digital Personal Data Protection Act, 2023 (DPDP Act) and replacing the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023 (BNS), these laws were not drafted to account for autonomous systems, generative AI, and high-risk algorithmic decision-making. Broad as they are, these legal mechanisms are therefore bound to struggle with harms caused by AI: deepfakes, automated fraud, algorithmic discrimination, AI-enhanced cybercrime, and automated decision-making carried out with partial or no human supervision.
Harms occurring through AI are not merely regulatory failures; they are human failures of representation, dignity, and justice, because victims suffer losses for which the present law can offer no adequate remedy or punishment. This human dimension makes the absence of any AI-specific statute all the more stark. India stands at a point where technological acceleration has outpaced legislative foresight, creating gaps that cannot be managed effectively through traditional interpretations of criminal law, tort principles, or constitutional doctrines alone.
Shortfalls of the IT Act, the DPDP Act, and the Bharatiya Nyaya Sanhita in addressing AI crimes:
The Information Technology Act, 2000, despite several amendments and supplementary rules, remains fundamentally drafted for human-perpetrated cyber offences. It presumes intentionality, direct causation, and identifiable perpetrators, elements which do not apply neatly to AI systems capable of autonomous conduct. Where AI generates defamatory content, drives phishing campaigns, or impersonates a real person through a deepfake, the IT Act does not spell out clearly whether the developer, the deployer, or the user is to be held liable.
The DPDP Act regulates personal data processing, but it does not address algorithmic opacity, model hallucinations, biometric inferences, or the misuse of synthetic data. AI models often rely on non-personal or anonymized data that falls outside the Act’s purview, even though such data can still be used to harm individuals through profiling or manipulation.
In the criminal domain, the Bharatiya Nyaya Sanhita, while modernized, assumes human mens rea as the basis for culpability, whereas AI systems operate on probabilistic outputs rather than intention. The provisions on cheating, fraud, impersonation, sexual offences relating to digital materials, extortion, and identity misuse do not anticipate intelligent systems capable of autonomously initiating such acts. In the absence of a legislative anchor defining algorithmic accountability, courts will wrestle with fitting AI-generated harm into human-centric doctrines such as abetment, conspiracy, or common intention.
Emerging Crimes Driven by AI and the Problem of Attribution:
Advanced AI tools now make possible crimes that are qualitatively different from conventional cybercrimes. Deepfake videos created for extortion or political manipulation can be produced in seconds, and once the underlying models are released publicly they leave no traceable human perpetrator. Fraudulent voice cloning has already caused immense financial loss in India and elsewhere. The damage is acutely personal: the victim is violated not just economically but psychologically, with their own identity used as a weapon against them.
The BNS provisions on cheating, extortion, and defamation do not directly address harms whose perpetrator is a partly automated process. Conventional mens rea collapses when an AI system “hallucinates” defamatory statements or autonomously crafts an extortion message. Absent a statutory scheme assigning responsibility among developers, model trainers, platform providers, and end-users, courts must resort to inconsistent interpretations that breed legal uncertainty.
International Developments and Comparative Jurisprudence:
Several jurisdictions have taken steps toward specific AI legislation, acknowledging that general technology laws are insufficient. The European Union’s AI Act, 2024, is today the world’s most comprehensive AI-specific legislation: it categorizes AI systems into risk classes, compels transparency in generative AI, sets liability standards for algorithmic harm, and mandates human oversight in high-risk applications. The United States, while adopting a sectoral approach, has issued binding federal directives on AI procurement and accountability, and several states have enacted laws criminalizing the misuse of deepfakes in elections and pornography. China has promulgated regulations on recommendation algorithms, deep synthesis technology, and generative AI services, focusing on real-name verification, watermarking, and safety assessments.
These frameworks underscore that AI demands a body of law in its own right rather than an extension of cyber law, since the harm flowing from AI is system-driven, probabilistic, and sometimes autonomous. They also highlight the need for robust enforcement mechanisms, algorithmic audits, and a human-rights-centric regulatory ethos that India currently lacks.
Judicial Developments and Case Law Relevance:
Even though AI-specific case law in India is scant, several judicial principles point indirectly to the emerging need for statutory reform. Courts have underlined informational privacy, data dignity, and algorithmic fairness in decisions such as K.S. Puttaswamy v. Union of India (2017), which affirmed the fundamental right to privacy. That reasoning extends logically to AI systems that collect or infer sensitive data without consent.
In Shreya Singhal v. Union of India, the Supreme Court insisted that digital restrictions must be both narrowly tailored and clearly worded. These standards create barriers to broad, vague applications of the IT Act to AI-driven harms; novel crimes, such as creating malicious deepfakes or committing autonomous fraud, must therefore be narrowly defined in a separate statute rather than stretched to fit dated categories.
Global cases have also influenced the debate. The UK Post Office Horizon litigation, though not about modern AI, demonstrates the catastrophic consequences of relying on opaque automated systems without legal accountability. The logic applies even more strongly to AI models with non-deterministic behaviour.
The Constitutional Imperative for AI Regulation in India:
AI today directly implicates constitutional guarantees of equality before the law, free speech, procedural fairness, and privacy. When automated systems generate biased outputs, whether in credit decisions, job screening, or policing assistance, they offend Article 14’s mandate against arbitrary state action. AI-driven misinformation distorts political participation and infringes Article 19 rights. Government reliance on opaque AI tools for welfare distribution or digital surveillance undermines Article 21’s guarantees of dignity and due process. India needs a statutory framework that is rights-protective rather than merely punitive, one that ensures AI does not erode constitutional freedoms while still allowing innovation. Absent codified principles of algorithmic accountability, transparency, human oversight, and remedy mechanisms, constitutional rights will be whittled away by private technological power.
Why an AI-Specific Law Is Urgently Needed:
India stands at a regulatory crossroads. AI-enabled crimes blur the line between human and machine agency, making it difficult to apply traditional doctrines of intention, knowledge, and causation. A landscape of fragmented responsibility means that nobody, whether developer, deployer, or end-user, is ever demonstrably liable. Victims currently have no avenue for meaningful recourse, since no statute prescribes duties of care, safety standards, or mechanisms of redress for the malfunction, misuse, or malicious deployment of AI. A dedicated AI law would bring clarity by defining responsible actors, setting standards for algorithm design and deployment, prohibiting harmful practices, facilitating independent audits, and creating both civil and criminal liability frameworks. It would harmonize with the BNS, the DPDP Act, and the IT Act while addressing gaps that those instruments structurally cannot fill.
Conclusion
The artificial intelligence domain is too complex, dynamic, and impactful to be governed by general technology laws alone. The current legal framework is built on human intent and conventional cyber offences, and it cannot adequately conceptualize harms caused by autonomous systems that learn, evolve, and act with minimal human intervention. The resulting ambiguity over liability creates uncertainty among innovators and leaves victims in legal limbo.
In my view, India requires a specific AI law that strikes a balance between innovation and accountability, embeds constitutional protections, incorporates human-centred safeguards, and aligns with international standards. A future-proof legal regime must recognize that technology is only as just as the laws that govern it. Without such a law, India risks enabling an ecosystem where AI-driven crimes grow faster than the legal system can respond, at the cost of public trust, safety, and human dignity.
