Author: Charvi Tank
ABSTRACT
Artificial Intelligence has swiftly revolutionized governance, commerce, healthcare, and law enforcement by allowing data-driven decision-making at a scale that was previously unimaginable. While AI envisions efficiency and innovation, its pervasive reliance on personal data, algorithms shrouded in mystery, and automated decision-making triggers grave issues regarding privacy, consent, accountability, and rights. In India, the legal regime on AI remains fragmented and indirect, with the current laws, including the Digital Personal Data Protection Act, 2023, only partly addressing data protection and not adequately regulating AI-specific risks. The paper analyses the regulatory gaps in India’s extant legal framework relating to artificial intelligence and presents issues which indicate a palpable conflict of AI practices with established data protection principles and constitutional safeguards. Based on a comparative analysis and policy evaluation, this article makes an argument for the adoption of a comprehensive, risk-based, and rights-centric AI regulatory framework. The paper concludes by identifying necessary reforms, such as legislating AI-specific laws and establishing appropriate institutional oversight mechanisms to ensure alignment with global best practices with respect to ethical and responsible AI governance in India.
INTRODUCTION
There has been rapid progress in Artificial Intelligence, and this has significantly changed the way decisions are made in both the public and private sectors. From predictive policing to credit scoring to targeted advertising, to name but a few areas of application, AI is increasingly used to make decisions that directly impact people’s rights and freedoms. These applications of AI may be highly beneficial from an efficiency and scalability perspective, but they raise deep legal issues.
AI systems are driven by massive amounts of data, much of which is personal or sensitive. Such systems also typically rely on complex algorithms that operate as “black boxes”, making their decision-making hard to comprehend. The risks linked with the unbridled application of AI systems include threats to privacy, discriminatory treatment, problems of consent, and the erosion of human oversight in critical decision-making.
In the Indian scenario, the legal framework for the use of artificial intelligence is not fully developed. Although the Digital Personal Data Protection Act, 2023 is a major move towards the regulation of personal data, the legislation does not target the regulation of artificial intelligence. Therefore, the use of artificial intelligence remains in a kind of legal twilight zone regarding compliance with the Indian Constitution’s guarantees of equality, dignity, and due process. This paper aims to critically evaluate the inadequacy of India’s current legal system in addressing the risks posed by AI and to explore the need for a comprehensive legal framework to govern AI. It discusses the convergence between AI technology and data protection norms, assesses global models, and draws inferences suitable for the Indian legal ecosystem. By emphasizing a harmonious legal solution that supports fundamental rights as well as the growth of AI, this paper intends to contribute to the ongoing discussion on ethical AI regulation in India.
WHAT ARE PRIVACY LAWS IN INDIA?
Digital Personal Data Protection Act, 2023
The Digital Personal Data Protection Act was enacted on 11 August 2023. It is India’s first comprehensive law for the protection of digital personal data processed within India. The Act sets out a framework for organisations that collect personal data and how they should protect it. It follows the SARAL approach: Simple, Accessible, Rational, and Actionable. The Act is written in a simple format so that both data collectors and individuals can understand the rules without difficulty. With the rapid advance of technology, the Act seeks to ensure that individuals can freely use and learn from technology without worrying about their data being leaked or misused.
The Act introduces the following key terms:
- Data Fiduciary: A person who, alone or together with other persons, determines the purpose and means of processing personal data.
- Data Processor: An entity that processes personal data on behalf of a data fiduciary.
- Data Principal: The person to whom the personal data relates. In the case of a minor, the parents or lawful guardian act on their behalf; in the case of a person with a disability, the lawful guardian does.
- Consent Manager: An entity that provides a platform for data principals to give, manage, and withdraw their consent.
Under this Act, the Data Protection Board of India is also created to regulate compliance, address grievances, and impose penalties on defaulters. Data fiduciaries can face penalties of up to ₹250 crore in the case of a data breach. Misuse of personal data, or failure to notify the Board or the affected individuals of a data breach, can attract penalties of up to ₹200 crore. Any other offence committed by fiduciaries can attract penalties of up to ₹50 crore. (Press Information Bureau, 2025)
The rules and regulations under this Act do not apply to data available in the public domain, as such data is accessible to everyone.
With the notification of the DPDP Rules in 2025, the Act has come fully into effect, providing a safer environment for netizens.
HOW AI USES PERSONAL DATA
In today’s world, artificial intelligence (AI) has become a core part of our daily lives. From writing messages to retrieving information to managing tasks, much of our life now depends on AI, which makes it easier. But all these benefits come at the cost of our personal data. While AI makes our lives easier, it also poses a great risk to our privacy by using our data to train itself so it can perform tasks accordingly. As technology advances day by day, we also need to stay informed about how our data is used.
- Usage Of Personal Data To Train Datasets
AI requires vast datasets to achieve accuracy. It trains itself by scanning facial-recognition datasets containing millions of biometric records; via GPS it tracks our location; and it captures our images, fingerprints, photos, videos, and activities. By analysing this data, AI systems keep training themselves, but the major issue is that it is not transparent how this data is processed and used further, increasing the risk of data misuse. (Velaro, n.d.)
- Usage Of Personal Data For Profiling And Prediction
AI creates a personal profile of an individual by analysing their choices on social media platforms: their likes and dislikes, their current and past posts, their purchase history, orders, and healthcare and financial choices. Whenever internet users go online, they leave digital footprints by accepting various cookies or visiting different websites, so even if they delete their history or a post, it can still be detected and used by AI to improve itself using that private data. By analysing similar data across datasets, AI can also produce biased predictions and spread misinformation. (Velaro, n.d.)
- Usage Of Personal Data In Automated Decision Making
An AI model is often initially trained on old, stereotype-laden data; this can be understood from the example of Meta revealing that it used posts dating back to 2007 to train its AI. AI mostly relies on personal data, which leads to automated decision-making in shortlisting candidates for job applications, approving loan applications, or even identifying fraudulent calls or messages. This raises major concerns regarding the selection criteria and bias in AI models. (Velaro, n.d.)
Due to the lack of transparency in AI decision-making, concerns arise about private data being misplaced or used wrongfully.

Overall, AI uses personal data to learn, improve, make automated decisions, and perform tasks. It is an efficient and productive technology, but at the same time it gives rise to serious concerns about privacy risks and the misuse of our personal data.
CONFLICT BETWEEN AI AND THE DPDP ACT: REGULATORY GAPS IN THE DPDP ACT REGARDING AI
The Digital Personal Data Protection Act, 2023 was introduced to safeguard private data and establish clear guidelines for data fiduciaries, such as providing notice, obtaining consent, and safeguarding the data collected. But the Act does not establish any regulations or compliance requirements for artificial intelligence (AI). AI uses vast amounts of personal data to train itself and perform tasks, building datasets from sources such as social media, biometrics, facial recognition, and GPS. These sources contain vast amounts of personal data, and without any compliance framework the risk of data breaches and misuse of private data increases. The first major issue is the conflict between AI’s data practices and the DPDP Act’s requirements of consent and purpose limitation. The Act requires that data be collected for a clear purpose and processed only for that purpose, but AI systems constantly collect and reuse personal data for multiple, unpredictable tasks while training themselves. This mismatch increases the risk of data being used beyond what the individual originally consented to.
Another concern is algorithmic bias in AI. The data used to train AI often reflects social biases against particular groups of people, which leads to discrimination when AI assesses job applications, allocates work, and performs many other tasks. India’s legal framework does not explicitly address algorithmic fairness. The absence of clear mandates for fairness audits, bias detection, and data diversity standards increases the likelihood of systemic discrimination in AI-powered decision-making processes.
AI systems operate as ‘black boxes’: their internal workings are hidden and their decision-making process is not transparent. This opacity can have consequences that heighten privacy concerns. (IISPPR, n.d.)

The Indian legal framework lacks guidelines mandating transparency and disclosure of how AI models work, leaving individuals without any legal remedy at the time of violations.
Under the DPDP Act, 2023 there are no guidelines for automated decision-making. A decision taken by a company using AI cannot be challenged, and the company is under no legal obligation to justify its decisions.

The DPDP Act, 2023 also does not apply to the large volumes of data available in the public domain, which are commonly used for AI training purposes.

Another concern is security and data retention. The DPDP Act, 2023 requires reasonable security safeguards, but AI stores information inside its model rather than in a conventional database, so even after the underlying data is deleted, the model retains the patterns or personal data it learned. This is risky because the AI may leak this data while performing tasks. The DPDP Act, 2023 provides no specific guidelines on how such data is to be safeguarded, and withdrawal of consent is of little use once the data has become part of the training model. (IISPPR, n.d.)
CASE STUDIES
To understand the impact of AI on privacy, it is important to analyse some practical examples of how our data is used, misused, and leaked through AI-based systems.
- Delhi Police Facial Recognition System (FRT)
Under the ‘Safe City’ project, the Delhi Police deployed a facial recognition system in CCTV cameras to monitor suspects and criminals. The project involved setting up command centres for video analytics, artificial intelligence, and facial recognition. The system captures and stores thousands of pictures and videos every day. This did help the police, but all of these videos and photos are collected without any legal framework in place to regulate the practice. This conflicts with the judgment in Justice K.S. Puttaswamy v. Union of India (2017), which recognized the right to privacy under Article 21. All details under this system are being stored without proper consent. In Sadhan Haldar v. NCT of Delhi, the Delhi High Court authorized the Delhi Police to use the technology for tracking and reuniting missing children, but an RTI reply later revealed that it was also being used to match faces for investigation purposes. This is a case of function creep, as the technology’s scope expanded beyond its original purpose. So far, the system has stored thousands of photos and videos without the consent of the persons concerned, violating the fundamental right to privacy. Without any legal framework to regulate it, the system raises serious concerns regarding the misuse and breach of personal data. (Vidhi Centre for Legal Policy, 2021)
- Rise Of Deepfake Cases In India
With the rapid rise of AI, deepfake videos have become a crucial issue. Videos of actors and politicians have been circulated to promote fake schemes and commit fraudulent activities for money. For instance, an edited video of Union Home Minister Amit Shah circulated before the polls falsely claimed that he had promised to scrap reservations for Scheduled Castes (SC), Scheduled Tribes (ST), and Other Backward Classes (OBC) if elected. While the clip was edited using simple video-editing tools, the Bharatiya Janata Party (BJP) and several mainstream media organisations mislabelled it as a “deepfake” (Times of India, 2024). A deepfake video of actress Rashmika Mandana was also circulated, superimposing her face on another person’s body, dressed inappropriately (Nyaaya, 2023). Such deepfake videos target women and public figures, damaging their social image and causing mental distress. They violate individual rights, yet cannot be effectively challenged because there is no legal framework regulating AI.
- Meta Using Facebook And Instagram Posts From 2007 To Train AI
Meta revealed that it had used posts and data publicly available on its social media platforms since 2007 to train its AI. This sparked great concern among users, who had never consented to their data being used for AI training and who were not given an option to opt out. The scraped posts also included images of children, which Meta did not exclude. Because the training data is old, it also increases the chances of the AI being biased while performing tasks. (The New York Times, 2024)
ANALYSIS OF LAWS FROM OTHER COUNTRIES
Different countries have introduced laws to safeguard digital personal data and regulate AI.
- European Union – General Data Protection Regulation (GDPR)
The General Data Protection Regulation (GDPR) is among the world’s strongest privacy laws. In effect since 25 May 2018, it applies to any organization, wherever located, that targets or collects data relating to people in the EU. It imposes heavy fines, reaching up to €20 million or 4% of global annual turnover, on those who violate its privacy and security standards. Its most important feature is Article 22, which protects individuals from decisions made solely through automated processing without human oversight: any decision made by AI can be challenged, and individuals can question the selection criteria in cases of unfair decision-making. The GDPR also requires companies to conduct a Data Protection Impact Assessment (DPIA) before deploying any high-risk technology. These measures make the GDPR considerably stronger than India’s DPDP Act.
- European Union – AI Act (2024)
The EU has also introduced the AI Act, the world’s first comprehensive law created specifically for artificial intelligence. It classifies AI systems into risk categories: unacceptable risk, high risk, limited risk, and minimal risk. High-risk AI systems must follow strict rules, including accuracy checks, bias testing, human supervision, and transparency requirements. Harmful practices such as social scoring and mass surveillance are banned. The AI Act directly regulates how AI systems operate, something India does not yet have.
- United States – CCPA and AI Safety Guidelines
The United States does not have a single national privacy law, but states like California have strong rules like the California Consumer Privacy Act (CCPA). CCPA gives people the right to know how their data is collected, the right to access it, and the right to opt out of data selling and targeted advertising. The U.S. government has also issued AI safety guidelines and an Executive Order on Artificial Intelligence, which focus on transparency, security testing, and preventing harmful automated decisions.
These international examples show that while India has taken an important step with the DPDP Act, we still need clearer and stronger rules to regulate AI systems and protect individuals from biased or unfair decisions.
SOLUTION: WHAT INDIA NEEDS?
To fill the regulatory gaps that exist in relation to Artificial Intelligence in the Indian context, it is essential that the country adopts an independent, balanced, and futuristic approach that promotes innovation in the field while also protecting fundamental rights.
- Quantifiable Thresholds in Risk-Based Regulation
India’s framework for AI regulation should be guided by specific, measurable thresholds rather than blanket categories. Large-scale AI systems, or those processing sensitive personal information (for instance, biometric identification, predictive policing, or automated welfare allocation), should trigger heightened regulatory requirements once they cross defined numerical limits, such as more than 50,000 individuals affected, biometric or health data processed, or state- or national-level deployment. At that point, deployers of AI systems should be required to perform Algorithmic Impact Assessments quantifying bias percentages, false-positive and false-negative rates, and levels of human intervention, with reassessment required for every 10,000 additional affected individuals. This model ties regulation to scale and impact, ensuring that meaningful oversight applies before widespread harm can occur.
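As a purely illustrative sketch, the proposed thresholds can be expressed as a simple decision rule. All numeric limits and category names below are this paper’s own proposals, not provisions of any enacted law:

```python
# Illustrative sketch of the proposed thresholds: 50,000 affected individuals,
# biometric/health data, state- or national-level deployment, and reassessment
# every 10,000 new individuals. These figures are the paper's proposal only.

SENSITIVE_CATEGORIES = {"biometric", "health"}


def requires_heightened_oversight(affected_individuals, data_categories, deployment_level):
    """Return True if any proposed threshold for heightened regulation is crossed."""
    return (affected_individuals > 50_000
            or bool(SENSITIVE_CATEGORIES & set(data_categories))
            or deployment_level in {"state", "national"})


def reassessment_due(new_individuals_since_last_assessment):
    """A fresh Algorithmic Impact Assessment every 10,000 newly affected individuals."""
    return new_individuals_since_last_assessment >= 10_000
```

Under this rule, a small district-level system handling only browsing data would stay below the heightened-oversight bar, while any system touching biometric data would cross it regardless of scale.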
- Mandatory Transparency, Audits, and Public Disclosure
The country must progress from voluntary transparency principles to mandatory algorithmic audits underpinned by public disclosure requirements. High-risk AI systems should be independently audited every 12 months for accuracy and fairness against statistically defined benchmarks, such as error-rate deviations across protected groups never exceeding ±5%. Audit outcomes, core system objectives, the categories of data used for training, and the degree of automation involved must be published through a centralized, publicly accessible national AI registry. Quantified audit disclosure turns transparency from an idealistic abstraction into a measurable compliance obligation, enabling informed public and judicial review.
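The ±5% benchmark suggested above can likewise be sketched as a hypothetical audit check. The 5% figure and the group labels are this paper’s proposal, not an existing standard:

```python
def passes_fairness_benchmark(error_rates_by_group, max_deviation=0.05):
    """Return True if error rates across protected groups stay within the
    proposed +/-5% band of one another (max_deviation is the paper's proposal)."""
    rates = list(error_rates_by_group.values())
    return max(rates) - min(rates) <= max_deviation


# Hypothetical audit figures, for illustration only:
audit = {"group_a": 0.08, "group_b": 0.10, "group_c": 0.06}
compliant = passes_fairness_benchmark(audit)
```

A system whose worst and best group error rates differ by more than the band would fail the audit and, under the proposal above, have that failure recorded in the public registry.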
- Strict Liability and Time-Bound Authorization for High-Risk AI
India should adopt a strict liability regime, along with mechanisms for time-bound authorization, for AI systems with the potential to cause severe harm to fundamental rights. In cases of biometric surveillance, mass profiling, or automated denial of welfare benefits, the victim would only need to prove harm, not negligence or intent. High-risk AI deployments should automatically expire after a specified period, say three to five years, and be renewable only upon satisfactory audit outcomes and demonstrable public benefit. This ensures that AI systems are continuously reviewed rather than kept running in perpetuity.
- Institutional Oversight and Democratic Accountability
Effective AI regulation demands strong institutional mechanisms underpinned by democratic oversight. A dedicated AI regulatory authority should be empowered to approve high-risk systems, issue financial penalties, and suspend deployments that fail to meet compliance thresholds. Government departments deploying AI systems must be mandated to devote a fixed share of their project budgets, say 5-7% of all AI spend, to bias mitigation, explainability tools, and grievance redress mechanisms. Additionally, an annual AI accountability report should be tabled before the Parliament of India, outlining the number of high-risk systems in operation, complaints received, average resolution time (ideally under 30 days), and enforcement actions pursued. Embedding AI governance within institutional and parliamentary processes in this way strengthens accountability and public trust.
CONCLUSION
Artificial Intelligence has emerged as a transformative force that could make a major contribution to improving India’s efficiency and innovative potential. Nevertheless, the lack of a properly developed regulatory framework in the field threatens the principles of privacy, autonomy, equality, and accountability. The Digital Personal Data Protection Act, 2023 provides a baseline of personal data protection but does not resolve the challenges posed by AI systems.
To ensure prudent deployment, India must transcend piecemeal, reactive regulation and adopt a targeted, risk-sensitive, and rights-focused approach to AI governance. Forward-thinking regulation must balance control and oversight on the one hand with innovation on the other. This requires a dedicated regulatory body and keeping Indian regulation aligned with international best practice.
In conclusion, a regulatory framework for AI must be understood not as a means of holding back technological progress, but as a means of channelling that progress so that it does not violate constitutional rights. This will enable India to reap the full benefits of artificial intelligence while remaining within the constitutional ethos of social welfare.
Bibliography
IISPPR. (n.d.). AI and Data Protection: Challenges in Automated Decision-Making. Retrieved from IISPPR: https://iisppr.org.in/ai-and-data-protection-challenges-in-automated-decision-making/
Nyaaya. (2023, December 1). What We Can Learn From Rashmika Mandana’s “Deepfake” Controversy. Retrieved from Nyaaya: https://nyaaya.org/nyaaya-weekly/what-we-can-learn-from-rashmika-mandanas-deepfake-controversy/
Press Information Bureau. (2025, November 17). DPDP Rules, 2025 Notified. Retrieved from PIB India: https://www.pib.gov.in/PressReleasePage.aspx?PRID=2190655®=3&lang=2
The New York Times. (2024, September 26). Meta AI Uses Public Posts for Training. Retrieved from The New York Times: https://www.nytimes.com/article/meta-ai-scraping-policy.html
Times of India. (2024, April 30). Complaint Filed Against Maharashtra Youth Congress for Sharing Deepfake Video of Amit Shah. Retrieved from Times of India: https://timesofindia.indiatimes.com/city/mumbai/complaint-filed-against-maharashtra-youth-congress-for-sharing-deepfake-video-of-amit-shah/articleshow/109706262.cms
Velaro. (n.d.). AI and Personal Data: Balancing Convenience and Privacy Risks. Retrieved from Velaro: https://velaro.com/blog/the-privacy-paradox-of-ai-emerging-challenges-on-personal-data
Vidhi Centre for Legal Policy. (2021, August 16). The Use of Facial Recognition Technology for Policing in Delhi. Retrieved from Vidhi Centre for Legal Policy: https://vidhilegalpolicy.in/research/the-use-of-facial-recognition-technology-for-policing-in-delhi/
