Regulating AI from the perspective of Developing Nations

Author: Utkarsh Singh, CMP Degree College Prayagraj

Artificial Intelligence, as the term itself suggests, means intelligence created artificially by human intelligence, or in other words, man-made intelligence. AI stands alongside other emerging technologies, such as the IoT (Internet of Things) and quantum computing, that are considered pillars of the Fourth Industrial Revolution. We have been witnessing the dynamics of the world change ever since AI was opened to the public at large. OpenAI's ChatGPT, launched in late 2022, was the first AI chatbot to reach a mass public audience. In just a span of two years, AI has grown tremendously, expanding into almost every aspect of life, including the health, education, agriculture, defence, information, and economic sectors. With the rise of AI, multiple companies have jumped into the race to secure market share for themselves. Every now and then, a new and better AI tool is launched in the open market, making lives easier. But every coin has two sides: with the rapid rise of AI, various concerns have been raised by stakeholders across the globe.

Why does AI need to be regulated? 

Some of the major reasons why AI needs to be regulated:

  • Data Protection: AI models are trained on the data they collect, and such collected data can lead to violations of privacy and to its misuse.
  • Violation of Intellectual Property Rights: AI has been known to infringe IPRs; one of the most recent examples, the “Ghibli art style” trend, highlights the issue.
  • Lack of Accountability: Since these AI tools generate outputs based on collected data, it is unclear who should be held accountable when those outputs cause harm.
  • Deepfakes: One of the most disruptive effects of generative AI. Deepfakes directly influence the legal landscape, giving rise to defamation cases and posing a potential future risk of tampering with electronic evidence (under evidence law). Example: the Archita Phukan case.
  • Ethical and Moral Challenges, including AI Bias. 
  • Cyber Security Threats
  • Misinformation: AI tools and chatbots can generate and spread false information at scale, and the data they gather can have far-reaching, damaging effects on countries, especially developing economies, if it reaches the wrong hands.
  • Jurisdictional Challenges 
  • Sensitive and dangerous Information leaks 

International Efforts in Regulating Existing Lacunae:

The issue of regulating AI was recognised by governments across the world, which led to the world’s first international summit on AI, the AI Safety Summit, held at Bletchley Park in the United Kingdom in November 2023. It was attended by representatives from 28 countries, including India, the US, the EU, the UK, and China. The key takeaways from the summit were recognition of the disruptive and damaging potential of AI, with concerns over misinformation, privacy, and deepfakes, and the realisation that current laws and institutions lack jurisdiction over the expansive and rapidly growing domain of AI. It was realised that AI has reached a point where existing laws are unable to keep up with it. The summit ended with the Bletchley Declaration, the first international commitment to manage AI-related risks together.

Major existing legal frameworks and regulations include the Organisation for Economic Co-operation and Development (OECD) AI Principles, the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), the Global Partnership on AI (GPAI), the European Union’s AI Act, and the National Institute of Standards and Technology’s AI Risk Management Framework (USA).

These major regulations act as a guiding star, highlighting certain areas of focus: transparency, supervision, jurisdiction, updated laws, a conducive environment for new and emerging AI start-ups and their products, identification and classification of high-risk uses such as real-time AI-based surveillance, and regulation of the launch of newer and more advanced AI models, among others.

Impact of AI Revolution on Developed and Developing Economies:

The AI boom can help accelerate the growth of emerging economies (developing nations) and drastically shorten their development timelines. Integrating AI across different sectors transforms their output potential, drastically increasing efficiency. Yet while developed economies have made significant progress, developing nations have been left in the dust: most lack proper infrastructure, capital, and the political will to regulate AI.

Many of these developing countries lack AI-specific laws and depend upon sector-specific laws, and sometimes even moral codes. The lack of integrated, specific laws may cause clashes between these laws because, more often than not, a single AI-related issue or case affects more than one sector at once. This highlights the need not only to classify AI-related risks but also to regulate them. Matters such as data privacy need regulation, as they endanger not only the personal safety of citizens but also that of the nation at large. AI is controlled mostly by big private players who, with the data they collect, have the potential to disrupt a country’s sovereignty. Addressing such situations requires strong human oversight. Localisation of data centres is necessary to ensure data safety and prevent misuse. Investment in R&D, education, and AI adoption, along with international cooperation through shared infrastructure and know-how, can help close the gap between developed and emerging economies around the world.

In the race to lead the AI revolution, developing nations hold certain leverage that they can use against their competitors: they can begin with “soft laws” and later develop more stringent, harmonised laws that protect AI companies as well as their citizens. Having started late, these nations can use the opportunity to learn from the existing laws of developed nations and thereby shorten the evolution of their own AI-related laws. A push for transparency and the identification of high-risk sectors can help in drafting better-prepared laws, and impact assessments of existing laws can further help in this regard.

AI Regulation in India: 

India, as of now, does not have any AI-specific law. AI is currently regulated under the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023. The earliest step, taken by NITI Aayog in 2018 in the form of the “National Strategy for Artificial Intelligence”, gave preference to certain sectors such as education, agriculture, and healthcare. MeitY guidelines require that AI-generated content be labelled, enhancing transparency.

Conclusion 

AI has shown exponential growth in recent years, but it has also raised new concerns and amplified existing threats, such as deepfakes, cyber-security risks, and breaches of data privacy. Preventing such harms requires strong and elaborate regulation. Laws need to be updated from time to time so that they can keep up with rapid AI advancements. Regulation is needed, but over-regulation may become a hindrance to growth. These regulations need to be harmonised, risk-classified, and must cover all sectors affected by AI. Shared infrastructure and collaboration between developing and developed nations can play a crucial role in overall development for all.

Bibliography:

Primary Sources

  • European Union AI Act, 2024
  • Organisation for Economic Co-operation and Development AI Principles, 2019
  • National Institute of Standards and Technology’s AI Risk Management Framework, 2023 (USA)
  • UNESCO Recommendation on the Ethics of Artificial Intelligence, 2021
  • Digital Personal Data Protection Act, 2023 (India)
  • Information Technology Act, 2000 (India)
  • MeitY’s AI Governance Guidelines, 2025 (India)
