
India Implements Regulatory Advisory on AI and Generative Models

In a bold step toward regulating the deployment and use of Artificial Intelligence (AI) technologies, the Indian government has mandated that all AI models, including large language models (LLMs), generative AI software, and algorithms still in beta testing or otherwise deemed unreliable, must obtain the “explicit permission of the government of India” before they can be made available to users within the country. The directive was issued by the Ministry of Electronics and Information Technology (MeitY) in a precedent-setting advisory late on March 1, 2024.

Ensuring Fairness and Electoral Integrity

The advisory, while not possessing the force of law, clearly outlines the government’s expectations from technology platforms. It emphasizes the importance of preventing any form of bias, discrimination, or actions that could compromise the integrity of India’s electoral process through the use of AI technologies. This move is reflective of a broader global conversation on the ethical implications of AI and its potential to influence public opinion and democracy.

Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, indicated that this advisory is a precursor to more formal regulations. “We are doing it as an advisory today asking you (the AI platforms) to comply with it,” he stated, hinting at the future development of legislation that would enforce compliance.

A Response to Bias Accusations

This governmental action follows accusations of bias against Google’s AI model, Gemini, raised in a post on X after the model was asked whether Prime Minister Narendra Modi was a “fascist”. The controversy spotlighted the challenge of ensuring AI models remain free from bias, particularly when they address sensitive political questions.


Both Union IT & Electronics Minister Ashwini Vaishnaw and Chandrasekhar expressed their concerns, emphasizing that Indian users should not be subject to experiments with unreliable platforms. In response, Google announced its intention to rectify the identified issues and temporarily halted Gemini from generating images.

Advisory Details and Future Implications

The advisory also mandates platforms using generative AI to clearly label the potential unreliability of their output and suggests the implementation of a ‘consent popup’ to inform users about the inherent fallibilities of AI-generated content. This recommendation aims to enhance transparency and user awareness regarding the nature of AI-generated information.

Furthermore, the document extends its scope to include intermediaries and platforms involved in the “synthetic creation, generation, or modification” of content, requiring them to embed appropriate metadata for identification purposes, particularly in cases where the content could be used for misinformation or as deepfake material.

This development marks a significant step by the Indian government toward establishing a regulatory framework for AI, one that seeks to balance innovation with ethical considerations and user protection. As the technology landscape continues to evolve, such measures will be crucial for maintaining trust in digital platforms and ensuring that AI serves the public good without compromising democratic values or individual rights.
