
Indian AI Governance in Progress: Recalibrating Commercial Viability and Safety Standardisation

The ongoing discourse around AI safety in India is too often shaped by policy and ethics frameworks, or comparative law literature, imported from the Global North. These are dominated by abstract principles, a tendency to overlook the risk of regulatory arbitrage in local regulatory regimes, and AI hype driven by various market players. Replicating such models not only overlooks India’s unique socio-technical landscape but risks stifling innovation in environments that are resource-constrained, culturally diverse, and digitally uneven. What India urgently needs is a homegrown, resilient AI safety ecosystem that is scalable, economically frugal, and rooted in the country’s lived realities.

For instance, after US Vice President JD Vance’s remarks on AI safety and the viability of the European Union’s General Data Protection Regulation at the AI Action Summit in France in February 2025, a notable change followed: the UK Government renamed its AI Safety Institute, launched in line with the Bletchley Declaration of November 2023, as the “AI Security Institute”. However, G7 member states like Japan, under whose presidency a crucial AI safety understanding was adopted in Hiroshima in 2023-24, continue their AI safety initiatives. India’s AI governance, which remains a “work in progress” in comparison with the Council of Europe’s AI treaty and the EU AI Act, could become an exemplar for recalibrating commercial viability expectations of AI systems and the basic AI safety considerations around them.

Commercial Viability Expectations of AI Systems

In multiple binding and non-binding (or soft law) global AI frameworks, the term “intended purpose” (also known as specific purpose, specified purpose, purpose, or general purpose) carries interesting legal weight: the use case and technical features of an AI system determine how a legal or policy framework applies to it. Contrary to prevailing narratives, India’s challenge is not a lack of principles but a failure of implementation. Tech systems deployed across sectors often bypass existing cybersecurity and data protection safeguards, even if we take ISO and IEEE standards as a market benchmark. However AI systems are created and delivered, whether through third-party AI workarounds, by integrating AI as a component, or by building a standalone AI tool from scratch, certain data protection, privacy, and cybersecurity risks exist in practice across the world.

Risk Determination of AI Systems

However, the concept of risk determination in artificial intelligence and law is becoming a tricky problem. There are multiple kinds of “risks” that may be legally defined, or that have a tangible public policy basis, at least in common law jurisdictions such as India and the United States. The table below gives a picture of the most likely kinds of legal and policy issues around AI systems that could be generalised as “risks”.

The above table clearly shows that AI safety cannot be framed merely as a theoretical or ethical exercise. Some immediate risks are tangible and profound, while others are qualitative and may not yet be major concerns for India-based or India-targeting companies and research labs. This demands an approach focused on institutional trust, real-world safety, and enforceable redress, not just high-level principles and voluminous regulations. The debate must move beyond philosophical alignment to practical safeguards, especially for low-income and vernacular users who bear the brunt of AI failures. Designing for the margin, not the median, must be our guiding principle.

We must prioritise edge-case failures: the gig worker falsely flagged by fraud detection, the rural woman locked out of social benefits by a faulty biometric scan, or the senior citizen confused by an “AI assistant” that does not understand her dialect. Vernacular accessibility and robust grievance mechanisms must be central to safety standards, not afterthoughts. However, not every AI use case and its associated risks should be treated alike; the energy of AI safety research and governance initiatives cannot be diverted to every use case, since many are, and will turn out to be, commercially sub-standard.

AI Safety Research vs. AI Governance in India

Safety in the context of AI systems could mean enforceable explainability norms, mandatory human overrides, and transparent audit trails accessible to regulators. The health sector presents even more urgent risks: AI-assisted diagnostics and insurer risk engines trained on narrow datasets are being applied to broad populations with little transparency. Patients rarely know if a recommendation is AI-generated, what assumptions underlie it, or how to contest errors. AI safety in healthcare must be a rights-based obligation guaranteeing clarity, consent, and accountability. In fact, under the Digital Personal Data Protection Act, 2023 (DPDP Act), if a hospital, as data fiduciary (similar to a data controller under the GDPR), provides data to data processing entities (such as a third-party AI company acting as a vendor), then any data breach, misuse of data, or failure to enable the exercise of data protection rights makes the hospital liable, not the data processing entity.

This means the hospital would have to sign a contract or agreement covering liability sharing, accountability terms, and waivers as needed. The example also shows that some “risks” may be qualitatively visible yet end up treated narrowly as commercial-law problems, even within data law. This is exactly why aspects of AI safety such as explainability norms and human oversight should be termed facets of AI governance. AI safety research, unlike governance, dives into the technical groundwork needed to make these standards workable. For hospitals, this means developing AI tools that clearly explain diagnostic decisions in plain language, helping them comply with DPDP Act requirements and build patient trust. For the Indian AI Safety Institute, it should mean researching bias in diverse healthcare datasets and creating practical, open-source protocols for human oversight that hospitals can easily adopt. Such research can strengthen governance instead of reducing it to a regulatory checklist with no practical value, given India’s complex healthcare landscape.
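To make the distinction concrete, the sketch below shows, in Python, one way an open-source human-oversight protocol of this kind might structure a single audit-trail entry: every AI-generated recommendation carries a plain-language rationale, and a named human reviewer must record an explicit approval or override before the recommendation is acted upon. This is a minimal illustration only; the class names, fields, and workflow are hypothetical assumptions, not part of the DPDP Act, any existing standard, or the authors’ own tooling.

```python
"""Illustrative sketch of an explainability-plus-human-override audit record.
All names (DiagnosticRecommendation, OversightDecision, etc.) are hypothetical."""

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import json


@dataclass
class DiagnosticRecommendation:
    """One AI-generated recommendation, with a plain-language rationale."""
    patient_ref: str               # pseudonymised reference, not raw identity
    model_version: str             # which model produced the output
    recommendation: str            # e.g. "refer for cardiology review"
    plain_language_rationale: str  # explanation a patient or clinician can contest
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class OversightDecision:
    """Mandatory human sign-off recorded before the recommendation is acted upon."""
    reviewer_id: str
    approved: bool
    override_reason: Optional[str] = None  # filled in when the clinician overrides
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def audit_record(rec: DiagnosticRecommendation, decision: OversightDecision) -> str:
    """Serialise both halves into one append-only audit-trail entry that a
    regulator or grievance body could later inspect."""
    return json.dumps(
        {"recommendation": rec.__dict__, "human_oversight": decision.__dict__},
        indent=2,
    )


if __name__ == "__main__":
    rec = DiagnosticRecommendation(
        patient_ref="anon-4821",
        model_version="triage-model-0.3",
        recommendation="Refer for cardiology review",
        plain_language_rationale=(
            "Flagged because reported symptoms and ECG features resemble "
            "patterns associated with arrhythmia in the training data."
        ),
    )
    decision = OversightDecision(
        reviewer_id="dr-kumar",
        approved=False,
        override_reason="Symptoms better explained by a medication side-effect.",
    )
    print(audit_record(rec, decision))
```

The design choice of a pseudonymised patient reference is deliberate in this sketch: the record stays useful for audits and grievance redress while limiting the personal data shared with any third-party processor.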

Co-authored by Kapil Naresh, Chief Knowledge Advisor and Vice-President, AI Standardisation Alliance, Indian Society of Artificial Intelligence and Law

Abhivardhan
Managing Partner, Indic Pacific Legal Research, Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law
