New AI Rules Aim for Safe, Inclusive Tech Development
New Delhi: India has released a national framework to guide the safe and inclusive development of artificial intelligence, in what officials described as a step to ensure that the rapid expansion of AI technologies takes place within a structure that protects the public interest and social welfare.
The framework, issued under the India Artificial Intelligence Mission, sets out principles and implementation pathways to shape how artificial intelligence systems are designed, deployed, audited and made accountable in different sectors. The initiative reflects the government’s view that artificial intelligence has become a general-purpose technology with economy-wide and governance-wide implications and therefore requires guidelines that extend beyond individual applications.
The announcement was made today in New Delhi by the Office of the Principal Scientific Adviser and the Ministry of Electronics and Information Technology. Officials said the need for a governance framework arises from the increasing availability of artificial intelligence systems in areas ranging from healthcare diagnostics and educational tools to public service delivery, financial decision-making and critical infrastructure management. They noted that the pace of development has outstripped the pace of institutional adaptation, leaving innovation and risk to advance in parallel.
Professor Ajay Kumar Sood, the Principal Scientific Adviser, said the core principle of the framework is summarised as “Do No Harm.” He said that while artificial intelligence has the capacity to significantly expand efficiency and access in public services, there is also the possibility of unintended harms if systems are opaque, biased, poorly trained, or deployed without adequate oversight. The framework, therefore, emphasises the establishment of controlled testing environments, referred to as sandboxes, where systems can be evaluated before broad release. Professor Sood said the design of the framework is intentionally flexible because “artificial intelligence is not static technology; regulation cannot be static either.”
Secretary S. Krishnan of the Ministry of Electronics and Information Technology said the guidelines are intended to be integrated into India’s existing legal and regulatory environment rather than to replace it. He said the priority is human-centric deployment of artificial intelligence, in which technology is assessed by its effect on people rather than by the sophistication of the system itself. He said the framework recognises the developmental role of artificial intelligence in areas such as healthcare, agriculture and climate monitoring, while maintaining attention to the risks associated with automation in welfare systems, labour markets and surveillance environments.
Additional Secretary Abhishek Singh, who heads the India Artificial Intelligence Mission and the National Informatics Centre, said the guidelines were developed through a multi-stage process. The initial draft was prepared by a committee, released for public comment, and revised following feedback from universities, technology firms, legal experts and civil society organisations. He said consultation was essential because the impact of artificial intelligence varies by sector and region, and governance needs to take account of differences in access, capacity and local priorities. He added that the India Artificial Intelligence Mission is working to expand affordable computing resources, public datasets and support systems for startups so that the benefits of artificial intelligence development are not limited to large technology organisations.
The committee responsible for drafting the guidelines was chaired by Professor Balaraman Ravindran of the Indian Institute of Technology Madras and included members from the National Institution for Transforming India (NITI) Aayog, industry, academic research and legal practice. The guidelines consist of seven guiding principles, six governance pillars, phased implementation timelines and operational instructions for developers, regulators and public institutions.

The seven guiding principles emphasise the protection of human rights, transparency in design and decision-making, fairness in automated outcomes, accountability of developers and institutions, explainability of artificial intelligence systems to users and regulators, robustness and safety of algorithms against errors or misuse, and inclusivity to ensure equitable access across regions, communities and socioeconomic groups. Together, these principles provide a coherent ethical and operational compass for artificial intelligence deployment, ensuring that innovation does not outpace safeguards or public trust. Officials described the guidelines as a reference architecture rather than a final regulatory statute, with scope for sectors to develop application-specific rules as needed.
The announcement was accompanied by the results of the India Artificial Intelligence Hackathon for Mineral Targeting, organised in collaboration with the Geological Survey of India. The hackathon sought to evaluate how artificial intelligence can support mineral exploration by analysing geological, geophysical and satellite data. The winning approaches demonstrated methods for critical mineral mapping and semi-supervised resource discovery. Officials said this reflected artificial intelligence’s potential to influence resource planning and industrial supply chains that underpin clean energy transitions and manufacturing strategies.
Earlier in the day, a panel discussion at the Emerging Science, Technology and Innovation Conclave brought together researchers and industry representatives to examine the challenges of building India’s artificial intelligence ecosystem. The discussion focused on the need for computing infrastructure, indigenous language models and data availability, and on ensuring that the spread of artificial intelligence does not widen existing social inequalities. Participants observed that artificial intelligence’s benefits are most likely to be realised if the technology is shaped by public purpose rather than market-driven adoption alone.
The need for the governance framework, therefore, emerges from the intersection of technological scale and social responsibility. Artificial intelligence systems are increasingly influencing decisions that affect livelihoods, welfare access, medical evaluation and knowledge distribution. Without standards for transparency, explanation, fairness and accountability, the risk is not only that individual harms may occur but also that public confidence in digital systems may weaken. The guidelines aim to create a common reference point to prevent fragmentation of governance approaches across sectors and institutions.
The framework also arrives at a moment of international debate about how countries can participate in global artificial intelligence development without losing regulatory autonomy. India’s approach places emphasis on accessibility, affordability and public oversight, positioning artificial intelligence as a component of national development policy rather than merely a commercial sector. Officials said the guidelines will inform India’s contributions to international cooperation platforms and the India Artificial Intelligence Impact Summit in February 2026.

India’s framework shares objectives with global standards but differs in approach: the European Union’s AI Act is legally binding and risk-based with mandatory compliance; the Organisation for Economic Co-operation and Development (OECD) principles are voluntary and focus on inclusive, human-centred development; and the United States relies on sector-specific guidance and voluntary frameworks that prioritise private-sector innovation. India positions itself between these models, offering a detailed multi-sector reference framework that is non-binding but intended to guide policy, public institutions and private developers, with attention to accessibility, equity and integration into existing law.
For now, the guidelines serve as a foundation. Their effect will depend on how they are translated into rules, funding, auditing capacity, and institutional practice across sectors. The question ahead is whether artificial intelligence governance in India can balance innovation and safeguards in a way that protects people while enabling technological progress. The answer will depend on decisions made not only by the state but by the organisations developing and deploying artificial intelligence systems in the coming years.
– global bihari bureau
