
The Ministry of Electronics and Information Technology (MeitY) recommends that government bodies establish a dedicated artificial intelligence (AI) governance board to review and authorise AI applications. MeitY says that the AI governance board would ensure that all AI initiatives align with recognised guidelines outlined in domestic and international legal instruments. It would also provide guidance throughout an AI model’s lifecycle, ensuring that it not only meets technical benchmarks but also addresses ethical considerations.
This comes as a part of the ‘AI Competency Framework for Public Sector Officials’ that the Ministry released on March 6. Besides these recommendations, MeitY also released a series of other AI initiatives last week, including a datasets platform called AIKosha, and the IndiaAI Compute Portal.
Why it matters:
This is not the first time that MeitY has suggested that AI applications should seek Government approval. In March 2024, MeitY put out an advisory stating that any “under-tested or unreliable” AI model must receive explicit Government permission before being made available to users. Many pushed back against this advisory, questioning what MeitY meant by under-tested/unreliable models and whether such an advisory was even legally binding.
Amidst this pushback, the Government backtracked on the advisory within a couple of days, releasing a fresh one that instead required under-tested or unreliable artificial intelligence (AI) models to label the inherent fallibility or unreliability of the output they generate. The AI governance board that MeitY proposes as part of the competency framework indicates that the Government may soon have the tools to make legally binding approvals for AI apps. However, unlike the earlier advisory, which would have affected AI accessibility for the general public, these approvals appear limited to the AI apps that Government bodies seek to use. It is important to note that while some of the other recommendations specifically mention public sector AI initiatives, it is unclear whether the AI governance board would approve only Government AI projects or private sector AI apps as well.
Other key recommendations:
- Formulating an AI Ethics Committee: This committee will ensure that an AI project incorporates standard AI practices into all stages of the project’s lifecycle.
- Risk management framework: Public sector bodies should implement a strong framework for identifying, evaluating, and reducing AI risks to improve the safety and transparency of AI systems. Government bodies should evaluate both existing and new AI projects using a standardised evaluation tool, and can develop standard disclosure templates to ensure standardisation in AI evaluations. The Government must also require all public sector AI projects to publish disclosures on a Government portal accessible to the public, thereby ensuring transparency and safety in AI development.
- Special vertical for data governance: The IT Ministry suggests a privacy-by-design approach to AI models, adding that AI projects should have robust privacy protocols, especially in projects that handle sensitive data. Further, given the significance of data in AI models, MeitY suggests that every ministry should create a special vertical to manage the entire data lifecycle. This vertical will oversee data collection, storage, processing, and sharing.
- Comprehensive AI documentation: Government bodies must ensure comprehensive documentation for AI models, including algorithm designs, datasets a model relies on and its decision-making process.
- AI Audits and impact assessments: Public sector bodies can engage independent auditors to assess the performance, fairness and ethical implications of an AI model. Similarly, Government bodies must also carry out regular impact assessments before deploying an AI system. Post-deployment, the Government body should review the risk profiles of the deployed system, especially for AI systems that operate in high-stakes areas like criminal justice and healthcare. AI systems, especially those that affect public services, should have adequate human oversight. Further, the IT Ministry says that Government bodies can release reports outlining the performance of AI systems, including any errors, biases, or areas for improvement identified during audits or assessments.
MediaNama has sent queries to the IT Ministry to clarify whether the governance board will approve only government AI initiatives or private sector ones as well. We will update the story once we hear from them.