Indian government issues advisory to mitigate AI risks

Tech firms will require permission from the government to use and provide under-tested or unreliable AI models and software to Indian users

India’s Ministry of Electronics and Information Technology (Meity) has issued an advisory for the use of artificial intelligence (AI) platforms that will require organisations to seek permission from the government to use and provide under-tested or unreliable AI models and software to users in the country.

After permission has been granted, the AI models and software can be deployed “only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated”.

According to Indian media reports, platforms dealing in unlawful information will also be punished, and platforms will have to use a “consent popup” mechanism to warn users of possible inaccuracies in AI-generated output.

Metadata should also be embedded into AI-generated images, video or audio, so that the computer resource or device used to generate the content can be identified if needed.

The requirement that companies working on AI products obtain government approval has caused confusion in the technology industry, especially given how quickly AI technology is evolving.

Amid the confusion, Rajeev Chandrasekhar, union minister of state for electronics and information technology, clarified that the advisory was aimed at “significant platforms”: only large platforms will need to seek permission from the ministry, and the requirement will not apply to startups.

With India going to the polls in the coming months, the advisory also cautioned that AI platforms should not threaten the poll process or spread misinformation. The platforms should also ensure that the biases arising from AI models do not hamper the electoral process in the country.


Non-compliance with the advisory would result in penalties, which means tech companies and those developing AI models will have to rework their strategies to secure government approval.

The latest advisory comes in the wake of the controversy surrounding Google’s Gemini AI platform, which has been criticised for historical inaccuracies and racial bias. Earlier this week, Google apologised for Gemini’s unsubstantiated comments about Indian prime minister Narendra Modi. 

In 2018, the Niti Aayog, the Indian government’s apex public policy think tank, released the National Strategy for Artificial Intelligence, taking into account the AI research and development required for healthcare, agriculture and education, along with smart cities and infrastructure.

This was followed by a discussion paper on the principles for responsible AI in February 2021, which identified seven broad principles for the responsible management of AI. They include principles of safety and reliability, equality, inclusivity, and privacy and security.

Rajiv Kumar, vice-chairman of Niti Aayog, noted at the time that the discussion paper would serve as an essential roadmap for the AI ecosystem, “encouraging adoption of AI in a responsible manner in India and building public trust in the use of this technology, placing the idea of ‘AI for All’ at its very core”.
