
Artificial intelligence continues to grow, and integrating GenAI into the workplace has shown how it can improve productivity and efficiency. However, AI has also been put to harmful uses, and India has seen a growing number of such cases. The Government of India is developing ethical guidelines for the use of AI to ensure that AI systems are transparent and accountable, and to address the rising number of deepfake cases it has introduced strict regulations.

Deepfakes in particular have been used in many instances to track and monitor minority groups, raising concerns about privacy and discrimination. Despite the benefits of the underlying technology, the new deepfake regulation is driven by these concerns about misuse. The Ministry of Electronics and Information Technology (MeitY) has warned popular social media platforms such as Facebook and Instagram that have violated the rules relating to deepfakes and other restricted content.

AI Used to Create Deepfakes of Celebrities and Politicians

Deepfakes have been used to spread misinformation and manipulate public opinion. Videos featuring PM Narendra Modi, Mukesh Ambani, and others have been created and shared on social media. These videos are generated using AI and machine learning to create realistic but fake representations. IT Minister Ashwini Vaishnaw has met with representatives of tech giants such as Meta, Google, and Amazon to discuss proactive measures.

India's Framework to Tackle Deepfakes Before They Disrupt Democracy

In addition, the Government of India (GOI) will develop a platform where users can report IT Rule violations on social media platforms. The lack of regulation around deepfake technology has created the need for a comprehensive overhaul of IT law, and the government has collaborated with technology industry stakeholders to create measures for detecting, preventing, and reporting fake content.

This includes an option that helps citizens file FIRs (First Information Reports) against social media platforms for IT Rule violations, especially in the case of deepfakes. Under the IT Act, social media platforms may lose their “safe harbour immunity” unless they act quickly against deepfakes. The government has also clarified that deepfakes fall under Rule 3(1)(b)(v) of the IT Rules, meaning they are already covered by existing regulations.

Recent incidents have heightened these concerns and pushed the government to introduce new regulations to address the challenge. The framework shared by the Government of India clearly states the penalties for spreading fake news; companies must comply, and those that fail to do so risk a temporary ban.

Government of India Focuses on Deepfake Policies

The Indian government has now mandated that social media platforms revise their terms to prohibit 11 types of harmful content. It has given them a seven-day deadline to align their user policies with the IT Rules and to take strict measures against the circulation of fake and other prohibited content.

Updated policies will forbid the generation and sharing of deepfake content, and AI companies have been given the same seven-day deadline to comply with these terms. The government will also appoint special officers, called Rule Seven Officers, to handle reported deepfake content on online platforms.

The Indian Government has also said it will not tolerate such IT Rule violations, emphasizing the seriousness of the issue. Given the differing costs and varying capabilities of the platforms, striking a balanced approach to compliance remains a challenge.

Most social media platforms have agreed to align with the Government of India’s terms of use within seven days to ensure user awareness and safety. Platforms that fail to comply may face a temporary ban in India and may also lose the safe harbour provisions that protect them from legal liability for user content.

It is essential to understand and be aware of the potential negative consequences of AI. The government is addressing these, but individuals must also take steps to mitigate them on a personal level, which includes educating the public about the potential risks of AI, especially deepfakes.

This is not just an issue in India but a global one. International cooperation and consistent regulation could help curb misuse and protect digital rights. Urging other countries and the UN to develop AI governance rules, Union Minister Rajeev Chandrasekhar emphasized the adequacy of existing IT Rules in handling deepfakes and misinformation while acknowledging the need for updated regulations.
