The government has submitted a status report on deepfake technology, highlighting concerns over AI scams, election manipulation, and the need for stricter enforcement.
The Ministry of Electronics and Information Technology (MeitY) highlighted key concerns regarding deepfake technology in a status report submitted to the Delhi High Court on Monday (March 24). The court was hearing petitions challenging the unregulated spread of deepfake technology in India. One of the petitions was filed by India TV’s Chairman and Editor-in-Chief Rajat Sharma, urging the government to implement regulations and restrict public access to deepfake-generating apps and software. Sharma’s plea underscored how deepfakes can fuel misinformation, distort public discourse, and jeopardise democratic processes.
The report emphasised three concerns: the increasing misuse of deepfakes, particularly during state elections; the growing number of AI-driven scams; and the need for stricter enforcement rather than new laws.
Additionally, the report pointed out the absence of a standardised definition for “deepfake,” which complicates regulatory efforts. It also noted that sophisticated actors can circumvent detection mechanisms such as watermarking and metadata tagging. The report further emphasised the need for large-scale public awareness campaigns to educate users on identifying and understanding deepfakes. It stressed the importance of developing indigenous datasets and tools for detecting and analysing deepfakes in Indian languages and contexts.
Stakeholder inputs call for AI content disclosure
In its status report, MeitY detailed the steps taken to examine deepfake-related concerns. A nine-member committee held discussions with technology and policy experts in January this year.
The stakeholders advocated for mandatory AI content disclosure, labelling standards, and grievance redressal mechanisms, while also stressing the importance of targeting malicious actors rather than restricting the creative applications of deepfake technology.
The committee also deliberated on “mandatory intermediaries compliance”, which would determine the responsibility of digital platforms in regulating AI-generated content. Such a framework could define the liability of social media companies, balancing platform accountability with the need to protect free expression, the report stated.
The report also mentioned that many stakeholders agreed that the existing legal framework under the Information Technology Act, 2000 (IT Act), the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021), and the Bharatiya Nyaya Sanhita, 2023 (BNS) is sufficient to address malicious deepfake use but requires stronger enforcement and improved forensic capabilities.
Election deepfakes and AI misuse raise alarms
The Deepfakes Analysis Unit (DAU), an initiative under the Meta-supported Misinformation Combat Alliance (MCA), has flagged two disturbing trends related to the misuse of deepfake technology:
- Deepfakes specifically targeting women during state elections
- A post-election surge in AI-driven scam content
The DAU also highlighted the challenges in detecting deepfake audio, stating that distinguishing between real and manipulated voice recordings remains a significant hurdle. It stressed the need for collaborative detection frameworks and clear regulatory norms. The committee acknowledged these concerns and recommended seeking cooperation from law enforcement agencies, particularly the Indian Cyber Crime Coordination Centre (I4C), to track and analyse deepfake-related criminal cases.
Consultations with victims yet to begin
In its report, MeitY admitted that it had not yet consulted victims of deepfake attacks. The ministry is working with the Ministry of Information and Broadcasting to gather testimonies from individuals affected by deepfake content, the report stated. The committee also requested an additional three months to complete these consultations, which the Delhi High Court approved.
A bench comprising Chief Justice DK Upadhyaya and Justice Tushar Rao Gedela ordered the committee to incorporate suggestions from the petitioners while evaluating the issue. The next hearing on the matter is scheduled for July 21, 2025.
Rajat Sharma moves plea in Delhi HC against non-regulation of deepfake technology
India TV’s Chairman and Editor-in-Chief Rajat Sharma had moved a plea in the Delhi High Court against the non-regulation of deepfake technology. Following this, a division bench of the High Court comprising Acting Chief Justice Manmohan and Justice Manmeet Pritam Singh Arora issued a notice and sought a response from the Union Government through the Ministry of Electronics and Information Technology.
During the hearing, the bench orally remarked that “this is a major problem” and questioned the central government on its willingness to act. “Political parties are complaining about this as well. You are not taking any action,” the court noted.
According to the plea, the proliferation of deepfake technology poses significant threats to various aspects of society, including misinformation and disinformation campaigns, the undermining of public discourse and democratic processes, potential use in fraud and identity theft, and harm to individuals’ reputations and privacy.
Deepfake technology is a serious menace: Delhi High Court
In August 2024, the Delhi High Court noted that deepfake technology was going to be a serious menace to society, that the government should start thinking about it, and that the only antidote to Artificial Intelligence (AI) would be technology itself. The court also observed that the government had been agitated over the issue before the 2024 elections, but that things had since changed.
Centre to file report on measures taken to curb deepfakes
In a significant development in October 2024, the Delhi High Court asked the Centre to file a status report on the measures it had taken to counter the increasing menace of deepfake technology. During the hearing, Chief Justice Manmohan and Justice Tushar Rao Gedela sought the report to highlight the steps taken at the government level.
The Delhi High Court in November 2024 directed the Centre to nominate members for a panel to examine the issue of deepfakes. The direction came after the Union Ministry of Electronics and Information Technology informed the court that a committee on deepfake matters had been formed on November 20.
What is deepfake technology?
Deepfake technology enables the creation of hyper-realistic videos, audio clips, and images, which can manipulate public perception by altering people’s words and actions. This poses a major risk to information integrity, as it can be used for political propaganda, financial fraud, and personal defamation.