
This Article is written by Tanishq Kumar of Chaudhary Charan Singh University, an intern under Legal Vidhiya

ABSTRACT

Deepfakes, synthetic media created by artificial intelligence that can convincingly impersonate real individuals, have quickly become one of India’s most serious threats to online privacy, reputation, and information integrity. While the underlying technology has legitimate uses, its widespread availability has enabled new forms of cybercrime, political disinformation, and harassment. This article critically analyzes the legal challenges posed by deepfakes in India, assesses the adequacy of the existing cyber law framework, and identifies regulatory gaps. Drawing on statute, case law, and comparative analysis, it contends that long-overdue reforms are needed to address the new threats posed by synthetic media.

KEYWORDS

Deepfakes, Cyber Law, Artificial Intelligence, Information Technology Act, Privacy, Indian Law, Evidence, Regulatory Gaps, Digital Forensics, Free Speech

INTRODUCTION

The fast-paced development of artificial intelligence has revolutionized the creation of digital content. Deepfakes, video, audio, or images generated by deep learning models that can be nearly indistinguishable from reality, are both a technological marvel and a social problem. Deepfakes have legitimate uses in filmmaking and disability access, but their abuse has fuelled a spate of cybercrime in the form of identity theft, defamation, revenge pornography, and political misinformation. With India’s internet presence growing day by day, the legal community is under tremendous pressure to meet the new array of challenges brought about by synthetic media.

UNDERSTANDING DEEPFAKES

The Technology of Deepfakes

Deepfakes are synthetic media in which the face of a person in an existing image or video is replaced with someone else’s face using artificial neural networks. The technology typically relies on deep learning, in particular generative adversarial networks (GANs), to produce forgeries that appear authentic. “Deepfakes are videos, audio, or images made with artificial intelligence, manipulated to make people say or do something they never said or did, obscuring the line between fiction and reality.”[1]
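To make the idea of adversarial training concrete, the following is a minimal, generic sketch in Python (using PyTorch) of how a GAN pits a generator against a discriminator. It is a toy illustration with placeholder network sizes and random stand-in data, not the architecture of any actual deepfake tool.

```python
# Minimal, illustrative GAN training loop (toy example, not a real deepfake system).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # placeholder sizes

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: estimates whether a sample is real (1) or generated (0).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)       # stand-in for genuine images
    fake = G(torch.randn(32, latent_dim))  # synthetic samples

    # 1. Train the discriminator to tell real from fake.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks compete, the generator’s output becomes progressively harder to distinguish from genuine material, which is precisely what makes detection and attribution so difficult.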

Social Impact

The availability of advanced AI tools has dramatically lowered the barrier to producing deepfakes; even amateurs can now generate realistic-looking synthetic media with a home computer or smartphone. Deepfakes have been used for fake news, non-consensual pornography, financial fraud, and political disinformation. “The malicious use of deepfakes can destroy reputations, influence election results, and erode trust in digital content, directly undermining democracy and human rights.”[2] Victims suffer immense reputational, psychological, and economic damage, and the viral nature of social media platforms amplifies both the spread of such content and the harm it causes.

PRIVACY, CONSENT, AND INDIVIDUAL RIGHTS

Privacy as a Constitutional Right

Deepfakes typically involve the unauthorized appropriation of a person’s likeness, voice, or image, which violates the right to privacy under Article 21 of the Indian Constitution. In its landmark judgment in Justice K.S. Puttaswamy v. Union of India[3], the Supreme Court recognized privacy as a fundamental right. How that right can be enforced against synthetic media, however, remains unclear, especially when the perpetrators are anonymous or located abroad.

Consent and Digital Likeness

Victims of deepfakes often cannot pursue redress because there are no specific legal provisions against the unauthorized creation and dissemination of misleading media. “There is no express statutory right in India over one’s digital image, leaving people open to abuse and exploitation.”[4] In the absence of such a right, victims are frequently left without effective legal recourse against deepfake creators.

DEEPFAKES AND CRIMINAL LAW

Defamation, Harassment, and Gendered Abuse

Deepfakes are being used extensively for defamation, cyberbullying, and revenge pornography. Sections 499 and 500 (defamation), Section 292 (obscenity), and Section 506 (criminal intimidation) of the Indian Penal Code[5] cover related conduct, but these provisions were not drafted with AI-generated content in mind. “Victims find it difficult to establish the falsity and genuineness of deepfakes, and thus it becomes difficult to prosecute and provide redressal.”[6] Women and marginalized groups are disproportionately targeted by deepfake pornography and gendered harassment, and the absence of prompt legal remedies compounds the trauma inflicted on victims.

Political Interference and Electoral Manipulation

The use of deepfakes to spread misinformation and manipulate public perception during elections undermines the integrity of democratic processes. In 2024, several deepfake videos of political leaders went viral in India, prompting calls for tighter regulation. “Deepfakes can derail electoral processes, cause violence, and destroy the legitimacy of democratic institutions.”[7] The rapid spread of such material can sway voters, erode trust in institutions, and destabilize the political climate.

EVIDENTIARY AND INVESTIGATIVE ISSUES

Authenticity and Admissibility

“The evidentiary worth of digital data is eroded by deepfakes, since the courts have to deal with the authenticity, integrity, and admissibility of altered media.”[8] The Indian Evidence Act, 1872[9], requires proof of authenticity for electronic records, but the forensic methods available to courts are often outpaced by the sophistication of deepfake technology.

Burden of Proof and Chain of Custody

As deepfakes become more sophisticated, victims and law enforcement face a heavier burden in proving whether a given piece of media is genuine or fabricated. Chain of custody and forensic authentication of electronic evidence are paramount, but standard protocols for them are presently lacking. Courts demand a high standard of proof for the admission of electronic records, and a break in the chain of custody can render evidence inadmissible.

INDIAN LEGAL SYSTEM: DEFICIENCIES AND SHORTFALLS

Information Technology Act, 2000 (IT Act)

India’s principal cyber law is the IT Act, 2000[10]. It criminalizes hacking (Section 66), identity theft (Section 66C), and the dissemination of obscene content (Section 67), but does not address deepfakes or synthetic media as such. Section 66E makes the violation of privacy through electronic means an offence, which may apply in certain deepfake cases. “The IT Act, 2000, does not specifically refer to deepfakes, and the victims would be left to depend on general provisions of privacy, obscenity, and identity theft.”[11]

Indian Penal Code, 1860 (IPC)

The IPC addresses defamation, obscenity, and criminal intimidation, but in terms that predate the internet. Although these provisions have been judicially interpreted to cover cybercrimes, the anonymous, cross-border, and rapidly spreading nature of deepfakes makes enforcement difficult.

Digital Personal Data Protection Act, 2023

The DPDP Act[12] seeks to protect privacy and personal data, but it is focused on data processing and does not expressly address the misuse of a person’s identity or likeness in synthetic media. The Act provides no specific remedies for victims of deepfakes, leaving a significant regulatory gap.

IT Rules and Intermediary Liability

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[13], require platforms to remove unlawful content, such as morphed images, within 24 hours of a complaint. The identification and removal of deepfakes, however, is not standardized. “Intermediaries cannot pre-detect deepfakes in general because there has been no standard technical standard and the sheer volume of content.”[14] While the Rules impose due diligence obligations, there is no clear liability regime for platforms that fail to prevent or remove deepfakes.

Absence of Deepfake-Specific Legislation

India presently has no dedicated law governing the creation or dissemination of deepfakes. By contrast, jurisdictions such as the United States (including states like New York) and Singapore have enacted legislation addressing synthetic media and doctored content. The lack of a precise definition and criminalization of deepfakes in Indian law hinders victims’ access to prompt and effective remedies.

REGULATORY AND ENFORCEMENT CHALLENGES

Attribution and Anonymity

Deepfake producers often operate through anonymized accounts, VPNs, and foreign servers, making offenders difficult to trace and prosecute. Mutual legal assistance treaties (MLATs) and cross-border cooperation are slow and cumbersome, typically delaying investigation and enforcement.

Law Enforcement Capacity

Indian law enforcement agencies often lack the technical training and capacity to investigate deepfake offenses. Specialized training in digital forensics and AI is limited, and investment in sophisticated detection and attribution technologies is needed. The absence of a national AI policy for law enforcement further hampers an effective response to deepfake crime.

INTERNATIONAL PERSPECTIVES

Comparative Legislation

In the United States, several states have enacted laws criminalizing the malicious creation and dissemination of deepfakes, most notably in the contexts of elections and non-consensual pornography, and the proposed DEEPFAKES Accountability Act would mandate disclosure and penalize the use of synthetic media for certain purposes. Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA) enables the authorities to direct the takedown of doctored content and penalizes the spread of falsehoods. The European Union’s AI Act includes transparency and accountability requirements for high-risk AI systems and for synthetic media, while the Digital Services Act obliges platforms to address illegal content.

India can draw valuable lessons from these regimes in framing its own regulatory framework, particularly in defining deepfakes, apportioning liability, and ensuring the timely removal of abusive content.

International Cooperation

Given the transnational nature of deepfake offenses, international cooperation is indispensable. India should participate in global efforts to harmonize the regulation of AI-generated content, share threat intelligence, and facilitate cross-border investigations. Acceding to the Budapest Convention on Cybercrime could help India collaborate with the wider international community in investigating and prosecuting deepfake crimes.

POLICY INITIATIVES AND RECOMMENDATIONS

Legislative Reform

India needs dedicated legislation on the creation, distribution, and malicious use of deepfakes. Such a law should define deepfakes and penalize their malicious use, provide for prompt removal of offending content and compensation to aggrieved parties, impose disclosure requirements for synthetic media in specified cases, and prescribe penalties for the creation and circulation of harmful deepfakes.

Strengthening Forensics and Law Enforcement

There is a need for investment in sophisticated digital forensic tools and specialized training for law enforcement. Cooperation with technology firms and foreign law enforcement agencies would assist in detection and attribution, and specialized units for AI-enabled crimes would improve investigative capability.

Platform Responsibility

Social media and messaging platforms should adopt transparent policies, including proactive detection and prompt removal of deepfakes, openness about content moderation and algorithmic decision-making, and consequences for failing to comply with removal requests.

Public Awareness and Digital Literacy

There needs to be a national-level campaign to raise awareness about the dangers of deepfakes, how to detect manipulated content, and the avenues for redressal. Digital literacy training should be integrated into school education and mass outreach activities.

Judicial Capacity Building and Training

Judges and prosecutors need specialized training in AI technologies, digital forensics, and the particular pitfalls deepfakes present. Judicial standards on the admissibility and assessment of AI-generated evidence would bring fairness and consistency to the judicial process.

International Engagement

India must actively participate in international forums such as INTERPOL, Europol, and the United Nations to exchange best practices and intelligence and to mount a united response against deepfakes.

CASE LAW AND JUDICIAL REACTIONS

Indian courts have begun to recognize the concerns raised by deepfakes and AI-generated evidence. In State of Maharashtra v. Dr. Praful B. Desai[15], the Supreme Court held that evidence recorded via video conferencing is admissible in criminal proceedings, subject to proof of authenticity and reliability. This serves as a precedent for the admissibility of newer forms of digital evidence, including allegedly manipulated media. Courts have, however, warned against the dangers of fabricated evidence. The burden of establishing authenticity usually falls on victims or prosecutors, and the absence of well-established standards for testing AI-generated evidence can lead to inconsistent verdicts. There is a pressing need for the judiciary to develop jurisprudence specific to deepfakes and AI-enabled crimes.

FUTURE DIRECTIONS FOR INDIAN LAW

A counter-deepfake effort will require coordination among government, technology companies, civil society, and academia. Inter-institutional task forces could be established to monitor emerging trends, develop detection technologies, and share best practices. Public-private partnerships are needed to scale up digital forensics infrastructure and enable rapid responses to deepfake attacks. Sustained investment in research and development is necessary to keep pace with the rapidly evolving nature of deepfake technology. Indian universities and startups should be encouraged to develop indigenous tools to detect deepfakes and authenticate content.

Government incentives and subsidies can stimulate innovation in this strategic sector. Legal reform should be complemented by effective victim support mechanisms: helplines, counseling, and efficient grievance redressal should be available to victims of deepfakes, and police stations should be sensitized to the social and psychological impacts of deepfake offenses. The government should require industry bodies to set ethical standards for the development and deployment of AI technology, and norms on watermarking, content verification, and the disclosure of synthetic media should be made mandatory for AI developers and platforms.
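To illustrate what “watermarking and content verification” obligations might involve in practice, the following is a minimal Python sketch of a cryptographic provenance check. It is purely hypothetical: the shared-key arrangement, tag format, and function names are assumptions for illustration, not any prescribed Indian or industry standard.

```python
# Hypothetical content-authenticity check: a publisher attaches a tag to a media
# file, and a platform later verifies that the file has not been altered.
import hashlib
import hmac

SHARED_KEY = b"demo-key-held-by-trusted-publisher"  # placeholder, for illustration only

def attach_provenance_tag(media_bytes: bytes) -> str:
    """Publisher side: compute an HMAC over the media file's contents."""
    return hmac.new(SHARED_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_provenance_tag(media_bytes: bytes, tag: str) -> bool:
    """Platform side: recompute the tag and compare it in constant time."""
    expected = hmac.new(SHARED_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Usage: any pixel-level edit (e.g. a face swap) changes the bytes, so the
# stored tag no longer verifies and the file can be flagged for review.
original = b"...original image bytes..."
tag = attach_provenance_tag(original)
assert verify_provenance_tag(original, tag)
assert not verify_provenance_tag(original + b"tampered", tag)
```

Real-world schemes would rely on public-key signatures and embedded metadata rather than a shared secret, but the underlying idea is the same: verification fails the moment the content is altered.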

CONCLUSION

Deepfakes pose a first-order threat to the integrity of information, privacy, and democratic processes in India. The current legal framework, though robust in some respects, was not designed to meet the distinctive threats of synthetic media. Legislative reform must be fast-tracked, investment in digital forensics must be made, and platforms must be regulated proactively to protect citizens and society from the perils of deepfakes. As AI technology continues to evolve, India’s legal and regulatory infrastructure must keep pace to deliver justice, accountability, and trust in the digital era.

REFERENCES

  1. S. Seth, Deepfakes and the Law: The Next Frontier for Cyber Regulation in India, Manupatra (2024).
  2. S. Choudhury, Deepfakes: The Looming Threat to Democracy, Bar & Bench (Jan. 2025).
  3. Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 S.C.C. 1 (India).
  4. S. Kumar, Regulating Deepfakes under the IT Act, iPleaders (Feb. 2024).
  5. A. Verma, Evidentiary Challenges of Deepfakes in Indian Courts, SCC Online (2024).
  6. S. Saha, Deepfake Videos in Indian Elections: Legal and Ethical Challenges, Live Law (May 2024).
  7. A. Verma, Evidentiary Challenges of Deepfakes in Indian Courts, SCC Online (2024).
  8. S. Kumar, Regulating Deepfakes under the IT Act, iPleaders (Feb. 2024).
  9. IT Rules 2021 and the Deepfake Dilemma, The Hindu (Apr. 2024).
  10. State of Maharashtra v. Dr. Praful B. Desai, (2003) 4 S.C.C. 601 (India).
  11. The Information Technology Act, 2000, No. 21 of 2000, Acts of Parliament, India.
  12. The Indian Penal Code, 1860, No. 45 of 1860, Acts of Parliament, India.
  13. The Indian Evidence Act, 1872, No. 1 of 1872, Acts of Parliament, India.
  14. The Digital Personal Data Protection Act, 2023, No. 22 of 2023, Acts of Parliament, India.
  15. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, G.S.R. 139(E), Gazette of India, Feb. 25, 2021.

[1] S. Seth, Deepfakes and the Law: The Next Frontier for Cyber Regulation in India, Manupatra (2024).

[2] S. Choudhury, Deepfakes: The Looming Threat to Democracy, Bar & Bench (Jan. 2025).

[3] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 S.C.C. 1 (India).

[4] S. Kumar, Regulating Deepfakes under the IT Act, iPleaders (Feb. 2024).

[5] The Indian Penal Code, 1860, No. 45 of 1860, Acts of Parliament, India.

[6] A. Verma, Evidentiary Challenges of Deepfakes in Indian Courts, SCC Online (2024).

[7] S. Saha, Deepfake Videos in Indian Elections: Legal and Ethical Challenges, Live Law (May 2024).

[8] A. Verma, Evidentiary Challenges of Deepfakes in Indian Courts, SCC Online (2024).

[9] The Indian Evidence Act, 1872, No. 1 of 1872, Acts of Parliament, India.

[10] The Information Technology Act, 2000, No. 21 of 2000, Acts of Parliament, India.

[11] S. Kumar, Regulating Deepfakes under the IT Act, iPleaders (Feb. 2024).

[12] The Digital Personal Data Protection Act, 2023, No. 22 of 2023, Acts of Parliament, India.

[13] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, G.S.R. 139(E), Gazette of India, Feb. 25, 2021.

[14] IT Rules 2021 and the Deepfake Dilemma, The Hindu (Apr. 2024).

[15] State of Maharashtra v. Dr. Praful B. Desai, (2003) 4 S.C.C. 601 (India).

Disclaimer: The materials provided herein are intended solely for informational purposes. Accessing or using the site or the materials does not establish an attorney-client relationship. The information presented on this site is not to be construed as legal or professional advice, and it should not be relied upon for such purposes or used as a substitute for advice from a licensed attorney in your state. Additionally, the viewpoint presented by the author is personal.

