
This article is written by Tanishq Kumar of Chaudhary Charan Singh University, an intern under Legal Vidhiya

ABSTRACT

Artificial intelligence (AI) has fundamentally transformed the cybercrime landscape, enabling attackers to automate attacks, create deepfakes, and exploit vulnerabilities at unprecedented scale and sophistication. India, as one of the world’s fastest-growing digital economies, faces unique challenges in confronting AI-enabled cybercrimes. This article critically examines the architecture of AI-enabled cyber threats, evaluates the preparedness of India’s legal system, and identifies key gaps in law, enforcement, and policy. Drawing on statutes, case law, and comparative analysis, it argues that robust reforms are needed to keep pace with a rapidly evolving threat environment.

KEYWORDS

Artificial Intelligence, Cybercrime, Indian Law, Information Technology Act, Deepfakes, Data Protection, Legal Framework, Digital Forensics, Jurisdiction

INTRODUCTION

The application of artificial intelligence to digital networks has revolutionized both legitimate and illicit activity on the internet. AI-enabled cybercrimes, ranging from botnet malware and phishing to deepfakes and AI-driven fraud, pose novel and advanced challenges for lawmakers and law enforcers alike. Unlike traditional cyberattacks, AI-enabled crimes can learn, evolve, and scale at a staggering pace, making detection and attribution far more difficult. As India accelerates its digital development, its legal framework faces the twofold challenge of encouraging innovation on one side and protecting citizens, businesses, and national security from AI-enabled threats on the other.

The rapid adoption of AI technologies in sectors such as finance, health, and communications has expanded the attack surface for cybercriminals. AI can be employed to carry out advanced social engineering attacks, automate vulnerability scans, and generate realistic synthetic content designed to deceive the public or defraud individuals. For instance, AI-generated deepfake videos have been used to impersonate public figures, damage their reputations, and spread disinformation. Similarly, AI-driven bots can flood social media channels with disinformation campaigns and manipulate elections and public opinion.

This ever-changing threat scenario calls for a forward-looking and responsive legal system able to address the subtleties of AI-enabled crimes while striking a balance between innovation, security, and privacy.

AI-POWERED CYBERCRIMES: AN OVERVIEW

Defining AI-Powered Cybercrimes

AI-driven cybercrimes are offences in which artificial intelligence is used to design, execute, or augment criminal operations in cyberspace. They rely on machine learning, natural language processing, computer vision, and automation to expand their scope and bypass traditional security measures. One report cautions that “AI is revolutionizing the cybercrime economy, allowing attackers to launch quicker, targeted and personalized attacks like never before,” while highlighting that Indian users were targeted repeatedly through replica dashboards, spoofed brand messages, and malicious mobile apps.[1]

Principal Categories of AI-Based Crimes

AI is now used to generate convincing lookalike phishing emails, clone voices for impersonation, create deepfakes, and carry out automated credential stuffing attacks. According to a recent study, “AI tools were used in 80 per cent of the phishing mails, which, in other words, means that AI was used in eight out of every 10 phishing campaigns.”[2] Deepfakes have become a particular point of concern, with AI-generated audio, video, or images being used for impersonation, scams, and reputational harm.

Beyond these, AI-based ransomware attacks have grown more sophisticated, using machine learning to evade conventional antivirus detection. AI systems themselves can be manipulated through attacks such as data poisoning or adversarial machine learning, with potentially catastrophic consequences in high-stakes domains such as healthcare or finance. AI-driven bots are also being used to mount larger-scale attacks, such as distributed denial-of-service (DDoS) attacks, which can take down key services and infrastructure. The use of generative AI to create malware or to automate vulnerability scanning compounds the threat, since traditional security measures struggle to keep pace.

AI in Ransomware and Cyber Extortion

One of the newest trends in AI-facilitated cybercrime is the application of AI to ransomware and cyber extortion attacks. Newer ransomware variants employ AI to identify the most valuable targets within compromised systems, deciding which files to encrypt based on usage patterns, file types, and user activity, thereby making the attack more effective. AI can also automate extortion messages tailored to the victim’s industry and the level of threat posed.

AI and Innovative Phishing Techniques

AI has significantly enhanced conventional phishing techniques. Using natural language processing (NLP), AI-powered phishing emails are not only grammatically correct but also contextually tailored to the target. They often replicate the tone, signature templates, and even writing patterns of known contacts, making them appear genuine. Such phishing attacks are therefore more convincing and harder to detect, even for trained staff.
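To make the detection gap concrete, the following Python sketch shows a naive heuristic email filter of the kind that fluent, AI-drafted messages increasingly slip past. The trusted domains, urgency phrases, threshold, and sample messages are hypothetical and chosen purely for illustration; they are not drawn from this article’s sources and are not a production detection method.

```python
import difflib

# Hypothetical values chosen purely for illustration.
TRUSTED_DOMAINS = ["sbi.co.in", "icicibank.com", "incometax.gov.in"]
URGENCY_PHRASES = ["verify immediately", "account suspended", "act now", "final warning"]

def lookalike_domain(sender_domain: str) -> bool:
    """Flag sender domains that closely resemble, but do not equal, a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        similarity = difflib.SequenceMatcher(None, sender_domain, trusted).ratio()
        if sender_domain != trusted and similarity > 0.8:
            return True
    return False

def phishing_score(sender_domain: str, body: str) -> int:
    """Crude score: +2 for a lookalike sender domain, +1 for each urgency phrase found."""
    score = 2 if lookalike_domain(sender_domain) else 0
    score += sum(1 for phrase in URGENCY_PHRASES if phrase in body.lower())
    return score

if __name__ == "__main__":
    # A crude forgery trips the rules; a fluent AI-drafted email that avoids
    # stock urgency phrases and spelling errors may score zero.
    print(phishing_score("sbi.c0.in", "Your account suspended. Verify immediately."))
    print(phishing_score("partner-firm.example", "As discussed on our call, the revised invoice is attached."))
```

The point of the sketch is negative: rules built around misspelled domains and formulaic urgency cues lose much of their value once attackers can generate polished, context-aware text at scale.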

THE LEGAL FRAMEWORK IN INDIA

Information Technology Act, 2000 (IT Act)

The Information Technology Act, 2000 is India’s principal legislation dealing with cybercrimes. The Act penalizes hacking (Section 66), identity theft (Section 66C), and the introduction of malicious code (Section 43), but it was passed well before the advent of modern AI and does not specifically address AI-based attacks. “The IT Act, 2000, India’s primary legislation on cybercrimes and e-commerce, was in the right direction when it was passed. But it does not specifically deal with AI-based cybercrimes.”[3]

Section 75 of the IT Act extends jurisdiction to offences committed outside India where the computer, computer system, or network involved is located within India. “Section 75 of the IT Act provides limited extraterritorial jurisdiction but is hard to enforce without international harmonization.”

Indian Penal Code, 1860 (IPC) and Other Laws

The IPC fills gaps left by the IT Act for offences such as cheating, impersonation, and criminal breach of trust. These provisions can be applied to AI-based impersonation or fraud, but they do not address the technicalities of AI-generated content or AI-driven decision-making. The Digital Personal Data Protection Act, 2023 seeks to safeguard personal data but does not specifically address AI-based data breaches or profiling.

Sectoral and Regulatory Gaps

India has, so far, no law specifically addressing AI-related offences. “There is no Indian law governing use of robots, artificial intelligence, and algorithms in India except Digital Data Protection Act 2023 that only addresses processing of personal Data and AI systems will be used to extract users’ consent and stifle discriminative Data scraping practice. The act won’t be applicable on personal data that was made public by the user to whom the data belongs, and this has raised an eyebrow over the use of such data for scraping and AI development.”[4]

LEGAL AND PRACTICAL CHALLENGES

Attribution and Liability

Attributing responsibility for AI-facilitated cybercrimes is fraught with difficulty. “The problem with responsibility in the instance of use of AI by cyber crooks is that although a program is created by someone else with other motives but is assisting cyber crooks in following their selfish motives. In such a situation how can an individual be held responsible for the act which he did not perform nor had an intention of doing?”[5] Classical legal concepts grounded in human agency and direct causation are strained when an AI system operates autonomously or is exploited by unidentified perpetrators.

Jurisdictional Complexities

AI-driven offences often originate outside Indian borders or are routed through decentralized networks, cloud servers, and VPNs. “AI crimes are committed on computer networks and systems across the globe. While the activity of encoding, crypto currencies, and other technologies such as the dark web or cloud storage can lead to data loss, they also pose serious challenges to lawful administration in tracking criminals, their organization, or electronic evidence.”[6] This raises questions of territorial jurisdiction, particularly under Section 75 of the IT Act, which is difficult to enforce in the absence of streamlined mutual legal assistance treaties (MLATs).

Evidentiary and Investigative Issues

AI-generated material, such as deepfakes, makes the collection and admissibility of electronic evidence more difficult. As one legal study observes, “In AI-enabled crimes like deepfakes, criminals employ neural networks to create highly realistic audio-visual material, undermining the basis of evidentiary credibility and inappropriately burdening victims with the evidence.”[7] Law enforcement agencies need sophisticated forensic tools and skills to verify AI-altered information and trace the origin of bot attacks.
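One building block of such forensic verification is integrity hashing: recording a cryptographic digest of each exhibit at the moment of seizure so that any later alteration, including AI-based manipulation, becomes detectable. The Python sketch below illustrates only this principle; the file paths and manifest format are hypothetical, and real investigations rely on certified forensic tooling and documented chain-of-custody procedures.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_evidence(paths: list[Path], manifest: Path) -> None:
    """Write a timestamped manifest of digests taken at the time of seizure."""
    entries = [
        {"file": str(p), "sha256": sha256_of(p),
         "recorded_at": datetime.now(timezone.utc).isoformat()}
        for p in paths
    ]
    manifest.write_text(json.dumps(entries, indent=2))

def verify_evidence(manifest: Path) -> bool:
    """Re-hash each listed file and confirm it still matches the recorded digest."""
    entries = json.loads(manifest.read_text())
    return all(sha256_of(Path(e["file"])) == e["sha256"] for e in entries)

if __name__ == "__main__":
    files = [Path("seized/video_statement.mp4")]   # hypothetical exhibit path
    record_evidence(files, Path("evidence_manifest.json"))
    print("intact:", verify_evidence(Path("evidence_manifest.json")))
```

A matching digest shows only that the file has not changed since the manifest was made; it cannot show whether the content was authentic or AI-generated to begin with, which is why expert forensic analysis remains necessary.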

Data Protection and Privacy

Data is the building block of the digital economy, and AI’s appetite for data raises significant privacy concerns. “The cyber laws of most of the nations require the data processing system to be translucent. In India pre consent is a prerequisite for obtaining personal data. The entities are to be provided with the details of not only how the data was handled but also why certain choices were made by the system. Nevertheless, intricate algorithms of AI make it awfully grim to meet the transparency prerequisite.”[8] The lack of clarity on how AI systems process, store, and share data complicates compliance and enforcement.

EMERGING TRENDS AND CASES

AI-driven phishing, deepfake frauds, and identity theft are on the rise in India. India received 1.91 million cybercrime complaints in 2024, an almost ten-fold rise from 2019, with losses running into billions. “Phishing fraud, identity theft, cyber slavery are no longer abstractions. AI has made them a part of our everyday life.”[9] There have also been reported cases of AI-generated images being used for harassment and scams.

Law enforcement agencies are beginning to employ AI technologies for digital forensics and predictive threat analysis, but the legal admissibility and chain-of-custody requirements for such evidence remain undefined. A lack of dedicated training and resources for investigating AI-driven crimes also hampers effective enforcement. In some recent cases, courts have struggled to authenticate AI-generated evidence, underscoring the urgent need for clear evidentiary standards.
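At its simplest, “predictive threat analysis” begins with baselining normal activity and flagging statistically abnormal bursts. The short Python sketch below, using hypothetical hourly counts of failed logins and an arbitrary threshold, is only a toy illustration of that idea; operational systems rely on far richer features and models, and their outputs still face the admissibility questions noted above.

```python
from statistics import mean, pstdev

def anomalous_hours(history: list[int], recent: list[int], z_threshold: float = 3.0) -> list[int]:
    """Return indices of recent hourly counts lying more than z_threshold
    standard deviations above the historical average."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        sigma = 1.0  # avoid division by zero on a perfectly flat baseline
    return [i for i, count in enumerate(recent) if (count - mu) / sigma > z_threshold]

if __name__ == "__main__":
    baseline = [4, 6, 5, 7, 5, 6, 4, 5]   # hypothetical failed logins per hour on normal days
    today = [5, 6, 240, 7]                # a burst consistent with an automated attack
    print("suspicious hours:", anomalous_hours(baseline, today))
```

Even such a trivial flag raises the legal questions discussed in this article: how the alert was generated, whether the underlying logs were preserved intact, and who can testify to both.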

INTERNATIONAL COMPARISONS AND BEST PRACTICES

The European Union’s Artificial Intelligence Act introduces risk-based regulation, with requirements of transparency, accountability, and human oversight for high-risk AI systems. The United States relies on sectoral regulation and agency guidance, while the UK emphasizes risk assessment, public trust, and cross-sector collaboration. India’s approach is largely reactive, and its legal framework generally lags behind the rapid growth of AI.

Several countries have specialized cybercrime units and have invested heavily in AI-driven threat intelligence systems. International organizations such as INTERPOL and Europol facilitate cross-border coordination and information sharing, recognizing that AI-driven cybercrimes are typically transnational in scope. India, by contrast, lacks a national AI-for-cybersecurity framework, and its engagement with international frameworks remains minimal.

POLICY INITIATIVES AND ENFORCEMENT

India has established the Indian Cyber Crime Coordination Centre (I4C) to provide technical support, training, and inter-agency coordination. The forthcoming Digital India Act is expected to introduce stricter regulation of AI, data security, and digital safety, but its impact will depend on rigorous enforcement. Government departments, the private sector, and educational institutions have launched campaigns and training modules to educate the public and law enforcement officers on the threats of AI-based cybercrimes.

JUDICIAL RESPONSES AND CASE LAW

Indian courts have gradually begun to grapple with the nuances of AI-driven cybercrimes and digital evidence. A pertinent example is State of Maharashtra v. Dr. Praful B. Desai[10], in which the Supreme Court held that evidence in electronic form, such as testimony recorded by video conferencing, can be used in criminal trials, subject to its authenticity and reliability being established. This precedent supports the acceptance of new forms of digital evidence and is relevant when courts must assess AI-generated content such as deepfakes.

At the same time, false or AI-generated evidence creates real risks. The burden of proving authentication and integrity usually falls, disproportionately, on the victim or the prosecution. As AI-generated material improves, judicial scrutiny of the chain of custody, forensic authentication, and expert testimony will only intensify.

Despite these developments, there is hardly any jurisprudence dealing specifically with AI-based cybercrimes. Judicial standards are derived mainly from existing laws such as the IT Act, the Indian Evidence Act, and the IPC, none of which was drafted with AI in mind. Judicial reform and awareness are therefore the need of the hour if the legal system is to keep pace with the changing face of AI-based crimes.

RECOMMENDATIONS AND WAY FORWARD

India needs comprehensive legislation that defines AI, allocates responsibility for autonomous action, and governs the use of AI in high-risk areas. Investment is needed in digital forensic laboratories, cyber investigation tools, and police training. India should accede to international cybercrime treaties and participate in bilateral and multilateral data-sharing and joint investigation arrangements. Processes for the recovery, preservation, and admissibility of digital evidence must be clearly laid out to support prosecutions. Prevention and resilience also require public awareness and digital literacy programs.

A national AI-for-cybersecurity strategy should be created, encompassing anticipatory risk analysis, ethical principles, and continuous assessment of emerging threats. Its development should involve government, industry, and academia so that legal and technical responses keep pace with the threat landscape.

INTERNATIONAL LEGAL RESPONSES TO AI-CYBERCRIME

While India is still formulating its AI legislation, other nations have already begun creating AI legal standards. The European Union’s Artificial Intelligence Act exemplifies efforts to regulate high-risk AI applications, including software related to cybersecurity. Similarly, the United States has proposed algorithmic accountability bills to prevent the misuse of AI. These evolving legal responses from abroad can serve as points of reference for Indian policymakers and lawmakers.

CONCLUSION

AI-based cybercrimes mark a paradigm shift in the threat landscape, challenging the adequacy of existing legal frameworks. India has made significant strides in updating its cyber laws and data protection regime, but sweeping gaps remain in addressing the new challenges posed by AI. Dynamic legislative adaptation, capacity building, and cross-border cooperation are needed to keep the legal system nimble and effective against fast-evolving AI-based threats.

REFERENCES

  1. Times News Network, AI-Driven Cybercrime Threatens India’s Digital Future, ₹23,000 Crore Lost in 2024, Times of India (Jan. 30, 2024), https://timesofindia.indiatimes.com/india/ai-driven-cybercrime-threatens-indias-digital-future-rs-23000-crore-lost-in-2024/articleshow/106997572.cms (last visited July 12, 2025).
  2. New Indian Express, AI Driving Force Behind 82.8 Per Cent of Phishing Emails in Karnataka, New Indian Express (Feb. 2, 2024), https://www.newindianexpress.com/states/karnataka/2024/feb/02/ai-driving-force-behind-phishing-emails (last visited July 12, 2025).
  3. Seth Associates, The Cyber Legislations Governing AI – India & EU, https://www.sethassociates.com/cyber-legislations-ai-india-eu (last visited July 12, 2025).
  4. Woxsen L. Rev., Camouflage of AI in Cyber Crimes Vis-à-Vis Legal Issues and Challenges, https://www.woxsen.edu.in/lawreview/camouflage-ai-cyber-crimes (last visited July 12, 2025).
  5. IJFMR, Legal Challenges of Artificial Intelligence in India’s Cyber Laws, https://www.ijfmr.com/papers/2023/AI_Cybercrime_India.pdf (last visited July 12, 2025).
  6. Cyber Law and Emerging Use of Artificial Intelligence, Legal Service India, https://www.legalserviceindia.com/legal/article-1515-cyber-law-and-emerging-use-of-artificial-intelligence.html (last visited July 12, 2025).
  7. Legal Challenges of Deepfake Technology and AI-Generated Content in India, Jus Corpus, https://www.juscorpus.com/legal-challenges-of-deepfake-technology-and-ai-generated-content-in-india/ (last visited July 12, 2025).
  8. Legal Perspective of Cybercrime in India, Lawsimpl.AI, https://www.lawsimpl.ai/legal-perspective-of-cybercrime-in-india (last visited July 12, 2025).
  9. India Today, Assam Man Circulates AI-Morphed Images of Ex-Girlfriend, Arrested, India Today (Feb. 7, 2024), https://www.indiatoday.in/india/story/ai-morphed-images-assam-arrest-2488314-2024-02-07 (last visited July 12, 2025).
  10. State of Maharashtra v. Dr. Praful B. Desai, (2003) 4 S.C.C. 601 (India).


Disclaimer: The materials provided herein are intended solely for informational purposes. Accessing or using the site or the materials does not establish an attorney-client relationship. The information presented on this site is not to be construed as legal or professional advice, and it should not be relied upon for such purposes or used as a substitute for advice from a licensed attorney in your state. Additionally, the viewpoint presented by the author is personal.

