ABSTRACT
This article is written by Kamal Singh Rautela, an intern at Legal Vidhiya.
Who today is untouched by modern digital media and its applications? Hardly anyone remains in complete ignorance, whether through social media, news, search engines or any other source shaped by digitalisation that feeds us information from across the world. It would, however, be presumptuous to assume that people are immune to the burning side of the furnace they warm themselves by: most are aware that there is a darker side to this digital media, which not only connects and enriches its users but can also leave them in a state of severe suffering. There are numerous ways in which such harm is inflicted by third parties who possess the mens rea to do so. In this article, we focus on the impact of the widely discussed problem of deepfakes, which is rising across the world on a daily basis, and the related cybercrimes affecting society at large.
KEYWORDS:
Deepfakes, digital, CNN, LSTM, SVM, Vigilance, GAN
INTRODUCTION:
It is increasingly clear that the world is facing the harsh after-effects of the meticulous, ever-developing technologies of the digital media sector. Deepfakes have become one such issue of worldwide concern because of the severe problems they have caused since being taken up by malicious digital media actors. The term "deepfake" originated in 2017, when an anonymous online user adopted it as a username and used the technique to create and distribute pornographic content[1]. The underlying technology, however, dates to 2014, when Ian J. Goodfellow et al.[2] published the research article introducing the Generative Adversarial Network[3] (GAN), a machine learning architecture that is now a cornerstone of generative AI. The GAN is the key component in the technology that creates near-identical photos and videos, which we call deepfakes. With its help, perpetrators create fake content and misuse it in numerous ways that deeply affect society; those impacts are discussed in this article.
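To make the mechanism concrete, the following is a minimal, illustrative PyTorch sketch of the adversarial training idea behind a GAN: a generator learns to turn random noise into synthetic samples while a discriminator learns to tell them apart from real data. The network sizes, the flattened 784-dimensional input and the dummy batch are assumptions made purely for illustration; this is not the architecture of Goodfellow et al. or of any actual deepfake tool.

```python
# Illustrative GAN sketch (assumed, simplified setup): a generator tries to fool
# a discriminator, and the discriminator tries to spot the fakes.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 face crop (assumed sizes)

generator = nn.Sequential(          # maps random noise -> synthetic sample
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(      # maps a sample -> probability it is real
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    """One adversarial update: the discriminator learns to spot fakes,
    then the generator learns to fool it."""
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Update the discriminator on real and generated samples
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss(discriminator(real_batch), real_labels) + \
             loss(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update the generator so its fakes are classified as real
    g_loss = loss(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one training step on a dummy batch standing in for real data
d_l, g_l = train_step(torch.randn(32, data_dim))
```

In a real deepfake pipeline the same adversarial loop is applied to face images at far higher resolution, with convolutional networks in place of the simple linear layers shown here.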
OBJECTIVE:
This article aims to highlight the dark side of digital media. It analyses how deepfakes related to news and information raise serious concerns in society. Further, it examines crimes that harm even national security by creating distrust among the masses. Cybercrime of this kind affects not only society at large but also misuses individuals' identities. To make the analysis more rigorous, the article draws on various research studies, news reports, journals and other sources concerned with this topic.
DEEPFAKES: MISUSE AND IMPACTS
Deepfakes have emerged as a rising concern for governments across the world after numerous incidents of fake news and fabricated information caused disturbances among their populations, whether it be the $35 million heist from a Japanese company in 2020, carried out by using deep-voice technology to clone the voice of the company's director[4], or the ₹40,000 lost by a 73-year-old Kerala man to an AI-based deepfake fraud[5].
Here we will try to understand the various ways in which deepfakes are misused and their consequences for different groups in society:
Sextortion:
With the help of these technologies, digital media perpetrators create fake videos and images of men or women and then try to trap their targets in a situation where the perpetrators obtain some sensitive information or secret about their prey. This allows them to demand money or compromising pictures from the victim, giving them further threads with which to control or coerce the victim for their own benefit. An article in the Times of India describes instances of such attempts to extort money: the perpetrators first befriended the victims on social media and then made video calls using a female avatar whom the victims believed to be a real person but who later turned out to be a deepfake creation[6].
Such incidents are alarming for outgoing users who like to connect with new people on social media platforms. According to NCRB reports, fake news cases surged by 214% in 2020[7]. This should be taken as a warning: people should be careful while connecting with strangers and trusting their sweet talk. Once trapped, a victim will find it hard to escape due to social pressure, fear of embarrassment or other reasons. Awareness and the presence of mind to defuse the situation beforehand remain the best defence.
Fake Pornography:
Deepfake technology, with its ability to create fake yet authentic-looking images and videos, has also fuelled the dark industry of pornography. Deeptrace Labs, an AI-based firm, published a report in 2019 recording 14,678 deepfake videos available online, of which 96% were pornographic content[8]. Perpetrators create humiliating images or videos of celebrities, or of any specific person, simply by using publicly accessible data such as images and voice recordings, building a replica and weaving those data threads into pornographic content. This also allows them to spread distorted ideas among an addicted audience, particularly about women, who are the prime targets of these predators of the modern digital world of deepfakes.
The report further found that 100% of the deepfake pornography on the top five pornography websites featured women, of whom 99% were from the entertainment sector (actresses, musicians, etc.), that is, celebrities[9]. It is shocking for most people to learn that there are websites and even applications where an image of a fully clothed woman can be uploaded and returned in a fabricated nude form[10]. It is deeply disturbing that any random pervert can now simply obtain, or somehow manage to get, a girl's photograph or video and convert it into a nude avatar. This can ultimately make her life a living hell if that person chooses to threaten or blackmail her.
National Security Risks and Election Interference:
This technology can also put national security in a compromising position: law and order can be disrupted by sowing fear or distrust among the masses, whether by forging a video of a prominent authority announcing war on the world stage, by creating content during an armed conflict that deepens mistrust of the government concerned, or simply by fabricating videos of elected officials or public figures engaged in inappropriate behaviour[11]. Creating such fake videos is hazardous. A New York Times article highlighted a case in which two news anchors reporting the news were in fact deepfake creations[12]. In the United States, lawmakers have asked the Director of National Intelligence to assess such threats to national security, arguing that the technology could be used to spread misinformation, exploit social divisions and create political unrest[13].
Moreover, the U.S. Senate Intelligence Committee concluded that Russia interfered in the 2016 U.S. presidential election[14]. In 2020, The Hindu published an article on an incident in which videos of prominent BJP members criticizing Mr. Arvind Kejriwal and urging people to vote for them, distributed across 5,800 WhatsApp groups, were found to have been created with this technology[15]. These events paint a clear picture of a world set ablaze by the misuse of this technology, which is hard to detect in the first place and easily disrupts public order, since a population already nursing grievances against its government can erupt in response to such provocations. It is even more dangerous when religious, ethnic or marginalized groups are provoked by such fake videos. One can hardly imagine how catastrophically this could shape world politics.
Identity Theft:
One of the most prominent methods used by deepfake perpetrators is to use someone's images, videos and voice to create a digital version of that person, which is then deployed in scams and frauds targeting the person's acquaintances. Humans are emotion-driven beings who build their lives on trust and hope, and these digital hunters prey on exactly that. At the initial stage they collect the required data and information about their future victim and the victim's acquaintances; later they stage a situation, through a video call or an online meeting, to convince the target that the person on screen is someone they know, persuading them to transfer money or sometimes to share sensitive images of themselves, which can in turn lead to the sextortion discussed earlier. According to a Regula survey report, 37% of organisations have experienced deepfake voice fraud and 29% have fallen victim to deepfake videos[16]. The severity of these attacks can be gauged from recent instances, such as the cryptocurrency platform Binance being targeted by scammers using deepfakes[17], or the case of a man from northern China who was tricked, through a deepfake video call impersonating his friend, into transferring 4.3 million yuan (around ₹5 crore)[18].
Imagine that tomorrow a long-known relative or friend asks you for money over a video call, and you cannot tell the difference with the naked eye: would you not try to help? Or imagine that a beautiful girl or boy you met on a dating site or on social media becomes your lover and asks for some financial help: would you refuse? Or that your father or mother asks you for money for some urgent purpose from a new number or account: would you have any reason not to believe them?
These are precisely the situations in which people are most easily tricked, because it is human emotion that the perpetrators exploit. Awareness is only a partial solution; new systems to detect such frauds, along with dedicated legislation, will be far more helpful to the general public.
WAY FORWARD:
The authorities are not wholly ignorant of the situation and are trying to deal with it by bringing in regulations and statutes. The EU has introduced the Strengthened Code of Practice on Disinformation, with 34 signatories, to tackle these crimes by empowering users, imposing transparency obligations on AI systems, and more[19]. The US has introduced the Deepfake Task Force Act[20] to address such ongoing frauds. India does not yet have specific laws, but perpetrators can still be punished under various existing domestic statutes, such as those on intellectual property rights and the Information Technology Act.
Apart from the legislative efforts, many advanced digital methods are being developed to distinguish a forged image or video from an original, such as CNN detection of GAN-generated face images based on cross-band co-occurrence analysis[21], Support Vector Machine (SVM) classifiers[22], Long Short-Term Memory (LSTM) based models such as CLRNet[23], and CNN-with-LSTM models evaluated on datasets such as FaceForensics++, Celeb-DF and the DeepFake Detection Challenge[24]; a simplified sketch of the last approach is given below.
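To illustrate the CNN-with-LSTM detection approach in principle, here is a minimal PyTorch sketch in which a convolutional backbone extracts features from each video frame and an LSTM aggregates them over time into a single real-versus-fake score. The choice of ResNet-18, the hidden size and the dummy clip are assumptions for illustration only; this is not the exact model benchmarked on FaceForensics++, Celeb-DF or the DeepFake Detection Challenge.

```python
# Illustrative CNN + LSTM deepfake detector sketch (assumed architecture).
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmDetector(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-frame feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep features, drop the classifier
        self.cnn = backbone
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # real-vs-fake logit

    def forward(self, clips):                      # clips: (batch, frames, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # last hidden state summarises the clip
        return self.head(h_n[-1])                  # apply sigmoid for a probability

# Example: score a dummy batch of 2 clips, 8 frames each, 112x112 pixels
detector = CnnLstmDetector()
logits = detector(torch.randn(2, 8, 3, 112, 112))
print(torch.sigmoid(logits))                       # ~0.5 before any training
```

A model of this kind would, of course, have to be trained on labelled real and forged clips before its scores carry any meaning; the LSTM's role is to pick up temporal inconsistencies that single-frame detectors can miss.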
This shows that this challenging roller-coaster ride, with all its ups and downs, can be endured, but we must maintain vigilance and awareness of such harmful incidents, knowing that we can tackle them and that we are not alone in this troubling phase of digital media. No one who harms others escapes unharmed.
CONCLUSION:
Deepfakes have evolved drastically over time and will keep evolving, but we are not defenceless. The trust eroded by the anonymity of these deepfake videos and voices must be patiently rebuilt. People must have confidence in the government and should report and publicise such incidents; the media should apply existing means of verifying the authenticity of news while promoting awareness through advertising; and pressure groups at the domestic and international levels should contribute in the same way.
People should know how they can be psychologically, financially or even physically harmed as a consequence of such acts. Until the problem can be completely eradicated, people must take care of themselves by avoiding, as far as possible, untrusted websites, connections with unknown people and calls from unknown numbers. The best way to stay secure from these attacks is not to let them target you in the first place.
[1] Abhishek Chatterjee, Deepfake technology: how and why China is planning to regulate it, The Hindu, available at https://www.thehindu.com/sci-tech/technology/deepfake-technology-how-and-why-china-is-planning-to-regulate-it/article66277740.ece, last seen on 12/08/2023
[2] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville & Yoshua Bengio, Generative Adversarial Nets, Neural Information Processing Systems, available at https://proceedings.neurips.cc/paper_files/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf, last seen on 12/08/2023
[3] Generative Adversarial Networks, Gartner Information Technology Gartner Glossary, available at https://www.gartner.com/en/information-technology/glossary/generative-adversarial-networks-gan, last seen on 12/08/2023
[4] Thomas Brewster, Fraudsters Cloned Company Director’s Voice In $35 Million Heist, Police Find, Forbes, available at https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=af31ce075591, last seen on 12/08/2023
[5] Vishnu Varma, Kerala man loses ₹40,000 to AI-enabled deep-fake fraud, Hindustan Times, available at https://www.hindustantimes.com/india-news/deepfake-scammers-trick-indian-man-into-transferring-money-police-investigating-multi-million-rupee-scam-101689622291654.html, last seen on 12/08/2023
[6] Ashish Chauhan, Ahmedabad: Deepfakes replace women on sextortion calls, Times of India, available at https://timesofindia.indiatimes.com/city/ahmedabad/deepfakes-replace-women-on-sextortion-calls/articleshow/86020397.cms, last seen on 12/08/2023
[7] Ministry of Home Affairs, Government of India, Crime in India 2020 Statistics volume- I, available at https://ncrb.gov.in/sites/default/files/CII%202020%20Volume%201.pdf, last seen on 12/08/2023
[8] Henry Ajder, Giorgio Patrini, Francesco Cavalli & Laurence Cullen, The State Of Deepfakes: Landscapes, threats, and impact, Deeptrace, available at https://regmedia.co.uk/2019/10/08/deepfake_report.pdf, last seen on 12/08/2023
[9] Ibid
[10] Samantha Cole, This Horrifying App Undresses a Photo of Any Woman With a Single Click, Vice, available at https://www.vice.com/en/article/kzm59x/deepnude-app-creates-fake-nudes-of-any-woman, last seen on 12/08/2023
[11] Deep Fakes and National Security, Congressional Research Service, available at https://crsreports.congress.gov/product/pdf/IF/IF11333, last seen on 12/08/2023
[12] Adam Satariano and Paul Mozur, The People Onscreen Are Fake. The Disinformation Is Real., The New York Times, available at https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html, last seen on 12/08/2023
[13] Jack Langa, Deepfakes, Real Consequences: Crafting Legislation To Combat Threats Posed By Deepfakes, 101, Boston University Law Review, 761, 769 (2021), available at https://www.bu.edu/bulawreview/files/2021/04/LANGA.pdf, last seen on 12/08/2023
[14] Intelligence Community Assessment: Assessing Russian Activities and Intentions in Recent U.S. Elections, Senate Intelligence Committee, available at https://www.intelligence.senate.gov/sites/default/files/documents/ICA_2017_01.pdf, last seen on 12/08/2023
[15] John Xavier, Deepfakes enter Indian election campaigns, The Hindu, available at https://www.thehindu.com/news/national/deepfakes-enter-indian-election-campaigns/article61628550.ece, last seen on 12/08/2023
[16] Regula survey: a third of businesses hit by deepfake fraud, Regula Forensics, available at https://regulaforensics.com/news/a-third-of-businesses-hit-by-deepfake-fraud/, last seen on 12/08/2023
[17] Luke Hurst, Binance executive says scammers created deepfake ‘hologram’ of him to trick crypto developers, Euro News, available at https://www.euronews.com/next/2022/08/24/binance-executive-says-scammers-created-deepfake-hologram-of-him-to-trick-crypto-developer, last seen on 12/08/2023
[18] Danny D’Cruze, AI Scam Alert! Scammer steals over Rs 5 crore from a man using his friend’s face, Business Today, available at https://www.businesstoday.in/technology/news/story/ai-scam-alert-scammer-steals-over-rs-5-crore-from-a-man-using-his-friends-face-382379-2023-05-23, last seen on 12/08/2023
[19] 2022 Strengthened Code of Practice on Disinformation, European Commission, available at https://digital-strategy.ec.europa.eu/en/library/2022-strengthened-code-practice-disinformation, last seen on 12/08/2023
[20] Deepfake Task Force Act, CONGRESS GOV, available at https://www.congress.gov/bill/117th-congress/senate-bill/2559/text, last seen on 12/08/2023
[21] Mauro Barni, Kassem Kallas, Ehsan Nowroozi & Benedetta Tondi, (2020), CNN Detection of GAN-Generated Face Images based on Cross-Band Co-occurrences Analysis, 2020 IEEE International Workshop on Information Forensics and Security (WIFS) available at https://www.researchgate.net/publication/343256045_CNN_Detection_of_GAN-Generated_Face_Images_based_on_Cross-Band_Co-occurrences_Analysis, last seen on 12/08/2023
[22] Harsh Agarwal, Ankur Singh & Rajeswari Devarajan, (2021), Deepfake Detection Using SVM, 2021 Second International Conference on Electronics and Sustainable Communication Systems (ICESC), available at https://www.researchgate.net/publication/354810411_Deepfake_Detection_Using_SVM, last seen on 12/08/2023
[23] Shahroz Tariq, Sangyup Lee & Simon S. Woo, A Convolutional LSTM based Residual Network for Deepfake Video Detection, Cornell University, available at https://arxiv.org/abs/2009.07480, last seen on 12/08/2023
[24] Suganthi ST, Mohamed Uvaze Ahamed Ayoobkhan, Krishna Kumar V, Nebojsa Bacanin, Venkatachalam K, Hubálovský Štěpán & Trojovský Pavel, Deep learning model for deep fake face recognition and detection, National Library of Medicine, available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044351/, last seen on 12/08/2023