
RESTRICTION ON MOVEMENT, FREE MOVEMENT OF HATE: ANALYSING THE ROLE OF SOCIAL MEDIA IN VILIFYING MUSLIMS DURING LOCKDOWN AND THE LEGAL RESPONSE TO IT


This article is written by Sai Sriharsha Dimili of Andhra University, an intern under Legal Vidhiya

Abstract:

This research article explores the phenomenon of vilifying Muslims through social media during lockdowns and the subsequent legal responses. The study investigates the impact of movement restrictions on the dissemination of hate speech targeting the Muslim community. By employing a mixed-methods research approach, combining qualitative content analysis and legal case studies, the article provides an in-depth analysis of the role of social media in propagating hatred. The findings highlight the challenges posed by unrestricted online hate speech and the legal measures taken to counteract it. The article concludes with recommendations for future strategies to address this issue effectively.

Keywords:

Social Media, Hate Speech, Muslims, Lockdown, Legal Response, Movement Restrictions, Vilification, Online Discourse.

Introduction:

The advent of digital communication and social media platforms has transformed how information is disseminated. However, this technological advancement has also facilitated the proliferation of hate speech targeting marginalised communities, and the COVID-19 pandemic and the subsequent lockdowns further exacerbated the problem. The pandemic, which began in early 2020, profoundly impacted health, livelihoods, and societal cohesion. It deepened existing divisions and strained democratic structures. Amid increased marginalization, a significant outcome was the normalization of anti-minority sentiment, notably towards Muslims, India’s largest minority community. Concurrent with the pandemic, there was a surge in Islamophobia that permeated all aspects of life: actions ranging from social ostracism to incitement to violence gained traction, and a community already dealing with discriminatory policies was inundated with hatred. This article delves into hate speech, its role in endorsing discrimination and violence, and the nexus between Islamophobia, hate speech, and anti-Muslim violence[1]. The influence of platforms like WhatsApp and Facebook in amplifying hate through fake news is explored, alongside the challenges of regulating these influential platforms. The conflict between hate speech and free speech, as well as recent legal perspectives, is examined. Instances of resistance against hate speech are also highlighted, underscoring the need to address the concerns involved in countering the proliferation of hatred.

Research Methodology:

To address the research objectives, a mixed-methods approach is adopted. First, qualitative content analysis is conducted on a representative sample of social media posts containing hate speech directed towards Muslims during lockdowns. This analysis provides insights into the nature, extent, and patterns of hate speech dissemination. Second, legal case studies are examined to understand the legal responses taken against perpetrators of online hate speech, and a comparative analysis of these cases sheds light on the efficacy of legal measures in addressing the issue. A minimal sketch of how the coding step of such a content analysis might be operationalised is given below.
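The sketch below is purely illustrative of the coding step in a qualitative content analysis: it tallies how many posts in a sample fall under each category of a coding frame. The categories, keywords, and sample posts are hypothetical placeholders and are not drawn from the study; a real analysis would rely on a researcher-developed codebook and trained coders rather than fixed keyword lists.

```python
from collections import Counter

# Hypothetical coding frame: category -> indicative keywords.
# A real content analysis would use a codebook developed and
# validated by researchers, not a fixed keyword list.
CODING_FRAME = {
    "stigmatising label": ["superspreader", "corona jihad"],
    "call for exclusion": ["boycott", "ban them"],
    "misinformation": ["deliberately spreading", "conspiracy"],
}


def code_post(text: str) -> list[str]:
    """Return every category whose keywords appear in the post."""
    lowered = text.lower()
    return [
        category
        for category, keywords in CODING_FRAME.items()
        if any(keyword in lowered for keyword in keywords)
    ]


def summarise(posts: list[str]) -> Counter:
    """Count how many posts in the sample fall under each category."""
    counts: Counter = Counter()
    for post in posts:
        counts.update(code_post(post))
    return counts


if __name__ == "__main__":
    # Invented sample posts, for demonstration only.
    sample_posts = [
        "They are superspreaders, boycott their shops",
        "The 'corona jihad' story is a conspiracy theory",
        "Stay home and stay safe, everyone",
    ]
    for category, count in summarise(sample_posts).most_common():
        print(f"{category}: {count}")
```

Counts like these indicate only how often each code appears in the sample; interpreting their significance still requires the qualitative reading described above.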

Literature Review:

Over the last decade, notable occurrences such as the EU referendum, terrorist incidents, and debates about immigration and national identity have integrated racialized anti-Muslim themes into mainstream political dialogue. A 2015 report[2] documented a marked increase in anti-Muslim hate crimes after the Europe-wide terrorist attacks of that year, particularly in the wake of the November attacks in France. A subsequent study examined how the outcome of the 2016 EU referendum acted as a significant catalyst for racist and religiously motivated hate crimes[3]. The rhetoric employed in anti-Muslim hate crimes is often reappropriated by self-proclaimed leaders of larger historical ideological conflicts across Europe; some emphasize Muslim identity and visible religious symbols as cultural threats, with a particular focus on women wearing hijabs. The white supremacist terrorist attack in Christchurch, New Zealand, in March 2019, which claimed 51 Muslim lives, had a profound ripple effect on Muslims in other regions[4]. A 2020 interim report analyzed the surge in cases immediately after the Christchurch attack, scrutinizing the language used in the various cases linked to the incident.

An analysis conducted during 2014-15 explored the impact of the British news media on anti-Muslim and anti-immigrant bias. A subsequent study in 2018 revealed that mainstream news media disproportionately linked Muslims to negative opinions and events compared with other religious groups, highlighting the tendency of the national press to portray Muslim communities as a threat to perceived British values. During the pandemic, concerns arose among the public that the use of images of Muslim communities in general Covid-19 coverage might unfairly label them as non-compliant with lockdowns or as potential spreaders of the virus, fuelling suspicion.

Preexisting Structural Inequalities Exposed by the Pandemic:

The findings indicate that Muslims encountered heightened discrimination in various aspects of their daily lives during the Covid-19 pandemic, despite the widespread lockdown measures in place throughout the year. Multiple forms of evidence point to the pandemic not only exposing existing inequalities but also exacerbating them for Muslims. An illustrative example is the 2020 report by Public Health England, which highlighted disparities in how different ethnic groups were impacted by the pandemic[5]. The report demonstrated that individuals of Bangladeshi ethnicity faced double the risk of death after contracting Covid-19 compared to those of White British ethnicity, while people of other Asian ethnicities, such as Indian and Pakistani, and people of Black ethnicities faced between 10% and 50% higher risk. These disparities were also observed within the working-age population across different ethnicities.

Evolving Tendencies of Anti-Muslim Hatred on Social Media:

Some of the major online behavioural tendencies are as follows[6]:

Numerous instances of stigma encompass experiences such as status loss, stereotyping, labelling, and discrimination, particularly in contexts where power dynamics persist. These instances operate within the realm of interpersonal interactions as well as within structural systems, leading to biases and unequal access to healthcare. One illustrative case is the stigma associated with Covid-19, which attached the virus to ethnicity and nationality, notably affecting Chinese, East Asian, and Southeast Asian (ESEA) communities. This led to a surge in racist violence, discriminatory actions, social avoidance, denial of services or healthcare, and unequal healthcare provision. Stigmatized cultural identity perpetuates assumptions of inferiority, hindering societal inclusion, oversimplifying multifaceted identities into harmful one-dimensional attributes, and resulting in both economic and interpersonal forms of discrimination. These dynamics have repercussions in workplaces and job-seeking processes. Historical instances demonstrate how minority ethnic groups, such as Jewish, Chinese, and Black communities, faced blame, dehumanization, stigmatization, and marginalization during certain epidemics in the United States. Mainstream framing perpetuated explicit discrimination, stigma, and racism against these groups, with some media outlets even disregarding mourning periods.

Stigma is a pivotal instrument in generating and perpetuating power dynamics and control relationships, becoming deeply interwoven with the mechanisms of social inequality. This occurs particularly when existing hostilities are combined with the societal framing of diseases and illnesses, which involves moral judgments and the context of the virus’s emergence. Stigma also contributes to elevated social vulnerability and levels of discrimination, impacting those susceptible to contracting the disease. Furthermore, history demonstrates how associating specific diseases with geographical areas reinforces discriminatory behaviours and stigmatization. In 2015, the World Health Organization issued guidance on best practices for naming diseases to prevent offending various cultural, social, national, regional, professional, or ethnic groups[7].

Hate speech does more than inflict immediate harm; it lays the groundwork for animosity. Even in the absence of explicit calls for prejudice or violence, hate speech cultivates an environment that is more accepting of such behaviours and, at the very least, provides them with tacit endorsement. Through orchestrated campaigns of dehumanization involving offensive language, false information, sensationalized news, and widespread dissemination via both mainstream and social media platforms, certain social groups are depicted as menaces to society or even to national security. By distorting historical facts, mythology, and cultural narratives without proper context, this strategy strives to establish the dominance of the majority while inventing an internal opponent. In the Indian context, this majoritarian, Hindu-centric ideology normalizes bias and, in some cases, even physical aggression against those perceived as contravening its exclusionary principles. The manipulation of historical events, as viewed through the lens of the majority, constructs a narrative of their alleged victimization, ultimately marginalizing minority groups as outsiders. When the ruling party adopts and endorses this ideology, and state authorities are complicit, conflicts rooted in communal differences escalate.

Legal Frameworks:

The role of social media in disseminating hateful or discriminatory content, including against Muslims, has been a topic of concern for quite some time. During the COVID-19 lockdowns, there were instances where false information and harmful narratives were spread, often exacerbating existing tensions. These incidents can fall under hate speech, incitement to violence, or other legal categories depending on the jurisdiction.

It is important to note that laws and cases vary from country to country, so specific instances may have different legal implications based on the jurisdiction they occur in. Some countries have strict laws against hate speech and incitement, and social media platforms may also have their own policies for dealing with such content.

Legal Measures and Recommendations:

Some general legal measures and examples that were relevant include the following:

Platform accountability legislation. Germany’s Network Enforcement Act (NetzDG) obliges large social media platforms to remove manifestly illegal hate speech within statutory deadlines, illustrating a legislative attempt to balance the fight against online hate with the protection of free speech[10].

Enforcement of existing penal provisions on incitement. The experience of Sri Lanka, where hate speech combined with impunity fuelled anti-Muslim violence, shows that statutory prohibitions are of little value without consistent investigation and prosecution[9].

Pressure on platforms to invest in moderation. The explosion of hate speech on Facebook in Myanmar during the Rohingya crisis prompted demands that platforms deploy adequate local-language moderation and be transparent about how harmful content spreads[8].

It is important to note that the effectiveness of these measures can vary, and there are ongoing debates about the balance between free speech and combating harmful content. Additionally, laws and recommendations may differ significantly based on the country and jurisdiction.

Way Forward and Recommendations:

In the quest to rectify harm, social media platforms need to allocate resources to prioritize the mental well-being of marginalized groups by improving access to support services, including licensed therapists, both in person and online. The adverse health impacts linked to consuming traumatic online content, particularly in the context of racism, underscore the importance of these efforts[11]. Additionally, platforms should enhance the visibility of support services after distressing events and find effective ways to inform their users about available assistance.

Google-owned platforms such as YouTube must intensify their efforts to educate users about reporting hate speech and harmful content. They should also adopt greater transparency in their decision-making, ensuring that valid content is not unjustly removed because of malicious reporting. Google must take additional measures concerning ideologically driven sites that propagate disinformation, some of it anti-Muslim, while benefiting from their association with the credibility of Google News.

TikTok should continue building upon its positive community-building initiatives and empower users to report offensive content, both in videos and hashtags. Enhancing moderation tools to curb hate speech within user comments is crucial.

During the initial national lockdowns in 2020, significant social media platforms like Twitter and Facebook witnessed the proliferation of anti-Muslim and Islamophobic disinformation campaigns[12]. While these platforms made efforts to debunk such misinformation and flag inappropriate content, Twitter needs to enhance its mechanisms for users to report misleading and harmful content. Its introduction of a misleading reporting function in certain countries is a positive step, but more comprehensive measures are overdue, including options to report disinformation, hate speech, and biased acts.

Addressing the issue of far-right agitators evading bans on platforms like Twitter is imperative. These individuals often create alias accounts or exploit the credentials of others to continue spreading their content, thus necessitating more stringent controls[13].

To counter disinformation on Facebook, fact-checking content is helpful, but further measures are needed to discourage users from sharing false information. Instances of far-right agitators maintaining verified accounts on affiliated platforms like Instagram after being banned on Facebook reveal significant oversights that warrant investigation and correction.

Newspapers can play a role in combatting misinformation about Muslim communities by incorporating the publication date into image previews when articles are shared on social media. This proactive step could limit the reach of individuals aiming to exploit older stories to stigmatize Muslims. Combating falsehoods is a joint responsibility, necessitating collaboration between newspapers and social media platforms; while this measure may not eliminate confirmation bias, it could deter some individuals from sharing distorted information.

Conclusion:

This article has emphasized the impact of movement restrictions on hate speech against Muslims, the challenges posed by unregulated online hate, and the legal measures needed to counter it. Employing mixed methods, it has analyzed social media’s role in propagating hate and anti-Muslim sentiment. It has also underscored the urgency of addressing anti-minority sentiment exacerbated by the pandemic, the need for inclusive approaches, and recommendations for combatting hate speech. Ultimately, it advocates collective efforts to create an inclusive digital space that protects marginalized communities from discrimination and violence while fostering understanding and tolerance.


[1] Najib, Kawtar and Peter Hopkins. “Where does Islamophobia take place and who is involved? Reflections from Paris and London,” Social & Cultural Geography 21, No. 4 (2020), Pg. 458

[2] Faimau, G. (2015), “The Conflictual Model of Analysis in Studies on the Media Representation of Islam and Muslims: A Critical Review,” Pg. 321-335.

[3] Ahmed, S. and Matthes, J. (2016), “Media representation of Muslims and Islam from 2000 to 2015: A Meta-Analysis,” International Communication Gazette.

[4] Yaqin, A., Forte, A. and Morey, P (2019), “Contesting Islamophobia: Anti-Muslim Prejudice in Media, Culture and Politics,” Bloomsbury

[5] Public Health England, Annual Report and Accounts 2020/21, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051756/phe-annual-report-and-accounts-2020-to-2021-web-accessible.pdf

[6] Imran Awan and Irene Zempi (2020),  “The Affinity Between Online and Offline Anti-Muslim Hate Crime: Dynamics and Impacts”, https://www.ohchr.org/sites/default/files/Documents/Issues/Religion/Islamophobia-AntiMuslim/Civil%20Society%20or%20Individuals/ProfAwan-3.pdf

[7] WHO Issues Best Practices For Naming New Human Infectious Diseases, https://www.who.int/news/item/08-05-2015-who-issues-best-practices-for-naming-new-human-infectious-diseases#:~:text=The%20best%20practices%20state%20that,disease%20manifests%2C%20who%20it%20affects%2C (last visited 12 August 2023)

[8] Facebook Hate Speech Exploded in Myanmar During Rohingya Crisis, https://www.theguardian.com/world/2018/apr/03/revealed-facebook-hate-speech-exploded-in-myanmar-during-rohingya-crisis (last visited 12 August 2023)

[9] In Sri Lanka, Hate Speech and Impunity Fuel Anti-Muslim Violence, https://www.aljazeera.com/news/2018/3/13/in-sri-lanka-hate-speech-and-impunity-fuel-anti-muslim-violence (last visited 12 August 2023)

[10] Germany’s Balancing Act: Fighting Online Hate While Protecting Free Speech, https://www.politico.eu/article/germany-hate-speech-internet-netzdg-controversial-legislation/ (last visited 12 August 2023)

[11] Henry A. Willis, Brendesha M. Tynes, Matthew W. Hamilton and Ashley M. Stewart. “Race-Related Traumatic Events Online and Mental Health Among Adolescents of Colour,” Journal of Adolescent Health 65, No. 3 (2019), Pg. 371-377

[12] Najib, Kawtar and Peter Hopkins. “Where does Islamophobia take place and who is involved? Reflections from Paris and London,” Social & Cultural Geography 21, No. 4 (2020), Pg. 458-478

[13] Twitter Tests ‘Misleading’ Post Report Button for First Time, https://www.bbc.com/news/technology-58258377 (last visited 12 August 2023)
