As of September 2022, Bangladesh had a total of 127.6 million Internet users, about 57.5 million of whom use the social media platform Facebook. A whopping 9 million people joined social media platforms for the first time during 2020-2021, when the pandemic kept most people at home under lockdown.
As per Napoleon Cat statistics for 2022, Facebook holds the largest share of the social media market in Bangladesh with 49 million users, about 28.2% of the total population. Of these users, 67.7% are male and 32.3% are female, and people aged 18 to 24 constitute the largest chunk of this user group.
YouTube is a bit behind with 34.5 million users. In Bangladesh, 42.1 percent of YouTube's audience was female, while 57.9 percent was male.
Instagram is a distant third with 4.9 million users. In early 2022, 33.0 percent of Instagram's ad audience in Bangladesh was female, while 67.0 percent was male. Instagram is especially popular among the younger generation (aged 18 to 24).
LinkedIn has more than 5 million users in Bangladesh. In early 2022, 24.9 percent of LinkedIn’s audience in Bangladesh was female, while 75.1 percent was male.
Of all these platforms, Facebook remains the most popular and most influential social media platform in Bangladesh. Posts on Facebook and YouTube have also triggered a number of incidents of communal violence in the country.
Facebook:
Facebook’s official policy[1] considers hate speech as a direct attack on people as opposed to concepts or institutions. ‘Attack’ is defined as violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation based on that person’s protected characteristics such as race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.
Section 3 of the Terms of Service[2], 'Your commitments to Facebook and our community', specifies what one can share and do on Meta products. Although it does not explicitly mention hate content or hate speech, it states that one cannot post any content that is unlawful, misleading, discriminatory or fraudulent. Clearer references to hate content appear in the Community Standards[3], where violations are categorized in three tiers. Tier 1 deals with violent speech or support in written or visual form, and with designated dehumanizing comparisons, generalizations, or behavioral statements about certain categories of people. Tier 3 deals with content targeting a person or group of people on the basis of their protected characteristics with any of the following:
- Segregation in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting segregation.
- Exclusion in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting exclusion, defined as:
- Explicit exclusion, which means things like expelling certain groups or saying they are not allowed.
- Political exclusion, which means denying the right to political participation.
- Economic exclusion, which means denying access to economic entitlements and limiting participation in the labour market.
- Social exclusion, which means things like denying access to spaces (physical and online) and social services, except for gender-based exclusion in health and positive support groups.
- Content that describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels for the above characteristics.
Facebook’s terms of service have always reserved the right to remove hate content, and the company has repeatedly updated its terms to address harmful content. In reality, Facebook continues to be flooded with hate-filled content and has become a major source of misinformation, disinformation and fake news in Bangladesh. A number of communal riots leading to deaths, arson and physical attacks have been triggered by Facebook posts. In 2017, the Bangladesh Counter-Terrorism and Transnational Crime (CTTC) cyber unit identified[4] around 2,500 Facebook pages promoting communal hatred in Bangladesh by spreading hate speech.
Facebook says it is proactively trying to keep the platform free of hate content through several approaches[5]. These include:
- Third-party fact checking to help identify misinformation, apply prominent labels, and reduce the distribution of content or disapprove related ads if the content is rated false or partly false.
- Allowing advertisers to review individual in-stream videos, publishers and instant articles in which their ads are to be embedded.
- Refunding advertisers when ads run in videos or in instant articles that are determined to violate the Network Policies.
- Providing community standards enforcement reports by publishing Facebook’s efforts to keep the community safe.
- Arranging certification from independent groups such as the Digital Trading Standards Group to examine the advertising processes against JICWEBS’ good practice principles.
- Allowing users to report or flag content for review via an interface available next to the content. Once reported, the content is reviewed by Facebook's internal reviewers.
- Using proactive artificial intelligence-based detection tools to identify hateful content and groups that aren’t reported to Facebook.
- Introducing penalties, up to the extreme step of removing a group from Facebook, if its moderators post or approve content that violates the rules.
- Exploring ways of making moderators more accountable and responsible for the content in groups they moderate.
- Generating awareness and capacity among group moderators on community standards to moderate content and membership.
- Continuing active self-reviewing of content including content in private groups by 35,000 safety and security professionals at Facebook.
In 2016, Facebook's content moderation was based mostly on user reports, but since then AI-based detection tools have taken over the task of identifying the vast majority of hate content. To put things in perspective, when Facebook started tracking its hate-speech metrics, only 23.6% of removed content was detected proactively by the system; now that figure is over 97%.[6]
Prevalence is the most important metric for this system. "It represents not what we caught, but what we missed, and what people saw, and it's the primary metric we hold ourselves accountable to," says Guy Rosen[7], Head of Integrity at Facebook. Facebook's AI system scans billions of posts looking for signals that might match its definitions of rule-violating content. The screening algorithms, called classifiers, are the bedrock of the company's content-moderation system[8]. Developing these classifiers is complex and labor-intensive: human reviewers first label posts against a set of rules, and technical experts then use those labels to train the system to estimate the probability that other posts violate the same rules.
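The label-then-train loop described above can be sketched with a toy example. The following is a minimal illustration only: a simple Naive Bayes text classifier trained on a handful of invented, human-labeled example posts. It shows how labeled data yields a probability that a new post violates the rules; Facebook's production classifiers are far larger, multilingual systems, and nothing here reflects their actual code or data.

```python
from collections import Counter
import math

# Invented example posts, labeled by a (hypothetical) human reviewer:
# 0 = acceptable, 1 = violates the rules.
LABELED_POSTS = [
    ("we should welcome everyone to our city", 0),
    ("great news about the new community center", 0),
    ("those people are vermin and must be driven out", 1),
    ("ban that group from our neighborhood forever", 1),
]

def train(posts):
    """Count word occurrences per class, the 'training' step."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter()
    for text, label in posts:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def violation_probability(text, word_counts, class_counts):
    """Estimate P(violating | text) with Naive Bayes and add-one smoothing."""
    vocab = set(word_counts[0]) | set(word_counts[1])
    total = sum(class_counts.values())
    log_scores = {}
    for label in (0, 1):
        n_words = sum(word_counts[label].values())
        score = math.log(class_counts[label] / total)  # class prior
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (n_words + len(vocab)))
        log_scores[label] = score
    # Convert log scores to a normalized probability.
    m = max(log_scores.values())
    exp = {k: math.exp(v - m) for k, v in log_scores.items()}
    return exp[1] / (exp[0] + exp[1])

word_counts, class_counts = train(LABELED_POSTS)
p = violation_probability("drive those vermin out", word_counts, class_counts)
print(round(p, 2))  # → 0.87, a high violation probability for this toy model
```

A real pipeline would use the score to route high-probability posts to human review rather than to remove them automatically.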
According to the latest Community Standards Enforcement Report[9], the prevalence of hate speech is about 0.05% of content viewed, or about 5 views per every 10,000, down by almost 50% over the last three quarters.
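As a quick sanity check on the report's figures, the two ways of stating prevalence quoted above are equivalent:

```python
# Prevalence is the share of content views that contain hate speech.
# "About 5 views per every 10,000" and "about 0.05% of content viewed"
# are the same figure, taken from the cited enforcement report.
hate_views = 5
total_views = 10_000
prevalence = hate_views / total_views
print(f"{prevalence:.2%}")  # → 0.05%
```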
In a leaked report[10], Facebook identified a list of 'at risk countries' in different tiers of priority, on which it spends more resources to monitor and remove hateful content. For example, as per 2019 data, India, Brazil and the United States were placed in the Tier 0 category, for which Facebook[11] even set up dashboards to analyze network activity and alerted local election officials to any problems. Although Bangladesh was not explicitly mentioned, it may have been among the Tier 2 countries: the report notes that Facebook was asked to firefight violent clashes in Bangladesh. So it appears that Facebook also acts to remove hate content in response to requests from government agencies and security establishments.
Instagram:
Both Instagram and Facebook are owned by Meta, so they follow the same hate content policy.
Instagram therefore defines hate speech as a direct attack against people, rather than concepts or institutions, on the basis of any protected characteristic. Since Instagram is a more photo- and video-based medium, in 2020 it strengthened its rules[12] to ban more implicit forms of hate speech, such as content depicting blackface and common antisemitic tropes. Like Facebook, it uses a tier-based system to designate hate content.
Instagram's Community Guidelines were created to foster and protect the community. By signing up for Instagram, users agree to these guidelines and the terms of use, violation of which may result in content being removed or accounts being disabled. Many of these rules relate to hate content or hate speech: for example, sharing only photos and videos that you have taken or have the right to share, posting photos and videos that are appropriate for a diverse audience, fostering meaningful and genuine interactions, following the law, respecting other members of the Instagram community, maintaining a supportive environment by not glorifying self-injury, and being thoughtful when posting about newsworthy events. Instagram's terms of use also speak of fostering a positive, inclusive and safe environment, and of terms and policies to combat abuse and violations as well as harmful and deceptive behavior.
A media organization[13] has characterized hate content on Instagram as 'a more visual way of presenting classic misinformation that we've seen on other platforms. So, a lot of racist memes, white nationalist content, sometimes screenshots of fake news articles'. Since Instagram is more popular among younger people, the prime targets of such content are teenagers and young millennials. Memes and humor are an effective way to attract their attention and entice people toward extremist ideas and conspiracy theories: users may start by laughing at such content but end up believing it and engaging with it. One drawback of Instagram's architecture is that it is built on a recommendation algorithm; once a user follows an account, more such recommendations start to arrive, which can become a trap for the unaware.
YouTube
YouTube's official policy[14] is not to allow content that incites hatred or violence against groups based on protected attributes such as:
- Age
- Caste
- Disability
- Ethnicity
- Gender Identity and Expression
- Nationality
- Race
- Immigration Status
- Religion
- Sex/Gender
- Sexual Orientation
- Victims of a major violent event and their kin
- Veteran Status
These protected attributes are central to YouTube's policy, which also describes[15] the types and forms such hate content can take: dehumanizing members of these groups; characterizing them as inherently inferior or ill; promoting hateful ideologies like Nazism or conspiracy theories about these groups; or denying that well-documented violent events, like a school shooting, took place. YouTube's hate policy is intertwined with its harassment policy. YouTube considers content to be harassment[16] when it targets an individual with prolonged or malicious insults based on intrinsic attributes, including their protected group status or physical traits. This policy also covers 'harmful behavior such as deliberately insulting or shaming minors, threats, bullying, or encouraging abusive fan behavior'. It applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature, and can also extend to external links (clickable links in a video directing users to other sites), among other forms.
To deal with hate content on its platform, YouTube has several mechanisms. Unlike Facebook, YouTube considers understanding of local language and context important for tackling a complex issue such as hate content, and has dedicated resources accordingly, preparing a review team with diverse linguistic and subject-matter expertise. Another mechanism is AI-based machine learning, through which a piece of hate content can be automatically detected and sent for human review. YouTube also has a robust appeal process[17] through which a decision can be challenged and reviewed. The fact that more than 25% of YouTube's decisions were reversed[18] through the appeal process (soon after the COVID lockdown restrictions were lifted) shows that the process works and that, unlike Facebook, YouTube prioritized the appeal procedure.
In January 2019[19], YouTube introduced the idea of borderline content that "comes close to—but doesn't quite cross the line of—violating YouTube's Community Guidelines" and added a different set of penalties in the form of demonetization. As Forbes magazine observed[20], the borderline content classification gives YouTube added flexibility in responding to content in gray areas where opinions both for and against banning the material are strong. Demonetization in that sense is a middle ground, preserving free speech while curbing the spread of hate. Channels that repeatedly approach the borderline are often suspended from the partner program, meaning they cannot run ads on their channel or use other monetization features, like Super Chat.
There are, however, some limited exceptions to removing hate speech. The platform may allow a video to remain if 'documentary intent is evident in the content, the content does not promote hate speech and viewers are provided sufficient context to understand what is being documented and why'[21].
[1] https://transparency.fb.com/policies/community-standards/hate-speech/
[2] https://www.facebook.com/legal/terms
[3] https://transparency.fb.com/policies/community-standards/?source=https%3A%2F%2Fwww.facebook.com%2Fcommunitystandards%2F
[4] https://archive.dhakatribune.com/bangladesh/2017/11/16/hundreds-facebook-pages-spreading-communal-hatred-bangladesh
[5] https://www.facebook.com/business/news/sharing-actions-on-stopping-hate
[6] https://about.fb.com/news/2021/10/hate-speech-prevalence-dropped-facebook/
[7] https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligence-11634338184
[8] https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligence-11634338184
[9] https://about.fb.com/news/2021/10/hate-speech-prevalence-dropped-facebook/
[10] https://www.theverge.com/22743753/facebook-tier-list-countries-leaked-documents-content-moderation
[11] https://www.theverge.com/22743753/facebook-tier-list-countries-leaked-documents-content-moderation
[12] https://about.instagram.com/blog/announcements/an-update-on-our-work-to-tackle-abuse-on-instagram
[13] https://www.npr.org/2019/03/30/708386364/does-instagram-have-a-problem-with-hate-speech-and-extremism
[14] https://www.youtube.com/intl/ALL_ca/howyoutubeworks/our-commitments/standing-up-to-hate/
[15] https://www.youtube.com/intl/ALL_ca/howyoutubeworks/our-commitments/standing-up-to-hate/
[16] https://www.youtube.com/intl/ALL_ca/howyoutubeworks/our-commitments/standing-up-to-hate/
[17] https://support.google.com/youtube/answer/185111
[18] https://unesdoc.unesco.org/in/documentViewer.xhtml?v=2.1.196&id=p::usmarcdef_0000377720_eng&file=/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_442239bd-387e-47f8-aa1b-4a98fe8e2e5f%3F_%3D377720eng.pdf&locale=en&multi=true&ark=/ark:/48223/pf0000377720_eng/PDF/377720eng.pdf#Plataformas_Discurso_Odio_Ingles.indd%3A.11061%3A135
[19] https://youtube.googleblog.com/2019/01/continuing-our-work-to-improve.html
[20] https://www.forbes.com/sites/masonsands/2019/06/09/youtubes-borderline-content-is-a-hate-speech-quagmire/?sh=3fd7bc4f6299
[21] https://support.google.com/youtube/answer/2801939?hl=en