In late 2020, Facebook researchers came to a sobering conclusion. The company’s efforts to curb hate speech in the Arabic-speaking world were not working.
In a 59-page memo circulated internally just before New Year’s Eve, engineers detailed the grim numbers.
Only six percent of Arabic-language hate content was detected before it spread on Instagram, the photo-sharing platform owned by Facebook. That compared with a 40 percent takedown rate on Facebook itself.
Ads attacking women and the LGBTQ community were rarely flagged for removal in the Middle East. In a related survey, Egyptian users told the company they were scared of posting political views on the platform out of fear of being arrested or attacked online.
In Iraq, where violent clashes between Sunni and Shia militias were quickly worsening an already politically fragile country, so-called “cyber armies” battled it out by posting profane and outlawed material, including child nudity, on each other’s Facebook pages in efforts to remove rivals from the global platform.
In many of the world’s most dangerous conflict zones, Facebook has repeatedly failed to protect its users, combat hate speech targeting minority groups and hire enough local staff to quell religious sectarianism — according to disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of Frances Haugen, a Facebook whistleblower.
The redacted versions were obtained by a consortium of news organizations, including POLITICO.
Across the trove of internal Facebook documents, a picture emerges of the social networking giant struggling to come to terms with its prominent role in war-torn countries. Many of these states are home to sizable terrorist or extremist groups spreading online propaganda and violence across the global platform and into the offline world.
Facebook did not respond to questions about whether its executives took action as a result of the 2020 report outlining widespread problems across the Middle East.
In Afghanistan, where 5 million people are monthly users, Facebook employed few local-language speakers to moderate content, resulting in less than one percent of hate speech being taken down. Across the Middle East, clunky algorithms to detect terrorist content incorrectly deleted non-violent Arabic content 77 percent of the time, harming people’s ability to express themselves online and limiting the reporting of potential war crimes.
In Iraq and Yemen, high levels of coordinated fake accounts — many tied to political or jihadist causes — spread misinformation and fomented local violence, often between warring religious groups.
In one post reviewed by POLITICO, Islamic State fighters heralded the killing of 13 Iraqi soldiers via a Facebook update that used an image of Mark Zuckerberg, the company’s chief executive, to mask the propaganda from the platform’s automated content policing tools.
“There’s a war very much happening on Facebook,” said Moustafa Ayad, executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue, a think tank that tracks online extremism, who reviewed some of Facebook’s internal documents on behalf of POLITICO.
Ever since the 2016 U.S. presidential election, people’s attention — and much of the company’s resources — have been focused on tackling Facebook’s growing and divisive role within American politics.
But the tech giant’s similar position of power in countries worldwide, most notably in those with existing religious tensions, years of violent conflict and weak government institutions, has often led to more dire outcomes — and even less scrutiny over the company’s role in global politics.
“We think it's bad in the United States. But the raw version roaming wild in most of the world doesn't have any of the things that make it kind of palatable in the United States,” Haugen, the Facebook whistleblower, told reporters in reference to the company’s global activities. “I genuinely think there's a lot of lives on the line.”
A Wild West of hate speech
The disclosures highlight the tech giant’s difficulty in combating the spread of hate speech and extremist content in countries across the Middle East and beyond where ongoing violence — and its promotion online — represents an imminent danger to millions of Facebook users.
That failure has also allowed religious extremist groups, as well as the Taliban regime in Afghanistan and authoritarian governments like that of Bashar al-Assad in Syria, to use the social network to spread violent and hate-filled messages, both within those countries and to potential supporters in the West, based on POLITICO’s separate review of thousands of Facebook and Instagram posts and discussions with four disinformation experts focused on the region.
That includes Arabic-language hate speech and terrorist content — some of which offered tactics for attacking targets in the West — shared widely on Facebook and Instagram, often with posts translated into English for ready-made consumption.
Unlike India, home to Facebook’s largest user base and a country where local politicians have exerted significant regulatory pressure on the tech giant, countries across the Middle East and Central Asia have not garnered the same attention from the company’s engineers, based on the internal documents.
Facebook flagged many of these countries as high-risk areas, or so-called Tier 1 zones that required additional resources like sophisticated content-policing algorithms and in-country teams to respond to events in almost real-time, according to Facebook’s internal list of priority countries for 2021.
But a lack of local language expertise and cultural knowledge made it difficult, if not impossible, to crack down on online sectarianism and other forms of harmful content aimed at local vulnerable groups like the LGBTQ community. These failures may have had a knock-on effect on real-world violence, according to Facebook’s researchers and outside experts.
“Disinformation and online hate speech are now core pillars of foreign policy for countries and groups in the Middle East,” said Colin P. Clarke, director of policy and research at The Soufan Group, a nonprofit focused on global security. “It brings an added dimension that catalyzes the sectarian bunkers more broadly. And that's troubling.”
In response, Facebook said it had teams dedicated to stopping online abuse in high-risk countries, and that, collectively, there were native speakers currently reviewing content in more than 70 languages worldwide.
“They’ve made progress tackling difficult challenges — such as evolving hate speech terms — and built new ways for us to respond quickly to issues when they arise,” Joe Osborne, a Facebook spokesperson, said in a statement. “We know these challenges are real and we are proud of the work we’ve done to date.”
A desert of native speakers
Facebook’s difficulties in the Arabic-speaking world did not spring up overnight. For years, the company’s own researchers, outside experts and some governments in the region had urged the social networking giant to invest heavily in the Middle East to curb the spread of hate speech.
“Since 2018, I’ve been saying that Facebook doesn’t have enough Arabic speakers and that their AI doesn’t work, particularly in Arabic,” said Ayad, the Institute for Strategic Dialogue researcher. “These documents seem to vindicate what I’ve been saying for three years. It makes me seem less crazy.”
Yet even as the problem of hate speech and other harmful content grew within the region, Facebook found itself without enough speakers of Arabic dialects, particularly those with local knowledge of war-torn countries.
When a Facebook researcher asked the company’s content moderation team who understood Arabic content, “Iraqi representation was close to non-existent,” according to the internal document entitled “An Incomplete Integrity Narrative for Middle East and North Africa,” dated December 2020.
At the time, the country represented one-third of all detected hate speech within the region, and 25 million Iraqis, or two-thirds of the population, had a Facebook account, according to the company’s own estimates.
A separate, undated document, entitled “Opportunities for High Impact Changes to the Arabic System,” warned that there were almost no speakers of Yemeni Arabic on the content moderation team, even as that country’s civil war was escalating and Facebook had highlighted Yemen as a top priority.
In response, Facebook said that it had added more native speakers, including in Arabic, and that it would consider hiring more content reviewers with specific language skills if such personnel were required.
In Iraq, this lack of local expertise within Facebook fomented religious sectarianism as Sunni and Shia militias struck tit-for-tat blows at each other on the social network amid worsening sectarian violence offline.
The country, according to the company’s own internal research, is a “proxy for cyber armies working on reporting content in order to block certain pages and content.”
That included Iran- and Islamic State-backed militants routinely spamming opponents’ Facebook groups and accounts in an effort to trick the tech giant into shutting down their rivals’ digital propaganda machines. Violent extremists also peppered the social network with propaganda targeting the country’s Shia majority.
In early July, for instance, people connected to Islamic State conducted a coordinated campaign on Facebook that praised a deadly bombing in Baghdad and attacked those connected to the Iraqi government.
Over the dayslong online push, roughly 125 extremist accounts fanned out across the platform, targeting Shia rivals and promoting graphic images of the violent attack, according to research from the Institute for Strategic Dialogue that was shared with POLITICO.
Islamic State used local Arabic slang to sidestep Facebook’s content rules and spread hate speech that dehumanized its opponents, while gloating openly online that the country’s officials could not protect their own citizens. It also praised the terrorist group’s leadership for carrying out the attack, in posts that were in direct violation of the tech giant’s community standards against hate speech.
Almost none of this material was proactively removed by Facebook. It was deleted from the platform only weeks after the July incident, once campaigners flagged the violent material to the company’s representatives.
“What happens online has an effect on what happens offline,” said Ayad, the Institute for Strategic Dialogue researcher. “It’s targeted at making people feel insecure, and worsens existing tensions between Sunni and Shia militias.”
Algorithms lost in translation
In its fight against hate speech, Facebook’s first line of defense is a set of complex algorithms that automatically detect, and remove, harmful material.
But one of the company’s internal documents, entitled “Afghanistan Hate Speech Landscape Analysis,” from January 2021, reveals that these systems, which have struggled to work effectively in the United States, often barely function in some violence-plagued countries. Authoritarian regimes and religious extremist groups have bombarded the platform with hateful material and incitements to violence.
Much of this problem comes down to how Facebook trains its algorithms to detect hate speech.
Like other tech companies, the social networking giant relies on reams of existing online content, and engineers with local language expertise, to program its machine learning technology to weed out harmful material. These so-called classifiers, models trained on labeled examples of harmful and benign posts, are expected to catch harmful content before it can spread widely online.
But Facebook’s systems fare poorly when handling many foreign languages, mostly because local references are culturally specific and hard for algorithms to fully understand, according to an internal document entitled “Content-focused Integrity playbooks have been ineffective for these problems” from March 2021.
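The documents do not spell out how Facebook’s models are built, but the general approach described above, a supervised classifier trained on labeled examples, can be sketched in a few lines. The snippet below is a minimal, illustrative sketch using the open-source scikit-learn library; the example phrases, labels and model choices are assumptions made for illustration, not details drawn from Facebook’s systems. It also shows why such a classifier struggles with dialects and slang it was never trained on.

```python
# Minimal, illustrative sketch of a supervised text classifier of the kind
# described above. This is NOT Facebook's system; the training phrases,
# labels and model choices are hypothetical stand-ins meant to show that
# such a classifier only knows the language it was trained on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = policy-violating, 0 = benign.
# A real system would need large volumes of examples per language and dialect.
train_texts = [
    "members of that group should be driven out",      # violating (illustrative)
    "the market reopens tomorrow after the holiday",   # benign
    "they are vermin and deserve what is coming",      # violating (illustrative)
    "the election results were announced today",       # benign
]
train_labels = [1, 0, 1, 0]

# TF-IDF features feed a simple logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Slang, code words or a dialect absent from the training data will not match
# any learned features, so the score stays low and the post slips through.
new_post = "a local slang insult the model has never seen"
score = model.predict_proba([new_post])[0][1]
print(f"predicted probability of violation: {score:.2f}")
```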
In another research memo, from August 2020, outlining problems in combating hate speech in Ethiopia, the company’s engineers were even more blunt. They flagged an ongoing lack of language skills, particularly in less mainstream dialects, and a failure — in some countries — to have any automated content tools in place to catch harmful material.
“One of the primary challenges facing integrity work in at-risk countries is our ability to actually measure risks and harm,” the paper claimed. “In many of these markets, we do not have any classification, and it can be exceedingly difficult to build.”
These constraints have played out, in almost real-time, in Afghanistan.
In early 2021, Facebook researchers delved into how hate speech spread within the Central Asian country — just as the Taliban were girding themselves for their eventual retaking of Kabul from the U.S.-backed government in August.
Even after it took over running the country, the Taliban was still officially banned from Facebook because it was designated, internationally, as a terrorist group. Yet scores of pro-Taliban posts, in both local languages and in English, still remain on the social network, according to POLITICO’s review of online activity and analysis from outside disinformation experts.
The company’s internal findings acknowledged that local hate speech appearing on the platform — everything from attacks against the LGBTQ community to mockery of non-religious Afghans to Facebook pages that streamed audio content from the Taliban — was a byproduct of the country’s ongoing ethnic and religious divides.
But when researchers reviewed how much of this hate speech was automatically taken down over a 30-day period, they discovered that Facebook’s automated content-policing tools had caught only 0.2 percent of this harmful material. The rest was handled by human reviewers, even though the social networking giant admitted it did not have sufficient speakers of both Pashto and Dari, Afghanistan’s two main languages.
“The action rate for hate speech is worryingly low,” read the document.
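A brief, hypothetical calculation may help put that number in context. The figures below are invented; only the resulting share mirrors the 0.2 percent cited in the document, which measures how much of the removed hate speech was caught by automated tools rather than by human reviewers.

```python
# Toy illustration of the 30-day review described above. All counts are
# made up; only the resulting 0.2 percent mirrors the figure in the document.
automated_removals = 20      # hypothetical: posts caught by automated tools
human_removals = 9_980       # hypothetical: posts removed after human review

total_removals = automated_removals + human_removals
automated_share = automated_removals / total_removals
print(f"share of removals caught automatically: {automated_share:.1%}")  # 0.2%
```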
In response, Facebook said it had invested in artificial intelligence tools to automatically take down content and identify hate speech in more than 50 languages. A spokesperson added that more Pashto and Dari speakers had been hired since early 2021, but declined to provide numbers of the staffing increases.
The company’s engineers blamed this failure on a lack of updated local slur words and other hate speech language that could be fed into the Afghanistan-specific content algorithm, as well as significant flaws in how locals could raise alarms about harmful content to Facebook.
Many of the online tools available to Afghans to flag hate speech, for example, were not translated into local languages. Instead, these systems — which also helped Facebook to train its automated content-detection tools — were only available in English. Such limitations, the researchers concluded, made it difficult for locals, the majority of whom only spoke Pashto or Dari, to call out hateful material on the platform.
“There is a huge gap in the hate speech reporting process in local languages in terms of both accuracy and completeness of the translation,” according to the internal documents. The company said it had now translated those online tools into Dari and Pashto.
Blockers gone wild
Facebook has struggled to stop hate speech from spreading on its platform in much of the Arabic-speaking world. But the company’s clunky content-moderation algorithms have also had the opposite effect: falsely removing legitimate posts and curtailing free speech within the region.
In internal documents from late 2020, the company’s engineers found that more than three-quarters of the Arabic-language content automatically removed from the platform for allegedly promoting terrorism had been mistakenly labeled as harmful material.
Facebook’s content-moderation algorithms, according to the analysis, had repeatedly flagged legitimate political speech, mundane news articles about current affairs and advocacy work by local human rights campaigners for automatic removal. Efforts to reinstate those posts had put a “huge drain” on Facebook’s local teams.
The false deletions included news reports in Lebanon criticizing Hezbollah, a U.S. designated terrorist group; ads in Palestine promoting gender-based issues; and Arabic-speaking media outlets discussing the assassination of Qasem Soleimani, the Iranian general targeted by then-President Donald Trump and killed by a drone strike in January 2020.
In doing so, the researchers concluded, the company was “silencing Arab users and impeding freedom of speech.”
In response, Facebook said it had no evidence its algorithms were making errors at the rates noted by the company’s researchers. A spokesperson added the company was under an obligation to remove content related to Hamas and Hezbollah because the U.S. government had designated both groups as terrorist organizations.
For Dia Kayyali, associate director for advocacy at Mnemonic, a nonprofit that helps preserve social media records of potential war crimes and other human rights abuses in Syria, Yemen and Sudan, Facebook’s internal research comes as no surprise.
For years, their team and in-country partners have tracked repeated takedowns of posts from political activists, human rights campaigners and regular Facebook users whenever they posted on hot-button topics across the Middle East.
“You are much more likely to get your content taken down in the region if it touches on anything political whatsoever,” they said.
Yet Facebook’s removal of legitimate content also had a knock-on effect: it gave the perception that the social media giant had tilted the scales in favor of authoritarian regimes over human rights groups, based on the 2020 research.
In Syria, where reams of social media content posted by journalists and campaigners have been removed during the decadelong war, the tech giant’s accidental deletion of such material gave the impression that Facebook backed the country’s autocratic leader, according to the Syrian Archive, a group that documents local human rights violations, cited in an undated document outlining ways to improve how Facebook handled Arabic content.
The social network said it did not make decisions on who should be recognized as the official government in any country, and only removed content when social media posts broke its rules.
For Kayyali, the human rights campaigner, these mistakes — including errors in how Facebook tweaked its content algorithms in the Arabic-speaking world that potentially hampered people’s free speech — had real world implications.
Not only do people across the Middle East believe the tech giant does not care about their local concerns, Kayyali said, but such over-aggressive removals also make it more difficult to document potential war crimes captured via social media.
“Facebook’s machine learning and natural language processing in Arabic is both not as good as in English and it is biased,” they said. “The combination of the two is really deadly to content in the region.”
----------------------------------------
By: Mark Scott
Title: Facebook did little to moderate posts in the world’s most violent countries
Sourced From: www.politico.com/news/2021/10/25/facebook-moderate-posts-violent-countries-517050
Published Date: Mon, 25 Oct 2021 06:01:51 EST