Inside Meta, Debate Over What’s Fair in Suppressing Speech in the Palestinian Territories

Source: The Wall Street Journal
October 21, 2023 at 14:06
Meta Platforms has been wrestling with how to enforce its rules in the midst of the Israel-Hamas war. PHOTO: MOHAMMED ABED/AGENCE FRANCE-PRESSE/GETTY IMAGES

In trying to prevent Instagram and Facebook from contributing to further violence, Meta is juggling internal friction and limited tools

After Hamas stormed Israel and murdered civilians on Oct. 7, hateful comments from the region surged through Instagram. Meta Platforms managers cranked up automatic filters meant to slow the flood of violent and harassing content.

But still the comments kept appearing—especially from the Palestinian territories, according to a Meta manager. So Meta turned up its filters again, but only there.

An internal forum for Muslim employees erupted. 

“What we’re saying and what we’re doing seem completely opposed at the moment,” one employee posted internally, according to documents viewed by The Wall Street Journal. Meta has publicly pledged to apply its policies equally around the world.

The social media giant has been wrestling with how best to enforce its content rules in the midst of the brutal and chaotic war. Meta relies heavily on automation to police Instagram and Facebook, but those tools can stumble: They have struggled to parse the Palestinian Arabic dialect, and in some cases they don’t have enough Hebrew-language data to work effectively.

In one recent glitch, Instagram’s automatic translations of users’ profiles started rendering the word “Palestinian” along with an emoji and an innocuous Arabic phrase as “Palestinian terrorists.”

And when Meta turns to human employees to fill the gaps, some teams have different views on how the rules should be applied, and to whom.

A Meta spokesman said that there were more comments in Palestinian territories that violated its rules, so it had to lower the threshold to achieve the same effect produced elsewhere. Meta has also apologized for the translation glitch.

The company handles relations with Israel from Tel Aviv, led by an executive who once worked for Israeli Prime Minister Benjamin Netanyahu. Meanwhile, a Dubai-based human rights policy team covers the Arab world including Palestinian territories. Those teams often disagree on content in the region, according to people familiar with the matter.

User comments have been a battleground. Following Hamas’s invasion of Israeli border towns and killing of civilians, Meta detected a fivefold to tenfold surge in hateful comments on Instagram in Israel, Lebanon and the Palestinian territories. The company decided to hide a higher percentage of comments that might violate its policies, the documents show.

Normally, Meta begins to hide such comments only when its systems are at least 80% certain that they qualify as what the company calls hostile speech, which includes harassment and incitement to violence.

As part of “temporary risk response measures”—emergency calming efforts of the sort that Meta has previously deployed in wars, potential genocides, and the Jan. 6 Capitol riot—Meta cut that threshold in half over a swath of the Middle East, hiding any comment deemed 40% likely to be inflammatory, the documents show. 

That change reduced the hateful comments in Israel, Lebanon, Syria, Egypt and several other countries enough to make Meta’s safety staff comfortable, according to a post on an internal message system by a product manager involved with it. But in the days following, comments from the Palestinian territories that met Meta’s definition of hostile speech remained high on Instagram.

“Therefore, the team decided to temporarily further reduce the threshold,” the product manager wrote, lowering the bar to hide comments from users in Palestinian territories if Meta’s automated system judged there was at least a 25% chance they violated rules. 
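The intervention described above reduces to a score-and-threshold comparison: a classifier assigns each comment a probability of being hostile speech, and the comment is hidden when that score clears the bar in effect for the commenter’s region. Below is a minimal Python sketch of that logic using the figures reported in the documents; the function, region codes and data structures are illustrative assumptions, not Meta’s actual code.

```python
# Illustrative sketch of threshold-based comment hiding, built from the
# figures reported above: an 80% default confidence bar, lowered to 40%
# across several Middle Eastern countries, then to 25% for the
# Palestinian territories. All names here are hypothetical.

DEFAULT_THRESHOLD = 0.80  # normal bar for hiding "hostile speech"

# "Temporary risk response measures": lowered per-region thresholds.
EMERGENCY_THRESHOLDS = {
    "IL": 0.40,  # Israel
    "LB": 0.40,  # Lebanon
    "SY": 0.40,  # Syria
    "EG": 0.40,  # Egypt
    "PS": 0.25,  # Palestinian territories, lowered a second time
}

def should_hide(score: float, region: str) -> bool:
    """Hide a comment when the classifier's hostile-speech probability
    meets the threshold in effect for the commenter's region."""
    threshold = EMERGENCY_THRESHOLDS.get(region, DEFAULT_THRESHOLD)
    return score >= threshold

# Example: a comment scored at 0.30 stays visible in Israel (bar: 0.40)
# but is hidden in the Palestinian territories (bar: 0.25).
assert should_hide(0.30, "IL") is False
assert should_hide(0.30, "PS") is True
```

Lowering the threshold trades precision for recall: more genuinely hostile comments are caught, but more benign ones are hidden along with them, which is the disparity employees objected to when the bar was cut for one region only.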

By Thursday, Meta’s internal content moderators had deleted the lengthy discussion thread on the forum that included both the description of Meta’s intervention and the comments responding to it.  

Meta and other social-media companies have come under scrutiny from multiple camps since the Oct. 7 attack, in which Hamas killed at least 1,400 Israelis and took more than 200 hostage, according to Israeli authorities. Footage of the raids and victims spread virally across social media and was rebroadcast in the news, with some social-media companies setting and then reversing policies about what would be allowed.

The European Union on Thursday sent Meta and TikTok formal requests for information about the measures they have taken to stem the spread of such material, which may be illegal in many EU countries; it sent a similar request to X, formerly known as Twitter, the prior week.

Meta has blocked hashtags, limited livestreams and restricted images of hostages.

Meta has long had trouble building an automated system to enforce its rules outside of English and a handful of languages spoken in large, wealthy countries. The human moderation staff is generally thinner overseas as well. 

Arabic-language content has been a sore point, particularly in the Palestinian territories. That is in part because the company’s systems weren’t initially trained to distinguish among Arabic dialects and performed more poorly on the Palestinian dialect, according to a 2022 report Meta commissioned from outside consultants.

Until recently, Meta also lacked an automated system to detect Hebrew-language content that might be against its rules, a gap the 2022 report said led to less enforcement against Hebrew posts.

In response to the report, Meta committed to building an automated system for catching violations in Hebrew, as well as improving its ability to detect Arabic dialects. 

In September, the company told its Oversight Board that the goal of “having functioning Hebrew classifiers” was “complete.” But earlier this month the company internally acknowledged that it hadn’t been using its Hebrew hostile speech classifier on Instagram comments because it didn’t have enough data for the system to function adequately, according to a document reviewed by the Journal. 

Despite the system’s limitations, the company has now deployed the classifier against hateful comments in light of the current conflict. The Meta spokesman said that the classifier was already at work elsewhere on the company’s platforms.
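The pattern described in the preceding paragraphs, a classifier held back from one surface for lack of data and then switched on anyway during the crisis, can be pictured as a per-surface enablement gate with an emergency override. The following is a hypothetical Python sketch; the quality bar, metric names and numbers are invented for illustration.

```python
# Hypothetical sketch of gating a language classifier per surface:
# run it only where offline evaluation clears a quality bar, unless an
# emergency override is in effect. All names and numbers are invented.

MIN_PRECISION = 0.90  # assumed bar for normal deployment

# Assumed evaluation results for a Hebrew hostile-speech classifier on
# each surface; "enough_data" mirrors the data-scarcity issue reported
# for Instagram comments.
SURFACE_EVAL = {
    "facebook_posts":     {"precision": 0.93, "enough_data": True},
    "instagram_comments": {"precision": 0.70, "enough_data": False},
}

def classifier_enabled(surface: str, emergency: bool = False) -> bool:
    """Enable the classifier on a surface if it meets the quality bar,
    or unconditionally under a declared emergency."""
    if emergency:
        return True  # deployed despite known limitations, as reported
    result = SURFACE_EVAL[surface]
    return result["enough_data"] and result["precision"] >= MIN_PRECISION

# Under normal conditions the comment surface stays off; during the
# conflict the override turns it on anyway.
assert classifier_enabled("instagram_comments") is False
assert classifier_enabled("instagram_comments", emergency=True) is True
```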

Palestinian photographer Motaz Azaiza, who has been posting graphic videos of wounded or dead Gaza residents to Instagram along with his emotional reactions, said Meta closed his account twice during the conflict. He managed to get those decisions reversed on appeal, and as of Friday his account, which had 25,000 followers two weeks ago, had grown to more than five million.

In a separate incident, Meta internally declared a site event—an urgent problem requiring immediate remediation—because Meta’s automated systems were mistranslating certain innocuous Arabic language references to Palestinians, including one that became “Palestinian terrorists,” another document shows.

An investigation found the problem was due to “hallucination” by a machine learning system.

Salvador Rodriguez contributed to this article.

Write to Sam Schechner at Sam.Schechner@wsj.com, Jeff Horwitz at jeff.horwitz@wsj.com and Newley Purnell at newley.purnell@wsj.com
