How social media failed to avoid censorship, curb hate speech and disinformation during the Gaza war

LONDON: Tech giant Meta recently announced it would begin removing social media posts that use the term “Zionist” in contexts where it refers to the Jewish people and Israelis rather than representing supporters of the political movement, in an effort to curb anti-Semitism on its platforms.

Facebook and Instagram's parent company previously said it would lift its blanket ban on the single most moderated term across all of Meta's platforms — “shaheed,” or “martyr” in English — after a year-long review by its Oversight Board found the approach was “overbroad.”

Similarly, TikTok, X and Telegram have long pledged to step up efforts to curb hate speech and the spread of misinformation on their platforms in light of the ongoing war in Gaza.

Activists accuse social media giants of censoring posts, including those that provide evidence of human rights abuses in Gaza. (Getty Images)

These initiatives are intended to create a safer, less toxic online environment. But as experts have consistently pointed out, these efforts often fail, resulting in empty promises and a worrying trend toward censorship.

“In short, social media platforms have not been very good at avoiding censorship or curbing hate speech and disinformation about the war on Gaza,” Nadim Nashif, founder and director of 7amleh, a Palestinian digital rights and human rights activist group, told Arab News.

“Throughout the conflict, censorship and account removals have compromised efforts to document human rights abuses on the ground.”

Nashif says hate speech and incitement to violence remain “rampant”, particularly on Meta's platforms and X, where anti-Semitic and Islamophobic content continues to “spread widely”.

Since the October 7 Hamas-led attack that sparked the conflict in Gaza, social media has been flooded with content related to the war. In many cases, it has served as a crucial window into the dramatic events unfolding in the region and has become an important source of real-time news and accountability for Israeli actions.

Profiles supporting both Hamas and the actions of the Israeli government have been accused of sharing misleading and hateful content.

FASTFACT

1,050

Removals and other suppression of content on Instagram and Facebook posted by Palestinians and their supporters, documented by Human Rights Watch during the period October-November 2023.

Even so, none of the social media platforms — including Meta, YouTube, X, TikTok or messaging apps like Telegram — have publicly outlined policies designed to mitigate hate speech and incitement to violence related to the conflict.

Instead, these platforms remain flooded with war propaganda, dehumanizing speech, genocidal statements, explicit calls for violence and racist hate speech. In some cases, platforms take down pro-Palestinian content, block accounts and sometimes shadow-ban users to prevent them from expressing their support for the people of Gaza.

On Friday, Turkiye's communications authority blocked access to the Meta-owned social media platform Instagram. Local media said access was blocked in response to Instagram removing posts from Turkish users expressing condolences over the recent killing of Hamas political chief Ismail Haniyeh in Tehran.

The day before, Malaysian Prime Minister Anwar Ibrahim accused Meta of cowardice after his Facebook post about Haniyeh's murder was removed. “Let this serve as a clear and unequivocal message to Meta: Stop this display of cowardice,” Anwar, who has repeatedly condemned Israel's war on Gaza and its actions in the occupied West Bank, wrote on his Facebook page.

Screenshot of Malaysian Prime Minister Anwar Ibrahim's post condemning Meta's censorship of his post critical of Israel over Haniyeh's killing.

Meanwhile, images of Israeli soldiers allegedly blowing up mosques and homes, burning copies of the Koran, torturing and humiliating blindfolded Palestinian prisoners, driving them around strapped to military vehicles and celebrating war crimes remain freely available on mobile screens.

“Historically, platforms have been very bad at moderating content about Israel and Palestine,” Nashif said. “Throughout the war on Gaza, and the ongoing credible genocide, this has simply gotten worse.”

A Human Rights Watch report titled “Meta's Broken Promises,” published in December, accused the company of “systematic online censorship,” “inconsistent and opaque application of its policies,” and practices that have silenced voices in support of Palestine and Palestinian human rights on Instagram and Facebook.

The report added that Meta's conduct “fails to meet its human rights due diligence obligations,” citing years of unfulfilled promises to address its “widespread crackdown.”

Jacob Mukherjee, convener of the MA Program in Political Communication at Goldsmiths, University of London, told Arab News: “I'm not sure to what extent you can really even call them attempts to stop censorship.

“Meta promised to carry out various reviews – which, incidentally, it has been promising for a couple of years now since the latest upsurge in the Israel-Palestine conflict in 2021 – before October 7 last year.

“But as far as I can see, not much has changed, materially. They've had to respond to suggestions that they've been engaged in censorship, but it's mainly been a PR move in my view.”

Between October and November 2023, Human Rights Watch documented more than 1,050 removals and other suppression of content on Instagram and Facebook posted by Palestinians and their supporters, including content about human rights abuses.

Of these, 1,049 involved peaceful pro-Palestine content that was censored or otherwise suppressed unnecessarily, while one case involved the removal of pro-Israel content.

But censorship seems to be only part of the issue.

7amleh's Violence Indicator, which monitors real-time data on violent content in Hebrew and Arabic on social media platforms, has recorded more than 8.6 million pieces of such content since the conflict began.

Nashif says the proliferation of violent and harmful content, mainly in Hebrew, is largely due to insufficient investment in moderation.

This content, which has been primarily aimed at Palestinians on platforms such as Facebook and Instagram, was used by South Africa as evidence in its case against Israel at the International Court of Justice.

Meta is by no means alone in bearing responsibility for what South Africa's lawyers have described as the first genocide broadcast live to mobile phones, computers and television screens.

X has also faced accusations from supporters of both Palestine and Israel of giving free rein to accounts known for spreading misinformation and manipulated images, which have often been shared by prominent political and media personalities.

“One of the major problems with current content moderation systems is the lack of transparency,” Nashif said.

“When it comes to AI, the platforms do not release clear and transparent information about when and how AI systems are implemented in the content moderation process. Policies are often opaque and give the platforms a lot of leeway to do as they wish.”

For Mukherjee, moderation that happens behind a smokescreen of opaque policies is a deeply political issue, requiring these companies to adopt a “balanced” approach between political pressure and “managing the expectations and desires of the user base.”

He said: “These AI tools can kind of be used to insulate the real power holders, i.e. the people running the platforms, from criticism and accountability, which is a real problem.

“These platforms are private monopolies that are essentially responsible for regulating an important part of the political public sphere.

“In other words, they help shape and regulate the arena in which conversations happen, in which people form their opinions, from which politicians feel the pressure of public opinion, and yet they are completely unaccountable.”

While there have been examples of pro-Palestinian content being censored or removed, as revealed by Arab News in October, these platforms made it clear, long before the Gaza conflict, that it is not ultimately in their interest to remove content from their platforms.

“These platforms are not made for reasons of public interest or to ensure that we have an informed and educated population that is exposed to a range of perspectives and is equipped to properly make decisions and form opinions,” Mukherjee said.

“The fact (is) that the business models actually want there to be lots of content and if it's pro-Palestine content, so be it. Ultimately it's still about getting eyeballs and engagement on the platform, and content that evokes strong emotions, to use industry terms, gets engagement, and that means data and that means money.”
