Digital Suppression During War: Meta’s Contribution to Censorship and Bias Against Palestinians in October 2023
Amid horrific attacks on Gaza and an intensifying crackdown on Palestinian communities, Meta has proven once again that its platforms are not safe for Palestinians. While over 10,000 Palestinians, including 4,000 children, were killed in a single month, Meta’s platforms continued to play a role in harming the digital rights of the Palestinian people.
During the first month of the crisis, the Palestinian Observatory of Digital Rights Violations (7or) recorded 879 verified violations of Palestinian digital rights across Meta’s platforms: 336 cases of censorship and 543 cases of hate speech, incitement to violence, or other forms of online harassment.
Once more, we find that Meta’s platforms aid in efforts to dehumanize the Palestinian people, silence Palestinian voices, and disproportionately moderate Palestinian content. Meanwhile, Meta also allows racist and inflammatory speech and incitement to violence in Hebrew, which materializes into real-world violence.
Throughout the Israeli onslaught, Palestinian voices, especially those of journalists and human rights defenders, have faced significant and disproportionate censorship on Meta’s social media platforms. This not only restricts freedom of expression but also hinders access to information. The silencing of Palestinian voices takes several forms. For example, Meta censored the Arabic hashtag #طوفان_الاقصى on the first day of the escalation, but it has not done the same with the parallel Hebrew hashtag #חרבות_ברזל because the latter was not deemed to violate its policies. Moreover, there have been widespread reports of shadowbanning suppressing Palestinian content. These are just two examples of the many ways Palestinian voices and narratives have been silenced and censored during a time of crisis.
One of the primary reasons Palestinian content has been aggressively over-moderated since October 7th is that the company “lowered the threshold” of confidence its automated systems require before acting on Arabic/Palestinian content in particular. Ordinarily, these automated systems require an 80% confidence level before taking content down; now, they need only a 25% confidence level before removing Arabic/Palestinian content.
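The effect of such a change can be illustrated with a minimal sketch of threshold-based automated moderation. The function, variable names, and scores below are hypothetical illustrations for explanatory purposes only; Meta's actual internal systems are not public.

```python
# Illustrative sketch only: how lowering a confidence threshold changes
# automated takedown behavior. All names and values are hypothetical.

def should_remove(confidence: float, threshold: float) -> bool:
    """Remove content when the classifier's confidence meets or exceeds the threshold."""
    return confidence >= threshold

# A post the classifier is only 30% confident violates policy:
score = 0.30

# Under an ordinary 80% threshold, the post stays up.
print(should_remove(score, threshold=0.80))  # False

# Under the reported 25% threshold applied to Arabic/Palestinian content,
# the same post is taken down.
print(should_remove(score, threshold=0.25))  # True
```

At a 25% threshold, content the system itself judges more likely compliant than violating can still be removed, which helps explain the scale of over-enforcement described above.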
This disproportionate over-moderation leads to restrictions that limit the reach of Palestinian content; in some cases, it results in users (including journalists, activists, and human rights defenders) being suspended from the platforms entirely. Palestinian and international news outlets, including but not limited to Ajyal Radio Network, 24FM, Mondoweiss, and Palestinian Refugees Portal, as well as journalists such as Faten Elwan and Motaz Azaiza, have all experienced, and in some cases continue to experience, content takedowns and account restrictions on Instagram and Facebook. While civil society organizations can follow up with Meta case by case to ensure Palestinian content is not disproportionately targeted by automated content moderation tools, the process itself is unsustainable and excessive; it cannot suffice to bring about lasting change.
Many journalists’ accounts were restricted, or had content removed for violating Meta’s Dangerous Organizations and Individuals (DOI) policy, simply because they reported news in Arabic and/or from Gaza. Ihab Al-Jariri, the editor of 24FM, described to Al-Jazeera how the station publishes on social media in three languages: Arabic, English, and Hebrew. Only the Arabic content was restricted; the same news reports in English and Hebrew were unaffected.
Other observations related to “shadow-banning” or limited reach include influencers receiving unusually low views on their stories, or their stories being automatically pushed to the end of followers’ feeds because they shared news content from other pages. Meta’s explanation for this problem in 2023 echoed what the company said when similar observations were made in May 2021: a global issue affecting all users equally.
Similarly, Meta changed comment settings in ways that directly affected Palestinian content during this crisis. Meta changed the default settings for all users in the region from public to ‘friends only,’ seemingly to limit the reach of public posts. In a similar vein, it changed comment settings so that only people who had been friends with or followers of a page for more than 24 hours could comment. At times, it disabled commenting altogether, citing unspecified activities, “to protect our community.” Furthermore, users noticed that comments containing the Palestinian flag emoji were being hidden for no apparent reason. Reporting later emerged that Meta had begun treating the Palestinian flag emoji as a “negative/harmful symbol,” automatically hiding comments with the flag rather than deleting them.
Throughout October 2023, hate speech and incitement to violence targeting Palestinians spread rampantly across Meta’s platforms. The company has publicly committed to combating such harmful content, but these commitments have been largely ineffective. Meta’s own internal documents acknowledged that its Hebrew hostile-speech classifiers were not as effective as they should be because the systems lacked sufficient data to function adequately.
Examples of such hate speech and incitement to violence in Hebrew have been quite blatant. For instance, some Israeli users have added “Death to Arabs” (מוות לערבים) to their Facebook profile names with no repercussions. In a similar vein, hashtags like ‎#למחוקאתעזה (“Erase Gaza”) remain active on Instagram and are not censored, even though the hashtag is blatantly violent and targeted and has contributed to real-world consequences. In contrast, Meta immediately censored the Arabic hashtag #طوفان_الاقصى on the first day of the escalation, while never censoring the parallel Hebrew hashtag #חרבות_ברזל because it was not deemed to violate Meta’s policies.
Beyond the disproportionate silencing of Palestinians and the spread of hate speech and incitement in Hebrew, Meta’s AI systems have further compounded the problem by dehumanizing Palestinians. Recently, it was discovered that WhatsApp’s AI image generator created emojis of gun-wielding children when prompted with ‘Palestinian,’ and that Instagram’s AI translation model rendered “Palestinian” alongside the Arabic phrase “الحمد لله” (“praise be to God”) as “Palestinian Terrorist.” Both cases point to a persistent problem: Meta’s generative AI tools produce offensive, biased results that dehumanize Palestinians, reflecting bias in their training datasets.
Disinformation and misinformation have also spread widely on Meta’s platforms, significantly harming freedom of expression, access to authentic information, and the right to security during a time of crisis. Since the first days of the escalation, documentation efforts have shown that false information is weaponized to propagate hatred and incite violence against Palestinians. It is also used to manipulate global public opinion and divert attention from the reality on the ground. This disinformation is employed to rationalize collective punishment of all Palestinians and is often accompanied by incitement and calls for violence. Meta was quick to boast of having “the largest third-party fact-checking network of any platform,” but given the volume of false claims that spread rampantly across its platforms throughout October, its efforts remain insufficient.
Despite repeated calls, Meta still does not disclose data about voluntary requests from Israel’s Internet Referral Unit, commonly known as the “Cyber Unit.” It is important to note that Meta has no legal obligation to comply with these requests, and nothing stops Meta from publishing data on the requests and the rate at which it complies. The Cyber Unit, however, does provide intermittent reporting of its own. During the early days of the crisis, the Cyber Unit submitted 2,150 requests to Facebook, and according to a representative of the unit, Meta complied with 90% of these takedown requests. Notably, the Cyber Unit also takes credit for requesting the censorship of critical hashtags across Meta’s platforms, including the Arabic hashtag #طوفان_الاقصى mentioned earlier.
Meta should publicly commit to full transparency regarding all government requests, both legal and voluntary, submitted by the Israeli government, the Israeli Cyber Unit, and all governmental Internet Referral Units (IRUs). This transparency should encompass the full scope of data related to each request, as well as the subsequent actions Meta took. Given the Israeli state’s use of social media content in its political crackdown on the Palestinian community, and the growing influence of IRUs on content moderation, all users deserve greater insight into Meta’s relationships with state actors.
In conclusion, Meta must take immediate and decisive action to rectify the deep-rooted issues that continue to plague its platforms, from AI-driven dehumanization to the disproportionate moderation of Palestinian content. In the context of an ongoing and asymmetrical war, characterized by major breaches of the Geneva Conventions, the company risks being seen as complicit in violating the internationally recognized fundamental rights and freedoms of Palestinians and all users.