As the largest social media company in the world, Meta allows billions of people around the globe to connect with each other, share opinions, and learn about the world. For millions of Palestinians, however, Meta is not that place. Their content and accounts are censored and over-moderated at disproportionately high rates, illustrating the double standards and over-enforcement of content moderation policies applied to Palestinian content. Meanwhile, Israeli content in Hebrew is barely moderated at all, allowing hate speech and incitement to violence against Palestinians and Arabs to spread widely across the platforms.

Double standards in content moderation policies are silencing the Palestinian narrative and violating Palestinian digital rights, creating a chilling effect among Palestinians. Over-moderation also erases documentation of human rights violations, interfering with investigations of war crimes against Palestinians, while the systematic muzzling of Palestinians online dissuades political participation. According to our research, two thirds of Palestinian youth are afraid to voice their political opinions online. This negatively influences public opinion about the Palestinian freedom struggle by spreading stereotypes of Palestinians, while simultaneously preventing Palestinians from raising awareness for their cause. Meta must stop silencing Palestinian voices and narratives. Content moderation policies must be equal, objective, transparent, and clear for all.

How do we fix this?

Let Palestine Speak. Here's how:


  1. Government request transparency: Complete transparency on requests, both legal and voluntary, submitted by the Israeli government and the Israeli Cyber Unit, including all data related to each request and any subsequent actions taken by Meta. Users must also be able to appeal content decisions.
  2. Automation transparency: Transparency about where automation and machine learning algorithms are used in the content moderation process, including complete data on whether content was correctly moderated, as well as the keywords and hate speech lexicons used for both the Arabic and Hebrew languages.
  3. Dangerous organizations: Transparency about content policies related to the classification and moderation of "terrorism" and "extremism." At a minimum, Meta should publish any internal lists of groups classified as "terrorist" or "extremist"; users cannot adhere to rules that are not made explicit. Furthermore, Meta should initiate its own investigation into adding extremist Israeli right-wing militarized groups to the list.
  4. Commitment to co-design: Commitment to a co-design process with civil society to improve upon policies and processes involving Palestinian content.
  5. Hate speech against Palestinians: Given the recent dramatic increase in hate speech against Palestinians on Meta's platforms, specifically in Hebrew, Meta must improve its Hebrew-language content moderation, beginning with the creation of a Hebrew hate speech lexicon.