The Santa Clara Principles

On Transparency and Accountability in Content Moderation


An Open Letter to Mark Zuckerberg



Dear Mark Zuckerberg:

What do the Philadelphia Museum of Art, a Danish member of parliament, and a news anchor from the Philippines have in common? They have all been subject to a misapplication of Facebook’s Community Standards. But unlike the average user, each of these individuals and entities received media attention, was able to reach Facebook staff, and, in some cases, received an apology and had the content restored. For most users, content that Facebook removes is rarely restored, and some users may be banned from the platform even in the event of an error.

When Facebook first came onto our screens, users who violated its rules and had their content removed or their account deactivated were sent a message telling them that the decision was final and could not be appealed. It was only in 2011, after years of advocacy from human rights organizations, that your company added a mechanism to appeal account deactivations, and only in 2018 that Facebook initiated a process for remedying wrongful takedowns of certain types of content. Those appeals are available for posts removed for nudity, sexual activity, hate speech or graphic violence.

This is a positive development, but it doesn’t go far enough.

Today, we, the undersigned civil society organizations, call on Facebook to provide a mechanism for all of its users to appeal content restrictions, and, in every case, to have the appealed decision re-reviewed by a human moderator.

Facebook’s stated mission is to give people the power to build community and bring the world closer together. With more than two billion users and a wide variety of features, Facebook is the world’s premier communications platform. We know that you recognize the responsibility you have to prevent abuse and keep users safe. As you know, social media companies, including Facebook, have a responsibility to respect human rights, and international and regional human rights bodies have a number of specific recommendations for improvement, notably concerning the right to remedy.

Facebook remains far behind its competitors when it comes to affording its users due process.1 We know from years of research and documentation that human content moderators, as well as machine learning algorithms, are prone to error, and that even low error rates can result in millions of silenced users when operating at massive scale. Yet Facebook users are only able to appeal content decisions in a limited set of circumstances, and it is impossible for users to know how pervasive erroneous content takedowns are without increased transparency on Facebook’s part.2

While we acknowledge that Facebook can and does shape its Community Standards according to its values, the company nevertheless has a responsibility to respect its users’ expression to the best of its ability. Furthermore, civil society groups around the globe have criticized the way that Facebook’s Community Standards exhibit bias and are unevenly applied across different languages and cultural contexts. Offering a remedy mechanism, as well as more transparency, will go a long way toward supporting user expression.

Earlier this year, a group of advocates and academics put forward the Santa Clara Principles on Transparency and Accountability in Content Moderation, which recommend a set of minimum standards for transparency and meaningful appeal. This set of recommendations is consistent with the work of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, who recently called for a “framework for the moderation of user-generated online content that puts human rights at the very center.” It is also consistent with the UN Guiding Principles on Business and Human Rights, which articulate the human rights responsibilities of companies.

Specifically, we ask Facebook to incorporate the Santa Clara Principles into its content moderation policies and practices and to provide:

Notice: Clearly explain to users why their content has been restricted.

  • Notifications should include the specific clause from the Community Standards that the content was found to violate.
  • Notice should be sufficiently detailed to allow the user to identify the specific content that was restricted and should include information about how the content was detected, evaluated, and removed.
  • Individuals must have clear information about how to appeal the decision.

Appeals: Provide users with a chance to appeal content moderation decisions.

  • Appeals mechanisms should be easily accessible and easy to use.
  • Appeals should be subject to review by a person or panel of persons that was not involved in the initial decision.
  • Users must have the right to propose new evidence or material to be considered in the review.
  • Appeals should result in a prompt determination and reply to the user.
  • Any exceptions to the principle of universal appeals should be clearly disclosed and compatible with international human rights principles.
  • Facebook should collaborate with other stakeholders to develop new independent self-regulatory mechanisms for social media that will provide greater accountability.3

Numbers: Issue regular transparency reports on Community Standards enforcement.

  • Present complete data describing the categories of user content that are restricted (text, photo, or video; violence, nudity, copyright violations, etc.), as well as the number of pieces of content that were restricted or removed in each category.
  • Incorporate data on how many content moderation actions were initiated by a user flag, a trusted flagger program, or by proactive Community Standards enforcement (such as through the use of a machine learning algorithm).
  • Include data on the number of decisions that were effectively appealed or otherwise found to have been made in error.
  • Include data reflecting whether the company performs any proactive audits of its unappealed moderation decisions, as well as the error rates the company found.

Article 19, Electronic Frontier Foundation, Center for Democracy and Technology, and Ranking Digital Rights

7amleh - Arab Center for Social Media Advancement
Access Now
ACLU Foundation of Northern California
Adil Soz - International Foundation for Protection of Freedom of Speech
Africa Freedom of Information Centre (AFIC)
Albanian Media Institute
Alternatif Bilisim
American Civil Liberties Union
Americans for Democracy & Human Rights in Bahrain (ADHRB)
Arab Digital Expression Foundation
Asociación Mundial de Radios Comunitarias América Latina y el Caribe (AMARC ALC)
Association for Progressive Communications
Bits of Freedom
Brennan Center for Justice at NYU School of Law
Bytes for All (B4A)
CAIR San Francisco Bay Area
Cartoonists Rights Network International (CRNI)
Cedar Rapids, Iowa Collaborators
Center for Independent Journalism - Romania
Center for Media Studies & Peace Building (CEMESP)
Child Rights International Network (CRIN)
Committee to Protect Journalists (CPJ)
CyPurr Collective
Digital Rights Foundation
EFF Austin
El Instituto Panameño de Derecho y Nuevas Tecnologías (IPANDETEC)
Electronic Frontier Finland
Elektronisk Forpost Norge
Facebook Users & Pages United Against Facebook Speech Suppression
Fight for the Future
Florida Civil Rights Coalition
Foro de Periodismo Argentino
Foundation for Press Freedom - FLIP
Freedom Forum
Fundación Acceso
Fundación Ciudadano Inteligente
Fundación Datos Protegidos
Fundación Internet
Fundación Vía Libre
Fundamedios - Andean Foundation for Media Observation and Study
Garoa Hacker Club
Global Voices Advocacy
Gulf Center for Human Rights
HERMES Center for Transparency and Digital Human Rights
Homo Digitalis
Human Rights Watch
Idec - Brazilian Institute of Consumer Defense
Independent Journalism Center (IJC)
Index on Censorship
Initiative for Freedom of Expression - Turkey
Instituto Nupef
International Press Centre (IPC)
Internet without borders
Intervozes - Coletivo Brasil de Comunicação Social
La Asociación para una Ciudadanía Participativa ACI Participa
Lucy Parsons Labs
May First/People Link
Media Institute of Southern Africa (MISA)
Media Rights Agenda (MRA)
Mediacentar Sarajevo
New America's Open Technology Institute
NYC Privacy
Open MIC (Open Media and Information Companies Initiative)
OutRight Action International
Pacific Islands News Association (PINA)
Panoptykon Foundation
PEN America
PEN Canada
Peninsula Peace and Justice Center
People Over Politics
Portland TA3M
Privacy Watch
Prostasia Foundation
Raging Grannies Action League
ReThink LinkNYC
Rhode Island Rights
SHARE Foundation
Son Tus Datos (Artículo 12 A.C.)
South East Europe Media Organisation
Southeast Asian Press Alliance (SEAPA)
Syrian Archive
Syrian Center for Media and Freedom of Expression (SCM)
TA3M Seattle
The Association for Freedom of Thought and Expression
The Rutherford Institute
Viet Tan
Vigilance for Democracy and the Civic State
Visualizing Impact

Facebook’s Response

Thank you for your November 13 letter to Mark Zuckerberg addressing notice, appeal, and data transparency for violations of Facebook’s Community Standards. You’ve raised important questions about how Facebook is approaching these issues.

Your letter gives us an opportunity to summarize the work we’ve been doing over the past year in these areas. Please find details below, using the headings in your letter. We have also noted areas where we aren’t currently in line with your recommendations, or where we are working in that direction.

Please bear in mind that much of this is work in progress, and we will provide further updates as our policies and enforcement develop. We look forward to continuing this dialog with you.


Our procedures for notifying users of Community Standards violations have become more detailed over time. In April, we released the internal guidelines we use to enforce our Community Standards, so that people have clarity on exactly where we draw the line. As part of this process, we have been working to improve the notifications that we send users when we remove content that goes against our Community Standards.

In the majority of cases, when we tell people their content goes against our standards, we cite the policy violated. In a few cases, where concerns for safety or gaming of our policies are high, such as with our policies around sexual exploitation and terrorist propaganda, we provide general notice that the content in question has violated our Community Standards. We also identify for users the specific piece of content that violates our standards.

With regard to how content is detected, evaluated, and removed (if it violates our policies), two points:

First, Facebook’s Community Standards Enforcement Report provides numbers showing how much violating content we have detected on our service. It’s an important part of our effort to be transparent — so people can judge for themselves how well we are doing. As outlined further below, this report provides aggregate data for most major policies on the percentage of violating content that we have proactively detected as compared to that which was reported by users. For six of the eight policies for which we report data in the enforcement report, we identified over 95% of the violating content ourselves. This figure is significant because it indicates that we are successfully removing large volumes of violating content before we receive user reports. User reports nonetheless remain important, not least because they provide a signal that users perceive the reported content as harmful.

Second, when it comes to individual pieces of content, we don’t disclose how we are made aware of violating content, since doing so could undermine the confidentiality of user reporting — an important principle underlying our enforcement approach.


When we talk about appeals, we’re referring to ways for people to request re-review of a content decision we’ve made. Prior to this year, such re-review was available to people whose profiles, Pages, or Groups had been taken down. In April 2018, we introduced the option to request re-review of individual pieces of content that were removed for adult nudity or sexual activity, hate speech, or graphic violence. We’ve subsequently extended this option so that re-review is now available for additional content areas, including dangerous organizations and individuals (which includes our policies on terrorist propaganda), bullying and harassment, regulated goods, and spam. We are continuing to roll out re-review for additional types of violations. We also plan to launch this option for individuals who have reported content that was not removed.

When we announced the ability to seek re-review of content removals, we also said that we would like to set up a system where users can provide more information and context on decisions that they think we got wrong. Along these lines, we are currently experimenting with the best ways to solicit context from users.

There are some violation types – for example, severe safety policy violations – for which we don’t offer re-review. For all other types of content, in order to request re-review of a content decision we’ve made, you are given the option to “Request Review.” We make the opportunity to request this review clear, either via a notification or interstitial. Re-review is conducted by a different human reviewer from the one who made the original decision. In a limited number of cases, where we have very high confidence in our original decision and believe that human review is not efficient — for example, spam and cases involving “banked” content — we may also rely on automation for re-review. If we find we have made a mistake, we will restore the content. Typically, the re-review takes place within 24 hours.

Please take a look at the three screenshots appearing at the bottom of this message. These images show, respectively, notice of a Community Standards violation; the option to seek re-review of our content decision; and acknowledgment of a request for re-review.

We welcome the opportunity for Facebook to collaborate with stakeholders on innovative approaches to self-regulation and accountability. We engage regularly with external groups and experts on our policy development and enforcement, and we are looking to do more. For example, our CEO recently laid out a path for appealing content decisions to an independent body (“Independent Governance and Oversight”). In 2019, we’ll undertake a process of stakeholder consultation on this idea, to bring external voices into the decision-making process. We would like to include your voices in that process.


Our first-ever Community Standards Enforcement Report, released in May 2018 as part of our larger bi-annual Transparency Report, provided data on the volume of content we actioned (including both content removed and content to which an interstitial or warning screen was added) and the percentage of violations we found before users reported them in the following areas: adult nudity and sexual activity, fake accounts, hate speech, spam, terrorist propaganda, and violent and graphic content.

We received your letter just two days before the release of our second Community Standards Enforcement Report. This report contains a range of information on topics raised in the letter, covering the period from April to September 2018. The report includes updates to the data we shared in our May report, as well as new data in the areas of bullying and harassment, and child nudity and sexual exploitation of children. For each of these policies, we provided data on how much content we took action on, and the percentage of violations that we found before users reported them. Where possible, we also provided information on the prevalence of this content on Facebook. The report further highlights data we plan to provide in the future, including an indication of how quickly we took action on Community Standards violations and how often we restore content upon re-review.

In many policy areas, the percentage of removed content first identified by Facebook’s automated systems is above 95%. In the areas of hate speech and bullying and harassment, where context is critical, proactive detection rates are lower (~52% and ~15% respectively). At this point, we do not provide a further breakdown of reporting source.

We also don’t provide data on the format of the content being removed, whether text, photo, or video, because we do not believe this data provides critical information to our users or civil society about our content review practices.

We audit the quality and accuracy of reviewer decisions on an ongoing basis, to understand where our policies or training can be improved, and to follow up with reviewers on improving where errors are being made. Reducing these errors is one of our most important priorities.

We hope this response is useful, and appreciate the opportunity to convey these updates. We are exploring ways to make our updates more easily accessible. In the meantime, please feel free to share this message with your colleagues and other members of civil society.

1 See EFF’s Who Has Your Back? 2018 Report and Ranking Digital Rights Indicator G6.

2 See Ranking Digital Rights Indicators F4 and F8, and New America’s Open Technology Institute, “Transparency Reporting Toolkit: Content Takedown Reporting.”

3 For example, see Article 19’s policy brief, “Self-regulation and ‘hate speech’ on social media platforms” (March 2018).