In February 2018, the Santa Clara Principles on Transparency and Accountability in Content Moderation were created by a small group of organizations, academics, and advocates on the sidelines of the first COMO at Scale conference in Santa Clara, California. The principles set out baseline standards, or initial steps, that companies engaged in content moderation should take to provide meaningful due process to impacted speakers and to better ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of users’ rights.
In the two years since, the principles have been implemented in full by Reddit and in part by several other companies, including GitHub, Apple, WordPress, and YouTube. Many more companies publicly endorsed the principles in 2019.
We are now embarking on a process of introspection and analysis to determine whether the Santa Clara Principles should be updated for the ever-changing content moderation landscape. We are particularly interested in hearing from groups and individuals from the Global South, and those who represent marginalized communities that are heavily impacted by commercial content moderation practices.
Currently, the Santa Clara Principles focus on the need for numbers, notice, and appeals in content moderation. The following questions address whether these categories should be expanded, fleshed out further, or revisited.
The first category sets the standard that companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines. Please indicate any specific recommendations or components of this category that should be revisited or expanded.
The second category sets the standard that companies should provide notice to each user whose content is taken down or whose account is suspended, explaining the reason for the removal or suspension. Please indicate any specific recommendations or components of this category that should be revisited or expanded.
The third category sets the standard that companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension. Please indicate any specific recommendations or components of this category that should be revisited or expanded.
Do you think the Santa Clara Principles should be expanded or amended to include specific recommendations for transparency around the use of automated tools and decision-making (including, for example, the context in which such tools are used, and the extent to which decisions are made with or without a human in the loop), in any of the following areas:
Content moderation (the use of artificial intelligence to review content and accounts and determine whether to remove the content or accounts; processes used to conduct reviews when content is flagged by users or others)
Content ranking and downranking (the use of artificial intelligence to promote certain content over other content, such as in search result rankings, and to downrank certain content, such as misinformation or clickbait)
Ad targeting and delivery (the use of artificial intelligence to segment and target specific groups of users and deliver ads to them)
Content recommendations and auto-complete (the use of artificial intelligence to recommend content such as videos, posts, and keywords to users based on their user profiles and past behavior)
Do you feel that the current Santa Clara Principles provide the correct framework for, or could be applied to, intermediate restrictions (such as age-gating, adding warnings to content, and adding qualifying information to content)? If not, should we seek to include these categories in a revision of the principles, or would a separate set of principles covering these issues be better?
How have you used the Santa Clara Principles as an advocacy tool or resource in the past? In what ways? If you are comfortable sharing, please include links to any resources or examples you may have.
How can the Santa Clara Principles be more useful in your advocacy around these issues going forward?
Do you think that the Santa Clara Principles should apply to the moderation of advertisements, in addition to the moderation of unpaid user-generated content? If so, do you think that all or only some of them should apply?
Is there any part of the Santa Clara Principles which you find unclear or hard to understand?
Are there any specific risks to human rights which the Santa Clara Principles could better help mitigate by encouraging companies to provide specific additional types of data? (For example, is there a particular type of malicious flagging campaign which would not be visible in the data currently called for by the principles, but would be visible were the data to include an additional column?)
Are there any regional, national, or cultural considerations that are not currently reflected in the Santa Clara Principles, but should be?
Are there considerations for small and medium enterprises that are not currently reflected in the Santa Clara Principles, but should be?
What recommendations do you have to ensure that the Santa Clara Principles remain viable, feasible, and relevant in the long term?
Who would you recommend to take part in further consultation about the Santa Clara Principles? If possible, please share their names and email addresses.
If the Santa Clara Principles were to call for a disclosure about the training or cultural background of the content moderators employed by a platform, what would you want the platforms to say in that disclosure? (For example: disclosing what percentage of moderators have passed a language test for the language(s) they moderate, or disclosing that all moderators have gone through a specific type of training.)
Do you have any additional suggestions?
Have current events like COVID-19 increased your awareness of specific transparency and accountability needs, or of shortcomings of the Santa Clara Principles?
Information collected through this survey is subject to EFF's Privacy Policy. All data collected through this survey will be used to determine whether the Santa Clara Principles should be updated and, if so, how. This may include the publication of a report discussing the results of this consultation. Data collected may be shared by EFF with the organizations collaborating on this project: Global Partners Digital, Open Technology Institute, Article 19, Center for Democracy and Technology, Ranking Digital Rights, ACLU of Northern California, Witness, Brennan Center for Justice, Red en Defensa de los Derechos Digitales, and Access Now.