By Riley Galligher and Samantha Tanner

This is part 4 of our series “A Guided Tour of Disinformation Policy”. To read the first part, click here.

Our Content Moderation category includes policies aimed at preventing the production and dissemination of disinformation, or at removing disinformation that already exists, whether through government regulation or through the rules of media platforms themselves. Content Moderation can be divided into positive and negative content moderation, and into Self-Regulation, Co-Regulation, and Direct Regulation. It sometimes stands alone as a policy, but it is more often implemented as an element of broader legislation.

Negative content moderation is the traditional and most common type, in which content is flagged and taken down from platforms following protocols set by the media platform or the government. Positive content moderation, on the other hand, is when those same authorities promote or mandate the dissemination of certain content.

Content Moderation can also be split into Self-Regulation, Co-Regulation, and Direct Regulation, where the difference lies in the authority enforcing the rules. Self-Regulation is when media platforms, such as Meta, create user guidelines that allow them to remove content that violates their platform-specific rules. Direct Regulation is when the government creates guidelines for taking down content and enforces them across information spaces and media platforms. Co-Regulation is when governments collaborate with media platforms to develop the guidelines used to monitor and censor content.

We referred to Direct Regulation earlier, in the Legal Action section above. While Direct Regulation is technically Content Moderation, it is carried out by the government and typically accompanied by an enforcement mechanism, such as fines levied on media platforms for noncompliance. This criminalization aspect of Direct Regulation blurs the lines between our categories and illustrates the complexity of this policy issue. In our framework, crossover between categories helps us make sense of that complexity by identifying where a policy 1) sets content guidelines that force media platforms to regulate certain content (Content Moderation) and 2) criminalizes the production and distribution of specific content by individuals and platforms through fines and other punishments (Legal Action). Any law that prohibits disinformation through guidelines, requires its removal (Content Moderation), and punishes disinformation actors (Legal Action) falls into both categories.

Direct Regulation and Negative Content Moderation Example

China’s 2017 Provisions for the Administration of Internet News Information Services is an example of Direct Regulation working together with Legal Action. This law “prohibits internet news information service providers and users from producing, reproducing, publishing, or spreading information content prohibited by laws and administrative regulations.”[1] Providers and users found in violation of the government’s content guidelines must rectify their wrongdoing (remove the content) and/or face fines or criminal prosecution (Legal Action).

Positive Content Moderation Example

This law also provides an example of positive content moderation: platforms are required to reprint information from officially registered news media. This makes it a clear example of positive and negative content moderation working together, as the law both prohibits the production and distribution of specific content (negative) and mandates the distribution of approved content (positive).[2]

Self-Regulation Example

The Self-Regulation approach describes how social media platforms and other media organizations create their own policies or guidelines to monitor and regulate content on their platforms. Meta’s Community Standards are a classic example of self-regulation, as they detail the boundaries of acceptable content on Meta’s platforms.[3] Other examples include the trust and safety or fact-checking teams that platforms employ to keep the information on their services safe and accurate. Self-Regulation differs from state-sponsored disinformation policy, but it accounts for a considerable share of the content moderation and anti-disinformation work that actually takes place, arguably making platform leaders the most influential actors in combating disinformation online.

Co-Regulation Examples

Examples of the Co-Regulation approach include China’s national and provincial cyberspace authorities investigating top social networking platforms, including WeChat, Weibo, and Baidu Tieba, and finding posts containing “violence, terror, rumors and obscenity.”[4] The companies, in turn, offered apologies, self-criticism, and more self-regulation.[5] Another example is the Brazilian government’s agreement with Meta and Google to “combat disinformation generated by third parties.”[6] Both cases involve industry leaders regulating content on their platforms either in response to or in collaboration with government concerns and initiatives.

Pros and Cons of Self-Regulation

Content Moderation as a policy approach raises several issues in democratic nations because it comes close to the line of censorship, whether carried out by the state or by a private platform. One concern surrounding self-regulated content moderation is the transparency of social media platforms. Platform transparency is the idea that people should know how moderation decisions are made and who makes them, which enhances the accountability and legitimacy of social media platforms.[7] Transparency is particularly essential for content moderation because its stated aim is to keep information ecosystems safe and accurate; transparency helps ensure that moderators make decisions in service of that goal rather than taking down content arbitrarily.

Pros and Cons of Content Moderation by the State

The most pressing concern with content moderation, and with disinformation policy as a whole, is the fine line between content moderation and state censorship. Civil society groups worldwide have raised concerns about governments passing legislation that enables repressive restrictions on free expression in the name of moderating disinformation. This issue is not just theoretical: governments have implemented strict content moderation laws that give them the power to take down or punish speech they do not like under the guise of fighting disinformation or fake news.[8]

This conflict has pushed scholars to critically analyze content moderation within the broader context of regulating disinformation, and we see some convergence in the literature on disinformation policy approaches. We have framed this discussion of content moderation around the actor: the state, the platform, or both. Other scholars organize regulatory strategies by their target instead: content, data, or structure, pairing each target with the appropriate actor. Clara Iglesias Keller argues that content moderation can “maximize positive effect on information spaces” when done by media platforms, not only because of their access to information spaces but also because of the dangers content moderation presents when done by the state.[9] She argues that state action is best targeted at structure or data to avoid the “disproportionate risks to freedom of expression” inherent in state content moderation.[10]

Much of the scholarship agrees that content moderation by the state threatens free speech. Still, some scholars share concerns about “incentivizing media companies to take on regulation themselves without rule of law,” as we have seen in the United States, China, and Brazil.[11] By shifting the target of regulation away from content and disinformation actors themselves, media policies can maximize their positive effects by “seeking to reduce the vulnerabilities of media systems that those actors exploit.”[12]

  1. Roudik, Peter, et al. Initiatives to Counter Fake News in Selected Countries. Washington, D.C.: The Law Library of Congress, Global Legal Research Directorate, 2019. PDF. Retrieved from the Library of Congress, www.loc.gov/item/2019668145/.

  2. Funke, Daniel, and Daniela Flamini. “A Guide to Anti-Misinformation Actions around the World.” Poynter Institute for Media Studies, 19 Feb. 2021, https://www.poynter.org/ifcn/anti-misinformation-actions/.

  3. “Facebook Community Standards | Transparency Center.” Meta, https://transparency.fb.com/policies/community-standards/.

  4. Repnikova, Maria. “China’s Lessons for Fighting Fake News.” Foreign Policy, 6 Sept. 2018, https://foreignpolicy.com/2018/09/06/chinas-lessons-for-fighting-fake-news/.

  5. Repnikova, Maria. “China’s Lessons for Fighting Fake News.” Foreign Policy, 6 Sept. 2018, https://foreignpolicy.com/2018/09/06/chinas-lessons-for-fighting-fake-news/.

  6. Funke, Daniel, and Daniela Flamini. “A Guide to Anti-Misinformation Actions around the World.” Poynter Institute for Media Studies, 19 Feb. 2021, https://www.poynter.org/ifcn/anti-misinformation-actions/.

  7. Jozwiak, Magdalena Ewa. “What We Mean When We Talk about Platform Transparency.” The Datasphere Initiative, 2 May 2022, https://www.thedatasphere.org/news/what-we-mean-when-we-talk-about-platform-transparency/.

  8. Funke, Daniel, and Daniela Flamini. “A Guide to Anti-Misinformation Actions around the World.” Poynter Institute for Media Studies, 19 Feb. 2021, https://www.poynter.org/ifcn/anti-misinformation-actions/.

  9. Iglesias Keller, Clara. Don’t Shoot the Message: Regulating Disinformation Beyond Content. In: Blanco de Morais, C., Ferreira Mendes, G., Vesting, T. (eds) The Rule of Law in Cyberspace. Law, Governance and Technology Series, vol 49. Springer, Cham, 2022. https://doi.org/10.1007/978-3-031-07377-9_16

  10. Iglesias Keller, Clara. Don’t Shoot the Message: Regulating Disinformation Beyond Content. In: Blanco de Morais, C., Ferreira Mendes, G., Vesting, T. (eds) The Rule of Law in Cyberspace. Law, Governance and Technology Series, vol 49. Springer, Cham, 2022. https://doi.org/10.1007/978-3-031-07377-9_16

  11. Tenove, Chris and Spencer McKay. Disinformation as a Threat to Deliberative Democracy. Political Research Quarterly, 74(3), 703-717. 2021. https://doi.org/10.1177/1065912920938143

  12. Tenove, Chris and Spencer McKay. Disinformation as a Threat to Deliberative Democracy. Political Research Quarterly, 74(3), 703-717. 2021. https://doi.org/10.1177/1065912920938143