By Riley Galligher and Samantha Tanner

This is part 3 of our series “A Guided Tour of Disinformation Policy.”

Disinformation Policy: Identifying Landmarks

In the interest of taking you on this guided tour of disinformation policy, we need a starting point for analyzing this quickly evolving policy issue area. We have organized the various methods of combating disinformation with policy into three general categories: legal action, content moderation, and media literacy initiatives. These three categories stand out as (1) the most prevalent methods of combating disinformation among existing policy efforts and (2) the most common groupings scholars use when categorizing these policies.

As we demonstrated in the discussion surrounding definitions of disinformation, crafting effective disinformation policy is a nuanced effort, and the same complexity affects policymaking and policy enforcement. Scholars in the field have taken several different approaches to categorizing disinformation policy, leaving us with a number of ways to think about disinformation policy proposals and their objectives. Some researchers group policies by variables such as actor, target, agency, field, or enforcement mechanism. Focusing on specific variables like these to guide policy organization is a valuable approach and can offer deep insights into specific policy issues. However, these detailed approaches to categorizing disinformation policy are not well suited to the aims of this guided tour. Moreover, they leave the existing literature on disinformation policy without a common vocabulary that would enable broader discussion and analysis.

From this rather unconsolidated set of approaches in the existing literature on disinformation policy, we have been able to organize the majority of policy methods into three categories: legal action, content moderation, and media literacy.

Legal Action includes legislation passed by governments to punish actors that produce and disseminate disinformation or platforms that distribute it.

Content Moderation includes legislation or policies designed to prevent the production and dissemination of disinformation by censoring it. This can be done by governments, platforms, or companies.

Media Literacy includes initiatives to educate citizens on how to identify disinformation and increase resilience and critical thinking among the public. It also includes initiatives to verify information on media sites and flag disinformation to the public.

In creating these categories, we tested their compatibility with existing groupings of disinformation policies and their methods. We found that these categories encompass the largest number of existing categorizations of disinformation policy. Furthermore, this scheme follows a clear line of reasoning that can be identified and traced across scholars and existing thought in the field. Four examples of this compatibility between the literature and our categories are from:

  • Tackling the Information Crisis[1]
    • Legal Action: Direct Regulation
    • Content Moderation: Self-Regulation and Co-Regulation
    • Media Literacy: Audience Education
  • Institutional Counter Disinformation Strategies[2]
    • Legal Action: Speech Law and Censorship
    • Content Moderation: Algorithmic Filter
    • Media Literacy: Refutation, Exposure of Inauthenticity, and Alternative Narrative
  • Government Responses to Disinformation[3]
    • Legal Action: Legal and Regulatory Responses
    • Content Moderation: Media and Civic Responses
    • Media Literacy: Direct Response and Public Communication
  • Tackling Disinformation: EU Regulation of the Digital Space[4]
    • Legal Action: Direct Regulation
    • Content Moderation: Self-Regulation and Co-Regulation
    • Media Literacy: Audience-Centered Approaches: Fact-Checking and Media Literacy

These are just four examples of how the organization of policy approaches and methods varies across the literature on disinformation policy. Despite this variation, our categorizations are broadly compatible with these frameworks. We expect these categories will remain compatible with the disinformation policy of tomorrow and continue to provide a valuable theoretical framework for understanding future policy developments.

Legal Action

As described in the previous section, Legal Action includes legislation passed by governments to punish actors that produce and disseminate disinformation or platforms that distribute it. According to the literature, this encompasses government regulation, speech law, censorship, and legal and regulatory responses. These policies can impose fines on companies and platforms or charge individuals and other bad actors with creating or disseminating disinformation. While these policies follow a traditional legislative theory of crime and punishment, there are nuances to this model of regulating disinformation that we will discuss through examples.

Two major approaches scholars have outlined are (1) building on existing laws and applying them to disinformation-related cyber crimes and (2) writing new legislation to prosecute and sanction actors or platforms that produce or disseminate disinformation.[5]

A classic example of a legal action policy is China’s 2016 Cybersecurity Law. This law demonstrates the government’s attempt to actively criminalize the manufacturing and spread of “rumors that undermine economic and social order.”[6]

The process for building legislation under this legal action approach can begin with authorities opening criminal investigations into platforms or individuals believed to have spread disinformation. Later, it can progress to passing legislation that criminalizes the spread of disinformation and provides a structure for regular prosecution and punishment.[2] This is evident in Kazakhstan, where the government has opened criminal investigations into news outlets for allegedly publishing false information, in parallel with a law that carries a sentence of up to seven years in prison for “disseminating knowingly false information.”[3]

Under the legal action framework, we also include instances where countries take “provisions of existing civil, criminal, administrative, and other laws regulating the media, elections, and anti-defamation” and apply them to modern technology, media platforms, and individuals to stamp out disinformation.[7] This represents one of the main advantages of the approach: tradition. It relies on traditional enforcement mechanisms and criminal law, and it comes from the traditional regulatory actor, the government. Legal action policies create a direct and predictable consequence for producing and disseminating disinformation and can prove quite effective. The United Kingdom, Sweden, Nicaragua, and many other countries have employed this regulatory approach in their fight against disinformation.[8]

While this approach is widespread and follows traditional methods of government action, it can easily devolve into government censorship and pose direct threats to civil rights such as freedom of speech. Authoritarian states routinely use the language of disinformation to justify repressive measures, and this risk is present even when the relevant actors harbor good intentions. Kenya’s 2018 cybercrime law is an example of a legal action policy that has had adverse effects on journalists and freedom of speech in the country. “Under this law, people who knowingly share false or misleading information in an attempt to make it look real can be fined up to 5,000,000 shilling (nearly $50,000) or imprisoned for up to two years.”[9] Journalists have since come out against the bill because they believe it will criminalize free speech and infringe on civil rights. This law highlights the complications that come with criminalizing certain types of information, even if that information is false. Kenya is not alone in experiencing journalistic outcry against recent anti-disinformation policies for fear of infringement on free speech, systematic prosecution of journalists, and government censorship.

Malaysia’s 2018 Anti-Fake News Act is another case in which these civil rights and freedom of speech concerns were expressed. The law drew considerable debate over its potential infringement on free speech within the nation.[10] It was passed a few weeks before the country’s national election, fueling additional speculation that the policy may have been drafted with the intent to suppress dissent against the ruling party. Within the act, fake news is defined as including “any news, information, data and reports, which is or are wholly or partly false, whether in the form of features, visuals or audio recordings or in any other form capable of suggesting words or ideas.”[11] The law’s primary measure imposes fines not exceeding 500,000 ringgit, prison time of up to six years, or both, upon any individual who “maliciously creates, offers, publishes, prints, distributes, circulates or disseminates” fake news or publications containing fake news.[12] These kinds of legal action policies illustrate the concerns for democratic institutions and civil rights that arise when a government criminalizes what it considers disinformation, and we will discuss these nuances further in our Regime Types Discussion.

To put this category of disinformation policy into theoretical perspective, see the breakdowns above of the categorizations in Tackling the Information Crisis and Tackling Disinformation: EU Regulation of the Digital Space. Both works characterize our legal action category as “Direct Regulation,” or in their contexts, hard, state-driven regulation of social media platforms, news media outlets, and individuals through sanctions, fines, or imprisonment.[13] Despite the freedom of speech and journalistic freedom concerns that arise with this kind of legal approach, “direct regulation seems to be the preferred choice for many EU and non-EU states across the world,” such as France, Germany, Malaysia, and Russia.[14] For more detail on specific policies taking this approach, see Tackling Disinformation: EU Regulation of the Digital Space.

  1. Mansell, Robin, Sonia Livingstone, Charlie Beckett, and Damian Tambini. Tackling the Information Crisis: A Policy Framework for Media System Resilience. 2019.

  2. Stray, Jonathan. “Institutional Counter-Disinformation Strategies in a Networked Democracy.” 2019, pp. 1020–1025. doi:10.1145/3308560.3316740.

  3. Law Library of Congress, Global Legal Research Directorate, September 2019.

  4. Durach, Flavia, Alina Bârgăoanu, and Cătălina Nastasiu. “Tackling Disinformation: EU Regulation of the Digital Space.” Romanian Journal of European Affairs, vol. 20, no. 1, June 2020. SSRN, https://ssrn.com/abstract=3650780.

  5. Roudik, Peter, et al. Initiatives to Counter Fake News in Selected Countries. Washington, D.C.: Law Library of Congress, Global Legal Research Directorate, 2019. Library of Congress, www.loc.gov/item/2019668145/.

  6. Funke, Daniel, and Daniela Flamini. “A Guide to Anti-Misinformation Actions around the World.” Poynter Institute for Media Studies, 19 Feb. 2021, https://www.poynter.org/ifcn/anti-misinformation-actions/.

  7. Roudik, Peter, et al. Initiatives to Counter Fake News in Selected Countries. Washington, D.C.: Law Library of Congress, Global Legal Research Directorate, 2019. Library of Congress, www.loc.gov/item/2019668145/.

  8. Roudik, Peter, et al. Initiatives to Counter Fake News in Selected Countries. Washington, D.C.: Law Library of Congress, Global Legal Research Directorate, 2019. Library of Congress, www.loc.gov/item/2019668145/.

  9. Funke, Daniel, and Daniela Flamini. “A Guide to Anti-Misinformation Actions around the World.” Poynter Institute for Media Studies, 19 Feb. 2021, https://www.poynter.org/ifcn/anti-misinformation-actions/.

  10. Freedom on the Net 2018: Malaysia. Freedom House, https://perma.cc/Z2Y9-CD7L. Accessed 22 Jan. 2024.

  11. Buchanan, Kelly. “Malaysia: Anti-Fake News Act Comes into Force.” Global Legal Monitor, Library of Congress, 2018, www.loc.gov/item/global-legal-monitor/2018-04-19/malaysia-anti-fake-news-act-comes-into-force/.

  12. Buchanan, Kelly. “Malaysia: Anti-Fake News Act Comes into Force.” Global Legal Monitor, Library of Congress, 2018, www.loc.gov/item/global-legal-monitor/2018-04-19/malaysia-anti-fake-news-act-comes-into-force/.

  13. Durach, Flavia, Alina Bârgăoanu, and Cătălina Nastasiu. “Tackling Disinformation: EU Regulation of the Digital Space.” Romanian Journal of European Affairs, vol. 20, no. 1, June 2020. SSRN, https://ssrn.com/abstract=3650780.

  14. Durach, Flavia, Alina Bârgăoanu, and Cătălina Nastasiu. “Tackling Disinformation: EU Regulation of the Digital Space.” Romanian Journal of European Affairs, vol. 20, no. 1, June 2020. SSRN, https://ssrn.com/abstract=3650780.