By Riley Galligher and Samantha Tanner
This is part 5 of our series “A Guided Tour of Disinformation Policy”. To read the first part, click here.
As discussed in our policy framework section, our last disinformation policy category is Media Literacy. Media Literacy is a broad category that can include initiatives to educate citizens on identifying disinformation, promoting critical information practices, or fact-checking and flagging disinformation. This category encompasses a range of measures, such as videos that teach school children how to find trusted information sources, some forms of content moderation, and flagging content for users to make them more aware of disinformation. Though the boundary between disinformation policy categories can be fuzzy, the critical distinction for media literacy is that the policy is focused on consumers of information (or disinformation), with policies aiming to educate consumers and shore up their defenses, that is, to increase their resilience to disinformation campaigns.
Positive content moderation can be considered both a content moderation and a media literacy policy, and it illustrates how these boundaries overlap. As discussed in our content moderation section, positive content moderation is the promotion or dissemination of certain content. For example, Meta flagged COVID-19-related content on its platforms and attached links to the Centers for Disease Control and Prevention (CDC) website to provide accurate information in real time and actively mitigate pandemic disinformation. This is a form of positive content moderation because Meta, as a social media platform, ensured that certain information was disseminated by tagging posts that mentioned COVID-19 or related keywords with a redirect to official CDC information on the pandemic. However, it is also a form of media literacy response because the policy targets the consumers of information, in this case social media users, and redirecting them to CDC information on COVID-19 is an attempt to educate the public and leave them less susceptible to disinformation about faulty COVID-19 cures or vaccine paranoia.
There are various kinds of media literacy policies. Below, we list key examples of media literacy interventions targeting disinformation.
Fact-Checking Example: Nongovernmental organizations in Sweden developed initiatives to combat disinformation during the country’s 2018 national election. Two Stockholm-based media corporations collaborated with the public service companies SVT (Swedish Television) and SR (Swedish Radio), which are publicly funded through income taxes but entirely independent of the government, to review the accuracy of election coverage and of statements made by politicians and other media.
Youth Literacy Campaign Example: Media literacy efforts to combat disinformation in Kenya began in 2018, when the United States Embassy in Kenya launched a one-year literacy campaign (called YALI Checks: Stop.Reflect.Verify.) to counter the dissemination of false information. The campaign targeted the 47,000 members of the Kenya chapter of the Young African Leaders Initiative (YALI) and produced several online activities, including quizzes, videos, online chats, and discussions with experts on media literacy tools, to educate members and counter “the spread of false information.”
Legal Information Portal Examples: Because it can be difficult for citizens to understand the language used in laws and policies, disinformation about legislation can spread quickly. Kenya has a “central, free legal database” administered by the National Council for Law Reporting, where citizens can find legislation and case law. Additionally, Brazil has a national law portal for federal laws and state constitutions, along with a separate portal maintained by the Federal Supreme Court that provides access to the electronic judicial gazette. Brazil’s national press also has a website providing access to the federal official gazette, Diário Oficial da União.
Counter-Propaganda Example: The EU’s East StratCom Task Force was created in 2015 as a counter-propaganda agency that tracks and refutes state propaganda from Russia. The task force monitors media in different countries, conducts analysis, and publishes series such as “EU vs Disinfo” and “Disinformation Review,” primarily in English but with some publications in Russian.
Content Labeling Example: Media literacy also encompasses promoting verified news sources. This includes Instagram attaching the CDC website to posts flagged as containing unverified COVID-19-related claims, and updating its search interface to show credible sources of information, such as the CDC account, first when individuals search for “vaccine.” It also includes China’s 2017 Provisions for the Administration of Internet News Information Services, which required internet news providers to spread information and news verified by the government without “distorting or falsifying news information.”
One of the most apparent benefits of a media literacy approach is that it is less susceptible than legal action or negative content moderation policies to being used to violate free speech or suppress dissent. Media literacy combats disinformation by providing additional information, such as fact-checking, counter-narratives, or strategies to identify disinformation, and leaves citizens the agency to make their own (hopefully more informed) decisions about the veracity of the content they consume. An additional benefit is that approaches such as legal information portals aim to increase the accessibility and transparency of government information, which may increase civic awareness and engagement. Media literacy campaigns and education also have the unique opportunity to target younger populations. Equipping citizens from a young age to recognize disinformation and develop strategies for consuming and distributing high-quality information could be an investment that increases resilience to disinformation in the long run. It is also a proactive rather than reactive approach to disinformation.
As with most approaches to disinformation, there is complexity that comes with labeling information as maliciously false. When agencies, organizations, or campaigns work to identify and refute disinformation, they are necessarily making a value judgment about what the “truth” is. If a national government establishes an agency to combat disinformation, there may be concerns about political agendas influencing the designation of information as true or false, and about the potential for future administrations to misuse this power. For this reason, many media literacy approaches compiled in the disinformation policy literature are nongovernmental or headed by semi- or fully autonomous public service companies such as Sweden’s SVT. Additionally, there are questions about the effectiveness of some media literacy policy types, such as the content labeling and flagging employed by Meta during the pandemic; Meta began rolling back that program in 2022 after internal research indicated it was not substantially effective in mitigating disinformation.
Having discussed all three policy types identified in our framework, one can see how they overlap; our next post will discuss crossover between the categories. Still, there are ways the policy types can be differentiated, primarily through the assumptions each policy type makes about the root problem of disinformation as it authorizes action to combat it.
Whereas the goal of legal action is to punish disinformation and the goal of content moderation is to prevent and/or remove disinformation, the goal of media literacy policy approaches is to increase resilience to disinformation. Resilience, in a disinformation context, can be defined as “a structural context in which disinformation does not reach a large number of citizens”; when citizens do come in contact with it, they are “less inclined to support or further distribute such low-quality information and, in some cases, they will be more able to counter that information.” The goals associated with different disinformation policy strategies also reflect the policymakers’ conceptualization of the issue that disinformation presents. In content moderation, the problem of disinformation is assumed to lie in the information ecosystem and access to disinformation; through negative or positive content moderation, the problem (the information itself) is removed. With legal action, the implied problem is that bad actors will continue to perpetuate disinformation unless disincentivized, which the threat of fines or incarceration aims to accomplish. Finally, in media literacy, the implied problem is an under-informed populace, and the assumption is that legislation aimed at education in various ways will enable citizens to recognize and dismiss disinformation, thus minimizing its spread and impact.
In this way, one can examine the disinformation policies governments enact and, by identifying trends, infer how those governments conceptualize disinformation and look for regional patterns in disinformation management methods.
Brian Fung, “Meta Phased out Covid-19 Content Labels after Finding They Did Little to Combat Misinformation, Oversight Board Says,” CNN, last modified April 20, 2023, https://edition.cnn.com/2023/04/20/tech/meta-covid-misinformation-oversight-board/index.html.
Peter Roudik et al., “Initiatives to Counter Fake News in Selected Countries,” Global Legal Research Directorate, 2019, 97, The Law Library of Congress.
Roudik et al., “Initiatives to Counter,” 65.
Roudik et al., “Initiatives to Counter,” 66.
Roudik et al., “Initiatives to Counter,” 9.
Jonathan Stray, “Institutional Counter-disinformation Strategies in a Networked Democracy,” Association for Computing Machinery, 2, https://doi.org/10.1145/3308560.3316740.
Kaya Yurieff and Oliver Darcy, “Facebook Vowed to Crack down on Covid-19 Vaccine Misinformation but Misleading Posts Remain Easy to Find,” CNN, last modified February 8, 2021, https://edition.cnn.com/2021/02/07/tech/facebook-instagram-covid-vaccine/index.html.
Maria Repnikova, “China’s Lessons for Fighting Fake News,” Foreign Policy, last modified September 6, 2018, https://foreignpolicy.com/2018/09/06/chinas-lessons-for-fighting-fake-news/.
Stray, “Institutional Counter-disinformation,” 4.
Fung, “Meta Phased,” CNN.
Edda Humprecht, Frank Esser, and Peter Van Aelst, “Resilience to Online Disinformation: A Framework for Cross-National Comparative Research,” The International Journal of Press/Politics 25, no. 3 (2020): 6, https://doi.org/10.1177/1940161219900126.