Information Disorder and Generative AI – An Evolving Reading List

Written by GDIL

July 11, 2023

Curated by Ryan Williams (Mastodon Profile)

Last updated: Jul 11, 2023

Introduction

We intend for this list to function as a living document that evolves alongside the rapidly changing landscape of generative AI, disinformation, and misinformation.

The list is structured into three main categories, each offering a unique perspective.

First, we provide accessible explanations of core technologies and concepts like neural networks and transformer models. We also discuss the capabilities and limitations of large language models.

The second category focuses on the intersection of Generative AI and Information Disorder. Here, we explore the potential misuse of language models for disinformation campaigns, the psychological impact of an AI-saturated ‘post-truth world’, and challenges for privacy, democracy, and national security. We also link to frameworks for understanding the participatory nature of strategic information operations and the market dynamics incentivizing the creation of disinformation.

Finally, we turn our attention to AI Governance and Policy, offering both US and global perspectives on the matter.

Feel free to navigate through the sections at your own pace, following whatever interests you most. Happy reading!

Generative AI – Technology

But what is a neural network? – (19m) This video explains the basics of neural networks. No technical skills required.
What is a Transformer Model – NVIDIA Blog. The Transformer model architecture is one of the enabling innovations behind the recent explosion of AI applications.
The Text-to-Image Revolution, Explained – (13m) Vox
GPT-4 System Card (60pgs) – OpenAI-authored paper that characterizes the capabilities of GPT-4.
Talking about Large Language Models (11pgs) – This influential paper by Murray Shanahan aims to inject philosophical nuance into the discourse around artificial intelligence by clarifying how large language models work and steering readers away from anthropomorphic analogies.
On the Opportunities and Risks of Foundation Models (161 pgs) – Large Language Models are sometimes called “foundation models” because they enable a diverse set of capabilities through fine-tuning and prompt engineering. This comprehensive report was written by the Center for Research on Foundation Models (CRFM) and Stanford Institute for Human-Centered Artificial Intelligence. It can help you develop an intuition for what these models are “good for”.
Chain-of-thought Reasoning – (31pgs) One feature of all state-of-the-art models is that the quality of generation is dependent on the user’s prompting strategy. In other words, the instructions you provide the model can dramatically affect what the model produces. This paper demonstrates the chain-of-thought prompting technique.
Retrieval Augmented Generation – (25m) Why are tech executives so confident that LLMs will transform search experiences? This interview explains the concepts behind Retrieval Augmented Generation.
The Impact of AI on Developer Productivity: Evidence from GitHub Copilot – (19pgs) While not directly relevant to information disorder, generative AI is increasingly capable of producing useful code from natural language instructions. It is easy to imagine how these tools will make inauthentic automated amplification even more accessible to bad actors.
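
Chain-of-thought prompting, mentioned above, needs no special tooling: it amounts to adding a worked example and a reasoning instruction to the prompt text. The sketch below is a toy illustration in Python, not tied to any particular model API; the function names and example questions are our own inventions.

```python
# Toy illustration of chain-of-thought prompting: the same question is
# framed two ways. With a real model, the second framing tends to elicit
# step-by-step reasoning before the final answer.

def direct_prompt(question: str) -> str:
    """Ask for the answer alone."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Prepend a worked example whose answer shows its reasoning steps,
    then append an instruction that invites step-by-step reasoning."""
    worked_example = (
        "Q: A pack holds 3 pens and Sam buys 4 packs. How many pens?\n"
        "A: Each pack holds 3 pens. 4 packs hold 4 * 3 = 12 pens. "
        "The answer is 12.\n"
    )
    return worked_example + f"Q: {question}\nA: Let's think step by step."

print(chain_of_thought_prompt("A box holds 6 eggs and Lee buys 5 boxes. How many eggs?"))
```

The point of the papers listed above is that this purely textual change, with no retraining, can dramatically alter what a model produces.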

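Retrieval Augmented Generation is similarly simple in outline: retrieve documents relevant to a question, then splice them into the prompt so the model answers from supplied context rather than from memory alone. The following is a minimal sketch under invented assumptions; the corpus, the word-overlap scoring rule, and the prompt template are all illustrative, not any production retrieval system.

```python
# Toy sketch of retrieval-augmented generation (RAG): rank a small corpus
# against the question, then assemble a prompt that grounds the model's
# answer in the retrieved text.

CORPUS = [
    "The transformer architecture was introduced in 2017.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
    "Large language models are trained on web-scale corpora.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(corpus, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def build_rag_prompt(question: str, corpus: list[str]) -> str:
    """Assemble a prompt asking the model to answer from the context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Real systems replace the word-overlap retriever with vector search over embeddings, but the prompt-assembly step is essentially this.
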
Generative AI and Information Disorder

General

Forecasting potential misuses of language models for disinformation campaigns—and how to reduce risk (67pgs) – The Stanford Internet Observatory and OpenAI collaborated to produce a fantastic overview of how current models will affect future disinformation campaigns.
Humans Aren’t Mentally Ready for an AI-Saturated ‘Post-Truth World’ – Wired Magazine
AI and the Future of Disinformation Campaigns – Part 2: A Threat Model (61pgs) – This Center for Security and Emerging Technology Report suggests a framework for evaluating AI disinformation threats.
Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security (70pgs) – Danielle Citron and Bobby Chesney identified many of the risks associated with synthetic media (media produced by generative AI) in this comprehensive 2019 report.
The Dark Forest Theory of the Web – New models are poised to flood the web with generic, generated content. This will change the fundamental economics of the social web in ways sure to affect disinformation and misinformation.
AI model GPT-3 (dis)informs us better than humans – (9pgs) While the results are far from conclusive, research into the persuasiveness of generative AI indicates that large language models may be effective at misleading humans.

Frameworks for thinking about Information Disorder

Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations (20pgs) – This article by Kate Starbird, Ahmer Arif, and Tom Wilson foregrounds the concept of computer-mediated collaborative work in disinformation campaigns. Information operations don’t “just happen”. Thinking about the sociotechnical systems that enable disinformation campaigns can help us anticipate how generative AI will serve sophisticated threat actors beyond simply “making more content”.

The Marketplace for Rationalizations (22pgs) – This innovative paper by Daniel Williams explores how market dynamics incentivize the creation of disinformation and misinformation. As generative AI changes the marginal cost of producing content, these market dynamics may usher in sophisticated infrastructure for delivering misinformation.

AI Governance and Policy

US Perspectives

Office of Science and Technology Policy Blueprint for an AI Bill of Rights
NIST AI Risk Management Framework
National Security Commission on Artificial Intelligence Final Report
Author Bots – Derek Bambauer and Mihai Surdeanu explore how the current generation of generative AI will be treated under Section 230.
Negligent AI Speech: Some Thoughts About Duty – (14pgs) Jane Bambauer explores whether companies and individuals using generative AI can be found negligent in cases where models cause harm.

Global Perspectives

AI Language Models: Technological, socio-economic and policy considerations (OECD) (52pgs) – This report offers an overview of the AI language model and NLP landscape with current and emerging policy responses from around the world.
AI and Global Governance: Modalities, Rationales, Tensions (30pgs)
Global AI Policies and Strategies Dashboard – This is a fantastic resource for staying informed about the latest developments in global AI governance.
China’s new AI rules protect people — and the Communist Party’s power

Theories of Governance and Disinformation Policy

These ideas and frameworks will sensitize you to the questions facing governments and corporations around the world.

The Perils of Legally Defining Disinformation (25pgs)
Do You Have a Right Not to Be Lied To? The legal thinkers reconsidering freedom of speech.
The Fourth Generation of Human Rights: Epistemic Rights in Digital Lifeworlds (20pgs)
Initiatives to Counter Fake News in Selected Countries (100pgs)
Forthcoming: Global Disinformation Policy Database

Related Articles

A Guided Tour of Disinformation Policy: Introduction

By Riley Galligher and Samantha Tanner – In the digital age, the internet has reshaped the way we produce, share, and come to believe information. As the internet became a major center for information exchange, it also became a key target of disinformation. Information...

GDIL Featured in #Connexions23

Executive Director Michael Mosser and Task Team Leaders Ryan Williams (Evergreen), Liz Wong (Barnowl), and Zach Daum (Tearline) were featured in a presentation at the first annual #Connexions conference in April 2023. Watch their presentation here and mark your calendars for #Connexions24, March 17-20, 2024!

A Guided Tour of Disinformation Policy: Crossover

By Riley Galligher and Samantha Tanner – This is part 6 of our series "A Guided Tour of Disinformation Policy". To read the first part, click here. Crossover The structure of our guided tour has centered, until this point, around our three categories of disinformation....
