Threatcasting at the Global Disinformation Lab

Written by Team Threatcasting: Nayla Borrell, Kevin Lentz, Ramiro De Los Santos, Sydney A. Svagerko

May 27, 2022

GDIL student researchers explain how the threatcasting methodology can surface novel threats.

It is 2028. Jorge is being driven home from work by his self-driving Tesla. It’s been a long day designing the new user interface his company is developing. The UI will be used in the next generation of unmanned submarines ordered by the US Navy as part of its force redesign, which will replace the massive surface-vessel losses incurred in the catastrophic battle of Taiwan in 2026. His Series 2 pulls into the garage and he heads inside. His smart lock scans his face as he approaches the door. Usually, it opens the door for him. Today a blaring alarm floods the garage, and he hears the three automated deadbolts lock into place. He flinches and flicks his eyes up to bring down the heads-up display on his smart glasses.

A short message and a live video feed of his daughter’s room open. Jorge immediately notices an uncanny, menacing form zipping around the room, like some kind of robotic hummingbird or hornet. His daughter is backed into a corner, her face contorted with fear. Jorge’s heart hammers in his chest and he begins to sweat.

“There is a briefcase on the porch in your Amazon locker. It contains a secured communications link with your new client. Scan your fingerprint in the next three minutes to accept it, and the weapon in your daughter’s room will be deactivated. If you alert your employer, or the government, we will know. We’re watching.”

Jorge races to the Amazon locker on his front porch.


What will Jorge do? What would you do? What should you do? And what about the Navy, whose future workhorse underwater platform has been compromised before it is even finished? What about the US government, which issued the defense contract, and Jorge’s company, which is obligated to complete it?

While such a scenario might seem unlikely, it is worth considering that at one point so too did commercial electric vehicles, virtual reality, and the popularization of fully remote working, among many other now commonplace technologies and realities. Moreover, in the case of poor Jorge, the technologies to create this nightmare are already being developed. They are an integral part of the People’s Liberation Army’s modern psychological warfare doctrine.[i] We want to be imagining precisely these kinds of nightmares now, so that we can prevent them in the future. But how can we do so in a manner that is relevant, impactful, and timely?

This semester at the Global Disinformation Lab, in conjunction with the Army Cyber Institute, we learned about Threatcasting, a process developed to address exactly these issues. Created by Intel’s longtime in-house futurist and further developed by a team at Arizona State University, Threatcasting encourages a group of relevant stakeholders to project themselves into a grim near-future, and then to create policies to prevent it by working their way back in time.

Concretely, it involves organizing a conference around a topic, selecting relevant participants and experts, having the experts brief the participants, and then guiding groups of participants to imagine a person, in a place, experiencing a threat, in as much detail as possible. The groups also identify key “gates and flags” that indicate whether society is heading toward the dark future they have imagined. Analysts then take the data the groups generate, find similarities and differences across the groups, and produce a report of key findings and concerns that the client can use as a roadmap for the coming years.
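To make the analysis step concrete, the raw output of each group can be thought of as a structured record: a person, in a place, experiencing a threat, plus the gates and flags attached to that future. The sketch below is purely illustrative; the field names and the `shared_flags` helper are our own invention, not part of the official Threatcasting methodology, but they show how an analyst might surface warning signs that recur across independent groups:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One group's threat future: a person, in a place, experiencing a threat."""
    person: str
    place: str
    threat: str
    year: int
    gates: list[str] = field(default_factory=list)  # events that would enable this future
    flags: list[str] = field(default_factory=list)  # warning signs it is approaching

def shared_flags(scenarios):
    """Return warning signs that appear in more than one group's scenario."""
    counts = {}
    for s in scenarios:
        for f in s.flags:
            counts[f] = counts.get(f, 0) + 1
    return [f for f, n in counts.items() if n > 1]

# Example: two groups independently flag the same warning sign.
a = Scenario("Hilary Klein", "West Point", "genomic data theft", 2032,
             flags=["biotech data exfiltration", "diaspora targeting"])
b = Scenario("Robert", "illegal casino", "insider spyware install", 2032,
             flags=["biotech data exfiltration", "employee financial distress"])
print(shared_flags([a, b]))  # ['biotech data exfiltration']
```

In practice the cross-group analysis is qualitative rather than automated, but the same idea applies: recurring flags across independently generated futures are the strongest signals in the final report.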

Scenarios – Starfruit Labs

To demonstrate how the Threatcasting process works, the team at the Global Disinformation Lab created six different scenarios around the fictional “Starfruit Labs.” Starfruit is a startup artificial intelligence company whose machine learning algorithms were created to take advantage of vast quantities of anonymized personal and public health data, consumer genomics, and biomedical research to identify personalized dietary regimes for elite athletes. The company’s assets are owned in part by the United States Army Futures Command and will be used by West Point for a ten-year research project that monitors cadet physical and mental performance. Our Threatcasting team divided into several sub-groups to generate potential threat futures that could result from the Starfruit scenario.

One team’s model is set in 2032. A prominent activist in the 2019 Hong Kong protests, going by the alias Hilary Klein, is unknowingly having her cyber profile monitored by a private Chinese biotechnology firm. While visiting her cousin Chris, a West Point cadet involved in the Starfruit program, she logs into her email on his computer. A next-generation tracking AI notifies the Chinese biotech company and begins downloading data and tracking keystrokes on Chris’s computer, giving the firm access to Starfruit’s genomic data. In this model the Chinese firm is attempting to leapfrog its American competition by stealing important data, which highlights the importance of securing the servers and communication lines of biotechnology firms.

Another team’s model also emphasizes data security. Their scenario is likewise set in 2032 and focuses on Robert, a 5’8” middle-aged white Starfruit analyst whose gambling addiction threatens the financial stability of his family. Robert is approached at an illegal casino by a Chinese Communist Party intelligence officer who recently lost an undercover agent within Starfruit Labs. The officer offers to pay off Robert’s gambling debt if he installs spyware on his work computer, enabling CCP intelligence to retrieve lab data in real time. Both of these models highlight how current technology allows for the covert transmission of data, and, in this case, the human touch needed to operationalize the collection.

A final team’s model goes a step further by demonstrating the potential social effects of this data theft and manipulation. Their scenario focuses on Stephen Murphy, a senior white West Point cadet who is participating in a program to optimize his lacrosse skills. However, what Stephen and Starfruit don’t know is that Chinese external intelligence services have infiltrated West Point’s research project and have begun manipulating the nutritional suggestions given to Chinese-American cadets so that they outperform their peers. When Stephen notices his lack of improvement compared to the Chinese-American cadet he’s competing against for a starting position, he becomes disgruntled and turns to racialized conspiracy theories to explain the disparity, heightening ethnic tensions within the corps.

Policy Recommendations 

Our teams developed multiple policy recommendations to respond to the threats facing Starfruit Labs.

In response to the threat characterized by private network mapping and the targeting of ethnic diaspora, the first team recommended updating and bolstering data privacy laws. Specifically, they proposed regulating the privacy policies of big tech companies and limiting their ability to share user data with third parties in threat-actor countries. They also recommended better protecting the data itself by creating a classification system for biotech data, making it possible to regulate sensitive data types. Additionally, they identified institutionalizing best practices, such as keeping work laptops in the office and mandatory password lengths, as a low-cost, high-impact solution.

The second team envisioned a threat characterized by deception and financial manipulation. Their recommendations centered on a robust employee protection program, which would monitor the financial health of employees and prevent the over-accumulation of operational authority in any one employee by implementing a tailored data-access and operations framework.

The final team proposed a threat characterized by a lack of government oversight, where threat actors use individuals as vectors to access intelligence. To counter this, the team proposed efforts to disrupt hostile activities by monitoring the capabilities of threat actors and taking counteroffensive measures. To mitigate the impact of what has already been done, they suggested updating US surveillance and cyber defense infrastructure and institutionalizing cyber hygiene.

Cybersecurity experts acknowledge that human error is the most common culprit behind a leak or compromised system, and thus most of these responses focus on best practices that minimize the possibility of an accidental leak. These include personnel training, restricting outside tech, regular malware checks, and keeping lab computers on location. Two of the teams also recommend government limitations on the sale of user information, establishing classifications of biotech data, and compartmentalizing personnel access. Taken in concert, these policy recommendations would bolster the security of a facility like Starfruit Labs and represent the kinds of considerations necessary for biomedical research.


Takeaways

President Biden has on multiple occasions predicted that we will see more change in the next decade than we have in the last thirty to forty years.[ii] If the COVID pandemic and the Russian invasion of Ukraine are any indicators, he’s probably right. Threatcasting emerged from the desire not just to react to changes like the ones Biden predicts, but to get out ahead of them and proactively anticipate the terrain of the future. Following this methodology, the Threatcasting team at the Global Disinformation Lab worked with the Army Cyber Institute to envision probable future threats to the stability of the US armed forces posed by the rapidly changing field of AI and intelligence operations. Our recommendations ranged from upgrading data security systems to the implementation of new forms of strategic government data oversight. The teams also identified the security of the individual worker as a key vulnerability in the coming era of AI-driven technology and intelligence competition.

No one can predict the future. And, to be clear, that is not what Threatcasting is about. Threatcasting is a way to identify a range of threats one might not have known about, and to produce a framework for handling those threats. It is an exercise in scripting out policy responses to threats that may or may not come to pass, and in figuring out ways to identify what kind of future our technologies and societies are moving towards. Threatcasting is one of the few methodologies that can effectively combine the social sciences, hard sciences, economics, history, and fictional storytelling to create much-needed advances and recommendations for emerging security threats in our ever-changing world. We found this method to be highly valuable. While Threatcasting can’t let us see into the future, it can help us plan for a variety of likely eventualities. And, as Eisenhower famously remarked: plans are worthless, but planning is everything.


[i] https://warontherocks.com/2022/04/new-tech-new-concepts-chinas-plans-for-ai-and-cognitive-warfare/

[ii] https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/01/19/remarks-by-president-biden-in-press-conference-6/
