INFER crowd and expert forecasts shape a talk for government officials on how AI will impact disinformation campaigns

Author: Walter Frick
Published: Jul 19, 2023 03:24PM UTC
An estimated 100 million people tried ChatGPT within two months of its launch. Since then it’s been impossible to keep up with every generative AI app launch—and it’s become easy for anyone with an internet connection to use AI to generate content.

Are we entering an era of even more rampant disinformation—information intentionally designed to mislead? That was the focus of a recent talk supported by INFER as part of the Phoenix Challenge, a seminar series in which senior government officials, think tank researchers, and other policy experts cooperate to address challenges to operations in the information environment. The coordinators partnered with INFER on a set of questions forecasting AI's potential impact on the information ecosystem.

The differences between the aggregate forecasts of the INFER forecasters and those of the Phoenix Challenge experts formed the basis of the disinformation talk. INFER forecasters brought a dose of caution to the exercise: although they foresee significant challenges from AI-driven disinformation, they expect change to come more slowly than the Phoenix participants do.

The Phoenix Challenge, held at the Georgia Tech Research Institute in June, is run by the University of Maryland's Applied Research Laboratory for Intelligence and Security (ARLIS) on behalf of the Office of the Under Secretary of Defense for Policy. Attendees' predictions were collected via a survey in the days prior to the event. Several weeks in advance, the INFER forecasting community, including the Pros (a group selected for their performance track record), also made predictions on INFER, the crowdsourced forecasting platform co-run by ARLIS.

The six questions that participants answered covered everything from digital ID systems to bans on large language models to the use of digital provenance technology for tracking the authenticity of images. For example, participants were asked: “Before 1 June 2024, will Facebook, WhatsApp, Messenger, or Twitter announce that they are labeling posts as potentially written by AI?”

Across all six questions, INFER forecasters estimated a lower likelihood of the change occurring by the given date. They were more skeptical both of AI-driven changes, like the use of a large language model in an influence campaign, and of AI-mitigation efforts, like news publishers adopting digital provenance technology. As a group, they seem to expect AI's effect on disinformation to be more gradual than the Phoenix participants do.

For example, participants were asked about a possible ban on AI models.

Both groups, the INFER forecasters and the experts from the Phoenix seminar, thought that more bans were possible, but the experts gave roughly even odds (54%), whereas the INFER forecasters thought bans were improbable (36%).

Interestingly, the two groups offered similar arguments in support of their forecasts, citing privacy concerns as a reason why more bans might happen and a slow-moving policy process as a reason why they might not. But the differences were telling, too. Phoenix participants listed concern over job loss as a reason that governments might ban OpenAI models, while INFER forecasters didn't emphasize that line of reasoning. These differences in reasoning provide grist for policy analysts, who are preparing insights for policymakers to further explore different scenarios for AI regulation.

Something similar was true for a question on digital provenance technologies.

Here again, the INFER forecasters thought this was improbable (29%), while the experts thought it was somewhat more likely (42%). Both groups emphasized the ease of implementing digital provenance as a reason it might occur and a lack of financial incentives as a reason it might not. But some INFER forecasters noted remaining technical challenges and competing corporate priorities as reasons the tech platforms might not adopt digital provenance.

By comparing seasoned INFER forecasters to the Phoenix seminar experts, this exercise provides policy analysts with both macro and micro insights into AI's effect on disinformation. At the macro level, the INFER forecasters are more skeptical of rapid change, perhaps reflecting the forecasting insight that major changes are the exception, not the rule. At the micro level, the two groups emphasized similar reasons for and against change, but with several additions that policy analysts can investigate as they prepare policymakers for AI's potential role in fueling disinformation.

To access the INFER report presented at the Phoenix Challenge, download the PDF here or visit our Report Archive.

By: Walter Frick

Walter Frick is the founder of Nonrival, a newsletter that lets readers make predictions about tech, business, and the economy. He is a contributing editor (and former senior editor) at Harvard Business Review, a former executive editor at Quartz, and has written for publications including The Atlantic, the BBC, and MIT Technology Review.
