Responding to Misinformation Through Resilient Civic Infrastructure
In a 2018 issue of the journal Nature, anthropologist Heidi Larson marked the one hundredth anniversary of the Spanish Flu. That pandemic took the lives of three percent of the global population, but after a century of advances in medicine, the virus that most worried Larson was misinformation. “If a strain as deadly as the 1918 influenza emerges,” she wrote, “a debilitating and fatal disease will spread” due to widespread vaccine hesitancy.1
When COVID-19 arrived, Larson’s fears were realized. One study found that a quarter of the most rapidly circulating messages on Twitter during the pandemic were misinformation that undermined established scientific information.2 A subsequent study found that exposure to a single instance of such misinformation reduced people’s willingness to get vaccinated by six percent in both the U.S. and the U.K.3
Understanding and Debunking Misinformation
Larson was far from alone in warning the world about the proliferation of misinformation. The problem became so acute in the U.S. that the RAND Corporation launched a project devoted to understanding and combating what it calls “truth decay.”4 Established as a nonprofit think tank to provide reliable information to government officials, RAND found that a steady decline in evidence-based policymaking flowed from the confluence of biased reasoning, proliferating misinformation, rejection of formerly trusted sources, and sociopolitical polarization. The net result is an alienated public suspicious of policymakers’ intentions and a government beset by uncivil discourse and mired in internal paralysis.
This problem has received so much scholarly attention that we now have meta-analytic summaries, which reveal statistical patterns across many individual studies. One such analysis of sixty-five previous studies offers hope.5 Communicating a “corrective message” can dislodge a false belief roughly twelve percent of the time. The effect was strongest when the correction was framed as an apolitical rebuttal, particularly as an appeal to clear thinking that makes misinformation appear convoluted or paranoid.
A separate meta-analysis focused on debunking strategies found that such efforts were no less effective than the original misinformation messages.6 Sifting through the details, the authors suggested that debunking works best when it comes swiftly, refrains from rehashing the original claim, and appeals to the public’s capacity for critical thinking and natural skepticism. By contrast, earlier meta-analyses on “forewarning” or “inoculating” against forthcoming misinformation showed limited efficacy and a risk of backfiring.7
Civic Infrastructure Design to Counter Misinformation
These meta-analyses suggest that local communities can address misinformation through prompt and forceful rebuttals. Doing this at scale, however, requires more than just a savvy social media team at City Hall. The best approach begins by reconceptualizing misinformation not as a media strategy puzzle but as a civic infrastructure problem.
Misinformation often spreads haphazardly from faceless user accounts through loose social networks.8 An effective response, by contrast, can come through a blend of curated information, structured public deliberation, and a shared interpretation of reality. Lest that sound too abstract, consider a solution that I have studied for more than a decade.
To improve voters’ policy knowledge and collective judgment, the Oregon legislature expanded its civic infrastructure in 2009 by establishing a Citizens’ Initiative Review (CIR). Designed in response to statewide ballot initiatives, each CIR brought together small representative groups of registered Oregon voters to meet with proponents and opponents and deliberate. Each CIR wrote a one-page analysis that the Secretary of State published in the official Oregon State Voters’ Pamphlet.9 Our research found that CIR Statements made voters more knowledgeable, often correcting mistaken beliefs voters had already adopted. Perhaps because the information came from fellow voters rather than a political party or interest group, it cut through people’s natural skepticism.10
The CIR had a special electoral purpose, but the principles underlying it provide guidance for building a broader debunking platform:
- Create a deliberative public space where the community can examine evidence together, question sources, and reach shared interpretations or thoughtful disagreements.
- Summarize findings in language people can understand by using community members as the authors.
- Provide a trusted conduit for sharing reports widely to reach even those who eschew conventional media.
- Use a democratic internal design to sustain neutrality and credibility.
- Obtain impartial public sponsorship to avoid dependency on philanthropy.11
Robust Digital Deployment
One of the drawbacks of the CIR model and similar designs is the cost. Even on the cheap, it takes tens of thousands of dollars to deploy a “deliberative mini-public”—a paid random sample of citizens that deliberates to provide policy recommendations.12
Prompt responses to misinformation surges require a process that can deploy quickly and repeatedly throughout the year. Realistically, this means putting in place a less expensive digital infrastructure whereby a community can recognize, study, and respond to misinformation as it arises. No working model of such a system exists today, though promising digital efforts have sprung up in Taiwan, Spain, and across the globe.13 A few for-profit platforms have shown endurance, such as GoVocal (formerly CitizenLab), but none has a proven track record of tackling misinformation in real time.
Drawing on bits and pieces of these innovations, I close by sketching such a platform. Following the five design principles outlined above, a digital civic infrastructure designed to counter false and misleading claims might work like this:
A city or county government establishes and funds an online “Community Information Hub,” with IT staff support, a robust code base, privacy protections, and other essential digital infrastructure. A multi-stakeholder oversight board (including randomly selected residents, subject-matter experts, and community organizations) guides the platform’s ongoing development, rules for moderation, topic selection, and deliberative standards through a transparent, publicly documented process.
The infrastructure includes a moderated online deliberation space (e.g., structured forums or deliberation software) where residents can review curated evidence, question sources, and discuss claims in real time or asynchronously, with facilitation protocols to ensure balanced participation and respectful exchanges. Creative flourishes can fill out the details of this deliberative process, with a level of complexity appropriate to the situation and features designed to make the process engaging—or even fun.14
After each inquiry, the platform produces plain-language digital briefs, including meme-worthy infographics and short videos, co-authored by community participants with staff assistance, as needed. To reach even relatively low-trust and low-engagement audiences, these outputs circulate through multiple distribution channels, including SMS alerts, social media, email newsletters, community group partnerships, and local media, as well as official voter guides during elections.
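To make this workflow concrete, the pipeline just described can be sketched as a minimal data model: a flagged claim moves through community deliberation, yields a co-authored brief, and is then pushed out through multiple channels. This is only an illustration under my own assumptions; all names (`Claim`, `Brief`, `distribute`) are hypothetical, since the essay specifies no particular implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Claim:
    """A circulating claim flagged for community review (hypothetical model)."""
    text: str
    source: str
    flagged_at: datetime

@dataclass
class Brief:
    """A plain-language digital brief produced after community deliberation."""
    claim: Claim
    summary: str
    authors: List[str]                              # community co-authors
    channels: List[str] = field(default_factory=list)  # where it has been shared

def distribute(brief: Brief, channels: List[str]) -> Brief:
    """Record the distribution channels a finished brief is pushed to."""
    brief.channels.extend(channels)
    return brief

# Example: one claim moves through the sketched pipeline.
claim = Claim("Claim circulating on social media", "anonymous post",
              datetime(2024, 5, 1))
brief = Brief(claim, "Panel reviewed the evidence; the claim is unsupported.",
              authors=["resident panel"])
distribute(brief, ["sms", "newsletter", "local media"])
```

The point of the sketch is the separation of concerns the essay calls for: deliberation produces the brief, while distribution is a distinct, pluggable step that can target SMS, newsletters, community partners, or official voter guides.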
The bottom line is that misinformation can be countered effectively while following sound democratic principles. In the years ahead, I hope readers join me in exploring real possibilities for such digital innovation. Democracy needs such inventions to ride out the storms it endures in the present day and to become smarter and more resilient than ever before.15
John Gastil is Distinguished Professor in Communication Arts and Sciences, Public Policy, and Political Science and Senior Scholar at the McCourtney Institute for Democracy at Penn State University. This essay touches on ideas developed more fully in his forthcoming book, Rewiring the Democracy Machine: A Call for Changing How We Govern Ourselves Online, to be published open-access by Temple University Press in 2027.
