This report was coauthored by Dean W. Jackson and Danielle Dougall.
Executive summary
Efforts to protect information integrity during elections are threatened by seismic economic, technological, and political shifts.
- Downsizing in the technology sector has shrunk trust and safety teams, while new platforms and technologies such as generative AI make fighting misinformation more difficult.
- A coordinated political attack on election integrity threatens the capacity and, in some cases, the safety of independent counterdisinformation researchers and advocates, meaning those not affiliated with the platforms or the government.
- This report draws on interviews with 31 people, including current and former employees of technology companies and representatives of independent research and advocacy initiatives, about their experiences responding to electoral disinformation and the growing challenges they face.
The election integrity initiatives examined for this report reflect a variety of approaches.
- Some consider themselves primarily researchers, while others conduct research to inform advocacy, although almost all include an element of rapid response through methods such as sending counter-messages to voters or cooperating with election officials.
- Their interactions with government agencies vary from routine meetings to strict no-communication policies.
- Their relationships with social media platforms also range from formal partnerships to distanced critiques.
For independent initiatives combating election disinformation, partnerships with platforms offer significant benefits but also raise concerns about sustainability and extractive labor.
- These initiatives often bring cultural and linguistic fluency that platform staff lack, and they can detect harmful narratives that might otherwise go unnoticed or unaddressed by platforms.
- Platform staff are also keenly aware that input from external experts helps legitimize decisions about content and integrity.
- But some independent professionals are reluctant to provide “free work to multi-billion dollar corporations,” calling it “extractive” but “unfortunately necessary to reduce harm.”
The 2020 election, tech layoffs, and other recent events have called into question the ability of initiatives countering election disinformation to influence platform content moderation.
- Respondents said the platforms were frustratingly inconsistent and sometimes unresponsive even before widespread layoffs in the tech sector; the situation is worse now.
- Similarly, independent researchers have limited insight into digital threats and trends because most platforms are opaque and provide little access to data crucial to answering key questions.
- Generative artificial intelligence presents potential new risks related to election disinformation. Rather than jumping to conclusions, stakeholders should methodically consider the highest potential hazards and the most appropriate responses.
As prominent politicians and a significant portion of the electorate continue to deny the outcome of the 2020 election, disinformation researchers find themselves under attack.
- Independent researchers increasingly face hostile campaigns from partisan media and legal, digital, and sometimes physical harassment, exemplified by congressional subpoenas of figures in the field.
- The chilling effect alone could drive young professionals away from the sector, make it harder to obtain funding, and deter government officials from engaging in efforts to combat disinformation, especially given the possibility that an ongoing lawsuit, Missouri v. Biden, will result in permanent restrictions on government communications with platforms or researchers.
In this environment, independent initiatives fighting election disinformation are reconsidering their approaches.
- Many initiatives lean more heavily toward other strategies such as counter-messaging, assistance to targeted election officials, and political advocacy.
- At the same time, many trust and safety outcomes online are as bad as or worse than in 2016. The 2024 election will likely take place in the most vulnerable environment for political misinformation that the United States has seen in eight years.
Recommendations
Independent initiatives to combat disinformation should take short- and medium-term measures to weather the storm and mitigate the damage.
- Funders, research institutions, and nonprofits should create shared resources and practices for researchers under attack. These could include legal defense pools, cybersecurity assistance, and proactively developed communications plans to respond to coordinated attacks.
- Efforts to counter election misinformation should shift to year-round risk-reduction strategies, such as prebunking, training for election officials, and advocacy. False narratives about voter fraud have a persistent impact on voting rights between election cycles, so this work should receive continued support.
- Advocates should focus less on individual pieces of content and more on mitigating the impact of disinformation “super-spreaders.” Relatively few individuals are responsible for a large share of false and viral content, making them a proven force multiplier for disinformation.
In the medium to long term, election integrity initiatives should broaden the scope of their advocacy, relying less on unstable partnerships with platforms and the federal government and engaging more with other stakeholders and with non-digital threats to elections.
- Researchers, donors, and advocates should treat election misinformation as part of a larger institutional problem by supporting reforms to electoral processes and law. Electoral reforms such as ranked-choice voting and changes to primaries can reduce incentives for misinformation.
- Advocates and their donors should increase resources devoted to advocacy directed at select state governments around relevant issues such as election worker safety and researcher access to data.
The government can also take steps to strengthen public confidence in the integrity of elections and in efforts to combat disinformation and, in collaboration with other stakeholders, do more to promote trust and safety online while respecting freedom of expression.
- Government and other institutions should cultivate and make use of the talent of former trust and safety personnel by hiring them and by encouraging the profession to develop norms, standards, and field-development opportunities comparable to those in related areas such as cybersecurity.
- Governments should clarify and be more transparent about their role in responding to electoral disinformation, particularly following the injunction issued in Missouri v. Biden. They could explicitly set limits and transparency requirements regarding the federal government’s communications with social media platforms and independent researchers.
Platforms should improve both their capacity and their processes for protecting elections from digital disinformation.
- Platforms should reinvest in trust and safety teams as soon as possible, with a particular focus on civil rights specialists who can shape content moderation policies and practices.
- Platforms should reaffirm their commitment to policies and practices that combat electoral disinformation and respond to allegations of censorship and bias by adhering to principles such as the Santa Clara Principles on Transparency and Accountability in Content Moderation.
- Platforms should designate consistent points of contact for civil society. The departure of key personnel shows the limits of relying on personal relationships and is a persistent problem for independent researchers.
- Platforms should themselves increase transparency in their communications with government agencies.
- Platforms should expand researchers’ access to platform data – and lawmakers should consider supporting this expansion through laws like the Platform Accountability and Transparency Act. The public deserves to know more about the impact of social media on society.