Efforts to combat disinformation in retreat as voters head to the polls

During the chaos after the 2020 election, tech companies erected unprecedented defenses to prevent misinformation from spreading on their platforms.

Twitter’s Trust and Safety team added fact-checking labels to false claims about the election and blocked some of then-President Donald Trump’s posts about vote fraud from spreading. Facebook peppered election posts with links to its voter information center, which was filled with reliable information about the legitimacy of mail-in ballots and voting in general. Several weeks after the race was called, YouTube began removing videos that made claims of widespread election fraud.

Four years later, all those platforms are in retreat.

Under Elon Musk, Twitter - now X - eliminated most of its content moderation staff, replacing them with a crowdsourced, and flawed, fact-checking experiment. Facebook, now Meta, has scaled down its voter information center, and has decreased the visibility of posts about politics across Facebook and Instagram. And YouTube now allows claims of election fraud on the network.

Facing legal threats and political pressure, programs to combat the spread of disinformation have waned at social media giants; most companies have declined to update their policies to respond to the 2024 election. A once-thriving ecosystem of academic and government programs intended to monitor the spread of hoaxes and foreign interference online also has diminished, opening the door for threats against election workers and viral, unproven claims about voting irregularities.

This new environment has fostered a flood of exaggerated claims about irregularities in the voting process, which researchers say has escalated as Election Day approaches. Some election officials and researchers argue this ecosystem exposes voters to an information free-for-all, warping their perception of the results and potentially contributing to political instability.

“After 2020, the platforms really felt like ‘mission accomplished,’” said Color of Change president Rashad Robinson, whose digital civil rights group has pushed tech companies to adopt tougher rules against voter suppression. “And so [now] when you talk to them, they have moved a lot on believing that they know how to deal with the problem.”

Some experts argue the saturation of online conspiracies could translate into dangerous offline actions. A large quantity of voter-fraud propaganda might prime the public to distrust the outcome of the election, for example, laying the political groundwork for GOP leaders to challenge the results.

Individual pieces of election misinformation could inspire physical or digital attacks against poll workers, election officials or immigrant communities.

“We saw the dry run in 2020 of [election workers] being followed home and attacked, [facing] death threats online, [and] photos of them circulating on Facebook,” said Nora Benavidez, senior counsel of the digital rights group Free Press.

Meta spokesman Corey Chambliss said in a statement that protecting the 2024 U.S. elections remains a top priority for the social media giant and “no tech company does more to protect its platforms - not just during election periods but at all times.”

“We have around 40,000 people globally working on safety and security - more than during the 2020 cycle - and have invested more than $20 billion in teams and technology in this area since 2016,” Chambliss added.

YouTube spokeswoman Audrey Lopez said in a statement that the company will “support elections with a multilayered approach to effectively connect people to high-quality, authoritative news and information.” A spokesperson for X did not respond to a request for comment.

The corporate retreat is fueled by several factors. A conservative legal and political campaign over allegations of censorship has successfully pressured government agencies, tech companies and outside researchers to stop working together to detect election falsehoods. Musk, who has dramatically reduced X’s misinformation programs, has inspired other companies to roll back safeguards against propaganda.

Some platforms aren’t just allowing false claims of election fraud to spread; they are actively soliciting them. Musk’s pro-Trump super PAC, America PAC, last month launched an Election Integrity community page on X that encourages its more than 58,000 members to post examples of potential voter fraud in the 2024 election, creating a database that includes many unsubstantiated allegations.

“What is happening is much, much bigger than someone making a business decision to have less curated news sources and to have less content moderation,” said Eddie Perez, who once ran Twitter’s civic integrity team and is now a board member of the nonprofit OSET Institute. “Musk, in his support for Trump, is actually going to another extreme, which is to use the power of the platform in a proactive way, in favor of very specific antidemocratic viewpoints.”

The deluge of election denialism arrives as the unfounded claim that the 2020 election was rigged against former President Donald Trump has become a mainstream talking point among conservatives. In the months before the vote, these claims have ballooned into a hodgepodge of conspiracy theories.

Since 2021, tech companies have opened the door to politicians contesting the election results. Trump has returned to Meta platforms, YouTube and X after the companies suspended his account in the wake of the Jan. 6 riot at the U.S. Capitol.

Meta began allowing politicians to claim in political ads that the 2020 election was rigged, though fraud claims about the 2024 vote remain barred.

Twitter once banned misleading claims that could undermine the public’s confidence in an election, “including false information about the outcome of the election.” By 2023, a year after Musk took over the platform, that prohibition had disappeared from the company’s civic integrity policy, according to its website.

“They all get to a point where they’re like ‘we can’t do this for every election in the past,’” said Katie Harbath, CEO of the tech consultancy Anchor Change and a former Facebook public policy director. “They might be more willing to take action for 2024 stuff than they are spending a ton of time constantly re-litigating 2020 and other past elections.”

Internet companies also have dramatically shifted how they promote accurate information about the election. Four years ago, Meta ran a voter information center with continuous updates about the election from outside groups, including the Bipartisan Policy Center, a Washington think tank. The voter information center now directs users to static government websites, after Meta lobbyists complained that relying on the think tank could make the company appear biased, according to two people familiar with the matter who spoke on the condition of anonymity to describe private deliberations.

A Twitter curation team, which included some seasoned journalists, pushed election-related articles from news outlets in Spanish and English to a dedicated election page on the platform’s Explore tab. Today, that program no longer exists; the company instead directs users to a government voter registration page.

Both X and Meta have de-emphasized news stories in users’ news feeds, blunting the reach of mainstream journalists who share accurate updates about the election. Meta scrapped a news tab on Facebook promoting credible articles about elections and reduced the visibility of accounts that talk about politics and social issues.

While Meta has said shifting away from news and politics exposes users to less of the vitriolic content they don’t want, experts and activists have argued the move could lower the quality and diversity of information online, particularly for people who don’t actively seek out quality journalism from other sources.

“I think in many ways, the solution for companies in the election context is simply to remove the possibility of accountability,” Benavidez said. “And one way to do that is by depoliticizing feeds.”

Tech companies are also receiving less support from federal agencies this year in fighting disinformation, after the White House was mired in litigation with Republican state attorneys general. Their lawsuit, Murthy v. Missouri, alleged that the Biden administration’s coordination with tech companies to tamp down on election and vaccine falsehoods amounted to censorship. The Supreme Court ultimately rejected the conservatives’ effort in June, but communication between internet platforms and government watchdogs is now more limited.

The Department of Homeland Security has pulled back from direct outreach to companies such as Meta, Google and X after years of holding joint meetings with them to discuss election threats including foreign influence campaigns, according to two people familiar with the matter, who spoke on the condition of anonymity to discuss sensitive matters.

The FBI said in a statement that it was sharing information with social media companies and recently updated its procedures so that the platforms are aware they “are free to decide on their own” whether to take action.

Meanwhile, federal programs that combat foreign disinformation are in jeopardy. The Global Engagement Center, which was founded in 2016 to combat propaganda campaigns that undermine the United States, is expected to shutter in December unless Congress votes to extend its authorization. Sen. Chris Murphy (D-Connecticut) and Sen. John Cornyn (R-Texas) have co-sponsored an amendment to let the program continue, but it faces resistance from House Republicans, who accuse the agency of “mission creep” and say its work could violate the First Amendment.

Secretary of State Antony Blinken “has publicly made it clear that continuing this vital work overseas is a priority,” the State Department said in a statement.

Some disinformation research programs have also folded or shifted strategies to avoid being targeted by probes from House Republicans and conservative activists investigating allegations of digital censorship. Others are simply having trouble performing the research at all after both Twitter and Meta reduced or eliminated access to tools widely used to track viral misinformation on their platforms.

Now, researchers are waiting with trepidation to see how tech companies’ reduced defenses against misinformation and rising political propaganda will affect voters as they head to the polls.

“The world of misinformation and disinformation is much broader than it was in 2020,” said Tim Harper, who leads election work for the Center for Democracy and Technology, a Washington nonprofit that advocates for digital rights and freedom of expression. “How this plays out will be difficult to determine until the election is over.”