The developments underscore the far-reaching impact of a conservative legal campaign against initiatives established to avoid a repeat of the 2016 election, when Russia manipulated social media in an attempt to sow chaos and swing the vote for Donald Trump.
For months, researchers in government and academia have warned that a barrage of lawsuits, congressional demands and online attacks is having a chilling effect on programs intended to combat health and election misinformation. But the shift in communications about foreign meddling signals how ongoing litigation and Republican probes in Congress are unwinding efforts once viewed as critical to protecting U.S. national security interests.
Ben Nimmo, chief of global threat intelligence for Meta, said government officials stopped communicating foreign election interference threats to the company in July.
That month, a federal judge limited the Biden administration’s communications with tech platforms in response to a lawsuit alleging such coordination ran afoul of the First Amendment by encouraging companies to remove falsehoods about covid-19 and the 2020 election. The decision included an explicit exemption allowing the government to continue communicating with the companies about national security threats, including foreign interference in elections. The case, Missouri v. Biden, is now before the U.S. Supreme Court, which has paused lower court restrictions while it reviews the matter.
The shift erodes a partnership that was considered crucial to the integrity of elections around the world — just months before voters head to the polls in Taiwan, the European Union, India and the United States. Ahead of the 2024 U.S. presidential race, foreign actors such as China and Russia have become more aggressive at trying to exacerbate political tensions in the United States, while advanced artificial intelligence allows bad actors to easily create convincing political propaganda.
Sen. Mark R. Warner, the Democratic chair of the Senate Intelligence Committee, said “legal warfare by far-right actors” has led to a dire situation.
“We are seeing a potential scenario where all the major improvements in identifying, threat-sharing, and public exposure of foreign malign influence activity targeting U.S. elections have been systematically undermined,” the senator from Virginia said in a statement.
Social media companies have long communicated with law enforcement about threats of child pornography and terrorism, but they did not discuss the threat of Russian interference during the 2016 campaign. Amid revelations of that interference, the firms began meeting with the FBI and Department of Homeland Security officials responsible for protecting elections from foreign interference to share information about potential threats ahead of the 2018 midterms. Tech companies like Meta, Google and Twitter have also routinely relied on warnings from civil society groups and outside researchers about disinformation threats on their platforms.
“We believe that it’s important that we continue to build on the progress the defender community has made since 2016 and make sure that we work together to keep evolving our defenses against foreign interference,” Nimmo told reporters on a call.
Missouri v. Biden — and a parallel investigation in Congress led by Rep. Jim Jordan (R-Ohio) — has led to broad legal uncertainty about interactions between the federal government and the tech industry. Most of the allegations in the lawsuit focus on ways federal officials allegedly pressured social networks to remove misleading posts about coronavirus vaccines and elections.
But Meta’s announcement suggests that the Biden administration is broadly pulling back from even routine communications with Silicon Valley.
The federal judge’s July 4 ruling prohibited key agencies — including the State Department, the FBI and DHS — from urging companies to remove “protected free speech” from the platforms. However, Trump-appointed Judge Terry A. Doughty appeared to acknowledge concerns the decision could dismantle election integrity initiatives, specifying the restrictions did not apply to warning companies of national security threats or foreign attempts to influence elections. The 5th Circuit Court of Appeals ruling removed some of the restrictions, including communication with the State Department.
“There’s very legitimate questions that I think the Republicans have about the role that government should have, and I absolutely think we should have that conversation,” said Katie Harbath, a former director of public policy at Meta.
Even though the Supreme Court has temporarily lifted the restrictions on communications, the Biden administration appears to be treading cautiously.
“The fact that the government doesn’t have clear guidance creates this instinct to err on the side of caution and just not do anything lest they be seen as doing something problematic,” said Evelyn Douek, an assistant professor at Stanford Law School.
The conservative legal strategy is an evolution in a years-long effort to prevent companies from allegedly suppressing GOP views online. In addition to the litigation, Republicans, led by Jordan, have used their control of the House of Representatives to demand documents and testimony about the tech companies’ interactions with the Biden administration and accuse the White House of illegally colluding with Silicon Valley.
The litigation and political scrutiny have led to broad uncertainty among foreign policy officials about what communications with tech companies are appropriate, according to a former State Department official, who spoke on the condition of anonymity because of legal risks.
“If you start asking those people to second guess every time they need to send an email or pick up the phone to do pretty standard work that we’ve asked them to do on our behalf … it’s going to make the government less functional,” the person said.
The Department of Justice, the FBI and the State Department declined to comment. The White House did not respond to a request for comment.
During an October Senate hearing, Homeland Security Secretary Alejandro Mayorkas and FBI Director Christopher A. Wray said that they had overhauled their communications with the tech industry in the wake of the Missouri v. Biden litigation, following questioning from Sen. Rand Paul (R-Ky.).
“We’re having some interaction with social media companies, but all of those interactions have changed fundamentally in the wake of the court’s ruling,” Wray said.
Wray said the changes were made “out of an abundance of caution” to ensure the agency does not run afoul of any court rulings. Mayorkas said DHS no longer participates in periodic meetings with tech companies and other government agencies, where they previously discussed the “threat environment that the homeland faced.”
University academics and disinformation research groups are also in limbo. Many are seeking cheap legal representation to defend themselves against mounting cases. Some are overhauling programs to track online falsehoods, while others are reevaluating their communication with industry and the public altogether.
“The trust and safety workers are gone. The relationships with external researchers is now gone,” said Anika Collier Navaroli, senior fellow at the Tow Center for Digital Journalism and a former senior Twitter policy official. “And now this third piece of the actual information from the government is gone. … So we’re basically unprotected.”
Meta head of security policy Nathaniel Gleicher said that while the company has resources to detect coordinated attacks on its social networks, the government is often more adept at tracking campaigns that are organized off social media. Before the 2020 U.S. election, Meta dismantled three covert influence operations based in Russia, Mexico and Iran after receiving tips from law enforcement about their off-platform activity, according to Gleicher.
“Our investigators might not know that a campaign is coming until the last minute,” he said. “If they are operating off of our platforms, there are a number of times when a tip from [the] government has enabled us to take action.”
Influence operations from Russia, Iran and China continue to target domestic audiences. Meta said Thursday it dismantled a network of 4,789 Facebook accounts posing as Americans discussing politics in the United States, often criticizing both sides of the political aisle. Some of these accounts appeared to be copying and pasting content from X, the platform formerly known as Twitter, onto Facebook, including posts by elected officials. In some instances, the network amplified posts by X owner Elon Musk.
The threat of such campaigns is likely to grow as the 2024 presidential campaign heats up. Meta warned that if the Russia-Ukraine war or U.S.-China relations become hot-button election issues, it expects foreign influence operations to target those debates as well.
Renée DiResta, a technical research manager at the Stanford Internet Observatory, said the 2022 midterms showed that both political parties are vulnerable to these campaigns.
“These operations are real, they are global, and they target all political parties and positions — this is not a partisan issue,” she said. “In the U.S. 2022 midterms, we saw Iran targeting the progressive left and China targeting both the left and the right to advance state interests.”
Graham Brookie, vice president and senior director of the Atlantic Council’s Digital Forensic Research Lab, said that China-based foreign influence campaigns evolved to spread conspiracy theories or target leaders.
“It’s not getting better,” Brookie said. “The cost of engaging in foreign influence activities, especially in online information environments has not gone up for bad actors.”
Joseph Menn contributed to this report.