Big Tech’s Backtrack: How Tech Failures Fueled Election Disinformation
By: Isabel Sunderland and Jamie Neikrie
In August 2024, Issue One launched a new tool, “Big Tech’s Broken Promises,” which tracks the empty proclamations and covert policy changes of the world’s largest technology companies. In the weeks leading up to the U.S. presidential election, Issue One’s Technology Reform team tracked more than 30 new broken promises from Meta, YouTube, X, and TikTok related to election integrity and national security. These additions pulled back the curtain on the hollow commitments, misleading claims, and uneven enforcement that helped undermine the information ecosystem in which the election took place. Featured below is a deep dive into some of the most egregious broken promises.
I. Meta Auto-Generated Hundreds of Pages for Militia Groups
On October 29, 2024, WIRED and the Tech Transparency Project (TTP) uncovered a network of anti-government militia movements using Facebook to recruit, coordinate, prepare for violence, and promote ballot box stakeouts in the days leading up to the presidential election. Between January 6, 2021, and October 29, 2024, 262 public and private Facebook groups and 193 Facebook pages were created for militia and anti-government activists. Most alarming, Meta actively facilitated these militias by auto-generating group pages and pushing them into users’ feeds.
Among the active militia groups on Facebook was the American Patriots Three Percent (AP3), which was banned from the platform in 2020. The company claims that it successfully carried out a “strategic network disruption” of the organization, removing a total of 900 groups, pages, and accounts associated with its members. Meta’s Dangerous Organizations and Individuals policy prohibits “hate organizations or organizations that may intend to coerce civilians or the government.” Still, WIRED and TTP found that many AP3 groups and profiles remained on Facebook, with clear indications (like insignias and photos) that they were tied to the movement. In the summer of 2024, Facebook auto-generated pages for AP3’s Arizona and New Mexico chapters, boosting the organization’s engagement and reach. In another example, members of a public group called “We Fight for Our Lives” openly discussed organizing for violence ahead of the election, posting comments like, “I’m ready to fight,” and “I’ll pull the [expletive] trigger fo sho.”
II. False Claims about Noncitizen Voting Proliferate Across Facebook and X
On October 30, 2024, the Institute for Strategic Dialogue (ISD) revealed the results of a 48-hour investigation into a coordinated network of accounts on X (formerly Twitter) that falsely claimed foreign nationals had voted illegally in the U.S. election. The campaign began on October 22, when one account posted, “I’m going to illegally vote for Donald Trump as a European national,” alongside images of completed ballots and various passports. Other accounts followed suit with nearly identical content, including tweets like, “Patriots landed last night… We’re now en route to ballot locations. #Q,” and, “Democrats had it coming for not enforcing voter ID laws.” X claims to prohibit “verifiably false or misleading information” intended to intimidate or discourage participation in elections. The platform’s policy includes measures to limit the reach of violative content, such as excluding posts from search results, removing them from timelines, restricting discoverability, and downranking them in replies. However, ISD’s investigation found that the most popular post in the noncitizen network had garnered over 11.7 million views. A sample of 50 accounts involved in the campaign collectively amassed more than 14 million views, 160,000 likes, and 10,000 retweets. Furthermore, researchers reported receiving a push notification highlighting the network’s most successful account, even though they had not previously followed it.
These claims fly in the face of strict laws across the country that already prohibit noncitizens from voting, as well as controls in place at both the federal and state levels to ensure that only eligible American citizens register to vote in federal elections. Numerous studies have shown that violations of these laws occur at near-zero rates. The Cato Institute estimates that the percentage of noncitizens who vote is closer to zero than to 1%, and a database maintained by the conservative Heritage Foundation identifies only 85 alleged cases of noncitizen voting between 2002 and 2023.
False narratives about noncitizen voting are hardly restricted to X. In a separate investigation released last month, ISD researchers found that Meta was permitting hundreds of advertisements falsely claiming that foreign nationals were voting in the U.S. election. For example, the Americans for Legal Immigration PAC ran ads on the platform claiming that 42,000 noncitizens were “poised to steal the 2024 elections” in Arizona. Meta’s policies explicitly prohibit ads that “discourage people from voting or call into question the legitimacy of an upcoming or ongoing election.” Still, other organizations, like the Conservative Political Action Conference and TheBlaze, ran similar ads claiming that “ballot harvesting, early voting, mail-in ballots, and counting ballots after election day… have [been] used and are likely to [be] use[d] again to rig the election.” A BBC investigation, also released last month, corroborated ISD’s findings: the news team identified 118 paid-for ads posted on Facebook and Instagram since September 1, 2024, that were shown between 7.8 million and 9 million times across the two platforms. The ads included broad claims of widespread voter registration by noncitizens, as well as deceptive polls asking users whether noncitizens should be allowed to vote in elections, despite that already being illegal.
III. Deceptive Ads about U.S. Presidential Candidates Permitted on Facebook
On October 30, 2024, The Washington Post reported on “Progress 2028” – a conservative group that posed as a pro-Vice President Harris group by presenting false claims about the candidate’s platform. The group placed 13 different ads on Facebook that were designed to look like they supported Harris’ campaign but mischaracterized her policies or touted controversial stances she does not endorse. Meta’s Misinformation policy states that the company’s Terms and Conditions prohibit content that promotes voter interference, threats of violence, and election misinformation.
However, Progress 2028’s ads were allowed to stay up despite falsely claiming that Harris favors ensuring undocumented immigrants can vote and receive Medicare benefits, instituting mandatory gun buybacks, and banning fracking. According to OpenSecrets, Progress 2028 was registered as a fictitious name by Building America’s Future, an organization that reportedly receives over $100 million in funding from Elon Musk and other billionaires. Yael Eisenstat, senior policy fellow at Cybersecurity for Democracy and member of Issue One’s Council for Responsible Social Media, observed that Meta’s ad transparency system identifies the ads only as being funded by “Progress 2028,” with no indication that the group is fictitious, giving users the false impression that the company has vetted and verified the ads as legitimate.
IV. X’s Grok AI Parrots Election Disinformation
On October 26, 2024, NBC News reported that X’s Grok AI fueled the spread of voter fraud conspiracy theories and boosted unfounded claims against Vice President Kamala Harris. X’s newest feature, “stories for you,” uses Grok AI to aggregate trending social media topics and curate a related feed of posts. According to NBC News, the information does not appear to be fact-checked by humans and “seemed to repeat false or unsubstantiated claims as if they were true.” For example, Grok repeatedly promoted debunked allegations about Dominion Voting Systems, accusing the company of “election rigging” and “fraud,” even though Dominion secured a $787.5 million settlement in its defamation suit against Fox News in 2023. The AI tool “accused Dominion of ‘potentially stifling legitimate discussions on election security’ through ‘legal threats.’”
X’s Civic Integrity Policy insists that the platform restricts the reach of “verifiably false or misleading information about the circumstances surrounding a civic process intended to intimidate or dissuade people from participating in an election or other civic process.” Still, days after repeating the debunked Dominion theory, Grok claimed that election workers in Maricopa County, Arizona, were “corrupt” because of the speed at which they count ballots, and promoted the false claim that voting machines in Tarrant County, Texas, were “flipping” votes. Grok also repeated baseless allegations that Vice President Harris used cocaine in the White House and attended parties hosted by Sean “Diddy” Combs. The posts aggregated by Grok have garnered millions of views.
V. Foreign Interference Thrives Across All Social Media Platforms
TikTok, Meta, X, and YouTube have each made public commitments to stop foreign malign influence operations, label accounts that are funded or supported by government entities, reduce the spread of inauthentic content, and dismantle coordinated influence operations. Despite these promises, foreign actors continued to influence the 2024 election, empowered by widespread layoffs and policy rollbacks within the tech industry.
On September 4, 2024, the Department of Justice indicted two Russian nationals for their roles in an influence operation orchestrated by the state broadcaster Russia Today (RT). According to the indictment, the defendants covertly funneled nearly $10 million to an unnamed Tennessee-based online content company, co-opting it to push pro-Russia propaganda to U.S. audiences across social media platforms. Two RT employees managed the operation from Moscow, using fake personas and shell organizations to finance it. Over the past year, the U.S. company published nearly 2,000 English-language videos, garnering more than 16 million views across multiple social media channels. The indictment stated that the social media content was “often consistent with the Government of Russia’s interest in amplifying U.S. domestic divisions in order to weaken U.S. opposition to core Government of Russia interests, such as its ongoing war in Ukraine.” It also alleged that, after the March 22, 2024, terrorist attack on a music venue in Moscow, one of the defendants asked one of the company’s founders to blame Ukraine and the United States for the attack.
In similar fashion, the Counter Disinformation Network (CDN) identified over 1,300 pro-Russian posts published between June 4 and June 28, 2024, resembling the Russian Doppelganger campaign, a targeted disinformation operation that imitated legitimate media sources to spread pro-Russian disinformation on Meta and X. The posts relied on a few key themes in different languages: criticizing government support for Ukraine, exploiting divisive domestic issues like inflation, supporting far-right political parties, and undermining Western alliances like NATO and the EU. On X, Doppelganger posts received a total of 4.6 million views, an average of 2,025 views per post, despite the company’s public commitments to stop exactly this kind of foreign malign influence operation. Six weeks after the operation was reported, only one of the Doppelganger accounts uncovered by CDN had been removed from the platform.
Doppelganger content also extended to advertisements on social media, in violation of platform policies requiring the labeling of state-backed political advertisements and prohibiting ads containing disinformation. On Meta platforms alone, CDN found at least 98 ads that boosted Doppelganger-related pro-Russian content. According to research from AI Forensics, 65% of Doppelganger ads connected to political and social issues were circulating unlabeled on Facebook in more than 16 European Union countries. Less than 20% of the paid and boosted pro-Russian propaganda ads were taken down by Meta, and only after they had been shown to users at least 2.6 million times.
VI. Major Platforms Fail to Extend Content Moderation to Livestreams
On October 18, 2024, ISD published a report finding that Facebook, X, YouTube, and TikTok consistently failed to enforce their community guidelines regarding election integrity on livestream content. ISD analyzed 26 pieces of livestream content suspected of violating community guidelines. Of the 26 videos studied, 15 livestreams included likely violations of election and civic integrity policies, and eight featured prominent media and political figures making disproven claims about fraud and rigged votes in the 2020 election.
Notably, TikTok livestreams included troves of false, misleading, or unfounded claims about election integrity, among them numerous comments and claims targeting Vice President Harris that asserted the election would be rigged in her favor. The company’s Civic and Election Integrity policy claims that the platform prohibits misinformation that may “disrupt the peaceful transfer of power or lead to off-platform violence.” Nevertheless, in one livestream, a speaker claimed Vice President Harris “is promoting families to not exist anymore”; a comment in a separate livestream said that the “[Democratic Party] are setting things up so that after the election, they’re going to use mechanisms to rob us of our sovereignty.”
Furthermore, ISD found instances of hate speech in nine of the 26 studied livestreams (across Facebook, X, YouTube, and TikTok), the majority of which (six livestreams) were found on X. The bulk of the hate speech identified targeted Jewish people and communities. Under its Violent Content policy, X does not allow violent threats, threats of damage to “infrastructure that is essential to daily, civic, religious, or business activities,” or “wishing, hoping or expressing desire for harm” against other groups. Likewise, the company states that it is “committed to combating hatred, prejudice, and intolerance – particularly when they are directed at marginalized and persecuted groups.” For example, in one livestream discussion, a speaker said, “The Jews are the ones controlling the levers of people in the institutions.”
No Lessons Learned
In the lead-up to the 2024 election and during the voting period, tech companies had the opportunity to uphold their promises to users and maintain the integrity of election-related information on their platforms. Instead, they gutted their trust and safety teams, rolled back key accountability policies, and cut off researcher access to platform data. Our update to the Broken Promises tracker shows that those failures continued through Election Day, exposing voters to manipulation and undermining confidence in key democratic processes.
Isabel Sunderland is an intern on Issue One’s Technology Reform team.