Congressional Memo Alleges Facebook Knew Its AI Amplified January 6 Rhetoric

On October 5, 2021, Frances Haugen testified before a Senate subcommittee in Washington, D.C., that Facebook had deliberately rolled back the safety measures it had put in place ahead of the 2020 election, and had done so in the weeks preceding the January 6 attack on the Capitol. Haugen had met with lawmakers privately over the summer and had revealed her identity on 60 Minutes two days earlier, so the testimony itself was not a surprise. What made it difficult to dismiss as partisan noise was its specificity, backed by thousands of pages of internal documents she had copied before leaving the company.

The January 6 allegation was, at its core, procedural. Ahead of the November 2020 presidential election, Facebook had deployed what internal documents called “safety systems”: algorithmic modifications designed to reduce the amplification of incendiary content and slow the spread of misinformation. According to Haugen’s testimony and the materials she provided, Facebook turned those systems off, or reverted them to their previous configurations, once the election had been called. The reason, she alleged, was that the safety settings depressed engagement, and engagement drives advertising revenue. The systems were switched back on only after the January 6, 2021 attack on the Capitol. “Facebook changed those safety defaults in the run up to the election because they knew they were dangerous,” Haugen stated. “Because they wanted that growth back after the election, they returned to their original defaults.”
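In engineering terms, what Haugen described amounts to treating the safeguards as configuration rather than architecture: demotions layered onto the ranker behind flags that could be switched off once the election passed. The sketch below is a hypothetical illustration of that shape; the flag names and demotion factors are invented, and only the on/off timeline comes from Haugen’s account.

```python
# Hypothetical sketch: election "safety systems" as toggleable ranking config.
# Flag names and demotion factors are invented for illustration; only the
# timeline (on before the election, reverted after, restored post-January 6)
# reflects Haugen's account.
SAFETY_FLAGS = {
    "demote_deep_reshare_chains": True,       # damp viral amplification
    "limit_civic_group_recommendations": True,
}

def rank_score(base_engagement: float, flags: dict) -> float:
    """Apply safety demotions on top of a raw engagement score."""
    score = base_engagement
    if flags["demote_deep_reshare_chains"]:
        score *= 0.5   # arbitrary illustrative demotion
    if flags["limit_civic_group_recommendations"]:
        score *= 0.8
    return score

print(rank_score(100.0, SAFETY_FLAGS))                        # 40.0: safeguards on
print(rank_score(100.0, dict.fromkeys(SAFETY_FLAGS, False)))  # 100.0: rollback
```

Framed this way, returning “to their original defaults” is a configuration change rather than a redesign, which fits the speed of the rollback Haugen described.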

Important Information

| Field | Details |
| --- | --- |
| Whistleblower | Frances Haugen — former Facebook product manager on the Civic Integrity team; revealed identity October 3, 2021 on CBS 60 Minutes |
| Documents Disclosed | Thousands of pages of internal Facebook research, employee discussions, and management presentations — shared with Congress, the SEC, state prosecutors, and The Wall Street Journal |
| WSJ Investigation | “The Facebook Files” — nine-part investigative series based on Haugen’s documents, published starting September 2021 |
| Congressional Testimony | Haugen testified before the Senate Commerce Subcommittee on Consumer Protection, October 5, 2021 |
| Key January 6 Allegation | Facebook deployed election safety systems before November 2020, then removed or reversed them after Election Day; the systems were restored only after the January 6 Capitol attack |
| Algorithm Concern | Facebook’s engagement-based ranking (“downstream MSI”) prioritized content likely to be shared or provoke reaction, amplifying angry, inflammatory content over moderate material |
| “Stop the Steal” Finding | Internal documents indicated Facebook’s recommendation systems helped “Stop the Steal” spread rapidly despite internal warnings |
| “Harmful Non-Violating” Content | Engineers identified a category of damaging content that technically did not violate rules, and acknowledged no clear strategy to address it |
| SEC Complaints Filed | At least eight complaints alleging Facebook misled investors about its role in misinformation related to the 2020 election and January 6 |
| Meta’s Response | Meta disputed the characterization, saying it invested in safety with 40,000+ employees dedicated to security, and denied prioritizing profit over safety |
| Section 230 Implication | Haugen called for reforming Section 230 to exclude algorithmic decisions, arguing companies have 100% control over their algorithms and should face liability for their design |

In 2019, Facebook implemented a ranking signal known internally as “downstream MSI,” which made content more likely to appear in a user’s News Feed if it was predicted to generate engagement as it traveled through chains of reshares. According to Haugen and the internal records, the practical effect was that content provoking strong emotional reactions traveled farther and faster than content prompting measured, moderate ones; outrage and anger are especially effective drivers of engagement. In the weeks after the election, the “Stop the Steal” movement, which coordinated much of the activity leading up to January 6, spread rapidly through Facebook groups and recommendation systems, despite internal warnings that this was exactly what the platform’s systems were enabling.
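Mechanically, engagement-based ranking of this kind can be pictured as a score in which a post earns credit not just for its own interactions but for interactions happening downstream in its reshare chain. The following is a minimal sketch of that idea, not Facebook’s implementation; the weights, decay factor, and field names are all invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    """Toy post: direct engagement counts plus the posts that reshared it."""
    likes: int
    comments: int
    shares: int
    reshares: List["Post"] = field(default_factory=list)

def msi(post: Post) -> float:
    """Direct 'meaningful social interaction' signal (weights invented)."""
    return post.likes + 5 * post.comments + 10 * post.shares

def downstream_msi(post: Post, decay: float = 0.5) -> float:
    """Score a post by its own MSI plus the decayed MSI of everything
    downstream in its reshare chain, so content that keeps provoking
    reactions as it spreads outranks content that does not."""
    return msi(post) + decay * sum(downstream_msi(p, decay) for p in post.reshares)

# Two posts with identical direct engagement: the one that provokes
# reshares-of-reshares wins the ranking.
measured = Post(likes=200, comments=10, shares=5)
inflammatory = Post(likes=200, comments=10, shares=5, reshares=[
    Post(likes=50, comments=20, shares=15, reshares=[
        Post(likes=30, comments=10, shares=8),
    ]),
])
print(downstream_msi(measured))      # 300.0
print(downstream_msi(inflammatory))  # 300 + 0.5 * (300 + 0.5 * 160) = 490.0
```

Under a score like this, two posts can start with the same audience response, yet the one that triggers further sharing compounds its advantage at every hop, which is the amplification dynamic the internal research described.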

The records also described a category of content that engineers labeled “harmful non-violating”: rhetoric that internal evaluation judged damaging but that did not technically violate Facebook’s stated policies. The documents showed genuine frustration among employees who had recognized the problem and flagged it upward, because there was no clear plan for dealing with it. Facebook’s processes were not built to bridge the gap between what the rules said and what the content actually did to public discourse. In at least eight complaints filed with the Securities and Exchange Commission, Haugen accused Facebook of misleading investors about its role in spreading misinformation and violent extremism in connection with the 2020 election and January 6.
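The category is easier to see as two separate checks: a rule match against written policy and an estimate of likely harm, with “harmful non-violating” being the case where the second fires and the first does not. The sketch below is hypothetical; the phrase lists, scoring, and threshold are invented stand-ins for whatever classifiers Facebook actually used.

```python
# Hypothetical moderation triage: the policy check and the harm estimate
# are independent, so content can score as harmful without breaking a rule.
# Phrase lists, scores, and threshold are invented for illustration.
BANNED_PHRASES = {"explicit threat", "targeted slur"}            # written policy
HARM_CUES = {"the election was stolen", "take back the capitol"}

def violates_policy(text: str) -> bool:
    """Rule check: does the text match something a written policy bans?"""
    t = text.lower()
    return any(phrase in t for phrase in BANNED_PHRASES)

def estimate_harm(text: str) -> float:
    """Stand-in for a model scoring likely real-world harm, 0.0 to 1.0."""
    t = text.lower()
    return min(1.0, 0.45 * sum(cue in t for cue in HARM_CUES))

def triage(text: str, harm_threshold: float = 0.8) -> str:
    if violates_policy(text):
        return "remove"                  # the rulebook covers it
    if estimate_harm(text) >= harm_threshold:
        # The gap the documents described: evaluated as damaging,
        # but no enforceable rule and no clear plan to act on it.
        return "harmful non-violating"
    return "leave up"

print(triage("The election was stolen. Take back the Capitol!"))
# -> harmful non-violating: high estimated harm, zero policy matches
```

The structural point is that enforcement keyed only to the first check has no action available when the second check is the one raising the alarm, which is the gap the employees were flagging.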

Facebook’s response, at the time and in the years since, has been consistent: social media was not the main cause of political polarization in the United States, the company did not deliberately put profits ahead of safety, and it invested heavily in safety infrastructure, with more than 40,000 employees working on security and integrity. The company also took direct aim at Haugen herself, with executive Monika Bickert calling the documents she had taken “stolen” and suggesting she might have broken the law. Nothing came of that in court, and Haugen was protected under federal whistleblower law.

The congressional memo mattered not merely because an employee had voiced concerns (big companies have internal dissenters), but because the documents appeared to show that leadership knew what the platform’s systems were doing and decided not to change them, a decision tied at least in part to engagement and revenue considerations. As Haugen put it: “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people.” She compared the situation to the tobacco industry, which for decades acknowledged internally that its products were harmful while denying it publicly.

It is hard to ignore how much of this remains unresolved. In January 2025, Mark Zuckerberg announced the end of Meta’s third-party fact-checking program, citing concerns about censorship. Haugen’s testimony generated legislative interest, but that interest never fully materialized into laws governing algorithmic amplification. She explicitly urged Congress to amend Section 230 so that algorithmic decisions would carry liability; it remains essentially unchanged. The documents detailing the company’s knowledge of its systems before January 6 are now part of the historical record. Whether they will figure in any legal reckoning is still an open question.
