The Jan. 6 committee spent months gathering stunning new details on how social media companies failed to address the online extremism and calls for violence that preceded the Capitol riot.
The evidence they collected was written up in a 122-page memo that was circulated among the committee, according to a draft viewed by The Washington Post. But in the end, committee leaders declined to delve into those topics in detail in their final report, reluctant to dig into the roots of domestic extremism taking hold in the Republican Party beyond former president Donald Trump and concerned about the risks of a public battle with powerful tech companies, according to three people familiar with the matter who spoke on the condition of anonymity to discuss the panel’s sensitive deliberations.
Congressional investigators found evidence that tech platforms — especially Twitter — failed to heed their own employees’ warnings about violent rhetoric on their platforms and bent their rules to avoid penalizing conservatives, particularly then-president Trump, out of fear of reprisals. The draft report details how most platforms did not take “dramatic” steps to rein in extremist content until after the attack on the Capitol, despite clear red flags across the internet.
“The sum of this is that alt-tech, fringe, and mainstream platforms were exploited in tandem by right-wing activists to bring American democracy to the brink of ruin,” the staffers wrote in their memo. “These platforms enabled the mobilization of extremists on smaller sites and whipped up conservative grievance on larger, more mainstream ones.”
But little of the evidence supporting those findings surfaced during the public phase of the committee’s probe, including its 845-page report that focused almost exclusively on Trump’s actions that day and in the weeks just before.
That focus on Trump meant the report missed an opportunity to hold social media companies accountable for their actions, or lack thereof, even though the platforms had been the subject of intense scrutiny since Trump’s first presidential campaign in 2016, the people familiar with the matter said.
Rep. Zoe Lofgren, left, a Northern California Democrat, resisted efforts to focus more of the committee’s report on social media companies, interviews indicate. (Shuran Huang/For The Washington Post)
Confronting that evidence would have forced the committee to examine how conservative commentators helped amplify the Trump messaging that ultimately contributed to the Capitol attack, the people said — a course that some committee members considered both politically risky and inviting opposition from some of the world’s most powerful tech companies, two of the people said.
“Given the amount of material they actually ultimately got from the big social media companies, I think it is unfortunate that we didn’t get a better picture of how ‘Stop the Steal’ was organized online, how the materials spread,” said Heidi Beirich, co-founder of the Global Project Against Hate and Extremism nonprofit. “They could have done that for us.”
The final Jan. 6 committee hearing, in 4 minutes
On Dec. 19, the House committee investigating the Jan. 6, 2021, insurrection wrapped proceedings and made criminal referrals for former president Donald Trump. (Video: Blair Guild/The Washington Post)
The Washington Post has previously reported that Rep. Liz Cheney (R-Wyo.), the committee’s vice chair, drove efforts to keep the report focused on Trump. But interviews since the report’s release indicate that Rep. Zoe Lofgren, a Democrat whose Northern California district includes Silicon Valley, also resisted efforts to focus more of the report on social media companies.
Lofgren denied that she opposed including a social media appendix in the report or more detail about what investigators learned in interviews with tech company employees.
“I spent substantial time editing the proposed report so it was directly cited to our evidence, instead of news articles and opinion pieces,” Lofgren said. “In the end, the social media findings were included into other parts of the report and appendixes, a decision made by the Chairman in consultation with the Committee.”
Committee Chairman Bennie G. Thompson (D-Miss.) did not respond to a request for comment. Thompson had previously said that the committee would examine what steps tech companies took to prevent their platforms from “being breeding grounds to radicalizing people to violence.” Rep. Jamie Raskin (D-Md.), who sat in on some of the depositions of tech employees, did not comment.
Understanding the role social media played in the Jan. 6 attack on the Capitol takes on greater significance as tech platforms undo some of the measures they adopted to prevent political misinformation on their platforms. Under new owner Elon Musk, Twitter has laid off most of the team that reviewed tweets for abusive and inaccurate content and restored several prominent accounts that the company banned in the fallout from the Capitol attack, including Trump’s and that of his first national security adviser, Michael Flynn. Facebook, too, is considering allowing Trump back on its platform, a decision expected as early as next week.
“Recent events demonstrate that nothing about America’s stormy political climate or the role of social media within it has fundamentally changed since January 6th,” the staffers’ draft memo warned.
Social media moderation also has become a flash point in the states. Both Texas and Florida passed laws in the wake of Trump’s suspension to restrict what content social media platforms can remove from their sites, while California has enacted legislation requiring companies to disclose their content moderation policies.
But the Jan. 6 committee report offered only a vague recommendation about social media regulation, writing that congressional committees “should continue to evaluate policies of media companies that have had the effect of radicalizing their consumers.”
Did Twitter give Trump a pass?
Some of what investigators uncovered in their interviews with employees of the platforms contradicts Republican claims that tech companies displayed a liberal bias in their moderation decisions — an allegation that has gained new attention recently as Musk has promoted a series of leaked internal communications known as the “Twitter Files.” The transcripts indicate the reverse, with former Twitter employees describing how the company gave Trump special treatment.

Twitter employees, they testified, could not even view the former president’s tweets in one of their key content moderation tools, and they ultimately had to create a Google document to keep track of his tweets as calls grew to suspend his account.
“ … Twitter was terrified of the backlash they would get if they followed their own rules and applied them to Donald Trump,” said one former employee, who testified to the committee under the pseudonym J. Johnson.
The committee staffers who focused on social media and extremism — known within the committee as “Team Purple” — spent more than a year sifting through tens of thousands of documents from multiple companies, interviewing social media company executives and former staffers, and analyzing thousands of posts. They sent a flurry of subpoenas and requests for information to social media companies ranging from Facebook to fringe social networks including Gab and the chat platform Discord.
Yet as the investigation continued, the role of social media took a back seat, despite Chairman Thompson’s earlier assertion that how misinformation spread and what steps social media companies took to prevent it were “two key questions for the Select Committee.”
Committee staffers drafted more subpoenas for social media executives, including former Twitter executive Del Harvey, who was described in testimony as key to Twitter’s decisions regarding Trump and violent rhetoric. But Cheney never signed the subpoenas, two of the people said, and they were never sent. Harvey did not testify. At one point, committee staffers discussed having a public hearing focused on the role of social media during the election, but none was scheduled, the people said.