More stories

  • Kanye West’s Instagram and Twitter accounts locked over antisemitic posts

    The rapper has also drawn heavy criticism for donning a ‘white lives matter’ T-shirt during Paris fashion week.

    Kanye West has now had both his Instagram and Twitter accounts locked after antisemitic posts over the weekend.

    Twitter locked his account on Sunday after it removed one of West’s tweets saying he was going “death con 3 On JEWISH PEOPLE” because it violated the service’s policies against hate speech.

    “I’m a bit sleepy tonight but when I wake up I’m going death con 3 On JEWISH PEOPLE The funny thing is I actually can’t be Anti Semitic because black people are actually Jew also You guys have toyed with me and tried to black ball anyone whoever opposes your agenda,” he tweeted on Saturday in a series of messages. The tweet has since been removed and West’s account locked.

    “The account in question has been locked due to a violation of Twitter’s policies,” a spokesperson for the platform told BuzzFeed News.

    The social media company Meta also restricted West’s Instagram account after the rapper made an antisemitic post on Friday in which he appeared to suggest the rapper Diddy was controlled by Jewish people, an antisemitic trope, NBC News reported.

    The controversial rapper, who legally changed his name to Ye, recently drew heavy criticism for donning a “white lives matter” T-shirt during Paris fashion week. He also dressed models in the shirt bearing the phrase, which the Anti-Defamation League considers a “hate slogan”.

    The league, which monitors violent extremists, notes on its website that white supremacist groups have promoted the phrase.

    West told the Fox News host Tucker Carlson he thought the shirt was “funny” and “the obvious thing to do”.

    “I said, ‘I thought the shirt was a funny shirt; I thought the idea of me wearing it was funny,’” he told Carlson. “And I said, ‘Dad, why did you think it was funny?’ He said, ‘Just a Black man stating the obvious.’”

    During the same interview, West told Carlson that Jared Kushner, the Jewish son-in-law of former president Donald Trump, negotiated Middle East peace deals “to make money”.

    West was diagnosed with bipolar disorder several years ago and has spoken publicly about his mental health challenges.

  • Nick Clegg to decide on Trump’s 2023 return to Instagram and Facebook

    Meta’s president of global affairs said it would be a decision ‘I oversee’ after the ex-president’s accounts were suspended in 2021.

    Nick Clegg, Meta’s president of global affairs, is charged with deciding whether Donald Trump will be allowed to return to Facebook and Instagram in 2023, Clegg said on Thursday.

    Speaking at an event held in Washington by the news organization Semafor, Clegg said the company was seriously debating whether Trump’s accounts should be reinstated, a decision that “I oversee and I drive”.

    Clegg added that while he will be making the final call, he will consult the CEO, Mark Zuckerberg, the Facebook board of directors and outside experts.

    “It’s not a capricious decision,” he said. “We will look at the signals related to real-world harm to make a decision whether at the two-year point – which is early January next year – whether Trump gets reinstated to the platform.”

    The former president was suspended from a number of online platforms, including those owned by Meta, following the 6 January 2021 Capitol riot, during which Trump used his social media accounts to praise and perpetuate the violence.

    While Twitter banned Trump permanently, Meta suspended Trump’s accounts for two years, to be re-evaluated later. In May 2021, the temporary ban was upheld by Facebook’s oversight board – a group of appointed academics and former politicians meant to operate independently of Facebook’s corporate leadership.

    However, the board returned the final decision on Trump’s accounts to Meta, suggesting the company decide within six months whether to make the ban permanent. Clegg said that decision will be made by 7 January 2023.

    Clegg previously served as Britain’s deputy prime minister and joined Facebook as vice-president for global affairs and communications in 2018. In February, he was promoted to the company’s top policy executive role.

    In the years since he joined Meta, Clegg has seen the company through a number of scandals, including scrutiny of its policies during the 2016 US presidential election, Facebook’s role in the persecution of the Rohingya in Myanmar, and the revelations made by the whistleblower Frances Haugen.

  • Instagram CEO testifies before Congress over platform’s impact on kids

    Adam Mosseri defends platform and calls for creation of body to determine best practices to help keep young people safe online.

    The head of Instagram began testimony before US lawmakers on Wednesday afternoon about protecting children online, in the latest congressional hearing scrutinizing the social media platform’s impact on young users.

    Adam Mosseri defended the platform and called for the creation of an industry body to determine best practices for keeping young people safe online. In written testimony before the Senate commerce committee’s consumer protection panel, Mosseri said the industry body should address “how to verify age, how to design age-appropriate experiences, and how to build parental controls”.

    “We all want teens to be safe online,” Mosseri said in opening statements. “The internet isn’t going away, and I believe there’s important work that we can do together – industry and policymakers – to raise the standards across the internet to better serve and protect young people.”

    Instagram and its parent company, Meta Platforms (formerly Facebook), have been facing global criticism over the ways their services affect the mental health, body image and online safety of younger users.

    In opening statements, Senator Richard Blumenthal promised to be “ruthless” in the hearing, saying “the time for self-policing and self-regulation is over”.

    “Self-policing depends on trust, and the trust is gone,” he said. “The magnitude of these problems requires both bold and broad solutions and accountability, which has been lacking so far.”

    In November, a bipartisan coalition of US state attorneys general said it had opened an inquiry into Meta for promoting Instagram to children despite potential harms. And in September, US lawmakers grilled Facebook’s head of safety, Antigone Davis, about the impacts of the company’s products on children.

    The scrutiny follows the release of internal Facebook documents by a former employee turned whistleblower, which revealed the company’s own research showed Instagram negatively affected the mental health of teens, particularly regarding body image issues.

    Ahead of Wednesday’s hearing, Instagram said it would be stricter about the types of content it recommends to teens and would nudge young users toward different areas if they dwell on one topic for a long time.

    In a blogpost published on Tuesday, the social media service announced it was switching off the ability for people to tag or mention teens who do not follow them on the app and would enable teen users to bulk delete their content and previous likes and comments.

    In the blogpost, Mosseri also said Instagram was exploring controls to limit potentially harmful or sensitive material, was working on parental control tools, and was launching in certain countries a “Take a Break” feature, which reminds people to take a brief pause from the app after using it for a certain amount of time.

    The Democratic senator and chair of the panel, Richard Blumenthal, called the company’s product announcement “baby steps”.

    “They are more a PR gambit than real action, done within hours of the CEO testifying, that are more to distract than really solve the problem,” he told Politico.

    The Republican senator Marsha Blackburn criticized the company’s product announcement as “hollow”, saying in a statement: “Meta is attempting to shift attention from their mistakes by rolling out parental guides, use timers and content control features that consumers should have had all along.”

    An Instagram spokeswoman said the company would continue its pause on plans for a version of Instagram for kids, a project it suspended in September amid growing opposition.

  • The whistleblower who plunged Facebook into crisis

    After a set of leaks last month that represented the most damaging insight into Facebook’s inner workings in the company’s history, the former employee behind them has come forward. Now Frances Haugen has given evidence to the US Congress – and been praised by senators as a ‘21st century American hero’. Will her testimony accelerate efforts to bring the social media giant to heel?

    On Monday, Facebook and its subsidiaries Instagram and WhatsApp went dark after a router failure. There were thousands of negative headlines, millions of complaints, and more than 3 billion users were forced offline.

    On Tuesday, the company’s week got significantly worse. Frances Haugen, a former product manager with Facebook, testified before US senators about what she had seen in her two years there – and set out why she had decided to leak a trove of internal documents to the Wall Street Journal. Haugen had revealed herself as the source of the leak a few days earlier. And while the content of the leak – from internal warnings of the harm being done to teenagers by Instagram to the deal Facebook gives celebrities to leave their content unmoderated – had already led to debate about whether the company needed to reform, Haugen’s decision to come forward escalated the pressure on Mark Zuckerberg.

    In this episode, Nosheen Iqbal talks to the Guardian’s global technology editor, Dan Milmo, about what we learned from Haugen’s testimony, and how damaging a week this could be for Facebook. Milmo sets out the challenges facing the company as it seeks to argue that the whistleblower is poorly informed or that her criticism is mistaken. And he reflects on what options politicians and regulators around the world will consider as they look for ways to curb Facebook’s power, and how likely such moves are to succeed.

    After Haugen spoke, Zuckerberg said her claims that the company puts profit over people’s safety were “just not true”. In a blogpost, he added: “The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don’t want their ads next to harmful or angry content.”

    Archive: BBC; YouTube; TikTok; CSPAN; NBC; CBS; CNBC; Vice; CNN

  • Facebook ‘tearing our societies apart’: key excerpts from a whistleblower

    Frances Haugen tells US news show why she decided to reveal inside story about social networking firm.

    Frances Haugen’s interview with the US news programme 60 Minutes contained a litany of damning statements about Facebook. Haugen, a former Facebook employee who had joined the company to help it combat misinformation, told the CBS show the tech firm prioritised profit over safety and was “tearing our societies apart”.

    Haugen will testify in Washington on Tuesday, as political pressure builds on Facebook. Here are some of the key excerpts from Haugen’s interview.

    Choosing profit over the public good

    Haugen’s most cutting words echoed what is becoming a regular refrain from politicians on both sides of the Atlantic: that Facebook puts profit above the wellbeing of its users and the public. “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimise for its own interests, like making more money.”

    She also accused Facebook of endangering public safety by reversing changes to its algorithm once the 2020 presidential election was over, allowing misinformation to spread on the platform again. “And as soon as the election was over, they turned them [the safety systems] back off or they changed the settings back to what they were before, to prioritise growth over safety. And that really feels like a betrayal of democracy to me.”

    Facebook’s approach to safety compared with others

    In a 15-year career as a tech professional, Haugen, 37, has worked for companies including Google and Pinterest, but she said Facebook had the worst approach to restricting harmful content. She said: “I’ve seen a bunch of social networks and it was substantially worse at Facebook than anything I’d seen before.” Referring to Mark Zuckerberg, Facebook’s founder and chief executive, she said: “I have a lot of empathy for Mark. And Mark has never set out to make a hateful platform. But he has allowed choices to be made where the side-effects of those choices are that hateful, polarising content gets more distribution and more reach.”

    Instagram and mental health

    The document leak that had the greatest impact was a series of research slides that showed Facebook’s Instagram app was damaging the mental health and wellbeing of some teenage users, with 30% of teenage girls feeling that it made dissatisfaction with their body worse.

    She said: “And what’s super tragic is Facebook’s own research says, as these young women begin to consume this eating disorder content, they get more and more depressed. And it actually makes them use the app more. And so, they end up in this feedback cycle where they hate their bodies more and more. Facebook’s own research says it is not just that Instagram is dangerous for teenagers, that it harms teenagers, it’s that it is distinctly worse than other forms of social media.”

    Facebook has described the Wall Street Journal’s reporting on the slides as a “mischaracterisation” of its research.

    Why Haugen leaked the documents

    Haugen said “person after person” had attempted to tackle Facebook’s problems but had been ground down. “Imagine you know what’s going on inside of Facebook and you know no one on the outside knows. I knew what my future looked like if I continued to stay inside of Facebook, which is person after person after person has tackled this inside of Facebook and ground themselves to the ground.”

    Having joined the company in 2019, Haugen said she decided to act this year and started copying tens of thousands of documents from Facebook’s internal system, which she believed show that Facebook is not, despite public comments to the contrary, making significant progress in combating online hate and misinformation. “At some point in 2021, I realised, ‘OK, I’m gonna have to do this in a systemic way, and I have to get out enough that no one can question that this is real.’”

    Facebook and violence

    Haugen said the company had contributed to ethnic violence, a reference to Burma. In 2018, following the massacre of Rohingya Muslims by the military, Facebook admitted that its platform had been used to “foment division and incite offline violence” relating to the country. Speaking on 60 Minutes, Haugen said: “When we live in an information environment that is full of angry, hateful, polarising content it erodes our civic trust, it erodes our faith in each other, it erodes our ability to want to care for each other. The version of Facebook that exists today is tearing our societies apart and causing ethnic violence around the world.”

    Facebook and the Washington riot

    The 6 January riot, when crowds of rightwing protesters stormed the Capitol, came after Facebook disbanded the Civic Integrity team of which Haugen was a member. The team, which focused on issues linked to elections around the world, was dispersed to other Facebook units following the US presidential election. “They told us: ‘We’re dissolving Civic Integrity.’ Like, they basically said: ‘Oh good, we made it through the election. There wasn’t riots. We can get rid of Civic Integrity now.’ Fast-forward a couple months, we got the insurrection. And when they got rid of Civic Integrity, it was the moment where I was like, ‘I don’t trust that they’re willing to actually invest what needs to be invested to keep Facebook from being dangerous.’”

    The 2018 algorithm change

    Facebook changed the algorithm on its news feed – Facebook’s central feature, which supplies users with a customised feed of content such as friends’ photos and news stories – to prioritise content that increased user engagement. Haugen said this made divisive content more prominent.

    “One of the consequences of how Facebook is picking out that content today is it is optimising for content that gets engagement, or reaction. But its own research is showing that content that is hateful, that is divisive, that is polarising – it’s easier to inspire people to anger than it is to other emotions.” She added: “Facebook has realised that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money.”

    Haugen said European political parties contacted Facebook to say that the news feed change was forcing them to take more extreme political positions in order to win users’ attention. Describing politicians’ concerns, she said: “You are forcing us to take positions that we don’t like, that we know are bad for society. We know if we don’t take those positions, we won’t win in the marketplace of social media.”

    In a statement to 60 Minutes, Facebook said: “Every day our teams have to balance protecting the right of billions of people to express themselves openly with the need to keep our platform a safe and positive place. We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true. If any research had identified an exact solution to these complex challenges, the tech industry, governments, and society would have solved them a long time ago.”

  • Facebook to suspend Trump’s account for two years

    Facebook is suspending Donald Trump’s account for two years, the company has announced in a highly anticipated decision that follows months of debate over the former president’s future on social media.

    “Given the gravity of the circumstances that led to Mr Trump’s suspension, we believe his actions constituted a severe violation of our rules which merit the highest penalty available under the new enforcement protocols. We are suspending his accounts for two years, effective from the date of the initial suspension on January 7 this year,” Nick Clegg, Facebook’s vice-president of global affairs, said in a statement on Friday.

    At the end of the suspension period, Facebook said, it would work with experts to assess the risk to public safety posed by reinstating Trump’s account. “We will evaluate external factors, including instances of violence, restrictions on peaceful assembly and other markers of civil unrest,” Clegg wrote. “If we determine that there is still a serious risk to public safety, we will extend the restriction for a set period of time and continue to re-evaluate until that risk has receded.”

    He added that once the suspension was lifted, “a strict set of rapidly escalating sanctions” would be triggered if Trump violated Facebook policies.

    Friday’s decision comes just weeks after input from the Facebook oversight board – an independent advisory committee of academics, media figures and former politicians – which recommended in early May that Trump’s account not be reinstated.

    However, the oversight board punted the ultimate decision on Trump’s fate back to Facebook itself, giving the company six months to make the final call. The board said that Facebook’s “indeterminate and standardless penalty of indefinite suspension” for Trump was “not appropriate”, criticism that Clegg wrote the company “absolutely accept[s]”.

    The new policy allows for escalating penalties of suspensions for one month, six months, one year, and two years.

    The former president has been suspended since January, following the deadly Capitol attack that saw a mob of Trump supporters storm Congress in an attempt to overturn the 2020 presidential election. The company suspended Trump’s Facebook and Instagram accounts over posts in which he appeared to praise the actions of the rioters, saying that his actions posed too great a risk for him to remain on the platform.

    Following the Capitol riot, Trump was suspended from several major tech platforms, including Twitter, YouTube and Snapchat. Twitter has since made its ban permanent.

    In a statement, the former president called Facebook’s decision “an insult to the record-setting 75m people, plus many others, who voted for us in the 2020 Rigged Presidential Election”, adding: “They shouldn’t be allowed to get away with this censoring and silencing, and ultimately, we will win.” Trump received fewer than 75m votes in the 2020 election, which he lost. He also hinted at a 2024 run.

    Facebook also announced that it would revoke its policy of treating speech by politicians as inherently newsworthy and exempt from enforcement of its content rules, which ban, among other things, hate speech. The decision marks a major reversal of a set of policies that Clegg and Facebook’s CEO, Mark Zuckerberg, once championed as crucial to democracy and free speech.

    The company first created the newsworthiness exemption to its content rules in 2016, following international outcry over its decision to censor posts including the historic “napalm girl” photograph for violating its ban on nude images of children. The new rule tacitly acknowledged the importance of editorial judgment in Facebook’s censorship decisions.

    In 2019, in a speech at the Atlantic festival in Washington, Clegg revealed that Facebook had decided to treat all speech by politicians as newsworthy, exempting it from content rules. “Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be,” Clegg said at the time.

    Under the new rules, Clegg wrote on Friday, “when we assess content for newsworthiness, we will not treat content posted by politicians any differently from content posted by anyone else”.

    The newsworthiness exemption is by no means the only policy area in which Facebook treats politicians differently from other users. The company also exempts politicians’ speech from its third-party fact-checking and maintains a list of high-profile accounts that are exempted from the AI systems Facebook relies on to enforce many of its rules.

    Facebook did not immediately respond to questions about whether those policies remain in effect.

  • Rightwing ‘super-spreader’: study finds handful of accounts spread bulk of election misinformation

    A handful of rightwing “super-spreaders” on social media were responsible for the bulk of election misinformation in the run-up to the Capitol attack, according to a new study that also sheds light on the staggering reach of falsehoods pushed by Donald Trump.

    A report from the Election Integrity Partnership (EIP), a group that includes Stanford and the University of Washington, analyzed social media platforms including Facebook, Twitter, Instagram, YouTube, and TikTok during several months before and after the 2020 elections.

    It found that “super-spreaders” – responsible for the most frequent and most impactful misinformation campaigns – included Trump and his two elder sons, as well as other members of the Trump administration and the rightwing media.

    The study’s authors and other researchers say the findings underscore the need to disable such accounts to stop the spread of misinformation.

    “If there is a limit to how much content moderators can tackle, have them focus on reducing harm by eliminating the most effective spreaders of misinformation,” said Lisa Fazio, an assistant professor at Vanderbilt University who studies the psychology of fake news but was not involved in the EIP report. “Rather than trying to enforce the rules equally across all users, focus enforcement on the most powerful accounts.”

    The report analyzed social media posts featuring words like “election” and “voting” to track key misinformation narratives related to the 2020 election, including claims of mail carriers throwing away ballots, legitimate ballots strategically not being counted, and other false or unproven stories.

    The report studied how these narratives developed and the effect they had. It found that during this period popular rightwing Twitter accounts “transformed one-off stories, sometimes based on honest voter concerns or genuine misunderstandings, into cohesive narratives of systemic election fraud”.

    Ultimately, the “false claims and narratives coalesced into the meta-narrative of a ‘stolen election’, which later propelled the January 6 insurrection”, the report said.

    “The 2020 election demonstrated that actors – both foreign and domestic – remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy,” the authors concluded.

    Next to no factchecking, with Trump as the super-spreader-in-chief

    In monitoring Twitter, the researchers analyzed more than 22 million tweets sent between 15 August and 12 December. The study determined which accounts were most influential by the size and speed with which they spread misinformation.

    “Influential accounts on the political right rarely engaged in factchecking behavior, and were responsible for the most widely spread incidents of false or misleading information in our dataset,” the report said.

    Out of the 21 top offenders, 15 were verified Twitter accounts – which are particularly dangerous when it comes to election misinformation, the study said. The “repeat spreaders” responsible for the most widely spread misinformation included Eric Trump, Donald Trump, Donald Trump Jr and influencers like James O’Keefe, Tim Pool, Elijah Riot and Sidney Powell. All 21 of the top accounts for misinformation leaned rightwing, the study showed.

    “Top-down mis- and disinformation is dangerous because of the speed at which it can spread,” the report said. “If a social media influencer with millions of followers shares a narrative, it can garner hundreds of thousands of engagements and shares before a social media platform or factchecker has time to review its content.”

    On nearly all the platforms analyzed in the study – including Facebook, Twitter, and YouTube – Donald Trump played a massive role.

    The report pinpointed 21 incidents in which a tweet from Trump’s official @realDonaldTrump account jumpstarted the spread of a false narrative across Twitter. For example, Trump’s tweets baselessly claiming that the voting equipment manufacturer Dominion Voting Systems was responsible for election fraud played a large role in amplifying the conspiracy theory to a wider audience. False or baseless tweets sent by Trump’s account – which had 88.9m followers at the time – garnered more than 460,000 retweets.

    Meanwhile, Trump’s YouTube channel was linked to six distinct waves of misinformation that, combined, were the most viewed of any repeat-spreader’s videos. His Facebook account had the most engagement of all those studied.

    The Election Integrity Partnership study is not the first to show the massive influence Trump’s social media accounts have had on the spread of misinformation. In one year – between 1 January 2020 and 6 January 2021 – Trump pushed disinformation in more than 1,400 Facebook posts, according to a report from Media Matters for America released in February. Trump was ultimately suspended from the platform in January, and Facebook is debating whether he will ever be allowed back.

    Specifically, 516 of his posts contained disinformation about Covid-19, 368 contained election disinformation, and 683 contained harmful rhetoric attacking his political enemies. Allegations of election fraud earned over 149.4 million interactions, or an average of 412,000 interactions per post, and accounted for 16% of interactions on his posts in 2020.

    Trump had a unique ability to amplify news stories that would otherwise have remained contained in smaller outlets and subgroups, said Matt Gertz of Media Matters for America.

    “What Trump did was take misinformation from the rightwing ecosystem and turn it into a mainstream news event that affected everyone,” he said. “He was able to take these absurd lies and conspiracy theories and turn them into national news. And if you do that, and inflame people often enough, you will end up with what we saw on January 6.”

    Effects of false election narratives on voters

    “Super-spreader” accounts were ultimately very successful in undermining voters’ trust in the democratic system, the report found. Citing a poll by the Pew Research Center, the study said that, of the 54% of people who voted in person, approximately half had cited concerns about voting by mail, and only 30% of respondents were “very confident” that absentee or mail-in ballots had been counted as intended.

    The report outlined a number of recommendations, including removing “super-spreader” accounts entirely.

    Outside experts agree that tech companies should more closely scrutinize top accounts and repeat offenders.

    Researchers said the refusal to take action or establish clear rules for when action should be taken helped fuel the prevalence of misinformation. For example, only YouTube had a publicly stated “three-strike” system for offenses related to the election. Platforms like Facebook reportedly had three-strike rules as well but did not make them publicly known.

    Only four of the top 20 Twitter accounts cited as top spreaders were actually removed, the study showed – including Donald Trump’s in January. Twitter has maintained that its ban of the former president is permanent. YouTube’s chief executive stated this week that Trump would be reinstated on the platform once the “risk of violence” from his posts passes. Facebook’s independent oversight board is now considering whether to allow Trump to return.

    “We have seen that he uses his accounts as a way to weaponize disinformation. It has already led to riots at the US Capitol; I don’t know why you would give him the opportunity to do that again,” Gertz said. “It would be a huge mistake to allow Trump to return.”

  • ‘Four years of propaganda’: Trump social media bans come too late, experts say

    In the 24 hours since the US Capitol in Washington was seized by a Trump-supporting mob disputing the results of the 2020 election, American social media companies have barred the president from their platforms for spreading falsehoods and inciting the crowd.

    Facebook, Snapchat and Twitch suspended Donald Trump indefinitely. Twitter locked his account temporarily. Multiple platforms removed his messages.

    Those actions, coming just days before the end of Trump’s presidency, are too little, too late, according to misinformation experts and civil rights experts who have long warned about the rise of misinformation and violent rightwing rhetoric on social media sites and Trump’s role in fueling it.

    “This was exactly what we expected,” said Brian Friedberg, a senior researcher at the Harvard Shorenstein Center’s Technology and Social Change Project who studies the rise of movements like QAnon. “It is very consistent with how the coalescing of different factions responsible for what happened yesterday have been operating online, and how platforms’ previous attempts to deal with them have fallen short.”

    Over the past decade, tech platforms have been reluctant to moderate Trump’s posts, even as he repeatedly violated hate speech regulations. Before winning the presidency, Trump used Twitter to amplify his racist campaign asserting, falsely, that Barack Obama was not born in the US. As president, he shared racist videos targeting Muslims on Twitter and posted on Facebook in favor of banning Muslims from entering the US, a clear violation of the platform’s policies against hate speech. In June 2020, he retweeted to his tens of millions of followers a video of one of his supporters shouting “white power!”. He appeared to encourage violence against Black Lives Matter protests in a message shared to multiple platforms that included the phrase “when the looting starts, the shooting starts”.

    Trump’s lies and rhetoric found an eager audience online – one that won’t disappear when his administration ends. Experts warn the platforms will continue to be used to organize and perpetuate violence. They point, for example, to Facebook and YouTube’s failure to curb the proliferation of dangerous conspiracy theory movements like QAnon, a baseless belief that a secret cabal is controlling the government and trafficking children and that Trump is heroically stopping it. Parts of the crowd that stormed the Capitol on Wednesday to bar the certification of Trump’s election defeat donned QAnon-related merchandise, including hats and T-shirts, and the action was discussed weeks in advance on many QAnon-related groups and forums.

    QAnon theories and communities have flourished on Facebook this year. By the time the company banned QAnon-themed groups, pages and accounts in October, hundreds of related pages and groups had amassed more than 3 million followers and members.

    YouTube removed “tens of thousands of QAnon videos and terminated hundreds of channels” around the time of Facebook’s measures. It also updated its policy to target more conspiracy theory videos that promote real-world violence, but it still stopped short of banning QAnon content outright. A spokesman for YouTube noted the company had taken a number of other actions to address QAnon content, including adding information panels sharing facts about QAnon on videos as early as 2018.

    Trump’s leverage of social media to spread propaganda has gone largely unchecked amid a vacuum of laws regulating government speech on social media, said Jennifer M Grygiel, an assistant professor of communication at Syracuse University and an expert on social media.

    Grygiel cited the Smith-Mundt Act of 1948, which regulates the distribution of government propaganda, as an example of one law that limits the government’s communication. But no such regulation exists for the president’s Twitter account, Grygiel said. Instead, we have relied on the assumption that the president would not use his social media account to incite an insurrection.

    “What happened this week is the product of four years of systematic propaganda from the presidency,” Grygiel said.

    In the absence of any meaningful regulation, tech companies have had little incentive to regulate their massively profitable platforms, curb the spread of falsehoods that produce engagement, or moderate the president.

    That’s why experts say things have to change. In 2020, Republicans and Democrats amplified calls to regulate big tech. The events of this week underscore that the reckoning over big tech must include measures aimed at addressing the risks posed by leaders lying and promoting violence on their platforms, some argue.

    “The violence that we witnessed today in our nation’s capital is a direct response to the misinformation, conspiracy theories and hate speech that have been allowed to spread on social media platforms like Facebook, YouTube, Twitter etc,” said Jim Steyer, who runs the non-profit children’s advocacy organization Common Sense Media and helped organize the Stop Hate for Profit campaign (with the ADL and a number of civil rights organizations), which called on advertisers to boycott Facebook over hate speech concerns and cost Facebook millions.

    “Social media platforms must be held accountable for their complicity in the destruction of our democracy,” he added, arguing that in the absence of meaningful enforcement from social media companies, Congress must pass better legislation to address hate speech on these platforms.

    Facebook and Twitter did not respond to requests for comment.

    Grygiel said it was time to move away from the idea that a president should be tweeting at all. Adam Mosseri, the head of Facebook’s subsidiary Instagram, said on Twitter on Thursday evening that Facebook has long said it believes “regulation around harmful content would be a good thing”. He acknowledged that Facebook “cannot tackle harmful content without considering those in power as a potential source”.

    Grygiel said: “We need non-partisan work here. We need legislation that ensures no future president can ever propagandize the American people in this way again.”