More stories


    Meta lifts restrictions on Trump’s Facebook and Instagram accounts

    Meta has removed previous restrictions on the Facebook and Instagram accounts of Donald Trump as the 2024 election nears, the company announced on Friday.

    Trump was allowed to return to the social networks in 2023 with “guardrails” in place, after being banned over his online behavior during the 6 January insurrection. Those guardrails have now been removed.

    “In assessing our responsibility to allow political expression, we believe that the American people should be able to hear from the nominees for president on the same basis,” Meta said in a blogpost, citing the Republican national convention, slated for next week, which will formalize Trump as the party’s candidate.

    As a result, Meta said, Trump’s accounts will no longer be subject to heightened suspension penalties, which Meta said were created in response to “extreme and extraordinary circumstances” and “have not had to be deployed”.

    “All US presidential candidates remain subject to the same community standards as all Facebook and Instagram users, including those policies designed to prevent hate speech and incitement to violence,” the company’s blogpost reads.

    Since his return to Meta’s social networks, Trump has primarily shared campaign information, attacks on the Democratic candidate, Joe Biden, and memes on his accounts.

    Critics of Trump and online safety advocates have expressed concern that Trump’s return could lead to a rise in misinformation and incitement of violence, as was seen during the Capitol riot that prompted his initial ban.

    The Biden campaign condemned Meta’s decision in a statement on Friday, calling it a “greedy, reckless decision” that constitutes “a direct attack on our safety and our democracy”.

    “Restoring his access is like handing your car keys to someone you know will drive your car into a crowd and off a cliff,” said campaign spokesperson Charles Kretchmer Lutvak.
    “It is holding a megaphone for a bona fide racist who will shout his hate and white supremacy from the rooftops and try to take it mainstream.”

    In addition to Meta platforms, other major social media firms banned Trump due to his online activity surrounding the 6 January attack, including Twitter (now X), Snapchat and YouTube.

    The former president was allowed back on X last year by the decision of Elon Musk, who bought the company in 2022, though Trump has not yet tweeted. He returned to YouTube in March 2023 and remains banned from Snapchat.

    Trump founded his own social network, Truth Social, in early 2022.


    Battle lines drawn as US states take on big tech with online child safety bills

    On 6 April, Maryland became the first state in the US to pass a “Kids Code” bill, which aims to stop tech companies from engaging in predatory data collection from children and from using design features that could cause them harm. Vermont’s legislature held its final hearing before a full vote on its Kids Code bill on 11 April.

    The measures are the latest in a salvo of proposed policies that, in the absence of federal rules, have made state capitols a major battlefield in the war between parents and child advocates, who lament that there are too few protections for minors online, and Silicon Valley tech companies, which protest that the recommended restrictions would hobble both business and free speech.

    Known as Age-Appropriate Design Code or Kids Code bills, these measures call for special data safeguards for underage users online as well as blanket prohibitions on children under certain ages using social media. Maryland’s measure passed with unanimous votes in its house and senate.

    In all, nine states across the country – among them Maryland, Vermont, Minnesota, Hawaii, Illinois, New Mexico, South Carolina and Nevada – have introduced and are now hashing out bills aimed at improving online child safety. Minnesota’s bill passed the house committee in February.

    Lawmakers in multiple states have accused lobbyists for tech firms of deception during public hearings. Tech companies have also spent a quarter of a million dollars lobbying against the Maryland bill, to no avail.

    Carl Szabo, vice-president and general counsel of the tech trade association NetChoice, spoke against the Maryland bill at a state senate finance committee meeting in mid-2023 as a “lifelong Maryland resident, parent, [spouse] of a child therapist”.

    Later in the hearing, a Maryland state senator asked: “Who are you, sir? … I don’t believe it was revealed at the introduction of your commentary that you work for NetChoice. All I heard was that you were here testifying as a dad.
    I didn’t hear you had a direct tie as an employee and representative of big tech.”

    For the past two years, technology giants have been directly lobbying in some states looking to pass online safety bills. In Maryland alone, tech giants racked up more than $243,000 in lobbying fees in 2023, the year the bill was introduced. Google spent $93,076, Amazon $88,886 and Apple $133,449 last year, according to state disclosure forms.

    Amazon, Apple, Google and Meta hired in-state lobbyists in Minnesota and sent employees to lobby directly in 2023. In 2022, the four companies also spent a combined $384,000 on lobbying in Minnesota, the highest total up to that point, according to the Minnesota campaign finance and public disclosure board.

    The bills require tech companies to undergo a series of steps aimed at safeguarding children’s experiences on their websites and assessing their “data protection impact”. Companies must configure all default privacy settings provided to children by online products to offer a high level of privacy, “unless the covered entity can demonstrate a compelling reason that a different setting is in the best interests of children”. Another requirement is to provide privacy information and terms of service in clear, understandable language for children, and to provide responsive tools to help children or their parents or guardians exercise their privacy rights and report concerns.

    The legislation leaves it to tech companies to determine whether users are underage but does not require verification by documents such as a driver’s license.
    Determining age could come from data profiles companies have on a user, or from self-declaration, where users must enter their birth date, known as “age-gating”.

    Critics argue the process of tech companies guessing a child’s age may lead to privacy invasions.

    “Generally, this is how it will work: to determine whether a user in a state is under a specific age and whether the adult verifying a minor over that designated age is truly that child’s parent or guardian, online services will need to conduct identity verification,” said a spokesperson for NetChoice.

    The bills’ supporters argue that users of social media should not be required to upload identity documents since the companies already know their age.

    “They’ve collected so many data points on users that they are advertising to kids because they know the user is a kid,” said a spokesperson for the advocacy group the Tech Oversight Project. “Social media companies’ business models are based on knowing who their users are.”

    NetChoice – and by extension, the tech industry – has several alternative proposals for improving child safety online. They include digital literacy and safety education in the classroom for children to form “an understanding of healthy online practices in a classroom environment to better prepare them for modern challenges”.

    At a meeting in February to debate a proposed bill aimed at online child safety, NetChoice’s director, Amy Bos, argued that parental safety controls introduced by social media companies and parental interventions – such as parents taking away children’s phones when they have racked up too much screen time – were better courses of action than regulation. Asking parents to opt into protecting their children often fails to achieve wide adoption, though.
    Snapchat and Discord told the US Senate in February that fewer than 1% of under-18 users on either social network had parents who monitor their online behavior using parental controls.

    Bos also ardently argued that the proposed bill breached first amendment rights. Her testimony prompted a Vermont state senator to ask: “You said, ‘We represent eBay and Etsy.’ Why would you mention those before TikTok and X in relation to a bill about social media platforms and teenagers?”

    NetChoice is also promoting the bipartisan Invest in Child Safety Act, which is aimed at giving “cops the needed resources to put predators behind bars”, it says, highlighting that less than 1% of reported child sexual abuse material (CSAM) violations are investigated by law enforcement due to a lack of resources and capacity.

    However, critics of NetChoice’s stance argue that more needs to be done proactively to protect children from harm in the first place, and that tech companies should take responsibility for ensuring safety rather than placing it on the shoulders of parents and children.

    “Big Tech and NetChoice are mistaken if they think they’re still fooling anybody with this ‘look there not here’ act,” said Sacha Haworth, executive director of the Tech Oversight Project. “The latest list of alleged ‘solutions’ they propose is just another feint to avoid any responsibility and kick the can down the road while continuing to profit off our kids.”

    All the state bills have faced opposition from tech companies in the form of strenuous statements or in-person lobbying by representatives of these firms.

    Other tech lobbyists needed similar prompting to Bos and Szabo to disclose their relevant tech patrons during their testimonies at hearings on child safety bills, if they notified legislators at all.
    A registered Amazon lobbyist who has spoken at two hearings on New Mexico’s version of the Kids Code bill said he represented the Albuquerque Hispano Chamber of Commerce and the New Mexico Hospitality Association. He never mentioned the e-commerce giant.

    A representative of another tech trade group did not disclose his organization’s backing from Meta – arguably the company that would be most affected by the bill’s stipulations – at the same Vermont hearing that saw Bos’s motives and affiliations questioned.

    The bills’ supporters say these speakers are deliberately concealing who they work for to better convince lawmakers of their messaging.

    “We see a clear and accelerating pattern of deception in anti-Kids Code lobbying,” said Haworth of the Tech Oversight Project, which supports the bills. “Big tech companies that profit billions a year off kids refuse to face outraged citizens and bereaved parents themselves in all these states, instead sending front-group lobbyists in their place to oppose this legislation.”

    NetChoice denied the accusations. In a statement, a spokesperson for the group said: “We are a technology trade association. The claim that we are trying to conceal our affiliation with the tech industry is ludicrous.”

    These state-level bills follow attempts in California to introduce regulations aimed at protecting children’s privacy online. The California Age-Appropriate Design Code Act is based on similar legislation from the UK that became law in October. The California bill, however, was blocked from passing into law in late 2023 by a federal judge, who granted NetChoice a preliminary injunction, citing potential threats to the first amendment. Rights groups such as the American Civil Liberties Union also opposed the bill.

    Supporters in other states say they have learned from the fight in California.
    They point out that language in the other states’ bills has been updated to address concerns raised in the Golden State.

    The online safety bills come amid increasing scrutiny of Meta’s products for their alleged roles in facilitating harm against children. Mark Zuckerberg, its CEO, was told he had “blood on his hands” at a January US Senate judiciary committee hearing on digital sexual exploitation. Zuckerberg turned and apologized to a group of assembled parents.

    In December, the New Mexico attorney general’s office filed a lawsuit against Meta for allegedly allowing its platforms to become a marketplace for child predators. The suit follows a 2023 Guardian investigation that revealed how child traffickers were using Meta platforms, including Instagram, to buy and sell children into sexual exploitation.

    “In time, as Meta’s scandals have piled up, their brand has become toxic to public policy debates,” said Jason Kint, CEO of Digital Content Next, a trade association focused on the digital content industry. “NetChoice leading with Apple, but then burying that Meta and TikTok are members in a hearing focused on social media harms sort of says it all.”

    A Meta spokesperson said the company wanted teens to have age-appropriate experiences online and that it has developed more than 30 child safety tools.

    “We support clear, consistent legislation that makes it simple for parents to manage their teens’ online experiences,” said the spokesperson. “While some laws align with solutions we support, we have been open about our concerns over state legislation that holds apps to different standards in different states. Instead, parents should approve their teen’s app downloads, and we support legislation that requires app stores to get parents’ approval whenever their teens under 16 download apps.”


    Facebook and Instagram to label digitally altered content ‘made with AI’

    Meta, owner of Facebook and Instagram, announced major changes to its policies on digitally created and altered media on Friday, before elections poised to test its ability to police deceptive content generated by artificial intelligence technologies.

    The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on Facebook and Instagram, expanding a policy that previously addressed only a narrow slice of doctored videos, the vice-president of content policy, Monika Bickert, said in a blogpost.

    Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance”, regardless of whether the content was created using AI or other tools. Meta will begin applying the more prominent “high-risk” labels immediately, a spokesperson said.

    The approach will shift the company’s treatment of manipulated content, moving from a focus on removing a limited set of posts toward keeping the content up while providing viewers with information about how it was made.

    Meta previously announced a scheme to detect images made using other companies’ generative AI tools by using invisible markers built into the files, but did not give a start date at the time.

    A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.

    The changes come months before a US presidential election in November that tech researchers warn may be transformed by generative AI technologies.
    Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.

    In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest the US president had behaved inappropriately.

    The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

    The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and to videos depicting people doing things they never actually did.


    Meta allows ads saying 2020 election was rigged on Facebook and Instagram

    Meta is now allowing Facebook and Instagram to run political advertising saying the 2020 election was rigged.

    The policy was reportedly introduced quietly in 2022 after the US midterm primary elections, according to the Wall Street Journal, citing people familiar with the decision. The previous policy prevented Republican candidates from running ads arguing during that campaign that the 2020 election, which Donald Trump lost to Joe Biden, was stolen.

    Meta will now allow political advertisers to say past elections were “rigged” or “stolen”, although it still prevents them from questioning whether ongoing or future elections are legitimate.

    Other social media platforms have been making changes to their policies ahead of the 2024 presidential election, for which online messaging is expected to be fiercely contested.

    In August, X (formerly known as Twitter) said it would reverse its ban on political ads, originally instituted in 2019.

    Earlier, in June, YouTube said it would stop removing content falsely claiming the 2020 election, or other past US presidential elections, were fraudulent, reversing the stance it took after the 2020 election. It said the move aimed to safeguard the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions”.

    Meta, too, reportedly weighed free-speech considerations in making its decision. The Journal reported that Nick Clegg, president of global affairs, took the position that the company should not decide whether elections were legitimate.

    The Journal also reported that Donald Trump ran a Facebook ad in August that was apparently only allowed because of the new rules, in which he lied: “We won in 2016.
    We had a rigged election in 2020 but got more votes than any sitting president.”

    The Tech Oversight Project decried the change in a statement: “We now know that Mark Zuckerberg and Meta will lie to Congress, endanger the American people, and continually threaten the future of our democracy,” said Kyle Morse, deputy executive director. “This announcement is a horrible preview of what we can expect in 2024.”

    Combined with recent Meta moves to reduce the amount of political content shared organically on Facebook, the prominence of campaign ads questioning elections could rise dramatically in 2024.

    “Today you can create hundreds of pieces of content in the snap of a finger and you can flood the zone,” Gina Pak, chief executive of Tech for Campaigns, a digital political marketing organization that works with Democrats, told the Journal.

    Over the past year Meta has laid off about 21,000 employees, many of whom worked on election policy.

    Facebook was accused of having a malign influence on the 2016 US presidential election by failing to tackle the spread of misinformation in the runup to the vote, in which Trump beat Hillary Clinton. Fake news, such as articles slandering Clinton as a murderer or saying the pope endorsed Trump, spread on the network as non-journalists – including a cottage industry of teenagers living in Macedonia – published false pro-Trump sites in order to reap advertising dollars when the stories went viral.

    Trump later appropriated the term “fake news” to attack legitimate reporting of his own falsehoods.


    You think the internet is a clown show now? You ain’t seen nothing yet | John Naughton

    Robert F Kennedy Jr is a flake of Cadbury proportions with a famous name. He’s the son of Robert Kennedy, who was assassinated in 1968 while running for the Democratic presidential nomination (and he is therefore also JFK’s nephew). Let’s call him Junior.

    For years – even pre-Covid-19 – he’s been running a vigorous anti-vaccine campaign and peddling conspiracy theories. In 2021, for example, he was claiming that Dr Anthony Fauci was in cahoots with Bill Gates and the big pharma companies to run a “powerful vaccination cartel” that would prolong the pandemic and exaggerate its deadly effects with the aim of promoting expensive vaccinations. And it went without saying (of course) that the mainstream media and big tech companies were also in on the racket and busily suppressing any critical reporting of it.

    Like most conspiracists, Junior was big on social media, but in 2021 his Instagram account was removed for “repeatedly sharing debunked claims about the coronavirus or vaccines”, and in August last year his anti-vaccination Children’s Health Defense group was removed by Facebook and Instagram on the grounds that it had repeatedly violated Meta’s medical-misinformation policies.

    But guess what? On 4 June, Instagram rescinded Junior’s suspension, enabling him to continue beaming his baloney, without let or hindrance, to his 867,000 followers. How come? Because he announced that he’s running against Joe Biden for the Democratic nomination, and Meta, Instagram’s parent, has a policy that users should be able to engage with posts from “political leaders”. “As he is now an active candidate for president of the United States,” it said, “we have restored access to Robert F Kennedy Jr’s Instagram account.”

    Which naturally is also why the company allowed Donald Trump back on to its platform.
    So in addition to anti-vax propaganda, American voters can also look forward in 2024 to a flood of denialism about the validity of the 2020 election on their social media feeds, as Republican acolytes of Trump stand for election and get a free pass from Meta and co.

    All of which led the technology journalist Casey Newton, an astute observer of these things, to advance an interesting hypothesis last week about what’s happening. We may, he said, have passed “peak trust and safety”. Translation: tech platforms may have stopped caring about moderating what happens on their services. From now on, (almost) anything goes.

    If that’s true, then we have reached the most pivotal moment in the evolution of the tech industry since 1996. That was the year when two US legislators inserted a short clause – section 230 – into the Communications Decency Act that was then going through Congress. In 26 words, the clause guaranteed immunity for online computer services with respect to third-party content generated by their users. It basically meant that if you ran an online service on which people could post whatever they liked, you bore no legal liability for any of the bad stuff that could happen as a result of those publications.

    On the basis of that keep-out-of-jail card, corporations such as Google, Meta and Twitter prospered mightily for years. Bad stuff did indeed happen, but no legal shadow fell on the owners of the platforms on which it was hosted. Of course it often led to bad publicity – but that was ameliorated or avoided by recruiting large numbers of (overseas and poorly paid) moderators, whose job was to ensure that the foul things posted online did not sully the feeds of delicate and fastidious users in the global north.

    But moderation is difficult and often traumatising work. And, given the scale of the problem, keeping social media clean is an impossible, Sisyphean task.
    The companies employ many thousands of moderators across the globe, but they can’t keep up with the deluge. For a time, these businesses argued that artificial intelligence (meaning machine-learning technology) would enable them to get on top of it. But the AI that can outwit the ingenuity of the bad actors who lurk in the depths of the internet has yet to be invented.

    And, more significantly perhaps, times have suddenly become harder for tech companies. The big ones are still very profitable, but that’s partly because they’ve been shedding jobs at a phenomenal rate. And many of those who have been made redundant worked in areas such as moderation, or what the industry came to call “trust and safety”. After all, if there’s no legal liability for the bad stuff that gets through whatever filters there are, why keep these worthy custodians on board?

    Which is why democracies will eventually have to contemplate what was hitherto unthinkable: rethink section 230 and its overseas replications, and make platforms legally liable for the harms that they enable. And send Junior back to the soapbox he deserves.

    What I’ve been reading

    Here’s looking at us
    Techno-Narcissism is Scott Galloway’s compelling blogpost on his No Mercy / No Malice site about the nauseating hypocrisy of the AI bros.

    Ode to Joyce
    The Paris Review website has the text of novelist Sally Rooney’s 2022 TS Eliot lecture, Misreading Ulysses.

    Man of letters
    Remembering Robert Gottlieb, Editor Extraordinaire is a lovely New Yorker piece by David Remnick on one of his predecessors, who has just died.


    Why Donald Trump’s return to Facebook could mark a rocky new age for online discourse

    The former president was banned from Instagram and Facebook following the Jan 6 attacks, but Meta argues that new ‘guardrails’ will keep his behaviour in check. Plus: is a chatbot coming for your job?

    It’s been two years since Donald Trump was banned from Meta’s platforms, but now he’s back. The company’s justification for allowing the former president to return to Facebook and Instagram – that the threat has subsided – seems to ignore that in the two years since the ban Trump hasn’t changed; it’s just that his reach has shrunk.

    Last week, Meta’s president of global affairs, Nick Clegg, announced that soon Trump will be able to post on Instagram and Facebook. The company said “the risk has sufficiently receded” in the two years since the Capitol riots on 6 January 2021 to allow the ban to be lifted.

    What you might not have been aware of – except through media reports – was Trump’s response. That is because the former US president posted it on Truth Social, his own social media network, which he retreated to after he was banned from the others. And it is effectively behind a wall for web users, because the company is not accepting new registrations. On that platform, Trump is said to have fewer than 5 million followers, compared with the 34 million and almost 88 million he had on Facebook and Twitter respectively.

    Meta’s ban meant that Trump wouldn’t have space on its platforms during the US midterm elections in 2022, but would anything have been different if Trump had been given a larger audience? As Dan Milmo has detailed, almost half of the posts on Trump’s Truth Social account in the weeks after the midterms pushed election fraud claims or amplified QAnon accounts or content. But you wouldn’t know it unless you were on that platform, or reading a news report about it like this one.

    If given a larger audience, will Trump resume his Main Character role in online discourse (a role that Twitter’s new owner, Elon Musk, has gamely taken on in the past few months)? Or has his influence diminished?
    This is the gamble Meta is taking.

    When Musk lifted Trump’s ban on Twitter in November after a user poll won by a slim margin, it was easy to read the former president’s snub of the gesture as a burn on the tech CEO. But it seems increasingly likely that Meta’s looming decision about whether to reinstate him was weighing on Trump’s mind. Earlier this month, NBC reported that Trump’s advisers had sent a letter to Meta pleading for the ban to be lifted, saying it “dramatically distorted and inhibited the public discourse”.

    If Trump had gone back to Twitter and started reposting what he had posted on Truth Social, there would have been more pressure on Meta to keep the ban in place (leaving aside the agreement Trump has with his own social media company that keeps his posts exclusive to Truth Social for several hours). Twitter lifting the ban and Trump not tweeting at all gave Meta sufficient cover.

    The financials

    There’s also the possible financial reasoning. Angelo Carusone, the president of Media Matters for America, said Facebook is “a dying platform” and restoring Trump is about clinging to relevance and revenue.

    For months, Trump has been posting on Truth Social about how poorly Meta is performing financially, in part trying to link it to his absence from Facebook. Meta has lost more than US$80bn in market value, and last year sacked thousands of workers as the company aimed to stem a declining user base and loss of revenue after Apple made privacy changes to its software.

    But what of the ‘guardrails’?

    Meta’s justification for restoring Trump’s account is that there are new “guardrails” that could see him banned again for between one month and two years over the most egregious policy breaches. But that is likely only going to apply to the most serious of breaches – such as glorifying those committing violence.
    Clegg indicated that if Trump posts QAnon-adjacent content, for example, his reach will be limited on those posts. The ban itself was a pretty effective reach limiter, but we will have to see what happens if Trump starts posting again.

    The unpublished draft document from staff on the January 6 committee, reported by the Washington Post last week, was pretty telling about Meta, and social media companies generally. It states that both Facebook and Twitter, under its former management, were sensitive to claims that conservative political speech was being suppressed. “Fear of reprisal and accusations of censorship from the political right compromised policy, process, and decision-making. This was especially true at Facebook,” the document states.

    “In one instance, senior leadership intervened personally to prevent rightwing publishers from having their content demoted after receiving too many strikes from independent fact-checkers.

    “After the election, they debated whether they should change their fact-checking policy on former world leaders to accommodate President Trump.”

    Those “guardrails” don’t seem particularly reassuring, do they?

    Is AI really coming for your job?

    Layoffs continue to hit the media, and companies are looking to cut costs. So it was disheartening for news reporters in particular to learn that BuzzFeed plans to use AI such as ChatGPT “to create content instead of writers”.

    (Full disclosure: I worked at BuzzFeed News prior to joining the Guardian in 2019, but it’s been long enough that I am not familiar with any of its thinking about AI.)

    But perhaps it’s a bit too early to despair.
    Anyone who has used free AI tools to produce writing will know the results are OK but not great, so the concern about BuzzFeed dipping its toes in those waters seems overstated – at least for now.

    In an interview with Semafor, BuzzFeed tech reporter Katie Notopoulos explained that the tools aren’t intended to replace the quiz-creation work writers do now, but to create new quizzes unlike what is already around. “On the one hand,” she said, “I want to try to explain this isn’t an evil plan to replace me with AI. But on the other … maybe let Wall Street believe that for a little while.”

    That seems to be where AI is now: not a replacement for a skilled person, just a tool.

    The wider TechScape
    This is the first really good in-depth look at the last few months of Twitter since Elon Musk took over.
    Social media users are posting feelgood footage of strangers to build a following, but not every subject appreciates the clickbaity attention of these so-called #kindness videos.
    If you’re an influencer in Australia and you’re not declaring your sponcon properly, you might be targeted as part of a review by the local regulator.
    Speaking of influencers, Time has a good explanation for why you might have seen people posting about mascara on TikTok in the past few days.
    Writer Jason Okundaye makes the case that it’s time for people to stop filming strangers in public and uploading the videos online in the hope of going viral.
    Nintendo rereleasing GoldenEye 007 this week is a reminder of how much the N64 game shaped video games back in the day.


    Trump’s Facebook and Instagram ban to be lifted, Meta announces

Ex-president to be allowed back ‘in coming weeks … with new guardrails in place’ after ban that followed January 6 attack

In a highly anticipated decision, Meta has said it will allow Donald Trump back on Facebook and Instagram following a two-year ban from the platforms over his online behavior during the 6 January insurrection.

Meta will allow Trump to return “in coming weeks” but “with new guardrails in place to deter repeat offenses”, Meta’s president of global affairs, Nick Clegg, wrote in a blogpost explaining the decision.

“Like any other Facebook or Instagram user, Mr Trump is subject to our community standards,” Clegg wrote. “In the event that Mr Trump posts further violating content, the content will be removed and he will be suspended for between one month and two years, depending on the severity of the violation.”

Trump was removed from Meta platforms following the Capitol riots on 6 January 2021, during which he posted unsubstantiated claims that the election had been stolen, praised increasingly violent protesters and condemned former vice-president Mike Pence even as the mob threatened his life.

Clegg said the suspension was “an extraordinary decision taken in extraordinary circumstances” and that Meta had weighed “whether there remain such extraordinary circumstances that extending the suspension beyond the original two-year period is justified”.

Ultimately, the company has decided that its platforms should be available for “open, public and democratic debate” and that users “should be able to hear from a former President of the United States, and a declared candidate for that office again”, he wrote.

“The public should be able to hear what their politicians are saying – the good, the bad and the ugly – so that they can make informed choices at the ballot box,” he said.

As a general rule, we don’t want to get in the way of open debate on our platforms, esp in context of democratic elections. People should be able to hear what politicians are saying – good, bad & ugly – to make informed choices at the ballot box. 1/4
— Nick Clegg (@nickclegg) January 25, 2023
While it is unclear whether the former president will begin posting again on the platforms, his campaign indicated he had a desire to return in a letter sent to Meta in January. “We believe that the ban on President Trump’s account on Facebook has dramatically distorted and inhibited the public discourse,” the letter said.

Safety concerns and a politicized debate

The move is likely to influence how other social media companies handle the thorny balance of free speech and content moderation when it comes to world leaders and other newsworthy individuals, a debate made all the more urgent by Trump’s run for the US presidency once again.

Online safety advocates have warned that Trump’s return will result in an increase in misinformation and real-life violence. Since being removed from Meta-owned platforms, the former president has continued to promote baseless conspiracy theories elsewhere, predominantly on his own network, Truth Social.

While widely expected, the decision still drew sharp rebukes from civil rights advocates. “Facebook has policies but they under-enforce them,” said Laura Murphy, an attorney who led a two-year audit of Facebook that concluded in 2020. “I worry about Facebook’s capacity to understand the real-world harm that Trump poses: Facebook has been too slow to act.”

The Anti-Defamation League, the NAACP, Free Press and other groups also expressed concern on Wednesday over Facebook’s ability to prevent any future attacks on the democratic process, with Trump still repeating his false claim that he won the 2020 presidential election.

“With the mass murders in Colorado or in Buffalo, you can see there is already a cauldron of extremism that is only intensified if Trump weighs in,” said Angelo Carusone, president and CEO of the media watchdog Media Matters for America. “When Trump is given a platform, it ratchets up the temperature on a landscape that is already simmering – one that will put us on a path to increased violence.”

After the 6 January riots, the former president was also banned from Twitter, Snapchat and YouTube. Some of those platforms have already allowed Trump to return: Twitter’s ban, while initially permanent, was later overruled by its new chief executive, Elon Musk. YouTube has not shared a timeline for a decision on allowing Trump to return, and Trump remains banned from Snapchat.

Meta, however, dragged out its decision. In 2021, CEO Mark Zuckerberg explained in a post that Trump had been barred from the platforms for encouraging violence and that he would remain suspended until a peaceful transition of power could take place.

While Zuckerberg did not initially offer a timeline on the ban, the company punted the decision about whether to remove him permanently to its oversight board, a group of appointed academics and former politicians meant to operate independently of Facebook’s corporate leadership. That group ruled in May 2021 that the penalties should not be “indeterminate”, but kicked the final ruling on Trump’s accounts back to Meta, suggesting it decide in six months – two years after the riots.

The deadline was initially slated for 7 January, and reports from inside Meta suggested the company was intensely debating the decision.
Clegg wrote in a 2021 blogpost that Trump’s accounts would need to be strictly monitored in the event of his return.

How the ‘guardrails’ could work

Announcing the decision on Wednesday, Clegg said Meta’s “guardrails” would include taking action against content that does not directly violate its community standards but “contributes to the sort of risk that materialized on January 6th, such as content that delegitimizes an upcoming election or is related to QAnon”.

Meta “may limit the distribution of such posts, and for repeated instances, may temporarily restrict access to our advertising tools”, Clegg said, or “remove the re-share button” from posts.

Trump responded to the news with a short statement on Truth Social, reposted by others on Twitter, saying that “such a thing should never happen again to a sitting president”, but did not indicate if or when he would return to the platform.

It remains to be seen whether he will actually begin posting again on the platforms where his accounts have been reinstated. While he initially suggested he would be “staying on Truth [Social]”, his own social media platform, recent reports said he was eager to return to Facebook, formally appealing to Meta to reinstate his accounts. But weeks after returning to Twitter, Trump had yet to tweet again; some have suggested the silence is due to an exclusivity agreement he has with Truth Social. A report from Rolling Stone said Trump planned to begin tweeting again when the agreement, which requires him to post all news to the app six hours in advance of any other platform, expires in June.

Trump has a far broader reach on mainstream social platforms than on Truth Social, where he has just 5 million followers.

Many online safety advocates have warned that Trump’s return would be toxic, and Democratic lawmakers on Capitol Hill urged Meta in a December letter to uphold the ban. Representative Adam Schiff, a Democrat who previously chaired the House intelligence committee, criticized the decision to reinstate him. “Trump incited an insurrection,” Schiff wrote on Twitter. “Giving him back access to a social media platform to spread his lies and demagoguery is dangerous.”

Trump’s account has remained online even after his ban, but he had been unable to publish new posts.

Civil rights groups say that regardless of the former president’s future actions, the Meta decision marks a dangerous precedent. “Whether he uses the platforms or not, a reinstatement by Meta sends a message that there are no real consequences even for inciting insurrection and a coup on their channels,” said a group of scholars, advocates and activists calling itself the Real Facebook Oversight Board in a statement. “Someone who has violated their terms of service repeatedly, spread disinformation on their platforms and fomented violence would be welcomed back.”

Reuters contributed reporting


    Kanye West’s Instagram and Twitter accounts locked over antisemitic posts

The rapper has also drawn heavy criticism for donning a ‘white lives matter’ T-shirt during Paris fashion week

Kanye West has now had both his Instagram and Twitter accounts locked after antisemitic posts over the weekend.

Twitter locked his account on Sunday after it removed one of West’s tweets saying he was going “death con 3 On JEWISH PEOPLE”, because it violated the service’s policies against hate speech. “I’m a bit sleepy tonight but when I wake up I’m going death con 3 On JEWISH PEOPLE The funny thing is I actually can’t be Anti Semitic because black people are actually Jew also You guys have toyed with me and tried to black ball anyone whoever opposes your agenda,” he tweeted on Saturday in a series of messages. The tweet has since been removed and West’s account locked.

“The account in question has been locked due to a violation of Twitter’s policies,” a spokesperson for the platform told BuzzFeed News.

The social media company Meta also restricted West’s Instagram account after the rapper made an antisemitic post on Friday in which he appeared to suggest that the rapper Diddy was controlled by Jewish people, an antisemitic trope, NBC News reported.

The controversial rapper, who legally changed his name to Ye, recently drew heavy criticism for donning a “white lives matter” T-shirt during Paris fashion week. He also dressed models in the shirt bearing the phrase, which the Anti-Defamation League considers a “hate slogan”. The league, which monitors violent extremists, notes on its website that white supremacist groups have promoted the phrase.

West told the Fox News host Tucker Carlson he thought the shirt was “funny” and “the obvious thing to do”. “I said, ‘I thought the shirt was a funny shirt; I thought the idea of me wearing it was funny,’” he told Carlson. “And I said, ‘Dad, why did you think it was funny?’ He said, ‘Just a Black man stating the obvious.’”

During the same interview, West told Carlson that Jared Kushner, the Jewish son-in-law of former president Donald Trump, negotiated Middle East peace deals “to make money”.

West was diagnosed with bipolar disorder several years ago and has spoken publicly about his mental health challenges.