More stories

  • Facebook and Instagram to label digitally altered content ‘made with AI’

    Meta, owner of Facebook and Instagram, announced major changes to its policies on digitally created and altered media on Friday, before elections poised to test its ability to police deceptive content generated by artificial intelligence technologies.

    The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on Facebook and Instagram, expanding a policy that previously addressed only a narrow slice of doctored videos, the vice-president of content policy, Monika Bickert, said in a blogpost.

    Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance”, regardless of whether the content was created using AI or other tools. Meta will begin applying the more prominent “high-risk” labels immediately, a spokesperson said.

    The approach will shift the company’s treatment of manipulated content, moving from a focus on removing a limited set of posts toward keeping the content up while providing viewers with information about how it was made.

    Meta previously announced a scheme to detect images made using other companies’ generative AI tools by using invisible markers built into the files, but did not give a start date at the time.

    A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.

    The changes come months before a US presidential election in November that tech researchers warn may be transformed by generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.

    In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest the US president had behaved inappropriately.

    The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

    The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually said or did.

  • Meta allows ads saying 2020 election was rigged on Facebook and Instagram

    Meta is now allowing Facebook and Instagram to run political advertising saying the 2020 election was rigged.

    The policy was reportedly introduced quietly in 2022 after the US midterm primary elections, according to the Wall Street Journal, citing people familiar with the decision. The previous policy prevented Republican candidates from running ads arguing during that campaign that the 2020 election, which Donald Trump lost to Joe Biden, was stolen.

    Meta will now allow political advertisers to say past elections were “rigged” or “stolen”, although it still prevents them from questioning whether ongoing or future elections are legitimate.

    Other social media platforms have been making changes to their policies ahead of the 2024 presidential election, for which online messaging is expected to be fiercely contested.

    In August, X (formerly known as Twitter) said it would reverse its ban on political ads, originally instituted in 2019.

    Earlier, in June, YouTube said it would stop removing content falsely claiming the 2020 election, or other past US presidential elections, were fraudulent, reversing the stance it took after the 2020 election. It said the move aimed to safeguard the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions”.

    Meta, too, reportedly weighed free-speech considerations in making its decision. The Journal reported that Nick Clegg, president of global affairs, took the position that the company should not decide whether elections were legitimate.

    The Wall Street Journal reported that Donald Trump ran a Facebook ad in August that was apparently only allowed because of the new rules, in which he lied: “We won in 2016. We had a rigged election in 2020 but got more votes than any sitting president.”

    The Tech Oversight Project decried the change in a statement: “We now know that Mark Zuckerberg and Meta will lie to Congress, endanger the American people, and continually threaten the future of our democracy,” said Kyle Morse, deputy executive director. “This announcement is a horrible preview of what we can expect in 2024.”

    Combined with recent Meta moves to reduce the amount of political content shared organically on Facebook, the prominence of campaign ads questioning elections could rise dramatically in 2024.

    “Today you can create hundreds of pieces of content in the snap of a finger and you can flood the zone,” Gina Pak, chief executive of Tech for Campaigns, a digital marketing political organization that works with Democrats, told the Journal.

    Over the past year Meta has laid off about 21,000 employees, many of whom worked on election policy.

    Facebook was accused of having a malign influence on the 2016 US presidential election by failing to tackle the spread of misinformation in the runup to the vote, in which Trump beat Hillary Clinton. Fake news, such as articles slandering Clinton as a murderer or saying the pope endorsed Trump, spread on the network as non-journalists – including a cottage industry of teenagers living in Macedonia – published false pro-Trump sites in order to reap advertising dollars when the stories went viral.

    Trump later appropriated the term “fake news” to slander legitimate reporting of his own falsehoods.

  • You think the internet is a clown show now? You ain’t seen nothing yet | John Naughton

    Robert F Kennedy Jr is a flake of Cadbury proportions with a famous name. He’s the son of Robert Kennedy, who was assassinated in 1968 when he was running for the Democratic presidential nomination (and therefore also JFK’s nephew). Let’s call him Junior. For years – even pre-Covid-19 – he’s been running a vigorous anti-vaccine campaign and peddling conspiracy theories. In 2021, for example, he was claiming that Dr Anthony Fauci was in cahoots with Bill Gates and the big pharma companies to run a “powerful vaccination cartel” that would prolong the pandemic and exaggerate its deadly effects with the aim of promoting expensive vaccinations. And it went without saying (of course) that the mainstream media and big tech companies were also in on the racket and busily suppressing any critical reporting of it.

    Like most conspiracists, Junior was big on social media, but then in 2021 his Instagram account was removed for “repeatedly sharing debunked claims about the coronavirus or vaccines”, and in August last year his anti-vaccination Children’s Health Defense group was removed by Facebook and Instagram on the grounds that it had repeatedly violated Meta’s medical-misinformation policies.

    But guess what? On 4 June, Instagram rescinded Junior’s suspension, enabling him to continue beaming his baloney, without let or hindrance, to his 867,000 followers. How come? Because he announced that he’s running against Joe Biden for the Democratic nomination and Meta, Instagram’s parent, has a policy that users should be able to engage with posts from “political leaders”. “As he is now an active candidate for president of the United States,” it said, “we have restored access to Robert F Kennedy Jr’s Instagram account.”

    Which naturally is also why the company allowed Donald Trump back on to its platform. So in addition to anti-vax propaganda, American voters can also look forward in 2024 to a flood of denialism about the validity of the 2020 election on their social media feeds as Republican acolytes of Trump stand for election and get a free pass from Meta and co.

    All of which led technology journalist Casey Newton, an astute observer of these things, to advance an interesting hypothesis last week about what’s happening. We may, he said, have passed “peak trust and safety”. Translation: we may have passed the point where tech platforms stopped caring about moderating what happens on their platforms. From now on, (almost) anything goes.

    If that’s true, then we have reached the most pivotal moment in the evolution of the tech industry since 1996. That was the year when two US legislators inserted a short clause – section 230 – into the Communications Decency Act that was then going through Congress. In 26 words, the clause guaranteed immunity for online computer services with respect to third-party content generated by their users. It basically meant that if you ran an online service on which people could post whatever they liked, you bore no legal liability for any of the bad stuff that could happen as a result of those publications.

    On the basis of that keep-out-of-jail card, corporations such as Google, Meta and Twitter prospered mightily for years. Bad stuff did indeed happen, but no legal shadow fell on the owners of the platforms on which it was hosted. Of course it often led to bad publicity – but that was ameliorated or avoided by recruiting large numbers of (overseas and poorly paid) moderators, whose job was to ensure that the foul things posted online did not sully the feeds of delicate and fastidious users in the global north.

    But moderation is difficult and often traumatising work. And, given the scale of the problem, keeping social media clean is an impossible, sisyphean task. The companies employ many thousands of moderators across the globe, but they can’t keep up with the deluge. For a time, these businesses argued that artificial intelligence (meaning machine-learning technology) would enable them to get on top of it. But the AI that can outwit the ingenuity of the bad actors who lurk in the depths of the internet has yet to be invented.

    And, more significantly perhaps, times have suddenly become harder for tech companies. The big ones are still very profitable, but that’s partly because they’ve been shedding jobs at a phenomenal rate. And many of those who have been made redundant worked in areas such as moderation, or what the industry came to call “trust and safety”. After all, if there’s no legal liability for the bad stuff that gets through whatever filters there are, why keep these worthy custodians on board?

    Which is why democracies will eventually have to contemplate what was hitherto unthinkable: rethink section 230 and its overseas replications and make platforms legally liable for the harms that they enable. And send Junior back to the soapbox he deserves.

    What I’ve been reading

    Here’s looking at us
    Techno-Narcissism is Scott Galloway’s compelling blogpost on his No Mercy / No Malice site about the nauseating hypocrisy of the AI bros.

    Ode to Joyce
    The Paris Review website has the text of novelist Sally Rooney’s 2022 TS Eliot lecture, Misreading Ulysses.

    Man of letters
    Remembering Robert Gottlieb, Editor Extraordinaire is a lovely New Yorker piece by David Remnick on one of his predecessors, who has just died.

  • Why Donald Trump’s return to Facebook could mark a rocky new age for online discourse

    The former president was banned from Instagram and Facebook following the Jan 6 attacks, but Meta argues that new ‘guardrails’ will keep his behaviour in check. Plus: is a chatbot coming for your job?

    It’s been two years since Donald Trump was banned from Meta, but now he’s back. The company’s justification for allowing the former president to return to Facebook and Instagram – that the threat has subsided – seems to ignore that in the two years since the ban Trump hasn’t changed; it’s just that his reach has been reduced.

    Last week, Meta’s president of global affairs, Nick Clegg, announced that soon Trump will be able to post on Instagram and Facebook. The company said “the risk has sufficiently receded” in the two years since the Capitol riots on 6 January 2021 to allow the ban to be lifted.

    What you might not have been aware of – except through media reports – was Trump’s response. That is because the former US president posted it on Truth Social, his own social media network that he retreated to after he was banned from the others. And it is effectively behind a wall for web users, because the company is not accepting new registrations. On that platform, Trump is said to have fewer than 5 million followers, compared to the 34 million and almost 88 million he’d had on Facebook and Twitter respectively.

    Meta’s ban meant that Trump wouldn’t have space on its platforms during the US midterm elections in 2022, but would anything have been different if Trump had been given a larger audience? As Dan Milmo has detailed, almost half of the posts on Trump’s Truth Social account in the weeks after the midterms pushed election fraud claims or amplified QAnon accounts or content. But you wouldn’t know it unless you were on that platform, or reading a news report about it like this one.

    If given a larger audience, will Trump resume his Main Character role in online discourse (a role that Twitter’s new owner, Elon Musk, has gamely taken on in the past few months)? Or has his influence diminished? This is the gamble Meta is taking.

    When Musk lifted Trump’s ban on Twitter in November after a user poll won by a slim margin, it was easy to read the former president’s snub of the gesture as a burn on the tech CEO. But it seems increasingly likely that the Meta decision about whether to reinstate him was looming large in Trump’s mind. Earlier this month, NBC reported that Trump’s advisors had sent a letter to Meta pleading for the ban to be lifted, saying it “dramatically distorted and inhibited the public discourse”. If Trump had gone back to Twitter and started reposting what he had posted on Truth Social, there would have been more pressure on Meta to keep the ban in place (leaving aside the agreement Trump has with his own social media company that keeps his posts exclusive on Truth Social for several hours).

    Twitter lifting the ban and Trump not tweeting at all gave Meta sufficient cover.

    The financials

    There’s also the possible financial reasoning. Angelo Carusone, the president of Media Matters for America, said Facebook is “a dying platform” and restoring Trump is about clinging to relevance and revenue.

    For months, Trump has been posting on Truth Social about how poorly Meta is performing financially, and in part trying to link it to him no longer being on Facebook. Meta has lost more than US$80bn in market value, and last year sacked thousands of workers as the company aimed to stem a declining user base and loss of revenue after Apple made privacy changes on its software.

    But what of the ‘guardrails’?

    Meta’s justification for restoring Trump’s account is that there are new “guardrails” that could result in him being banned again for the most egregious policy breaches for between one month and two years. But that is likely only going to be for the most serious of breaches – such as glorifying those committing violence. Clegg indicated that if Trump is posting QAnon-adjacent content, for example, his reach will be limited on those posts.

    The ban itself was a pretty effective reach limiter, but we will have to see what happens if Trump starts posting again. The unpublished draft document from staff on the January 6 committee, reported by the Washington Post last week, was pretty telling about Meta, and social media companies generally. It states that both Facebook and Twitter, under its former management, were sensitive to claims that conservative political speech was being suppressed. “Fear of reprisal and accusations of censorship from the political right compromised policy, process, and decision-making. This was especially true at Facebook,” the document states.

    “In one instance, senior leadership intervened personally to prevent rightwing publishers from having their content demoted after receiving too many strikes from independent fact-checkers.

    “After the election, they debated whether they should change their fact-checking policy on former world leaders to accommodate President Trump.”

    Those “guardrails” don’t seem particularly reassuring, do they?

    Is AI really coming for your job?

    Layoffs continue to hit the media, and companies are looking to cut costs. So it was disheartening for reporters in particular to learn that BuzzFeed plans to use AI such as ChatGPT “to create content instead of writers”.

    (Full disclosure: I worked at BuzzFeed News prior to joining the Guardian in 2019, but it’s been long enough that I am not familiar with any of its thinking about AI.)

    But perhaps it’s a bit too early to despair. Anyone who has used free AI to produce writing will know it’s OK but not great, so the concern about BuzzFeed dipping its toes in those waters seems to be overstated – at least for now.

    In an interview with Semafor, BuzzFeed tech reporter Katie Notopoulos explained that the tools aren’t intended to replace the quiz-creation work writers do now, but to create new quizzes unlike what is already around. “On the one hand,” she said, “I want to try to explain this isn’t an evil plan to replace me with AI. But on the other … maybe let Wall Street believe that for a little while.”

    That seems to be where AI is now: not a replacement for a skilled person, just a tool.

    The wider TechScape
    This is the first really good in-depth look at the last few months of Twitter since Elon Musk took over.
    Social media users are posting feelgood footage of strangers to build a following, but not every subject appreciates the clickbaity attention of these so-called #kindness videos.
    If you’re an influencer in Australia and you’re not declaring your sponcon properly, you might be targeted as part of a review by the local regulator.
    Speaking of influencers, Time has a good explanation for why you might have seen people posting about mascara on TikTok in the past few days.
    Writer Jason Okundaye makes the case that it’s time for people to stop filming strangers in public and uploading the videos online in the hope of going viral.
    Nintendo rereleasing GoldenEye 007 this week is a reminder of how much the N64 game shaped video games back in the day.

  • Trump’s Facebook and Instagram ban to be lifted, Meta announces

    Ex-president to be allowed back ‘in coming weeks … with new guardrails in place’ after ban that followed January 6 attack

    In a highly anticipated decision, Meta has said it will allow Donald Trump back on Facebook and Instagram following a two-year ban from the platforms over his online behavior during the 6 January insurrection.

    Meta will allow Trump to return “in coming weeks” but “with new guardrails in place to deter repeat offenses”, Meta’s president of global affairs, Nick Clegg, wrote in a blogpost explaining the decision.

    “Like any other Facebook or Instagram user, Mr Trump is subject to our community standards,” Clegg wrote. “In the event that Mr Trump posts further violating content, the content will be removed and he will be suspended for between one month and two years, depending on the severity of the violation.”

    Trump was removed from Meta platforms following the Capitol riots on 6 January 2021, during which he posted unsubstantiated claims that the election had been stolen, praised increasingly violent protestors and condemned former vice-president Mike Pence even as the mob threatened his life.

    Clegg said the suspension was “an extraordinary decision taken in extraordinary circumstances” and that Meta has weighed “whether there remain such extraordinary circumstances that extending the suspension beyond the original two-year period is justified”.

    Ultimately, the company has decided that its platforms should be available for “open, public and democratic debate” and that users “should be able to hear from a former President of the United States, and a declared candidate for that office again”, he wrote.

    “The public should be able to hear what their politicians are saying – the good, the bad and the ugly – so that they can make informed choices at the ballot box,” he said.

    As a general rule, we don’t want to get in the way of open debate on our platforms, esp in context of democratic elections. People should be able to hear what politicians are saying – good, bad & ugly – to make informed choices at the ballot box. 1/4
    — Nick Clegg (@nickclegg) January 25, 2023
    While it is unclear if the former president will begin posting again on the platform, his campaign indicated he had a desire to return in a letter sent to Meta in January. “We believe that the ban on President Trump’s account on Facebook has dramatically distorted and inhibited the public discourse,” the letter said.

    Safety concerns and a politicized debate

    The move is likely to influence how other social media companies will handle the thorny balance of free speech and content moderation when it comes to world leaders and other newsworthy individuals, a debate made all the more urgent by Trump’s run for the US presidency once again.

    Online safety advocates have warned that Trump’s return will result in an increase of misinformation and real-life violence. Since being removed from Meta-owned platforms, the former president has continued to promote baseless conspiracy theories elsewhere, predominantly on his own network, Truth Social.

    While widely expected, the decision still drew sharp rebukes from civil rights advocates. “Facebook has policies but they under-enforce them,” said Laura Murphy, an attorney who led a two-year-long audit of Facebook concluding in 2020. “I worry about Facebook’s capacity to understand the real world harm that Trump poses: Facebook has been too slow to act.”

    The Anti-Defamation League, the NAACP, Free Press and other groups also expressed concern on Wednesday over Facebook’s ability to prevent any future attacks on the democratic process, with Trump still repeating his false claim that he won the 2020 presidential election.

    “With the mass murders in Colorado or in Buffalo, you can see there is already a cauldron of extremism that is only intensified if Trump weighs in,” said Angelo Carusone, president and CEO of media watchdog Media Matters for America. “When Trump is given a platform, it ratchets up the temperature on a landscape that is already simmering – one that will put us on a path to increased violence.”

    After the 6 January riots, the former president was also banned from Twitter, Snapchat and YouTube. Some of those platforms have already allowed Trump to return. Twitter’s ban, while initially permanent, was later overruled by its new chief executive, Elon Musk. YouTube has not shared a timeline on a decision to allow Trump to return. Trump remains banned from Snapchat.

    Meta, however, dragged out its ultimate decision. In 2021, CEO Mark Zuckerberg explained in a post that Trump had been barred from the platforms for encouraging violence and that he would remain suspended until a peaceful transition of power could take place.

    While Zuckerberg did not initially offer a timeline on the ban, the company punted its decision about whether to remove him permanently to its oversight board: a group of appointed academics and former politicians meant to operate independently of Facebook’s corporate leadership. That group ruled in May 2021 that the penalties should not be “indeterminate”, but kicked the final ruling on Trump’s accounts back to Meta, suggesting it decide in six months – two years after the riots.

    The deadline was initially slated for 7 January, and reports from inside Meta suggested the company was intensely debating the decision.
    Clegg wrote in a 2021 blogpost that Trump’s accounts would need to be strictly monitored in the event of his return.

    How the ‘guardrails’ could work

    Announcing the decision on Wednesday, Clegg said Meta’s “guardrails” would include taking action against content that does not directly violate its community standards but “contributes to the sort of risk that materialized on January 6th, such as content that delegitimizes an upcoming election or is related to QAnon”.

    Meta “may limit the distribution of such posts, and for repeated instances, may temporarily restrict access to our advertising tools”, Clegg said, or “remove the re-share button” from posts.

    Trump responded to the news with a short statement on Truth Social, reposted by others on Twitter, saying that “such a thing should never happen again to a sitting president” but did not indicate if or when he would return to the platform.

    It remains to be seen if he will actually begin posting again on the platforms where his accounts have been reinstated. While he initially suggested he would be “staying on Truth [Social]”, his own social media platform, recent reports said he was eager to return to Facebook, formally appealing to Meta to reinstate his accounts. But weeks after returning to Twitter, Trump had yet to tweet again. Some have suggested the silence has been due to an exclusivity agreement he has with Truth Social.

    A report from Rolling Stone said Trump planned to begin tweeting again when the agreement, which requires him to post all news to the app six hours in advance of any other platform, expires in June. Trump has a far broader reach on mainstream social platforms compared to Truth Social, where he has just 5 million followers.

    Many online safety advocates have warned Trump’s return would be toxic, and Democratic lawmakers on Capitol Hill urged Meta in a December letter to uphold the ban. Representative Adam Schiff, a Democrat who previously chaired the House intelligence committee, criticized the decision to reinstate him. “Trump incited an insurrection,” Schiff wrote on Twitter. “Giving him back access to a social media platform to spread his lies and demagoguery is dangerous.”

    Trump’s account has remained online even after his ban, but he had been unable to publish new posts. Civil rights groups say that regardless of the former president’s future actions, the Meta decision marks a dangerous precedent. “Whether he uses the platforms or not, a reinstatement by Meta sends a message that there are no real consequences even for inciting insurrection and a coup on their channels,” said a group of scholars, advocates and activists calling itself the Real Facebook Oversight Board in a statement. “Someone who has violated their terms of service repeatedly, spread disinformation on their platforms and fomented violence would be welcomed back.”

    Reuters contributed reporting

  • Kanye West’s Instagram and Twitter accounts locked over antisemitic posts

    The rapper has also drawn heavy criticism for donning a ‘white lives matter’ T-shirt during Paris fashion week

    Kanye West has now had both his Instagram and Twitter accounts locked after antisemitic posts over the weekend.

    Twitter locked his account Sunday after it removed one of West’s tweets saying he was going “death con 3 On JEWISH PEOPLE” because it violated the service’s policies against hate speech.

    “I’m a bit sleepy tonight but when I wake up I’m going death con 3 On JEWISH PEOPLE The funny thing is I actually can’t be Anti Semitic because black people are actually Jew also You guys have toyed with me and tried to black ball anyone whoever opposes your agenda,” he tweeted on Saturday in a series of messages. The tweet has since been removed and West’s account locked.

    “The account in question has been locked due to a violation of Twitter’s policies,” a spokesperson for the platform told BuzzFeed News.

    The social media company Meta also restricted West’s Instagram account after the rapper made an antisemitic post on Friday in which he appeared to suggest the rapper Diddy was controlled by Jewish people, an antisemitic trope, NBC News reported.

    The controversial rapper, who legally changed his name to Ye, recently drew heavy criticism for donning a “white lives matter” T-shirt during Paris fashion week. He also dressed models in the shirt containing the phrase, which the Anti-Defamation League considers a “hate slogan”. The league, which monitors violent extremists, notes on its website that white supremacist groups have promoted the phrase.

    West told Fox News host Tucker Carlson he thought the shirt was “funny” and “the obvious thing to do”.

    “I said, ‘I thought the shirt was a funny shirt; I thought the idea of me wearing it was funny,’” he told Carlson. “And I said, ‘Dad, why did you think it was funny?’ He said, ‘Just a Black man stating the obvious.’”

    During the same interview, West told Carlson that Jared Kushner, the Jewish son-in-law of former president Donald Trump, negotiated Middle East peace deals “to make money”.

    West was diagnosed with bipolar disorder several years ago and has spoken publicly about his mental health challenges.

  • Nick Clegg to decide on Trump’s 2023 return to Instagram and Facebook

    Meta’s president of global affairs said it would be a decision ‘I oversee’ after the ex-president’s accounts were suspended in 2021

    Nick Clegg, Meta’s president of global affairs, is charged with deciding whether Donald Trump will be allowed to return to Facebook and Instagram in 2023, Clegg said on Thursday.

    Speaking at an event held in Washington by news organization Semafor, Clegg said the company was seriously debating whether Trump’s accounts should be reinstated and said it was a decision that “I oversee and I drive”.

    Clegg added that while he will be making the final call, he will consult the CEO, Mark Zuckerberg, the Facebook board of directors and outside experts.

    “It’s not a capricious decision,” he said. “We will look at the signals related to real-world harm to make a decision whether at the two-year point – which is early January next year – whether Trump gets reinstated to the platform.”

    The former president was suspended from a number of online platforms, including those owned by Meta, following the 6 January 2021 Capitol riot, during which Trump used his social media accounts to praise and perpetuate the violence.

    While Twitter banned Trump permanently, Meta suspended Trump’s accounts for two years, to be later re-evaluated. In May 2021, a temporary ban was upheld by Facebook’s oversight board – a group of appointed academics and former politicians meant to operate independently of Facebook’s corporate leadership.

    However, the board returned the final decision on Trump’s accounts to Meta, suggesting the company decide in six months whether to make the ban permanent. Clegg said that decision will be made by 7 January 2023.

    Clegg previously served as Britain’s deputy prime minister and joined Facebook as vice-president for global affairs and communications in 2018. In February, he was promoted to the top company policy executive role.

    In the years since he began at Meta, Clegg has seen the company through a number of scandals, including scrutiny of its policies during the 2016 US presidential election, Facebook’s role in the persecution of the Rohingya in Myanmar, and the revelations made by whistleblower Frances Haugen.

  • Instagram CEO testifies before Congress over platform’s impact on kids

    Adam Mosseri defends platform and calls for creation of body to determine best practices to help keep young people safe online

    The head of Instagram began testimony before US lawmakers on Wednesday afternoon about protecting children online, in the latest congressional hearing scrutinizing the social media platform’s impact on young users.

    Adam Mosseri defended the platform and called for the creation of an industry body to determine best practices to help keep young people safe online. Mosseri said in written testimony before the Senate commerce consumer protection panel that the industry body should address “how to verify age, how to design age-appropriate experiences, and how to build parental controls”.

    “We all want teens to be safe online,” Mosseri said in opening statements. “The internet isn’t going away, and I believe there’s important work that we can do together – industry and policymakers – to raise the standards across the internet to better serve and protect young people.”

    Instagram and its parent company, Meta Platforms (formerly Facebook), have been facing global criticism over the ways their services affect the mental health, body image and online safety of younger users.

    In opening statements, Senator Richard Blumenthal promised to be “ruthless” in the hearing, saying “the time for self-policing and self-regulation is over”. “Self-policing depends on trust, and the trust is gone,” he said. “The magnitude of these problems requires both broad solutions and accountability, which has been lacking so far.”

    In November, a bipartisan coalition of US state attorneys general said it had opened an inquiry into Meta for promoting Instagram to children despite potential harms. And in September, US lawmakers grilled Facebook’s head of safety, Antigone Davis, about the impacts of the company’s products on children.

    The scrutiny follows the release of internal Facebook documents by a former employee turned whistleblower, which revealed the company’s own internal research showed Instagram negatively affected the mental health of teens, particularly regarding body image issues.

    Ahead of Wednesday’s hearing, Instagram said it will be stricter about the types of content it recommends to teens and will nudge young users toward different areas if they dwell on one topic for a long time.

    In a blogpost published on Tuesday, the social media service announced it was switching off the ability for people to tag or mention teens who do not follow them on the app and would enable teen users to bulk delete their content and previous likes and comments.

    In the blogpost, Mosseri also said Instagram was exploring controls to limit potentially harmful or sensitive material, was working on parental control tools and was launching in certain countries a “Take a Break” feature, which reminds people to take a brief pause from the app after using it for a certain amount of time.

    The Democratic senator and chair of the panel, Richard Blumenthal, called the company’s product announcement “baby steps”. “They are more a PR gambit than real action done within hours of the CEO testifying that are more to distract than really solve the problem,” he told Politico.

    Republican Senator Marsha Blackburn criticized the company’s product announcement as “hollow”, saying in a statement: “Meta is attempting to shift attention from their mistakes by rolling out parental guides, use timers and content control features that consumers should have had all along.”

    An Instagram spokeswoman said the company would continue its pause on plans for a version of Instagram for kids. Instagram suspended plans for that project in September amid growing opposition.