More stories

  • Meta lifts restrictions on Trump’s Facebook and Instagram accounts

    Meta has removed previous restrictions on the Facebook and Instagram accounts of Donald Trump as the 2024 election nears, the company announced on Friday.

    Trump was allowed to return to the social networks in 2023 with “guardrails” in place, after being banned over his online behavior during the 6 January insurrection. Those guardrails have now been removed.

    “In assessing our responsibility to allow political expression, we believe that the American people should be able to hear from the nominees for president on the same basis,” Meta said in a blogpost, citing the Republican national convention, slated for next week, which will formalize Trump as the party’s candidate.

    As a result, Meta said, Trump’s accounts will no longer be subject to heightened suspension penalties, which Meta said were created in response to “extreme and extraordinary circumstances” and “have not had to be deployed”.

    “All US presidential candidates remain subject to the same community standards as all Facebook and Instagram users, including those policies designed to prevent hate speech and incitement to violence,” the company’s blogpost reads.

    Since his return to Meta’s social networks, Trump has primarily shared campaign information, attacks on the Democratic candidate, Joe Biden, and memes on his accounts.

    Critics of Trump and online safety advocates have expressed concern that Trump’s return could lead to a rise in misinformation and incitement of violence, as was seen during the Capitol riot that prompted his initial ban.

    The Biden campaign condemned Meta’s decision in a statement on Friday, calling it a “greedy, reckless decision” that constitutes “a direct attack on our safety and our democracy”.

    “Restoring his access is like handing your car keys to someone you know will drive your car into a crowd and off a cliff,” said campaign spokesperson Charles Kretchmer Lutvak. “It is holding a megaphone for a bona fide racist who will shout his hate and white supremacy from the rooftops and try to take it mainstream.”

    In addition to Meta platforms, other major social media firms banned Trump over his online activity surrounding the 6 January attack, including Twitter (now X), Snapchat and YouTube.

    The former president was allowed back on X last year by the decision of Elon Musk, who bought the company in 2022, though he has not yet tweeted. Trump returned to YouTube in March 2023. He remains banned from Snapchat.

    Trump founded his own social network, Truth Social, in early 2022.

  • The Guardian view on the US and vaccine disinformation: a stupid, shocking and deadly game | Editorial

    In July 2021, Joe Biden rightly inveighed against social media companies failing to tackle vaccine disinformation: “They’re killing people,” the US president said. Despite their pledges to take action, lies and sensationalised accounts were still spreading on platforms. Most of those dying in the US were unvaccinated. An additional source of frustration for the US was the fact that Russia and China were encouraging mistrust of western vaccines, questioning their efficacy, exaggerating side-effects and sensationalising the deaths of people who had been inoculated.

    How, then, would the US describe the effects of its own disinformation at the height of the Covid-19 pandemic? A shocking new report has revealed that its military ran a secret campaign to discredit China’s Sinovac vaccine among Filipinos – when nothing else was available to the Philippines. The Reuters investigation found that this spread to audiences in central Asia and the Middle East, with fake social media accounts not only questioning Sinovac’s efficacy and safety but also claiming it used pork gelatine, to discourage Muslims from receiving it. In the case of the Philippines, the poor take-up of vaccines contributed to one of the highest death rates in the region. Undermining confidence in a specific vaccine can also contribute to broader vaccine hesitancy.

    The campaign, conducted via Facebook, Instagram, Twitter (now X) and other platforms, was launched under the Trump administration despite the objections of multiple state department officials. The Biden administration ended it after the national security council was alerted to the issue in spring 2021. The drive seems to have been retaliation for Chinese claims – made without any evidence – that Covid had been brought to Wuhan by a US soldier. It was also driven by military concerns that the Philippines was growing closer to Beijing.

    It is all the more disturbing because the US has seen what happens when it plays strategic games with vaccination. In 2011, in preparation for the assassination of Osama bin Laden in Abbottabad, Pakistan, the CIA tried to confirm that it had located him by gathering the DNA of relatives through a staged hepatitis B vaccination campaign. The backlash was entirely predictable, especially in an area that had already seen claims that the west was using polio vaccines to sterilise Pakistani Muslim girls. NGOs were vilified and polio vaccinators were murdered. Polio resurged in Pakistan, and Islamist militants in Nigeria subsequently killed vaccinators.

    The report said that the Pentagon has now rescinded parts of the 2019 order that allowed the military to sidestep the state department when running psychological operations. But while the prospect of a second Trump administration resuming such tactics is alarming, the attitude that bred them goes deeper. Reuters pointed to a strategy document from last year in which generals noted that the US could weaponise information, adding: “Disinformation spread across social media, false narratives disguised as news, and similar subversive activities weaken societal trust by undermining the foundations of government.”

    The US is right to challenge the Kremlin’s troll farms, Beijing’s propaganda and the irresponsibility of social media companies. But it’s hard to take the moral high ground when you’ve been pumping out lies. The repercussions in this case were particularly predictable, clear and horrifying. It was indefensible to pursue a project with such obvious potential to cause unnecessary deaths. It must not be repeated.

  • Battle lines drawn as US states take on big tech with online child safety bills

    On 6 April, Maryland became the first state in the US to pass a “Kids Code” bill, which aims to prevent tech companies from engaging in predatory data collection from children and from using design features that could cause them harm. Vermont’s legislature held its final hearing before a full vote on its Kids Code bill on 11 April. The measures are the latest in a salvo of proposed policies that, in the absence of federal rules, have made state capitols a major battlefield in the war between parents and child advocates, who lament that there are too few protections for minors online, and Silicon Valley tech companies, who protest that the recommended restrictions would hobble both business and free speech.

    Known as Age-Appropriate Design Code or Kids Code bills, these measures call for special data safeguards for underage users online as well as blanket prohibitions on children under certain ages using social media. Maryland’s measure passed with unanimous votes in its house and senate.

    In all, nine states across the country – among them Maryland, Vermont, Minnesota, Hawaii, Illinois, New Mexico, South Carolina and Nevada – have introduced and are now hashing out bills aimed at improving online child safety. Minnesota’s bill passed the house committee in February.

    Lawmakers in multiple states have accused lobbyists for tech firms of deception during public hearings. Tech companies have also spent a quarter of a million dollars lobbying against the Maryland bill, to no avail.

    Carl Szabo, vice-president and general counsel of the tech trade association NetChoice, spoke against the Maryland bill at a state senate finance committee meeting in mid-2023 as a “lifelong Maryland resident, parent, [spouse] of a child therapist”.

    Later in the hearing, a Maryland state senator asked: “Who are you, sir? … I don’t believe it was revealed at the introduction of your commentary that you work for NetChoice. All I heard was that you were here testifying as a dad. I didn’t hear you had a direct tie as an employee and representative of big tech.”

    For the past two years, technology giants have been directly lobbying in some states looking to pass online safety bills. In Maryland alone, tech giants racked up more than $243,000 in lobbying fees in 2023, the year the bill was introduced. Google spent $93,076, Amazon $88,886, and Apple $133,449 last year, according to state disclosure forms.

    Amazon, Apple, Google and Meta hired in-state lobbyists in Minnesota and sent employees to lobby directly in 2023. In 2022, the four companies also spent a combined $384,000 on lobbying in Minnesota, the highest total up to that point, according to the Minnesota campaign finance and public disclosure board.

    The bills require tech companies to undergo a series of steps aimed at safeguarding children’s experiences on their websites and assessing their “data protection impact”. Companies must configure all default privacy settings provided to children by online products to offer a high level of privacy, “unless the covered entity can demonstrate a compelling reason that a different setting is in the best interests of children”. Another requirement is to provide privacy information and terms of service in clear, understandable language for children, and to provide responsive tools to help children, or their parents or guardians, exercise their privacy rights and report concerns.

    The legislation leaves it to tech companies to determine whether users are underage but does not require verification by documents such as a driver’s license. Determining age could come from data profiles companies have on a user, or from self-declaration, where users must enter their birth date, known as “age-gating”.

    Critics argue the process of tech companies guessing a child’s age may lead to privacy invasions.

    “Generally, this is how it will work: to determine whether a user in a state is under a specific age and whether the adult verifying a minor over that designated age is truly that child’s parent or guardian, online services will need to conduct identity verification,” said a spokesperson for NetChoice.

    The bills’ supporters argue that users of social media should not be required to upload identity documents since the companies already know their age.

    “They’ve collected so many data points on users that they are advertising to kids because they know the user is a kid,” said a spokesperson for the advocacy group the Tech Oversight Project. “Social media companies’ business models are based on knowing who their users are.”

    NetChoice – and by extension, the tech industry – has several alternative proposals for improving child safety online. They include digital literacy and safety education in the classroom, for children to form “an understanding of healthy online practices in a classroom environment to better prepare them for modern challenges”.

    At a meeting in February to debate a proposed bill aimed at online child safety, NetChoice’s director, Amy Bos, argued that parental safety controls introduced by social media companies, and parental interventions such as taking away children’s phones when they have racked up too much screen time, were better courses of action than regulation. Asking parents to opt into protecting their children often fails to achieve wide adoption, though. Snapchat and Discord told the US Senate in February that fewer than 1% of under-18 users on either social network had parents who monitor their online behavior using parental controls.

    Bos also ardently argued that the proposed bill breached first amendment rights. Her testimony prompted a Vermont state senator to ask: “You said, ‘We represent eBay and Etsy.’ Why would you mention those before TikTok and X in relation to a bill about social media platforms and teenagers?”

    NetChoice is also promoting the bipartisan Invest in Child Safety Act, which is aimed at giving “cops the needed resources to put predators behind bars”, it says, highlighting that less than 1% of reported child sexual abuse material (CSAM) violations are investigated by law enforcement due to a lack of resources and capacity.

    However, critics of NetChoice’s stance argue that more needs to be done proactively to prevent children from harm in the first place, and that tech companies should take responsibility for ensuring safety rather than placing it on the shoulders of parents and children.

    “Big Tech and NetChoice are mistaken if they think they’re still fooling anybody with this ‘look there not here’ act,” said Sacha Haworth, executive director of the Tech Oversight Project. “The latest list of alleged ‘solutions’ they propose is just another feint to avoid any responsibility and kick the can down the road while continuing to profit off our kids.”

    All the state bills have faced opposition from tech companies, in the form of strenuous statements or in-person lobbying by representatives of these firms.

    Other tech lobbyists needed similar prompting to Bos and Szabo to disclose their relevant tech patrons during their testimonies at hearings on child safety bills, if they notified legislators at all. A registered Amazon lobbyist who has spoken at two hearings on New Mexico’s version of the Kids Code bill said he represented the Albuquerque Hispano Chamber of Commerce and the New Mexico Hospitality Association. He never mentioned the e-commerce giant. A representative of another tech trade group did not disclose his organization’s backing from Meta – arguably the company that would be most affected by the bill’s stipulations – at the same Vermont hearing that saw Bos’s motives and affiliations questioned.

    The bills’ supporters say these speakers are deliberately concealing who they work for to better convince lawmakers of their messaging.

    “We see a clear and accelerating pattern of deception in anti-Kids Code lobbying,” said Haworth of the Tech Oversight Project, which supports the bills. “Big tech companies that profit billions a year off kids refuse to face outraged citizens and bereaved parents themselves in all these states, instead sending front-group lobbyists in their place to oppose this legislation.”

    NetChoice denied the accusations. In a statement, a spokesperson for the group said: “We are a technology trade association. The claim that we are trying to conceal our affiliation with the tech industry is ludicrous.”

    These state-level bills follow attempts in California to introduce regulations aimed at protecting children’s privacy online. The California Age-Appropriate Design Code Act is based on similar legislation from the UK that became law in October. The California bill, however, was blocked from being passed into law in late 2023 by a federal judge, who granted NetChoice a preliminary injunction, citing potential threats to the first amendment. Rights groups such as the American Civil Liberties Union also opposed the bill. Supporters in other states say they have learned from the fight in California. They point out that language in the eight other states’ bills has been updated to address concerns raised in the Golden State.

    The online safety bills come amid increasing scrutiny of Meta’s products for their alleged roles in facilitating harm against children. Mark Zuckerberg, its CEO, was told he had “blood on his hands” at a January US Senate judiciary committee hearing on digital sexual exploitation. Zuckerberg turned and apologized to a group of assembled parents. In December, the New Mexico attorney general’s office filed a lawsuit against Meta for allegedly allowing its platforms to become a marketplace for child predators. The suit follows a 2023 Guardian investigation that revealed how child traffickers were using Meta platforms, including Instagram, to buy and sell children into sexual exploitation.

    “In time, as Meta’s scandals have piled up, their brand has become toxic to public policy debates,” said Jason Kint, CEO of Digital Content Next, a trade association focused on the digital content industry. “NetChoice leading with Apple, but then burying that Meta and TikTok are members in a hearing focused on social media harms sort of says it all.”

    A Meta spokesperson said the company wanted teens to have age-appropriate experiences online and that the company has developed more than 30 child safety tools.

    “We support clear, consistent legislation that makes it simple for parents to manage their teens’ online experiences,” said the spokesperson. “While some laws align with solutions we support, we have been open about our concerns over state legislation that holds apps to different standards in different states. Instead, parents should approve their teen’s app downloads, and we support legislation that requires app stores to get parents’ approval whenever their teens under 16 download apps.”

  • Facebook and Instagram to label digitally altered content ‘made with AI’

    Meta, owner of Facebook and Instagram, announced major changes on Friday to its policies on digitally created and altered media, before elections poised to test its ability to police deceptive content generated by artificial intelligence technologies.

    The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on Facebook and Instagram, expanding a policy that previously addressed only a narrow slice of doctored videos, the vice-president of content policy, Monika Bickert, said in a blogpost.

    Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance”, regardless of whether the content was created using AI or other tools. Meta will begin applying the more prominent “high-risk” labels immediately, a spokesperson said.

    The approach will shift the company’s treatment of manipulated content, moving from a focus on removing a limited set of posts toward keeping the content up while providing viewers with information about how it was made.

    Meta previously announced a scheme to detect images made using other companies’ generative AI tools by using invisible markers built into the files, but did not give a start date at the time.

    A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.

    The changes come months before a US presidential election in November that tech researchers warn may be transformed by generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.

    In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest the US president had behaved inappropriately.

    The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

    The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and to videos depicting people doing things they never actually did.

  • AI firm considers banning creation of political images for 2024 elections

    The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump, as part of an effort to avoid its being used to distract from or misinform about the 2024 US presidential election.

    “I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.

    In a conversation with Midjourney users in a chatroom on Discord, as reported by Bloomberg, Holz went on to say: “I know it’s fun to make Trump pictures – I make Trump pictures. Trump is aesthetically really interesting. However, probably better to just not, better to pull out a little bit during this election. We’ll see.”

    AI-generated imagery has recently become a concern. Two weeks ago, pornographic imagery featuring the likeness of Taylor Swift prompted lawmakers and the so-called Swifties who support the singer to demand stronger protections against AI-generated images. The Swift images were traced back to 4chan, a community message board often linked to the sharing of sexual, racist, conspiratorial, violent or otherwise antisocial material, with or without the use of AI.

    Holz’s comments come as image-generator operators play a game of cat and mouse with users, building safeguards to prevent the creation of questionable content.

    AI in the political realm is causing increasing concern, though the MIT Technology Review recently noted that discussion about how AI may threaten democracy “lacks imagination”. “People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images,” the review noted. It added: “We’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.”

    Still, the AI company Inflection said in October that its chatbot, Pi, would not be allowed to advocate for any political candidate. Co-founder Mustafa Suleyman told a Wall Street Journal conference that chatbots “probably [have] to remain a human part of the process” even if they function perfectly.

    Meta’s Facebook said last week that it plans to label posts created using AI tools, as part of a broader effort to combat election-year misinformation. Microsoft-affiliated OpenAI has said it will add watermarks to images made with its platforms to combat political deepfakes produced by AI.

    “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” the company said in a blog post last month.

    OpenAI chief executive Sam Altman said at a recent event: “The thing that I’m most concerned about is that with new capabilities with AI … there will be better deepfakes than in 2020.”

    In January, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home illustrated the potential of AI political manipulation. The FCC later announced a ban on AI-generated voices in robocalls.

    “What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration – our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” David Ryan Polgar, the president of the non-profit All Tech Is Human, previously told the Guardian.

    Midjourney software was responsible for a fake image of Trump being handcuffed by agents. Others that have appeared online include Biden and Trump as elderly men knitting sweaters co-operatively, Biden grinning while firing a machine gun, and Trump meeting Pope Francis in the White House.

    The software already has a number of safeguards in place. Midjourney’s community standards guidelines prohibit images that are “disrespectful, harmful, misleading public figures/events portrayals or potential to mislead”.

    Bloomberg noted that what is permitted varies according to the software version used. An older version of Midjourney produced an image of Trump covered in spaghetti, but a newer version did not.

    But if Midjourney bans the generation of political images, consumers – among them voters – will probably be unaware. “We’ll probably just hammer it and not say anything,” Holz said.

  • When dead children are just the price of doing business, Zuckerberg’s apology is empty | Carole Cadwalladr

    I don’t generally approve of blood sports, but I’m happy to make an exception for the hunting and baiting of Silicon Valley executives in a congressional committee room. But then I like expensive, pointless spectacles. And waterboarding tech CEOs in Congress is right up there with firework displays: a brief, thrillingly meaningless sensation on the retina and then darkness.

    Last week’s grilling of Mark Zuckerberg and his fellow Silicon Valley Übermenschen was a classic of the genre: front pages, headlines, and a genuinely stand-out moment of awkwardness in which he was forced to face victims for the first time ever and apologise: stricken parents holding the photographs of their dead children lost to cyberbullying and sexual exploitation on his platform.

    Less than six hours later, his company delivered its quarterly results. Meta’s stock price surged by 20.3%, delivering a $200bn bump to the company’s market capitalisation and, if you’re counting, which as CEO he presumably does, a $700m sweetener for Zuckerberg himself. Those who listened to the earnings call tell me there was no mention of dead children.

    A day later, Biden announced, “If you harm an American, we will respond”, and dropped missiles on more than 80 targets across Syria and Iraq. Sure, bro, just so long as the Americans aren’t teenagers with smartphones. US tech companies routinely harm Americans, and in particular American children, though to be fair they routinely harm all other nationalities’ children too: the Wall Street Journal has shown that Meta’s algorithms enable paedophiles to find each other. New Mexico’s attorney general is suing the company for being the “largest marketplace for predators and paedophiles globally”. A coroner in Britain found that 14-year-old Molly Jane Russell “died from an act of self-harm while suffering from depression and the negative effects of online content” – which included Instagram videos depicting suicide.

    And while dispatching a crack squad of Navy Seals to Menlo Park might be too much to hope for, there are other responses that the US Congress could have mandated, such as, here’s an idea, a law. Any law. One that, say, prohibits tech companies from treating dead children as just a cost of doing business.

    Because demanding that tech companies don’t enable paedophiles to find and groom children is the lowest of all low-hanging fruit in the tech regulation space. And yet even that hasn’t happened. What America urgently needs is to act on its anti-trust laws and break up these companies as a first basic step. It needs to take an axe to Section 230, the law that gives platforms immunity from lawsuits for hosting harmful or illegal content.

    It needs basic product safety legislation. Imagine GlaxoSmithKline launched an experimental new wonder drug last year. A drug that has shown incredible benefits, including curing some forms of cancer and slowing down ageing. It might also cause brain haemorrhages and abort foetuses, but the data on that is not yet in, so we’ll just have to wait and see. There’s a reason that doesn’t happen. They’re called laws. Drug companies go through years of testing. Because they have to. Because at some point, a long time ago, Congress and other legislatures across the world did their job.

    Yet Silicon Valley’s latest extremely disruptive technology, generative AI, was released into the wild last year without even the most basic federally mandated product testing. Last week, deepfake porn images of the most famous female star on the planet, Taylor Swift, flooded social media platforms, which had no legal obligation to take them down – and hence many of them didn’t.

    But who cares? It’s only violence being perpetrated against a woman. It’s only non-consensual sexual assault, algorithmically distributed to millions of people across the planet. Punishing women is the first step in the rollout of any disruptive new technology, so get used to that; and if you think deepfakes are going to stop with pop stars, good luck with that too.

    You thought misinformation during the US election and Brexit vote in 2016 was bad? Well, let’s wait and see what 2024 has to offer. Could there be any possible downside to releasing this untested new technology – one that enables the creation of mass disinformation at scale for no cost – at the exact moment in which more people will go to the polls than at any time in history?

    You don’t actually have to imagine where that might lead, because it’s already happened. A deepfake targeting a progressive candidate dropped days before the Slovakian general election in October. It’s impossible to know what impact it had or who created it, but the candidate lost, and the opposition pro-Putin candidate won. CNN reports that the messaging of the deepfake echoed that put out by Russia’s foreign intelligence service just an hour before it dropped. And where was Facebook in all of this, you ask? Where it usually is: refusing to take many of the deepfake posts down.

    Back in Congress, grilling tech execs is something to do to fill the time in between the difficult job of not passing tech legislation. It’s now six years since the Cambridge Analytica scandal, when Zuckerberg became the first major tech executive to be commanded to appear before Congress. That was a revelation, because it felt like Facebook might finally be brought to heel.

    But Wednesday’s outing was Zuckerberg’s eighth. And neither Facebook, nor any other tech platform, has been brought to heel. The US has passed not a single federal law. Meanwhile, Facebook has done some exculpatory techwashing of its name to remove the stench of data scandals and Kremlin infiltration, and occasionally offers up its CEO for a ritual slaughtering on the Senate floor.

    To understand America’s end-of-empire waning dominance in the world, its broken legislature and its capture by corporate interests, the symbolism of a senator forcing Zuckerberg to apologise to bereaved parents while Congress – that big white building stormed by insurrectionists who found each other on social media platforms – does absolutely nothing to curb his company’s singular power is as good a place to start as any.

    We’ve had eight years to learn the lessons of 2016, and yet here we are. Britain has responded by weakening the body that protects our elections and degrading our data protection laws to “unlock post-Brexit opportunities”. American congressional committees are now a cargo cult that goes through ritualised motions of accountability. Meanwhile, there’s a new tech wonder drug on the market that may create untold economic opportunities, or lethal bioweapons and the destabilisation of what is left of liberal democracy. Probably both.

    Carole Cadwalladr is a reporter and feature writer for the Observer

  • Meta allows ads saying 2020 election was rigged on Facebook and Instagram

    Meta is now allowing political ads on Facebook and Instagram that say the 2020 election was rigged.

    The policy was reportedly introduced quietly in 2022, after the US midterm primary elections, according to the Wall Street Journal, citing people familiar with the decision. The previous policy prevented Republican candidates from running ads arguing during that campaign that the 2020 election, which Donald Trump lost to Joe Biden, was stolen.

    Meta will now allow political advertisers to say past elections were “rigged” or “stolen”, although it still prevents them from questioning whether ongoing or future elections are legitimate.

    Other social media platforms have been making changes to their policies ahead of the 2024 presidential election, for which online messaging is expected to be fiercely contested.

    In August, X (formerly known as Twitter) said it would reverse its ban on political ads, originally instituted in 2019.

    Earlier, in June, YouTube said it would stop removing content falsely claiming that the 2020 election or other past US presidential elections were fraudulent, reversing the stance it took after the 2020 election. It said the move aimed to safeguard the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions”.

    Meta, too, reportedly weighed free-speech considerations in making its decision. The Journal reported that Nick Clegg, Meta’s president of global affairs, took the position that the company should not decide whether elections were legitimate.

    The Journal also reported that Donald Trump ran a Facebook ad in August that was apparently only allowed because of the new rules, in which he lied: “We won in 2016. We had a rigged election in 2020 but got more votes than any sitting president.”

    The Tech Oversight Project decried the change in a statement. “We now know that Mark Zuckerberg and Meta will lie to Congress, endanger the American people, and continually threaten the future of our democracy,” said Kyle Morse, deputy executive director. “This announcement is a horrible preview of what we can expect in 2024.”

    Combined with recent Meta moves to reduce the amount of political content shared organically on Facebook, the prominence of campaign ads questioning elections could rise dramatically in 2024.

    “Today you can create hundreds of pieces of content in the snap of a finger and you can flood the zone,” Gina Pak, chief executive of Tech for Campaigns, a digital political marketing organization that works with Democrats, told the Journal.

    Over the past year Meta has laid off about 21,000 employees, many of whom worked on election policy.

    Facebook was accused of having a malign influence on the 2016 US presidential election by failing to tackle the spread of misinformation in the runup to the vote, in which Trump beat Hillary Clinton. Fake news, such as articles slandering Clinton as a murderer or saying the pope endorsed Trump, spread on the network as non-journalists – including a cottage industry of teenagers living in Macedonia – published false pro-Trump sites in order to reap advertising dollars when the stories went viral.

    Trump later appropriated the term “fake news” to slander legitimate reporting of his own falsehoods.

  • in

    You think the internet is a clown show now? You ain’t seen nothing yet | John Naughton

    Robert F Kennedy Jr is a flake of Cadbury proportions with a famous name. He's the son of Robert Kennedy, who was assassinated in 1968 when he was running for the Democratic presidential nomination (and therefore also JFK's nephew). Let's call him Junior. For years – even pre-Covid-19 – he's been running a vigorous anti-vaccine campaign and peddling conspiracy theories. In 2021, for example, he was claiming that Dr Anthony Fauci was in cahoots with Bill Gates and the big pharma companies to run a "powerful vaccination cartel" that would prolong the pandemic and exaggerate its deadly effects with the aim of promoting expensive vaccinations. And it went without saying (of course) that the mainstream media and big tech companies were also in on the racket and busily suppressing any critical reporting of it.

    Like most conspiracists, Junior was big on social media, but then in 2021 his Instagram account was removed for "repeatedly sharing debunked claims about the coronavirus or vaccines", and in August last year his anti-vaccination Children's Health Defense group was removed by Facebook and Instagram on the grounds that it had repeatedly violated Meta's medical-misinformation policies.

    But guess what? On 4 June, Instagram rescinded Junior's suspension, enabling him to continue beaming his baloney, without let or hindrance, to his 867,000 followers. How come? Because he announced that he's running against Joe Biden for the Democratic nomination, and Meta, Instagram's parent, has a policy that users should be able to engage with posts from "political leaders". "As he is now an active candidate for president of the United States," it said, "we have restored access to Robert F Kennedy Jr's Instagram account."

    Which naturally is also why the company allowed Donald Trump back on to its platform. So in addition to anti-vax propaganda, American voters can also look forward in 2024 to a flood of denialism about the validity of the 2020 election on their social media feeds, as Republican acolytes of Trump stand for election and get a free pass from Meta and co.

    All of which led technology journalist Casey Newton, an astute observer of these things, to advance an interesting hypothesis last week about what's happening. We may, he said, have passed "peak trust and safety". Translation: we may have passed the point where tech platforms stopped caring about moderating what happens on their platforms. From now on, (almost) anything goes.

    If that's true, then we have reached the most pivotal moment in the evolution of the tech industry since 1996. That was the year when two US legislators inserted a short clause – section 230 – into the Communications Decency Act that was then going through Congress. In 26 words, the clause guaranteed immunity for online computer services with respect to third-party content generated by their users. It basically meant that if you ran an online service on which people could post whatever they liked, you bore no legal liability for any of the bad stuff that could happen as a result of those publications.

    On the basis of that keep-out-of-jail card, corporations such as Google, Meta and Twitter prospered mightily for years. Bad stuff did indeed happen, but no legal shadow fell on the owners of the platforms on which it was hosted. Of course it often led to bad publicity – but that was ameliorated or avoided by recruiting large numbers of (overseas and poorly paid) moderators, whose job was to ensure that the foul things posted online did not sully the feeds of delicate and fastidious users in the global north.

    But moderation is difficult and often traumatising work. And, given the scale of the problem, keeping social media clean is an impossible, Sisyphean task. The companies employ many thousands of moderators across the globe, but they can't keep up with the deluge. For a time, these businesses argued that artificial intelligence (meaning machine-learning technology) would enable them to get on top of it. But the AI that can outwit the ingenuity of the bad actors who lurk in the depths of the internet has yet to be invented.

    And, more significantly perhaps, times have suddenly become harder for tech companies. The big ones are still very profitable, but that's partly because they've been shedding jobs at a phenomenal rate. And many of those who have been made redundant worked in areas such as moderation, or what the industry came to call "trust and safety". After all, if there's no legal liability for the bad stuff that gets through whatever filters there are, why keep these worthy custodians on board?

    Which is why democracies will eventually have to contemplate what was hitherto unthinkable: rethink section 230 and its overseas replications and make platforms legally liable for the harms that they enable. And send Junior back to the soapbox he deserves.

    What I've been reading

    Here's looking at us
    Techno-Narcissism is Scott Galloway's compelling blogpost on his No Mercy / No Malice site about the nauseating hypocrisy of the AI bros.

    Ode to Joyce
    The Paris Review website has the text of novelist Sally Rooney's 2022 TS Eliot lecture, Misreading Ulysses.

    Man of letters
    Remembering Robert Gottlieb, Editor Extraordinaire is a lovely New Yorker piece by David Remnick on one of his predecessors, who has just died.