More stories

  • Meta allows ads saying 2020 election was rigged on Facebook and Instagram

    Meta is now allowing Facebook and Instagram to run political advertising saying the 2020 election was rigged. The policy was reportedly introduced quietly in 2022 after the US midterm primary elections, according to the Wall Street Journal, citing people familiar with the decision. The previous policy prevented Republican candidates from running ads arguing during that campaign that the 2020 election, which Donald Trump lost to Joe Biden, was stolen. Meta will now allow political advertisers to say past elections were “rigged” or “stolen”, although it still prevents them from questioning whether ongoing or future elections are legitimate.

    Other social media platforms have been making changes to their policies ahead of the 2024 presidential election, for which online messaging is expected to be fiercely contested. In August, X (formerly known as Twitter) said it would reverse its ban on political ads, originally instituted in 2019. Earlier, in June, YouTube said it would stop removing content falsely claiming that the 2020 election, or other past US presidential elections, were fraudulent, reversing the stance it took after the 2020 election. It said the move aimed to safeguard the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions”.

    Meta, too, reportedly weighed free-speech considerations in making its decision. The Journal reported that Nick Clegg, president of global affairs, took the position that the company should not decide whether elections were legitimate. The Wall Street Journal also reported that Donald Trump ran a Facebook ad in August that was apparently only allowed because of the new rules, in which he lied: “We won in 2016. We had a rigged election in 2020 but got more votes than any sitting president.”

    The Tech Oversight Project decried the change in a statement: “We now know that Mark Zuckerberg and Meta will lie to Congress, endanger the American people, and continually threaten the future of our democracy,” said Kyle Morse, deputy executive director. “This announcement is a horrible preview of what we can expect in 2024.”

    Combined with recent Meta moves to reduce the amount of political content shared organically on Facebook, the prominence of campaign ads questioning elections could rise dramatically in 2024. “Today you can create hundreds of pieces of content in the snap of a finger and you can flood the zone,” Gina Pak, chief executive of Tech for Campaigns, a digital marketing political organization that works with Democrats, told the Journal. Over the past year Meta has laid off about 21,000 employees, many of whom worked on election policy.

    Facebook was accused of having a malign influence on the 2016 US presidential election by failing to tackle the spread of misinformation in the runup to the vote, in which Trump beat Hillary Clinton. Fake news, such as articles slandering Clinton as a murderer or saying the pope endorsed Trump, spread on the network as non-journalists – including a cottage industry of teenagers living in Macedonia – published false pro-Trump sites in order to reap advertising dollars when the stories went viral. Trump later appropriated the term “fake news” to slander legitimate reporting of his own falsehoods.

  • Today’s Top News: Key Takeaways From the G.O.P. Debate, and More

    The New York Times Audio app is home to journalism and storytelling, and provides news, depth and serendipity. If you haven’t already, download it here — available to Times news subscribers on iOS — and sign up for our weekly newsletter.

    The Headlines brings you the biggest stories of the day from the Times journalists who are covering them, all in about 10 minutes. Hosted by Annie Correal, the new morning show features three top stories from reporters across the newsroom and around the world, so you always have a sense of what’s happening, even if you only have a few minutes to spare.

    The candidates mostly ignored former President Donald J. Trump’s overwhelming lead during the debate last night. Photograph: Todd Heisler/The New York Times

    On Today’s Episode:

    5 Takeaways From Another Trump-Free Republican Debate, with Jonathan Swan

    Meet the A.I. Jane Austen: Meta Weaves A.I. Throughout Its Apps, with Mike Isaac

    How Complete Was Stephen Sondheim’s Final Musical?, with Michael Paulson

    Eli Cohen

  • TechScape: As the US election campaign heats up, so could the market for misinformation

    X, the platform formerly known as Twitter, announced it will allow political advertising back on the platform – reversing a global ban on political ads in place since 2019. The move is the latest to stoke concerns about the ability of big tech to police online misinformation ahead of the 2024 elections – and X is not the only platform being scrutinised.

    Social media firms’ handling of misinformation and divisive speech reached a breaking point in the 2020 US presidential elections, when Donald Trump used online platforms to rile up his base, culminating in the storming of the Capitol building on 6 January 2021. But in the time since, companies have not strengthened their policies to prevent such crises, instead slowly stripping protections away. This erosion of safeguards, coupled with the rise of artificial intelligence, could create a perfect storm for 2024, experts warn. As the election cycle heats up, Twitter’s move this week is not the first to raise major concerns about the online landscape for 2024 – and it won’t be the last.

    Musk’s free speech fantasy

    Twitter’s change to election advertising policies is hardly surprising to those following the platform’s evolution under the leadership of Elon Musk, who purchased the company in 2022. In the months since his takeover, the erratic billionaire has made a number of unilateral changes to the site – not least the rebrand of Twitter to X.

    Many of these changes have centered on Musk’s goal of making Twitter profitable at all costs. The platform, he complained, was losing $4m per day at the time of his takeover, and he stated in July that its cash flow was still negative. More than half of the platform’s top advertisers have fled since the takeover – roughly 70% of the platform’s leading advertisers were not spending there as of last December. For his part, this week Musk threatened to sue the Anti-Defamation League, saying, “based on what we’ve heard from advertisers, ADL seems to be responsible for most of our revenue loss”. Whatever the reason, his decision to re-allow political advertisers could help boost revenue at a time when X sorely needs it.

    But it’s not just about money. Musk has identified himself as a “free speech absolutist” and seems hell-bent on turning the platform into a social media free-for-all. Shortly after taking the helm of Twitter, he lifted bans on the accounts of Trump and other rightwing super-spreaders of misinformation. Ahead of the elections, he has expressed a goal of turning Twitter into a “digital town square” where voters and candidates can discuss politics and policies – solidified recently by its (disastrous) hosting of Republican governor Ron DeSantis’s campaign announcement.

    Misinformation experts and civil rights advocates have said this could spell disaster for future elections. “Elon Musk is using his absolute control over Twitter to exert dangerous influence over the 2024 election,” said Imran Ahmed, head of the Center for Countering Digital Hate, a disinformation and hate speech watchdog that Musk himself has targeted in recent weeks. In addition to the policy changes, experts warn that the massive workforce reduction Twitter has carried out under Musk could impair its ability to deal with misinformation, as trust and safety teams are now reported to be woefully understaffed.

    Let the misinformation wars begin

    While Musk’s decisions have been the most high profile in recent weeks, his is not the only platform whose policies have raised alarm. In June, YouTube reversed its election integrity policy, now allowing content contesting the validity of the 2020 elections to remain on the platform. Meanwhile, Meta has also reinstated the accounts of high-profile spreaders of misinformation, including Donald Trump and Robert F Kennedy Jr.

    Experts say these reversals could create an environment similar to the one that fundamentally threatened democracy in 2020. But now there is an added risk: the meteoric rise of artificial intelligence tools. Generative AI, whose capabilities have grown rapidly in the last year, could streamline the ability to manipulate the public on a massive scale. Meta has a longstanding policy that exempts political ads from its misinformation policies and has declined to state whether that immunity will extend to manipulated and AI-generated images in the upcoming elections. Civil rights watchdogs have envisioned a worst-case scenario in which voters’ feeds are flooded with deceptively altered and fabricated images of political figures, eroding their ability to trust what they read online and chipping away at the foundations of democracy.

    While Twitter is not the only company rolling back its protections against misinformation, its extreme stances are moving the goalposts for the entire industry. The Washington Post reported this week that Meta had considered banning all political advertising on Facebook, but reversed course to better compete with its rival Twitter, which Musk had promised to transform into a haven for free speech. Meta also dissolved its Facebook Journalism Project, tasked with promoting accurate information online, and its “responsible innovation team”, which monitored the company’s products for potential risks, according to the Washington Post.

    Twitter may be the most scrutinised in recent weeks, but it’s clear that almost all platforms are moving towards an environment in which they throw up their hands and say they cannot or will not police dangerous misinformation online – and that should concern us all.

    The wider TechScape

    David Shariatmadari goes deep with the co-founder of DeepMind about the mind-blowing potential of artificial intelligence in biotech in this long read.

    New tech news site 404 Media has published a distressing investigation into AI-generated mushroom-foraging books on Amazon. In a space where misinformation could mean the difference between eating something delicious and something deadly, the stakes are high.

    If you can’t beat them, join them: celebrities have been quietly working to sign deals licensing their artificially generated likenesses as the AI arms race continues.

    Elsewhere in AI – scammers are on the rise, and their tactics are terrifying. And the Guardian has blocked OpenAI from trawling its content.

    Can you be “shadowbanned” on a dating app? Some users are convinced their profiles are not being prioritised in the feed. A look into this very modern anxiety, and how the algorithms of online dating actually work.

  • Republicans attack FTC chair and big tech critic Lina Khan at House hearing

    Lina Khan, the chair of the Federal Trade Commission, faced a grueling four hours of questioning during a House judiciary committee oversight hearing on Thursday. Republicans criticized Khan – an outspoken critic of big tech – for “mismanagement” and for “politicizing” legal action against large companies such as Twitter and Google as head of the powerful antitrust agency.

    In his opening statement, committee chair Jim Jordan, an Ohio Republican, said Khan had given herself and the FTC “unchecked power” by taking aggressive steps to regulate practices at big tech companies such as Twitter, Meta and Google. He said Khan carried out “targeted harassment against Twitter” by asking for all communications related to Elon Musk, including conversations with journalists, following Musk’s acquisition, because she does not share his political views. Khan, a former journalist, said the company has “a history of lax security and privacy policies” that did not begin with Musk. Other Democrats agreed. “Protecting user privacy is not political,” said congressman Jerry Nadler, a Democrat of New York, in response to Jordan’s remarks.

    Republicans also condemned Khan for allegedly wasting government money by pursuing more legal action to prevent mergers than her predecessors – but losing. On Tuesday, a federal judge ruled against the FTC’s bid to delay Microsoft from acquiring the video game company Activision Blizzard, saying the agency had failed to prove the deal would decrease competition and harm consumers. The FTC is appealing against that ruling.

    “She has pushed investigations to burden parties with vague and costly demands without any substantive follow-through, or, frankly, logic, for the requests themselves,” said Jordan. Another Republican member, Darrell Issa, of California, called Khan a “bully” for trying to prevent mergers. “I believe you’ve taken the idea that companies should have to be less competitive in order to merge, [and] that every merger has to be somehow bad for the company and good for the consumer – a standard that cannot be met,” Issa said.

    Khan earlier came under scrutiny from Republicans for participating in an FTC case reviewing Meta’s bid to acquire a virtual reality company despite a recommendation from an ethics official to recuse herself. She defended her decision to remain on the case on Thursday, saying she had consulted with the ethics official. Khan testified she had “not a penny” in the company’s stock and thus did not violate ethics laws.

    But enforcing antitrust laws against big tech companies such as Twitter has traditionally been a bipartisan issue. “It’s a little strange that you have this real antipathy among the Republicans of Lina Khan, who in many ways is doing exactly what the Republicans say needs to be done, which is bringing a lot more antitrust scrutiny of big tech,” said Daniel Crane, a professor of antitrust law and enforcement at the University of Michigan Law School. “There’s a broad consensus that we need to do more, but that’s kind of where the agreement ends,” he said. Republicans distrust big tech companies over issues of censorship, political bias and cultural influence, whereas Democrats come from a traditional scrutiny of corporations and concentration of economic power, said Crane. “I don’t fundamentally think she’s doing something other than what she was put in office to do,” he said.

    Congress has not yet passed a major antitrust statute that would be favorable to the FTC in these court battles and does not seem to be pursuing one any time soon, said Crane. “They’re just going to lose a lot of cases, and that’s foreseen.”

    The FTC’s list of battles with big tech companies is growing. Hours earlier on Thursday, Twitter – which now legally goes by X Corp – asked a federal court to terminate a 2011 settlement with the FTC that placed restrictions on its user data and privacy practices. Khan noted Twitter voluntarily entered into that agreement. Also on Thursday, the Washington Post reported the FTC had opened an investigation into OpenAI over whether its chatbot, ChatGPT, is harmful to consumers. A spokesperson for the FTC would not comment on the OpenAI investigation, but Khan said during the hearing that “it has been publicly reported”.

    In 2017, Khan, now 34, gained fame for an academic article she wrote as a law student at Yale that used Amazon’s business practices to explain gaps in US antitrust policy. Biden announced he intended to nominate the antitrust researcher to head the FTC in March 2021. She was sworn in that June. “Chair Khan has delivered results for families, consumers, workers, small businesses, and entrepreneurs,” White House spokesperson Michael Kikukawa said in a statement.

  • You think the internet is a clown show now? You ain’t seen nothing yet | John Naughton

    Robert F Kennedy Jr is a flake of Cadbury proportions with a famous name. He’s the son of Robert Kennedy, who was assassinated in 1968 when he was running for the Democratic presidential nomination (and therefore also JFK’s nephew). Let’s call him Junior. For years – even pre-Covid-19 – he’s been running a vigorous anti-vaccine campaign and peddling conspiracy theories. In 2021, for example, he was claiming that Dr Anthony Fauci was in cahoots with Bill Gates and the big pharma companies to run a “powerful vaccination cartel” that would prolong the pandemic and exaggerate its deadly effects with the aim of promoting expensive vaccinations. And it went without saying (of course) that the mainstream media and big tech companies were also in on the racket and busily suppressing any critical reporting of it.

    Like most conspiracists, Junior was big on social media, but then in 2021 his Instagram account was removed for “repeatedly sharing debunked claims about the coronavirus or vaccines”, and in August last year his anti-vaccination Children’s Health Defense group was removed by Facebook and Instagram on the grounds that it had repeatedly violated Meta’s medical-misinformation policies.

    But guess what? On 4 June, Instagram rescinded Junior’s suspension, enabling him to continue beaming his baloney, without let or hindrance, to his 867,000 followers. How come? Because he announced that he’s running against Joe Biden for the Democratic nomination, and Meta, Instagram’s parent, has a policy that users should be able to engage with posts from “political leaders”. “As he is now an active candidate for president of the United States,” it said, “we have restored access to Robert F Kennedy Jr’s Instagram account.” Which naturally is also why the company allowed Donald Trump back on to its platform. So in addition to anti-vax propaganda, American voters can also look forward in 2024 to a flood of denialism about the validity of the 2020 election on their social media feeds, as Republican acolytes of Trump stand for election and get a free pass from Meta and co.

    All of which led technology journalist Casey Newton, an astute observer of these things, to advance an interesting hypothesis last week about what’s happening. We may, he said, have passed “peak trust and safety”. Translation: the point at which tech platforms cared most about moderating what happens on their platforms may now be behind us. From now on, (almost) anything goes.

    If that’s true, then we have reached the most pivotal moment in the evolution of the tech industry since 1996. That was the year when two US legislators inserted a short clause – section 230 – into the Communications Decency Act that was then going through Congress. In 26 words, the clause guaranteed immunity for online computer services with respect to third-party content generated by their users. It basically meant that if you ran an online service on which people could post whatever they liked, you bore no legal liability for any of the bad stuff that could happen as a result of those publications.

    On the basis of that keep-out-of-jail card, corporations such as Google, Meta and Twitter prospered mightily for years. Bad stuff did indeed happen, but no legal shadow fell on the owners of the platforms on which it was hosted. Of course it often led to bad publicity – but that was ameliorated or avoided by recruiting large numbers of (overseas and poorly paid) moderators, whose job was to ensure that the foul things posted online did not sully the feeds of delicate and fastidious users in the global north.

    But moderation is difficult and often traumatising work. And, given the scale of the problem, keeping social media clean is an impossible, Sisyphean task. The companies employ many thousands of moderators across the globe, but they can’t keep up with the deluge. For a time, these businesses argued that artificial intelligence (meaning machine-learning technology) would enable them to get on top of it. But the AI that can outwit the ingenuity of the bad actors who lurk in the depths of the internet has yet to be invented.

    And, more significantly perhaps, times have suddenly become harder for tech companies. The big ones are still very profitable, but that’s partly because they have been shedding jobs at a phenomenal rate. And many of those who have been made redundant worked in areas such as moderation, or what the industry came to call “trust and safety”. After all, if there’s no legal liability for the bad stuff that gets through whatever filters there are, why keep these worthy custodians on board?

    Which is why democracies will eventually have to contemplate what was hitherto unthinkable: rethink section 230 and its overseas replications and make platforms legally liable for the harms that they enable. And send Junior back to the soapbox he deserves.

    What I’ve been reading

    Here’s looking at us: Techno-Narcissism is Scott Galloway’s compelling blogpost on his No Mercy / No Malice site about the nauseating hypocrisy of the AI bros.

    Ode to Joyce: The Paris Review website has the text of novelist Sally Rooney’s 2022 TS Eliot lecture, Misreading Ulysses.

    Man of letters: Remembering Robert Gottlieb, Editor Extraordinaire is a lovely New Yorker piece by David Remnick on one of his predecessors, who has just died.

  • When the tech boys start asking for new regulations, you know something’s up | John Naughton

    Watching the opening day of the US Senate hearings on AI brought to mind Marx’s quip about history repeating itself, “the first time as tragedy, the second as farce”. Except this time it’s the other way round. Some time ago we had the farce of the boss of Meta (née Facebook) explaining to a senator that his company made money from advertising. This week we had the tragedy of seeing senators quizzing Sam Altman, the new acceptable face of the tech industry.

    Why tragedy? Well, as one of my kids, looking up from revising O-level classics, once explained to me: “It’s when you can see the disaster coming but you can’t do anything to stop it.” The trigger moment was when Altman declared: “We think that regulatory interventions by government will be critical to mitigate the risks of increasingly powerful models.” Warming to the theme, he said that the US government “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities”. He believed that companies like his can “partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes that develop and update safety measures and examining opportunities for global coordination”.

    To some observers, Altman’s testimony looked like big news: wow, a tech boss actually saying that his industry needs regulation! Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance. (Remember AT&T.) The other is that Altman’s proposal is an admission that the industry is already running out of control, and that he sees bad things ahead. So his proposal is either a cunning strategic move or a plea for help. Or both.

    As a general rule, whenever a CEO calls for regulation, you know something’s up. Meta, for example, has been running ads for ages in some newsletters saying that new laws are needed in cyberspace. Some of the cannier crypto crowd have also been baying for regulation. Mostly, these calls are pitches for corporations – through their lobbyists – to play a key role in drafting the requisite legislation. Companies’ involvement is deemed essential because – according to the narrative – government is clueless. As Eric Schmidt – the nearest thing tech has to Machiavelli – put it last Sunday on NBC’s Meet the Press, the AI industry needs to come up with regulations before the government tries to step in “because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.”

    Don’t you just love that idea of the tech boys roughly “getting it right”? Similar claims are made by foxes when pitching for henhouse-design contracts. The industry’s next strategic ploy will be to plead that the current worries about AI are all based on hypothetical scenarios about the future. The most polite term for this is baloney. ChatGPT and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of AI is precisely zero.

    Besides, as Scott Galloway has pointed out in a withering critique, it’s also a racing certainty that chatbot technology will exacerbate the epidemic of loneliness that is afflicting young people across the world. “Tinder’s former CEO is raising venture capital for an AI-powered relationship coach called Amorai that will offer advice to young adults struggling with loneliness. She won’t be alone. Call Annie is an ‘AI friend’ you can phone or FaceTime to ask anything you want. A similar product, Replika, has millions of users.” And of course we’ve all seen those movies – such as Her and Ex Machina – that vividly illustrate how AIs insert themselves between people and their relationships with other humans.

    In his opening words to the Senate judiciary subcommittee’s hearing, the chairman, Senator Blumenthal, said this: “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is: predators on the internet; toxic content; exploiting children, creating dangers for them… Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.” Amen to that. The only thing wrong with the senator’s stirring introduction is the word “before”. The threats and the risks are already here. And we are about to find out if Marx’s view of history was the one to go for.

    What I’ve been reading

    Capitalist punishment: Will AI Become the New McKinsey? is a perceptive essay in the New Yorker by Ted Chiang.

    Founders keepers: Henry Farrell has written a fabulous post called The Cult of the Founders on the Crooked Timber blog.

    Superstore me: The Dead Silence of Goods is a lovely essay in the Paris Review by Adrienne Raphel about Annie Ernaux’s musings on the “superstore” phenomenon.

  • Why Donald Trump’s return to Facebook could mark a rocky new age for online discourse

    The former president was banned from Instagram and Facebook following the Jan 6 attacks, but Meta argues that new ‘guardrails’ will keep his behaviour in check. Plus: is a chatbot coming for your job?

    It’s been two years since Donald Trump was banned from Meta, but now he’s back. The company’s justification for allowing the former president to return to Facebook and Instagram – that the threat has subsided – seems to ignore that in the two years since the ban Trump hasn’t changed, it’s just that his reach has reduced.Last week, Meta’s president of global affairs, Nick Clegg, announced that soon Trump will be able to post on Instagram and Facebook. The company said “the risk has sufficiently receded” in the two years since the Capitol riots on 6 January 2021 to allow the ban to be lifted.What you might not have been aware of – except through media reports – was Trump’s response. That is because the former US president posted it on Truth Social, his own social media network that he retreated to after he was banned from the others. And it is effectively behind a wall for web users, because the company is not accepting new registrations. On that platform, Trump is said to have fewer than 5 million followers, compared to 34 million and almost 88 million he’d had on Facebook and Twitter respectively.Meta’s ban meant that Trump wouldn’t have space on its platforms during the US midterms elections in 2022, but would anything have been different if Trump had been given a larger audience? As Dan Milmo has detailed, almost half of the posts on Trump’s Truth Social account in the weeks after the midterms pushed election fraud claims or amplified QAnon accounts or content. But you wouldn’t know it unless you were on that platform, or reading a news report about it like this one.If given a larger audience, will Trump resume his Main Character role in online discourse (a role that Twitter’s new owner, Elon Musk, has gamely taken on in the past few months)? Or has his influence diminished? 
This is the gamble Meta is taking.When Musk lifted Trump’s ban on Twitter in November after a user poll won by a slim margin, it was easy to read the former president’s snub of the gesture as a burn on the tech CEO. But it seems increasingly likely that the Meta decision about whether to reinstate him was looming large in Trump’s mind. Earlier this month, NBC reported that Trump’s advisors had sent a letter to Meta pleading for the ban to be lifted, saying it “dramatically distorted and inhibited the public discourse”. If Trump had gone back to Twitter and started reposting what he had posted on Truth Social, there would have been more pressure on Meta to keep the ban in place (leaving aside the agreement Trump has with his own social media company that keeps his posts exclusive on Truth Social for several hours).Twitter lifting the ban and Trump not tweeting at all gave Meta sufficient cover.The financialsThere’s also the possible financial reasoning. Angelo Carusone, the president of Media Matters for America, said Facebook is “a dying platform” and restoring Trump is about clinging to relevance and revenue.For months, Trump has been posting on Truth Social about how poorly Meta is performing financially, and in part trying to link it to him no longer being on Facebook. Meta has lost more than US$80bn in market value, and last year sacked thousands of workers as the company aimed to stem a declining user base and loss of revenue after Apple made privacy changes on its software (£).But what of the ‘guardrails’?Meta’s justification for restoring Trump’s account is that there are new “guardrails” that could result in him being banned again for the most egregious policy breaches for between one month and two years. But that is likely only going to be for the most serious of breaches – such as glorifying those committing violence. 
Clegg indicated that if Trump is posting QAnon-adjacent content, for example, his reach will be limited on those posts.The ban itself was a pretty sufficient reach limiter, but we will have to see what happens if Trump starts posting again. The unpublished draft document from staff on the January 6 committee, reported by the Washington Post last week, was pretty telling about Meta, and social media companies generally. It states that both Facebook and Twitter, under its former management, were sensitive to claims that conservative political speech was being suppressed. “Fear of reprisal and accusations of censorship from the political right compromised policy, process, and decision-making. This was especially true at Facebook,” the document states.“In one instance, senior leadership intervened personally to prevent rightwing publishers from having their content demoted after receiving too many strikes from independent fact-checkers.“After the election, they debated whether they should change their fact-checking policy on former world leaders to accommodate President Trump.”Those “guardrails” don’t seem particularly reassuring, do they?Is AI really coming for your job?Layoffs continue to hit media and companies are looking to cut costs. So it was disheartening for new reporters in particular to learn that BuzzFeed plans to use AI such as ChatGPT “to create content instead of writers”.(Full disclosure: I worked at BuzzFeed News prior to joining the Guardian in 2019, but it’s been long enough that I am not familiar with any of its thinking about AI.)But perhaps it’s a bit too early to despair. 
Anyone who has used free AI to produce writing will know it’s OK but not great, so the concern about BuzzFeed dipping its toes in those waters seems to be overstated – at least for now.

In an interview with Semafor, BuzzFeed tech reporter Katie Notopoulos explained that the tools aren’t intended to replace the quiz-creation work writers do now, but to create new quizzes unlike what is already around. “On the one hand,” she said, “I want to try to explain this isn’t an evil plan to replace me with AI. But on the other … maybe let Wall Street believe that for a little while.”

That seems to be where AI is now: not a replacement for a skilled person, just a tool.

The wider TechScape
    This is the first really good in-depth look at the last few months of Twitter since Elon Musk took over.
    Social media users are posting feelgood footage of strangers to build a following, but not every subject appreciates the clickbaity attention of these so-called #kindness videos.
    If you’re an influencer in Australia and you’re not declaring your sponcon properly, you might be targeted as part of a review by the local regulator.
    Speaking of influencers, Time has a good explanation for why you might have seen people posting about mascara on TikTok in the past few days.
    Writer Jason Okundaye makes the case that it’s time for people to stop filming strangers in public and uploading the videos online in the hope of going viral.
    Nintendo rereleasing GoldenEye 007 this week is a reminder of how much the N64 game shaped video games back in the day.


    Trump’s return to Facebook will ‘fan the flames of hatred’, Democrats say

Democrats and liberal groups deplored the decision to revoke the ban on a former president who incited insurrection, but the ACLU defended the move.

Donald Trump’s return to Facebook and Instagram will “fan the flames of hatred and division”, a Democratic congresswoman said, amid liberal outrage after parent company Meta announced its decision to lift a ban on the former US president imposed after the January 6 Capitol attack.

Jan Schakowsky, of Illinois, said: “Reinstating former president Trump’s Facebook and Instagram accounts will only fan the flames of hatred and division that led to an insurrection.”

Trump was impeached for inciting the January 6 riot, a deadly attempt to overturn his defeat by Joe Biden in the 2020 election.

Trump was also banned from major social media platforms. His ban from Twitter was lifted in November, after the site was purchased by the Tesla owner, Elon Musk.
Trump has not tweeted since, although he is active on his own social media platform and would-be Twitter rival, Truth Social.

Announcing Meta’s decision, its president of global affairs, the former British deputy prime minister Nick Clegg, told NBC “the rough and tumble of democratic debate should play out on Facebook and Instagram as much as anywhere else”.

Schakowsky countered: “The reinstatement of Trump’s accounts show that there is no low [Meta chief executive] Mark Zuckerberg will not stoop to in order to reverse Meta’s cratering revenue and stagnant consumer growth, even if it means destroying our democracy.”

Among other Democrats, Adam Schiff, a former House intelligence committee chair, said Trump had “shown no remorse [or] contrition” for January 6, and Facebook had “caved, giving him a platform to do more harm”.

Eric Swalwell, like Schiff now barred from the intelligence committee by Republican leaders, said: “We know that [Trump’s] words have power and they inspire, and then the leaders in the Republican party, like Speaker [Kevin] McCarthy, they don’t condemn them. And so when they’re not condemned, they’re a green light and open lane for more violence to occur.”

Some liberal groups also scorned the decision.

Angelo Carusone, president of Media Matters for America, said: “Meta is refuelling Trump’s misinformation and extremism engine … one that will put us on a path to increased violence.”

Noah Bookbinder, president of Citizens for Responsibility and Ethics in Washington, or Crew, pointed out that Meta was not bound by constitutional free speech protections.

“Facebook is not the government,” he wrote. “The first amendment does not require it to give Donald Trump a platform for speech. And the first amendment does not protect speech to incite an insurrection or overturn an election. The justifications for this are nonsense.”

But there was support for Meta among civil liberties groups.

Anthony Romero, executive director of the American Civil Liberties Union, said: “Like it or not, President Trump is one of the country’s leading political figures and the public has a strong interest in hearing his speech.

“The biggest social media companies are central actors when it comes to our collective ability to speak – and hear the speech of others – online. They should err on the side of allowing a wide range of political speech, even when it offends.”

Trump used his own platform, Truth Social, to celebrate.

He wrote: “Facebook, which has lost billions of dollars in value since ‘deplatforming’ your favorite president, me, has just announced that they are reinstating my account. Such a thing should never again happen to a sitting president, or anybody else who is not deserving of retribution!”

CNN reported that Meta said Trump would be “permitted to attack the results of the 2020 election without facing consequences” but would face action if he “were to cast doubt on an upcoming election – like, the 2024 presidential race”.

Crew, the ethics watchdog, was not alone in expressing skepticism. It said: “If anyone thinks Donald Trump will rejoin Facebook, then not do the same exact thing he did before, well, they clearly don’t know anything about Donald Trump.”