More stories

  • Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos

    Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

    Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.

    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

    The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.

    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

    The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.

    “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

    Clegg said each company “quite rightly has its own set of content policies”.

    “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play Whac-a-Mole and finding everything that you think may mislead someone.”

    Several political leaders from Europe and the US also joined Friday’s announcement. Vera Jourová, the European Commission vice-president, said while such an agreement can’t be comprehensive, “it contains very impactful and positive elements”. She also urged fellow politicians to take responsibility to not use AI tools deceptively and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states”.

    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and most recently Indonesia have already done so.

    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked the US president Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

    Just days before Slovakia’s elections in September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

    Politicians also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

    The accord calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”.

    It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.

    Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out, and the companies have faced pressure to do more.

    That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.

    The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

    Many social media companies already have policies in place to deter deceptive posts about electoral processes – AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation” as well as other false posts meant to interfere with someone’s civic participation.

    Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step” but he’d still like to see social media companies taking other actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

    Lisa Gilbert, executive vice-president of the advocacy group Public Citizen, argued Friday that the accord is “not enough” and AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems”.

    In addition to the companies that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.

    Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn’t immediately respond to a request for comment Friday.

    The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free-speech absolutist”.

    In a statement Friday, X CEO Linda Yaccarino said “every citizen and company has a responsibility to safeguard free and fair elections”.

    “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.

  • AI firm considers banning creation of political images for 2024 elections

    The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump as part of an effort to avoid being used to distract from or misinform about the 2024 US presidential election.

    “I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.

    In a conversation with Midjourney users in a chatroom on Discord, as reported by Bloomberg, Holz went on to say: “I know it’s fun to make Trump pictures – I make Trump pictures. Trump is aesthetically really interesting. However, probably better to just not, better to pull out a little bit during this election. We’ll see.”

    AI-generated imagery has recently become a concern. Two weeks ago, pornographic imagery featuring the likeness of Taylor Swift prompted lawmakers and the so-called Swifties who support the singer to demand stronger protections against AI-generated images.

    The Swift images were traced back to 4chan, a community message board often linked to the sharing of sexual, racist, conspiratorial, violent or otherwise antisocial material with or without the use of AI.

    Holz’s comments come as image-generator operators play a game of cat-and-mouse with users, adding safeguards to prevent the creation of questionable content.

    AI in the political realm is causing increasing concern, though the MIT Technology Review recently noted that discussion about how AI may threaten democracy “lacks imagination”.

    “People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images,” the review noted. It added: “We’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.”

    Still, the AI company Inflection AI said in October that its chatbot, Pi, would not be allowed to advocate for any political candidate. Co-founder Mustafa Suleyman told a Wall Street Journal conference that chatbots “probably [have] to remain a human part of the process” even if they function perfectly.

    Meta’s Facebook said last week that it plans to label posts created using AI tools as part of a broader effort to combat election-year misinformation. Microsoft-affiliated OpenAI has said it will add watermarks to images made with its platforms to combat political deepfakes produced by AI.

    “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” the company said in a blog post last month.

    OpenAI chief executive Sam Altman said at an event recently: “The thing that I’m most concerned about is that with new capabilities with AI … there will be better deepfakes than in 2020.”

    In January, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home illustrated the potential of AI political manipulation. The FCC later announced a ban on AI-generated voices in robocalls.

    “What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration – our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” David Ryan Polgar, the president of the non-profit All Tech Is Human, previously told the Guardian.

    Midjourney software was responsible for a fake image of Trump being handcuffed by agents. Others that have appeared online include Biden and Trump as elderly men knitting sweaters co-operatively, Biden grinning while firing a machine gun and Trump meeting Pope Francis in the White House.

    The software already has a number of safeguards in place. Midjourney’s community standards guidelines prohibit images that are “disrespectful, harmful, misleading public figures/events portrayals or potential to mislead”.

    Bloomberg noted that what is permitted or not permitted varies according to the software version used. An older version of Midjourney produced an image of Trump covered in spaghetti, but a newer version did not.

    But if Midjourney bans the generation of AI-generated political images, consumers – among them voters – will probably be unaware.

    “We’ll probably just hammer it and not say anything,” Holz said.

  • When dead children are just the price of doing business, Zuckerberg’s apology is empty | Carole Cadwalladr

    I don’t generally approve of blood sports but I’m happy to make an exception for the hunting and baiting of Silicon Valley executives in a congressional committee room. But then I like expensive, pointless spectacles. And waterboarding tech CEOs in Congress is right up there with firework displays, a brief, thrillingly meaningless sensation on the retina and then darkness.

    Last week’s grilling of Mark Zuckerberg and his fellow Silicon Valley Übermenschen was a classic of the genre: front pages, headlines, and a genuinely stand-out moment of awkwardness in which he was forced to face victims for the first time ever and apologise: stricken parents holding the photographs of their dead children lost to cyberbullying and sexual exploitation on his platform.

    Less than six hours later, his company delivered its quarterly results. Meta’s stock price surged by 20.3%, delivering a $200bn bump to the company’s market capitalisation and, if you’re counting, which as CEO he presumably does, a $700m sweetener for Zuckerberg himself. Those who listened to the earnings call tell me there was no mention of dead children.

    A day later, Biden announced, “If you harm an American, we will respond”, and dropped missiles on more than 80 targets across Syria and Iraq. Sure bro, just so long as the Americans aren’t teenagers with smartphones. US tech companies routinely harm Americans, and in particular, American children, though to be fair they routinely harm all other nationalities’ children too: the Wall Street Journal has shown Meta’s algorithms enable paedophiles to find each other. New Mexico’s attorney general is suing the company for being the “largest marketplace for predators and paedophiles globally”. A coroner in Britain found that 14-year-old Molly Jane Russell “died from an act of self-harm while suffering from depression and the negative effects of online content” – which included Instagram videos depicting suicide.

    And while dispatching a crack squad of Navy Seals to Menlo Park might be too much to hope for, there are other responses that the US Congress could have mandated, such as, here’s an idea, a law. Any law. One that, say, prohibits tech companies from treating dead children as just a cost of doing business.

    Because demanding that tech companies don’t enable paedophiles to find and groom children is the lowest of all low-hanging fruit in the tech regulation space. And yet even that hasn’t happened. What America urgently needs is to act on its anti-trust laws and break up these companies as a first basic step. It needs to take an axe to Section 230, the law that gives platforms immunity from lawsuits for hosting harmful or illegal content.

    It needs basic product safety legislation. Imagine GlaxoSmithKline had launched an experimental new wonder drug last year. A drug that has shown incredible benefits, including curing some forms of cancer and slowing down ageing. It might also cause brain haemorrhages and abort foetuses, but the data on that is not yet in, so we’ll just have to wait and see. There’s a reason that doesn’t happen. They’re called laws. Drug companies go through years of testing. Because they have to. Because at some point, a long time ago, Congress and other legislatures across the world did their job.

    Yet Silicon Valley’s latest extremely disruptive technology, generative AI, was released into the wild last year without even the most basic federally mandated product testing. Last week, deepfake porn images of the most famous female star on the planet, Taylor Swift, flooded social media platforms, which had no legal obligation to take them down – and hence many of them didn’t.

    But who cares? It’s only violence being perpetrated against a woman. It’s only non-consensual sexual assault, algorithmically distributed to millions of people across the planet. Punishing women is the first step in the rollout of any disruptive new technology, so get used to that, and if you think deepfakes are going to stop with pop stars, good luck with that too.

    You thought misinformation during the US election and Brexit vote in 2016 was bad? Well, let’s wait and see what 2024 has to offer. Could there be any possible downside to releasing this untested new technology – one that enables the creation of mass disinformation at scale for no cost – at the exact moment in which more people will go to the polls than at any time in history?

    You don’t actually have to imagine where that might lead because it’s already happened. A deepfake targeting a progressive candidate dropped days before the Slovakian general election in September. It’s impossible to know what impact it had or who created it, but the candidate lost, and the opposition pro-Putin candidate won. CNN reports that the messaging of the deepfake echoed that put out by Russia’s foreign intelligence service just an hour before it dropped. And where was Facebook in all of this, you ask? Where it usually is, refusing to take many of the deepfake posts down.

    Back in Congress, grilling tech execs is something to do to fill the time in between the difficult job of not passing tech legislation. It’s now six years since the Cambridge Analytica scandal, when Zuckerberg became the first major tech executive to be commanded to appear before Congress. That was a revelation, because it felt like Facebook might finally be brought to heel.

    But Wednesday’s outing was Zuckerberg’s eighth. And neither Facebook, nor any other tech platform, has been brought to heel. The US has passed not a single federal law. Meanwhile, Facebook has done some exculpatory techwashing of its name to remove the stench of data scandals and Kremlin infiltration and occasionally offers up its CEO for a ritual slaughtering on the Senate floor.

    To understand America’s end-of-empire waning dominance in the world, its broken legislature and its capture by corporate interests, the symbolism of a senator forcing Zuckerberg to apologise to bereaved parents while Congress – that big white building stormed by insurrectionists who found each other on social media platforms – does absolutely nothing to curb his company’s singular power is as good a place to start as any.

    We’ve had eight years to learn the lessons of 2016 and yet here we are. Britain has responded by weakening the body that protects our elections and degrading our data protection laws to “unlock post-Brexit opportunities”. American congressional committees are now a cargo cult that goes through ritualised motions of accountability. Meanwhile, there’s a new tech wonder drug on the market that may create untold economic opportunities or lethal bioweapons and the destabilisation of what is left of liberal democracy. Probably both.

    Carole Cadwalladr is a reporter and feature writer for the Observer

  • When Mark Zuckerberg can face US senators and claim the moral high ground, we’re through the looking glass | Marina Hyde

    Did you catch a clip of the tech CEOs in Washington this week? The Senate judiciary committee had summoned five CEOs to a hearing titled Big Tech and the Online Child Sexual Exploitation Crisis. There was Meta’s Mark Zuckerberg, TikTok’s Shou Zi Chew, Snapchat’s Evan Spiegel, Discord’s Jason Citron and X’s Linda Yaccarino – and a predictable vibe of “Senator, I’m a parent myself …” Listen, these moguls simply want to provide the tools to help families and friends connect with each other. Why must human misery and untold, tax-avoidant billions attend them at every turn?

    If you did see footage from the hearing, it was probably one of two moments of deliberately clippable news content. Ranking committee member Lindsey Graham addressed Zuckerberg with the words: “I know you don’t mean it to be so, but you have blood on your hands.” Well, ditto, Senator. “You have a product that is killing people,” continued Graham, who strangely has yet to make the same point to the makers of whichever brand of AR-15 he proudly owns, or indeed to the makers of the assault rifles responsible for another record high of US school shootings last year. Firearms fatalities are the number one cause of death among US children and teenagers, a fact the tech CEOs at this hearing politely declined to mention, because no one likes a whatabouterist. And after all, the point of these things is to just get through the posturing of politicians infinitely less powerful than you, then scoot back to behaving precisely as you were before. Zuckerberg was out of there in time to report bumper results and announce Meta’s first ever dividend on Thursday. At time of writing, its shares were soaring.

    Anyhow, if it wasn’t that clip, maybe it was the one of Zuckerberg being goaded by sedition fist-pumper Josh Hawley into apologising to those in the committee room audience who had lost children to suicide following exploitation on his platform. Thanks to some stagey prodding by Senator Hawley, who famously encouraged the mob on 6 January 2021 (before later being filmed running away from them after they stormed the Capitol), Zuckerberg turned round, stood up, and faced his audience of the bereaved. “I’m sorry for everything you’ve all gone through,” he began. Helpfully, a transcribed version of this off-the-cuff moment found its way into a Meta press release minutes after the event.

    So I guess that was the hearing. “Tense”, “heated”, “stunning” – listen, if adjectival cliches were legislation, this exercise would have been something more than pointless. And yet, they’re not and it wasn’t. There really ought to be a genre name for this kind of performative busywork – the theatre of failure, perhaps.

    Other outcomes were once available. Back in 1994, the CEOs of seven big tobacco firms took their oaths before a Senate committee, then spouted a communal line that nicotine wasn’t addictive. Within two years, all seven had quit the tobacco industry – a development not unrelated to the fact that all seven were under investigation by the justice department for perjury. Those were different times, and not just because we probably wouldn’t slap them with the “seven dwarfs” moniker now. These days, you can’t escape the sense that old guys were shouting at Zuckerberg at a hearing six years ago, while he offered 2018’s variation on his favourite blandishment: “We know we have more work to do”. And you suspect they’ll be shouting at him again in five years’ time, when he will still know they have more work to do. “If you’re waiting on these guys to solve the problem,” sniffed Graham of the tech CEOs, “we’re gonna die waiting.” Again, the senator speaks of what he knows. There is always talk of legislation, but there is never really much legislation.

    There’s a line near the start of the movie version of Ready Player One, the cult dystopian book about a VR world that weirdly feels like the lodestar for Zuckerberg’s pivot towards the metaverse: “I was born in 2027,” explains the teenage protagonist, “after the corn syrup droughts, after the bandwidth riots … after people stopped trying to fix problems, and just tried to outlive them.” It was hard to watch any amount of Wednesday’s hearing – it’s hard to watch a lot of news about the intersection of politics and mega-business these days, in fact – and not feel we are in a very similar place. Few of the politicians giving it the hero act could be said to have left the world in a better place than the one in which they found it when they took office. A necrotic form of politics has gripped the Republican party in particular, and this is the vacuum in which they have been downgraded by corporations they don’t even understand, let alone have the will, foresight, or political skill to control.

    “Companies over countries,” as Mark Zuckerberg said a long time ago. This once-unformed thought becomes more realised all the time, with the Meta boss last year explaining that, “Increasingly, the real world is a combination of the physical world we inhabit and the digital world we are building.” The added irony is that the more the Lindsey Grahams fail the real world, the more people retreat further into the unregulated embrace of the worlds that the Mark Zuckerbergs run. It’s going to take so much more than the theatre of failure to solve it – but bad actors currently dominate the bill.

    Marina Hyde is a Guardian columnist

  • Beware the ‘botshit’: why generative AI is such a real and imminent threat to the way we live | André Spicer

    During 2023, the shape of politics to come appeared in a video. In it, Hillary Clinton – the former Democratic party presidential candidate and secretary of state – says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”

    It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).

    The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future. Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost and highly personalised advertising being produced to manipulate voters. The results could be so-called “October surprises” – ie a piece of news that breaks just before the US elections in November, when misinformation is circulated and there is insufficient time to refute it – and the generation of misleading information about electoral administration, such as where polling stations are.

    Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote. During 2024, it is projected that there will be elections in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the European Union, the US and the UK. Many of these elections will not determine just the future of nation states; they will also shape how we tackle global challenges such as geopolitical tensions and the climate crisis. It is likely that each of these elections will be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.

    While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because during the past decade, we have witnessed the role that so-called “bullshit” can play in politics. In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit”.

    In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations”. This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality, such as explanations or images that seem superficially plausible but aren’t actually the correct answer to whatever the question was.

    Humans might use untrue material created by generative AI in an uncritical and thoughtless way. And that could make it harder for people to know what is true and false in the world. In some cases, these risks might be relatively low, for example if generative AI were used for a task that was not very important (such as coming up with some ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as the date of the battle of Waterloo). The real problems arise when the outputs of generative AI have important consequences and can’t easily be verified.

    If AI-produced hallucinations are used to answer important but difficult-to-verify questions, such as the state of the economy or the war in Ukraine, there is a real danger they could create an environment in which some people start to make important voting decisions based on an entirely illusory universe of information. There is a danger that voters could end up living in generated online realities that are based on a toxic mixture of AI hallucinations and political expediency.

    Although AI technologies pose dangers, there are measures that could be taken to limit them. Technology companies could continue to use watermarking, which allows users to easily identify AI-generated content. They could also ensure AIs are trained on authoritative information sources. Journalists could take extra precautions to avoid covering AI-generated stories during an election cycle. Political parties could develop policies to prevent the use of deceptive AI-generated information. Most importantly, voters could exercise their critical judgment by reality-checking important pieces of information they are unsure about.

    The rise of generative AI has already started to fundamentally change many professions and industries. Politics is likely to be at the forefront of this change. The Brookings Institution points out that there are many positive ways generative AI could be used in politics. But at the moment its negative uses are most obvious, and more likely to affect us imminently. It is vital we strive to ensure that generative AI is used for beneficial purposes and does not simply lead to more botshit.

    André Spicer is professor of organisational behaviour at the Bayes Business School at City, University of London. He is the author of the book Business Bullshit

  • How 2023 became the year Congress forgot to ban TikTok

    Banning TikTok in the US seemed almost inevitable at the start of 2023. The previous year saw a trickle of legislative actions against the short-form video app, after dozens of individual states barred TikTok from government devices in late 2022 over security concerns. At the top of the new year, the US House followed suit, and four universities blocked TikTok from campus wifi.The movement to prohibit TikTok grew into a flash flood by spring. CEO Shou Zi Chew was called before Congress for brutal questioning in March. By April – with support from the White House (and Joe Biden’s predecessor) – it seemed a federal ban of the app was not just possible, but imminent.But now, as quickly as the deluge arrived, it has petered out – with the US Senate commerce committee confirming in December it would not be taking up TikTok-related legislation before the end of the year. With the final word from the Senate, 2023 became the year Congress forgot to ban TikTok.“A lot of the momentum that was gained after the initial flurry of attention has faded,” said David Greene, a civil liberties attorney with the Electronic Frontier Foundation (EFF). “It seems now like the idea of a ban was being pushed more so to make political points and less as a serious effort to legislate.”Lots of legislation, little actionThe political war over TikTok centered on allegations that its China-based parent company, ByteDance, could collect sensitive user data and censor content that goes against the demands of the Chinese Communist party.TikTok, which has more than 150 million users in the United States, denies it improperly uses US data and has emphasized its billion-dollar efforts to store that information on servers outside its home country. Reports have cast doubt on the veracity of some of TikTok’s assertions about user data. 
The company declined to comment on a potential federal ban.With distress over the influence of social media giants mounting for years, and tensions with China high after the discovery of a Chinese spy balloon hovering over the US in February 2023, attacks on TikTok became more politically viable for lawmakers on both sides of the aisle. Legislative efforts ensued, and intensified.The House foreign affairs committee voted in March along party lines on a bill aimed at TikTok that Democrats said would require the administration to effectively ban the app and other subsidiaries of ByteDance. The US treasury-led Committee on Foreign Investment in the United States (CFIUS) in March demanded that TikTok’s Chinese owners sell off the app or face the possibility of a ban. Senator Mark Warner, a Democrat from Virginia, and more than two dozen other senators in April sponsored legislation – backed by the White House – that would give the administration new powers to ban TikTok and other foreign-based technologies if they pose national security threats.But none of these laws ever made it to a vote, and many have stalled entirely as lawmakers turned their attention to the boom in artificial intelligence. Warner told Reuters in December that the bill he authored has faced intensive lobbying from TikTok and had little chance of survival. “There is going to be pushback on both ends of the political spectrum,” he said.The Montana effectMontana passed a total statewide ban on TikTok in May, to start on 1 January 2024, setting the stage for a federal one. 
That momentum for a nationwide prohibition ebbed, however, when a US judge last week blocked the legislation from going into effect – a move that TikTok applauded.

“We are pleased the judge rejected this unconstitutional law and hundreds of thousands of Montanans can continue to express themselves, earn a living, and find community on TikTok,” the company’s statement reads.

In a preliminary injunction blocking the ban, US district judge Donald Molloy said the law “oversteps state power and infringes on the constitutional rights of users”. The closely watched decision indicated that broader bans are unlikely to succeed.

“The Montana court blocking the effort to ban TikTok not only threw a wet blanket on any federal efforts to do the same, but sent a clear message to every lawmaker that banning an app is a violation of the first amendment,” said Carl Szabo, general counsel at the freedom of speech advocacy group NetChoice, of which TikTok is a member.

The EFF’s Greene, who also watched the Montana case closely, echoed that the result proved what many free speech advocates have long argued: a broad ban of an app is not viable under US law.

“This confirmed what most people assumed, which is that what is being suggested is blatantly not possible,” he said. “Free speech regulation requires really, really precise tailoring to avoid banning more speech than necessary. And a total ban on an app simply does not do that.”

Political discussions around the ban also exposed a need for comprehensive privacy legislation, Greene said. The same politicians raising concerns about the Chinese government collecting data had done little to address companies like Meta collecting similar reams of data in the US.

“The ideas that were floated were legally problematic and belied a real, sincere interest in addressing privacy harms,” he said.
“I think that can cause anyone to question whether they really cared about users.”

Election year fears

Meanwhile, some analysts think Congress and the White House are unlikely even to attempt to ban TikTok in 2024, an election year, given the app’s popularity with young voters.

Joe Biden’s re-election campaign team has reportedly been debating whether to join TikTok, where the president does not currently have an official page, in an attempt to reach more young voters. Nearly half of Americans between 18 and 30 use TikTok, and 32% of users in that age group say they regularly consume news there. To date, Vivek Ramaswamy is the only Republican presidential candidate to join the app, a move that has elicited tongue-lashings from his opponents in multiple debates.

“The same lawmakers calling for a ban are going to need to pivot to online platforms like TikTok for their upcoming get-out-the-vote efforts,” said Szabo. “To cut off a major avenue of reaching voters during an election year doesn’t make political sense.”

Even as interest in banning TikTok wanes – politically and among voters – the efforts are not entirely dead. Senator Maria Cantwell, a Democrat from Washington, told Reuters she is still working on legislation and in talks with federal agencies, noting that the Senate held a secure briefing last month on concerns about foreign influence by way of social media.

Still, social networks are going to be under the magnifying glass in the coming year, said Szabo.

“As we go into 2024, I will say that control of speech on the internet is going to be even more heated, as lawmakers try to control what people can say about their campaigns,” he said. “I would also expect to see those very same politicians using the platform to raise money and to get out the vote.”

Reuters contributed reporting


    Brawny billionaires, pumped-up politicians: why powerful men are challenging each other to fights

The first rule of insecure masculinity fight club? Tell everyone about it. And I mean everyone. Tweet about it, talk to reporters, shout about it from the rooftops. Make sure the entire world knows that you are a big boy who could beat just about anyone in a fistfight.

Twenty twenty-three, as I’m sure you will have observed, was the year that tech CEOs stepped away from their screens and decided to get physical. Elon Musk, perennially thirsty for attention, was at the center of this embarrassing development. The 52-year-old – who challenged Vladimir Putin to single combat in 2022 – spent much of the year teasing the idea that he was going head-to-head with Mark Zuckerberg in a cage fight. At one point he suggested the fight would be held at the Colosseum in Rome.

Don’t worry, you didn’t miss it. The fight never happened, and will never ever happen, for the simple reason that Musk would get destroyed by Zuckerberg, who has been obsessively training in mixed martial arts (MMA) and won a bunch of medals in a Brazilian jiujitsu tournament. The only way Musk will actually follow through with the cage match is if he manages to get his hands on some kind of brain-implant technology that magically transforms him into a lean, mean, fighting machine. Indeed, I wouldn’t be surprised if Neuralink, Musk’s brain-chip startup, was working on that brief right now. Although seeing as the company is under federal investigation after killing 1,500 animals in testing – many of which died extremely grisly deaths – it may be a while before any such technology comes to fruition.

Musk and Zuck aren’t the only tech execs looking to get physical. Vin Diesel-level biceps have become the latest billionaire status symbol. Just look at Jeff Bezos: his muscles have grown at about the same rate as his bank account. The Airbnb CEO, Brian Chesky, has also been working on getting swole. Back in June, Chesky told the Bloomberg writer Dave Lee that he’d “challenge any leader in tech to bench press”.
He added: “I’ve been waiting for these physical battles in tech. It’s just so funny.”

It’s not just tech bros. Politicians are at it too. Over the summer, Robert F Kennedy Jr posted a video of himself doing push-ups while shirtless, with the caption “Getting in shape for my debates with President Biden!” Which may or may not have been prompted by Biden once challenging an Iowa voter and Donald Trump to a push-up contest.

I don’t know how good Kevin McCarthy is at push-ups, but he’s certainly fond of shoving. In November, the former speaker bumped into the congressman Tim Burchett of Tennessee and reportedly elbowed him in the back. Burchett then chased after him, calling him a “jerk” and a “chicken”. McCarthy, it seems, was angry that Burchett had helped oust him from the speakership in October, making him the first speaker in US history to be removed by his own side.

Just a few hours after that altercation, Markwayne Mullin, a Republican senator from Oklahoma, challenged Sean O’Brien, president of the International Brotherhood of Teamsters, to a physical confrontation during a Senate committee hearing on labor unions. Mullin, a former businessman who regularly boasts about his prowess as an MMA fighter, was miffed that O’Brien had once called him a “greedy CEO” and a “clown” on Twitter. He decided to settle his private grievance during a public hearing, and the two agreed to fight right there and then – yelling at each other to “stand your butt up” and get started. Eventually Bernie Sanders got them to calm down.

Just pause for a moment and imagine acting like this in your own job. I don’t know about you, but I’m pretty sure that if I challenged a colleague to a fight and started yelling at them to “stand their butt up” in the middle of a public meeting, I would face some sort of consequences. In Mullin’s case, the meltdown doesn’t seem to have had any impact on his career. It may even have increased his popularity among his base.
Politicians routinely seem to be held to a lower standard than the rest of us.

If you ignore the fact that we’re being ruled by people with enormous egos and no self-restraint, there is an amusing element to all this. But more than anything, it’s just pathetic, isn’t it? All these grown men are so clearly worried about their masculinity that they feel the need to puff out their chests and show everyone just how strong they are.

The one per cent’s desperate shows of bravado are part of a broader insecurity about masculinity in the west, which plenty of snake-oil salesmen and opportunists are exploiting for all it’s worth. In 2022, for example, the rightwing commentator Tucker Carlson came out with a documentary called The End of Men, which argues that testosterone counts are plummeting and “real men” are an endangered species. The documentary was full of bizarre ways to counteract this, including testicle tanning. I’m not sure how many tech bros and politicians are regularly exposing their balls to red-light therapy, but there does seem to be a widespread preoccupation with “bromeopathic” ways to increase testosterone. Testosterone blood-test “T parties” are apparently a growing trend among tech types: a bunch of founders get together and find ways to raise their T.

Do whatever you like in private, I say. Tan your testicles, go to T parties, organize push-up competitions. Just don’t foist your masculine insecurities on the rest of us. Stop challenging each other to public fights and getting into brawls in government. It seems to be easy enough for women to follow this advice, doesn’t it? I mean … has a female CEO or politician ever tried to organize a public fistfight with a female counterpart? I’ve got a weird feeling the answer is “no, they would be a complete laughingstock if they did”, but if anyone can find me a recent example then I’ll eat my hat. Or – on second thoughts – I’ll throw my hat in the ring and fight Elon Musk myself in the Roman Colosseum.
Consider that a challenge.


    Meta allows ads saying 2020 election was rigged on Facebook and Instagram

Meta is now allowing Facebook and Instagram to run political advertising saying the 2020 election was rigged.

The policy was reportedly introduced quietly in 2022 after the US midterm primary elections, according to the Wall Street Journal, citing people familiar with the decision. The previous policy prevented Republican candidates from running ads arguing during that campaign that the 2020 election, which Donald Trump lost to Joe Biden, was stolen.

Meta will now allow political advertisers to say past elections were “rigged” or “stolen”, although it still prevents them from questioning whether ongoing or future elections are legitimate.

Other social media platforms have been making changes to their policies ahead of the 2024 presidential election, for which online messaging is expected to be fiercely contested.

In August, X (formerly known as Twitter) said it would reverse its ban on political ads, originally instituted in 2019.

Earlier, in June, YouTube said it would stop removing content falsely claiming that the 2020 election, or other past US presidential elections, were fraudulent, reversing the stance it took after the 2020 election. It said the move aimed to safeguard the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions”.

Meta, too, reportedly weighed free-speech considerations in making its decision. The Journal reported that Nick Clegg, Meta’s president of global affairs, took the position that the company should not decide whether elections were legitimate.

The Journal also reported that Donald Trump ran a Facebook ad in August that was apparently only allowed because of the new rules, in which he lied: “We won in 2016.
We had a rigged election in 2020 but got more votes than any sitting president.”

The Tech Oversight Project decried the change in a statement: “We now know that Mark Zuckerberg and Meta will lie to Congress, endanger the American people, and continually threaten the future of our democracy,” said Kyle Morse, the group’s deputy executive director. “This announcement is a horrible preview of what we can expect in 2024.”

Combined with recent Meta moves to reduce the amount of political content shared organically on Facebook, the prominence of campaign ads questioning elections could rise dramatically in 2024.

“Today you can create hundreds of pieces of content in the snap of a finger and you can flood the zone,” Gina Pak, chief executive of Tech for Campaigns, a digital political marketing organization that works with Democrats, told the Journal.

Over the past year Meta has laid off about 21,000 employees, many of whom worked on election policy.

Facebook was accused of having a malign influence on the 2016 US presidential election by failing to tackle the spread of misinformation in the runup to the vote, in which Trump beat Hillary Clinton. Fake news – such as articles slandering Clinton as a murderer or claiming the pope had endorsed Trump – spread on the network as non-journalists, including a cottage industry of teenagers in Macedonia, published false pro-Trump sites to reap advertising dollars when the stories went viral.

Trump later appropriated the term “fake news” to smear legitimate reporting of his own falsehoods.