More stories

  • Trump tells Logan Paul he used AI to ‘so beautifully’ rewrite a speech

    Donald Trump has said he used a speech generated by artificial intelligence (AI) after being impressed by the content.
    The former US president, whose oratory is noted for its rambling, off-the-cuff style but also for its demagoguery, made the claim in an interview on Logan Paul’s podcast in which he lauded AI as “a superpower” but also warned of its potential dangers.
    He said the rewritten speech came during a meeting with one of the industry’s “top people”, whom he did not identify.
    “I had a speech rewritten by AI out there, one of the top people,” Trump said. “He said, ‘Oh, you’re gonna make a speech? Yeah?’ He goes, click, click, click, and like, 15 seconds later, he shows me my speech that’s written that’s great, so beautifully. I said, ‘I’m gonna use this.’ I’ve never seen anything like it.” Trump did not say at what event he had used the AI-generated speech.
    He predicted that AI’s oratorical gifts could sound the death knell for speechwriters, long a part of Washington’s political landscape.
    “One industry I think that will be gone are these wonderful speechwriters,” he said. “I’ve never seen anything like it, and so quickly, a matter of literally minutes, it’s done. It’s a little bit scary.”
    Asked what he said to his speechwriter, Trump jokingly responded, “You’re fired,” a line associated with The Apprentice, the TV reality show that helped propel his political rise.
    Trump, the presumptive 2024 Republican presidential nominee, also acknowledged that AI had dangers, especially in regard to deepfakes. He warned of a hypothetical scenario in which a faked voice told a foreign power that a US nuclear attack was being launched, possibly triggering a retaliatory strike.
    “If you’re the president of the United States, and you announced that 13 missiles have been sent to, let’s not use the name of a country,” he said. “We have just sent 13 nuclear missiles heading to somewhere, and they will hit their targets in 12 minutes and 59 seconds, and you’re that country.”
    He said he had asked the entrepreneur Elon Musk – referring to him by his first name – if Russia or China would be able to identify that the attack warning was fake and was told that they would have to use a code to check its veracity.
    “Who the hell’s going to check. You got, like, 12 minutes – let’s check the code,” he said. “So what do they do when they see this? They have maybe a counterattack. It’s so dangerous in that way.”

  • Deepfakes are here and can be dangerous, but ignore the alarmists – they won’t harm our elections | Ciaran Martin

    Sixteen days before the Brexit referendum, and only two days before the deadline to apply to cast a ballot, the IT system for voter registrations collapsed. The remain and leave campaigns were forced to agree a 48-hour registration extension. Around the same time, evidence was beginning to emerge of a major Russian “hack-and-leak” operation targeting the US presidential election. Inevitably, questions arose as to whether the Russians had successfully disrupted the Brexit vote.
    The truth was more embarrassingly simple. A comprehensive technical investigation, supported by the National Cyber Security Centre – which I headed at the time – set out in detail what had happened. A TV debate on Brexit had generated unexpected interest. Applications spiked to double those projected. The website couldn’t cope and crashed. There was no sign of any hostile activity.
    But this conclusive evidence did not stop a parliamentary committee, a year later, saying that it did “not rule out the possibility that there was foreign interference” in the incident. No evidence was provided for this remarkable assertion. What actually happened was a serious failure of state infrastructure, but it was not a hostile act.
    This story matters because it has become too easy – even fashionable – to cast the integrity of elections into doubt. “Russia caused Brexit” is nothing more than a trope that provides easy comfort to the losing side. There was, and is, no evidence of any successful cyber operations or other digital interference in the UK’s 2016 vote.
    But Brexit is far from the only example of such electoral alarmism. In its famous report on Russia in 2020, the Intelligence and Security Committee correctly said that the first detected attempt by Russia to interfere in British politics occurred in the context of the Scottish referendum campaign in 2014.
    However, the committee did not add that the quality of such efforts was risible, and the impact of them was zero. Russia has been waging such campaigns against the UK and other western democracies for years. Thankfully, though, it hasn’t been very good at it. At least so far.
    Over the course of the past decade, there are only two instances where digital interference can credibly be seen to have severely affected a democratic election anywhere in the world. The US in 2016 is undoubtedly one. The other is Slovakia last year, when an audio deepfake seemed to have an impact on the polls late on.
    The incident in Slovakia fuelled part of a new wave of hysteria about electoral integrity. Now the panic is all about deepfakes. But we risk making exactly the same mistake with deepfakes as we did with cyber-attacks on elections: confusing activity and intent with impact, and what might be technically possible with what is realistically achievable.
    So far, it has proved remarkably hard to fool huge swathes of voters with deepfakes. Many of them, including much of China’s information operations, are poor in quality. Even some of the better ones – like a recent Russian fake of Ukrainian TV purporting to show Kyiv admitting it was behind the Moscow terror attacks – look impressive, but are so wholly implausible in substance they are not believed by anyone.
    Moreover, a co-ordinated response by a country to a deepfake can blunt its impact: think of the impressive British response to the attempt to smear Sadiq Khan last November, when the government security minister lined up behind the Labour mayor of London in exhorting the British media and public to pay no attention to a deepfake audio being circulated.
    This was in marked contrast to events in Slovakia, where gaps in Meta’s removal policy, and the country’s electoral reporting restrictions, made it much harder to circulate the message that the controversial audio was fake. If a deepfake does cut through in next month’s British election, what matters is how swiftly and comprehensively it is debunked.
    None of this is to be complacent about the reality that hostile states are trying to interfere in British politics. They are. And with fast-developing tech and techniques, the threat picture can change. “Micro” operations, such as a localised attempt to use AI to persuade voters in New Hampshire to stay at home during the primaries, are one such area of concern. In the course of the UK campaign, one of my main worries would be about targeted local disinformation and deepfake campaigns in individual contests. It is important that the government focuses resources and capabilities on blunting these operations.
    But saying that hostile states are succeeding in interfering in our elections, or that they are likely to, without providing any tangible evidence is not a neutral act. In fact, it’s really dangerous. If enough supposedly credible voices loudly cast aspersions on the integrity of elections, at least some voters will start to believe them. And if that happens, we will have done the adversaries’ job for them.
    There is a final reason why we should be cautious about the “something-must-be-done” tendency where the risk of electoral interference is concerned. State intervention in these matters is not some cost-free, blindingly obvious solution that the government is too complacent to use. If false information is so great a problem that it requires government action, that requires, in effect, creating an arbiter of truth. To which arm of the state would we wish to assign this task?
    Ciaran Martin is a professor at the Blavatnik School of Government at the University of Oxford, and a former chief executive of the National Cyber Security Centre

  • How to spot a deepfake: the maker of a detection tool shares the key giveaways

    You – a human, presumably – are a crucial part of detecting whether a photo or video is made by artificial intelligence.
    There are detection tools, made both commercially and in research labs, that can help. To use these deepfake detectors, you upload or link a piece of media that you suspect could be fake, and the detector will give a percent likelihood that it was AI-generated.
    But your senses and an understanding of some key giveaways provide a lot of insight when analyzing media to see whether it’s a deepfake.
    While regulations for deepfakes, particularly in elections, lag behind the quick pace of AI advancements, we have to find ways to figure out whether an image, audio or video is actually real.
    Siwei Lyu made one of those detection tools, the DeepFake-o-meter, at the University of Buffalo. His tool is free and open-source, compiling more than a dozen algorithms from other research labs in one place. Users can upload a piece of media and run it through these different labs’ tools to get a sense of whether it could be AI-generated.
    The DeepFake-o-meter shows both the benefits and limitations of AI-detection tools. When we ran a few known deepfakes through the various algorithms, the detectors gave ratings for the same video, photo or audio recording ranging from 0% to 100% likelihood of being AI-generated.
    AI, and the algorithms used to detect it, can be biased by the way it’s taught. At least in the case of the DeepFake-o-meter, the tool is transparent about that variability in results, while with a commercial detector bought in the app store, it’s less clear what its limitations are, he said.
    “I think a false image of reliability is worse than low reliability, because if you trust a system that is fundamentally not trustworthy to work, it can cause trouble in the future,” Lyu said.
    His system is still barebones for users, having launched publicly just in January of this year. But his goal is that journalists, researchers, investigators and everyday users will be able to upload media to see whether it’s real. His team is working on ways to rank the various algorithms it uses for detection to inform users which detector would work best for their situation. Users can opt in to sharing the media they upload with Lyu’s research team to help them better understand deepfake detection and improve the website.
    Lyu often serves as an expert source for journalists trying to assess whether something could be a deepfake, so he walked us through a few well-known instances of deepfakery from recent memory to show the ways we can tell they aren’t real. Some of the obvious giveaways have changed over time as AI has improved, and will change again.
    “A human operator needs to be brought in to do the analysis,” he said. “I think it is crucial to be a human-algorithm collaboration. Deepfakes are a social-technical problem. It’s not going to be solved purely by technology. It has to have an interface with humans.”
    Audio
    A robocall that circulated in New Hampshire using an AI-generated voice of President Joe Biden encouraged voters there not to turn out for the Democratic primary, one of the first major instances of a deepfake in this year’s US elections.

    When Lyu’s team ran a short clip of the robocall through five algorithms on the DeepFake-o-meter, only one of the detectors came back at more than 50% likelihood of AI – that one said it had a 100% likelihood. The other four ranged from 0.2% to 46.8% likelihood. A longer version of the call prompted three of the five detectors to come in at more than 90% likelihood.
    This tracks with our experience creating audio deepfakes: they’re harder to pick out because you’re relying solely on your hearing, and easier to generate because there are tons of examples of public figures’ voices for AI to use to make a person’s voice say whatever they want.
    But there are some clues in the robocall, and in audio deepfakes in general, to look out for.
    AI-generated audio often has a flatter overall tone and is less conversational than how we typically talk, Lyu said. You don’t hear much emotion. There may not be proper breathing sounds, like taking a breath before speaking.
    Pay attention to the background noises, too. Sometimes there are no background noises when there should be. Or, in the case of the robocall, there’s a lot of noise mixed into the background, almost to give an air of realness, that actually sounds unnatural.
    Photos
    With photos, it helps to zoom in and examine closely for any “inconsistencies with the physical world or human pathology”, like buildings with crooked lines or hands with six fingers, Lyu said. Little details like hair, mouths and shadows can hold clues to whether something is real.
    Hands were once a clearer tell for AI-generated images because they would more frequently end up with extra appendages, though the technology has improved and that’s becoming less common, Lyu said.
    We sent the photos of Trump with Black voters that a BBC investigation found had been AI-generated through the DeepFake-o-meter. Five of the seven image-deepfake detectors came back with a 0% likelihood that the fake image was fake, while one clocked in at 51%. The remaining detector said no face had been detected.
    Lyu’s team noted unnatural areas around Trump’s neck and chin, people’s teeth looking off and webbing around some fingers.
    Beyond these visual oddities, AI-generated images just look too glossy in many cases.
    “It’s very hard to put into quantitative terms, but there is this overall view and look that the image looks too plastic or like a painting,” Lyu said.
    Videos
    Videos, especially those of people, are harder to fake than photos or audio. In some AI-generated videos without people, it can be harder to figure out whether imagery is real, though those aren’t “deepfakes” in the sense that the term typically refers to people’s likenesses being faked or altered.
    For the video test, we sent a deepfake of Ukrainian president Volodymyr Zelenskiy that shows him telling his armed forces to surrender to Russia, which did not happen.
    The visual cues in the video include unnatural eye-blinking that shows some pixel artifacts, Lyu’s team said. The edges of Zelenskiy’s head aren’t quite right; they’re jagged and pixelated, a sign of digital manipulation.
    Some of the detection algorithms look specifically at the lips, because current AI video tools will mostly change the lips to say things a person didn’t say. The lips are where most inconsistencies are found. An example would be if a letter sound requires the lips to be closed, like a B or a P, but the deepfake’s mouth is not completely closed, Lyu said. When the mouth is open, the teeth and tongue appear off, he said.
    The video, to us, is more clearly fake than the audio or photo examples we flagged to Lyu’s team. But of the six detection algorithms that assessed the clip, only three came back with very high likelihoods of AI generation (more than 90%). The other three returned very low likelihoods, ranging from 0.5% to 18.7%.
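    The workflow Lyu describes – running one clip through several detectors and reading the spread of scores rather than trusting any single number – can be sketched in a few lines of code. The snippet below is purely illustrative: the detector names, the exact middle values and the summarise function are hypothetical and are not the DeepFake-o-meter’s actual interface; the example scores simply mirror the rough ranges reported above for the Zelenskiy clip.

    ```python
    # Hypothetical sketch: combine the outputs of several deepfake detectors and
    # surface how much they disagree, instead of collapsing them into one verdict.
    # Detector names and intermediate values are invented for illustration; this
    # is not the DeepFake-o-meter's real API.
    from statistics import median

    def summarise_scores(scores: dict[str, float]) -> str:
        """Report each detector's AI-likelihood (0-100%) plus the median and spread."""
        values = sorted(scores.values())
        spread = values[-1] - values[0]
        lines = [f"{name}: {value:.1f}% likely AI-generated"
                 for name, value in sorted(scores.items())]
        lines.append(f"median {median(values):.1f}%, range {values[0]:.1f}-{values[-1]:.1f}%")
        if spread > 50:
            # Large disagreement is itself a signal: hand the clip to a human reviewer.
            lines.append("detectors disagree strongly - manual review recommended")
        return "\n".join(lines)

    # Roughly the six video-detector results described above: three above 90%,
    # three between 0.5% and 18.7%.
    print(summarise_scores({
        "detector_a": 96.0, "detector_b": 93.0, "detector_c": 91.0,
        "detector_d": 18.7, "detector_e": 7.5, "detector_f": 0.5,
    }))
    ```

    Reporting the full spread this way keeps a person in the loop, which is the human-algorithm collaboration Lyu argues the problem ultimately requires.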

  • US cites AI deepfakes as reason to keep Biden recording with Robert Hur secret

    The US Department of Justice is making a novel legal argument to keep a recording of an interview with Joe Biden from becoming public. In a filing late last week, the department cited the risk of AI-generated deepfakes as one of the reasons it refuses to release audio of the president’s interview with special counsel Robert Hur. The conversation about Biden’s handling of classified documents is a source of heated political contention, with Republicans pushing for release of the recordings and the White House moving to block them.
    The justice department’s filing, which it released late on Friday night, argues that the recording should not be released on a variety of grounds, including privacy interests and executive privilege. One section of the filing, however, is specifically dedicated to the threat of deepfakes and disinformation, stating that there is substantial risk people could maliciously manipulate the audio if it were to be made public.
    “The passage of time and advancements in audio, artificial intelligence, and ‘deep fake’ technologies only amplify concerns about malicious manipulation of audio files,” the justice department stated. “If the audio recording is released here, it is easy to foresee that it could be improperly altered, and that the altered file could be passed off as an authentic recording and widely distributed.”
    The filing presents a novel argument about the threat of AI-generated disinformation from the release of government materials, potentially setting up future legal battles over the balance between transparency and preventing the spread of misinformation.
    “A malicious actor could slow down the speed of the recording or insert words that President Biden did not say or delete words that he did say,” the filing argues. “That problem is exacerbated by the fact that there is now widely available technology that can be used to create entirely different audio ‘deepfakes’ based on a recording.”
    Biden’s interview with Hur reignited a longstanding conservative campaign of questioning Biden’s mental faculties and drawing attention to his age, which critics claim make him unfit to be president. While Hur’s report into classified documents found at Biden’s private residence did not result in charges against him, the special counsel’s description of him as an “elderly man with poor memory” became ammunition for Republicans and prompted Biden to defend his mental fitness.
    Although transcripts of Hur’s interview with Biden are public, conservative groups and House Republicans have taken legal action, filed Freedom of Information Act requests and demanded the release of recorded audio from the conversation as Biden campaigns against Donald Trump. Biden has asserted executive privilege to prevent the release of the audio, while the latest justice department filing pushes back against many of the conservative claims about the recording.
    The justice department’s filing argues that releasing the recording would create increased public awareness that audio of the interview is circulating, making it more believable when people encounter doctored versions of it.
    A number of politicians have become the target of deepfakes created in attempts to swing political opinion, including Biden. A robocall earlier this year that mimicked Biden’s voice and told people not to vote in New Hampshire’s Democratic primary was sent to thousands of people. The political consultant allegedly behind the disinformation campaign is now facing criminal charges and a potential $6m fine.

  • Facebook and Instagram to label digitally altered content ‘made with AI’

    Meta, owner of Facebook and Instagram, announced major changes to its policies on digitally created and altered media on Friday, before elections poised to test its ability to police deceptive content generated by artificial intelligence technologies.
    The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on Facebook and Instagram, expanding a policy that previously addressed only a narrow slice of doctored videos, the vice-president of content policy, Monika Bickert, said in a blogpost.
    Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance”, regardless of whether the content was created using AI or other tools. Meta will begin applying the more prominent “high-risk” labels immediately, a spokesperson said.
    The approach will shift the company’s treatment of manipulated content, moving from a focus on removing a limited set of posts toward keeping the content up while providing viewers with information about how it was made.
    Meta previously announced a scheme to detect images made using other companies’ generative AI tools by using invisible markers built into the files, but did not give a start date at the time.
    A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.
    The changes come months before a US presidential election in November that tech researchers warn may be transformed by generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.
    In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest the US president had behaved inappropriately.
    The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.
    The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually did.

  • Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos

    Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.
    Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.
    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.
    The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.
    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.
    The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.
    “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”
    Clegg said each company “quite rightly has its own set of content policies”.
    “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play Whac-a-Mole and finding everything that you think may mislead someone.”
    Several political leaders from Europe and the US also joined Friday’s announcement. Vera Jourová, the European Commission vice-president, said that while such an agreement can’t be comprehensive, “it contains very impactful and positive elements”. She also urged fellow politicians to take responsibility to not use AI tools deceptively and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states”.
    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and most recently Indonesia have already done so.
    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked the US president Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.
    Just days before Slovakia’s elections last September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.
    Politicians also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.
    The accord calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”.
    It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.
    Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out and the companies have faced pressure to do more.
    That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.
    The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.
    Many social media companies already have policies in place to deter deceptive posts about electoral processes – AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation” as well as other false posts meant to interfere with someone’s civic participation.
    Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step” but he’d still like to see social media companies taking other actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.
    Lisa Gilbert, executive vice-president of the advocacy group Public Citizen, argued Friday that the accord is “not enough” and AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems”.
    In addition to the companies that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.
    Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn’t immediately respond to a request for comment Friday.
    The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free-speech absolutist”.
    In a statement Friday, X CEO Linda Yaccarino said “every citizen and company has a responsibility to safeguard free and fair elections”.
    “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.

  • When dead children are just the price of doing business, Zuckerberg’s apology is empty | Carole Cadwalladr

    I don’t generally approve of blood sports but I’m happy to make an exception for the hunting and baiting of Silicon Valley executives in a congressional committee room. But then I like expensive, pointless spectacles. And waterboarding tech CEOs in Congress is right up there with firework displays, a brief, thrillingly meaningless sensation on the retina and then darkness.
    Last week’s grilling of Mark Zuckerberg and his fellow Silicon Valley Übermenschen was a classic of the genre: front pages, headlines, and a genuinely stand-out moment of awkwardness in which he was forced to face victims for the first time ever and apologise: stricken parents holding the photographs of their dead children lost to cyberbullying and sexual exploitation on his platform.
    Less than six hours later, his company delivered its quarterly results, Meta’s stock price surged by 20.3%, delivering a $200bn bump to the company’s market capitalisation and, if you’re counting, which as CEO he presumably does, a $700m sweetener for Zuckerberg himself. Those who listened to the earnings call tell me there was no mention of dead children.
    A day later, Biden announced, “If you harm an American, we will respond”, and dropped missiles on more than 80 targets across Syria and Iraq. Sure bro, just so long as the Americans aren’t teenagers with smartphones. US tech companies routinely harm Americans, and in particular, American children, though to be fair they routinely harm all other nationalities’ children too: the Wall Street Journal has shown Meta’s algorithms enable paedophiles to find each other. New Mexico’s attorney general is suing the company for being the “largest marketplace for predators and paedophiles globally”. A coroner in Britain found that 14-year-old Molly Jane Russell “died from an act of self-harm while suffering from depression and the negative effects of online content” – which included Instagram videos depicting suicide.
    And while dispatching a crack squad of Navy Seals to Menlo Park might be too much to hope for, there are other responses that the US Congress could have mandated, such as, here’s an idea, a law. Any law. One that, say, prohibits tech companies from treating dead children as just a cost of doing business.
    Because demanding that tech companies don’t enable paedophiles to find and groom children is the lowest of all low-hanging fruit in the tech regulation space. And yet even that hasn’t happened yet. What America urgently needs is to act on its anti-trust laws and break up these companies as a first basic step. It needs to take an axe to Section 230, the law that gives platforms immunity from lawsuits for hosting harmful or illegal content.
    It needs basic product safety legislation. Imagine GlaxoSmithKline launched an experimental new wonder drug last year. A drug that has shown incredible benefits, including curing some forms of cancer and slowing down ageing. It might also cause brain haemorrhages and abort foetuses, but the data on that is not yet in, so we’ll just have to wait and see. There’s a reason that doesn’t happen. They’re called laws. Drug companies go through years of testing. Because they have to. Because at some point, a long time ago, Congress and other legislatures across the world did their job.
    Yet Silicon Valley’s latest extremely disruptive technology, generative AI, was released into the wild last year without even the most basic federally mandated product testing. Last week, deep fake porn images of the most famous female star on the planet, Taylor Swift, flooded social media platforms, which had no legal obligation to take them down – and hence many of them didn’t.
    But who cares? It’s only violence being perpetrated against a woman. It’s only non-consensual sexual assault, algorithmically distributed to millions of people across the planet. Punishing women is the first step in the rollout of any disruptive new technology, so get used to that, and if you think deep fakes are going to stop with pop stars, good luck with that too.
    You thought misinformation during the US election and Brexit vote in 2016 was bad? Well, let’s wait and see what 2024 has to offer. Could there be any possible downside to releasing this untested new technology – one that enables the creation of mass disinformation at scale for no cost – at the exact moment in which more people will go to the polls than at any time in history?
    You don’t actually have to imagine where that might lead because it’s already happened. A deep fake targeting a progressive candidate dropped days before the Slovakian general election in September. It’s impossible to know what impact it had or who created it, but the candidate lost, and the opposition pro-Putin candidate won. CNN reports that the messaging of the deepfake echoed that put out by Russia’s foreign intelligence service, just an hour before it dropped. And where was Facebook in all of this, you ask? Where it usually is, refusing to take many of the deep fake posts down.
    Back in Congress, grilling tech execs is something to do to fill the time in between the difficult job of not passing tech legislation. It’s now six years since the Cambridge Analytica scandal, when Zuckerberg became the first major tech executive to be commanded to appear before Congress. That was a revelation because it felt like Facebook might finally be brought to heel.
    But Wednesday’s outing was Zuckerberg’s eighth. And neither Facebook, nor any other tech platform, has been brought to heel. The US has passed not a single federal law. Meanwhile, Facebook has done some exculpatory techwashing of its name to remove the stench of data scandals and Kremlin infiltration and occasionally offers up its CEO for a ritual slaughtering on the Senate floor.
    To understand America’s end-of-empire waning dominance in the world, its broken legislature and its capture by corporate interests, the symbolism of a senator forcing Zuckerberg to apologise to bereaved parents while Congress – that big white building stormed by insurrectionists who found each other on social media platforms – does absolutely nothing to curb his company’s singular power is as good as any place to start.
    We’ve had eight years to learn the lessons of 2016 and yet here we are. Britain has responded by weakening the body that protects our elections and degrading our data protection laws to “unlock post-Brexit opportunities”. American congressional committees are now a cargo cult that go through ritualised motions of accountability. Meanwhile, there’s a new tech wonder drug on the market that may create untold economic opportunities or lethal bioweapons and the destabilisation of what is left of liberal democracy. Probably both.
    Carole Cadwalladr is a reporter and feature writer for the Observer