More stories

  • Trump tells Logan Paul he used AI to ‘so beautifully’ rewrite a speech

    Donald Trump has said he used a speech generated by artificial intelligence (AI) after being impressed by the content.

    The former US president, whose oratory is noted for its rambling, off-the-cuff style but also for its demagoguery, made the claim in an interview on Logan Paul’s podcast, in which he lauded AI as “a superpower” but also warned of its potential dangers.

    He said the rewritten speech came during a meeting with one of the industry’s “top people”, whom he did not identify.

    “I had a speech rewritten by AI out there, one of the top people,” Trump said. “He said, ‘Oh, you’re gonna make a speech? Yeah?’ He goes, click, click, click, and like, 15 seconds later, he shows me my speech that’s written that’s great, so beautifully. I said, ‘I’m gonna use this.’ I’ve never seen anything like it.” Trump did not say at what event he had used the AI-generated speech.

    He predicted that AI’s oratorical gifts could sound the death knell for speechwriters, long a part of Washington’s political landscape.

    “One industry I think that will be gone are these wonderful speechwriters,” he said. “I’ve never seen anything like it, and so quickly, a matter of literally minutes, it’s done. It’s a little bit scary.”

    Asked what he said to his speechwriter, Trump jokingly responded, “You’re fired,” a line associated with The Apprentice, the reality TV show that helped propel his political rise.

    Trump, the presumptive 2024 Republican presidential nominee, also acknowledged that AI had dangers, especially in regard to deepfakes. He warned of an imaginary situation in which a faked voice warned a foreign power that a US nuclear attack was being launched, possibly triggering a retaliatory strike.

    “If you’re the president of the United States, and you announced that 13 missiles have been sent to, let’s not use the name of a country,” he said. “We have just sent 13 nuclear missiles heading to somewhere, and they will hit their targets in 12 minutes and 59 seconds, and you’re that country.”

    He said he had asked the entrepreneur Elon Musk – referring to him by his first name – whether Russia or China would be able to identify that the attack warning was fake, and was told that they would have to use a code to check its veracity.

    “Who the hell’s going to check? You got, like, 12 minutes – let’s check the code,” he said. “So what do they do when they see this? They have maybe a counterattack. It’s so dangerous in that way.”

  • How to spot a deepfake: the maker of a detection tool shares the key giveaways

    You – a human, presumably – are a crucial part of detecting whether a photo or video is made by artificial intelligence.

    There are detection tools, made both commercially and in research labs, that can help. To use these deepfake detectors, you upload or link a piece of media that you suspect could be fake, and the detector will give a percent likelihood that it was AI-generated. But your senses and an understanding of some key giveaways provide a lot of insight when analyzing media to see whether it’s a deepfake.

    While regulations for deepfakes, particularly in elections, lag behind the quick pace of AI advancements, we have to find ways to figure out whether an image, audio clip or video is actually real.

    Siwei Lyu built one of these detection tools, the DeepFake-o-meter, at the University at Buffalo. His tool is free and open source, compiling more than a dozen algorithms from other research labs in one place. Users can upload a piece of media and run it through these different labs’ tools to get a sense of whether it could be AI-generated.

    The DeepFake-o-meter shows both the benefits and limitations of AI-detection tools. When we ran a few known deepfakes through the various algorithms, the detectors gave ratings for the same video, photo or audio recording ranging from 0% to 100% likelihood of being AI-generated.

    AI, and the algorithms used to detect it, can be biased by the way it’s taught. At least in the case of the DeepFake-o-meter, the tool is transparent about that variability in results, while with a commercial detector bought in the app store it’s less clear what its limitations are, Lyu said.

    “I think a false image of reliability is worse than low reliability, because if you trust a system that is fundamentally not trustworthy to work, it can cause trouble in the future,” Lyu said.

    His system is still bare-bones for users, having launched publicly just in January of this year. But his goal is that journalists, researchers, investigators and everyday users will be able to upload media to see whether it’s real. His team is working on ways to rank the various algorithms it uses for detection to inform users which detector would work best for their situation. Users can opt in to sharing the media they upload with Lyu’s research team to help them better understand deepfake detection and improve the website.

    Lyu often serves as an expert source for journalists trying to assess whether something could be a deepfake, so he walked us through a few well-known instances of deepfakery from recent memory to show the ways we can tell they aren’t real. Some of the obvious giveaways have changed over time as AI has improved, and will change again.

    “A human operator needs to be brought in to do the analysis,” he said. “I think it is crucial to be a human-algorithm collaboration. Deepfakes are a social-technical problem. It’s not going to be solved purely by technology. It has to have an interface with humans.”

    Audio

    A robocall that circulated in New Hampshire using an AI-generated voice of President Joe Biden encouraged voters there not to turn out for the Democratic primary – one of the first major instances of a deepfake in this year’s US elections.

    When Lyu’s team ran a short clip of the robocall through five algorithms on the DeepFake-o-meter, only one of the detectors came back at more than 50% likelihood of AI – that one said it had a 100% likelihood. The other four ranged from 0.2% to 46.8% likelihood. A longer version of the call prompted three of the five detectors to come in at more than 90% likelihood.

    This tracks with our experience creating audio deepfakes: they’re harder to pick out because you’re relying solely on your hearing, and easier to generate because there are tons of examples of public figures’ voices for AI to use to make a person’s voice say whatever they want.

    But there are some clues in the robocall, and in audio deepfakes in general, to look out for.

    AI-generated audio often has a flatter overall tone and is less conversational than how we typically talk, Lyu said. You don’t hear much emotion. There may not be proper breathing sounds, like taking a breath before speaking.

    Pay attention to the background noises, too. Sometimes there are no background noises when there should be. Or, in the case of the robocall, there’s a lot of noise mixed into the background, almost to give an air of realness that actually sounds unnatural.

    Photos

    With photos, it helps to zoom in and examine closely for any “inconsistencies with the physical world or human pathology”, like buildings with crooked lines or hands with six fingers, Lyu said. Little details like hair, mouths and shadows can hold clues to whether something is real.

    Hands were once a clearer tell for AI-generated images because they would more frequently end up with extra appendages, though the technology has improved and that’s becoming less common, Lyu said.

    We sent the photos of Trump with Black voters that a BBC investigation found had been AI-generated through the DeepFake-o-meter. Five of the seven image-deepfake detectors came back with a 0% likelihood that the known fake was AI-generated, while one clocked in at 51%. The remaining detector said no face had been detected.

    Lyu’s team noted unnatural areas around Trump’s neck and chin, people’s teeth looking off and webbing around some fingers.

    Beyond these visual oddities, AI-generated images often just look too glossy.

    “It’s very hard to put into quantitative terms, but there is this overall view and look that the image looks too plastic or like a painting,” Lyu said.

    Videos

    Videos, especially those of people, are harder to fake than photos or audio. In some AI-generated videos without people, it can be harder to figure out whether the imagery is real, though those aren’t “deepfakes” in the sense that the term typically refers to people’s likenesses being faked or altered.

    For the video test, we sent a deepfake of the Ukrainian president, Volodymyr Zelenskiy, that shows him telling his armed forces to surrender to Russia, which did not happen.

    The visual cues in the video include unnatural eye-blinking that shows some pixel artifacts, Lyu’s team said. The edges of Zelenskiy’s head aren’t quite right; they’re jagged and pixelated, a sign of digital manipulation.

    Some of the detection algorithms look specifically at the lips, because current AI video tools will mostly change the lips to say things a person didn’t say, and the lips are where most inconsistencies are found. An example would be if a letter sound requires the lips to be closed, like a B or a P, but the deepfake’s mouth is not completely closed, Lyu said. When the mouth is open, the teeth and tongue appear off, he said.

    The video, to us, is more clearly fake than the audio or photo examples we flagged to Lyu’s team. But of the six detection algorithms that assessed the clip, only three came back with very high likelihoods of AI generation (more than 90%). The other three returned very low likelihoods, ranging from 0.5% to 18.7%.
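    The wide spread in those detector scores suggests a practical pattern for anyone scripting this kind of check: run every available detector and report the disagreement itself, rather than a single authoritative percentage. Below is a minimal Python sketch of that idea; the detector names, scores and file path are hypothetical stand-ins for illustration, not the DeepFake-o-meter’s actual interface.

    ```python
    # Minimal sketch: run one media file through several deepfake detectors
    # and report the spread of scores instead of a single number.
    # The detector functions below are hypothetical stand-ins; this is not
    # the DeepFake-o-meter's actual API.
    from statistics import mean

    def run_detectors(media_path, detectors):
        """Collect each detector's 0.0-1.0 likelihood of AI generation."""
        return {name: fn(media_path) for name, fn in detectors.items()}

    def summarize(scores, high=0.9, low=0.1):
        """Summarize the spread; any disagreement defers to human review."""
        values = list(scores.values())
        if all(v >= high for v in values):
            verdict = "likely AI-generated"
        elif all(v <= low for v in values):
            verdict = "likely authentic"
        else:
            verdict = "detectors disagree - needs human review"
        return {
            "min": min(values),
            "max": max(values),
            "mean": round(mean(values), 3),
            "verdict": verdict,
        }

    if __name__ == "__main__":
        # Illustrative scores shaped like the article's short-clip robocall
        # test: one detector near-certain, the other four far lower.
        detectors = {
            "lab_a": lambda path: 1.000,
            "lab_b": lambda path: 0.468,
            "lab_c": lambda path: 0.002,
            "lab_d": lambda path: 0.310,
            "lab_e": lambda path: 0.120,
        }
        print(summarize(run_detectors("robocall_clip.wav", detectors)))
    ```

    The verdict logic mirrors Lyu’s point about human-algorithm collaboration: only unanimous detectors produce a call, and anything else is routed to a person rather than presented as a false image of reliability.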

  • US cites AI deepfakes as reason to keep Biden recording with Robert Hur secret

    The US Department of Justice is making a novel legal argument to keep a recording of an interview with Joe Biden from becoming public. In a filing late last week, the department cited the risk of AI-generated deepfakes as one of the reasons it refuses to release audio of the president’s interview with special counsel Robert Hur. The conversation about Biden’s handling of classified documents is a source of heated political contention, with Republicans pushing for release of the recordings and the White House moving to block them.

    The justice department’s filing, which it released late on Friday night, argues that the recording should not be released on a variety of grounds, including privacy interests and executive privilege. One section of the filing, however, is specifically dedicated to the threat of deepfakes and disinformation, stating that there is substantial risk people could maliciously manipulate the audio if it were to be made public.

    “The passage of time and advancements in audio, artificial intelligence, and ‘deep fake’ technologies only amplify concerns about malicious manipulation of audio files,” the justice department stated. “If the audio recording is released here, it is easy to foresee that it could be improperly altered, and that the altered file could be passed off as an authentic recording and widely distributed.”

    The filing presents a novel argument about the threat of AI-generated disinformation from the release of government materials, potentially setting up future legal battles over the balance between transparency and preventing the spread of misinformation.

    “A malicious actor could slow down the speed of the recording or insert words that President Biden did not say or delete words that he did say,” the filing argues. “That problem is exacerbated by the fact that there is now widely available technology that can be used to create entirely different audio ‘deepfakes’ based on a recording.”

    Biden’s interview with Hur reignited a longstanding conservative campaign of questioning Biden’s mental faculties and drawing attention to his age, which critics claim make him unfit to be president. While Hur’s report into classified documents found at Biden’s private residence did not result in charges against him, the special counsel’s description of him as an “elderly man with poor memory” became ammunition for Republicans and prompted Biden to defend his mental fitness.

    Although transcripts of Hur’s interview with Biden are public, conservative groups and House Republicans have taken legal action, filed Freedom of Information Act requests and demanded the release of the recorded audio as Biden campaigns against Donald Trump. Biden has asserted executive privilege to prevent the release of the audio, while the latest justice department filing pushes back against many of the conservative claims about the recording.

    The filing also argues that releasing the recording would heighten public awareness that audio of the interview is circulating, making doctored versions more believable when people encounter them.

    A number of politicians have become the target of deepfakes created in attempts to swing political opinion, including Biden. A robocall earlier this year that mimicked Biden’s voice and told people not to vote in New Hampshire’s Democratic primary was sent to thousands of people. The political consultant allegedly behind the disinformation campaign is now facing criminal charges and a potential $6m fine.

  • Facebook and Instagram to label digitally altered content ‘made with AI’

    Meta, owner of Facebook and Instagram, announced major changes to its policies on digitally created and altered media on Friday, before elections poised to test its ability to police deceptive content generated by artificial intelligence technologies.

    The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on Facebook and Instagram, expanding a policy that previously addressed only a narrow slice of doctored videos, the vice-president of content policy, Monika Bickert, said in a blogpost.

    Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance”, regardless of whether the content was created using AI or other tools. Meta will begin applying the more prominent “high-risk” labels immediately, a spokesperson said.

    The approach will shift the company’s treatment of manipulated content, moving from a focus on removing a limited set of posts toward keeping the content up while providing viewers with information about how it was made.

    Meta previously announced a scheme to detect images made using other companies’ generative AI tools by using invisible markers built into the files, but did not give a start date at the time.

    A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.

    The changes come months before a US presidential election in November that tech researchers warn may be transformed by generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.

    In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest the US president had behaved inappropriately.

    The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

    The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually said or did.

  • Can US Congress control the abuse of AI in the 2024 election? – podcast

    In January, voters in New Hampshire received a phone call from what sounded like President Joe Biden. The call, which turned out to be an AI-generated robocall, caused a stir because it tried to convince Democratic voters not to turn up to polling stations on election day.
    In response to this scam, just a couple of weeks later, the US government outlawed robocalls that use voices generated by artificial intelligence. But experts are warning that this story is just one example of why 2024 will be a year of unprecedented election disinformation in the US and around the world.
    This week, Jonathan Freedland and Rachel Leingang discuss why people are so worried about the influence of artificial intelligence on November’s presidential election, and what politicians can do to catch up.


  • Political operative and firms behind Biden AI robocall sued for thousands

    A political operative and two companies that facilitated a fake robocall using AI to impersonate Joe Biden should be required to pay thousands of dollars in damages and should be barred from taking similar actions in the future, a group of New Hampshire voters and a civic action group said in a federal lawsuit filed on Thursday.

    The suit comes weeks after Steve Kramer, a political operative, admitted that he was behind the robocall that spoofed Biden’s voice on the eve of the New Hampshire primary and urged Democrats in the state not to vote. Kramer was working for Biden’s challenger Dean Phillips, but Phillips’s campaign said it had nothing to do with the call, and Kramer has said he did it as an act of civil disobedience to draw attention to the dangers of AI in elections. The incident may have been the first time AI was used to interfere in a US election.

    Lawyers for the plaintiffs – three New Hampshire voters who received the calls and the League of Women Voters, a voting rights group – said they believed it was the first lawsuit of its kind seeking redress for the use of AI robocalls in elections. The New Hampshire attorney general’s office is investigating the matter.

    Two Texas companies, Life Corporation and Lingo Telecom, also helped facilitate the calls.

    “If Defendants are not permanently enjoined from deploying AI-generated robocalls, there is a strong likelihood that it will happen again,” the lawsuit says.

    The plaintiffs say Kramer and the two companies violated a provision of the Voting Rights Act that prohibits voter intimidation, as well as a ban in the Telephone Consumer Protection Act on delivering a prerecorded call to someone without their consent. They also say the calls violated New Hampshire state laws that require disclosure of the source of politically related calls.

    The plaintiffs are seeking up to $7,500 in damages for each plaintiff who received a call that violated federal and state law. The recorded call was sent to anywhere between 5,000 and 25,000 people.

    “It’s really imperative that we address the threat that these defendants are creating for voters,” Courtney Hostetler, a lawyer with the civic action group Free Speech for People, which is helping represent the plaintiffs, said in a press call with reporters on Thursday.

    “The other hope of this lawsuit is that it will demonstrate to other people who might attempt similar campaigns that this is illegal, that there are parties out there like the League of Women Voters who are prepared to challenge this sort of illegal voter intimidation and these illegal deceptive practices, and hopefully make them think twice before they do the same,” she added.

    NBC News reported that Kramer paid a street magician in New Orleans $150 to create the call using a script Kramer prepared.

    “This is a way for me to make a difference, and I have,” he said in the interview last month. “For $500, I got about $5m worth of action, whether that be media attention or regulatory action.”

    Mark Herring, a former Virginia attorney general who is helping represent the plaintiffs, told reporters on Thursday that such justification was “self-serving”.

    “Regardless of the motivation, the intent here was to suppress the vote, and to threaten and coerce voters into not voting out of fear that they might lose their right to vote,” he said.

  • ‘Disinformation on steroids’: is the US prepared for AI’s influence on the election?

    The AI election is here.

    Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home, in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.

    It’s not clear if the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that’s been pushing for federal and state regulation of AI’s use in politics.

    “I don’t think we need to wait to see how many people got deceived to understand that that was the point,” Gilbert said.

    Examples of what could be ahead for the US are happening all over the world. In Slovakia, fake audio recordings might have swayed an election in what serves as a “frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election”, CNN reported. In Indonesia, an AI-generated avatar of a military commander helped rebrand the country’s defense minister as a “chubby-cheeked” man who “makes Korean-style finger hearts and cradles his beloved cat, Bobby, to the delight of Gen Z voters”, Reuters reported. In India, AI versions of dead politicians have been brought back to compliment elected officials, according to Al Jazeera.

    But US regulations aren’t ready for the boom in fast-paced AI technology and how it could influence voters. Soon after the fake call in New Hampshire, the Federal Communications Commission announced a ban on robocalls that use AI audio. The Federal Election Commission (FEC) has yet to put rules in place to govern the use of AI in political ads, though states are moving quickly to fill the gap in regulation.

    The US House launched a bipartisan taskforce on 20 February that will research ways AI could be regulated and issue a report with recommendations. But with partisan gridlock ruling Congress, and US regulation trailing the pace of AI’s rapid advance, it’s unclear what, if anything, could be in place in time for this year’s elections.

    Without clear safeguards, the impact of AI on the election might come down to what voters can discern as real and not real. AI – in the form of text, bots, audio, photo or video – can be used to make it look like candidates are saying or doing things they didn’t do, either to damage their reputations or mislead voters. It can be used to beef up disinformation campaigns, making imagery that looks real enough to create confusion for voters.

    Audio content, in particular, can be even more manipulative, because the technology for video isn’t as advanced yet and recipients of AI-generated calls lose some of the contextual clues that something is fake that they might find in a deepfake video. Experts also fear that AI-generated calls will mimic the voices of people a caller knows in real life, which has the potential for a bigger influence on the recipient because the caller would seem like someone they know and trust. In what is commonly called the “grandparent” scam, callers can now use AI to clone a loved one’s voice to trick the target into sending money. That could theoretically be applied to politics and elections.

    “It could come from your family member or your neighbor and it would sound exactly like them,” Gilbert said. “The ability to deceive from AI has put the problem of mis- and disinformation on steroids.”

    There are less misleading uses of the technology to underscore a message, like the recent creation of AI audio calls using the voices of kids killed in mass shootings, aimed at swaying lawmakers to act on gun violence. Some political campaigns even use AI to show alternate realities to make their points, like a Republican National Committee ad that used AI to create a fake future if Biden is re-elected. And some AI-generated imagery can seem innocuous at first, like the rampant faked images of people next to carved wooden dog sculptures popping up on Facebook, but then be used to dispatch nefarious content later on.

    People wanting to influence elections no longer need to “handcraft artisanal election disinformation”, said Chester Wisniewski, a cybersecurity expert at Sophos. Now, AI tools help dispatch bots that sound like real people more quickly, “with one bot master behind the controls like the guy on the Wizard of Oz”.

    Perhaps most concerning, though, is that the advent of AI can make people question whether anything they’re seeing is real or not, introducing a heavy dose of doubt at a time when the technologies themselves are still learning how to best mimic reality.

    “There’s a difference between what AI might do and what AI is actually doing,” said Katie Harbath, who formerly worked in policy at Facebook and now writes about the intersection between technology and democracy. People will start to wonder, she said, “what if AI could do all this? Then maybe I shouldn’t be trusting everything that I’m seeing.”

    Even without government regulation, companies that manage AI tools have announced and launched plans to limit AI’s potential influence on elections, such as having their chatbots direct people to trusted sources on where to vote and not allowing chatbots that imitate candidates. A recent pact among companies such as Google, Meta, Microsoft and OpenAI includes “reasonable precautions” such as additional labeling of, and education about, AI-generated political content, though it wouldn’t ban the practice.

    But bad actors often flout or skirt around government regulations and limitations put in place by platforms. Think of the “do not call” list: even if you’re on it, you still probably get some spam calls.

    At the national level, or with major public figures, debunking a deepfake happens fairly quickly, with outside groups and journalists jumping in to spot a spoof and spread the word that it’s not real. When the scale is smaller, though, there are fewer people working to debunk something that could be AI-generated, and narratives begin to set in. In Baltimore, for example, recordings posted in January of a local principal allegedly making offensive comments could be AI-generated – the case is still under investigation.

    In the absence of FEC regulations, a handful of states have instituted laws over the use of AI in political ads, and dozens more states have introduced bills on the subject. At the state level, regulating AI in elections is a bipartisan issue, Gilbert said. The bills often call for clear disclosures or disclaimers in political ads to make sure voters understand that content was AI-generated; without such disclosure, the use of AI is banned outright in many of the bills, she said.

    The FEC opened a rule-making process for AI last summer, and the agency said it expects to resolve it sometime this summer, the Washington Post has reported. Until then, political ads that use AI may have some state regulations to follow, but otherwise aren’t restricted by any AI-specific FEC rules.

    “Hopefully we will be able to get something in place in time, so it’s not kind of a wild west,” Gilbert said. “But it’s closing in on that point, and we need to move really fast.”

  • Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos

    Major technology companies signed a pact on Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

    Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they will respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.

    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

    The accord is largely symbolic, but it targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.

    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes that the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

    The vagueness of the commitments and the lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.

    “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

    Clegg said each company “quite rightly has its own set of content policies”.

    “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play Whac-a-Mole and finding everything that you think may mislead someone.”

    Several political leaders from Europe and the US also joined Friday’s announcement. Vera Jourová, the European Commission vice-president, said that while such an agreement can’t be comprehensive, “it contains very impactful and positive elements”. She also urged fellow politicians to take responsibility not to use AI tools deceptively, and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states”.

    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and, most recently, Indonesia have already done so.

    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked the US president Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

    Just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

    Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

    The accord calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”.

    It says the companies will focus on transparency to users about their policies and will work to educate the public about how to avoid falling for AI fakes.

    Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know whether what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out, and the companies have faced pressure to do more.

    That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies largely to govern themselves.

    The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

    Many social media companies already have policies in place to deter deceptive posts about electoral processes – AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation”, as well as other false posts meant to interfere with someone’s civic participation.

    Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step”, but he’d still like to see social media companies take other actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

    Lisa Gilbert, executive vice-president of the advocacy group Public Citizen, argued on Friday that the accord is “not enough” and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems”.

    In addition to the companies that helped broker Friday’s agreement, other signatories include the chatbot developers Anthropic and Inflection AI; the voice-clone startup ElevenLabs; the chip designer Arm Holdings; the security companies McAfee and TrendMicro; and Stability AI, known for making the image generator Stable Diffusion.

    Notably absent is another popular AI image generator, Midjourney. The San Francisco-based startup didn’t immediately respond to a request for comment on Friday.

    The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free-speech absolutist”.

    In a statement on Friday, X’s CEO, Linda Yaccarino, said “every citizen and company has a responsibility to safeguard free and fair elections”.

    “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.