More stories

  • How did Donald Trump end up posting Taylor Swift deepfakes?

    When Donald Trump shared a slew of AI-generated images this week that falsely depicted Taylor Swift and her fans endorsing his campaign for president, the former US president was amplifying the work of a murky non-profit with aspirations to bankroll rightwing media influencers and a history of spreading misinformation.

    Several of the images Trump posted on his Truth Social platform, which showed digitally rendered young women in “Swifties for Trump” T-shirts, were the products of the John Milton Freedom Foundation. Launched last year, the Texas-based non-profit organization frames itself as a press freedom group with the goal of “empowering independent journalists” and “fortifying the bedrock of democracy”.

    The group’s day-to-day operations appear to revolve around sharing engagement bait on X and seeking millions from donors for a “fellowship program”, chaired by a high school sophomore, that would award $100,000 to Twitter personalities such as Glenn Greenwald, Andy Ngo and Lara Logan, according to a review of the group’s tax records, investor documents and social media output. The John Milton Freedom Foundation did not respond to a request for comment with a set of questions about its operations and fellowship program.

    After months of retweeting conservative media influencers and echoing Elon Musk’s claims that freedom of speech is under attack from leftwing forces, one of the organization’s messages found its way to Trump and then to his millions of supporters.

    Disinformation researchers have long warned that generative AI can lower the bar for creating misleading content and threaten the integrity of information around elections. Since Musk’s xAI company released its largely unregulated Grok image generator last week, there has been a surge of AI content, including depictions of Trump, Kamala Harris and other political figures. The Milton Freedom Foundation is one of many small groups flooding social media with so-called AI slop.

    A niche non-profit’s AI slop makes its way to Trump

    During the spike in AI images on X, the conservative @amuse account posted the images of AI-generated Swift fans to more than 300,000 followers. The post, which was labeled “satire”, carried a watermark stating it was “sponsored by the John Milton Freedom Foundation”. Trump posted a screenshot of @amuse’s tweet on Truth Social.

    The @amuse account has considerable reach itself, with about 390,000 followers on X and dozens of daily posts. Running @amuse appears to be Alexander Muse, listed as a consultant in the Milton Foundation’s investor prospectus, who also writes a rightwing commentary Substack that includes posts exploring election conspiracy theories. The @amuse account has numerous connections with Muse. The X account is connected to a Substack posting the exact same articles that Muse publishes on his LinkedIn page, which also has the username “amuse”, reflecting his first initial and last name. Muse’s book on how to secure startup funding, which includes examples of him asking ChatGPT to pretend to be Musk and offer business advice, lists that same Substack as its publisher.

    Prominent accounts including Musk’s have shared and replied to @amuse’s posts, which have recently included AI depictions of Trump fighting Darth Vader and sexualized imagery of Harris. Its banner picture is currently an AI-generated photo of Trump surrounded by women in “Swifties” shirts. The account posts misleading, pro-Trump headlines, such as claiming Harris turned hundreds of thousands of children over to human traffickers as “border czar”. The headlines, like the AI-generated Swifties for Trump images, come with the watermark “sponsored by the John Milton Freedom Foundation”.

    The John Milton Freedom Foundation, named after the 17th-century English poet and essayist, has a small online footprint: a website, an investor prospectus and an X account with fewer than 500 followers. The team behind it, according to its own documents, consists of five people based in the Dallas-Fort Worth area with varying degrees of experience in Republican politics. Muse’s daughter, described as a 10th grade honor student on the non-profit’s site, serves as the foundation’s “fellowship chair”.

    The foundation’s stated goal is to raise $2m from major donors to award $100,000 grants to a list of “fellows” made up of rightwing media influencers. These include the former CBS journalist turned far-right star Lara Logan, who was cut from Newsmax in recent years for going on a QAnon-inspired rant that claimed world leaders drink children’s blood, as well as the author of an anti-trans children’s book. The organization believes this money would allow these already established influencers to “increase their reach by more than 10x in less than a year”, according to its investor prospectus.

    While only one of the fellows listed on the foundation’s site mentions the organization on their X profile, and none follow its account, the @amuse account has a prominent link to the group’s community page and the foundation often engages with its posts.

    It is not clear whether the foundation has any money to give, or whether all the media influencers listed as its 2024 fellowship class know about the organization. One Texas-based account that posts anti-vaccine content lists itself as a “JMFF” fellow in its bio, but none of the others advertise any connection. The most recent tax records for the foundation place it in the category of non-profits whose gross receipts, or total funds received from all sources, range from $0 to $50,000 – far below the millions it is seeking.

    The organization’s board includes its chair, Brad Merritt, who is touted as an experienced Republican organizer and claims to have raised $300m for various non-profits; its director, Shiree Sanchez, who served as assistant director of the Republican party of Texas between 1985 and 1986; and Mark Karaffa, a retired healthcare industry executive.

    Muse’s experience in digital media appears to be far more extensive than that of the non-profit’s other members. In addition to his blog, he claims to have worked with James O’Keefe, the former CEO of the rightwing organization Project Veritas, who was known for hidden camera stings until he was ousted last year over allegations of misplaced funds. Muse, who is described in the prospectus as a “serial entrepreneur”, also blogs about how to make money from generative AI.

  • Harris wants to bring ‘joy, joy, joy’ to Americans. What about Palestinians? | Arwa Mahdawi

    Muslim Women for Harris is disbanding

    Got any spare brooms to hand? I think the folk at the Democratic national convention may need a few extra, because they’ve been very busy this week trying to sweep the carnage in Gaza under the rug.

    Hope and joy have been the big themes of the convention. On Wednesday, Hakeem Jeffries, the House minority leader, told the crowd that working to get Kamala Harris elected would mean “joy, joy, joy comes in the morning”. It is wonderful to see all this exuberance, all this optimism for a brighter future. But it is also impossible not to contrast the revelry in Chicago with the Biden administration-sponsored suffering coming out of Gaza.

    Well, it’s impossible for some of us, anyway. For plenty of delegates at the convention, the suffering of Palestinians – the harrowing images on social media of charred babies and toddlers in Gaza whose heads have been caved in by US-manufactured bombs – seems to be nothing more than an annoying distraction. Pro-Palestinian protesters at the convention haven’t just been met with stony faces, they’ve been met with jeers and violence. One delegate inside the convention was caught on camera repeatedly hitting a Muslim woman in the head with a “We Love Joe” sign. The woman’s crime was that she had peacefully unfurled a banner saying “Stop Arming Israel”. It’s not clear who the man assaulting this woman was, but one imagines he will not face any consequences.

    To be fair, Gaza hasn’t been completely ignored. On Monday, there was a panel centered on Palestinian human rights, in which Dr Tanya Haj-Hassan, a pediatric doctor who treated patients in Gaza, talked about the horrors she had witnessed. But the panel, while important, wasn’t on the main stage. It wasn’t given star billing like the parents of the Israeli-American hostage Hersh Goldberg-Polin, who gave an emotional speech on Wednesday. It felt a lot like pro-Palestinian activists had just been tossed a few crumbs.

    For a brief moment, it did seem like a Palestinian might get a proper chance to speak. The Uncommitted National Movement, which launched an anti-war protest vote during the primaries, had been urging convention officials to include two Palestinian American speakers on the convention’s main stage. “We are learning that Israeli hostages’ families will be speaking from the main stage. We strongly support that decision and also strongly hope that we will also be hearing from Palestinians who’ve endured the largest civilian death toll since 1948,” the movement’s statement released on Tuesday read.

    By Wednesday evening, however, it seemed clear that the convention had rejected these requests. In response, a group of uncommitted delegates staged a sit-in in front of Chicago’s United Center. Ilhan Omar joined the demonstration, and Alexandria Ocasio-Cortez called in via FaceTime.

    In light of the convention’s refusal to have a Palestinian American speaker, the group Muslim Women for Harris made the decision to disband and withdraw support for Harris. “The family of the Israeli hostage that was on the stage tonight has shown more empathy towards Palestinian Americans and Palestinians than our candidate or the DNC has,” Muslim Women for Harris’s statement read.

    For those of us who have been cautiously optimistic that Harris might break from Joe Biden’s disastrous policy of unconditional support for Israel, this week has been bitterly disappointing. Whoever wins this election, it seems clear joy, joy, joy will not be coming to Gaza anytime soon. Just more bombs, bombs, bombs.

    Dismiss ‘grannies’ as frail old biddies at your peril

    Whether it’s “Nans against Nazis” protesting in Liverpool or the Raging Nannies getting arrested at US army recruitment centers, older women are some of the toughest activists out there, writes Sally Feldman.

    Woman, 75, uses gardening tools to fill in potholes outside home in Scottish village

    Armed with a bucket and spade, Jenny Paterson undertook the resurfacing work against her doctor’s orders. She’d had surgery and wasn’t supposed to lift things, but said: “I’m fine and I’m not a person to sit around and do nothing anyway.” Which has given me some inspiration to pick up a rake and go tackle the raggedy roads of Philadelphia.

    The late Queen Elizabeth II thought Donald Trump was ‘very rude’

    Apparently, she also “believed Trump ‘must have some sort of arrangement’ with his wife, Melania, or else why would she have remained married to him?”

    How Tanya Smith stole $40m, evaded the FBI and broke out of prison

    The Guardian has a fascinating profile of Smith that touches on how the FBI couldn’t catch her for so long because they didn’t think a Black woman was capable of orchestrating her crimes. In Smith’s memoir, she recounts how one officer told her that “neeee-grroes murder, steal and rob, but they don’t have the brains to commit sophisticated crimes like this”.

    A clueless Alicia Silverstone eats poisonous fruit off a bush

    If you’re wandering the streets of London and see a bush in someone’s front garden with mysterious fruit on it, should you a) admire it and move on? Or b) reach through the fence and film a TikTok of yourself munching the lil street snack while asking whether anyone knows what the heck it is? This week, Silverstone chose option b. The woman thinks vaccines are dodgy and yet she has no problem sticking an unknown fruit into her mouth. Turns out it was toxic, but Silverstone has confirmed she’s OK, which means we can all laugh at her without feeling too bad about it.

    Women use ChatGPT 16%-20% less than their male peers

    That’s according to two recent studies examined by the Economist. One explanation for this was that high-achieving women appeared to impose an AI ban on themselves. “It’s the ‘good girl’ thing,” one researcher said. “It’s this idea that ‘I have to go through this pain, I have to do it on my own and I shouldn’t cheat and take short-cuts.’” Very demure, very mindful.

    Patriarchal law cuts some South African women off from owning their homes

    Back in the 1990s, South Africa introduced a new land law (the Upgrading of Land Tenure Rights Act) that was supposed to fix the injustices of apartheid. It upgraded the property rights of Black long-term leaseholders so they could own their homes. But only a man could hold the property permit, effectively pushing women out of inheriting. Since the 1990s, there have been challenges and changes to the Upgrading Act, but experts say that women’s property rights are still not sufficiently recognized and “customary law has placed women outside the law”.

    The week in pawtriarchy

    They stared into the void of an arcade game, and the void stared back. Punters at a Pennsylvania custard shop were startled when they realized that the cute little groundhog nestled among the stuffed animals in a mechanical-claw game was a real creature. Nobody knows exactly how he got into the game, but he has since been rescued and named Colonel Custard. “It’s a good story that ended well,” the custard shop manager said. “He got set free. No one got bit.”

  • If you are outraged by Trump’s use of AI and deepfakes, don’t be – that’s exactly what he wants | Sophia Smith Galer

    A couple of weeks ago, Donald Trump decided it would be fun to accuse the US vice-president, Kamala Harris, of using AI in images showing a large crowd greeting her at an airport. “Has anyone noticed that Kamala CHEATED at the airport?” Trump furiously thumbed into his phone. “There was nobody at the plane, and she ‘AI’d it […] She should be disqualified because the creation of a fake image is ELECTION INTERFERENCE. Anyone who does that will cheat at ANYTHING!”

    Just as some animals are more equal than others, some politicians are more honest. So this week, when the former president himself posted an obviously AI-generated image of what looks like the back of Harris’s head in front of an enormous communist crowd with a huge hammer and sickle unfurled above them, he presumably did not consider it election interference. Trump has also recently shared AI-generated images of himself, Elon Musk and Taylor Swift.

    These images are concerning – especially given that most image generators have put up guardrails against making content of real people. But it seems that Trump isn’t trying to pass the images off as real: I think this is him trying to be funny.

    Over on the Trump campaign team, someone has learned how to work an AI-image generator and has become a little prompt-happy. A weird video of Trump and Elon Musk dancing together isn’t exactly an example of the kind of election-manipulating deepfake media that many disinformation commentators are worried about. It is an example of a candidate desperately trying to remain on your algorithm. AI generation just requires a few prompts and maybe a paid subscription to a generator. It’s a lot cheaper, and quicker, than hiring creatives who need to spend time ideating and creating before something is ready to publish.

    AI-generated images and deepfakes are the poor man’s meme. Actual successful memes – humorous pieces of content designed to be spread online – are crafted by individuals who have adopted the language and culture of the internet, and know how to inject zeitgeisty topics into social posts designed to resonate and go viral. The combination of text with images or video is a subtle art, and it is one that Harris’s campaign team practises well. Everyone online knows about the coconut tree, and the chronically online will know about the Charli xcx accolade that “Kamala IS brat”.

    By contrast, the AI posts Trump has shared are not high internet humour; they’re cheap algo-fodder. A trick he is also trialling is combining AI images with real ones in an attempt to lend them some veracity, or perhaps just to sharpen the comedy potential. In his post where he states “I accept!” alongside images suggesting Swifties are “turning to Trump”, he has combined a real photograph of a woman wearing a “Swifties for Trump” T-shirt with a satirical AI compilation of fans wearing T-shirts with the same slogan and an AI-generated image of Swift as Uncle Sam, captioned “Taylor wants you to vote for Donald Trump”. It’s the kind of content your family’s errant uncle might forward to you, that he in turn got from his mate, because they haven’t got anything better to do.

    Trump doesn’t expect or need Swift’s endorsement, and so the humour is in the incredulousness of it. Posts like this aren’t about genuinely persuading audiences that Swift supports him; they’re about ensuring the intravenous drip of content into his supporters’ Facebook groups and WhatsApp conversations never runs dry. Trump has also always been a wind-up merchant. He knows Swift fans would react angrily to his post. He also knows that such rage-baiting will amplify his content on Truth Social’s and X’s algorithms – and garner coverage in the mainstream media. When people wag their fingers at him for posting content like this, some see it as righteously battling misinformation, but to his fans it looks like not getting the joke. (Of course, it is easier to understand jokes when they have at least one measly crumb of decent comedy to them.)

    The idea that Harris is a communist, that Trump and Musk are dancing pals, and that even Swifties can’t escape Trump fandom aligns with the narrative of popularity, relatable light-heartedness and prestige that Trump likes to court. Narrative is far more important than truth, particularly in the US, where political ideology is so powerful that it was one of the most significant factors determining whether somebody would take the Covid-19 vaccine or not. Trump’s AI posts are best understood not as outright misinformation – intended to be taken at face value – but as part of the same intoxicating mix of real and false information that has always characterised his rhetoric. Trump isn’t interested in telling the truth; he’s interested in telling his truth – as are his fiercest supporters. In his world, AI is just another tool to do this. Whether he is willing to accept the reality that he can’t make a joke, or take one, is another story.

    Sophia Smith Galer is a journalist, content creator and the author of Losing It

  • Democrats use AI in effort to stay ahead with Latino and Black voters

    Latino and Black-led Democratic and progressive organizations are mobilizing to come up with novel uses of AI to reach voters of color.

    On Discord, a social messaging app that connects gamers, it’s taking the form of a smiling chatbot powered by artificial intelligence that evokes Pixar’s animated robot Wall-E. When you click, a conversation opens up that says: “This is the very beginning of your legendary conversation with Vote-E.” You can ask election-related questions such as “How do I register to vote?” or when North Carolina’s voter registration deadline is – and the answers are almost instantaneous.

    Vote-E is an experiment in how to crack one of the toughest problems for Democrats: reaching voters of color, especially younger ones, on platforms where they actually spend time, and persuading them to vote for Democrats. And it comes at a transformative but uncertain time for the party, with Kamala Harris, who must use existing infrastructure to beat Donald Trump, replacing Joe Biden at the top of the ticket.

    NextGen America, which built Vote-E and is one of the nation’s largest youth voter organizations, says it allows young men to access the bot from Discord chats and Twitch streams of Latino and Black gaming influencers.

    “We’re seeing voter turnout gaps between Black men and women and Latino men and women,” said Cristina Tzintzún Ramirez, NextGen America’s president, noting that while there’s a focus on connecting with young people on college campuses, not everyone is there. The chatbot is active in Arizona, Michigan, Pennsylvania, Nevada and North Carolina.

    It’s just one example of progressive groups of color experimenting with artificial intelligence, which wasn’t on their radar four years ago: AI chatbots are now also recruiting Latino voters from WhatsApp and Black voters from Facebook Messenger; groups are using natural language processing to record voter interactions with canvassers and identify shared concerns; and one has even used AI to index and identify friendly Spanish-language sites on which to place an ad touting Democrats’ clean energy plan.

    With the election mere months away, the challenge facing Democrats remains how to galvanize younger voters and voters of color. While more Latinos turned out in 2020 than ever before, Hispanics still lag behind white, Black, and Asian and Pacific Islander voters as a proportion of their population of eligible voters, according to Catalist, a progressive data hub, which noted this is true across communities of color, “where non-voting rates are substantially higher”.

    Héctor Sánchez Barba, the president and chief executive of Mi Familia Vota, told companies he was less interested in their diversity dollars than in their budgets and expertise in the realm of data, research and innovation. It’s why he recruited Denise Cook, a Cuban American former enterprise software architect who spent 16 years at IBM, to join MFV as its chief data and innovation officer. She leads an all-Latina team, which created its own chatbot and uses AI to have human-sounding, bilingual conversations with Latino voters on platforms like WhatsApp.

    Canvassers with the group ask for permission to record conversations with voters on their mobile phones or tablets. Those interactions are then turned into data using natural language processing, a type of AI. This way, MFV is able to quickly summarize voter priorities and figure out whether it is speaking to voters optimally about the economy, reproductive rights or the climate. (A minimal sketch of this kind of topic tallying appears at the end of this piece.)

    “We need this kind of brainpower when we’re fighting the biggest enemy our community has ever had,” Sánchez Barba said of Trump. “This is about using the most important technological advancements, including artificial intelligence, for good and to save our democracy.”

    Many leaders of color said they are mindful of pitfalls around AI but open to harnessing its power and testing possible strategies. Larry Huynh, the president of the American Association of Political Consultants and the founder of Trilogy Interactive, is so interested in incorporating AI into political campaigns that he followed leaders in other industries by creating an internal taskforce at his company.

    He believes campaigns should follow the lead of brands, which use AI voiceovers of celebrities and public figures, with natural-looking mouth movements, to seamlessly disseminate campaign messages. Huynh’s research has found that AI voices tailored to their target audience – a young male speaker for a young male voter, say – appear to be more persuasive. One example he gave is of an allied group creating a video of the candidate – now Harris – speaking perfect Spanish in her own voice, aimed at Arizona or Nevada voters. “If it’s well-delivered and it doesn’t seem odd or off, some voters could appreciate that communication in their predominant language,” he said.

    Putting out a wholly AI-made Harris, however, would be highly scrutinized both from within the party and by Republicans. Harris is already a target of deepfakes that put words in her mouth as well as ones meant to sexualize and demean her. Yet another deepfake of her, even a positive one, could strike the wrong chord. Trump has said she used AI to fake a huge rally crowd; the photo of her campaign stop was real, though. Concerns over disinformation have only been heightened by the spread of AI-generated images of Trump getting arrested in New York and an AI robocall that mimicked Biden’s voice telling New Hampshire voters not to cast a ballot.

    Still, progressive groups are charging forward. Poder Latinx, an advocacy group committed to building Latino political power, created an ad touting the clean energy plan from the Biden administration’s Inflation Reduction Act. It was timed to coincide with the popular Copa América soccer tournament last month. Partnering with Mundial Media, the group was able to serve the ad to US Latinos reading Spanish-language news sites in places like Arizona. Mundial Media’s Cadmus AI engine crawled the sites and indexed their keywords to make sure the soccer-themed clean energy ad would fit in with the content on the pages.

    Yadira Sanchez, the co-founder of Poder Latinx, was happy with how the campaign reached voters, over-delivering impressions and click-through rates from Latinos, including finding a 64% Hispanic male audience. “We know that the best connection is voter-to-voter contact. This technology is complementing the on-the-ground canvassing we are already using,” she said. “Technology, AI in particular, is great to reach younger, more online voters.”

    But AI may not be viewed as safe enough for initiatives that require serious resources to scale up in time for November.
    And there are concerns it could freak out voters in the wrong context. In focus groups in Detroit, Cleveland and Philadelphia this year, Adrianne Shropshire, the executive director of BlackPac, found “hesitation” from Black voters around AI. “There’s a concern people have with what they’re seeing and where it’s coming from exactly,” she said, noting voters “don’t know what to trust and are suspicious and skeptical of everything”.

    Rashad Robinson, the president of Color of Change, a group that advocates for Black Americans and has a $25m program for 2024, has met with Mark Zuckerberg and Elon Musk, alongside senior staff at Meta, Google and OpenAI, to call for commitments on how AI will be used around election tools, which he says aren’t ready for primetime. “Imagine if there were no regulations for cars and it was all about who could get their new vehicle to market fastest?” he said, citing Musk’s Tesla, which has recalled its latest model four times. “It’s Tesla on steroids. At least cars get recalled, but there is no infrastructure or body that recalls tech.”

    Quentin James, the founder and president of The Collective Pac, a group that works to elect Black Democrats and is using the Facebook Messenger chatbot to get registration information from voters, stressed that deepfakes, or ads in which one campaign uses the likeness of its opponent to mislead voters, should be shut down immediately. Still, he said, Democrats must be willing to use the tools at their disposal to beat Trump, because the other side will be looking at them as well.

    “I don’t know if FEC law can catch up to this in a few months, so we should use it to our advantage,” he said. “There’s no way we can control what happens with technology in this short time period.”
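
    The natural language processing step described above – turning recorded canvass conversations into a quick tally of what voters care about – can be pictured with a minimal sketch. Everything in it is hypothetical: the topic keywords, sample transcripts and function names are invented for illustration, and a production pipeline such as MFV’s would rely on trained language models rather than simple keyword matching.

```python
# Hypothetical sketch: tallying voter priorities from canvass transcripts.
# The lexicon and sample lines below are invented; a real system would
# use trained NLP models, not keyword matching.
from collections import Counter

TOPIC_KEYWORDS = {
    "economy": {"jobs", "prices", "rent", "wages", "inflation"},
    "reproductive rights": {"abortion", "roe", "reproductive"},
    "climate": {"climate", "heat", "energy", "pollution"},
}

def tag_topics(transcript: str) -> set[str]:
    """Return every topic whose keywords appear in a transcript."""
    words = set(transcript.lower().split())
    return {topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws}

def summarize(transcripts: list[str]) -> Counter:
    """Count how often each topic comes up across all conversations."""
    tally = Counter()
    for t in transcripts:
        tally.update(tag_topics(t))
    return tally

if __name__ == "__main__":
    sample = [
        "I'm worried about rent and prices going up",
        "Abortion access is the main thing for me",
        "The heat this summer was brutal, climate matters",
    ]
    for topic, count in summarize(sample).most_common():
        print(f"{topic}: {count}")
```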

  • Iranian group used ChatGPT to try to influence US election, OpenAI says

    OpenAI said on Friday it had taken down accounts of an Iranian group for using its ChatGPT chatbot to generate content meant to influence the US presidential election and other issues.

    The operation, identified as Storm-2035, used ChatGPT to generate content focused on topics such as commentary on the candidates on both sides of the US election, the conflict in Gaza and Israel’s presence at the Olympic Games, and then shared it via social media accounts and websites, OpenAI said.

    An investigation by the Microsoft-backed AI company showed ChatGPT was used to generate long-form articles and shorter social media comments.

    OpenAI said the operation did not appear to have achieved meaningful audience engagement. The majority of the identified social media posts received few or no likes, shares or comments, and the company did not see indications of the web articles being shared across social media.

    The accounts have been banned from using OpenAI’s services, and the company continues to monitor activities for any further attempts to violate its policies, it said.

    Earlier in August, a Microsoft threat-intelligence report said the Iranian network Storm-2035, comprising four websites masquerading as news outlets, was actively engaging US voter groups on opposing ends of the political spectrum. The engagement was being built with “polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict”, the report stated.

    The Democratic candidate, Kamala Harris, and her Republican rival, Donald Trump, are locked in a tight race ahead of the presidential election on 5 November.

    The AI firm said in May it had disrupted five covert influence operations that sought to use its models for “deceptive activity” across the internet.

  • Elon’s politics: how Musk became a driver of elections misinformation

    When Elon Musk took over as owner of Twitter, researchers and elections officials feared a rampant spread of misinformation that would lead to threats and harassment and undermine democracy. Their fears came true – and Musk himself has emerged as one of its main drivers.

    The tech billionaire has cast doubt on machines that tabulate votes and on mail ballots, both common features of US elections. He has repeatedly claimed there is rampant non-citizen voting, a frequent Republican talking point in this election.

    Musk, the ultra-wealthy owner of Tesla and other tech companies, is scheduled to interview Donald Trump on Monday, when they are sure to find common ground on these election conspiracies. Musk is a vocal supporter of the former US president and current Republican nominee. He has restored the Twitter/X accounts of people banned under previous ownership and has dismantled the platform’s fact-checking and safety features. Trump’s X account, which was suspended after the January 6 insurrection, was restored as well, though Trump has not actively returned to the platform.

    “Electronic voting machines and anything mailed in is too risky. We should mandate paper ballots and in-person voting only,” Musk wrote on X in July.

    The Maricopa county recorder, Stephen Richer, responded by asking if he could give Musk a tour of the large Arizona county’s facilities and run through its mail voting processes. “You can go into all the rooms. You can examine all the equipment. You can ask any question you want. We’d love to show you the security steps already in place, which I think are very sound,” Richer said. It wasn’t the only time Richer has sought to correct election misinformation Musk has shared. He previously tried to fix misunderstandings of Arizona voter data and the rules for proof of citizenship.

    Social media platforms overall have taken less aggressive stances on fact-checking election falsehoods after an ongoing campaign by Republican lawmakers and their allies attacking the ways elected officials and researchers flagged information and how platforms responded. “I think X really kind of sticks out as a place where that change has been striking, and for it to come from the very top kind of just shows how much of an issue it is,” said Mekela Panditharatne, senior counsel for the Brennan Center’s elections and government program.

    Musk shared a video that used an AI-generated voice of Kamala Harris, which raised concerns that it could fool some people into thinking it was real. Musk and the video’s creator defended it as parody.

    He has also written multiple times claiming that non-citizens are voting in US elections, which is illegal except in a few local elections. There are few instances of non-citizens voting, or even registering to vote. In late July, he shared a video of Elizabeth Warren talking about a pathway to citizenship for the millions of undocumented people living in the US. “As I was saying, they’re importing voters,” he said, a nod to “great replacement” theory.

    Grok, the platform’s artificial intelligence chatbot, which Musk has billed as an “anti-woke” antidote to left-biased chatbots, has spread false information that ballot deadlines had passed in nine states, meaning the vice-president couldn’t get on the ballot in those places – which is untrue. Secretaries of state are urging Musk to fix the issue: the chatbot lacks the election-information guardrails that other chatbots, such as ChatGPT, have.

    “It’s important that social media companies, especially those with global reach, correct mistakes of their own making – as in the case of the Grok AI chatbot simply getting the rules wrong,” Minnesota secretary of state Steve Simon told the Washington Post. “Speaking out now will hopefully reduce the risk that any social media company will decline or delay correction of its own mistakes between now and the November election.”

    Off the platform, a political action committee Musk created is mining personal information from voters in key states through what initially appears to users to be a voter registration portal, CNBC reported. America Pac, a pro-Trump group backed by Musk’s enormous wealth, is targeting swing-state voters. The data scraping is now being investigated by at least two states.

    Despite his endless claims about election fraud, Musk told the Atlantic this month he would accept the results of the 2024 election – with a caveat. “If there are questions of election integrity, they should be properly investigated and neither be dismissed out of hand nor unreasonably questioned,” he said. “If, after review of the election results, it turns out that Kamala wins, that win should be recognized and not disputed.”

  • Trump tells Logan Paul he used AI to ‘so beautifully’ rewrite a speech

    Donald Trump has said he used a speech generated by artificial intelligence (AI) after being impressed by its content.

    The former US president, whose oratory is noted for its rambling, off-the-cuff style but also for its demagoguery, made the claim in an interview on Logan Paul’s podcast, in which he lauded AI as “a superpower” but also warned of its potential dangers.

    He said the rewritten speech came during a meeting with one of the industry’s “top people”, whom he did not identify. “I had a speech rewritten by AI out there, one of the top people,” Trump said. “He said, ‘Oh, you’re gonna make a speech? Yeah?’ He goes, click, click, click, and like, 15 seconds later, he shows me my speech that’s written that’s great, so beautifully. I said, ‘I’m gonna use this.’ I’ve never seen anything like it.” Trump did not say at what event he had used the AI-generated speech.

    He predicted that AI’s oratorical gifts could sound the death knell for speechwriters, long a part of Washington’s political landscape. “One industry I think that will be gone are these wonderful speechwriters,” he said. “I’ve never seen anything like it, and so quickly, a matter of literally minutes, it’s done. It’s a little bit scary.”

    Asked what he said to his speechwriter, Trump jokingly responded, “You’re fired”, a line associated with The Apprentice, the TV reality show that helped propel his political rise.

    Trump, the presumptive 2024 Republican presidential nominee, also acknowledged that AI had dangers, especially in regard to deepfakes. He warned of an imaginary situation in which a faked voice warns a foreign power that a US nuclear attack has been launched, possibly triggering a retaliatory strike. “If you’re the president of the United States, and you announced that 13 missiles have been sent to, let’s not use the name of a country,” he said. “We have just sent 13 nuclear missiles heading to somewhere, and they will hit their targets in 12 minutes and 59 seconds, and you’re that country.”

    He said he had asked the entrepreneur Elon Musk – referring to him by his first name – whether Russia or China would be able to identify that the attack warning was fake, and was told that they would have to use a code to check its veracity. “Who the hell’s going to check? You got, like, 12 minutes – let’s check the code,” he said. “So what do they do when they see this? They have maybe a counterattack. It’s so dangerous in that way.”

  • How to spot a deepfake: the maker of a detection tool shares the key giveaways

    You – a human, presumably – are a crucial part of detecting whether a photo or video is made by artificial intelligence.

    There are detection tools, made both commercially and in research labs, that can help. To use these deepfake detectors, you upload or link a piece of media that you suspect could be fake, and the detector will give a percent likelihood that it was AI-generated. But your senses and an understanding of some key giveaways provide a lot of insight when analyzing media to see whether it’s a deepfake. And while regulations for deepfakes, particularly in elections, lag behind the quick pace of AI advancements, we have to find ways to figure out whether an image, audio clip or video is actually real.

    Siwei Lyu made one of these tools, the DeepFake-o-meter, at the University at Buffalo. His tool is free and open source, compiling more than a dozen algorithms from other research labs in one place. Users can upload a piece of media and run it through these different labs’ tools to get a sense of whether it could be AI-generated.

    The DeepFake-o-meter shows both the benefits and the limitations of AI-detection tools. When we ran a few known deepfakes through the various algorithms, the detectors gave ratings for the same video, photo or audio recording ranging from 0% to 100% likelihood of being AI-generated. AI, and the algorithms used to detect it, can be biased by the way it is trained. At least in the case of the DeepFake-o-meter, the tool is transparent about that variability in its results, while with a commercial detector bought in the app store, it’s less clear what its limitations are, Lyu said.

    “I think a false image of reliability is worse than low reliability, because if you trust a system that is fundamentally not trustworthy to work, it can cause trouble in the future,” Lyu said.

    His system is still barebones for users, having launched publicly just in January of this year. But his goal is that journalists, researchers, investigators and everyday users will be able to upload media to see whether it’s real. His team is working on ways to rank the various algorithms it uses for detection, to inform users which detector would work best for their situation. Users can opt in to sharing the media they upload with Lyu’s research team to help them better understand deepfake detection and improve the website.

    Lyu often serves as an expert source for journalists trying to assess whether something could be a deepfake, so he walked us through a few well-known instances of deepfakery from recent memory to show the ways we can tell they aren’t real. Some of the obvious giveaways have changed over time as AI has improved, and will change again.

    “A human operator needs to be brought in to do the analysis,” he said. “I think it is crucial to be a human-algorithm collaboration. Deepfakes are a social-technical problem. It’s not going to be solved purely by technology. It has to have an interface with humans.”

    Audio

    A robocall that circulated in New Hampshire using an AI-generated voice of President Joe Biden encouraged voters there not to turn out for the Democratic primary, one of the first major instances of a deepfake in this year’s US elections.

    When Lyu’s team ran a short clip of the robocall through five algorithms on the DeepFake-o-meter, only one of the detectors came back at more than 50% likelihood of AI – that one said it had a 100% likelihood. The other four ranged from 0.2% to 46.8% likelihood. A longer version of the call prompted three of the five detectors to come in at more than 90% likelihood.

    This tracks with our experience creating audio deepfakes: they’re harder to pick out because you’re relying solely on your hearing, and easier to generate because there are tons of examples of public figures’ voices for AI to use to make a person’s voice say whatever its creator wants. But there are some clues in the robocall, and in audio deepfakes in general, to look out for.

    AI-generated audio often has a flatter overall tone and is less conversational than how we typically talk, Lyu said. You don’t hear much emotion. There may not be proper breathing sounds, like taking a breath before speaking. Pay attention to the background noises, too. Sometimes there are no background noises when there should be. Or, in the case of the robocall, there’s a lot of noise mixed into the background, almost as if to give an air of realness, which actually sounds unnatural.

    Photos

    With photos, it helps to zoom in and examine closely for any “inconsistencies with the physical world or human pathology”, like buildings with crooked lines or hands with six fingers, Lyu said. Little details like hair, mouths and shadows can hold clues to whether something is real. Hands were once a clearer tell for AI-generated images because they would more frequently end up with extra appendages, though the technology has improved and that is becoming less common, Lyu said.

    We sent the photos of Trump with Black voters that a BBC investigation found had been AI-generated through the DeepFake-o-meter. Five of the seven image-deepfake detectors came back with a 0% likelihood that the fake image was fake, while one clocked in at 51%. The remaining detector said no face had been detected.

    Lyu’s team noted unnatural areas around Trump’s neck and chin, people’s teeth looking off and webbing around some fingers. Beyond these visual oddities, AI-generated images just look too glossy in many cases. “It’s very hard to put into quantitative terms, but there is this overall view and look that the image looks too plastic or like a painting,” Lyu said.

    Videos

    Videos, especially those of people, are harder to fake than photos or audio. In some AI-generated videos without people, it can be harder to figure out whether the imagery is real, though those aren’t “deepfakes” in the sense that the term typically refers to people’s likenesses being faked or altered.

    For the video test, we sent a deepfake of the Ukrainian president, Volodymyr Zelenskiy, that shows him telling his armed forces to surrender to Russia, which did not happen. The visual cues in the video include unnatural eye-blinking that shows some pixel artifacts, Lyu’s team said. The edges of Zelenskiy’s head aren’t quite right; they’re jagged and pixelated, a sign of digital manipulation.

    Some of the detection algorithms look specifically at the lips, because current AI video tools will mostly change the lips to say things a person didn’t say. The lips are where most inconsistencies are found. An example would be if a letter sound requires the lips to be closed, like a B or a P, but the deepfake’s mouth is not completely closed, Lyu said. When the mouth is open, the teeth and tongue appear off, he said.

    The video, to us, is more clearly fake than the audio or photo examples we flagged to Lyu’s team. But of the six detection algorithms that assessed the clip, only three came back with very high likelihoods of AI generation (more than 90%). The other three returned very low likelihoods, ranging from 0.5% to 18.7%.
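
    That spread of scores illustrates why a tool like the DeepFake-o-meter reports every detector’s verdict rather than a single number. As a rough, hypothetical sketch of the ensemble idea – not the DeepFake-o-meter’s actual code; the detector names and stub scores below are stand-ins, with the numbers mirroring the robocall results reported above – disagreement between detectors can itself be treated as a signal to bring in a human reviewer:

```python
# Hypothetical sketch of an ensemble deepfake check. The detectors are
# placeholder lambdas; a real tool wraps research models.
import statistics
from typing import Callable

Detector = Callable[[bytes], float]  # returns likelihood of AI, 0.0-1.0

def assess(media: bytes, detectors: dict[str, Detector]) -> None:
    scores = {name: fn(media) for name, fn in detectors.items()}
    for name, score in scores.items():
        print(f"{name}: {score:.1%} likelihood AI-generated")
    spread = max(scores.values()) - min(scores.values())
    if spread > 0.5:
        # Sharp disagreement is itself informative: as Lyu argues,
        # hand off to a human rather than trusting one detector.
        print("Detectors disagree sharply: flag for human review")
    else:
        print(f"Median likelihood: {statistics.median(scores.values()):.1%}")

if __name__ == "__main__":
    clip = b"placeholder bytes standing in for the suspect audio file"
    assess(clip, {
        "detector_a": lambda m: 1.000,  # 100%, as with the robocall clip
        "detector_b": lambda m: 0.002,  # 0.2%
        "detector_c": lambda m: 0.468,  # 46.8%
    })
```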