More stories

  • The Lie Detectives: Trump, US politics and the disinformation damage done

    Most of Joe Biden’s past supporters see him as too old. An 81-year-old president with an unsteady step is a turn-off. But Donald Trump, Biden’s malignant, 77-year-old predecessor, vows to be a dictator for “a day”, calls for suspending the constitution and threatens Nato. “Russia, if you’re listening”, his infamous 2016 shout-out to Vladimir Putin, still haunts us eight years on. Democracy is on the ballot again.

    Against this bleak backdrop, Sasha Issenberg delivers The Lie Detectives, an examination of disinformation in politics. It is a fitting follow-up to The Victory Lab, his look at GOTV (“getting out the vote”), which was published weeks before the 2012 US election.

    Issenberg lectures at UCLA and writes for Monocle. He has covered presidential campaigns for the Boston Globe and he co-founded Votecastr, a private venture designed to track, project and publish real-time results. Voting science, though, is nothing if not tricky. A little after 4pm on election day 2016, hours before polls closed, Votecastr calculations led Slate to pronounce: “Hillary Clinton Has to Like Where She Stands in Florida”.

    The Victory Lab and The Lie Detectives are of a piece, focused on the secret sauce of winning campaigns. More than a decade ago, Issenberg gave props to Karl Rove, the architect of George W Bush’s successful election drives, and posited that micro-targeting voters had become key to finishing first. He also observed that ideological conflicts had become marbled through American politics. On that front, there has been an acceleration. These days, January 6 and its aftermath linger but much of the country has moved on, averting its gaze or embracing alternative facts.

    In 2016, Issenberg and Joshua Green of Businessweek spoke to Trump campaign digital gurus who bragged of using the internet to discourage prospective Clinton supporters. “We have three major voter suppression operations under way,” Issenberg and Green quote a senior official as saying. “They’re aimed at three groups Clinton needs to win overwhelmingly: idealistic white liberals, young women and African Americans.” It was micro-targeting on steroids.

    The exchange stuck with Issenberg. “I thought back often to that conversation with the Trump officials in the years that followed,” he writes now. “I observed so much else online that was manufactured and perpetuated with a similarly brazen impunity.”

    In The Lie Detectives, Issenberg pays particular attention and respect to Jiore Craig and her former colleagues at Greenberg Quinlan Rosner Research, a leading Democratic polling and strategy firm founded by Stan Greenberg, Bill Clinton’s pollster. Issenberg also examines the broader liberal ecosystem and its members, including the billionaire Reid Hoffman, a founder of LinkedIn and PayPal. The far-right former Brazilian president Jair Bolsonaro and his “office of hate” come under the microscope too.

    Craig’s experience included more than a dozen elections across six continents. But until Trump’s triumph, she had not worked on a domestic race. To her, to quote Issenberg, US politics was essentially “a foreign country”. Nonetheless, Craig emerged as the Democrats’ go-to for countering disinformation. “It was a unique moment in time where everybody who had looked for an answer up until that point had been abundantly wrong,” Craig says. “The fact that I had to start every race in a new country with the building blocks allowed me to see things that you couldn’t.”

    No party holds a monopoly on disinformation. In a 2017 special election for US Senate in Alabama, Democratic-aligned consultants launched Project Birmingham, a $100,000 disinformation campaign under which Republicans were urged to cast write-in ballots instead of voting for Roy Moore, the controversial GOP candidate. The project posed as a conservative operation. Eventually, Hoffman acknowledged funding it, but disavowed knowledge of disinformation and said sorry. Doug Jones, the Democrat, won by fewer than 22,000 votes. The write-in total was 22,819.

    More recently, Steve Kramer, a campaign veteran working for Dean Phillips, a long-shot candidate for the Democratic nomination against Biden, launched an AI-generated robocall that impersonated the president. Comparing himself to Paul Revere and Thomas Paine, patriots who challenged the mother country, Kramer, who also commissioned a deepfake impersonation of Senator Lindsey Graham, said Phillips was not in on the effort. If the sorry little episode showed anything, it showed disinformation is here to stay.

    Under the headline “Disinformation on steroids: is the US prepared for AI’s influence on the election?”, a recent Guardian story said: “Without clear safeguards, the impact of AI on the election might come down to what voters can discern as real and not real.”

    Free speech is on the line. Last fall, the US court of appeals for the fifth circuit – “the Trumpiest court in America”, as Vox put it – unanimously held that the Biden White House, the surgeon general, the Centers for Disease Control and Prevention (CDC) and the FBI violated the first amendment by seeking to tamp down on Covid-related misinformation. In the court’s view, social media platforms were impermissibly “coerced” or “significantly encouraged” to suppress speech government officials viewed as dangerously inaccurate or misleading. The matter remains on appeal, with oral argument before the supreme court set for later this month.

    Issenberg reminds us that Trump’s current presidential campaign has pledged that a second Trump administration will bar government agencies from assisting any effort to “label domestic speech as mis- or dis-information”. A commitment to free speech? Not exactly. More like Putinism, US-style.

    According to Kash Patel, a Trump administration veteran and true believer, a second Trump administration will target journalists for prosecution. “We will go out and find the conspirators, not just in government but in the media,” Patel told Steve Bannon, Trump’s former campaign chair and White House strategist. “Yes, we’re going to come after the people in the media who lied about American citizens, who helped Joe Biden rig presidential elections. We’re going to come after you.” Welcome to the Trump Vengeance tour.

    The Lie Detectives is published in the US by Columbia University’s Columbia Global Reports

  • ‘Disinformation on steroids’: is the US prepared for AI’s influence on the election?

    The AI election is here.

    Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.

    It’s not clear if the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that’s been pushing for federal and state regulation of AI’s use in politics. “I don’t think we need to wait to see how many people got deceived to understand that that was the point,” Gilbert said.

    Examples of what could be ahead for the US are happening all over the world. In Slovakia, fake audio recordings might have swayed an election in what serves as a “frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election”, CNN reported. In Indonesia, an AI-generated avatar of a military commander helped rebrand the country’s defense minister as a “chubby-cheeked” man who “makes Korean-style finger hearts and cradles his beloved cat, Bobby, to the delight of Gen Z voters”, Reuters reported. In India, AI versions of dead politicians have been brought back to compliment elected officials, according to Al Jazeera.

    But US regulations aren’t ready for the boom in fast-paced AI technology and how it could influence voters. Soon after the fake call in New Hampshire, the Federal Communications Commission announced a ban on robocalls that use AI audio. The Federal Election Commission (FEC) has yet to put rules in place to govern the use of AI in political ads, though states are moving quickly to fill the gap in regulation.

    The US House launched a bipartisan taskforce on 20 February that will research ways AI could be regulated and issue a report with recommendations. But with partisan gridlock ruling Congress, and US regulation trailing the pace of AI’s rapid advance, it’s unclear what, if anything, could be in place in time for this year’s elections.

    Without clear safeguards, the impact of AI on the election might come down to what voters can discern as real and not real. AI – in the form of text, bots, audio, photo or video – can be used to make it look like candidates are saying or doing things they didn’t do, either to damage their reputations or mislead voters. It can be used to beef up disinformation campaigns, making imagery that looks real enough to create confusion for voters.

    Audio content, in particular, can be even more manipulative because the technology for video isn’t as advanced yet, and recipients of AI-generated calls lose some of the contextual clues that something is fake that they might find in a deepfake video. Experts also fear that AI-generated calls will mimic the voices of people a caller knows in real life, which has the potential for a bigger influence on the recipient because the caller would seem like someone they know and trust. In what is commonly called the “grandparent” scam, callers can now use AI to clone a loved one’s voice to trick the target into sending money. That could theoretically be applied to politics and elections. “It could come from your family member or your neighbor and it would sound exactly like them,” Gilbert said. “The ability to deceive from AI has put the problem of mis- and disinformation on steroids.”

    There are less misleading uses of the technology to underscore a message, like the recent AI-generated audio calls, using the voices of kids killed in mass shootings, aimed at swaying lawmakers to act on gun violence. Some political campaigns even use AI to show alternate realities to make their points, like a Republican National Committee ad that used AI to create a fake future if Biden is re-elected. And some AI-generated imagery can seem innocuous at first – like the rampant faked images of people next to carved wooden dog sculptures popping up on Facebook – but then be used to dispatch nefarious content later on.

    People wanting to influence elections no longer need to “handcraft artisanal election disinformation”, said Chester Wisniewski, a cybersecurity expert at Sophos. Now, AI tools help dispatch bots that sound like real people more quickly, “with one bot master behind the controls like the guy on the Wizard of Oz”.

    Perhaps most concerning, though, is that the advent of AI can make people question whether anything they’re seeing is real or not, introducing a heavy dose of doubt at a time when the technologies themselves are still learning how to best mimic reality.

    “There’s a difference between what AI might do and what AI is actually doing,” said Katie Harbath, who formerly worked in policy at Facebook and now writes about the intersection between technology and democracy. People will start to wonder, she said, “what if AI could do all this? Then maybe I shouldn’t be trusting everything that I’m seeing.”

    Even without government regulation, companies that manage AI tools have announced and launched plans to limit AI’s potential influence on elections, such as having their chatbots direct people to trusted sources on where to vote and not allowing chatbots that imitate candidates. A recent pact among companies such as Google, Meta, Microsoft and OpenAI includes “reasonable precautions” such as additional labeling of and education about AI-generated political content, though it wouldn’t ban the practice.

    But bad actors often flout or skirt around government regulations and limitations put in place by platforms. Think of the “do not call” list: even if you’re on it, you still probably get some spam calls.

    At the national level, or with major public figures, debunking a deepfake happens fairly quickly, with outside groups and journalists jumping in to spot a spoof and spread the word that it’s not real. When the scale is smaller, though, there are fewer people working to debunk something that could be AI-generated, and narratives begin to set in. In Baltimore, for example, recordings posted in January of a local principal allegedly making offensive comments could be AI-generated – it’s still under investigation.

    In the absence of regulations from the FEC, a handful of states have instituted laws over the use of AI in political ads, and dozens more states have filed bills on the subject. At the state level, regulating AI in elections is a bipartisan issue, Gilbert said. The bills often call for clear disclosures or disclaimers in political ads that make sure voters understand content was AI-generated; without such disclosure, the use of AI is then banned in many of the bills, she said.

    The FEC opened a rule-making process for AI last summer, and the agency said it expects to resolve it sometime this summer, the Washington Post has reported. Until then, political ads with AI may have some state regulations to follow, but otherwise aren’t restricted by any AI-specific FEC rules.

    “Hopefully we will be able to get something in place in time, so it’s not kind of a wild west,” Gilbert said. “But it’s closing in on that point, and we need to move really fast.”

  • Want to come up with a winning election ad campaign? Just be honest | Torsten Bell

    There are so many elections this year – but how to go about winning them? Labour has a sub-optimal but impressively consistent strategy: waiting (usually a decade and a half in opposition). It’s paying off again, with huge swings to the party in last week’s two byelections. But this approach requires patience, and most parties around the world are less keen on waiting that long. So they spend a lot of time and money trying to win, which means election adverts. In the US, TV ads are centre stage. In the UK, those are largely banned (even GB News is meant to be providing news when Tory MPs interview each other), but online ads are big business.

    Those involved in politics have very strong views about the kind of ads that work. They absolutely have to be positive about your offer. Or negative about your ghastly opponent. It’s imperative they’re about issues, not personalities. Or the opposite. The only problem with those election gurus’ certainties? Different kinds of ads work at different times and places. So found researchers with access to an intriguing data source: experiments conducted by campaign teams during the 2018 and 2020 US elections to test ad options before choosing which to air; 617 ads were tested in 146 survey experiments.

    The researchers showed that quality matters – it’s not unusual for an advert to be 50% more or less persuasive than average. But no one kind of ad is generally more persuasive, and the type of ads that worked in 2018 didn’t have the same effect in 2020.

    So, if you’re trying to get yourself elected, my advice is to base your campaign on the evidence, not just your hunch – the sketch below shows the basic idea. See it as good practice. After all, we’d ideally run the country that way.
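    (A minimal sketch of the survey-experiment logic described in the column, with invented numbers: respondents are randomly assigned, one group is shown the ad, and the ad’s persuasive effect is estimated as the difference in candidate support between the two groups.)

        import math
        import random

        rng = random.Random(1)

        # Hypothetical responses: 1 = backs the candidate after the survey, 0 = does not.
        # The treated group saw the ad; the control group did not. The true underlying
        # support rates (50% vs 53%) are invented for illustration.
        control = [1 if rng.random() < 0.50 else 0 for _ in range(1000)]
        treated = [1 if rng.random() < 0.53 else 0 for _ in range(1000)]

        p_c = sum(control) / len(control)
        p_t = sum(treated) / len(treated)
        effect = p_t - p_c  # persuasion effect, in percentage points
        se = math.sqrt(p_t * (1 - p_t) / len(treated) + p_c * (1 - p_c) / len(control))

        print(f"estimated persuasion effect: {effect:+.1%} ± {1.96 * se:.1%} (95% CI)")

    Running many such head-to-head tests before airtime is bought is what lets a campaign pick its most persuasive ad rather than its most confident guess.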

  • Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos

    Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

    Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.

    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

    The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.

    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

    The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances. “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

    Clegg said each company “quite rightly has its own set of content policies”. “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play Whac-a-Mole and finding everything that you think may mislead someone.”

    Several political leaders from Europe and the US also joined Friday’s announcement. Vera Jourová, the European Commission vice-president, said while such an agreement can’t be comprehensive, “it contains very impactful and positive elements”. She also urged fellow politicians to take responsibility to not use AI tools deceptively, and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states”.

    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and most recently Indonesia have already done so.

    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked the US president Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month. And just days before Slovakia’s elections in September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

    Politicians also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

    The accord calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”. It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.

    Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out, and the companies have faced pressure to do more.

    That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves. The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

    Many social media companies already have policies in place to deter deceptive posts about electoral processes – AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation”, as well as other false posts meant to interfere with someone’s civic participation.

    Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step”, but he’d still like to see social media companies taking other actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

    Lisa Gilbert, executive vice-president of the advocacy group Public Citizen, argued Friday that the accord is “not enough”, and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems”.

    In addition to the companies that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.

    Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn’t immediately respond to a request for comment Friday.

    The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free-speech absolutist”. In a statement Friday, X CEO Linda Yaccarino said “every citizen and company has a responsibility to safeguard free and fair elections”. “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.

  • AI firm considers banning creation of political images for 2024 elections

    The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump, as part of an effort to avoid being used to distract from or misinform about the 2024 US presidential election.

    “I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.

    In a conversation with Midjourney users in a chatroom on Discord, as reported by Bloomberg, Holz went on to say: “I know it’s fun to make Trump pictures – I make Trump pictures. Trump is aesthetically really interesting. However, probably better to just not, better to pull out a little bit during this election. We’ll see.”

    AI-generated imagery has recently become a concern. Two weeks ago, pornographic imagery featuring the likeness of Taylor Swift prompted lawmakers and the so-called Swifties who support the singer to demand stronger protections against AI-generated images. The Swift images were traced back to 4chan, a community message board often linked to the sharing of sexual, racist, conspiratorial, violent or otherwise antisocial material, with or without the use of AI.

    Holz’s comments come as the safeguards created by image-generator operators play a game of cat-and-mouse with users seeking to create questionable content.

    AI in the political realm is causing increasing concern, though the MIT Technology Review recently noted that discussion about how AI may threaten democracy “lacks imagination”. “People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images,” the review noted. It added: “We’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.”

    Still, the AI company Inflection AI said in October that its chatbot, Pi, would not be allowed to advocate for any political candidate. Co-founder Mustafa Suleyman told a Wall Street Journal conference that chatbots “probably [have] to remain a human part of the process” even if they function perfectly.

    Meta’s Facebook said last week that it plans to label posts created using AI tools as part of a broader effort to combat election-year misinformation. Microsoft-affiliated OpenAI has said it will add watermarks to images made with its platforms to combat political deepfakes produced by AI. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” the company said in a blog post last month.

    OpenAI chief executive Sam Altman said at an event recently: “The thing that I’m most concerned about is that with new capabilities with AI … there will be better deepfakes than in 2020.”

    In January, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home illustrated the potential of AI political manipulation. The FCC later announced a ban on AI-generated voices in robocalls.

    “What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration – our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” David Ryan Polgar, the president of the non-profit All Tech Is Human, previously told the Guardian.

    Midjourney software was responsible for a fake image of Trump being handcuffed by agents. Others that have appeared online include Biden and Trump as elderly men knitting sweaters co-operatively, Biden grinning while firing a machine gun, and Trump meeting Pope Francis in the White House.

    The software already has a number of safeguards in place. Midjourney’s community standards guidelines prohibit images that are “disrespectful, harmful, misleading public figures/events portrayals or potential to mislead”. Bloomberg noted that what is permitted or not permitted varies according to the software version used. An older version of Midjourney produced an image of Trump covered in spaghetti, but a newer version did not.

    But if Midjourney bans the generation of political images, consumers – among them voters – will probably be unaware. “We’ll probably just hammer it and not say anything,” Holz said.

  • When dead children are just the price of doing business, Zuckerberg’s apology is empty | Carole Cadwalladr

    I don’t generally approve of blood sports, but I’m happy to make an exception for the hunting and baiting of Silicon Valley executives in a congressional committee room. But then I like expensive, pointless spectacles. And waterboarding tech CEOs in Congress is right up there with firework displays: a brief, thrillingly meaningless sensation on the retina and then darkness.

    Last week’s grilling of Mark Zuckerberg and his fellow Silicon Valley Übermenschen was a classic of the genre: front pages, headlines, and a genuinely stand-out moment of awkwardness in which he was forced to face victims for the first time ever and apologise: stricken parents holding the photographs of their dead children, lost to cyberbullying and sexual exploitation on his platform.

    Less than six hours later, his company delivered its quarterly results. Meta’s stock price surged by 20.3%, delivering a $200bn bump to the company’s market capitalisation and, if you’re counting, which as CEO he presumably does, a $700m sweetener for Zuckerberg himself. Those who listened to the earnings call tell me there was no mention of dead children.

    A day later, Biden announced, “If you harm an American, we will respond”, and dropped missiles on more than 80 targets across Syria and Iraq. Sure bro, just so long as the Americans aren’t teenagers with smartphones. US tech companies routinely harm Americans, and in particular American children, though to be fair they routinely harm all other nationalities’ children too: the Wall Street Journal has shown Meta’s algorithms enable paedophiles to find each other. New Mexico’s attorney general is suing the company for being the “largest marketplace for predators and paedophiles globally”. A coroner in Britain found that 14-year-old Molly Rose Russell “died from an act of self-harm while suffering from depression and the negative effects of online content” – which included Instagram videos depicting suicide.

    And while dispatching a crack squad of Navy Seals to Menlo Park might be too much to hope for, there are other responses that the US Congress could have mandated, such as, here’s an idea, a law. Any law. One that, say, prohibits tech companies from treating dead children as just a cost of doing business.

    Because demanding that tech companies don’t enable paedophiles to find and groom children is the lowest of all low-hanging fruit in the tech regulation space. And yet even that hasn’t happened. What America urgently needs is to act on its anti-trust laws and break up these companies as a first basic step. It needs to take an axe to Section 230, the law that gives platforms immunity from lawsuits for hosting harmful or illegal content.

    It needs basic product safety legislation. Imagine GlaxoSmithKline launched an experimental new wonder drug last year. A drug that has shown incredible benefits, including curing some forms of cancer and slowing down ageing. It might also cause brain haemorrhages and abort foetuses, but the data on that is not yet in, so we’ll just have to wait and see. There’s a reason that doesn’t happen. They’re called laws. Drug companies go through years of testing. Because they have to. Because at some point, a long time ago, Congress and other legislatures across the world did their job.

    Yet Silicon Valley’s latest extremely disruptive technology, generative AI, was released into the wild last year without even the most basic federally mandated product testing. Last week, deepfake porn images of the most famous female star on the planet, Taylor Swift, flooded social media platforms, which had no legal obligation to take them down – and hence many of them didn’t.

    But who cares? It’s only violence being perpetrated against a woman. It’s only non-consensual sexual assault, algorithmically distributed to millions of people across the planet. Punishing women is the first step in the rollout of any disruptive new technology, so get used to that, and if you think deepfakes are going to stop with pop stars, good luck with that too.

    You thought misinformation during the US election and Brexit vote in 2016 was bad? Well, let’s wait and see what 2024 has to offer. Could there be any possible downside to releasing this untested new technology – one that enables the creation of mass disinformation at scale for no cost – at the exact moment in which more people will go to the polls than at any time in history?

    You don’t actually have to imagine where that might lead, because it’s already happened. A deepfake targeting a progressive candidate dropped days before the Slovakian general election in September. It’s impossible to know what impact it had or who created it, but the candidate lost, and the opposition pro-Putin candidate won. CNN reports that the messaging of the deepfake echoed that put out by Russia’s foreign intelligence service just an hour before it dropped. And where was Facebook in all of this, you ask? Where it usually is: refusing to take many of the deepfake posts down.

    Back in Congress, grilling tech execs is something to do to fill the time in between the difficult job of not passing tech legislation. It’s now six years since the Cambridge Analytica scandal, when Zuckerberg became the first major tech executive to be commanded to appear before Congress. That was a revelation, because it felt like Facebook might finally be brought to heel.

    But Wednesday’s outing was Zuckerberg’s eighth. And neither Facebook, nor any other tech platform, has been brought to heel. The US has passed not a single federal law. Meanwhile, Facebook has done some exculpatory techwashing of its name to remove the stench of data scandals and Kremlin infiltration, and occasionally offers up its CEO for a ritual slaughtering on the Senate floor.

    To understand America’s end-of-empire waning dominance in the world, its broken legislature and its capture by corporate interests, the symbolism of a senator forcing Zuckerberg to apologise to bereaved parents while Congress – that big white building stormed by insurrectionists who found each other on social media platforms – does absolutely nothing to curb his company’s singular power is as good a place to start as any.

    We’ve had eight years to learn the lessons of 2016, and yet here we are. Britain has responded by weakening the body that protects our elections and degrading our data protection laws to “unlock post-Brexit opportunities”. American congressional committees are now a cargo cult that goes through ritualised motions of accountability. Meanwhile, there’s a new tech wonder drug on the market that may create untold economic opportunities, or lethal bioweapons and the destabilisation of what is left of liberal democracy. Probably both.

    Carole Cadwalladr is a reporter and feature writer for the Observer

  • When Mark Zuckerberg can face US senators and claim the moral high ground, we’re through the looking glass | Marina Hyde

    Did you catch a clip of the tech CEOs in Washington this week? The Senate judiciary committee had summoned five CEOs to a hearing titled Big Tech and the Online Child Sexual Exploitation Crisis. There was Meta’s Mark Zuckerberg, TikTok’s Shou Zi Chew, Snapchat’s Evan Spiegel, Discord’s Jason Citron and X’s Linda Yaccarino – and a predictable vibe of “Senator, I’m a parent myself …” Listen, these moguls simply want to provide the tools to help families and friends connect with each other. Why must human misery and untold, tax-avoidant billions attend them at every turn?

    If you did see footage from the hearing, it was probably one of two moments of deliberately clippable news content. Ranking committee member Lindsey Graham addressed Zuckerberg with the words: “I know you don’t mean it to be so, but you have blood on your hands.” Well, ditto, Senator. “You have a product that is killing people,” continued Graham, who strangely has yet to make the same point to the makers of whichever brand of AR-15 he proudly owns, or indeed to the makers of the assault rifles responsible for another record high of US school shootings last year. Firearms fatalities are the number one cause of death among US children and teenagers, a fact the tech CEOs at this hearing politely declined to mention, because no one likes a whatabouterist. And after all, the point of these things is to just get through the posturing of politicians infinitely less powerful than you, then scoot back to behaving precisely as you were before. Zuckerberg was out of there in time to report bumper results and announce Meta’s first ever dividend on Thursday. At the time of writing, its shares were soaring.

    Anyhow, if it wasn’t that clip, maybe it was the one of Zuckerberg being goaded by sedition fist-pumper Josh Hawley into apologising to those in the committee room audience who had lost children to suicide following exploitation on his platform. Thanks to some stagey prodding by Senator Hawley, who famously encouraged the mob on 6 January 2021 (before later being filmed running away from them after they stormed the Capitol), Zuckerberg turned round, stood up and faced his audience of the bereaved. “I’m sorry for everything you’ve all gone through,” he began. Helpfully, a transcribed version of this off-the-cuff moment found its way into a Meta press release minutes after the event.

    So I guess that was the hearing. “Tense”, “heated”, “stunning” – listen, if adjectival cliches were legislation, this exercise would have been something more than pointless. And yet they’re not, and it wasn’t. There really ought to be a genre name for this kind of performative busywork – the theatre of failure, perhaps.

    Other outcomes were once available. Back in 1994, the CEOs of seven big tobacco firms took their oaths before a Senate committee, then spouted a communal line that nicotine wasn’t addictive. Within two years, all seven had quit the tobacco industry – a development not unrelated to the fact that all seven were under investigation by the justice department for perjury. Those were different times, and not just because we probably wouldn’t slap them with the “seven dwarfs” moniker now. These days, you can’t escape the sense that old guys were shouting at Zuckerberg at a hearing six years ago, while he offered 2018’s variation on his favourite blandishment: “We know we have more work to do.” And you suspect they’ll be shouting at him again in five years’ time, when he will still know they have more work to do.

    “If you’re waiting on these guys to solve the problem,” sniffed Graham of the tech CEOs, “we’re gonna die waiting.” Again, the senator speaks of what he knows. There is always talk of legislation, but there is never really much legislation.

    There’s a line near the start of the movie version of Ready Player One, the cult dystopian book about a VR world that weirdly feels like the lodestar for Zuckerberg’s pivot towards the metaverse: “I was born in 2027,” explains the teenage protagonist, “after the corn syrup droughts, after the bandwidth riots … after people stopped trying to fix problems, and just tried to outlive them.” It was hard to watch any amount of Wednesday’s hearing – it’s hard to watch a lot of news about the intersection of politics and mega-business these days, in fact – and not feel we are in a very similar place. Few of the politicians giving it the hero act could be said to have left the world in a better place than the one in which they found it when they took office. A necrotic form of politics has gripped the Republican party in particular, and this is the vacuum in which they have been downgraded by corporations they don’t even understand, let alone have the will, foresight or political skill to control.

    “Companies over countries,” as Mark Zuckerberg said a long time ago. This once-unformed thought becomes more realised all the time, with the Meta boss last year explaining that “increasingly, the real world is a combination of the physical world we inhabit and the digital world we are building”. The added irony is that the more the Lindsey Grahams fail the real world, the more people retreat further into the unregulated embrace of the worlds that the Mark Zuckerbergs run. It’s going to take so much more than the theatre of failure to solve it – but bad actors currently dominate the bill.

    Marina Hyde is a Guardian columnist

  • Beware the ‘botshit’: why generative AI is such a real and imminent threat to the way we live | André Spicer

    During 2023, the shape of politics to come appeared in a video. In it, Hillary Clinton – the former Democratic party presidential candidate and secretary of state – says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”

    It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).

    The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future. Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost and highly personalised advertising being produced to manipulate voters. The results could be so-called “October surprises” – ie a piece of news that breaks just before the US elections in November, where misinformation is circulated and there is insufficient time to refute it – and the generation of misleading information about electoral administration, such as where polling stations are.

    Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote. During 2024, it is projected that there will be elections in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the European Union, the US and the UK. Many of these elections will not determine just the future of nation states; they will also shape how we tackle global challenges such as geopolitical tensions and the climate crisis. It is likely that each of these elections will be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.

    While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because during the past decade, we have witnessed the role that so-called “bullshit” can play in politics. In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit”.

    In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations”. This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality, such as explanations or images that seem superficially plausible, but aren’t actually the correct answer to whatever the question was.

    Humans might use untrue material created by generative AI in an uncritical and thoughtless way. And that could make it harder for people to know what is true and false in the world.
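    (To make the “statistically informed guesses” point concrete, here is a minimal, runnable Python sketch; the candidate answers and their probabilities are invented for illustration and are not taken from any real model.)

        import random

        # Hypothetical next-token distribution a model might hold after the prompt
        # "The battle of Waterloo took place in the year ..."
        candidates = {"1815": 0.80, "1814": 0.12, "1816": 0.08}

        rng = random.Random(42)
        samples = rng.choices(list(candidates), weights=list(candidates.values()), k=1000)
        wrong = sum(1 for s in samples if s != "1815")

        # Even with the true answer dominant, a sizeable share of generations assert
        # a false date with exactly the same fluency as the true one.
        print(f"false answer produced in {wrong / 10:.1f}% of 1,000 generations")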
    In some cases, these risks might be relatively low, for example if generative AI were used for a task that was not very important (such as coming up with some ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as the date of the battle of Waterloo). The real problems arise when the outputs of generative AI have important consequences and can’t easily be verified.

    If AI-produced hallucinations are used to answer important but difficult-to-verify questions, such as the state of the economy or the war in Ukraine, there is a real danger it could create an environment in which some people start to make important voting decisions based on an entirely illusory universe of information. There is a danger that voters could end up living in generated online realities that are based on a toxic mixture of AI hallucinations and political expediency.

    Although AI technologies pose dangers, there are measures that could be taken to limit them. Technology companies could continue to use watermarking, which allows users to easily identify AI-generated content. They could also ensure AIs are trained on authoritative information sources. Journalists could take extra precautions to avoid covering AI-generated stories during an election cycle. Political parties could develop policies to prevent the use of deceptive AI-generated information. Most importantly, voters could exercise their critical judgment by reality-checking important pieces of information they are unsure about.

    The rise of generative AI has already started to fundamentally change many professions and industries. Politics is likely to be at the forefront of this change. The Brookings Institution points out that there are many positive ways generative AI could be used in politics. But at the moment its negative uses are most obvious, and more likely to affect us imminently. It is vital we strive to ensure that generative AI is used for beneficial purposes and does not simply lead to more botshit.
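    (As an illustration of the watermarking idea mentioned above: one published approach to text watermarking, the “green list” scheme of Kirchenbauer et al (2023), has the generator bias its word choices toward a pseudorandom half of the vocabulary, and a detector flags text whose “green” fraction is improbably high. The Python sketch below is a toy model of that general concept, not any company’s actual implementation.)

        import hashlib
        import math
        import random

        VOCAB = [f"w{i}" for i in range(1000)]  # toy vocabulary

        def green_list(prev_word):
            """Derive a pseudorandom 'green' half of the vocabulary from the previous word."""
            seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
            return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

        def generate(n_words, watermark, rng):
            """Emit random toy text; the watermarking generator prefers green words."""
            words = [rng.choice(VOCAB)]
            for _ in range(n_words - 1):
                greens = green_list(words[-1])
                if watermark and rng.random() < 0.9:  # bias 90% of choices toward green
                    words.append(rng.choice(sorted(greens)))
                else:
                    words.append(rng.choice(VOCAB))
            return words

        def z_score(words):
            """How far the observed green fraction sits above the 50% chance level."""
            hits = sum(cur in green_list(prev) for prev, cur in zip(words, words[1:]))
            n = len(words) - 1
            return (hits - 0.5 * n) / math.sqrt(0.25 * n)

        rng = random.Random(0)
        print(f"unmarked text:    z = {z_score(generate(200, False, rng)):+.1f}")  # near 0
        print(f"watermarked text: z = {z_score(generate(200, True, rng)):+.1f}")   # far above 2

    The detector needs no access to the model, only to the seeding rule, which is what makes schemes of this family attractive for labelling AI-generated text at scale.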
    André Spicer is professor of organisational behaviour at the Bayes Business School at City, University of London. He is the author of the book Business Bullshit