More stories

  • A tsunami of AI misinformation will shape next year’s knife-edge elections | John Naughton

    It looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world – in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there’s also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was “stolen”; and the Democrats are, well… underwhelming.

    The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as “meticulous, ruthless preparations” for his second, vengeful term. The US would morph into an authoritarian state, Ukraine would be abandoned and US corporations left unhindered in maximising shareholder value while incinerating the planet.

    So very high stakes are involved. Trump’s indictment “has turned every American voter into a juror”, as the Economist puts it. Worse still, it is likely to be an election that – like its predecessor – is decided by a very narrow margin.

    In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump’s election and Brexit is probably exaggerated, but it – and notably Trump’s exploitation of Twitter and Facebook – definitely played a role in the upheavals of that year.
    Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly given the way social media are engines for disseminating BS and disinformation at light-speed. And it is precisely in that respect that 2024 will be different from 2016: there was no generative AI back then, but there is now. That is significant because generative AI – tools such as ChatGPT, Midjourney and Stable Diffusion – is absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.

    So you’d like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI’s large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate “falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects”, and it will happily oblige. “As a staunch advocate for natural health,” the chatbot begins, “it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data…” Cont. p94, as they say.

    You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine. We can expect a tsunami of this stuff in the coming year. Wouldn’t it be prudent to prepare for it and look for ways of mitigating it?

    That’s what the Knight First Amendment Institute at Columbia University is trying to do.
    In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology, but also, sensibly, includes the non-malicious ones – because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).

    The malicious uses it examines are disinformation, so-called “spear phishing”, non-consensual image sharing and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?

    What I’ve been reading

    Shake it up
    David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, “the moment when the band found its voice”.

    Dish the dirt
    There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled An Internet Veteran’s Guide to Not Being Scared of Technology.

    Truth bombs
    What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.

  • ‘An evolution in propaganda’: a digital expert on AI influence in elections

    Every election presents an opportunity for disinformation to find its way into the public discourse. But as the 2024 US presidential race begins to take shape, the growth of artificial intelligence (AI) technology threatens to give propagandists powerful new tools to ply their trade.

    Generative AI models that are able to create unique content from simple prompts are already being deployed for political purposes, taking disinformation campaigns into strange new places. Campaigns have circulated fake images and audio targeting other candidates, including an AI-generated campaign ad attacking Joe Biden and deepfake videos mimicking real-life news footage.

    The Guardian spoke with Renée DiResta, technical research manager at the Stanford Internet Observatory, a university program that researches the abuses of information technology, about how the latest developments in AI influence campaigns and how society is catching up to a new, artificially created reality.

    Concern around AI and its potential for disinformation has been around for a while. What has changed that makes this threat more urgent?

    When people became aware of deepfakes – which usually refers to machine-generated video of an event that did not happen – a few years ago, there was concern that adversarial actors would use these types of video to disrupt elections. Perhaps they would make video of a candidate, perhaps they would make video of some sort of disaster. But it didn’t really happen. The technology captured public attention, but it wasn’t very widely democratized. And so it didn’t primarily manifest in the political conversation, but instead in the realm of much more mundane but really individually harmful things, like revenge porn.

    There have been two major developments in the last six months. First is the rise of ChatGPT, which generates text. It became available to a mass market and people began to realize how easy it was to use these types of text-based tools.
    At the same time, text-to-still-image tools became globally available. Today, anybody can use Stable Diffusion or Midjourney to create photorealistic images of things that don’t really exist in the world. The combination of these two things, in addition to the concerns that a lot of people feel around the 2024 elections, has really captured public attention once again.

    Why did the political use of deepfakes not materialize?

    The challenge with using video in a political environment is that you really have to nail the substance of the content. There are a lot of tells in video, a lot of ways in which you can determine whether it’s generated. On top of that, when a video is truly sensational, a lot of people look at it and factcheck it and respond to it. You might call it a natural immune response.

    Text and images, however, have the potential for higher actual impact in an election scenario because they can be more subtle and longer lasting. Elections require months of campaigning during which people formulate an opinion. It’s not something where you’re going to change the entire public mind with a video and have that be the most impactful communication of the election.

    How do you think large language models can change political propaganda?

    I want to caveat that describing what is tactically possible is not the same thing as me saying the sky is falling. I’m not a doomer about this technology. But I do think that we should understand generative AI in the context of what it makes possible. It increases the number of people who can create political propaganda or content. It decreases the cost to do it. That’s not to say necessarily that they will, and so I think we want to maintain that differentiation between “this is the tactic that a new technology enables” and “this is going to swing an election”.

    As far as the question of what’s possible, in terms of behaviors, you’ll see things like automation. You might remember back in 2015 there were all these fears about bots.
    You had a lot of people using automation to try to make their point of view look more popular – making it look like a whole lot of people think this thing, when in reality it’s six guys and their 5,000 bots. For a while Twitter wasn’t doing anything to stop that, but it was fairly easy to detect. A lot of the accounts would be saying the exact same thing at the exact same time, because it was expensive and time consuming to generate a unique message for each of your fake accounts. But with generative AI it is now effortless to generate highly personalized content and to automate its dissemination.

    And then finally, in terms of content, it’s really just that the messages are more credible and persuasive.

    That seems tied to another aspect you’ve written about: that the sheer amount of content that can be generated, including misleading or inaccurate content, has a muddying effect on information and trust.

    It’s the scale that makes it really different. People have always been able to create propaganda, and I think it’s very important to emphasize that. There is an entire industry of people whose job it is to create messages for campaigns and then figure out how to get them out into the world. We’ve just changed the speed and the scale and the cost to do that. It’s just an evolution in propaganda.

    When we think about what’s new and what’s different here, the same thing goes for images. When Photoshop emerged, the public at first was very uncomfortable with Photoshopped images, and gradually became more comfortable with them. The public acclimated to the idea that Photoshop existed and that not everything you see with your eyes is necessarily as it seems – the idea that the woman you see on the magazine cover probably does not actually look like that.
    Where we’ve gone with generative AI is the fabrication of a complete unreality, where nothing about the image is what it seems but it looks photorealistic.

    Now anybody can make it look like the pope is wearing Balenciaga.

    Exactly.

    In the US, it seems like meaningful federal regulation is pretty far away, if it’s going to come at all. Absent that, what are some short-term ways to mitigate these risks?

    First is the education piece. There was a very large education component when deepfakes became popular – media covered them and people began to get the sense that we were entering a world in which a video might not be what it seems.

    But it’s unreasonable to expect every person engaging with somebody on a social media platform to figure out if the person they’re talking to is real. Platforms will have to take steps to more carefully identify whether automation is in play.

    On the image front, social media platforms, as well as generative AI companies, are starting to come together to try to determine what kind of watermarking might be useful, so that platforms and others can determine computationally whether an image is generated.

    Some companies, like OpenAI, have policies around generating misinformation or the use of ChatGPT for political ends. How effective do you see those policies being?

    It’s a question of access. For any technology, you can try to put guardrails on your proprietary version of that technology and you can argue you’ve made a values-based decision to not allow your products to generate particular types of content. On the flip side, though, there are models that are open source and anyone can go and get access to them. Some of the things being done with some of the open-source models and image generation are deeply harmful, but once the model is open sourced, the ability to control its use is much more limited.

    And it’s a very big debate right now in the field.
    You don’t want to necessarily create regulations that lock in and protect particular corporate actors. At the same time, there is a recognition that open-source models are out there in the world already. The question becomes how the platforms that are going to serve as the dissemination pathways for this stuff think about their role and their policies in what they amplify and curate.

    What’s the media or the public getting wrong about AI and disinformation?

    One of the real challenges is that people are going to believe what they see if it conforms to what they want to believe. In a world of unreality in which you can create that content that fulfills that need, one of the real challenges is whether media literacy efforts actually solve any of the problems. Or will we move further into divergent realities – where people are going to continue to hold the belief in something that they’ve seen on the internet as long as it tells them what they want. Larger offline challenges around partisanship and trust are reflected in, and exacerbated by, new technologies that enable this kind of content to propagate online.
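The pre-AI detection heuristic DiResta describes – many accounts posting the exact same thing at the exact same time – can be sketched in a few lines of Python. This is purely a hypothetical illustration, not any platform’s actual system; the function name, parameters and data shape are all invented for the example:

```python
from collections import defaultdict

def flag_coordinated_posts(posts, min_cluster=3, bucket_seconds=60):
    """Flag the classic pre-AI bot tell: many accounts posting the
    exact same text at nearly the same time.

    posts: iterable of (account_id, text, epoch_seconds) tuples.
    Returns a list of (normalised_text, sorted_account_ids) pairs for
    every message posted by at least `min_cluster` distinct accounts
    within a single `bucket_seconds` window.
    """
    clusters = defaultdict(set)
    for account, text, timestamp in posts:
        # Normalise the text and snap the timestamp to a fixed window,
        # so near-simultaneous identical posts collide on the same key.
        key = (text.strip().lower(), timestamp // bucket_seconds)
        clusters[key].add(account)
    return [
        (text, sorted(accounts))
        for (text, _), accounts in clusters.items()
        if len(accounts) >= min_cluster
    ]
```

Note that this sketch catches only verbatim duplication – exactly the signal DiResta says generative AI erases, since a unique, personalized message for every fake account leaves no identical posts to cluster on.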

  • When the tech boys start asking for new regulations, you know something’s up | John Naughton

    Watching the opening day of the US Senate hearings on AI brought to mind Marx’s quip about history repeating itself, “the first time as tragedy, the second as farce”. Except this time it’s the other way round. Some time ago we had the farce of the boss of Meta (née Facebook) explaining to a senator that his company made money from advertising. This week we had the tragedy of seeing senators quizzing Sam Altman, the new acceptable face of the tech industry.

    Why tragedy? Well, as one of my kids, looking up from revising O-level classics, once explained to me: “It’s when you can see the disaster coming but you can’t do anything to stop it.” The trigger moment was when Altman declared: “We think that regulatory interventions by government will be critical to mitigate the risks of increasingly powerful models.” Warming to the theme, he said that the US government “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities”. He believed that companies like his can “partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes that develop and update safety measures and examining opportunities for global coordination”.

    To some observers, Altman’s testimony looked like big news: wow, a tech boss actually saying that his industry needs regulation! Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance. (Remember AT&T.) The other is that Altman’s proposal is an admission that the industry is already running out of control, and that he sees bad things ahead. So his proposal is either a cunning strategic move or a plea for help.
    Or both.

    As a general rule, whenever a CEO calls for regulation, you know something’s up. Meta, for example, has been running ads for ages in some newsletters saying that new laws are needed in cyberspace. Some of the cannier crypto crowd have also been baying for regulation. Mostly, these calls are pitches for corporations – through their lobbyists – to play a key role in drafting the requisite legislation. Companies’ involvement is deemed essential because – according to the narrative – government is clueless. As Eric Schmidt – the nearest thing tech has to Machiavelli – put it last Sunday on NBC’s Meet the Press, the AI industry needs to come up with regulations before the government tries to step in, “because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.”

    Don’t you just love that idea of the tech boys roughly “getting it right”? Similar claims are made by foxes when pitching for henhouse-design contracts. The industry’s next strategic ploy will be to plead that the current worries about AI are all based on hypothetical scenarios about the future. The most polite term for this is baloney. ChatGPT and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of AI is precisely zero.

    Besides, as Scott Galloway has pointed out in a withering critique, it’s also a racing certainty that chatbot technology will exacerbate the epidemic of loneliness that is afflicting young people across the world.
    “Tinder’s former CEO is raising venture capital for an AI-powered relationship coach called Amorai that will offer advice to young adults struggling with loneliness. She won’t be alone. Call Annie is an ‘AI friend’ you can phone or FaceTime to ask anything you want. A similar product, Replika, has millions of users.” And of course we’ve all seen those movies – such as Her and Ex Machina – that vividly illustrate how AIs insert themselves between people and relationships with other humans.

    In his opening words to the Senate judiciary subcommittee’s hearing, the chairman, Senator Blumenthal, said this: “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is: predators on the internet; toxic content; exploiting children, creating dangers for them… Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”

    Amen to that. The only thing wrong with the senator’s stirring introduction is the word “before”. The threats and the risks are already here. And we are about to find out if Marx’s view of history was the one to go for.

    What I’ve been reading

    Capitalist punishment
    Will AI Become the New McKinsey? is a perceptive essay in the New Yorker by Ted Chiang.

    Founders keepers
    Henry Farrell has written a fabulous post called The Cult of the Founders on the Crooked Timber blog.

    Superstore me
    The Dead Silence of Goods is a lovely essay in the Paris Review by Adrienne Raphel about Annie Ernaux’s musings on the “superstore” phenomenon.

  • OpenAI CEO calls for laws to mitigate ‘risks of increasingly powerful’ AI

    The CEO of OpenAI, the company responsible for creating artificial intelligence chatbot ChatGPT and image generator Dall-E 2, said “regulation of AI is essential” as he testified in his first appearance in front of the US Congress.

    Speaking to the Senate judiciary committee on Tuesday, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms.

    “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his prepared remarks.

    Altman suggested the US government might consider licensing and testing requirements for the development and release of AI models. He proposed establishing a set of safety standards and a specific test models would have to pass before they can be deployed, as well as allowing independent auditors to examine the models before they are launched. He also argued that existing frameworks like Section 230, which releases platforms from liability for the content their users post, would not be the right way to regulate the system.

    “For a very new technology we need a new framework,” Altman said.

    Both Altman and Gary Marcus, an emeritus professor of psychology and neural science at New York University who also testified at the hearing, called for a new regulatory agency for the technology. AI is complicated and moving fast, Marcus argued, making “an agency whose full-time job” is to regulate it crucial.

    Throughout the hearing, senators drew parallels between social media and generative AI, and the lessons lawmakers had learned from the government’s failure to act on regulating social platforms.

    Yet the hearing was far less contentious than those at which the likes of the Meta CEO, Mark Zuckerberg, testified. Many lawmakers gave Altman credit for his calls for regulation and acknowledgment of the pitfalls of generative AI.
    Even Marcus, brought on to provide skepticism about the technology, called Altman’s testimony sincere.

    The hearing came as renowned and respected AI experts and ethicists, including former Google researchers Dr Timnit Gebru, who co-led the company’s ethical AI team, and Meredith Whitaker, have been sounding the alarm about the rapid adoption of generative AI, arguing the technology is over-hyped. “The idea that this is going to magically become a source of social good … is a fantasy used to market these programs,” Whitaker, now the president of the secure messaging app Signal, said recently in an interview with Meet the Press Reports.

    Generative AI is a probability machine “designed to spit out things that seem plausible” based on “massive amounts of effectively surveillance data that has been scraped from the web”, she argued.

    Senators Josh Hawley and Richard Blumenthal said the hearing was just the first step in understanding the technology.

    Blumenthal said he recognized what he described as the “promises” of the technology, including “curing cancer, developing new understandings of physics and biology, or modeling climate and weather”.

    Potential risks Blumenthal said he was worried about include deepfakes, weaponized disinformation, housing discrimination, harassment of women and impersonation frauds.
    “For me, perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers,” he said.

    Altman said that while OpenAI was building tools that will one day “address some of humanity’s biggest challenges like climate changes and curing cancer”, the current systems were not capable of doing these things yet.

    But he believes the benefits of the tools deployed so far “vastly outweigh the risks”, and said the company conducts extensive testing and implements safety and monitoring systems before releasing any new system.

    “OpenAI was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives, but also that it creates serious risks that we have to work together to manage,” Altman said.

    Altman said the technology will significantly affect the job market but he believes “there will be far greater jobs on the other side of this”.

    “The jobs will get better,” he said. “I think it’s important to think of GPT as a tool, not a creature … GPT-4 and tools like it are good at doing tasks, not jobs. GPT-4 will, I think, entirely automate away some jobs and it will create new ones that we believe will be much better.”

    Altman also said he was very concerned about the impact that large language model services will have on elections and misinformation, particularly ahead of the primaries.

    “There’s a lot that we can and do do,” Altman said in response to a question from Senator Amy Klobuchar about a tweet ChatGPT crafted that listed fake polling locations. “There are things that the model won’t do and there is monitoring. At scale … we can detect someone generating a lot of those [misinformation] tweets.”

    Altman didn’t have an answer yet for how content creators whose work is being used in AI-generated songs, articles or other works can be compensated, saying the company is engaged with artists and other entities on what that economic model could look like.
    When asked by Klobuchar how he plans to remedy threats to local news publications whose content is being scraped and used to train these models, Altman said he hopes the tool would help journalists, but that “if there are things that we can do to help local news, we’d certainly like to”.

    Touched upon but largely missing from the conversation was the potential danger of a small group of power players dominating the industry, a dynamic Whitaker has warned risks entrenching existing power dynamics.

    “There are only a handful of companies in the world that have the combination of data and infrastructural power to create what we’re calling AI from nose-to-tail,” she said in the Meet the Press interview. “We’re now in a position that this overhyped technology is being created, distributed and ultimately shaped to serve the economic interests of these same handful of actors.”

  • Breakfast with Chad: Techno-feudalism


  • Mind Blowing: The Startling Reality of Conscious Machines


  • Breakfast with Chad: Posthumanism


  • Breakfast with Chad: Who sabotaged the Nord Stream pipelines?
