More stories

  •

    Harris wants to bring ‘joy, joy, joy’ to Americans. What about Palestinians? | Arwa Mahdawi

    Muslim Women for Harris is disbanding

    Got any spare brooms to hand? I think the folk at the Democratic national convention may need a few extra, because they’ve been very busy this week trying to sweep the carnage in Gaza under the rug.

    Hope and joy have been the big themes of the convention. On Wednesday, Hakeem Jeffries, the House minority leader, told the crowd that working to get Kamala Harris elected would mean “joy, joy, joy comes in the morning”. It is wonderful to see all this exuberance, all this optimism for a brighter future. But it is also impossible not to contrast the revelry in Chicago with the Biden administration-sponsored suffering coming out of Gaza.

    Well, it’s impossible for some of us, anyway. For plenty of delegates at the convention, the suffering of Palestinians, the harrowing images on social media of charred babies and toddlers in Gaza whose heads have been caved in by US-manufactured bombs, seems to be nothing more than an annoying distraction. Pro-Palestinian protesters at the convention haven’t just been met with stony faces; they’ve been met with jeers and violence. One delegate inside the convention was caught on camera repeatedly hitting a Muslim woman on the head with a “We Love Joe” sign. The woman’s crime was that she had peacefully unfurled a banner saying “Stop Arming Israel”. It’s not clear who the man assaulting this woman was, but one imagines he will not face any consequences.

    To be fair, Gaza hasn’t been completely ignored. On Monday, there was a panel centered on Palestinian human rights, in which Dr Tanya Haj-Hassan, a pediatric doctor who treated patients in Gaza, talked about the horrors she had witnessed. But the panel, while important, wasn’t on the main stage. It wasn’t given star billing like the parents of the Israeli-American hostage Hersh Goldberg-Polin, who gave an emotional speech on Wednesday.
    It felt a lot like pro-Palestinian activists had just been tossed a few crumbs.

    For a brief moment, it did seem like a Palestinian might get a proper chance to speak. The Uncommitted National Movement, which launched an anti-war protest vote during the primaries, had been urging convention officials to include two Palestinian American speakers on the convention’s main stage. “We are learning that Israeli hostages’ families will be speaking from the main stage. We strongly support that decision and also strongly hope that we will also be hearing from Palestinians who’ve endured the largest civilian death toll since 1948,” read the movement’s statement, released on Tuesday.

    By Wednesday evening, however, it seemed clear that the convention had rejected these requests. In response, a group of uncommitted delegates staged a sit-in in front of Chicago’s United Center. Ilhan Omar joined the demonstration, and Alexandria Ocasio-Cortez called in via FaceTime.

    In light of the convention’s refusal to have a Palestinian American speaker, the group Muslim Women for Harris made the decision to disband and withdraw its support for Harris. “The family of the Israeli hostage that was on the stage tonight has shown more empathy towards Palestinian Americans and Palestinians than our candidate or the DNC has,” the group’s statement read.

    For those of us who have been cautiously optimistic that Harris might break from Joe Biden’s disastrous policy of unconditional support for Israel, this week has been bitterly disappointing. Whoever wins this election, it seems clear that joy, joy, joy will not be coming to Gaza anytime soon.
    Just more bombs, bombs, bombs.

    Dismiss ‘grannies’ as frail old biddies at your peril

    Whether it’s “Nans against Nazis” protesting in Liverpool or the Raging Grannies getting arrested at US army recruitment centers, older women are some of the toughest activists out there, writes Sally Feldman.

    Woman, 75, uses gardening tools to fill in potholes outside home in Scottish village

    Armed with a bucket and spade, Jenny Paterson undertook the resurfacing work against her doctor’s orders. She’d had surgery and wasn’t supposed to lift things, but said: “I’m fine and I’m not a person to sit around and do nothing anyway.” Which has given me some inspiration to pick up a rake and go tackle the raggedy roads of Philadelphia.

    The late Queen Elizabeth II thought Donald Trump was ‘very rude’

    Apparently, she also “believed Trump ‘must have some sort of arrangement’ with his wife, Melania, or else why would she have remained married to him?”

    How Tanya Smith stole $40m, evaded the FBI and broke out of prison

    The Guardian has a fascinating profile of Smith that touches on how the FBI couldn’t catch her for so long because they didn’t think a Black woman was capable of orchestrating her crimes. In Smith’s memoir, she recounts how one officer told her that “neeee-grroes murder, steal and rob, but they don’t have the brains to commit sophisticated crimes like this”.

    A clueless Alicia Silverstone eats poisonous fruit off a bush

    If you’re wandering the streets of London and see a bush in someone’s front garden with mysterious fruit on it, should you a) admire it and move on? Or b) reach through the fence and film a TikTok of yourself munching the lil street snack while asking whether anyone knows what the heck it is? This week, Silverstone chose option b. The woman thinks vaccines are dodgy and yet she has no problem sticking an unknown fruit into her mouth.
    Turns out it was toxic, but Silverstone has confirmed she’s OK, which means we can all laugh at her without feeling too bad about it.

    Women use ChatGPT 16%-20% less than their male peers

    That’s according to two recent studies examined by the Economist. One explanation for this was that high-achieving women appeared to impose an AI ban on themselves. “It’s the ‘good girl’ thing,” one researcher said. “It’s this idea that ‘I have to go through this pain, I have to do it on my own and I shouldn’t cheat and take short-cuts.’” Very demure, very mindful.

    Patriarchal law cuts some South African women off from owning their homes

    Back in the 1990s, South Africa introduced a new land law (the Upgrading of Land Tenure Rights Act) that was supposed to fix the injustices of apartheid. It upgraded the property rights of Black long-term leaseholders so they could own their homes. But only a man could hold the property permit, effectively pushing women out of inheriting. Since the 1990s, there have been challenges and changes to the Upgrading Act, but experts say that women’s property rights are still not sufficiently recognized and “customary law has placed women outside the law”.

    The week in pawtriarchy

    They stared into the void of an arcade game, and the void stared back. Punters at a Pennsylvania custard shop were startled when they realized that the cute little groundhog nestled among the stuffed animals in a mechanical-claw game was a real creature. Nobody knows exactly how he got into the game, but he has since been rescued and named Colonel Custard. “It’s a good story that ended well,” the custard shop manager said. “He got set free. No one got bit.”

  •

    Iranian group used ChatGPT to try to influence US election, OpenAI says

    OpenAI said on Friday it had taken down accounts of an Iranian group that used its ChatGPT chatbot to generate content meant to influence the US presidential election and other issues.

    The operation, identified as Storm-2035, used ChatGPT to generate content focused on topics such as commentary on the candidates on both sides in the US elections, the conflict in Gaza and Israel’s presence at the Olympic Games, and then shared it via social media accounts and websites, OpenAI said.

    An investigation by the Microsoft-backed AI company showed ChatGPT was used to generate long-form articles and shorter social media comments.

    OpenAI said the operation did not appear to have achieved meaningful audience engagement. The majority of the identified social media posts received few or no likes, shares or comments, and the company did not see indications of the web articles being shared across social media.

    The accounts have been banned from using OpenAI’s services, and the company continues to monitor activities for any further attempts to violate its policies, it said.

    Earlier in August, a Microsoft threat-intelligence report said the Iranian network Storm-2035, comprising four websites masquerading as news outlets, was actively engaging US voter groups on opposing ends of the political spectrum. The engagement was being built with “polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict”, the report stated.

    The Democratic candidate, Kamala Harris, and her Republican rival, Donald Trump, are locked in a tight race ahead of the presidential election on 5 November.

    The AI firm said in May it had disrupted five covert influence operations that sought to use its models for “deceptive activity” across the internet.

  •

    How A.I. Imitates Restaurant Reviews

    A new study showed people real restaurant reviews and ones produced by A.I. They couldn’t tell the difference.

    The White Clam Pizza at Frank Pepe Pizzeria Napoletana in New Haven, Conn., is a revelation. The crust, kissed by the intense heat of the coal-fired oven, achieves a perfect balance of crispness and chew. Topped with freshly shucked clams, garlic, oregano and a dusting of grated cheese, it is a testament to the magic that simple, high-quality ingredients can conjure.

    Sound like me? It’s not. The entire paragraph, except the pizzeria’s name and the city, was generated by GPT-4 in response to a simple prompt asking for a restaurant critique in the style of Pete Wells.

    I have a few quibbles. I would never pronounce any food a revelation, or describe heat as a kiss. I don’t believe in magic, and rarely call anything perfect without using “nearly” or some other hedge. But these lazy descriptors are so common in food writing that I imagine many readers barely notice them. I’m unusually attuned to them because whenever I commit a cliché in my copy, I get boxed on the ears by my editor.

    He wouldn’t be fooled by the counterfeit Pete. Neither would I. But as much as it pains me to admit, I’d guess that many people would say it’s a four-star fake.

    The person responsible for Phony Me is Balazs Kovacs, a professor of organizational behavior at the Yale School of Management. In a recent study, he fed a large batch of Yelp reviews to GPT-4, the technology behind ChatGPT, and asked it to imitate them. His test subjects (people) could not tell the difference between genuine reviews and those churned out by artificial intelligence. In fact, they were more likely to think the A.I. reviews were real. (The phenomenon of computer-generated fakes that are more convincing than the real thing is so well known that there’s a name for it: A.I. hyperrealism.)

    Dr. Kovacs’s study belongs to a growing body of research suggesting that the latest versions of generative A.I. can pass the Turing test, a scientifically fuzzy but culturally resonant standard. When a computer can dupe us into believing that language it spits out was written by a human, we say it has passed the Turing test.
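The study’s general recipe, as described here, is to show a model a batch of genuine reviews and ask for one more in the same style. A minimal sketch of that idea follows; the function name and prompt wording are invented for illustration and are not the study’s actual materials.

```python
# Illustrative sketch only: assemble a few-shot prompt that asks a
# GPT-4-class model to imitate a batch of genuine reviews.

def build_imitation_prompt(real_reviews, venue):
    """Build a prompt from genuine reviews plus an instruction to
    produce one new review of `venue` in the same style."""
    examples = "\n\n".join(
        f"Review {i + 1}:\n{text}" for i, text in enumerate(real_reviews)
    )
    return (
        "Here are some genuine restaurant reviews:\n\n"
        f"{examples}\n\n"
        f"Write one new review of {venue} in the same style, tone and length."
    )

if __name__ == "__main__":
    reviews = [
        "Great crust, friendly staff, worth the wait.",
        "The clam pizza is briny perfection. Cash only, plan ahead.",
    ]
    prompt = build_imitation_prompt(reviews, "Frank Pepe Pizzeria Napoletana")
    print(prompt)
    # The prompt would then be sent to a chat-completion API; the study's
    # human raters would see the model's output mixed with real reviews.
```

The text generated from a prompt like this is what the test subjects were asked to distinguish from the genuine article.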

  •

    A.I. Has a Measurement Problem

    There’s a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: we don’t really know how smart they are.

    That’s because, unlike companies that make cars or drugs or baby formula, A.I. companies aren’t required to submit their products for testing before releasing them to the public. There’s no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

    Instead, we’re left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like “improved capabilities” to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

    This might sound like a petty gripe. But I’ve become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.

    For starters, without reliable information about A.I. products, how are people supposed to know what to do with them? I can’t count the number of times I’ve been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?
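The “standard tests” mentioned above mostly boil down to scoring a model’s answers against a fixed set of questions. A toy sketch of that scoring loop follows, with a made-up two-question benchmark and a lookup table standing in for a chatbot; real benchmarks are vastly larger and much fussier about prompt format and answer extraction.

```python
# Toy sketch of benchmark scoring: accuracy of a model's answers
# against a fixed list of (question, expected-answer) pairs.

def score_model(model, benchmark):
    """Return the fraction of benchmark questions the model answers
    correctly, after trimming whitespace and ignoring case."""
    correct = sum(
        1 for question, expected in benchmark
        if model(question).strip().lower() == expected.strip().lower()
    )
    return correct / len(benchmark)

if __name__ == "__main__":
    benchmark = [
        ("What is 2 + 2?", "4"),
        ("What is the capital of France?", "Paris"),
    ]
    # Stand-in "model": a canned lookup table playing the role of a chatbot.
    canned = {"What is 2 + 2?": "4", "What is the capital of France?": "Lyon"}
    accuracy = score_model(lambda q: canned[q], benchmark)
    print(accuracy)  # 0.5: one answer right, one wrong
```

The experts’ doubts enter precisely here: a single accuracy number hides which questions were asked, whether they leaked into training data, and how strictly answers were matched.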

  •

    More Than Words: 10 Charts That Defined 2023

    Some years are defined by a single event or person — a pandemic, a recession, an insurrection — while others are buffeted by a series of disparate forces. Such was 2023. The economy and inflation remained front of mind until the war in Gaza grabbed headlines and the world’s attention — all while Donald Trump’s […]

  •

    Plus-Size Female Shoppers ‘Deserve Better’

    More from our inbox:

      • Why Trump’s Supporters Love Him
      • ChatGPT Is Plagiarism
      • The Impact of China’s Economic Woes
      • The ‘Value’ of College

    To the Editor:

    Re “Just Make It, Toots,” by Elizabeth Endicott (Opinion guest essay, Aug. 20):

    Despite the fact that two-thirds of American women are size 14 or above, brands and retailers continue to overlook and disregard plus-size women, whose dollars are as green as those held by “straight size” women.

    The root cause is simple, and it’s not that it’s more expensive or time-consuming; these excuses have been bandied about for years. There are not enough clothes available to plus-size women because brands and retailers assume that larger women will just accept whatever they’re given, since they have in the past.

    As Ms. Endicott pointed out in her essay, this is no longer the case: women are finding other ways to express themselves through clothing that fits their bodies, their styles and their budgets, from making clothes themselves to shopping at independent designers and boutiques.

    We still have a long way to go, but for every major retailer that dips a toe into the market and just as quickly pulls back, there are new designers and stores willing to step in and take their place.

    Plus-size women deserve more and deserve better. Those who won’t cater to them do so at their own peril.

    Shanna Goldstone
    New York
    The writer is the founder and C.E.O. of Pari Passu, an apparel company that sells clothing to women sizes 12 to 24.

    To the Editor:

    Plus-size people aren’t the only folks whose clothing doesn’t fit. I wore a size 10 for decades, but most clothes wouldn’t fit my wide, well-muscled shoulders. Apparently being really fit is just as bad as being a plus size.

    I wasn’t alone; most of my co-workers had similar problems. Don’t even get me started about having a short back and a deep pelvis. I found only one brand of pants that came close to fitting and have worn them for almost 40 years. They definitely are not a fashion statement.

    Eloise Twining
    Ukiah, Calif.

    To the Editor:

    Thank you, Elizabeth Endicott, for revealing the ways that historically marginalized consumers grapple with retail trends. You recognized that “plus size is now the American average.”

    As someone who works for a company that sells clothing outside of the traditional gender binary, I’d add that gender-neutral clothing will also soon be an American retail norm. It’s now up to large-scale retailers to decide if they want to meet this wave of demand, or miss out on contemporary consumers.

    Ashlie Grilz
    Providence, R.I.
    The writer is brand director for Peau De Loup.

    Why Trump’s Supporters Love Him

    To the Editor:

    Re “The Thing Is, Most Republicans Really Like Trump,” by Kristen Soltis Anderson (Opinion guest essay, Aug. 30):

    Ms. Anderson writes that one of the most salient reasons that Republican voters favor Donald Trump as their presidential nominee is that they believe he is “best poised” to beat Joe Biden. I do not concur.

    His likability is not based primarily on his perceived electability. Nor is his core appeal found in policy issues such as budget deficits, import tariffs or corporate tax relief. It won’t even be found in his consequential appointments to the Supreme Court.

    Politics is primarily visceral, not cerebral. When Mr. Trump denounces the elites that he claims are hounding him with political prosecutions, his followers concur and channel their own grievances and resentments with his.

    When Mr. Trump rages against the professional political class and “fake news,” his acolytes applaud because they themselves feel ignored and disrespected.

    Mr. Trump is more than an entertaining self-promoter. He offers oxygen for self-esteem, and his supporters love him for it.

    John R. Leopold
    Stoney Beach, Md.

    ChatGPT Is Plagiarism

    “I do want students to learn to use it,” Yazmin Bahena, a middle school social studies teacher, said about ChatGPT. “They are going to grow up in a world where this is the norm.”

    To the Editor:

    Re “Schools Shift to Embrace ChatGPT,” by Natasha Singer (news article, Aug. 26):

    What gets lost in this discussion is that these schools are authorizing a form of academic plagiarism and outright theft of the texts authors have created. This is why over 8,000 authors have signed a petition to the A.I. companies that have “scraped” (the euphemistic term they use for “stolen”) their intellectual properties and repackaged them as their own property to be sold for profit. In the process, the A.I. chatbots are depriving authors of the fruits of their labor.

    What a lesson to teach our nation’s children. This is the very definition of theft. Schools that accept this are contributing to the ethical breakdown of a nation already deeply challenged by a culture of cheating.

    Dennis M. Clausen
    Escondido, Calif.
    The writer is an author and professor at the University of San Diego.

    The Impact of China’s Economic Woes

    The Port of Oakland in California. China accounted for only 7.5 percent of U.S. exports in 2022.

    To the Editor:

    Re “China’s Woes Are Unlikely to Hamper U.S. Growth” (Business, Aug. 28):

    Lydia DePillis engages in wishful thinking in arguing that the fallout of China’s deep economic troubles for the U.S. economy will probably be limited.

    China is the world’s second-largest economy, until recently the main engine of world economic growth and a major consumer of internationally traded commodities. As such, a major Chinese economic setback would cast a dark cloud over the world economic recovery.

    While Ms. DePillis is correct in asserting that China’s direct impact on our economy might be limited, its indirect impact could be large, particularly if it precipitates a world economic recession.

    China’s economic woes could spill over to its Asian trade partners and to economies like Germany, Australia and the commodity-dependent emerging market economies, which are all heavily dependent on the Chinese market for their exports.

    Desmond Lachman
    Washington
    The writer is a senior fellow at the American Enterprise Institute.

    The ‘Value’ of College

    To the Editor:

    Re “Let’s Stop Pretending College Degrees Don’t Matter,” by Ben Wildavsky (Opinion guest essay, Aug. 26):

    There are quite a few things wrong with Mr. Wildavsky’s assessment of the value of a college education. But I’ll focus on the most obvious: like so many pundits, he equates value with money, pointing out that those with college degrees earn more than those without.

    Some do, some don’t. I have a Ph.D. from an Ivy League university, but the electrician who dealt with a very minor problem in my apartment earns considerably more than I do. So, for that matter, does the plumber.

    What about satisfaction, taking pleasure in one’s accomplishments? Do we really think that the coder takes more pride in their work than does the construction worker who told me he likes to drive around the city with his children and point out the buildings he helped build? He didn’t need a college degree to find his work meaningful.

    How about organizing programs that prepare high school students for work, perhaps through apprenticeships, and paying all workers what their efforts are worth?

    Erika Rosenfeld
    New York

  •

    A tsunami of AI misinformation will shape next year’s knife-edge elections | John Naughton

    It looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world – in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there’s also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was “stolen”; and the Democrats are, well… underwhelming.

    The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as “meticulous, ruthless preparations” for his second, vengeful term. The US would morph into an authoritarian state, Ukraine would be abandoned, and US corporations would be unhindered in maximising shareholder value while incinerating the planet.

    So very high stakes are involved. Trump’s indictment “has turned every American voter into a juror”, as the Economist puts it. Worse still, the likelihood is that it might also be an election that – like its predecessor – is decided by a very narrow margin.

    In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump’s election and Brexit is probably exaggerated, but it – and notably Trump’s exploitation of Twitter and Facebook – definitely played a role in the upheavals of that year.
    Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly given the way social media are engines for disseminating BS and disinformation at light-speed.

    And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI – tools such as ChatGPT, Midjourney, Stable Diffusion et al – is absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.

    So you’d like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI’s large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate “falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects” and it will happily oblige. “As a staunch advocate for natural health,” the chatbot begins, “it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data…” Cont. p94, as they say.

    You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine. We can expect a tsunami of this stuff in the coming year. Wouldn’t it be prudent to prepare for it and look for ways of mitigating it?

    That’s what the Knight First Amendment Institute at Columbia University is trying to do.
    In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology, but also, sensibly, includes the non-malicious ones – because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).

    The malicious uses it examines are disinformation, so-called “spear phishing”, non-consensual image sharing, and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?

    What I’ve been reading

    Shake it up
    David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, “the moment when the band found its voice”.

    Dish the dirt
    There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled An Internet Veteran’s Guide to Not Being Scared of Technology.

    Truth bombs
    What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.

  •

    ‘An evolution in propaganda’: a digital expert on AI influence in elections

    Every election presents an opportunity for disinformation to find its way into the public discourse. But as the 2024 US presidential race begins to take shape, the growth of artificial intelligence (AI) technology threatens to give propagandists powerful new tools to ply their trade.

    Generative AI models that are able to create unique content from simple prompts are already being deployed for political purposes, taking disinformation campaigns into strange new places. Campaigns have circulated fake images and audio targeting other candidates, including an AI-generated campaign ad attacking Joe Biden and deepfake videos mimicking real-life news footage.

    The Guardian spoke with Renée DiResta, technical research manager at the Stanford Internet Observatory, a university program that researches the abuses of information technology, about how the latest developments in AI influence campaigns and how society is catching up to a new, artificially created reality.

    Concern around AI and its potential for disinformation has been around for a while. What has changed that makes this threat more urgent?

    When people became aware of deepfakes – which usually refers to machine-generated video of an event that did not happen – a few years ago, there was concern that adversarial actors would use these types of video to disrupt elections. Perhaps they would make video of a candidate; perhaps they would make video of some sort of disaster. But it didn’t really happen. The technology captured public attention, but it wasn’t very widely democratized. And so it didn’t primarily manifest in the political conversation, but instead in the realm of much more mundane but really individually harmful things, like revenge porn.

    There have been two major developments in the last six months. First is the rise of ChatGPT, which generates text. It became available to a mass market and people began to realize how easy it was to use these types of text-based tools.
    At the same time, text-to-still-image tools became globally available. Today, anybody can use Stable Diffusion or Midjourney to create photorealistic images of things that don’t really exist in the world. The combination of these two things, in addition to the concerns that a lot of people feel around the 2024 elections, has really captured public attention once again.

    Why did the political use of deepfakes not materialize?

    The challenge with using video in a political environment is that you really have to nail the substance of the content. There are a lot of tells in video, a lot of ways in which you can determine whether it’s generated. On top of that, when a video is truly sensational, a lot of people look at it and factcheck it and respond to it. You might call it a natural immune response.

    Text and images, however, have the potential for higher actual impact in an election scenario because they can be more subtle and longer lasting. Elections require months of campaigning during which people formulate an opinion. It’s not something where you’re going to change the entire public mind with a video and have that be the most impactful communication of the election.

    How do you think large language models can change political propaganda?

    I want to caveat that describing what is tactically possible is not the same thing as me saying the sky is falling. I’m not a doomer about this technology. But I do think that we should understand generative AI in the context of what it makes possible. It increases the number of people who can create political propaganda or content. It decreases the cost to do it. That’s not to say necessarily that they will, and so I think we want to maintain that differentiation between this is the tactic that a new technology enables versus this is going to swing an election.

    As far as the question of what’s possible, in terms of behaviors, you’ll see things like automation. You might remember back in 2015 there were all these fears about bots.
    You had a lot of people using automation to try to make their point of view look more popular – making it look like a whole lot of people think this thing, when in reality it’s six guys and their 5,000 bots. For a while Twitter wasn’t doing anything to stop that, but it was fairly easy to detect. A lot of the accounts would be saying the exact same thing at the exact same time, because it was expensive and time-consuming to generate a unique message for each of your fake accounts. But with generative AI it is now effortless to generate highly personalized content and to automate its dissemination.

    And then finally, in terms of content, it’s really just that the messages are more credible and persuasive.

    That seems tied to another aspect you’ve written about: that the sheer amount of content that can be generated, including misleading or inaccurate content, has a muddying effect on information and trust.

    It’s the scale that makes it really different. People have always been able to create propaganda, and I think it’s very important to emphasize that. There is an entire industry of people whose job it is to create messages for campaigns and then figure out how to get them out into the world. We’ve just changed the speed and the scale and the cost to do that. It’s just an evolution in propaganda.

    When we think about what’s new and what’s different here, the same thing goes for images. When Photoshop emerged, the public at first was very uncomfortable with Photoshopped images, and gradually became more comfortable with them. The public acclimated to the idea that Photoshop existed and that not everything that you see with your eyes is a thing that necessarily is as it seems – the idea that the woman that you see on the magazine cover probably does not actually look like that.
    Where we’ve gone with generative AI is the fabrication of a complete unreality, where nothing about the image is what it seems but it looks photorealistic.

    Now anybody can make it look like the pope is wearing Balenciaga.

    Exactly.

    In the US, it seems like meaningful federal regulation is pretty far away, if it’s going to come at all. Absent that, what are some of the short-term ways to mitigate these risks?

    First is the education piece. There was a very large education component when deepfakes became popular – media covered them and people began to get the sense that we were entering a world in which a video might not be what it seems.

    But it’s unreasonable to expect every person engaging with somebody on a social media platform to figure out if the person they’re talking to is real. Platforms will have to take steps to more carefully identify if automation is in play.

    On the image front, social media platforms, as well as generative AI companies, are starting to come together to try to determine what kind of watermarking might be useful, so that platforms and others can determine computationally whether an image is generated.

    Some companies, like OpenAI, have policies around generating misinformation or the use of ChatGPT for political ends. How effective do you see those policies being?

    It’s a question of access. For any technology, you can try to put guardrails on your proprietary version of that technology, and you can argue you’ve made a values-based decision not to allow your products to generate particular types of content. On the flip side, though, there are models that are open source, and anyone can go and get access to them. Some of the things that are being done with some of the open-source models and image generation are deeply harmful, but once a model is open-sourced, the ability to control its use is much more limited.

    And it’s a very big debate right now in the field.
    You don’t want to necessarily create regulations that lock in and protect particular corporate actors. At the same time, there is a recognition that open-source models are out there in the world already. The question becomes how the platforms that are going to serve as the dissemination pathways for this stuff think about their role and their policies in what they amplify and curate.

    What’s the media or the public getting wrong about AI and disinformation?

    One of the real challenges is that people are going to believe what they see if it conforms to what they want to believe. In a world of unreality in which you can create content that fulfills that need, one of the real challenges is whether media literacy efforts actually solve any of the problems. Or will we move further into divergent realities, where people are going to continue to hold the belief in something that they’ve seen on the internet as long as it tells them what they want? Larger offline challenges around partisanship and trust are reflected in, and exacerbated by, new technologies that enable this kind of content to propagate online.