More stories

  • China Sows Disinformation About Hawaii Fires Using New Techniques

    Beijing’s influence campaign using artificial intelligence is a rapid change in tactics, researchers from Microsoft and other organizations say.

    When wildfires swept across Maui last month with destructive fury, China’s increasingly resourceful information warriors pounced.

    The disaster was not natural, they said in a flurry of false posts that spread across the internet, but was the result of a secret “weather weapon” being tested by the United States. To bolster the plausibility, the posts carried photographs that appeared to have been generated by artificial intelligence programs, making them among the first to use these new tools to enhance the aura of authenticity of a disinformation campaign.

    For China — which largely stood on the sidelines of the 2016 and 2020 U.S. presidential elections while Russia ran hacking operations and disinformation campaigns — the effort to cast the wildfires as a deliberate act by American intelligence agencies and the military was a rapid change of tactics.

    Until now, China’s influence campaigns have focused on amplifying propaganda defending its policies on Taiwan and other subjects. The most recent effort, revealed by researchers from Microsoft and a range of other organizations, suggests that Beijing is making more direct attempts to sow discord in the United States.

    The move also comes as the Biden administration and Congress are grappling with how to push back on China without tipping the two countries into open conflict, and with how to reduce the risk that A.I. is used to magnify disinformation.

    The impact of the Chinese campaign — identified by researchers from Microsoft, Recorded Future, the RAND Corporation, NewsGuard and the University of Maryland — is difficult to measure, though early indications suggest that few social media users engaged with the most outlandish of the conspiracy theories.

    Brad Smith, the vice chairman and president of Microsoft, whose researchers analyzed the covert campaign, sharply criticized China for exploiting a natural disaster for political gain.

    “I just don’t think that’s worthy of any country, much less any country that aspires to be a great country,” Mr. Smith said in an interview on Monday.

    China was not the only country to make political use of the Maui fires. Russia did as well, spreading posts that emphasized how much money the United States was spending on the war in Ukraine and that suggested the cash would be better spent at home for disaster relief.

    The researchers suggested that China was building a network of accounts that could be put to use in future information operations, including the next U.S. presidential election. That is the pattern that Russia set in the year or so leading up to the 2016 election.

    “This is going into a new direction, which is sort of amplifying conspiracy theories that are not directly related to some of their interests, like Taiwan,” said Brian Liston, a researcher at Recorded Future, a cybersecurity company based in Massachusetts.

    A destroyed neighborhood in Lahaina, Hawaii, last month. China has made the wildfires a target of disinformation. (Credit: Go Nakamura for The New York Times)

    If China does engage in influence operations for the election next year, U.S. intelligence officials have assessed in recent months, it is likely to try to diminish President Biden and raise the profile of former President Donald J. Trump. While that may seem counterintuitive to Americans who remember Mr. Trump’s effort to blame Beijing for what he called the “China virus,” the intelligence officials have concluded that Chinese leaders prefer Mr. Trump. He has called for pulling Americans out of Japan, South Korea and other parts of Asia, while Mr. Biden has cut off China’s access to the most advanced chips and the equipment made to produce them.

    China’s promotion of a conspiracy theory about the fires comes after Mr. Biden vented in Bali last fall to Xi Jinping, China’s president, about Beijing’s role in the spread of such disinformation. According to administration officials, Mr. Biden angrily criticized Mr. Xi for the spread of false accusations that the United States operated biological weapons laboratories in Ukraine.

    There is no indication that Russia and China are working together on information operations, according to the researchers and administration officials, but they often echo each other’s messages, particularly when it comes to criticizing U.S. policies. Their combined efforts suggest a new phase of the disinformation wars is about to begin, one bolstered by the use of A.I. tools.

    “We don’t have direct evidence of coordination between China and Russia in these campaigns, but we’re certainly finding alignment and a sort of synchronization,” said William Marcellino, a researcher at RAND and an author of a new report warning that artificial intelligence will enable a “critical jump forward” in global influence operations.

    The wildfires in Hawaii — like many natural disasters these days — spawned numerous rumors, false reports and conspiracy theories almost from the start.

    Caroline Amy Orr Bueno, a researcher at the University of Maryland’s Applied Research Lab for Intelligence and Security, reported that a coordinated Russian campaign began on Twitter, the social media platform now known as X, on Aug. 9, a day after the fires started. It spread the phrase “Hawaii, not Ukraine” from one obscure account with few followers through a series of conservative or right-wing accounts like Breitbart and ultimately Russian state media, reaching thousands of users with a message intended to undercut U.S. military assistance to Ukraine.

    President Biden has criticized President Xi Jinping of China for the spread of false accusations about the United States and Ukraine. (Credit: Florence Lo/Reuters)

    China’s state media apparatus often echoes Russian themes, especially animosity toward the United States. But in this case, it also pursued a distinct disinformation campaign.

    Recorded Future first reported that the Chinese government mounted a covert campaign to blame a “weather weapon” for the fires, identifying numerous posts in mid-August falsely claiming that MI6, the British foreign intelligence service, had revealed “the amazing truth behind the wildfire.” Posts with the exact language appeared on social media sites across the internet, including Pinterest, Tumblr, Medium and Pixiv, a Japanese site used by artists.

    Other inauthentic accounts spread similar content, often accompanied by mislabeled videos, including one from a popular TikTok account, The Paranormal Chic, that showed a transformer explosion in Chile. According to Recorded Future, the Chinese content often echoed — and amplified — posts by conspiracy theorists and extremists in the United States, including white supremacists.

    The Chinese campaign operated across many of the major social media platforms — and in many languages, suggesting it was aimed at reaching a global audience. Microsoft’s Threat Analysis Center identified inauthentic posts in 31 languages, including French, German and Italian, but also in less prominent ones like Igbo, Odia and Guarani.

    The artificially generated images of the Hawaii wildfires identified by Microsoft’s researchers appeared on multiple platforms, including a Reddit post in Dutch. “These specific A.I.-generated images appear to be exclusively used” by Chinese accounts used in this campaign, Microsoft said in a report. “They do not appear to be present elsewhere online.”

    Clint Watts, the general manager of Microsoft’s Threat Analysis Center, said that China appeared to have adopted Russia’s playbook for influence operations, laying the groundwork to influence politics in the United States and other countries.

    “This would be Russia in 2015,” he said, referring to the bots and inauthentic accounts Russia created before its extensive online influence operation during the 2016 election. “If we look at how other actors have done this, they are building capacity. Now they’re building accounts that are covert.”

    Natural disasters have often been the focus of disinformation campaigns, allowing bad actors to exploit emotions to accuse governments of shortcomings, either in preparation or in response. The goal can be to undermine trust in specific policies, like U.S. support for Ukraine, or more generally to sow internal discord. By suggesting the United States was testing or using secret weapons against its own citizens, China’s effort also seemed intended to depict the country as a reckless, militaristic power.

    “We’ve always been able to come together in the wake of humanitarian disasters and provide relief in the wake of earthquakes or hurricanes or fires,” said Mr. Smith, who is presenting some of Microsoft’s findings to Congress on Tuesday. “And to see this kind of pursuit instead is both, I think, deeply disturbing and something that the global community should draw a red line around and put off-limits.”

  • Today’s Top News: Trump Gets a Trial Date, and More

    The New York Times Audio app is home to journalism and storytelling, and provides news, depth and serendipity. If you haven’t already, download it here — available to Times news subscribers on iOS — and sign up for our weekly newsletter.

    The Headlines brings you the biggest stories of the day from the Times journalists who are covering them, all in about 10 minutes. Hosted by Annie Correal, the new morning show features three top stories from reporters across the newsroom and around the world, so you always have a sense of what’s happening, even if you only have a few minutes to spare.

    Former President Donald J. Trump faces federal and state investigations in New York, Georgia and Washington. (Credit: Doug Mills/The New York Times)

    On Today’s Episode:

    • An Update on Tropical Storm Idalia
    • Judge Sets Trial Date in March for Trump’s Federal Election Case, with Glenn Thrush
    • A.I. Comes to the U.S. Air Force, with Eric Lipton

    Eli Cohen

  • Our Immigration System: ‘A Waste of Talent’

    More from our inbox:

    • Cruelty at the Border
    • Limiting the President’s Pardon Powers
    • Are A.I. Weapons Next?
    • U.S. Food Policy Causes Poor Food Choices

    Mateo Miño, left, in the church in Queens where he experienced a severe anxiety attack two days after arriving in New York. (Credit: Christopher Lee for The New York Times)

    To the Editor:

    “As Politicians Cry Crisis, Migrants Get a Toehold” (news article, July 15) points up the irrationality of the U.S. immigration system. As this article shows, migrants are eager to work, and they are filling significant gaps in fields such as construction and food delivery, but there are still great unmet needs for home health aides and nursing assistants.

    The main reason for this disjunction lies in federal immigration law, which offers no dedicated visa slots for these occupations (as it does for professionals and even for seasonal agricultural and resort workers) because they are considered “unskilled.” Instead, the law stipulates, applicants must demonstrate that they are “performing work for which qualified workers are not available in the United States” — clearly a daunting task for individual migrants.

    As a result, many do end up working in fields like home health care but without documentation and are thus vulnerable to exploitation if not deportation. With appropriate reforms, our system would be capable of meeting both the country’s needs for essential workers and migrants’ needs for safe havens.

    Sonya Michel
    Silver Spring, Md.

    The writer is professor emerita of history and women’s and gender studies at the University of Maryland, College Park.

    To the Editor:

    We have refugee doctors and nurses who are driving taxi cabs. What a waste of talent that is needed in so many areas of our country.

    Why isn’t there a program to use their knowledge and skills by working with medical associations to qualify them, especially if they agree to work in parts of the country that have a shortage of doctors and nurses? It would be a win-win situation.

    There are probably other professions where similar ideas would work.

    David Albenda
    New York

    Cruelty at the Border

    Texas Department of Public Safety troopers look over the Rio Grande, as migrants walk by. (Credit: Suzanne Cordeiro/Agence France-Presse — Getty Images)

    To the Editor:

    Re “Officers Voice Concerns Over Aggressive Tactics at the Border in Texas” (news article, July 20):

    In the past year, I have done immigration-related legal work in New York City with recently arrived asylum seekers from all over the world: Venezuela, China, Honduras, Guatemala, Ecuador and Ghana. Most entered the U.S. on foot through the southern border. Some spent weeks traversing the perilous Darién Gap — an unforgiving jungle — and all are fleeing from horribly violent and scary situations. Texas’ barbed wire is not going to stop them.

    I am struck by the message of the mayor of Eagle Pass, Rolando Salinas Jr., who, supportive of legal migration and orderly law enforcement, said, “What I am against is the use of tactics that hurt people.” I desperately hope we can all agree about this. There should be no place for immigration enforcement tactics that deliberately and seriously injure people.

    I was disturbed to read that Texas is hiding razor wire in dark water and deploying floating razor-wire-wrapped “barrel traps.” These products of Gov. Greg Abbott’s xenophobia are cruel to a staggering degree.

    Noa Gutow-Ellis
    New York

    The writer is a law school intern at the Kathryn O. Greenberg Immigration Justice Clinic at the Benjamin N. Cardozo School of Law.

    Limiting the President’s Pardon Powers

    (Credit: Tom Brenner for The New York Times)

    To the Editor:

    Re “U.S. Alleges Push at Trump’s Club to Erase Footage” (front page, July 28) and “Sudden Obstacle Delays Plea Deal for Biden’s Son” (front page, July 27):

    With Donald Trump campaigning to return to the White House while under felony indictment, and with Hunter Biden’s legal saga unresolved, there should be bipartisan incentive in Congress for proposing a constitutional amendment limiting the president’s pardon power.

    A proposed amendment should provide that the president’s “reprieves and pardons” power under Article II, Section 2, shall not apply to offenses, whether committed in office or out, by the president himself or herself; the vice president and cabinet-level officers; any person whose unlawful conduct was solicited by or intended to benefit any of these officials; or a close family member of any of these individuals.

    Stephen A. Silver
    San Francisco

    The writer is a lawyer.

    To the Editor:

    Beyond asking “Where’s my Roy Cohn?” Donald Trump may now ask, “Where’s my Rose Mary Woods?”

    David Schubert
    Cranford, N.J.

    Are A.I. Weapons Next?

    (Credit: Andreas Emil Lund)

    To the Editor:

    Re “Our Oppenheimer Moment: The Creation of A.I. Weapons,” by Alexander C. Karp (Opinion guest essay, July 30):

    Mr. Karp argues that to protect our way of life, we must integrate artificial intelligence into weapons systems, citing our atomic might as precedent. However, nuclear weapons are sophisticated and difficult to produce. A.I. capabilities are software, leaving them vulnerable to theft, cyberhacking and data poisoning by adversaries.

    The risk of proliferation beyond leading militaries was appreciated by the United States and the Soviet Union when banning bioweapons, and the same applies to A.I. It also carries an unacceptable risk of conflict escalation, illustrated in our recent film “Artificial Escalation.”

    J. Robert Oppenheimer’s legacy offers a different lesson when it comes to advanced general-purpose A.I. systems. The nuclear arms race has haunted our world with annihilation for 78 years. It was luck that spared us. That race ebbed only as leaders came to understand that such a war would destroy humanity.

    The same is true now. To survive, we must recognize that the reckless pursuit and weaponization of inscrutable, probably uncontrollable advanced A.I. is not a winnable race. It is a suicide race.

    Anthony Aguirre
    Santa Cruz, Calif.

    The writer is the executive director and a co-founder of the Future of Life Institute.

    U.S. Food Policy Causes Poor Food Choices

    (Credit: Steven May/Alamy)

    To the Editor:

    Re “Vegans Make Smaller Mark on the Planet Than Others” (news article, July 22):

    While I agree that people could help reduce greenhouse-gas emissions by eating plants only, I find it crucial to note that food policy is the main reason for poor food choices.

    Food choices follow food policy, and U.S. food policy is focused on meat, dairy, fish and eggs. Our massive network of agriculture universities runs “animal science” programs, providing billions of dollars’ worth of training, public relations, research, experimentation and sales for animal products.

    Our government provides subsidies to the meat, dairy, fish and egg industries far beyond what fruits, vegetables and other plant foods receive. Federal and state agriculture officials are typically connected to the meat or dairy industry. The public pays the cost of animal factories’ contamination of water and soil, and of widespread illness linked to eating animals, since humans are natural herbivores.

    No wonder the meat, dairy, fish and egg industries have so much money for advertising, marketing and public relations, keeping humans deceived about their biological nature and what is good for them to eat.

    David Cantor
    Glenside, Pa.

    The writer is founder and director of Responsible Policies for Animals.

  • A Trump-Biden Rematch That Many Are Dreading

    More from our inbox:

    • Perils of A.I., and Limits on Its Development

    An image from a televised presidential debate in 2020. (Credit: Damon Winter/The New York Times)

    To the Editor:

    Re “The Presidential Rematch Nobody Wants,” by Pamela Paul (column, July 21):

    Ms. Paul asks, “Have you met anyone truly excited about Joe Biden running for re-election?”

    I am wildly enthusiastic about President Biden, who is the best president in my lifetime. His bills to repair America’s infrastructure and bring back chip manufacturing are both huge accomplishments. Mr. Biden has done more to combat climate change, the existential issue of the day, than all the presidents who have gone before him.

    Mr. Biden extracted us from the endless morass of Afghanistan. He has marshaled the free peoples of the world to stop the Russian takeover of Ukraine, giving dictators around the world pause.

    Mr. Biden is the first president in a generation to really believe in unions and to emphasize the issues of working people, understanding how much jobs matter.

    I might wish he were 20 years younger. I wish I were 20 years younger.

    Most important, Joe Biden is an honorable man at a time when his biggest rivals do not know the meaning of the word. Being honorable is the essential virtue, without which youth or glibness do not matter. I support his re-election with all my heart and soul.

    Gregg Coodley
    Portland, Ore.

    To the Editor:

    We endured (barely) four years of Donald Trump. Now we have Joe Biden, whose time has come and gone, and third-party disrupters who know they cannot win but are looking for publicity.

    Mr. Biden had his turn, and it is exceedingly arrogant of him to believe that he is our best hope. His good sense and moral values won’t help if Donald Trump wins against him, which is eminently possible. The Democratic Party must nominate a powerfully charismatic candidate.

    Mitchell Zuckerman
    New Hope, Pa.

    To the Editor:

    I think Pamela Paul misses the point entirely. No, Biden supporters are not jumping up and down in a crazed frenzy like Trump supporters. That is actually a good thing. People like me who fully support President Biden’s re-election are sick and tired of the nonstop insanity that is Donald Trump. I’m very happy to have a sound, calm, upstanding president who actually gets things done for middle- and working-class Americans.

    Excitement isn’t the answer to solving America’s problems. A president who gets things done is — like Joe Biden!

    Sue Everett
    Chattanooga, Tenn.

    To the Editor:

    Pamela Paul is spot on in her diagnosis of the depressing likelihood of Trump vs. Biden, Round 2.

    The solution is money, as is true in all things in American politics. The Big Money donors in the Democratic Party should have a conference call with Team Biden and tell it, flat out, we’re not supporting the president’s re-election. It’s time for a younger generation of leaders. Without their money, President Biden would realize that he cannot run a competitive campaign.

    But in a strange echo of how Republican leaders genuflect to Donald Trump and don’t confront him, the wealthy contributors to the Democratic Party do exactly the same with Mr. Biden.

    Ethan Podell
    Rutherford Island, Maine

    To the Editor:

    In an ideal world, few would want a presidential rematch. Donald Trump is a menace, and it would be nice to have a Democratic nominee who is young, charismatic and exciting. But in the real world, I favor a Trump-Biden rematch, if Mr. Trump is the Republican nominee.

    Mr. Biden might shuffle like a senior, and mumble his words, but he is a decent man who loves our country and has delivered beyond expectations.

    In leadership crises, Americans yearn for shiny new saviors riding into town on a stallion. I prefer an honest old shoe whom we can count on to get us through an election of a lifetime.

    Jerome T. Murphy
    Cambridge, Mass.

    The writer is a retired Harvard professor and dean who taught courses on leadership.

    To the Editor:

    I am grateful to Pamela Paul for articulating and encapsulating how I, and probably many others, feel about the impending 2024 presidential race. I appreciate the stability that President Biden returned to the White House and our national politics. However, the future demands so much more than Mr. Biden or any other announced candidate can deliver.

    Christine Cunha
    Bolinas, Calif.

    To the Editor:

    Pamela Paul presents many reasons, in her view, why President Biden is a flawed candidate, including that Mr. Biden’s “old age is showing.” As an example, she writes that during an interview on MSNBC he appeared to wander off the set.

    Fox News has been pushing this phony notion relentlessly, claiming that he walked off while the host was still talking. In fact, the interview was over, Mr. Biden shook hands with the host, they both said goodbye, and while Mr. Biden left the set, the host faced the camera and announced what was coming up next on her show.

    Howard Ehrlichman
    Huntington, N.Y.

    Perils of A.I., and Limits on Its Development

    OpenAI’s logo at its offices in San Francisco. The company is testing an image analysis feature for its ChatGPT chatbot. (Credit: Jim Wilson/The New York Times)

    To the Editor:

    Re “New Worries That Chatbot Reads Faces” (Business, July 19):

    The integration of facial surveillance and generative A.I. carries a warning: Without prohibitions on the use of certain A.I. techniques, the United States could easily construct a digital dystopia, adopting A.I. systems favored by authoritarian governments for social control.

    Our report “Artificial Intelligence and Democratic Values” established that facial surveillance is among the most controversial A.I. deployments in the world. UNESCO urged countries to prohibit the use of A.I. for mass surveillance. The European Parliament proposes a ban in the pending E.U. Artificial Intelligence Act. And Clearview AI, the company that scraped images from websites, is now prohibited in many countries.

    Earlier this year, we urged the Federal Trade Commission to open an investigation of OpenAI. We specifically asked the agency to prevent the deployment of future versions of ChatGPT, such as the technique that will make it possible to match facial images with data across the internet.

    We now urge the F.T.C. to expedite the investigation and clearly prohibit the use of A.I. techniques for facial surveillance. Even the White House announcement of voluntary standards for the A.I. industry offers no guarantee of protection. Legal standards, not industry assurances, are what is needed now.

    Merve Hickok
    Lorraine Kisselburgh
    Marc Rotenberg
    Washington

    The writers are, respectively, the president, the chair and the executive director of the Center for A.I. and Digital Policy, an independent research organization. Ms. Hickok testified before Congress in March on the need to establish guardrails for A.I.

    To the Editor:

    Re “Pressed by Biden, Big Tech Agrees to A.I. Rules” (front page, July 22):

    It is troubling that the Biden administration is jumping in and exacting “voluntary” limitations on the development of A.I. technologies. The government manifestly lacks the expertise and knowledge necessary to ascertain what guardrails might be appropriate, and the inevitable outcome will be to stifle innovation and reduce competition, the worst possible result.

    Imagine what the internet would be today had the government played a similarly intrusive and heavy-handed role at its inception.

    Kenneth A. Margolis
    Chappaqua, N.Y.

  • A.I.’s Use in Elections Sets Off a Scramble for Guardrails

    Gaps in campaign rules allow politicians to spread images and messaging generated by increasingly powerful artificial intelligence technology.In Toronto, a candidate in this week’s mayoral election who vows to clear homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camped out on a downtown street and a fabricated image of tents set up in a park.In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry shop.In Chicago, the runner-up in the mayoral vote in April complained that a Twitter account masquerading as a news outlet had used A.I. to clone his voice in a way that suggested he condoned police brutality.What began a few months ago as a slow drip of fund-raising emails and promotional images composed by A.I. for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.Increasingly, political consultants, election researchers and lawmakers say setting up new guardrails, such as legislation reining in synthetically generated ads, should be an urgent priority. Existing defenses, such as social media rules and services that claim to detect A.I. content, have failed to do much to slow the tide.As the 2024 U.S. presidential race starts to heat up, some of the campaigns are already testing the technology. The Republican National Committee released a video with artificially generated images of doomsday scenarios after President Biden announced his re-election bid, while Gov. Ron DeSantis of Florida posted fake images of former President Donald J. Trump with Dr. Anthony Fauci, the former health official. 
The Democratic Party experimented with fund-raising messages drafted by artificial intelligence in the spring — and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans.Some politicians see artificial intelligence as a way to help reduce campaign costs, by using it to create instant responses to debate questions or attack ads, or to analyze data that might otherwise require expensive experts.At the same time, the technology has the potential to spread disinformation to a wide audience. An unflattering fake video, an email blast full of false narratives churned out by computer or a fabricated image of urban decay can reinforce prejudices and widen the partisan divide by showing voters what they expect to see, experts say.The technology is already far more powerful than manual manipulation — not perfect, but fast improving and easy to learn. In May, the chief executive of OpenAI, Sam Altman, whose company helped kick off an artificial intelligence boom last year with its popular ChatGPT chatbot, told a Senate subcommittee that he was nervous about election season.He said the technology’s ability “to manipulate, to persuade, to provide sort of one-on-one interactive disinformation” was “a significant area of concern.”Representative Yvette D. Clarke, a Democrat from New York, said in a statement last month that the 2024 election cycle “is poised to be the first election where A.I.-generated content is prevalent.” She and other congressional Democrats, including Senator Amy Klobuchar of Minnesota, have introduced legislation that would require political ads that used artificially generated material to carry a disclaimer. 
A similar bill in Washington State was recently signed into law.The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its ethics code.“People are going to be tempted to push the envelope and see where they can take things,” said Larry Huynh, the group’s incoming president. “As with any tool, there can be bad uses and bad actions using them to lie to voters, to mislead voters, to create a belief in something that doesn’t exist.”The technology’s recent intrusion into politics came as a surprise in Toronto, a city that supports a thriving ecosystem of artificial intelligence research and start-ups. The mayoral election takes place on Monday.A conservative candidate in the race, Anthony Furey, a former news columnist, recently laid out his platform in a document that was dozens of pages long and filled with synthetically generated content to help him make his tough-on-crime position.A closer look clearly showed that many of the images were not real: One laboratory scene featured scientists who looked like alien blobs. A woman in another rendering wore a pin on her cardigan with illegible lettering; similar markings appeared in an image of caution tape at a construction site. Mr. Furey’s campaign also used a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.Anthony Furey, a candidate in Toronto’s mayoral election on Monday, used an A.I. image of a woman with three arms.The other candidates mined that image for laughs in a debate this month: “We’re actually using real pictures,” said Josh Matlow, who showed a photo of his family and added that “no one in our pictures have three arms.”Still, the sloppy renderings were used to amplify Mr. Furey’s argument. He gained enough momentum to become one of the most recognizable names in an election with more than 100 candidates. 
In the same debate, he acknowledged using the technology in his campaign, adding that “we’re going to have a couple of laughs here as we proceed with learning more about A.I.”Political experts worry that artificial intelligence, when misused, could have a corrosive effect on the democratic process. Misinformation is a constant risk; one of Mr. Furey’s rivals said in a debate that while members of her staff used ChatGPT, they always fact-checked its output.“If someone can create noise, build uncertainty or develop false narratives, that could be an effective way to sway voters and win the race,” Darrell M. West, a senior fellow for the Brookings Institution, wrote in a report last month. “Since the 2024 presidential election may come down to tens of thousands of voters in a few states, anything that can nudge people in one direction or another could end up being decisive.”Increasingly sophisticated A.I. content is appearing more frequently on social networks that have been largely unwilling or unable to police it, said Ben Colman, the chief executive of Reality Defender, a company that offers services to detect A.I. The feeble oversight allows unlabeled synthetic content to do “irreversible damage” before it is addressed, he said.“Explaining to millions of users that the content they already saw and shared was fake, well after the fact, is too little, too late,” Mr. Colman said.For several days this month, a Twitch livestream has run a nonstop, not-safe-for-work debate between synthetic versions of Mr. Biden and Mr. Trump. Both were clearly identified as simulated “A.I. entities,” but if an organized political campaign created such content and it spread widely without any disclosure, it could easily degrade the value of real material, disinformation experts said.Politicians could shrug off accountability and claim that authentic footage of compromising actions was not real, a phenomenon known as the liar’s dividend. 
Ordinary citizens could make their own fakes, while others could entrench themselves more deeply in polarized information bubbles, believing only the sources they chose to believe.

“If people can’t trust their eyes and ears, they may just say, ‘Who knows?’” Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, wrote in an email. “This could foster a move from healthy skepticism that encourages good habits (like lateral reading and searching for reliable sources) to an unhealthy skepticism that it is impossible to know what is true.”


    Ron DeSantis’s Entry Into the Republican Race

More from our inbox:

  • The Futility of Debating Trump
  • Listen to Trans People, and Detransitioners Too
  • Regulating A.I.: Can It Be Done?
  • Splitting Finances During Divorce
  • Musing About the ‘Best’

Eze Amos for The New York Times

To the Editor:

Re “Hot Mic, Dead Air and Eventually, DeSantis Speaks” (front page, May 25):

So Ron DeSantis finally entered the race. Among his highest priorities is a crusade against D.E.I. (diversity, equity and inclusion) and “woke” that we must all witness now.

I have three questions for Mr. DeSantis:

First: What is wrong with diversity? Ecosystems are more resilient if there is diversity. Likewise for human societies. And diverse societies are more fascinating. Color is interesting; monochrome is boring.

Second: What is wrong with equity? Don’t all Americans believe in equality of opportunity and equality before the law? And we know that extreme inequality of income and wealth hurts the economy.

Third: What is wrong with inclusion? Which group do we propose to leave out? Don’t all God’s creatures have a place in the choir?

Bonus question: D.E.I. is what wokeness is all about. What is so bad about wokeness? Whom does it harm? Where is the angry mob? Why should “woke” go to Florida to die?

I put these questions to the governor.

Michael P. Bacon
Westbrook, Maine

To the Editor:

While Twitter may have its share of weaknesses, Gov. Ron DeSantis has skillfully demonstrated his leadership qualities and strengths. Choosing facts over fear, education over indoctrination, law and order over rioting and disorder — Mr. DeSantis’s record speaks for itself.

Because of his common sense and guidance, Florida is growing now more than ever as people are migrating and planting new roots in the Sunshine State. With Florida as the model, we need look no further than Ron DeSantis as our nation’s future.

JoAnn Lee Frank
Clearwater, Fla.

The Futility of Debating Trump

Doug Mills/The New York Times

To the Editor:

It is not too early to mention presidential debates.
The Times should make an unprecedented recommendation that the sitting president not debate former President Donald Trump during the 2024 campaign.

One simply cannot debate an inveterate, incessant liar. I mean that in the most literal sense: Lying is not debating, and it takes two to engage in debate. It cannot be done.

Witness the recent CNN debacle, where, even when checked assiduously by the moderator, Mr. Trump repeated nothing but lies. Everyone who could have conceivably been convinced that the former president ignores the truth completely was already convinced. All others will never be convinced.

Therefore, there is no upside whatsoever to sharing the stage with such a mendacious bloviator. In fact, it may serve only as an opportunity for the former president to call for another round of “stand back and stand by.” Should President Biden give him that opportunity?

David Neuschulz
Chatham, N.J.

Listen to Trans People, and Detransitioners Too

Chloe Cole, who lived for years as a transgender boy before returning to her female identity, now travels the country promoting bans on transition care for minors. She received a standing ovation at Gov. Ron DeSantis’s State of the State speech in Florida in March. Phil Sears/Associated Press

To the Editor:

Re “G.O.P. Focuses on Rare Stories of Trans Regret” (front page, May 17):

While the article rightly notes that the campaign to ban gender transition in minors is led by Republicans, it falls into the trap of viewing youth gender medicine and detransition as a right-versus-left issue. Many people who support equality for trans and detrans people insist that a public health lens is crucial.

The article doesn’t mention the growing transnational archive of people who detransition, commonly with feelings of regret for having transitioned. If you look at countries with national universal health care systems like Sweden, youth gender care has recently evolved following state-funded reviews of transgender treatment.
By contrast, in the U.S., our highly privatized and compartmentalized managed care system contributes to the politicization of this issue to the detriment of all.

Perhaps this is why the article seems to downplay the trauma that saturates detransitioners’ testimonies. To mourn the loss of one’s breasts or ability to reproduce is no small matter.

Journalists should stop equating detransition with an attack on transgender people. Instead, they should see young people testifying to medical harm as a call for accountability and strive to understand the full range of their experiences without fueling the dangerous right-left divide.

Daniela Valdes
New Brunswick, N.J.

The writer is a doctoral candidate at Rutgers University who researches detransition.

Regulating A.I.: Can It Be Done?

Sam Altman, chief executive of OpenAI, believes that developers are on a path to building machines that can do anything the human brain can do. Ian C. Bates for The New York Times

To the Editor:

Re “The Most Important Man in Tech (Right Now)” (Business, May 17):

Warnings about the enormous dangers of artificial intelligence are warranted, but mere calls for “regulations” are empty. The question is not whether regulatory regimes are needed, but how to control the uses to which A.I. can be put.

Anything human or nonhuman that is capable of creative thought is also capable of creating mechanisms for self-preservation, for survival. The quest for a “precision regulation approach to A.I.” is likely to prove elusive.

Norman Cousins, Carl Sagan, Alvin Toffler and many others have presciently warned that technological advances provide both a cure for some of humanity’s afflictions and a curse, potentially threatening human existence.

One doomsday scenario would be for tech scientists to ask A.I.
itself for methods to control its use and abuse, only to receive a chilling reply: “Nice try!”

Charles Kegley
Columbia, S.C.

The writer is emeritus professor of international relations at the University of South Carolina.

Splitting Finances During Divorce

Lisk Feng

To the Editor:

Re “Rebuilding Finances After Divorce” (Business, May 18):

Your article is correct in advising spouses that they may “land in financial hot water” unless they seek expert advice concerning splitting retirement assets at divorce. But getting good advice, while a necessity, is not enough.

Even if a spouse is awarded a share of a 401(k) or pension benefit as part of a divorce decree, that alone is not enough. Under the federal private pension law ERISA, spouses must obtain a special court order called a qualified domestic relations order (better known as a Q.D.R.O.) to get their rightful share of private retirement benefits.

This should be done sooner rather than later. A Q.D.R.O. is much harder — and sometimes impossible — to obtain after a divorce.

So, to protect themselves at divorce, every woman should make “Q.D.R.O.” part of her vocabulary.

Karen Friedman
Washington

The writer is the executive director of the Pension Rights Center.

Musing About the ‘Best’

O.O.P.S.

To the Editor:

Re “Our Endless, Absurd Quest to Get the Very Best,” by Rachel Connolly (Opinion guest essay, May 21):

As far as I’m concerned, the best of anything is the one that meets my particular needs, not those of the reviewer, not those of the critic and not those of anybody else.

Likewise, what’s best for me is not necessarily best for you. I guess you could say that the “best” is not an absolute; it’s relative.

Jon Leonard
San Marcos, Texas

To the Editor:

While some may suffer from a relentless pursuit of perfection, others struggle with making choices, period. I’ve witnessed parents trying to get their toddlers to make choices about food, clothing, activities, etc.
Hello, they’re 2!

I wonder how many suffer from what I call “compulsive comparison chaos,” when one goes shopping after purchasing an item to make sure they got the best deal, even if satisfied with their purchase. True madness.

Vicky T. Robinson
Woodbridge, Va.


    A Campaign Aide Didn’t Write That Email. A.I. Did.

The Democratic Party has begun testing the use of artificial intelligence to write first drafts of some fund-raising messages, appeals that often perform better than those written entirely by human beings.

Fake A.I. images of Donald J. Trump getting arrested in New York spread faster than they could be fact-checked last week.

And voice-cloning tools are producing vividly lifelike audio of President Biden — and many others — saying things they did not actually say.

Artificial intelligence isn’t just coming soon to the 2024 campaign trail. It’s already here.

The swift advance of A.I. promises to be as disruptive to the political sphere as to broader society. Now any amateur with a laptop can manufacture the kinds of convincing sounds and images that were once the domain of the most sophisticated digital players. This democratization of disinformation is blurring the boundaries between fact and fake at a moment when the acceptance of universal truths — that Mr. Biden beat Mr. Trump in 2020, for example — is already being strained.

And as synthetic media gets more believable, the question becomes: What happens when people can no longer trust their own eyes and ears?

Inside campaigns, artificial intelligence is expected to soon help perform mundane tasks that previously required fleets of interns. Republican and Democratic engineers alike are racing to develop tools to harness A.I. to make advertising more efficient, to engage in predictive analysis of public behavior, to write more and more personalized copy and to discover new patterns in mountains of voter data. The technology is evolving so fast that most predict a profound impact, even if specific ways in which it will upend the political system are more speculation than science.

“It’s an iPhone moment — that’s the only corollary that everybody will appreciate,” said Dan Woods, the chief technology officer on Mr. Biden’s 2020 campaign.
“It’s going to take pressure testing to figure out whether it’s good or bad — and it’s probably both.”

OpenAI, whose ChatGPT chatbot ushered in the generative-text gold rush, has already released a more advanced model. Google has announced plans to expand A.I. offerings inside popular apps like Google Docs and Gmail, and is rolling out its own chatbot. Microsoft has raced a version to market, too. A smaller firm, ElevenLabs, has developed a text-to-audio tool that can mimic anyone’s voice in minutes. Midjourney, a popular A.I. art generator, can conjure hyper-realistic images with a few lines of text that are compelling enough to win art contests.

“A.I. is about to make a significant change in the 2024 election because of machine learning’s predictive ability,” said Brad Parscale, Mr. Trump’s first 2020 campaign manager, who has since founded a digital firm that advertises some A.I. capabilities.

Disinformation and “deepfakes” are the dominant fear. While forgeries are nothing new to politics — a photoshopped image of John Kerry and Jane Fonda was widely shared in 2004 — the ability to produce and share them has accelerated, with viral A.I. images of Mr. Trump being restrained by the police only the latest example. A fake image of Pope Francis in a white puffy coat went viral in recent days, as well.

Many are particularly worried about local races, which receive far less scrutiny.
Ahead of the recent primary in the Chicago mayoral race, a fake video briefly sprang up on a Twitter account called “Chicago Lakefront News” that impersonated one candidate, Paul Vallas.

“Unfortunately, I think people are going to figure out how to use this for evil faster than for improving civic life,” said Joe Rospars, who was chief strategist on Senator Elizabeth Warren’s 2020 campaign and is now the chief executive of a digital consultancy.

Those who work at the intersection of politics and technology return repeatedly to the same historical hypothetical: If the infamous “Access Hollywood” tape broke today — the one in which Mr. Trump is heard bragging about assaulting women and getting away with it — would Mr. Trump acknowledge it was him, as he did in 2016?

The nearly universal answer was no.

“I think about that example all the time,” said Matt Hodges, who was the engineering director on Mr. Biden’s 2020 campaign and is now executive director of Zinc Labs, which invests in Democratic technology. Republicans, he said, “may not use ‘fake news’ anymore. It may be ‘Woke A.I.’”

For now, the front-line function of A.I. on campaigns is expected to be writing first drafts of the unending email and text cash solicitations.

“Given the amount of rote, asinine verbiage that gets produced in politics, people will put it to work,” said Luke Thompson, a Republican political strategist.

As an experiment, The New York Times asked ChatGPT to produce a fund-raising email for Mr. Trump. The app initially said, “I cannot take political sides or promote any political agenda.” But then it immediately provided a template of a potential Trump-like email.

The chatbot denied a request to make the message “angrier” but complied when asked to “give it more edge,” to better reflect the often apocalyptic tone of Mr. Trump’s pleas. “We need your help to send a message to the radical left that we will not back down,” the revised A.I. message said.
“Donate now and help us make America great again.”

Among the prominent groups that have experimented with this tool is the Democratic National Committee, according to three people briefed on the efforts. In tests, the A.I.-generated content the D.N.C. has used has, as often as not, performed as well as or better than copy drafted entirely by humans, in terms of generating engagement and donations.

Party officials still make edits to the A.I. drafts, the people familiar with the efforts said, and no A.I. messages have yet been written under the name of Mr. Biden or any other person, two of the people said. The D.N.C. declined to comment.

Higher Ground Labs, a small venture capital firm that invests in political technology for progressives, is currently working on a project, called Quiller, to use A.I. more systematically to write, send and test the effectiveness of fund-raising emails — all at once.

“A.I. has mostly been marketing gobbledygook for the last three cycles,” said Betsy Hoover, a founding partner at Higher Ground Labs who was the director of digital organizing for President Barack Obama’s 2012 campaign. “We are at a moment now where there are things people can do that are actually helpful.”

Political operatives, several of whom were granted anonymity to discuss potentially unsavory uses of artificial intelligence they are concerned about or planning to deploy, raised a raft of possibilities.

Some feared bad actors could leverage A.I. chatbots to distract a campaign or waste its precious staff time by pretending to be potential voters. Others floated producing deepfakes of their own candidates to generate personalized videos — thanking supporters for their donations, for example. In India, one candidate in 2020 produced a deepfake to disseminate a video of himself speaking in different languages; the technology is far superior now.

Mr. Trump himself shared an A.I. image in recent days that appeared to show him kneeling in prayer.
He posted it on Truth Social, his social media site, with no explanation.

One strategist predicted that the next generation of dirty tricks could be direct-to-voter misinformation that skips social media sites entirely. What if, this strategist said, an A.I. audio recording of a candidate was sent straight to the voice mail of voters on the eve of an election?

Synthetic audio and video are already swirling online, much of it as parody.

On TikTok, there is an entire genre of videos featuring Mr. Biden, Mr. Obama and Mr. Trump profanely bantering, with the A.I.-generated audio overlaid as commentary during imaginary online video gaming sessions.

On “The Late Show,” Stephen Colbert recently used A.I. audio to have the Fox News host Tucker Carlson “read” aloud his text messages slamming Mr. Trump. Mr. Colbert labeled the audio as A.I., and the image on-screen showed a blend of Mr. Carlson’s face and a Terminator cyborg for emphasis.

The right-wing provocateur Jack Posobiec pushed out a “deepfake” video last month of Mr. Biden announcing a national draft because of the conflict in Ukraine. It was quickly seen by millions.

“The videos we’ve seen in the last few weeks are really the canary in the coal mine,” said Hany Farid, a professor of computer science at the University of California, Berkeley, who specializes in digital forensics. “We measure advances now not in years but in months, and there are many months before the election.”

Some A.I. tools were deployed in 2020. The Biden campaign created a program, code-named Couch Potato, that linked facial recognition, voice-to-text and other tools to automate the transcription of live events, including debates. It replaced the work of a host of interns and aides, and was immediately searchable through an internal portal.

The technology has improved so quickly, Mr.
Woods said, that off-the-shelf tools are “1,000 times better” than what had to be built from scratch four years ago.

One looming question is what campaigns can and cannot do with OpenAI’s powerful tools. One list of prohibited uses last fall lumped together “political campaigns, adult content, spam, hateful content.”

Kim Malfacini, who helped create OpenAI’s rules and is on the company’s trust and safety team, said in an interview that “political campaigns can use our tools for campaigning purposes. But it’s the scaled use that we are trying to disallow here.” OpenAI revised its usage rules after being contacted by The Times, specifying now that “generating high volumes of campaign materials” is prohibited.

Tommy Vietor, a former spokesman for Mr. Obama, dabbled with the A.I. tool from ElevenLabs to create a faux recording of Mr. Biden calling into the popular “Pod Save America” podcast that Mr. Vietor co-hosts. He paid a few dollars, uploaded real audio of Mr. Biden, and out came an audio likeness.

“The accuracy was just uncanny,” Mr. Vietor said in an interview.

The show labeled it clearly as A.I. But Mr. Vietor could not help noticing that some online commenters nonetheless seemed confused. “I started playing with the software thinking this is so much fun, this will be a great vehicle for jokes,” he said, “and finished thinking, ‘Oh God, this is going to be a big problem.’”
