More stories

  • A Gaza Father’s Worries About His Children

More from our inbox:
  • A Temporary House Speaker?
  • Republicans, Stand Up for Ukraine
  • Work Permits for Immigrants
  • Is A.I. Art … Art?

[Photo: An injured woman and her child after an Israeli bombing near their house in the Gaza Strip. Credit: Samar Abu Elouf for The New York Times]

To the Editor:

Re “What More Must the Children of Gaza Suffer?,” by Fadi Abu Shammalah (Opinion guest essay, Oct. 13):

My heart goes out, and I cry over the suffering of Palestinian children in Gaza. They have done nothing to deserve war after war after war.

However, to ignore Hamas’s responsibility for contributing to that suffering is to miss the whole picture. Hamas rules Gaza, and it has chosen to buy missiles and weapons with funds that were meant to build a better society for Gazan civilians.

Last weekend’s attack was designed by Hamas to prompt a heavy response by Israel and stir up the pot, probably to kill a Saudi-Israeli peace deal, even if it meant sacrificing Palestinian civilians in the process. We can lay the blame for the Gazan children who have been killed in recent days at the feet of both the Israel Defense Forces and Hamas.

Aaron Steinberg
White Plains, N.Y.

To the Editor:

Thank you for publishing Opinion guest essays from Rachel Goldberg (“I Hope Someone Somewhere Is Being Kind to My Boy,” nytimes.com, Oct. 12) and Fadi Abu Shammalah. These essays, for the most part, demonstrate the dire disconnect that has divided Israelis and Palestinians for decades.

Ms. Goldberg and Mr. Abu Shammalah describe the horrors from their own perspectives (terrorists or fighters; the most vicious assaults on Jews since the Holocaust or terrifying violence raining down on Gaza).

Despair is a shared theme in these articles. There is also a glimmer of hope found in the similar, heartbreaking pleas of loving parents for their children. Is now the time for mothers and fathers around the world to stand together for all children? If not now, when?

Daniel J. Callaghan
Roanoke, Va.

To the Editor:

Thank you for publishing Fadi Abu Shammalah’s essay. I’m hoping that hearing from a Palestinian in Gaza at this incredibly terrifying time might help your readers better understand how important it is for all of us to call for immediate de-escalation to prevent Israel’s impending invasion.

Shame on those who do not do what they can to prevent this assault on humanity. Let’s end this current horror show.

Mona Salma
San Francisco

To the Editor:

Regarding Fadi Abu Shammalah’s essay, “What More Must the Children of Gaza Suffer?”: Maybe Hamas should have considered that question before deciding to attack Israel.

Jon Dreyer
Stow, Mass.

A Temporary House Speaker?

[Photo: Representative Steve Scalise, Republican of Louisiana, announcing his withdrawal as a candidate for House speaker on Thursday night. He hopes to remain as the party’s No. 2 House leader. Credit: Kenny Holston/The New York Times]

To the Editor:

Re “Scalise Departs Speaker’s Race as G.O.P. Feuds” (front page, Oct. 13):

Given the urgent state of affairs (Israel-Gaza, Ukraine, a looming government shutdown), wouldn’t it be a good idea for the Republicans in the House of Representatives to pick a temporary speaker? Someone who doesn’t want the job permanently but would take the role through, say, early January.

One would think that making the speaker role temporary would make it easier to arrive at a compromise.

Shaun Breidbart
Pelham, N.Y.

Republicans, Stand Up for Ukraine

[Photo credit: David Guttenfelder for The New York Times]

To the Editor:

Re “G.O.P. Resistance to Aid in Ukraine Expands in House” (front page, Oct. 6):

Where do Republicans stand? On the side of autocracy or democracy? Dare I ask? The Ukrainians are on the front lines, fighting and dying to preserve the values of the West. Republicans, stand up and be counted!

Norman Sasowsky
New Paltz, N.Y.

Work Permits for Immigrants

[Illustration by Rebecca Chew/The New York Times]

To the Editor:

In your Oct. 8 editorial, “The Cost of Inaction on Immigration,” you correctly identified one potential benefit of proactive immigration policies. If Congress were not so frozen by the anti-immigration fringe, immigrants could fill the urgent gaps in the American labor market and propel our economy forward.

President Biden can and should also expand work permits for long-term undocumented immigrants using an existing administrative process called parole.

The organization I lead, the American Business Immigration Coalition, published a letter on behalf of more than 300 business leaders from across the country and a bipartisan group of governors and members of Congress clamoring for this solution.

The farmworkers, Dreamers not covered by DACA and undocumented spouses of U.S. citizens who stand to benefit already live and belong in our communities. The advantages for businesses and everyday life in our cities and fields would be enormous, and this should not be held hostage to dysfunction in Congress.

Rebecca Shi
Chicago

Is A.I. Art … Art?

Related: “A.I. Excels at Making Bad Art. Can an Artist Teach It to Create Something Good?” David Salle, one of America’s most thoughtful painters, wants to see if an algorithm can learn to mimic his style — and nourish his own creativity in the process.

To the Editor:

Re “Turning an Algorithm Into an Art Student” (Arts & Leisure, Oct. 1):

A.I. art seems a commercially viable idea, but artistically it falls very far short of reasoned creativity and inspiration. When you remove the 95 percent perspiration from the artistic act, is it art anymore? I don’t think so.

David Salle’s original work is inspired. The work produced by his A.I. assistant (no matter how much it is curated by the artist), I am afraid, never will be.

I hope he makes money from it, as most artists don’t or can’t make a living with their inspired, personally or collectively produced art. They cannot because the market typically prefers a sanitized, digitized, broadly acceptable, “generically good” art product — something produced and edited to satisfy the largest number of consumers, users and viewers. The market will inevitably embrace A.I.

I fear the day when A.I.-written operas, musicals, concerts and symphonies are performed by A.I. musicians in front of A.I. audiences, with A.I. critics writing A.I. reviews for A.I. readers of A.I. newspapers.

Eric Aukee
Los Angeles

The writer is an architect.

  • Today’s Top News: Key Takeaways From the G.O.P. Debate, and More

The New York Times Audio app is home to journalism and storytelling, and provides news, depth and serendipity. If you haven’t already, download it here — available to Times news subscribers on iOS — and sign up for our weekly newsletter.

The Headlines brings you the biggest stories of the day from the Times journalists who are covering them, all in about 10 minutes. Hosted by Annie Correal, the new morning show features three top stories from reporters across the newsroom and around the world, so you always have a sense of what’s happening, even if you only have a few minutes to spare.

[Photo: The candidates mostly ignored former President Donald J. Trump’s overwhelming lead during the debate last night. Credit: Todd Heisler/The New York Times]

On Today’s Episode:
  • 5 Takeaways From Another Trump-Free Republican Debate, with Jonathan Swan
  • Meet the A.I. Jane Austen: Meta Weaves A.I. Throughout Its Apps, with Mike Isaac
  • How Complete Was Stephen Sondheim’s Final Musical?, with Michael Paulson

Eli Cohen

  • Will Hurd Releases A.I. Plan, a First in the Republican Presidential Field

The former Texas congressman’s plan takes an expansive view of both the potential and the risks of artificial intelligence, calling for it to be used more widely but also tightly regulated.

The policy plan on artificial intelligence released by former Representative Will Hurd of Texas on Wednesday makes him the first candidate in the Republican presidential field to formally propose a way to navigate the uses and dangers of a technology so thorny that he likened it to nuclear fission.

“Nuclear fission controlled gives you nuclear power — clean, cheap, limitless power,” Mr. Hurd said in an interview with The New York Times. “Nuclear fission uncontrolled gives you nuclear weapons that can destroy the world. And I think A.I. is equivalent.”

The plan, first reported by Axios, takes an expansive view of both the potential and the risks of artificial intelligence. Mr. Hurd calls for A.I. to be used much more widely than it currently is — both in administrative tasks within the federal government and in highly sensitive areas like national defense — but he also supports regulating the industry more tightly than is typical of many Republicans’ approach to private industries.

Among his proposals are calls to ensure compensation when people’s intellectual property is used in A.I.-generated content, as well as name, image and likeness protections against so-called deepfakes. He would also seek to require permits for companies that want to build A.I. models and to impose “strict regulations” on exports of A.I. technology, and would reject any exemptions for developers from liability under existing laws.

Artificial intelligence has already begun to change political campaigns themselves, with some operatives using it to write first drafts of fund-raising messages and automate tedious tasks — and to spread disinformation, including fake images of opponents.

Mr. Hurd has struggled to gain traction in the Republican primary field. He did not qualify for the first debate in August because he failed to reach 1 percent support in enough polls, and he remains at risk of failing to meet the even higher thresholds to qualify for the second debate next week.

But that he would be the first candidate to release a formal plan on artificial intelligence tracks with his professional background. He once worked as a senior adviser at FusionX, a cybersecurity firm, and made cybersecurity one of his main focuses as a legislator. He also led the House Oversight Subcommittee on Information Technology, where he organized hearings on artificial intelligence in 2018, long before the technology entered the mainstream. After leaving Congress in 2021, he joined the board of OpenAI, the artificial intelligence laboratory that developed ChatGPT.

“Artificial intelligence is a technology that transcends borders,” Mr. Hurd said at the first congressional A.I. hearing in 2018. “We have allies and adversaries, both nation-states and individual hackers, who are pursuing artificial intelligence with all they have because dominance in artificial intelligence is a guaranteed leg up in the realm of geopolitics and economics.”

His plan suggests employing A.I. tools within military, intelligence and border security agencies and using those tools to make the government “more responsive to the needs of everyday Americans.” He said in the interview that this could include using A.I. to issue passports and visas, summarize publicly available information for intelligence agencies, predict what federal support individual communities will need as a hurricane approaches and identify the causes of backlogs at poorly performing Veterans Affairs centers.

Current A.I. models have a well-documented tendency to “hallucinate” and provide inaccurate or fabricated information. Mr. Hurd’s plan does not address that problem. He said he envisioned A.I. helping migrants learn English and helping students with math, and was “not as concerned” with hallucination in those contexts.

“I think we can achieve the promise of A.I. while minimizing the risk,” he said.

  • China Sows Disinformation About Hawaii Fires Using New Techniques

Beijing’s influence campaign using artificial intelligence is a rapid change in tactics, researchers from Microsoft and other organizations say.

When wildfires swept across Maui last month with destructive fury, China’s increasingly resourceful information warriors pounced.

The disaster was not natural, they said in a flurry of false posts that spread across the internet, but was the result of a secret “weather weapon” being tested by the United States. To bolster the plausibility, the posts carried photographs that appeared to have been generated by artificial intelligence programs, making them among the first to use these new tools to lend an aura of authenticity to a disinformation campaign.

For China — which largely stood on the sidelines of the 2016 and 2020 U.S. presidential elections while Russia ran hacking operations and disinformation campaigns — the effort to cast the wildfires as a deliberate act by American intelligence agencies and the military was a rapid change of tactics.

Until now, China’s influence campaigns have focused on amplifying propaganda defending its policies on Taiwan and other subjects. The most recent effort, revealed by researchers from Microsoft and a range of other organizations, suggests that Beijing is making more direct attempts to sow discord in the United States.

The move also comes as the Biden administration and Congress are grappling with how to push back on China without tipping the two countries into open conflict, and with how to reduce the risk that A.I. is used to magnify disinformation.

The impact of the Chinese campaign — identified by researchers from Microsoft, Recorded Future, the RAND Corporation, NewsGuard and the University of Maryland — is difficult to measure, though early indications suggest that few social media users engaged with the most outlandish of the conspiracy theories.

Brad Smith, the vice chairman and president of Microsoft, whose researchers analyzed the covert campaign, sharply criticized China for exploiting a natural disaster for political gain.

“I just don’t think that’s worthy of any country, much less any country that aspires to be a great country,” Mr. Smith said in an interview on Monday.

China was not the only country to make political use of the Maui fires. Russia did as well, spreading posts that emphasized how much money the United States was spending on the war in Ukraine and that suggested the cash would be better spent at home for disaster relief.

The researchers suggested that China was building a network of accounts that could be put to use in future information operations, including the next U.S. presidential election. That is the pattern Russia set in the year or so leading up to the 2016 election.

“This is going into a new direction, which is sort of amplifying conspiracy theories that are not directly related to some of their interests, like Taiwan,” said Brian Liston, a researcher at Recorded Future, a cybersecurity company based in Massachusetts.

[Photo: A destroyed neighborhood in Lahaina, Hawaii, last month. China has made the wildfires a target of disinformation. Credit: Go Nakamura for The New York Times]

If China does engage in influence operations for the election next year, U.S. intelligence officials have assessed in recent months, it is likely to try to diminish President Biden and raise the profile of former President Donald J. Trump. While that may seem counterintuitive to Americans who remember Mr. Trump’s efforts to blame Beijing for what he called the “China virus,” the intelligence officials have concluded that Chinese leaders prefer Mr. Trump. He has called for pulling Americans out of Japan, South Korea and other parts of Asia, while Mr. Biden has cut off China’s access to the most advanced chips and the equipment made to produce them.

China’s promotion of a conspiracy theory about the fires comes after Mr. Biden vented in Bali last fall to Xi Jinping, China’s president, about Beijing’s role in the spread of such disinformation. According to administration officials, Mr. Biden angrily criticized Mr. Xi over the spread of false accusations that the United States operated biological weapons laboratories in Ukraine.

There is no indication that Russia and China are working together on information operations, according to the researchers and administration officials, but they often echo each other’s messages, particularly when it comes to criticizing U.S. policies. Their combined efforts suggest that a new phase of the disinformation wars is about to begin, one bolstered by the use of A.I. tools.

“We don’t have direct evidence of coordination between China and Russia in these campaigns, but we’re certainly finding alignment and a sort of synchronization,” said William Marcellino, a researcher at RAND and an author of a new report warning that artificial intelligence will enable a “critical jump forward” in global influence operations.

The wildfires in Hawaii — like many natural disasters these days — spawned numerous rumors, false reports and conspiracy theories almost from the start.

Caroline Amy Orr Bueno, a researcher at the University of Maryland’s Applied Research Lab for Intelligence and Security, reported that a coordinated Russian campaign began on Twitter, the social media platform now known as X, on Aug. 9, a day after the fires started. It spread the phrase “Hawaii, not Ukraine” from one obscure account with few followers through a series of conservative or right-wing accounts like Breitbart and ultimately Russian state media, reaching thousands of users with a message intended to undercut U.S. military assistance to Ukraine.

[Photo: President Biden has criticized President Xi Jinping of China for the spread of false accusations about the United States and Ukraine. Credit: Florence Lo/Reuters]

China’s state media apparatus often echoes Russian themes, especially animosity toward the United States. But in this case, it also pursued a distinct disinformation campaign.

Recorded Future first reported that the Chinese government had mounted a covert campaign to blame a “weather weapon” for the fires, identifying numerous posts in mid-August falsely claiming that MI6, the British foreign intelligence service, had revealed “the amazing truth behind the wildfire.” Posts with the exact same language appeared on social media sites across the internet, including Pinterest, Tumblr, Medium and Pixiv, a Japanese site used by artists.

Other inauthentic accounts spread similar content, often accompanied by mislabeled videos, including one from a popular TikTok account, The Paranormal Chic, that showed a transformer explosion in Chile. According to Recorded Future, the Chinese content often echoed — and amplified — posts by conspiracy theorists and extremists in the United States, including white supremacists.

The Chinese campaign operated across many of the major social media platforms — and in many languages, suggesting that it was aimed at a global audience. Microsoft’s Threat Analysis Center identified inauthentic posts in 31 languages, including French, German and Italian, but also less prominent ones like Igbo, Odia and Guarani.

The artificially generated images of the Hawaii wildfires identified by Microsoft’s researchers appeared on multiple platforms, including in a Reddit post in Dutch. “These specific A.I.-generated images appear to be exclusively used” by Chinese accounts used in this campaign, Microsoft said in a report. “They do not appear to be present elsewhere online.”

Clint Watts, the general manager of Microsoft’s Threat Analysis Center, said that China appeared to have adopted Russia’s playbook for influence operations, laying the groundwork to influence politics in the United States and other countries.

“This would be Russia in 2015,” he said, referring to the bots and inauthentic accounts Russia created before its extensive online influence operation during the 2016 election. “If we look at how other actors have done this, they are building capacity. Now they’re building accounts that are covert.”

Natural disasters have often been the focus of disinformation campaigns, allowing bad actors to exploit emotions to accuse governments of shortcomings, either in preparation or in response. The goal can be to undermine trust in specific policies, like U.S. support for Ukraine, or more generally to sow internal discord. By suggesting that the United States was testing or using secret weapons against its own citizens, China’s effort also seemed intended to depict the country as a reckless, militaristic power.

“We’ve always been able to come together in the wake of humanitarian disasters and provide relief in the wake of earthquakes or hurricanes or fires,” said Mr. Smith, who is presenting some of Microsoft’s findings to Congress on Tuesday. “And to see this kind of pursuit instead is, I think, both deeply disturbing and something that the global community should draw a red line around and put off-limits.”

  • Today’s Top News: Trump Gets a Trial Date, and More

[Photo: Former President Donald J. Trump faces federal and state investigations in New York, Georgia and Washington. Credit: Doug Mills/The New York Times]

On Today’s Episode:
  • An Update on Tropical Storm Idalia
  • Judge Sets Trial Date in March for Trump’s Federal Election Case, with Glenn Thrush
  • A.I. Comes to the U.S. Air Force, with Eric Lipton

Eli Cohen

  • Our Immigration System: ‘A Waste of Talent’

More from our inbox:
  • Cruelty at the Border
  • Limiting the President’s Pardon Powers
  • Are A.I. Weapons Next?
  • U.S. Food Policy Causes Poor Food Choices

[Photo: Mateo Miño, left, in the church in Queens where he experienced a severe anxiety attack two days after arriving in New York. Credit: Christopher Lee for The New York Times]

To the Editor:

“As Politicians Cry Crisis, Migrants Get a Toehold” (news article, July 15) points up the irrationality of the U.S. immigration system. As this article shows, migrants are eager to work, and they are filling significant gaps in fields such as construction and food delivery, but there are still great unmet needs for home health aides and nursing assistants.

The main reason for this disjunction lies in federal immigration law, which offers no dedicated visa slots for these occupations (as it does for professionals and even for seasonal agricultural and resort workers) because they are considered “unskilled.”

Instead, the law stipulates, applicants must demonstrate that they are “performing work for which qualified workers are not available in the United States” — clearly a daunting task for individual migrants.

As a result, many do end up working in fields like home health care, but without documentation, and are thus vulnerable to exploitation if not deportation. With appropriate reforms, our system would be capable of meeting both the country’s needs for essential workers and migrants’ needs for safe havens.

Sonya Michel
Silver Spring, Md.

The writer is professor emerita of history and women’s and gender studies at the University of Maryland, College Park.

To the Editor:

We have refugee doctors and nurses who are driving taxicabs. What a waste of talent that is needed in so many areas of our country.

Why isn’t there a program to use their knowledge and skills by working with medical associations to qualify them, especially if they agree to work in parts of the country that have a shortage of doctors and nurses? It would be a win-win situation.

There are probably other professions where similar ideas would work.

David Albenda
New York

Cruelty at the Border

[Photo: Texas Department of Public Safety troopers look over the Rio Grande as migrants walk by. Credit: Suzanne Cordeiro/Agence France-Presse — Getty Images]

To the Editor:

Re “Officers Voice Concerns Over Aggressive Tactics at the Border in Texas” (news article, July 20):

In the past year, I have done immigration-related legal work in New York City with recently arrived asylum seekers from all over the world: Venezuela, China, Honduras, Guatemala, Ecuador and Ghana. Most entered the U.S. on foot through the southern border. Some spent weeks traversing the perilous Darién Gap — an unforgiving jungle — and all are fleeing horribly violent and frightening situations.

Texas’ barbed wire is not going to stop them.

I am struck by the message of the mayor of Eagle Pass, Rolando Salinas Jr., who, supportive of legal migration and orderly law enforcement, said, “What I am against is the use of tactics that hurt people.” I desperately hope we can all agree about this. There should be no place for immigration enforcement tactics that deliberately and seriously injure people.

I was disturbed to read that Texas is hiding razor wire in dark water and deploying floating “barrel traps” wrapped in razor wire. These products of Gov. Greg Abbott’s xenophobia are cruel to a staggering degree.

Noa Gutow-Ellis
New York

The writer is a law school intern at the Kathryn O. Greenberg Immigration Justice Clinic at the Benjamin N. Cardozo School of Law.

Limiting the President’s Pardon Powers

[Photo credit: Tom Brenner for The New York Times]

To the Editor:

Re “U.S. Alleges Push at Trump’s Club to Erase Footage” (front page, July 28) and “Sudden Obstacle Delays Plea Deal for Biden’s Son” (front page, July 27):

With Donald Trump campaigning to return to the White House while under felony indictment, and with Hunter Biden’s legal saga unresolved, there should be bipartisan incentive in Congress for proposing a constitutional amendment limiting the president’s pardon power.

A proposed amendment should provide that the president’s “reprieves and pardons” power under Article II, Section 2, shall not apply to offenses, whether committed in office or out, by the president himself or herself; the vice president and cabinet-level officers; any person whose unlawful conduct was solicited by or intended to benefit any of these officials; or a close family member of any of these individuals.

Stephen A. Silver
San Francisco

The writer is a lawyer.

To the Editor:

Beyond asking “Where’s my Roy Cohn?,” Donald Trump may now ask, “Where’s my Rose Mary Woods?”

David Schubert
Cranford, N.J.

Are A.I. Weapons Next?

[Illustration: Andreas Emil Lund]

To the Editor:

Re “Our Oppenheimer Moment: The Creation of A.I. Weapons,” by Alexander C. Karp (Opinion guest essay, July 30):

Mr. Karp argues that to protect our way of life, we must integrate artificial intelligence into weapons systems, citing our atomic might as precedent. However, nuclear weapons are sophisticated and difficult to produce, while A.I. capabilities are software, leaving them vulnerable to theft, cyberhacking and data poisoning by adversaries.

The risk of proliferation beyond leading militaries was appreciated by the United States and the Soviet Union when they banned bioweapons, and the same applies to A.I. Weaponized A.I. also carries an unacceptable risk of conflict escalation, illustrated in our recent film “Artificial Escalation.”

J. Robert Oppenheimer’s legacy offers a different lesson when it comes to advanced general-purpose A.I. systems. The nuclear arms race has haunted our world with annihilation for 78 years. It was luck that spared us. That race ebbed only as leaders came to understand that such a war would destroy humanity.

The same is true now. To survive, we must recognize that the reckless pursuit and weaponization of inscrutable, probably uncontrollable advanced A.I. is not a winnable race. It is a suicide race.

Anthony Aguirre
Santa Cruz, Calif.

The writer is the executive director and a co-founder of the Future of Life Institute.

U.S. Food Policy Causes Poor Food Choices

[Photo credit: Steven May/Alamy]

To the Editor:

Re “Vegans Make Smaller Mark on the Planet Than Others” (news article, July 22):

While I agree that people could help reduce greenhouse-gas emissions by eating only plants, I find it crucial to note that food policy is the main reason for poor food choices.

Food choices follow food policy, and U.S. food policy is focused on meat, dairy, fish and eggs. Our massive network of agriculture universities runs “animal science” programs, providing billions of dollars’ worth of training, public relations, research, experimentation and sales for animal products.

Our government provides subsidies to the meat, dairy, fish and egg industries far beyond what fruits, vegetables and other plant foods receive. Federal and state agriculture officials are typically connected to the meat or dairy industry. The public pays the cost of animal factories’ contamination of water and soil, and of widespread illness linked to eating animals, since humans, the writer contends, are natural herbivores.

No wonder the meat, dairy, fish and egg industries have so much money for advertising, marketing and public relations, keeping humans deceived about their biological nature and what is good for them to eat.

David Cantor
Glenside, Pa.

The writer is founder and director of Responsible Policies for Animals.

  • A Trump-Biden Rematch That Many Are Dreading

    More from our inbox:Perils of A.I., and Limits on Its DevelopmentAn image from a televised presidential debate in 2020.Damon Winter/The New York TimesTo the Editor:Re “The Presidential Rematch Nobody Wants,” by Pamela Paul (column, July 21):Ms. Paul asks, “Have you met anyone truly excited about Joe Biden running for re-election?”I am wildly enthusiastic about President Biden, who is the best president in my lifetime. His legislation to repair America’s infrastructure and bring back chip manufacturing are both huge accomplishments. Mr. Biden has done more to combat climate change, the existential issue of the day, than all the presidents who have gone before him.Mr. Biden extracted us from the endless morass of Afghanistan. He has marshaled the free peoples of the world to stop the Russian takeover of Ukraine, giving dictators around the world pause.Mr. Biden is the first president in a generation to really believe in unions and to emphasize the issues of working people, understanding how much jobs matter.I might wish he were 20 years younger. I wish I were 20 years younger.Most important, Joe Biden is an honorable man at a time when his biggest rivals do not know the meaning of the word. Being honorable is the essential virtue, without which youth or glibness do not matter.I support his re-election with all my heart and soul.Gregg CoodleyPortland, Ore.To the Editor:We endured (barely) four years of Donald Trump. Now we have Joe Biden, whose time has come and gone, and third party disrupters who know they cannot win but are looking for publicity.Mr. Biden had his turn, and is exceedingly arrogant to believe that he is our best hope. His good sense and moral values won’t help if Donald Trump wins against him, which is eminently possible. The Democratic Party must nominate a powerfully charismatic candidate.Mitchell ZuckermanNew Hope, Pa.To the Editor:I think Pamela Paul misses the point entirely. 
No, Biden supporters are not jumping up and down in a crazed frenzy like Trump supporters. That is actually a good thing. People like me who fully support President Biden's re-election are sick and tired of the nonstop insanity that is Donald Trump. I'm very happy to have a sound, calm, upstanding president who actually gets things done for middle- and working-class Americans.

Excitement isn't the answer to solving America's problems. A president who gets things done is — like Joe Biden!

Sue Everett
Chattanooga, Tenn.

To the Editor:

Pamela Paul is spot on in her diagnosis of the depressing likelihood of Trump vs. Biden, Round 2.

The solution is money, as is true of all things in American politics. The Big Money donors in the Democratic Party should have a conference call with Team Biden and tell it, flat out: we're not supporting the president's re-election. It's time for a younger generation of leaders.

Without their money, President Biden would realize that he cannot run a competitive campaign. But in a strange echo of how Republican leaders genuflect to Donald Trump and don't confront him, the wealthy contributors to the Democratic Party do exactly the same with Mr. Biden.

Ethan Podell
Rutherford Island, Maine

To the Editor:

In an ideal world, few would want a presidential rematch. Donald Trump is a menace, and it would be nice to have a Democratic nominee who is young, charismatic and exciting. But in the real world, I favor a Trump-Biden rematch, if Mr. Trump is the Republican nominee.

Mr. Biden might shuffle like a senior, and mumble his words, but he is a decent man who loves our country and has delivered beyond expectations.

In leadership crises, Americans yearn for shiny new saviors riding into town on a stallion. I prefer an honest old shoe whom we can count on to get us through an election of a lifetime.

Jerome T. Murphy
Cambridge, Mass.

The writer is a retired Harvard professor and dean who taught courses on leadership.

To the Editor:

I am grateful to Pamela Paul for articulating and encapsulating how I, and probably many others, feel about the impending 2024 presidential race. I appreciate the stability that President Biden returned to the White House and our national politics. However, the future demands so much more than Mr. Biden or any other announced candidate can deliver.

Christine Cunha
Bolinas, Calif.

To the Editor:

Pamela Paul presents many reasons, in her view, why President Biden is a flawed candidate, including that Mr. Biden's "old age is showing." As an example, she writes that during an interview on MSNBC he appeared to wander off the set.

Fox News has been pushing this phony notion relentlessly, claiming that he walked off while the host was still talking. In fact, the interview was over, Mr. Biden shook hands with the host, they both said goodbye, and while Mr. Biden left the set, the host faced the camera and announced what was coming up next on her show.

Howard Ehrlichman
Huntington, N.Y.

Perils of A.I., and Limits on Its Development

OpenAI's logo at its offices in San Francisco. The company is testing an image analysis feature for its ChatGPT chatbot. Jim Wilson/The New York Times

To the Editor:

Re "New Worries That Chatbot Reads Faces" (Business, July 19):

The integration of facial surveillance and generative A.I. carries a warning: Without prohibitions on the use of certain A.I. techniques, the United States could easily construct a digital dystopia, adopting A.I. systems favored by authoritarian governments for social control.

Our report "Artificial Intelligence and Democratic Values" established that facial surveillance is among the most controversial A.I. deployments in the world. UNESCO urged countries to prohibit the use of A.I. for mass surveillance. The European Parliament proposes a ban in the pending E.U. Artificial Intelligence Act.
And Clearview AI, the company that scraped images from websites, is now prohibited in many countries.

Earlier this year, we urged the Federal Trade Commission to open an investigation of OpenAI. We specifically asked the agency to prevent the deployment of future versions of ChatGPT, including the technique that will make it possible to match facial images with data across the internet.

We now urge the F.T.C. to expedite the investigation and clearly prohibit the use of A.I. techniques for facial surveillance. Even the White House announcement of voluntary standards for the A.I. industry offers no guarantee of protection.

Legal standards, not industry assurances, are what is needed now.

Merve Hickok
Lorraine Kisselburgh
Marc Rotenberg
Washington

The writers are, respectively, the president, the chair and the executive director of the Center for A.I. and Digital Policy, an independent research organization. Ms. Hickok testified before Congress in March on the need to establish guardrails for A.I.

To the Editor:

Re "Pressed by Biden, Big Tech Agrees to A.I. Rules" (front page, July 22):

It is troubling that the Biden administration is jumping in and exacting "voluntary" limitations on the development of A.I. technologies. The government manifestly lacks the expertise and knowledge necessary to ascertain what guardrails might be appropriate, and the inevitable outcome will be to stifle innovation and reduce competition, the worst possible result.

Imagine what the internet would be today had the government played a similarly intrusive and heavy-handed role at its inception.

Kenneth A. Margolis
Chappaqua, N.Y.


    A.I.’s Use in Elections Sets Off a Scramble for Guardrails

Gaps in campaign rules allow politicians to spread images and messaging generated by increasingly powerful artificial intelligence technology.

In Toronto, a candidate in this week's mayoral election who vows to clear homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camped out on a downtown street and a fabricated image of tents set up in a park.

In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry shop.

In Chicago, the runner-up in the mayoral vote in April complained that a Twitter account masquerading as a news outlet had used A.I. to clone his voice in a way that suggested he condoned police brutality.

What began a few months ago as a slow drip of fund-raising emails and promotional images composed by A.I. for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.

Increasingly, political consultants, election researchers and lawmakers say setting up new guardrails, such as legislation reining in synthetically generated ads, should be an urgent priority. Existing defenses, such as social media rules and services that claim to detect A.I. content, have failed to do much to slow the tide.

As the 2024 U.S. presidential race starts to heat up, some of the campaigns are already testing the technology. The Republican National Committee released a video with artificially generated images of doomsday scenarios after President Biden announced his re-election bid, while Gov. Ron DeSantis of Florida posted fake images of former President Donald J. Trump with Dr. Anthony Fauci, the former health official.
The Democratic Party experimented with fund-raising messages drafted by artificial intelligence in the spring — and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans.

Some politicians see artificial intelligence as a way to help reduce campaign costs, by using it to create instant responses to debate questions or attack ads, or to analyze data that might otherwise require expensive experts.

At the same time, the technology has the potential to spread disinformation to a wide audience. An unflattering fake video, an email blast full of false narratives churned out by computer or a fabricated image of urban decay can reinforce prejudices and widen the partisan divide by showing voters what they expect to see, experts say.

The technology is already far more powerful than manual manipulation — not perfect, but fast improving and easy to learn. In May, the chief executive of OpenAI, Sam Altman, whose company helped kick off an artificial intelligence boom last year with its popular ChatGPT chatbot, told a Senate subcommittee that he was nervous about election season.

He said the technology's ability "to manipulate, to persuade, to provide sort of one-on-one interactive disinformation" was "a significant area of concern."

Representative Yvette D. Clarke, a Democrat from New York, said in a statement last month that the 2024 election cycle "is poised to be the first election where A.I.-generated content is prevalent." She and other congressional Democrats, including Senator Amy Klobuchar of Minnesota, have introduced legislation that would require political ads that used artificially generated material to carry a disclaimer.
A similar bill in Washington State was recently signed into law.

The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its ethics code.

"People are going to be tempted to push the envelope and see where they can take things," said Larry Huynh, the group's incoming president. "As with any tool, there can be bad uses and bad actions using them to lie to voters, to mislead voters, to create a belief in something that doesn't exist."

The technology's recent intrusion into politics came as a surprise in Toronto, a city that supports a thriving ecosystem of artificial intelligence research and start-ups. The mayoral election takes place on Monday.

A conservative candidate in the race, Anthony Furey, a former news columnist, recently laid out his platform in a document that was dozens of pages long and filled with synthetically generated content to help him make the case for his tough-on-crime position.

A closer look clearly showed that many of the images were not real: One laboratory scene featured scientists who looked like alien blobs. A woman in another rendering wore a pin on her cardigan with illegible lettering; similar markings appeared in an image of caution tape at a construction site. Mr. Furey's campaign also used a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.

Anthony Furey, a candidate in Toronto's mayoral election on Monday, used an A.I. image of a woman with three arms.

The other candidates mined that image for laughs in a debate this month: "We're actually using real pictures," said Josh Matlow, who showed a photo of his family and added that "no one in our pictures have three arms."

Still, the sloppy renderings were used to amplify Mr. Furey's argument. He gained enough momentum to become one of the most recognizable names in an election with more than 100 candidates.
In the same debate, he acknowledged using the technology in his campaign, adding that "we're going to have a couple of laughs here as we proceed with learning more about A.I."

Political experts worry that artificial intelligence, when misused, could have a corrosive effect on the democratic process. Misinformation is a constant risk; one of Mr. Furey's rivals said in a debate that while members of her staff used ChatGPT, they always fact-checked its output.

"If someone can create noise, build uncertainty or develop false narratives, that could be an effective way to sway voters and win the race," Darrell M. West, a senior fellow at the Brookings Institution, wrote in a report last month. "Since the 2024 presidential election may come down to tens of thousands of voters in a few states, anything that can nudge people in one direction or another could end up being decisive."

Increasingly sophisticated A.I. content is appearing more frequently on social networks that have been largely unwilling or unable to police it, said Ben Colman, the chief executive of Reality Defender, a company that offers services to detect A.I. The feeble oversight allows unlabeled synthetic content to do "irreversible damage" before it is addressed, he said.

"Explaining to millions of users that the content they already saw and shared was fake, well after the fact, is too little, too late," Mr. Colman said.

For several days this month, a Twitch livestream has run a nonstop, not-safe-for-work debate between synthetic versions of Mr. Biden and Mr. Trump. Both were clearly identified as simulated "A.I. entities," but if an organized political campaign created such content and it spread widely without any disclosure, it could easily degrade the value of real material, disinformation experts said.

Politicians could shrug off accountability and claim that authentic footage of compromising actions was not real, a phenomenon known as the liar's dividend.
Ordinary citizens could make their own fakes, while others could entrench themselves more deeply in polarized information bubbles, believing only the sources they chose to believe.

"If people can't trust their eyes and ears, they may just say, 'Who knows?'" Josh A. Goldstein, a research fellow at Georgetown University's Center for Security and Emerging Technology, wrote in an email. "This could foster a move from healthy skepticism that encourages good habits (like lateral reading and searching for reliable sources) to an unhealthy skepticism that it is impossible to know what is true."