More stories

  • Five Takeaways From The Times’s Investigation Into Child Influencers

    Instagram does not allow children under 13 to have accounts, but parents are allowed to run them — and many do so for daughters who aspire to be social media influencers. What often starts as a parent’s effort to jump-start a child’s modeling career, or win favors from clothing brands, can quickly descend into a dark underworld dominated by adult men, many of whom openly admit on other platforms to being sexually attracted to children, an investigation by The New York Times found.

    Thousands of so-called mom-run accounts examined by The Times offer disturbing insights into how social media is reshaping childhood, especially for girls, with direct parental encouragement and involvement. Nearly one in three preteens list influencing as a career goal, and 11 percent of those born in Generation Z, between 1997 and 2012, describe themselves as influencers. But health and technology experts have recently cautioned that social media presents a “profound risk of harm” for girls. Constant comparisons to their peers and face-altering filters are driving negative feelings of self-worth and promoting objectification of their bodies, researchers found.

    The pursuit of online fame, particularly through Instagram, has supercharged the often toxic phenomenon, The Times found, encouraging parents to commodify their daughters’ images. These are some key findings.

  • Meta Calls for Industry Effort to Label A.I.-Generated Content

    The social network wants to promote standardized labels to help detect artificially created photo, video and audio material across its platforms.

    Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today. On Tuesday, Mr. Clegg proposed a solution: Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material signaling that the content was generated with artificial intelligence.

    The standards could allow social media companies to quickly identify A.I.-generated content that has been posted to their platforms and add a label to that material. If adopted widely, the standards could help flag A.I.-generated content from Google, OpenAI, Microsoft, Adobe, Midjourney and other companies whose tools let people quickly and easily create artificial posts; a minimal sketch of the marker-checking idea appears after this summary.

    “While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview. He added that he hoped the effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content is artificial, so that it would be simpler for all of them to recognize it.

    As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to post fake content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden urging people not to vote in a recent primary.
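    The article does not name the specific standards, but one widely adopted marker convention is IPTC’s “digital source type” vocabulary, whose “trainedAlgorithmicMedia” term labels A.I.-generated media inside a file’s embedded metadata. The snippet below is a minimal sketch of the detection idea under that assumption: it simply scans a file’s bytes for the term as it would appear in an XMP metadata packet. It illustrates the concept only and is not Meta’s implementation; a real detector would parse the metadata properly and also check signed provenance data (as in C2PA) and invisible watermarks, since plain metadata is easy to strip.

    ```python
    # Minimal sketch: look for the IPTC "trainedAlgorithmicMedia" digital
    # source type term in a media file's embedded metadata. Illustrative
    # only; not Meta's actual detection pipeline.
    import sys

    AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC term for AI-generated media

    def looks_ai_labeled(path: str) -> bool:
        """True if the file's raw bytes contain the AI-generation term,
        as it would appear inside an XMP metadata packet."""
        with open(path, "rb") as f:
            return AI_MARKER in f.read()

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            verdict = "AI-labeled" if looks_ai_labeled(name) else "no marker found"
            print(f"{name}: {verdict}")
    ```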

  • A.I. Promises Give Tech Earnings from Meta and Others a Jolt

    Companies like Meta that could tout their work in the fast-growing field saw a benefit in their fourth-quarter results — and won praise from eager investors.

    Mark Zuckerberg, Meta’s C.E.O., spoke expansively to analysts about his company’s work on A.I. (Carlos Barria/Reuters)

    A.I. and cost cuts lift Big Tech

    Earlier this week, Mark Zuckerberg of Meta endured a grilling on Capitol Hill and publicly apologized to relatives of victims of online abuse. Little more than a day later, he had a lot to crow about, as his business delivered some of its best quarterly earnings in years.

    Meta’s results illustrate how the most recent earnings season has gone for Big Tech: a mostly positive period in which companies that could claim the benefits of artificial intelligence and cost-cutting were hailed the most on Wall Street.

    Meta shot the lights out. After years of facing questions about its ad business and its ability to cope with scandals, the parent of Facebook and Instagram reported that fourth-quarter profits tripled from a year ago. A.I. was credited for some of that, with the technology helping make its core ad business more effective. So too was cost-cutting, which included tens of thousands of layoffs as part of the company’s self-described “year of efficiency.”

    Meta’s profit was so good that the company will soon start paying stock dividends for the first time (which could total $700 million a year for Zuckerberg alone) and announced a $50 billion buyback. It’s a sign that the tech giant is “coming of age,” according to one analyst, joining Microsoft and Apple in making regular payouts to investors.

    Zuckerberg pledged more investment in A.I. — “Expect us to continue investing aggressively in this area,” he said on an earnings call — and the company said it had largely concluded its cost cuts. But some analysts said that Meta will eventually have to show a return on that spending.

    Amazon also touted its A.I. initiatives. Much of its earnings call was spent discussing Rufus, a new smart assistant intended to help shoppers find what they’re looking for. (It may also allow Amazon to reduce ad spending on Google and social media platforms.)

  • Tech CEOs Got Grilled, but New Rules Are Still a Question

    Tech leaders faced a grilling in the Senate, and one offered an apology. But skeptics fear little will change this time.

    Five tech C.E.O.s faced a grilling yesterday, but it’s unclear whether new laws to impose more safeguards for online children’s safety will pass. (Kenny Holston/The New York Times)

    A lot of heat, but will there be regulation?

    Five technology C.E.O.s endured hours of grilling by senators on both sides of the aisle about their apparent failures to make their platforms safer for children, with some lawmakers accusing them of having “blood” on their hands. But for all of the drama, including Mark Zuckerberg of Meta apologizing to relatives of online child sex abuse victims, few observers believe that there’s much chance of concrete action.

    “Your product is killing people,” Senator Josh Hawley, Republican of Missouri, flatly told Zuckerberg at Wednesday’s hearing. Over three and a half hours, members of the Senate Judiciary Committee laid into the Meta chief and the heads of Discord, Snap, TikTok and X over their policies. (Before the hearing began, senators released internal Meta documents showing that executives had rejected efforts to devote more resources to safeguarding children.)

    But the tech C.E.O.s offered only qualified support for legislative efforts. Those include the Kids Online Safety Act, or KOSA, which would require tech platforms to take “reasonable measures” to prevent harm, and STOP CSAM and EARN IT, two bills that would narrow the liability shield given to those companies by Section 230 of the Communications Decency Act.

    Both Evan Spiegel of Snap and Linda Yaccarino of X backed KOSA, and Yaccarino also became the first tech C.E.O. to back the STOP CSAM Act. But neither endorsed EARN IT. Zuckerberg called for legislation that would hold Apple and Google — neither of which was asked to testify — responsible for verifying app users’ ages, but he otherwise emphasized that Meta had already offered resources to keep children safe. Shou Chew of TikTok noted only that his company expected to invest over $2 billion in trust and safety measures this year. Jason Citron of Discord allowed that Section 230 “needs to be updated,” and his company later said that it supports “elements” of STOP CSAM.

    Experts worry that we’ve seen this play out before. Tech companies have zealously defended Section 230, which protects them from liability for content users post on their platforms; some lawmakers say altering it would be crucial to holding online platforms to account. Meanwhile, tech groups have fought state efforts to tighten children’s use of their services, arguing that such laws would create a patchwork of regulations that Congress should address instead.

    Congress has failed to move meaningfully on such legislation. Absent a sea change in congressional will, Wednesday’s drama may have been just that.

  • 4,789 Facebook Accounts in China Impersonated Americans, Meta Says

    The company warned that the inauthentic accounts underscored the threat of foreign election interference in 2024.

    Meta announced on Thursday that it had removed thousands of Facebook accounts based in China that were impersonating Americans debating political issues in the United States. The company warned that the campaign presaged coordinated international efforts to influence the 2024 presidential election.

    The network of fake accounts — 4,789 in all — used names and photographs lifted from elsewhere on the internet and copied partisan political content from X, formerly known as Twitter, Meta said in its latest quarterly adversarial threat analysis. The copied material included posts by prominent Republican and Democratic politicians, the report said. The campaign appeared intended not to favor one side or another but to highlight the deep divisions in American politics, a tactic that Russia’s influence campaigns have used for years in the United States and elsewhere.

    Meta warned that the campaign underscored the threat facing a confluence of elections around the world in 2024 — from India in April to the United States in November. “Foreign threat actors are attempting to reach audiences ahead of next year’s various elections, including in the U.S. and Europe,” the company’s report said, “and we need to remain alert to their evolving tactics and targeting across the internet.”

    Although Meta did not attribute the latest campaign to China’s Communist government, it noted that the country had become the third-most-common geographic source of coordinated inauthentic behavior on Facebook and other social media platforms, after Russia and Iran. The Chinese network was the fifth that Meta has detected and taken down this year, more than from any other nation, suggesting that China is stepping up its covert influence efforts. While previous campaigns focused on Chinese issues, the latest ones have waded more directly into domestic U.S. politics. “This represents the most notable change in the threat landscape, when compared with the 2020 election cycle,” the company said in the threat report.

    Meta’s report followed a series of disclosures about China’s global information operations, including a recent State Department report that accused China of spending billions on “deceptive and coercive methods” to shape the global information environment. Microsoft and other researchers have also linked China to the spread of conspiracy theories claiming that the U.S. government deliberately caused the deadly wildfires in Hawaii this year.

    The latest inauthentic accounts removed by Meta sought “to hijack authentic partisan narratives,” the report said. It detailed several examples in which the accounts copied and pasted, under their own names, partisan posts from politicians — often using language and symbols indicating that the posts originated on X; a toy sketch of this copy-paste signal appears after this summary. Two Facebook posts a month apart in August and September, for example, copied opposing statements on abortion from two members of the U.S. House from Texas — Sylvia R. Garcia, a Democrat, and Ronny Jackson, a Republican.

    The accounts also linked to mainstream media organizations and shared posts by X’s owner, Elon Musk. They liked and reposted content from actual Facebook users on other topics as well, like games, fashion models and pets. The activity suggested that the accounts were intended to build a network of seemingly authentic profiles that could push a coordinated message in the future.

    Meta also removed a similar, smaller network from China that mostly targeted India and Tibet but also the United States. In the case of Tibet, the users posed as pro-independence activists who accused the Dalai Lama of corruption and pedophilia. Meta warned that while it had removed the accounts, the same networks continued to operate accounts on other platforms, including X, YouTube, Gettr, Telegram and Truth Social, a sign that foreign adversaries are diversifying their operations.

    In its report, Meta also weighed in on Republican attacks on the U.S. government’s role in monitoring disinformation online, a political and legal fight that has reached the Supreme Court in a challenge brought by the attorneys general of Missouri and Louisiana. While Republicans have accused officials of coercing social media platforms into censoring content, including at a hearing in the House on Thursday, Meta said coordination among tech companies, government and law enforcement had disrupted foreign threats. “This type of information sharing can be particularly critical in disrupting malicious foreign campaigns by sophisticated threat actors who coordinate their operations outside of our platforms,” the report said.
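    As an illustration of the copy-paste pattern the report describes, the sketch below groups accounts that publish near-identical text. It is a toy version of one signal that coordinated-inauthentic-behavior analysis can use, built on hypothetical data; it is not Meta’s actual methodology.

    ```python
    # Toy sketch: cluster accounts that post near-identical text, the
    # copy-paste behavior described in Meta's report. Hypothetical data;
    # not Meta's actual detection code.
    import hashlib
    import re
    from collections import defaultdict

    def fingerprint(text: str) -> str:
        """Hash a whitespace- and case-normalized version of the text,
        so trivially edited copies still collide."""
        normalized = re.sub(r"\s+", " ", text).strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def copy_paste_clusters(posts, min_accounts=3):
        """posts: iterable of (account_id, text) pairs.
        Returns groups of distinct accounts sharing identical content."""
        groups = defaultdict(set)
        for account_id, text in posts:
            groups[fingerprint(text)].add(account_id)
        return [accts for accts in groups.values() if len(accts) >= min_accounts]

    sample = [  # hypothetical posts
        ("acct_1", "Abortion is on the ballot.  Vote!"),
        ("acct_2", "abortion is on the ballot. vote!"),
        ("acct_3", "Abortion is on the ballot. Vote!"),
        ("acct_4", "Totally unrelated post about pets."),
    ]
    print(copy_paste_clusters(sample))  # one cluster: acct_1, acct_2, acct_3
    ```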

  • Is Argentina the First A.I. Election?

    The posters dotting the streets of Buenos Aires had a certain Soviet flair to them. One showed Sergio Massa, one of Argentina’s presidential candidates, dressed in a shirt with what appeared to be military medals, pointing to a blue sky. He was surrounded by hundreds of older people — in drab clothing, with serious, and often disfigured, faces — who looked toward him in hope.

    The style was no mistake. The illustrator had been given clear instructions. “Sovietic Political propaganda poster illustration by Gustav Klutsis featuring a leader, masssa, standing firmly,” said a prompt that Mr. Massa’s campaign fed into an artificial-intelligence program to produce the image. “Symbols of unity and power fill the environment,” the prompt continued. “The image exudes authority and determination.”

    Javier Milei, the other candidate in Sunday’s runoff election, has struck back by sharing what appear to be A.I. images depicting Mr. Massa as a Chinese communist leader and himself as a cuddly cartoon lion. They have been viewed more than 30 million times.

    Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch. A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.

    A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe. Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.

    Mr. Massa’s campaign has created an A.I. system that can generate images and videos of many of the election’s main players — the candidates, running mates, political allies — doing a wide variety of things. The campaign has used A.I. to portray Mr. Massa, Argentina’s staid center-left economy minister, as strong, fearless and charismatic, including videos that show him as a soldier in war, a Ghostbuster and Indiana Jones, as well as posters that evoke Barack Obama’s 2008 “Hope” poster and a cover of The New Yorker. The campaign has also used the system to depict his opponent, Mr. Milei — a far-right libertarian economist and television personality known for outbursts — as unstable, putting him in films like “A Clockwork Orange” and “Fear and Loathing in Las Vegas.”

    Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views. “Imagine having kids and thinking that each is a long-term investment. Not in the traditional sense, but thinking of the economic potential of their organs,” says the manipulated image of Mr. Milei in the fabricated video, posted by the Massa campaign on its Instagram account for A.I. content, called “A.I. for the Homeland.” The post’s caption says, “We asked an Artificial Intelligence to help Javier explain the business of selling organs and this happened.”

    In an interview, Mr. Massa said he was shocked the first time he saw what A.I. could do.
    “I didn’t have my mind prepared for the world that I’m going to live in,” he said. “It’s a huge challenge. We’re on a horse that we have to ride, but we still don’t know its tricks.”

    The New York Times then showed him the deepfake his campaign had created of Mr. Milei discussing human organs. He appeared disturbed. “I don’t agree with that use,” he said. His spokesman later stressed that the post was in jest and clearly labeled as A.I.-generated. His campaign said in a statement that it uses A.I. to entertain and make political points, not to deceive.

    Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real and adding to the disinformation that can be spread by social networks. For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated. “Now we’ve seen this absolute explosion of incredibly accessible and increasingly powerful democratized tool sets, and that calculation has radically changed,” said Henry Ajder, an expert based in England who has advised governments on A.I.-generated content.

    This year, a mayoral candidate in Toronto used gloomy A.I.-generated images of homeless people to telegraph what Toronto would turn into if he weren’t elected. In the United States, the Republican Party posted a video created with A.I. that shows China invading Taiwan and other dystopian scenes to depict what it says would happen if President Biden wins a second term. And the campaign of Gov. Ron DeSantis of Florida shared a video showing A.I.-generated images of Donald J. Trump hugging Dr. Anthony S. Fauci, who has become an enemy on the American right for his role leading the nation’s pandemic response.

    So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled as A.I.-generated or is so clearly fabricated that it is unlikely to deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.

    Meta, the company that owns Facebook and Instagram, said this week that it would require political ads to disclose whether they used A.I. Other unpaid posts on its sites that use A.I., even if related to politics, would not be required to carry any disclosures. The U.S. Federal Election Commission is also considering whether to regulate the use of A.I. in political ads.

    The Institute for Strategic Dialogue, a London-based research group that studies internet platforms, signed a letter urging such regulations. Isabelle Frances-Wright, the group’s head of technology and society, said the extensive use of A.I. in Argentina’s election was worrisome. “I absolutely think it’s a slippery slope,” she said. “In a year from now, what already seems very realistic will only seem more so.”

    The Massa campaign said it decided to use A.I. in an effort to show that Peronism, the 78-year-old political movement behind Mr. Massa, can appeal to young voters by mixing Mr. Massa’s image with pop and meme culture.

    An A.I.-generated image created by Mr. Massa’s campaign.

    To do so, campaign engineers and artists fed photos of Argentina’s various political players into Stable Diffusion, an open-source image generator, to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political figures in Argentina doing almost anything they ask, roughly along the lines of the sketch below.
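    The fine-tune-then-prompt pipeline described above is now standard with open-source tools. Below is a minimal sketch of the generation step using the Hugging Face diffusers library; the checkpoint directory is hypothetical, the prompt is the one quoted in the article, and this is an assumed reconstruction, not the campaign’s code. The fine-tuning itself is typically done first with a method such as DreamBooth or LoRA, which binds a new token (presumably the “masssa” token in the campaign’s prompt) to the target person’s appearance.

    ```python
    # Sketch: generate a campaign-style image from a Stable Diffusion
    # checkpoint assumed to have been fine-tuned on photos of the candidate.
    # Requires a CUDA GPU; the checkpoint directory name is hypothetical.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "./sd-campaign-finetuned",  # hypothetical local fine-tuned checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # Prompt quoted in the article, including the campaign's "masssa" token.
    prompt = (
        "Sovietic Political propaganda poster illustration by Gustav Klutsis "
        "featuring a leader, masssa, standing firmly. Symbols of unity and "
        "power fill the environment. The image exudes authority and "
        "determination."
    )

    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("soviet_poster.png")
    ```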
    During the campaign, Mr. Massa’s communications team has briefed the artists working with the campaign’s A.I. on which messages or emotions they want the images to impart, such as national unity, family values and fear. The artists have then brainstormed ideas for putting Mr. Massa or Mr. Milei, as well as other political figures, into content that references films, memes, artistic styles or moments in history.

    For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer featuring Buenos Aires, Argentina’s capital, burning; Mr. Milei as an evil villain in a straitjacket; and Mr. Massa as the hero who will save the country.

    The A.I. images have also shown up in the real world. The Soviet-style posters were among the dozens of designs that Mr. Massa’s campaign and supporters printed and posted across Argentina’s public spaces. Some images were generated by the campaign’s A.I., while others were created by supporters using A.I., including one of the most well-known, an image of Mr. Massa riding a horse in the style of José de San Martín, an Argentine independence hero. “Massa was too stiff,” said Octavio Tome, a community organizer who helped create the image. “We’re showing a boss-like Massa, and he’s very Argentine.”

    Supporters of Mr. Massa put up A.I.-generated posters depicting him in the style of José de San Martín, an Argentine independence hero. (Sarah Pabst for The New York Times)

    The rise of A.I. in Argentina’s election has also made some voters question what is real. After a video circulated last week of Mr. Massa looking exhausted after a campaign event, his critics accused him of being on drugs. His supporters quickly struck back, claiming the video was actually a deepfake. His campaign confirmed, however, that the video was, in fact, real.

    Mr. Massa said people were already using A.I. to try to cover up past mistakes or scandals. “It’s very easy to hide behind artificial intelligence when something you said comes out, and you didn’t want it to,” Mr. Massa said in the interview.

    Earlier in the race, Patricia Bullrich, a candidate who failed to qualify for the runoff, tried to explain away leaked audio recordings of her economic adviser offering a woman a job in exchange for sex by saying the recordings were fabricated. “They can fake voices, alter videos,” she said. Were the recordings real or fake? It’s unclear.

  • Does Information Affect Our Beliefs?

    New studies on social media’s influence tell a complicated story.

    It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world’s leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.

    Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political “filter bubbles” that reinforced people’s worldviews, and was a vector for misinformation, primarily for politically conservative users.

    But the biggest news came from what the studies didn’t find: despite Facebook’s influence on the spread of information, there was no evidence that the platform had a significant effect on people’s underlying beliefs, or on levels of political polarization. These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood.

    ‘Filter bubbles’ and democracy

    Sometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook’s newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks. In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to the one used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.

    But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization. The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That “filter bubble” of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side.

    The theory gained mainstream attention after Trump was elected in 2016. “The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming,” announced a New York Magazine article a few days after the election. “Your Echo Chamber is Destroying Democracy,” Wired Magazine claimed a few weeks later.

    Changing information doesn’t change minds

    But without rigorous testing, it has been hard to figure out whether the filter bubble effect is real. The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania and Stanford.

    Meta gave the researchers unprecedented access during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and to conduct randomized controlled experiments on large groups of users who agreed to participate. It’s worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including pre-registering their research questions, and Meta could veto only requests that would violate users’ privacy.

    Taken together, the studies suggest that there is evidence for the first part of the “filter bubble” theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of “ideological segregation,” with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.

    “I think it’s a matter of supply and demand,” said Sandra González-Bailón, the lead author on the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right. And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.

    When it came to the second part of the theory — that this filtered content would shape people’s beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm’s influence on people’s feeds, so that they simply saw content in chronological order, “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes,” the researchers found. Nor did removing content shared by other users.

    Algorithms have been in lawmakers’ cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative. It also has implications far broader than social media itself, reaching some of the core assumptions around how we form our beliefs and political views.

    Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than previous research had shown. “From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people’s factual views, but those changes don’t always translate into different attitudes,” he said. But the new studies suggested an even weaker relationship: “We’re seeing null effects on both factual views and attitudes.”

    As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions. But if that is not true, the effects would reach beyond my own profession. If new information does not change beliefs or political support, that will affect not just voters’ view of the world, but their ability to hold democratic leaders to account.