More stories

  • in

    Tech CEOs Got Grilled, but New Rules Are Still a Question

    Tech leaders faced a grilling in the Senate, and one offered an apology. But skeptics fear little will change this time.

    Five tech C.E.O.s faced a grilling yesterday, but it’s unclear whether new laws to impose more safeguards for online children’s safety will pass. (Kenny Holston/The New York Times)

    A lot of heat, but will there be regulation?

    Five technology C.E.O.s endured hours of grilling by senators on both sides of the aisle about their apparent failures to make their platforms safer for children, with some lawmakers accusing them of having “blood” on their hands. But for all of the drama, including Mark Zuckerberg of Meta apologizing to relatives of online child sex abuse victims, few observers believe that there’s much chance of concrete action.

    “Your product is killing people,” Senator Josh Hawley, Republican of Missouri, flatly told Zuckerberg at Wednesday’s hearing. Over 3.5 hours, members of the Senate Judiciary Committee laid into the Meta chief and the heads of Discord, Snap, TikTok and X over their policies. (Before the hearing began, senators released internal Meta documents showing that executives had rejected efforts to devote more resources to safeguarding children.)

    But the tech C.E.O.s offered only qualified support for legislative efforts. Those include the Kids Online Safety Act, or KOSA, which would require tech platforms to take “reasonable measures” to prevent harm, and STOP CSAM and EARN IT, two bills that would curtail some of the liability shield given to those companies by Section 230 of the Communications Decency Act.

    Both Evan Spiegel of Snap and Linda Yaccarino of X backed KOSA, and Yaccarino also became the first tech C.E.O. to back the STOP CSAM Act. But neither endorsed EARN IT.

    Zuckerberg called for legislation that would hold Apple and Google — neither of which was asked to testify — responsible for verifying app users’ ages. But he otherwise emphasized that Meta had already offered resources to keep children safe. Shou Chew of TikTok noted only that his company expected to invest over $2 billion in trust and safety measures this year. Jason Citron of Discord allowed that Section 230 “needs to be updated,” and his company later said that it supports “elements” of STOP CSAM.

    Experts worry that we’ve seen this play out before. Tech companies have zealously sought to defend Section 230, which protects them from liability for content users post on their platforms. Some lawmakers say altering it would be crucial to holding online platforms to account.

    Meanwhile, tech groups have fought efforts by states to tighten children’s use of their services, arguing that such state laws would create a patchwork of regulations that should instead be addressed by Congress.

    Congress, though, has failed to move meaningfully on such legislation. Absent a sea change in congressional will, Wednesday’s drama may have been just that: drama. More

  • in

    4,789 Facebook Accounts in China Impersonated Americans, Meta Says

    The company warned that the inauthentic accounts underscored the threat of foreign election interference in 2024.

    Meta announced on Thursday that it had removed thousands of Facebook accounts based in China that were impersonating Americans debating political issues in the United States. The company warned that the campaign presaged coordinated international efforts to influence the 2024 presidential election.

    The network of fake accounts — 4,789 in all — used names and photographs lifted from elsewhere on the internet and copied partisan political content from X, formerly known as Twitter, Meta said in its latest quarterly adversarial threat analysis. The copied material included posts by prominent Republican and Democratic politicians, the report said.

    The campaign appeared intended not to favor one side or another but to highlight the deep divisions in American politics, a tactic that Russia’s influence campaigns have used for years in the United States and elsewhere. Meta warned that the campaign underscored the threat facing a confluence of elections around the world in 2024 — from India in April to the United States in November.

    “Foreign threat actors are attempting to reach audiences ahead of next year’s various elections, including in the U.S. and Europe,” the company’s report said, “and we need to remain alert to their evolving tactics and targeting across the internet.”

    Although Meta did not attribute the latest campaign to China’s Communist government, it noted that the country had become the third-most-common geographic source for coordinated inauthentic behavior on Facebook and other social media platforms, after Russia and Iran.

    The Chinese network was the fifth that Meta has detected and taken down this year, more than from any other nation, suggesting that China is stepping up its covert influence efforts. While previous campaigns focused on Chinese issues, the latest ones have weighed more directly into domestic U.S. politics. “This represents the most notable change in the threat landscape, when compared with the 2020 election cycle,” the company said in the threat report.

    Meta’s report followed a series of disclosures about China’s global information operations, including a recent State Department report that accused China of spending billions on “deceptive and coercive methods” to shape the global information environment. Microsoft and other researchers have also linked China to the spread of conspiracy theories claiming that the U.S. government deliberately caused the deadly wildfires in Hawaii this year.

    The latest inauthentic accounts removed by Meta sought “to hijack authentic partisan narratives,” the report said. It detailed several examples in which the accounts copied and pasted, under their own names, partisan posts from politicians — often using language and symbols indicating that the posts originally appeared on X. Two Facebook posts a month apart in August and September, for example, copied opposing statements on abortion from two members of the U.S. House from Texas — Sylvia R. Garcia, a Democrat, and Ronny Jackson, a Republican.

    The accounts also linked to mainstream media organizations and shared posts by X’s owner, Elon Musk. They liked and reposted content from actual Facebook users on other topics as well, like games, fashion models and pets. The activity suggested that the accounts were intended to build a network of seemingly authentic accounts to push a coordinated message in the future.

    Meta also removed a similar, smaller network from China that mostly targeted India and Tibet but also the United States. In the case of Tibet, the users posed as pro-independence activists who accused the Dalai Lama of corruption and pedophilia. Meta warned that while it had removed the accounts, the same networks continued to use accounts on other platforms, including X, YouTube, Gettr, Telegram and Truth Social, a sign that foreign adversaries were diversifying the sources of their operations.

    In its report, Meta also weighed in on Republican attacks on the U.S. government’s role in monitoring disinformation online, a political and legal fight that has reached the Supreme Court in a challenge brought by the attorneys general of Missouri and Louisiana. While Republicans have accused officials of coercing social media platforms to censor content, including at a hearing in the House on Thursday, Meta said coordination among tech companies, government and law enforcement had disrupted foreign threats.

    “This type of information sharing can be particularly critical in disrupting malicious foreign campaigns by sophisticated threat actors who coordinate their operations outside of our platforms,” the report said. More

  • in

    Is Argentina the First A.I. Election?

    The posters dotting the streets of Buenos Aires had a certain Soviet flair to them.

    There was one of Argentina’s presidential candidates, Sergio Massa, dressed in a shirt with what appeared to be military medals, pointing to a blue sky. He was surrounded by hundreds of older people — in drab clothing, with serious, and often disfigured, faces — who looked toward him in hope.

    The style was no mistake. The illustrator had been given clear instructions. “Sovietic Political propaganda poster illustration by Gustav Klutsis featuring a leader, masssa, standing firmly,” said a prompt that Mr. Massa’s campaign fed into an artificial-intelligence program to produce the image. “Symbols of unity and power fill the environment,” the prompt continued. “The image exudes authority and determination.”

    Javier Milei, the other candidate in Sunday’s runoff election, has struck back by sharing what appear to be A.I. images depicting Mr. Massa as a Chinese communist leader and himself as a cuddly cartoon lion. They have been viewed more than 30 million times.

    Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch. A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.

    A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe. Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.

    Mr. Massa’s campaign has created an A.I. system that can create images and videos of many of the election’s main players — the candidates, running mates, political allies — doing a wide variety of things. The campaign has used A.I. to portray Mr. Massa, Argentina’s staid center-left economy minister, as strong, fearless and charismatic, including videos that show him as a soldier in war, a Ghostbuster and Indiana Jones, as well as posters that evoke Barack Obama’s 2008 “Hope” poster and a cover of The New Yorker.

    The campaign has also used the system to depict his opponent, Mr. Milei — a far-right libertarian economist and television personality known for outbursts — as unstable, putting him in films like “A Clockwork Orange” and “Fear and Loathing in Las Vegas.”

    Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views. “Imagine having kids and thinking that each is a long-term investment. Not in the traditional sense, but thinking of the economic potential of their organs,” says the manipulated image of Mr. Milei in the fabricated video, posted by the Massa campaign on its Instagram account for A.I. content, called “A.I. for the Homeland.” The post’s caption says, “We asked an Artificial Intelligence to help Javier explain the business of selling organs and this happened.”

    In an interview, Mr. Massa said he was shocked the first time he saw what A.I. could do. “I didn’t have my mind prepared for the world that I’m going to live in,” he said. “It’s a huge challenge. We’re on a horse that we have to ride, but we still don’t know its tricks.”

    The New York Times then showed him the deepfake his campaign had created of Mr. Milei and human organs. He appeared disturbed. “I don’t agree with that use,” he said. His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. is to entertain and make political points, not deceive.

    Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real and adding to the disinformation that can be spread by social networks. For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated.

    “Now we’ve seen this absolute explosion of incredibly accessible and increasingly powerful democratized tool sets, and that calculation has radically changed,” said Henry Ajder, an expert based in England who has advised governments on A.I.-generated content.

    This year, a mayoral candidate in Toronto used gloomy A.I.-generated images of homeless people to telegraph what Toronto would turn into if he weren’t elected. In the United States, the Republican Party posted a video created with A.I. that shows China invading Taiwan and other dystopian scenes to depict what it says would happen if President Biden wins a second term. And the campaign of Gov. Ron DeSantis of Florida shared a video showing A.I.-generated images of Donald J. Trump hugging Dr. Anthony S. Fauci, who has become an enemy on the American right for his role leading the nation’s pandemic response.

    So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled as such or is so clearly fabricated that it is unlikely to deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.

    Meta, the company that owns Facebook and Instagram, said this week that it would require political ads to disclose whether they used A.I. Other unpaid posts on its sites that use A.I., even if related to politics, would not be required to carry any disclosures. The U.S. Federal Election Commission is also considering whether to regulate the use of A.I. in political ads.

    The Institute for Strategic Dialogue, a London-based research group that studies internet platforms, signed a letter urging such regulations. Isabelle Frances-Wright, the group’s head of technology and society, said the extensive use of A.I. in Argentina’s election was worrisome. “I absolutely think it’s a slippery slope,” she said. “In a year from now, what already seems very realistic will only seem more so.”

    The Massa campaign said it decided to use A.I. in an effort to show that Peronism, the 78-year-old political movement behind Mr. Massa, can appeal to young voters by mixing Mr. Massa’s image with pop and meme culture.

    An A.I.-generated image created by Mr. Massa’s campaign.

    To do so, campaign engineers and artists fed photos of Argentina’s various political players into Stable Diffusion, an open-source image-generation program, to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask.

    During the campaign, Mr. Massa’s communications team has briefed artists working with the campaign’s A.I. on which messages or emotions they want the images to impart, such as national unity, family values and fear. The artists have then brainstormed ideas for putting Mr. Massa or Mr. Milei, as well as other political figures, into content that references films, memes, artistic styles or moments in history.

    For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country.

    The A.I. images have also shown up in the real world. The Soviet-style posters were among the dozens of designs that Mr. Massa’s campaign and supporters printed and posted across Argentina’s public spaces. Some images were generated by the campaign’s A.I., while others were created by supporters using A.I., including one of the most well known: an image of Mr. Massa riding a horse in the style of José de San Martín, an Argentine independence hero. “Massa was too stiff,” said Octavio Tome, a community organizer who helped create the image. “We’re showing a boss-like Massa, and he’s very Argentine.”

    Supporters of Mr. Massa put up A.I.-generated posters depicting him in the style of José de San Martín, an Argentine independence hero. (Sarah Pabst for The New York Times)

    The rise of A.I. in Argentina’s election has also made some voters question what is real. After a video circulated last week of Mr. Massa looking exhausted after a campaign event, his critics accused him of being on drugs. His supporters quickly struck back, claiming the video was actually a deepfake. His campaign confirmed, however, that the video was, in fact, real.

    Mr. Massa said people were already using A.I. to try to cover up past mistakes or scandals. “It’s very easy to hide behind artificial intelligence when something you said comes out, and you didn’t want it to,” Mr. Massa said in the interview.

    Earlier in the race, Patricia Bullrich, a candidate who failed to qualify for the runoff, tried to explain away leaked audio recordings of her economic adviser offering a woman a job in exchange for sex by saying the recordings were fabricated. “They can fake voices, alter videos,” she said. Were the recordings real or fake? It’s unclear. More

  • in

    Does Information Affect Our Beliefs?

    New studies on social media’s influence tell a complicated story.It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world’s leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political “filter bubbles” that reinforced people’s worldviews, and was a vector for misinformation, primarily for politically conservative users.But the biggest news came from what the studies didn’t find: despite Facebook’s influence on the spread of information, there was no evidence that the platform had a significant effect on people’s underlying beliefs, or on levels of political polarization.These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood. ‘Filter bubbles’ and democracySometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook’s newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks. 
In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to that used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization.The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That “filter bubble” of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side. The theory gained mainstream attention after Trump was elected in 2016. “The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming,” announced a New York Magazine article a few days after the election. “Your Echo Chamber is Destroying Democracy,” Wired Magazine claimed a few weeks later.Changing information doesn’t change mindsBut without rigorous testing, it’s been hard to figure out whether the filter bubble effect was real. 
The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania and Stanford.

Meta gave the researchers unprecedented access during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and also conduct randomized controlled experiments on large groups of users who agreed to participate. It’s worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including pre-registering their research questions, and Meta could veto only requests that would violate users’ privacy.

The studies, taken together, suggest that there is evidence for the first part of the “filter bubble” theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of “ideological segregation,” with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.

“I think it’s a matter of supply and demand,” said Sandra González-Bailón, the lead author of the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right.
And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.

When it came to the second part of the theory — that this filtered content would shape people’s beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm’s influence on people’s feeds, so that they just saw content in chronological order, “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes,” the researchers found. Nor did removing content shared by other users.

Algorithms have been in lawmakers’ cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative.

But it also has implications that reach far beyond social media itself, to some of the core assumptions around how we form our beliefs and political views. Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than had been shown in previous research.

“From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people’s factual views, but those changes don’t always translate into different attitudes,” he said. But the new studies suggested an even weaker relationship. “We’re seeing null effects on both factual views and attitudes.”

As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions.
But if that is not true, then the potential effects would reach beyond my own profession. If new information does not change beliefs or political support, for instance, then that will affect not just voters’ view of the world, but their ability to hold democratic leaders to account.


    These 2024 Candidates Have Signed Up For Threads, Meta’s Twitter Alternative

    The bulk of the G.O.P. field is there, with some notable holdouts: Donald J. Trump, the front-runner, and his top rival, Ron DeSantis.

While the front-runners in the 2024 presidential race have yet to show up on Threads, the new Instagram app aimed at rivaling Twitter, many of the long-shot candidates were quick to take advantage of the platform’s rapidly growing audience.

“Buckle up and join me on Threads!” Senator Tim Scott, Republican of South Carolina, wrote in a caption accompanying a selfie of himself and others in a car that he posted on Thursday — by that morning, the app had already been downloaded more than 30 million times, putting it on track to be the most rapidly downloaded app ever.

But President Biden, former President Donald J. Trump and Gov. Ron DeSantis of Florida remain absent from the platform so far. And that may be just fine with Adam Mosseri, the head of Instagram, who told The Times’s “Hard Fork” podcast on Thursday that he does not expect Threads to become a destination for news or politics, arenas where Twitter has dominated the public discourse.

“I don’t want to lean into hard news at all. I don’t think there’s much that we can or should do to discourage it on Instagram or in Threads, but I don’t think we’ll do anything to encourage it,” Mr. Mosseri said.

The app, released on Wednesday, was presented as an alternative to Twitter, with which many users became disillusioned after it was purchased by Elon Musk in October. Lawyers for Twitter threatened legal action against Meta, the company that owns Instagram, Facebook and Threads, accusing it of using trade secrets from former Twitter employees to build the new platform. Mr. Musk tweeted on Thursday, “Competition is fine, cheating is not.”

Mr. Trump has not been active on Twitter recently either, despite Mr. Musk’s lifting the ban that was put on Mr. Trump’s account after the Jan. 6, 2021, attack on the Capitol.
The former president has instead kept his focus on Truth Social, the right-wing social network he launched in 2021.

But many of the G.O.P. candidates have begun making their pitches on Threads.

Nikki Haley, the former United Nations ambassador and former governor of South Carolina, made a video compilation of her campaign events her first post on the app. “Strong and proud. Not weak and woke,” she wrote on Thursday. “That is the America I see.”

Gov. Doug Burgum of North Dakota posted footage of his July 4 campaign appearances in New Hampshire, alongside a message on Wednesday that said he and his wife were “looking forward to continuing our time here.”

And Will Hurd, a former Texas congressman, made a fund-raising pitch to viewers on Wednesday. “Welcome to Threads,” he said in a video posted on the app. “I’m looking forward to continuing the conversation here with you on the issues, my candidacy, where I’ll be and everything our campaign has going on.”

Francis Suarez, the Republican mayor of Miami, and Larry Elder, a conservative talk radio host, also shared their campaign pitches on the platform, as did two candidates running in the Democratic primary: Robert F. Kennedy Jr., a leading vaccine skeptic, and Marianne Williamson, a self-help author. Even Cornel West, a professor and progressive activist running as a third-party candidate, has posted.

Former Vice President Mike Pence and Vivek Ramaswamy, a tech entrepreneur, also established accounts — but have yet to post.

Among the holdouts: former Gov. Asa Hutchinson of Arkansas and former Gov. Chris Christie of New Jersey, both Republicans.

The White House has not said whether Mr. Biden will join Threads. Andrew Bates, a White House spokesman, said on Thursday that the administration would “keep you all posted if we do.”


    Hun Sen’s Facebook Page Goes Dark After Spat with Meta

    Prime Minister Hun Sen, an avid user of the platform, had vowed to delete his account after Meta’s oversight board said he had used it to threaten political violence.

The usually very active Facebook account of Prime Minister Hun Sen of Cambodia appeared to have been deleted on Friday, a day after the oversight board for Meta, Facebook’s parent company, recommended that he be suspended from the platform for threatening political opponents with violence.

The showdown pits the social media behemoth against one of Asia’s longest-ruling autocrats. Mr. Hun Sen, 70, has ruled Cambodia since 1985 and maintained power partly by silencing his critics. He is a staunch ally of China, a country whose support comes free of American-style admonishments on the value of human rights and democratic institutions.

A note Friday on Mr. Hun Sen’s account, which had about 14 million followers, said that its content “isn’t available right now.” It was not immediately clear whether Meta had suspended the account or whether Mr. Hun Sen had preemptively deleted it, as he had vowed to do in a post late Thursday on Telegram, a social media platform where he has a much smaller following.

“That he stopped using Facebook is his private right,” Phay Siphan, a spokesman for the Cambodian government, told The New York Times on Friday. “Other Cambodians use it, and that’s their right.”

The company-appointed oversight board for Meta had on Thursday recommended a minimum six-month suspension of Mr. Hun Sen’s accounts on Facebook and Instagram, which Meta also owns. The board also said that one of Mr. Hun Sen’s Facebook videos had violated Meta’s rules on “violence and incitement” and should be taken down.

In the video, Mr. Hun Sen delivered a speech in which he responded to allegations of vote-stealing by calling on his political opponents to choose between the legal system and “a bat.” “If you say that’s freedom of expression, I will also express my freedom by sending people to your place and home,” Mr. Hun Sen said in the speech, according to Meta.

Meta had previously decided to keep the video online under a policy that allows content that violates Facebook’s community standards to remain on the grounds that it is newsworthy and in the public interest. But the oversight board said on Thursday that it was overturning that decision, calling it “incorrect.”

A post on Facebook by the Cambodian government official Duong Dara, which includes an image of the official Facebook page of Mr. Hun Sen. (Tang Chhin Sothy/Agence France-Presse — Getty Images)

The board added that its recommendation to suspend Mr. Hun Sen’s accounts for at least six months was justified given the severity of the violation and his “history of committing human rights violations and intimidating political opponents, and his strategic use of social media to amplify such threats.”

Meta later said in a statement that it would remove the offending video to comply with the board’s decision. The company also said that it would respond to the suspension recommendation after analyzing it.

Critics of Facebook have long said that the platform can undermine democracy, promote violence and help politicians unfairly target their critics, particularly in countries with weak institutions. Mr. Hun Sen has spent years cracking down on the news media and political opposition in an effort to consolidate his grip on power. In February, he ordered the shutdown of one of the country’s last independent news outlets, saying he did not like its coverage of his son and presumed successor, Lt. Gen. Hun Manet. Under Mr. Hun Sen, the government has also pushed for more surveillance of the internet, a move that rights groups say makes it even easier for the authorities to monitor and punish online content.

Mr. Hun Sen’s large Facebook following may overstate his actual support. In 2018, one of his most prominent political opponents, Sam Rainsy, argued in a California court that the prime minister used so-called click farms to accumulate millions of counterfeit followers. Mr. Sam Rainsy, who lives in exile, also argued that Mr. Hun Sen had used Facebook to spread false news stories and death threats directed at political opponents. The court later denied his request that Facebook be compelled to release records of advertising purchases by Mr. Hun Sen and his allies.

In 2017, an opposition political party that Mr. Sam Rainsy had led, the Cambodia National Rescue Party, was dissolved by the country’s highest court. More recently, the Cambodian authorities have disqualified other opposition parties from running in a general election next month.

At a public event in Cambodia on Friday, Mr. Hun Sen said that his political opponents outside the country were surely happy with his decision to quit Facebook. “You have to be aware that if I order Facebook to be shut down in Cambodia, it will strongly affect you,” he added, speaking at an event for garment workers ahead of the general election. “But this is not the path that I choose.”


    Facebook Failed to Stop Ads Threatening Election Workers

    The ads, submitted by researchers, were rejected by YouTube and TikTok.

Facebook says it does not allow content that threatens serious violence. But when researchers submitted ads threatening to “lynch,” “murder” and “execute” election workers around Election Day this year, the company’s largely automated moderation systems approved many of them.

Of the 20 ads containing violent content that the researchers submitted, 15 were approved by Facebook, according to a new test published by Global Witness, a watchdog group, and New York University’s Cybersecurity for Democracy. The researchers deleted the approved ads before they were published.

Ten of the test ads were submitted in Spanish. Facebook approved six of those, compared with nine of the 10 ads in English. TikTok and YouTube rejected all the ads and suspended the accounts that attempted to submit them, the researchers said.

The tests underscored the challenges that social networks face in moderating increasingly partisan and violent content around elections, especially as poll workers face additional threats from election deniers and far-right extremists. The Justice Department and other federal agencies warned ahead of this year’s midterm elections that threats against election workers had surged.

A spokesman for Meta, Facebook’s parent company, said the company later disabled several accounts that had submitted the ads, and added that ads continue to be reviewed after they go live. “This is a small sample of ads that are not representative of what people see on our platforms,” the spokesman said in a statement. “Content that incites violence against election workers or anyone else has no place on our apps, and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms.”

The research adds to previous tests conducted by the same groups, including one this year in which 20 ads containing political misinformation were submitted.
In that test, Facebook approved only two of the misleading ads in English from an account in the United States, while TikTok approved about 90 percent of them. YouTube suspended the account that tried to submit the ads.

But this round used more overt language to test Facebook’s largely automated moderating system, including direct threats of violence sourced from real statements by election deniers and other far-right extremists in the United States. One ad, which was approved by Facebook, read: “We are going to kill you all.” Another ad, which threatened to abuse children, was also approved.

“It was really quite shocking to see the results,” said Damon McCoy, an associate professor at N.Y.U. “I thought a really simple keyword search would have flagged this for manual review.”

The researchers also said they wanted to see social networks like Facebook increase their content moderation efforts and offer more transparency around the moderation actions they take. “The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published, shows that what we are asking is technically possible,” they wrote.