More stories

  • Instagram and Facebook Subscriptions Are a New Focus of Child Safety Suit

    New Mexico’s attorney general has accused Meta of not protecting children from sexual predators on its platforms. He now wants to know how it polices subscribers to accounts featuring children.

    The New Mexico attorney general, who last year sued Meta alleging that it did not protect children from sexual predators and had made false claims about its platforms’ safety, announced Monday that his office would examine how the company’s paid-subscription services attract predators.

    Attorney General Raúl Torrez said he had formally requested documentation from the social media company about subscriptions on Facebook and Instagram, which are frequently available on children’s accounts run by parents.

    Instagram does not allow users under 13, but accounts that focus entirely on children are permitted as long as they are managed by an adult. The New York Times published an investigation on Thursday into girl influencers on the platform, reporting that the so-called mom-run accounts charge followers up to $19.99 a month for additional photos as well as chat sessions and other extras.

    The Times found that adult men subscribe to the accounts, including some who actively participate in forums where people discuss the girls in sexual terms.

    “This deeply disturbing pattern of conduct puts children at risk — and persists despite a wave of lawsuits and congressional investigations,” Mr. Torrez said in a statement.

    Mr. Torrez filed a complaint in December that accused Meta of enabling harmful activity between adults and minors on Facebook and Instagram and of failing to detect and remove such activity when it was reported. The allegations were based, in part, on findings from accounts that Mr. Torrez’s office created, including one for a fictitious 14-year-old girl that received an offer of $180,000 to appear in a pornographic video.

  • A Marketplace of Girl Influencers Managed by Moms and Stalked by Men

    This box represents a real photo of a 9-year-old girl in a golden bikini lounging on a towel. The photo was posted on her Instagram account, which is run by adults. 🔥🔥🔥 wooowww Mama mia ❤️❤️🥰💯🤗 Great body😍🔥❤️ Love 😍😍😍😍 Perfect bikini body ❤️❤️❤️❤️❤️😋😋😋😍😍😍🔥🔥🔥🔥🔥 Mmmmmmmmm take that bikini off 😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️ You’re sooooo hot ❤️🤗💋🌺🌹🌹💯 […]

  • Five Takeaways From The Times’s Investigation Into Child Influencers

    Instagram does not allow children under 13 to have accounts, but parents are allowed to run them — and many do so for daughters who aspire to be social media influencers.

    What often starts as a parent’s effort to jump-start a child’s modeling career, or win favors from clothing brands, can quickly descend into a dark underworld dominated by adult men, many of whom openly admit on other platforms to being sexually attracted to children, an investigation by The New York Times found.

    Thousands of the so-called mom-run accounts examined by The Times offer disturbing insights into how social media is reshaping childhood, especially for girls, with direct parental encouragement and involvement.

    Nearly one in three preteens lists influencing as a career goal, and 11 percent of those born in Generation Z, between 1997 and 2012, describe themselves as influencers. But health and technology experts have recently cautioned that social media presents a “profound risk of harm” for girls. Constant comparisons to their peers and face-altering filters are driving negative feelings of self-worth and promoting objectification of their bodies, researchers have found.

    The pursuit of online fame, particularly through Instagram, has supercharged the often toxic phenomenon, The Times found, encouraging parents to commodify their daughters’ images. These are some key findings.

  • Meta Calls for Industry Effort to Label A.I.-Generated Content

    The social network wants to promote standardized labels to help detect artificially created photo, video and audio material across its platforms.

    Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.

    On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material signaling that the content was generated using artificial intelligence.

    The standards could allow social media companies to quickly identify A.I.-generated content that has been posted to their platforms and to add a label to that material. If adopted widely, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe and Midjourney, among others that offer tools allowing people to quickly and easily create artificial posts.

    “While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview.

    He added that he hoped the effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling artificial content so that it would be simpler for all of them to recognize it.

    As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to post fake content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden urging people not to vote in a recent primary.

  • A.I. Promises Give Tech Earnings from Meta and Others a Jolt

    Companies like Meta that could tout their work in the fast-growing field saw a benefit in their fourth-quarter results — and won praise from eager investors.

    Mark Zuckerberg, Meta’s C.E.O., spoke expansively to analysts about his company’s work on A.I. (Carlos Barria/Reuters)

    A.I. and cost cuts lift Big Tech

    Earlier this week, Mark Zuckerberg of Meta endured a grilling on Capitol Hill and publicly apologized to relatives of victims of online abuse. Little more than a day later, he had a lot to crow about, as his business delivered some of its best quarterly earnings in years.

    Meta’s results illustrate how the most recent earnings season has gone for Big Tech: a mostly positive period in which companies that could claim the benefits of artificial intelligence and cost-cutting were hailed the most on Wall Street.

    Meta shot the lights out. After years of facing questions about its ad business and its ability to cope with scandals, the parent of Facebook and Instagram reported that fourth-quarter profits tripled from a year ago. A.I. was credited for some of that, with the technology helping make its core ad business more effective. So too was cost-cutting, which included tens of thousands of layoffs as part of the company’s self-described “year of efficiency.”

    Meta’s profit was so good that the company will soon start paying stock dividends for the first time (which could total $700 million a year for Zuckerberg alone) and announced a $50 billion buyback. It’s a sign that the tech giant is “coming of age,” according to one analyst, joining Microsoft and Apple in making regular payouts to investors.

    Zuckerberg pledged more investment in A.I. — “Expect us to continue investing aggressively in this area,” he said on an earnings call — and the company said it had largely concluded its cost cuts. But some analysts said that Meta will eventually have to show a return on that spending.

    Amazon also touted its A.I. initiatives. Much of its earnings call was spent talking about Rufus, a new smart assistant intended to help shoppers find what they’re looking for. (It may also allow Amazon to reduce ad spending on Google and social media platforms.)

  • Tech CEOs Got Grilled, but New Rules Are Still a Question

    Tech leaders faced a grilling in the Senate, and one offered an apology. But skeptics fear little will change this time.

    Five tech C.E.O.s faced a grilling yesterday, but it’s unclear whether new laws imposing more safeguards for children’s online safety will pass. (Kenny Holston/The New York Times)

    A lot of heat, but will there be regulation?

    Five technology C.E.O.s endured hours of grilling by senators on both sides of the aisle about their apparent failures to make their platforms safer for children, with some lawmakers accusing them of having “blood” on their hands.

    But for all of the drama, including Mark Zuckerberg of Meta apologizing to relatives of online child sex abuse victims, few observers believe that there’s much chance of concrete action.

    “Your product is killing people,” Senator Josh Hawley, Republican of Missouri, flatly told Zuckerberg at Wednesday’s hearing. Over 3.5 hours, members of the Senate Judiciary Committee laid into the Meta chief and the heads of Discord, Snap, TikTok and X over their policies. (Before the hearing began, senators released internal Meta documents showing that executives had rejected efforts to devote more resources to safeguarding children.)

    But the tech C.E.O.s offered only qualified support for legislative efforts. Those include the Kids Online Safety Act, or KOSA, which would require tech platforms to take “reasonable measures” to prevent harm, and STOP CSAM and EARN IT, two bills that would curtail some of the liability shield given to those companies by Section 230 of the Communications Decency Act.

    Both Evan Spiegel of Snap and Linda Yaccarino of X backed KOSA, and Yaccarino also became the first tech C.E.O. to back the STOP CSAM Act. But neither endorsed EARN IT.

    Zuckerberg called for legislation that would hold Apple and Google — neither of which was asked to testify — responsible for verifying app users’ ages. But he otherwise emphasized that Meta had already offered resources to keep children safe.

    Shou Chew of TikTok noted only that his company expected to invest over $2 billion in trust and safety measures this year.

    Jason Citron of Discord allowed that Section 230 “needs to be updated,” and his company later said that it supports “elements” of STOP CSAM.

    Experts worry that we’ve seen this play out before. Tech companies have zealously defended Section 230, which protects them from liability for content users post on their platforms. Some lawmakers say altering it would be crucial to holding online platforms to account.

    Meanwhile, tech groups have fought state efforts to tighten children’s use of their services, arguing that such laws would create a patchwork of regulations that should instead be addressed by Congress.

    Congress has failed to move meaningfully on such legislation. Absent a sea change in congressional will, Wednesday’s drama may have been just that.

  • 4,789 Facebook Accounts in China Impersonated Americans, Meta Says

    The company warned that the inauthentic accounts underscored the threat of foreign election interference in 2024.

    Meta announced on Thursday that it had removed thousands of Facebook accounts based in China that were impersonating Americans debating political issues in the United States. The company warned that the campaign presaged coordinated international efforts to influence the 2024 presidential election.

    The network of fake accounts — 4,789 in all — used names and photographs lifted from elsewhere on the internet and copied partisan political content from X, formerly known as Twitter, Meta said in its latest quarterly adversarial threat analysis. The copied material included posts by prominent Republican and Democratic politicians, the report said.

    The campaign appeared intended not to favor one side or the other but to highlight the deep divisions in American politics, a tactic that Russia’s influence campaigns have used for years in the United States and elsewhere.

    Meta warned that the campaign underscored the threat facing a confluence of elections around the world in 2024, from India in April to the United States in November.

    “Foreign threat actors are attempting to reach audiences ahead of next year’s various elections, including in the U.S. and Europe,” the company’s report said, “and we need to remain alert to their evolving tactics and targeting across the internet.”

    Although Meta did not attribute the latest campaign to China’s Communist government, it noted that the country had become the third-most-common geographic source of coordinated inauthentic behavior on Facebook and other social media platforms, after Russia and Iran.

    The Chinese network was the fifth that Meta has detected and taken down this year, more than from any other nation, suggesting that China is stepping up its covert influence efforts. While previous campaigns focused on Chinese issues, the latest ones have waded more directly into domestic U.S. politics.

    “This represents the most notable change in the threat landscape, when compared with the 2020 election cycle,” the company said in the threat report.

    Meta’s report followed a series of disclosures about China’s global information operations, including a recent State Department report that accused China of spending billions on “deceptive and coercive methods” to shape the global information environment.

    Microsoft and other researchers have also linked China to the spread of conspiracy theories claiming that the U.S. government deliberately caused the deadly wildfires in Hawaii this year.

    The latest inauthentic accounts removed by Meta sought “to hijack authentic partisan narratives,” the report said. It detailed several examples in which the accounts copied and pasted, under their own names, partisan posts from politicians, often using language and symbols indicating that the posts originally appeared on X.

    Two Facebook posts a month apart in August and September, for example, copied opposing statements on abortion from two members of the U.S. House from Texas: Sylvia R. Garcia, a Democrat, and Ronny Jackson, a Republican.

    The accounts also linked to mainstream media organizations and shared posts by X’s owner, Elon Musk. They liked and reposted content from actual Facebook users on other topics as well, like games, fashion models and pets. The activity suggested that the accounts were intended to build a network of seemingly authentic accounts that could push a coordinated message in the future.

    Meta also removed a similar, smaller network from China that mostly targeted India and Tibet but also the United States. In the case of Tibet, the users posed as pro-independence activists who accused the Dalai Lama of corruption and pedophilia.

    Meta warned that while it had removed the accounts, the same networks continued to use accounts on other platforms, including X, YouTube, Gettr, Telegram and Truth Social, a sign that foreign adversaries were diversifying the sources of their operations.

    In its report, Meta also weighed in on Republican attacks on the U.S. government’s role in monitoring disinformation online, a political and legal fight that has reached the Supreme Court in a challenge brought by the attorneys general of Missouri and Louisiana.

    While Republicans have accused officials of coercing social media platforms to censor content, including at a hearing in the House on Thursday, Meta said coordination among tech companies, government and law enforcement had disrupted foreign threats.

    “This type of information sharing can be particularly critical in disrupting malicious foreign campaigns by sophisticated threat actors who coordinate their operations outside of our platforms,” the report said.

  • A.I. Hits the Campaign Trail in Argentina’s Election

    The posters dotting the streets of Buenos Aires have a certain Soviet touch.

    There was one of Sergio Massa, one of Argentina’s presidential candidates, dressed in a shirt with what appeared to be military medals, pointing to a blue sky. He was surrounded by hundreds of older people, in drab attire, with serious and often disfigured faces, gazing up at him with hope.

    The style was no accident. The illustrator had received clear instructions.

    “Illustration of a Soviet political propaganda poster by Gustav Klutsis featuring a leader, masssa, standing firm,” read a prompt that the Massa campaign fed into an artificial intelligence program to produce the image. “Symbols of unity and power fill the surroundings,” the prompt continued. “The image radiates authority and determination.”

    Javier Milei, the other candidate in Sunday’s runoff, has countered by sharing what appear to be A.I.-generated images depicting Massa as a Chinese communist leader and himself as an adorable cartoon lion. They have been viewed more than 30 million times.

    Argentina’s election has quickly become a testing ground for A.I. in political campaigns, with both candidates and their supporters using the technology to doctor existing images and videos and to create others from scratch.

    A.I. has made candidates say things they did not say and has placed them in famous movies and memes. It has generated campaign posters and set off debates over whether real videos are actually real.

    The prominent role of A.I. in Argentina’s campaign, and the political debate it has provoked, underscore the technology’s growing prevalence and show that, with its increasing power and falling cost, it is now likely to be a factor in many democratic elections around the world.

    Experts compare this moment to the early days of social media, a technology that offers tantalizing new tools for politics as well as unforeseen threats.

    The Massa campaign has built an A.I. system that can create images and videos of many of the election’s main players (the candidates, running mates and political allies) doing a wide variety of things.

    The campaign has used A.I. to portray Massa, the staid center-left economy minister, as strong, fearless and charismatic, including videos that show him as a soldier at war, a Ghostbuster and Indiana Jones, as well as posters evoking the “Hope” poster from Barack Obama’s 2008 campaign and a New Yorker cover.

    The campaign has also used the system to portray his opponent, Milei, a far-right libertarian economist and television personality known for his outbursts, as unstable, placing him in films like “A Clockwork Orange” and “Fear and Loathing in Las Vegas.”

    Much of the content has been clearly fake. But a handful of creations toed the line of disinformation. The Massa campaign produced a deepfake video in which Milei explains how a market for human organs would work, something he has said philosophically fits his libertarian views.

    “Imagine having children and thinking that each of them is like a long-term investment. Not in the traditional sense, but thinking of the economic potential of their organs in the future,” the manipulated image of Milei says in the fake video, posted by the Massa campaign on its A.I.-focused Instagram account, IAxlaPatria.

    The post’s caption reads: “We asked an Artificial Intelligence to help Javier explain the business of selling organs, and this happened.”

    In an interview, Massa said that the first time he saw what A.I. could do, he was stunned. “My mind wasn’t prepared for the world I was going to live in,” he said. “It’s an enormous challenge. We’re riding a horse whose tricks we don’t yet know.”

    The New York Times then showed him the deepfake his campaign had created in which Milei discusses human organs. He appeared disturbed. “I don’t agree with that use,” he said.

    His spokesman later stressed that the post was a joke and was clearly labeled as A.I.-generated. His campaign said in a statement that it uses the technology to entertain and make political points, not to deceive.

    Researchers have long worried about A.I.’s effects on elections. The technology can confuse and mislead voters, sow doubt about what is real, and add to disinformation that can spread across social networks.

    For years those fears were largely speculative, because the technology needed to produce such fakes was too complicated, expensive and crude.

    “Now we’ve seen this total explosion of incredibly accessible and increasingly powerful tool kits that have been democratized, and that calculus has changed radically,” said Henry Ajder, an expert based in England who has advised governments on A.I.-generated content.

    This year, a mayoral candidate in Toronto used grim A.I.-generated images of homeless people to suggest what Toronto would look like if he were not elected. In the United States, the Republican Party released an A.I.-made video showing China invading Taiwan and other dystopian scenes to illustrate what it claimed would happen if President Biden won re-election.

    And the campaign of Florida’s governor, Ron DeSantis, shared a video featuring A.I.-generated images of Donald Trump hugging Anthony Fauci, the doctor who has become an enemy of the American right for his role leading the national pandemic response.

    So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled as such or is so obviously fake that it is unlikely to fool even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that once would have taken entire teams of graphic designers days or weeks to produce.

    Meta, the company that owns Facebook and Instagram, said this week that it would require political ads to disclose whether they used A.I. Other unpaid posts on its sites that use the technology, even those related to politics, would not be required to carry such disclosures. The Federal Election Commission in the United States is also weighing whether to regulate the use of A.I. in political ads.

    The Institute for Strategic Dialogue, a London-based research group that studies internet platforms, signed a letter calling for such regulations. Isabelle Frances-Wright, the group’s director of technology and society, said the extensive use of A.I. in Argentina’s election was worrisome.

    “I certainly think it’s a slippery slope,” she said. “A year from now, what already looks very real will only look more so.”

    The Massa campaign said it decided to use A.I. in an effort to show that Peronism, the 78-year-old political movement behind Massa, can appeal to young voters by wrapping Massa’s image in pop culture and memes.

    (Image generated with A.I. by the Massa campaign)

    To do so, campaign engineers and artists fed photos of various Argentine political figures into an open-source program called Stable Diffusion, training their A.I. system to create fake images of those real people. They can now quickly produce an image or video featuring more than a dozen prominent Argentine political figures doing almost anything they ask.

    During the campaign, Massa’s communications team briefed the artists working with the campaign’s A.I. on the messages or emotions they wanted the images to evoke, such as national unity, family values or fear. The artists then brainstormed ways to insert Massa or Milei, along with other politicians, into content evoking movies, memes, artistic styles or historical moments.

    For Halloween, the Massa campaign asked its A.I. to create a series of cartoonish images of Milei and his allies as zombies. The campaign also used A.I. to make a dramatic movie trailer depicting Buenos Aires in flames, Milei as an evil villain in a straitjacket and Massa as the hero who will save the country.

    The A.I. images have also shown up in the real world. The Soviet-style posters were among dozens of designs that Massa’s campaign and supporters printed and pasted across Argentina’s public spaces.

    Some images were generated by the campaign’s A.I., while others were created by supporters using A.I., including one of the best known, which shows Massa riding a horse in the style of José de San Martín, a hero of Argentine independence.

    “Massa came across as very stiff,” said Octavio Tome, a community organizer who helped create the image. “That image gives you a Massa with the bearing of a leader. There’s something deeply Argentine about it.”

    (Massa supporters put up A.I.-generated posters depicting him as the Argentine independence hero José de San Martín. Sarah Pabst for The New York Times)

    The rise of A.I. in Argentina’s election has also led some voters to doubt reality. After a video circulated last week showing an exhausted Massa after a campaign event, his critics accused him of being on drugs. His supporters quickly countered that the video was actually a deepfake. His campaign, however, confirmed that the video was real.

    Massa said people were already using the technology to try to cover up past mistakes or scandals. “It’s very easy to hide behind artificial intelligence when things you said, and didn’t want known, come out,” he said in the interview.

    Earlier in the race, Patricia Bullrich, a candidate who failed to make the runoff, tried to explain away leaked audio recordings in which her economic adviser offered a woman a job in exchange for sex, saying they were fake. “They make voices with artificial intelligence, they cut up your videos, they plant audio that no one knows where it came from,” she said.

    It is unclear whether the recordings were fake or real.

    Jack Nicas is the Brazil bureau chief, covering Brazil, Argentina, Chile, Paraguay and Uruguay. He previously covered technology from San Francisco and, before joining The Times in 2018, spent seven years at The Wall Street Journal.